Update 1 (Aug 29, 2009): It appears this configuration wasn't working with HAProxy 1.3.20 because of the "option nolinger" feature. I have removed it from the configuration and can confirm the setup works well with HAProxy v1.3.15 through v1.3.20. As a result, you'll also notice a significant increase in TIME_WAIT sessions, and ip_conntrack_count rising from ~150 to ~925.
This post summarizes my reflections on failover, redundancy, and ultimately scaling MySQL databases using load-balancing software known as HAProxy.
At my current employer, we have been using HAProxy to build very simple server clusters to help clients scale their databases. It works for most people assuming their application:
- Has an acceptable ratio of reads to writes (e.g. 100:1)
- Can separate reads and writes at the application level
If your read/write ratio is lower, that’s when you need to look into different scaling solutions such as sharding.
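Separating reads and writes at the application level usually just means routing each statement to one of two load-balanced addresses: the write VIP and the read VIP defined in the HAProxy config later in this post. A minimal, hypothetical shell helper to illustrate the idea (the routing rule is deliberately naive; `pick_host` is an invented name):

```shell
# Hypothetical helper: route a statement to the write VIP or the read VIP.
# 172.16.0.50/.51 are the frontend addresses from the HAProxy config below.
WRITE_VIP="172.16.0.50"
READ_VIP="172.16.0.51"

# Print the VIP a given SQL statement should go to.
# Naive rule: SELECT/SHOW are reads, everything else is a write.
pick_host() {
    local verb
    verb=$(echo "$1" | awk '{print toupper($1)}')
    case "$verb" in
        SELECT|SHOW) echo "$READ_VIP" ;;
        *)           echo "$WRITE_VIP" ;;
    esac
}

# Usage (assumes the mysql client and the VIPs are reachable):
#   mysql -h "$(pick_host "SELECT * FROM users")" -e "SELECT * FROM users"
```

A real application would do this inside its database layer, but the principle is the same: two connection targets, chosen per statement.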
I’ve designed a slightly more complex HAProxy configuration file which load-balances requests to MySQL databases. It detects failures such as broken replication and offline servers, and adjusts the availability of servers accordingly.
Each database server is running an xinetd daemon. Port 9201 is used to monitor replication and port 9200 is used to monitor mysql status. These ports are monitored by HAProxy as you will see in the configuration file below.
HAProxy backend to monitor replication
backend db01_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1
HAProxy backend to monitor mysql status
backend db01_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2
I based my mysqlchk_status.sh script on the one found at SysBible, with a few modifications of my own.
The mysqlchk_replication.sh script is similar to the one above, except that it checks a few other variables such as Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master. Success always returns a '200 OK' and failure always returns a '503 Service Unavailable'.
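The mysqlchk_replication.sh script itself isn't reproduced in this post. The sketch below shows what such a check might look like, modelled on the status script further down; the connection settings, the 60-second lag threshold, and the `check_slave_status` helper are my assumptions, not the original script.

```shell
#!/bin/bash
# Hypothetical sketch of mysqlchk_replication.sh (not the original script).
# Answers HAProxy's httpchk with 200 if replication looks healthy, 503 otherwise.

MYSQL_HOST="127.0.0.1"
MYSQL_PORT="3306"
MYSQL_USERNAME="replication_user"
MYSQL_PASSWORD="replication_pass"
MAX_LAG=60   # assumed threshold for Seconds_Behind_Master

# Decide health from the text of "SHOW SLAVE STATUS\G".
# Prints OK when both replication threads run and lag is under MAX_LAG.
check_slave_status() {
    local status="$1" io sql lag
    io=$(echo "$status" | awk '/Slave_IO_Running:/ {print $2}')
    sql=$(echo "$status" | awk '/Slave_SQL_Running:/ {print $2}')
    lag=$(echo "$status" | awk '/Seconds_Behind_Master:/ {print $2}')
    if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ] \
       && [ -n "$lag" ] && [ "$lag" != "NULL" ] && [ "$lag" -lt "$MAX_LAG" ]; then
        echo "OK"
    else
        echo "FAIL"
    fi
}

STATUS=$(/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT \
    --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD \
    -e "SHOW SLAVE STATUS\G" 2>/dev/null)

if [ "$(check_slave_status "$STATUS")" = "OK" ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "Replication is running.\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "Replication is *down*.\r\n"
fi
```

Like the status script, it would be launched by xinetd on port 9201, so HAProxy sees a plain HTTP response when it probes that port.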
- 2 HAProxy load-balancers in Active-Passive mode (VRRP using Keepalived)
- 2 MySQL database servers with Master-Master replication in Active-Passive mode
- 3 MySQL database servers with slave replication (read-only)
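For the active-passive HAProxy pair in the first bullet, the two VIPs would typically be carried by Keepalived over VRRP. A minimal, hypothetical sketch (the interface name, router id, and priorities are assumptions; the VIPs are the ones used in the HAProxy config below):

```
# /etc/keepalived/keepalived.conf  (hypothetical sketch)
vrrp_instance VI_DB {
    state MASTER            # BACKUP on the second load balancer
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 101            # lower (e.g. 100) on the passive node
    advert_int 1
    virtual_ipaddress {
        172.16.0.50         # db_write VIP
        172.16.0.51         # db_read VIP
    }
}
```

When the active load balancer fails, the passive node takes over both VIPs, so clients keep connecting to the same addresses.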
Failure scenarios
Based on a small set of failure scenarios, we can determine how the load balancers should direct traffic. We obviously don't want read requests going to a database server that isn't replicating from its master, and we don't want to send writes to a server that's offline. The examples below describe how HAProxy reacts in those scenarios.
1. Replication breaks, lags, or stops working on DB02
- DB01 becomes the master database server.
- HAProxy stops sending requests to DB02 and DB05 (its slave).
- Despite this, DB01 and DB05 are still able to receive replicated data from DB02.
2. Replication breaks, lags, or stops working on DB01
- DB02 becomes the master database server.
- HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
- Despite this, DB02, DB03 and DB04 are still able to receive replicated data from DB01.
3. Replication breaks, lags, or stops working on DB01 & DB02
- There is no writable master database server. Service is severely degraded and action should be taken to bring one master server back into replication.
- This is a split-brain problem. Both servers are online, but they aren’t replicating each other.
- HAProxy only sends read requests to DB01 and DB02.
- HAProxy stops sending requests to DB03, DB04 and DB05 (the slaves).
- Despite this, DB03 and DB04 are still able to receive replicated data from DB01.
- Despite this, DB05 is still able to receive replicated data from DB02.
4. DB02 is offline, due to a server crash or something similar
- DB01 becomes the master database server.
- HAProxy stops sending requests to DB02 and DB05 (its slave).
- DB05 can’t receive replicated data from DB02.
- DB01 goes into backup mode, which can have different settings to support more concurrency, send alerts, etc.
5. DB01 is offline, due to a server crash or something similar
- DB02 becomes the master database server.
- HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
- DB03 and DB04 can’t receive replicated data from DB01.
- DB02 goes into backup mode, which can have different settings to support more concurrency, send alerts, etc.
6. DB01 and DB02 are offline, due to a server crash or something similar
- There is no master database server.
- HAProxy stops sending requests to all DB servers.
- Call your sysadmin because your website is probably down.
Download
WARNING / DISCLAIMER
This configuration has not been tested in a production environment and should be used at your own risk.
Here are the scripts and config files, or scroll down to view the code:
#
# /etc/xinetd.d/mysqlchk
#
service mysqlchk_write
{
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /opt/mysqlchk_status.sh
    log_on_failure  += USERID
    disable         = no
    only_from       = 172.16.0.0/24 # recommended to put the IPs that need
                                    # to connect exclusively (security purposes)
}

service mysqlchk_read
{
    flags           = REUSE
    socket_type     = stream
    port            = 9201
    wait            = no
    user            = nobody
    server          = /opt/mysqlchk_replication.sh
    log_on_failure  += USERID
    disable         = no
    only_from       = 172.16.0.0/24 # recommended to put the IPs that need
                                    # to connect exclusively (security purposes)
}
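Once xinetd is listening, the checks can be exercised by hand with curl, since each script answers with a raw HTTP response. HAProxy's httpchk treats a 2xx or 3xx status line as healthy, and anything else as down; the small hypothetical helper below applies the same rule to the first response line (`is_healthy` is an invented name, and the curl commands assume db01's address from this post):

```shell
# Interpret a health-check response roughly the way HAProxy's httpchk does:
# a 2xx or 3xx status code means healthy, anything else means down.
is_healthy() {
    local code
    code=$(echo "$1" | awk '{print $2}')
    case "$code" in
        2*|3*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Manual checks against db01 (assumes the server is reachable):
#   is_healthy "$(curl -s -i http://172.16.0.60:9200/ | head -n 1)" && echo up || echo down
#   is_healthy "$(curl -s -i http://172.16.0.60:9201/ | head -n 1)" && echo up || echo down
```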
#!/bin/bash
#
# This script checks if a mysql server running on localhost is healthy. It will
# return:
#
# "HTTP/1.x 200 OK\r" (if mysql is running smoothly)
#
# - OR -
#
# "HTTP/1.x 503 Service Unavailable\r" (otherwise)
#
# The purpose of this script is to make haproxy capable of monitoring mysql properly
#
# Author: Unai Rodriguez
#
# It is recommended that a low-privileged mysql user is created to be used by
# this script. Something like this:
#
# mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost' \
#     -> IDENTIFIED BY '257retfg2uysg218';
# mysql> flush privileges;
#
# Script modified by Alex Williams - August 4, 2009
# - removed the need to write to a tmp file, instead store results in memory

MYSQL_HOST="172.16.0.60"
MYSQL_PORT="3306"
MYSQL_USERNAME="replication_user"
MYSQL_PASSWORD="replication_pass"

#
# We perform a simple query that should return a few results :-p
#
ERROR_MSG=`/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD -e "show databases;" 2>/dev/null`

#
# Check the output. If it is not empty then everything is fine and we return
# a 200. Otherwise we return a 503.
#
if [ "$ERROR_MSG" != "" ]
then
    # mysql is fine, return http 200
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is running.\r\n"
    /bin/echo -e "\r\n"
else
    # mysql is down, return http 503
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /bin/echo -e "Content-Type: text/plain\r\n"
    /bin/echo -e "\r\n"
    /bin/echo -e "MySQL is *down*.\r\n"
    /bin/echo -e "\r\n"
fi
# HAProxy configuration - haproxy-db.cfg

##
## FRONTEND ##
##

# Load-balanced IPs for DB writes and reads
#
frontend db_write
    bind 172.16.0.50:3306
    default_backend cluster_db_write

frontend db_read
    bind 172.16.0.51:3306
    default_backend cluster_db_read
# Monitor DB server availability
#
frontend monitor_db01
    #
    # set db01_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9301
    mode http
    #option nolinger
    acl no_repl_db01 nbsrv(db01_replication) eq 0
    acl no_repl_db02 nbsrv(db02_replication) eq 0
    acl no_db01 nbsrv(db01_status) eq 0
    acl no_db02 nbsrv(db02_status) eq 0
    monitor-uri /dbs
    monitor fail unless no_repl_db01 no_repl_db02 no_db02
    monitor fail if no_db01 no_db02

frontend monitor_db02
    #
    # set db02_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9302
    mode http
    #option nolinger
    acl no_repl_db01 nbsrv(db01_replication) eq 0
    acl no_repl_db02 nbsrv(db02_replication) eq 0
    acl no_db01 nbsrv(db01_status) eq 0
    acl no_db02 nbsrv(db02_status) eq 0
    monitor-uri /dbs
    monitor fail unless no_repl_db01 no_repl_db02 no_db01
    monitor fail if no_db01 no_db02

frontend monitor_db03
    #
    # set db03 read-only slave to 'down'
    #
    bind 127.0.0.1:9303
    mode http
    #option nolinger
    acl no_repl_db03 nbsrv(db03_replication) eq 0
    acl no_repl_db01 nbsrv(db01_replication) eq 0
    acl db02 nbsrv(db02_status) eq 1
    monitor-uri /dbs
    monitor fail if no_repl_db03
    monitor fail if no_repl_db01 db02

frontend monitor_db04
    #
    # set db04 read-only slave to 'down'
    #
    bind 127.0.0.1:9304
    mode http
    #option nolinger
    acl no_repl_db04 nbsrv(db04_replication) eq 0
    acl no_repl_db01 nbsrv(db01_replication) eq 0
    acl db02 nbsrv(db02_status) eq 1
    monitor-uri /dbs
    monitor fail if no_repl_db04
    monitor fail if no_repl_db01 db02

frontend monitor_db05
    #
    # set db05 read-only slave to 'down'
    #
    bind 127.0.0.1:9305
    mode http
    #option nolinger
    acl no_repl_db05 nbsrv(db05_replication) eq 0
    acl no_repl_db02 nbsrv(db02_replication) eq 0
    acl db01 nbsrv(db01_status) eq 1
    monitor-uri /dbs
    monitor fail if no_repl_db05
    monitor fail if no_repl_db02 db01

# Monitor for split-brain syndrome
#
frontend monitor_splitbrain
    #
    # set db01_splitbrain and db02_splitbrain to 'up'
    #
    bind 127.0.0.1:9300
    mode http
    #option nolinger
    acl no_repl01 nbsrv(db01_replication) eq 0
    acl no_repl02 nbsrv(db02_replication) eq 0
    acl db01 nbsrv(db01_status) eq 1
    acl db02 nbsrv(db02_status) eq 1
    monitor-uri /dbs
    monitor fail unless no_repl01 no_repl02 db01 db02
##
## BACKEND ##
##

# Check every DB server replication status
# - perform an http check on port 9201 (replication status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
#
backend db01_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1

backend db02_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db02 172.16.0.61:3306 check port 9201 inter 1s rise 1 fall 1

backend db03_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db03 172.16.0.63:3306 check port 9201 inter 1s rise 1 fall 1

backend db04_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db04 172.16.0.64:3306 check port 9201 inter 1s rise 1 fall 1

backend db05_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db05 172.16.0.65:3306 check port 9201 inter 1s rise 1 fall 1

# Check Master DB server mysql status
# - perform an http check on port 9200 (mysql status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
#
backend db01_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2

backend db02_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server db02 172.16.0.61:3306 check port 9200 inter 1s rise 2 fall 2
# DB write cluster
# Failure scenarios:
# - replication 'up' on db01 & db02 = writes to db01
# - replication 'down' on db02 = writes to db01
# - replication 'down' on db01 = writes to db02
# - replication 'down' on db01 & db02 = go nowhere, split-brain, cluster FAIL!
# - mysql 'down' on db02 = writes to db01_backup
# - mysql 'down' on db01 = writes to db02_backup
# - mysql 'down' on db01 & db02 = go nowhere, cluster FAIL!
#
backend cluster_db_write
    #
    # - max 1 db server available at all times
    # - db01 is preferred (top of list)
    # - db_backups set their 'up' or 'down' based on results from monitor_dbs
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /dbs
    server db01 172.16.0.60:3306 weight 1 check port 9201 inter 1s rise 2 fall 1
    server db02 172.16.0.61:3306 weight 1 check port 9201 inter 1s rise 2 fall 1 backup
    server db01_backup 172.16.0.60:3306 weight 1 check port 9301 inter 1s rise 2 fall 2 addr 127.0.0.1 backup
    server db02_backup 172.16.0.61:3306 weight 1 check port 9302 inter 1s rise 2 fall 2 addr 127.0.0.1 backup
# DB read cluster
# Failure scenarios
# - replication 'up' on db01 & db02 = reads on db01, db02, all db_slaves
# - replication 'down' on db02 = reads on db01, slaves of db01
# - replication 'down' on db01 = reads on db02, slaves of db02
# - replication 'down' on db01 & db02 = reads on db01_splitbrain and db02_splitbrain only
# - mysql 'down' on db02 = reads on db01_backup, slaves of db01
# - mysql 'down' on db01 = reads on db02_backup, slaves of db02
# - mysql 'down' on db01 & db02 = go nowhere, cluster FAIL!
#
backend cluster_db_read
    #
    # - max 2 master db servers available at all times
    # - max N slave db servers available at all times except during split-brain
    # - dbs track 'up' and 'down' of dbs in the cluster_db_write
    # - db_backups track 'up' and 'down' of db_backups in the cluster_db_write
    # - db_splitbrains set their 'up' or 'down' based on results from monitor_splitbrain
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /dbs
    server db01 172.16.0.60:3306 weight 1 track cluster_db_write/db01
    server db02 172.16.0.61:3306 weight 1 track cluster_db_write/db02
    server db01_backup 172.16.0.60:3306 weight 1 track cluster_db_write/db01_backup
    server db02_backup 172.16.0.61:3306 weight 1 track cluster_db_write/db02_backup
    server db01_splitbrain 172.16.0.60:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1
    server db02_splitbrain 172.16.0.61:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1
    #
    # Scaling & redundancy options
    # - db_slaves set their 'up' or 'down' based on results from monitor_dbs
    # - db_slaves should take longer to rise
    #
    server db03_slave 172.16.0.63:3306 weight 1 check port 9303 inter 1s rise 5 fall 1 addr 127.0.0.1
    server db04_slave 172.16.0.64:3306 weight 1 check port 9304 inter 1s rise 5 fall 1 addr 127.0.0.1
    server db05_slave 172.16.0.65:3306 weight 1 check port 9305 inter 1s rise 5 fall 1 addr 127.0.0.1
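The monitor_splitbrain frontend above is worth reading carefully: its `monitor fail unless` line keeps db01_splitbrain and db02_splitbrain 'up' only when both masters answer their status checks yet neither is replicating. The same condition, expressed as a hypothetical shell predicate over the four nbsrv values (`splitbrain` is an invented name, for illustration only):

```shell
# Split-brain condition from monitor_splitbrain, as a shell predicate.
# Arguments are the nbsrv values (0 or 1) for: db01_replication,
# db02_replication, db01_status, db02_status.
splitbrain() {
    local repl01="$1" repl02="$2" up01="$3" up02="$4"
    # true only when neither master replicates but both are still online
    [ "$repl01" -eq 0 ] && [ "$repl02" -eq 0 ] && \
    [ "$up01" -eq 1 ] && [ "$up02" -eq 1 ]
}

# Example: both masters up, neither replicating -> split-brain detected
#   splitbrain 0 0 1 1 && echo "split-brain"
```

In that state, cluster_db_write has no usable server, while cluster_db_read falls back to the two splitbrain entries, which is why the site degrades to read-only rather than going down entirely.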
Simon
August 11, 2009
9:33 pm
Very interesting. Using MySQL Proxy you could transparently redirect writes to the master and reads to the slaves. It could also potentially do some of the availability-testing logic for you.
http://forge.mysql.com/wiki/MySQL_Proxy
Alex
August 29, 2009
12:35 am
I’ve looked into this solution with MySQL Proxy but it doesn’t seem to be built for splitting reads/writes.
There are more details about splitting reads/writes with MySQL Proxy here: http://forge.mysql.com/wiki/MySQL_Proxy_RW_Splitting
Although I personally think it’s something which should be done at the application level.
Istvan Podor
September 6, 2009
4:42 am
Simon:
MysqlProxy is in an ALPHA state, and let me quote mysql.com: "MySQL Proxy is currently an Alpha release and should not be used within production environments."
So it's good for testing or a proof of concept, but I would never risk my production systems. Mysqlproxy will be a huge thing when it's finally released :)
Alex:
This is a great article thanks!
As I can see you are using a multi-master setup. I think in the case of a master failure it's kind of expensive to take all its slaves offline too, and maybe this is where MMM (or something less friendly or advanced) could be better.
But of course, everything depends on the application you are using :)
Thanks again.
Alex
September 6, 2009
12:14 pm
Istvan: That is correct regarding MySQL Proxy. That's exactly why I haven't bothered installing it after reading the docs and realizing it's still in its infancy. I'm also very anxious to see its official stable/production release down the road.
The goal with this configuration is to keep sites available, as opposed to being entirely down. Even if the websites are accessed in a "degraded" state, e.g. read-only in a split-brain situation, at least they are accessible and can still serve dynamic content, just without the ability to modify it. It definitely depends on the situation.
In regards to moving the slave, the problem is the slave will quickly start serving stale data as the other Master receives writes. Even if you use MMM to move the slave onto the new Master, you still need to perform some kind of consistency checks on the data, which MMM does *not* provide.
Istvan Podor
September 9, 2009
5:03 pm
Alex: You are right. Actually I already recommended this to someone. So back to basics, great article :))
With mmm I use maatkit to verify, actually with multi-master i have “cronjob-like” daily/weekly report (replication could become an enemy.. :) ). I would try this too where I work now, but I just have no resources :(
Hervé COMMOWICK
November 2, 2009
7:45 am
Hello,
it seems the mysqlchk_replication.sh script is missing from your article; can you provide it?
Thanks in advance.
Hervé.
David Cheng
November 2, 2009
6:35 pm
Hello,
Could you publish mysqlchk_replication.sh script in your article?
Thanks a lot.
–David
Laurent Spagnol
November 6, 2009
7:41 am
Hi,
Thanks for your great article.
I am working on a similar solution.
I'll probably use "SQL Relay" as a frontend, before HAProxy.
"SQL Relay" will be used for routing write requests to the "master pool" and read requests to the "slave pool".
http://sqlrelay.sourceforge.net/
client -> SQL Relay -> HAProxy -> master-"master"
                    |          \-> master-"slave"
                    |
                    \-> HAProxy -> slave1
                               \-> slave2
                                   ...

(Replication uses the "master" instance of HAProxy)
What do you think about it ?
L.S.
Alex Williams
November 12, 2009
1:29 pm
Laurent: I’m not familiar with SQL Relay, although it looks interesting. It seems to perform the same functionality as the “backend cluster_db_read” which I have configured, with a few extra options. I guess it really depends on your needs. I would be curious to see the performance impact of an additional layer of routing added between haproxy, the web, and db servers.
As for the replication check script, you can try this: http://forge.mysql.com/tools/tool.php?id=6
macir
November 14, 2009
8:09 am
Hi Alex,
thanks for this great document. I am a newbie to haproxy and a bit confused: in other configurations there is a "listen" section, but in your config there isn't. My haproxy starts without errors, but doesn't work. Is there something incomplete in haproxy.cfg?
thanks again…
Alex
November 23, 2009
7:26 pm
Hi,
Thank you. Yes, the "global" and "defaults" sections of the haproxy configuration are missing.
The config I posted is only an example of how to perform certain complex tasks and monitoring with haproxy. The 'listen' section can be split in two by using a 'frontend' and a 'backend'; this allows multiple frontends to forward to the same backend.
Although it’s outdated, the following document has some good standard configurations you can use to start from:
http://haproxy.1wt.eu/download/1.3/doc/haproxy-en.txt
Josh Brown
November 25, 2009
9:21 pm
This is a great article. I am going to use part of this, only instead of doing replication via MySQL master/slave, I am going to use DRBD with Heartbeat. I think DRBD/Heartbeat is a better choice because of the automatic reconciliation after a failure. Either way, love it.
Alex
November 28, 2009
5:52 pm
Hi Josh,
You have to be aware that DRBD has a limit of 2 nodes, so that could be problematic if you want more than that. It's great if you can deal with one server being "unusable" all the time, because the second server basically acts as a failover only; you're not really load-balancing requests or distributing the load.
Finally, DRBD can be really useful when both your servers are on the same local LAN because the replication is block-level. It won’t do well over a WAN or unreliable VPN connection.
Josh Brown
November 30, 2009
6:42 pm
Thanks Alex. Luckily I can accommodate both of those cases. I didn't quite give my overall use case, but basically I will have two HAProxy MySQL VIPs: one for writes (a 2-node DRBD cluster) and N for reads (1 for now, a MySQL replication slave using the methods you spoke of above). Then, using the JDBC layer in the application, we can redirect all writes to one VIP and all reads to the other.