HOWTO set up a MySQL Cluster for two servers (three servers required for true
redundancy)
Introduction
This HOWTO was designed for a classic setup of two servers behind a load balancer. The aim is true redundancy: either server can be unplugged and the site will remain up.
Notes:
You MUST have a third server as a management node, but this can be shut down after the cluster starts. Note, however, that I do not recommend shutting down the management server (see the extra notes at the bottom of this document for more information). You cannot run a MySQL Cluster with just two servers and have true redundancy.
Although it is possible to set the cluster up on two physical servers, you WILL NOT GET the ability to "kill" one server and have the cluster continue as normal. For this you need a third server running the management node.
mysql1.domain.com 192.168.0.1
mysql2.domain.com 192.168.0.2
mysql3.domain.com 192.168.0.3
Servers 1 and 2 will be the two that end up "clustered". This would be perfect for two servers behind a load balancer or using round-robin DNS, and is a good replacement for replication. Server 3 needs only minor changes made to it and does NOT require a MySQL install. It can be a low-end machine and can be carrying out other tasks.
Stage 1: Install MySQL on both servers

cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
groupadd mysql
useradd -g mysql mysql
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
ln -s mysql-max-4.1.9-pc-linux-gnu-i686 mysql
1 of 5 01/02/2011 13:50
MySQL :: MySQL Cluster: Two webserver setup http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-ser...
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server
Stage 2: Install and configure the management server

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
cd mysql-max-4.1.9-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm
You now need to set up the config file for this management server:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 # the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.0.1 # the IP of the FIRST SERVER
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 # the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here. I suggest you don't.
[MYSQLD]
[MYSQLD]
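Before starting ndb_mgmd, a quick sanity check of config.ini can catch a missing [NDBD] or [MYSQLD] slot, a common reason nodes later fail to connect. This is only a sketch: the here-document stands in for your real file, so point CONF at /var/lib/mysql-cluster/config.ini on the management server instead.

```shell
# count node sections in config.ini to catch omissions before starting ndb_mgmd
# NOTE: this writes a sample file to /tmp for illustration; on a real
# management server set CONF=/var/lib/mysql-cluster/config.ini instead
CONF=/tmp/config.ini
cat > "$CONF" <<'EOF'
[NDBD DEFAULT]
NoOfReplicas=2
[NDB_MGMD]
HostName=192.168.0.3
[NDBD]
HostName=192.168.0.1
[NDBD]
HostName=192.168.0.2
[MYSQLD]
[MYSQLD]
EOF
# expect one [NDBD] per storage node and one [MYSQLD] per SQL node
echo "NDBD sections:   $(grep -c '^\[NDBD\]$' "$CONF")"
echo "MYSQLD sections: $(grep -c '^\[MYSQLD\]$' "$CONF")"
```

With a two-node setup as above, both counts should be 2.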
ndb_mgmd
This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).
Stage 3: Configure the storage/SQL servers and start MySQL

vi /etc/my.cnf
Enter i to go to insert mode again and insert this on both servers (changing the IP address to the IP of the management server that you set up in stage 2):
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
Now, we make the data directory and start the storage engine:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start
Once you have done one server, go back to the start of stage 3 and repeat exactly the same procedure on the second server.
Note: you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.
Stage 4: Check that it works

/usr/local/mysql/bin/ndb_mgm
Enter the command SHOW to see what is going on. A sample output looks like this:
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.3 (Version: 4.1.9)
[mysqld(API)] 2 node(s)
id=4 (Version: 4.1.9)
id=5 (Version: 4.1.9)
ndb_mgm>
If you see any node listed as "not connected" in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I can try to find out where you have gone wrong and change this HOWTO to fix it.
If everything is OK up to this point, it is time to test MySQL. On either server mysql1 or mysql2 enter the following commands (note that we have no root password yet):
mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;
If this works, now go to the other server and run the same SELECT to see what you get. Insert from that host and go back to host 1 to see if it works there. If it does, congratulations!
The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and check that the other server keeps going fine (try the SELECT query). If you don't have physical access, do the following:
Run ps aux | grep ndbd to list the ndbd processes. Ignore the "grep ndbd" entry (the last line) but kill the first two processes by issuing the command kill -9 pid pid.
Then try the SELECT on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue
ndbd
Note: NO --initial!
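The kill step above can also be scripted; here is a sketch (the PIDs in the comments are hypothetical, and it unconditionally kills any running ndbd, so only use it on a test node):

```shell
# collect the PIDs of any running ndbd data node processes
# (the [n] bracket trick stops grep from matching its own command line)
pids=$(ps aux | grep '[n]dbd' | awk '{print $2}')
if [ -n "$pids" ]; then
    echo "killing ndbd processes: $pids"   # e.g. 5578 5579 (hypothetical)
    kill -9 $pids                          # simulate the node dying
else
    echo "no ndbd processes found on this machine"
fi
```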
I strongly recommend that you do not stop the management server once it has started. This is for several reasons:
Note that this is a really quick script. You ought really to write one that at least checks whether ndbd is already started on the machine.
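A minimal version of a script that does perform that check might look like this (a sketch only; the binary path assumes the stage 1 install location):

```shell
#!/bin/sh
# start ndbd on this storage node only if it is not already running
# (suitable for running from cron or an init script)
if ps aux | grep '[n]dbd' > /dev/null; then
    echo "ndbd already running; nothing to do"
elif [ -x /usr/local/mysql/bin/ndbd ]; then
    echo "ndbd not running; starting it"
    /usr/local/mysql/bin/ndbd    # note: NO --initial on a routine restart
else
    echo "ndbd binary not found at /usr/local/mysql/bin/ndbd"
fi
```

The [n] in the grep pattern stops the grep process itself from counting as a running ndbd.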
Use of hostnames
You will note that I have used IP addresses exclusively throughout this setup. This is because using hostnames simply increases the number of things that can go wrong. Mikael Ronström of MySQL AB kindly explains: "Hostnames certainly work with MySQL Cluster. But using hostnames introduces quite a few error sources, since a proper DNS lookup system must be set up, sometimes /etc/hosts must be edited, and there might be security blocks ensuring that communication between certain machines is not possible other than on certain ports." I strongly suggest that while testing you use IP addresses if you can, then once it is all working change to hostnames.
RAM
Use the following formula to work out the amount of RAM that you need on each storage node:

(SizeofDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes

NumberOfReplicas is set to two by default. You can change it in config.ini if you want. So, for example, to run a 4GB database over two servers with NumberOfReplicas set to two, you need 4.4GB of RAM on each storage node. For the SQL nodes and management nodes you don't need much RAM at all. To run a 4GB database over four servers with NumberOfReplicas set to two, you would need 2.2GB per node.
Note: a lot of people have emailed me querying the maths above! Remember that the cluster is fault tolerant, and each piece of data is stored on at least two nodes (two by default, as set by NumberOfReplicas). So you need TWICE the space you would need for just one copy, multiplied by 1.1 for overhead.
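The worked examples can be checked with a quick shell calculation of the formula (SizeofDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes:

```shell
# per-node RAM: (SizeofDatabase * NumberOfReplicas * 1.1) / NumberOfDataNodes
# 4GB database, NumberOfReplicas=2, over 2 and then 4 storage nodes
awk 'BEGIN { printf "2 data nodes: %.1fGB per node\n", 4 * 2 * 1.1 / 2 }'
awk 'BEGIN { printf "4 data nodes: %.1fGB per node\n", 4 * 2 * 1.1 / 4 }'
```

This reproduces the 4.4GB and 2.2GB figures quoted above.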
If you decide to add storage nodes, bear in mind that three is not an optimal number. If you are going to move up from two (above), then move to four.
To add storage nodes, you need to add another [NDBD] section to config.ini as per the template above, edit /etc/my.cnf on the new storage node as per the example above, and then create the directory /var/lib/mysql-cluster. You then need to SHUTDOWN the cluster, start the management daemon (ndb_mgmd), start all the ndbd nodes including the new one, and then restart all the MySQL servers.

Adding SQL nodes

To add an SQL node, install MySQL on the new machine as in stage 1 and add the following to its /etc/my.cnf:
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3 # the IP of the MANAGEMENT (THIRD) SERVER
Then you need to make sure that there is another [MYSQLD] line at the end of config.ini on the management server. Restart the cluster (see below for an important note) and restart mysql on the new API node. It should then be connected.
If you ever change config.ini you must stop the whole cluster and restart it so that the configuration file is re-read. Stop the cluster by issuing a SHUTDOWN command in the ndb_mgm console on the management server, and then restart all the storage nodes.
Some useful configuration options that you will need if you have large tables:
DataMemory: defines the space available to store the actual records in the database. The entire DataMemory will be allocated in memory, so it is important that the machine contains enough memory to handle the DataMemory size. Note that DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Default: 80MB
IndexMemory: controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Default: 18MB
MaxNoOfAttributes: defines the number of attributes that can be defined in the cluster. Default: 1000
MaxNoOfTables: the maximum number of tables (bear in mind that each BLOB field creates another table for various reasons, so take this into account). Default: 128
View this page for further information about the things you can put in the [NDBD] section of config.ini.
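As an illustration, raising these limits for a larger database would mean extending the [NDBD DEFAULT] section of config.ini along these lines (the values here are made-up examples, not recommendations; remember DataMemory and IndexMemory are allocated in full on each storage node):

```ini
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=512M       # records and ordered indexes (default 80M)
IndexMemory=128M      # hash indexes for primary/unique keys (default 18M)
MaxNoOfAttributes=5000
MaxNoOfTables=1024
```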
An important note about security

MySQL Cluster is not secure. By default anyone can connect to your management server and shut the whole thing down. I suggest the following precautions:
Install APF and block all ports except those you use (do NOT include any MySQL cluster ports). Add the IPs of your
cluster machines to the /etc/apf/allow_hosts file.
Run MySQL cluster over a second network card on a second, isolated, network.
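For the APF suggestion, the allow list is simply one address per line; a hypothetical /etc/apf/allow_hosts for the three machines above would contain:

```
192.168.0.1
192.168.0.2
192.168.0.3
```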
Thanks
I must thank several others who have contributed to this: Mikael Ronström from MySQL AB, for helping me get this to work and spotting my silly mistake right at the end; Lewis Bergman, for proof-reading this page, pointing out some improvements, and suffering the frustration with me; and Martin Pala, for explaining the final reason to keep the management server up, as well as a few other minor changes. Thanks also to Terry from Advanced Network Hosts, who paid me to set a cluster up and at the same time produce a HOWTO.
Alex Davies would love to hear from you if you successfully set this cluster up, if you get stuck on something, if you find a mistake in his HOWTO, or if you have any suggestions. Please contact him.
Please also see the Cluster forum and Cluster mailing list.