
Percona XtraDB Cluster in a Nutshell

Frédéric Descamps and Seppo Jaakola, Percona Live UK (London) 2012

Who are we?

Frédéric Descamps (@lefred): Percona consultant, http://about.me/lefred, devops believer, managing MySQL since 3.23 (as far as I remember)
Seppo Jaakola (@codership): founder of Codership


Agenda

What is Percona XtraDB Cluster? (in a few words)
Hands on!


What is PXC?

Percona Server + Galera replication = Percona XtraDB Cluster

To remember
Important URLs:
http://www.codership.com/wiki/doku.php?id=galera_parameters
http://www.codership.com/wiki/doku.php?id=mysql_options_0.8


Present

Currently we have one node (production) running Percona Server


Future

We want a 3-node PXC cluster
We want a smooth migration with minimum downtime


The Plan

Configure PXC on node2 and node3
Take a backup of node1
Restore the backup on node2
Play with the 2-node cluster
Set up the current production server as the 3rd node


Connect to your servers

Test the connection (ssh) to all your servers (node1, node2 and node3)
login: root, password: vagrant

ssh -p 2221 root@127.0.0.1   (node1)
ssh -p 2222 root@127.0.0.1   (node2)
ssh -p 2223 root@127.0.0.1   (node3)


The production

We have a script that simulates our production load:

while true; do
    pluk.py
    sleep 5
done


Run the script (pluk.py) once on node1

Install PXC

On node2 and node3, install Percona-XtraDB-Cluster-Server
You should normally use yum (or apt)
We will use rpm, as the packages are already downloaded in /usr/local/rpms (see the sketch below)
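A minimal sketch of this install step (the exact file names under /usr/local/rpms and the yum package names are assumptions, adjust them to what is on your lab VMs):

# on node2 and node3, install from the pre-downloaded packages
cd /usr/local/rpms
rpm -ivh *.rpm

# with the Percona yum repository configured you would normally run something like:
# yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client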


Configure PXC
[mysql]
prompt="node2 mysql> "

[mysqld]
datadir=/var/lib/mysql
user=mysql
log_error=node2-error.log
binlog_format=ROW
innodb_log_file_size=64M
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

Configure PXC (2)


wsrep_cluster_name=pluk2k12
wsrep_cluster_address=gcomm://192.168.70.1,192.168.70.2,192.168.70.3
wsrep_node_name=node2
wsrep_node_address=192.168.70.2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_sst_method=rsync
wsrep_slave_threads=4


To remember
Disable SELinux (see the sketch after this list)
wsrep_cluster_address now supports multiple entries; wsrep_urls in [mysqld_safe] is deprecated
The SST method is defined in my.cnf
When wsrep_node_address is used we can omit wsrep_sst_receive_address, wsrep_node_incoming_address and ist.recv_addr
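A sketch of the SELinux step using standard RHEL/CentOS commands (not specific to these slides):

# disable SELinux for the running system
setenforce 0
# and make it persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config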


Let's have a look...


Check the MySQL error log: what do we see?
Check the variables and status related to PXC:

SHOW GLOBAL VARIABLES LIKE 'wsrep%'; SHOW GLOBAL STATUS LIKE 'wsrep%';

Play with the cluster (follow instructor)


To remember
wsrep = 'Write Set Replicator'
Settings are available with SHOW GLOBAL VARIABLES LIKE 'wsrep%';
Status counters are available with SHOW GLOBAL STATUS LIKE 'wsrep%';
Important status variables to check the cluster health (see the query sketch below):
wsrep_local_state_comment
wsrep_cluster_size
wsrep_cluster_status
wsrep_connected
wsrep_ready
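A quick way to pull just those counters in one statement (plain SHOW STATUS syntax, nothing PXC-specific assumed):

SHOW GLOBAL STATUS WHERE Variable_name IN
  ('wsrep_local_state_comment', 'wsrep_cluster_size',
   'wsrep_cluster_status', 'wsrep_connected', 'wsrep_ready');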

What about State Snapshot Transfer (SST)


SST = full copy of the cluster data to a specific node (from DONOR to JOINER)
The donor can be chosen with wsrep_sst_donor
Multiple SST methods:

skip
rsync
mysqldump
xtrabackup


Test all SST methods



No problem with mysqldump?

To remember
SST methods are not all the same
You can specify a donor per node
Xtrabackup doesn't freeze the donor for the complete SST period
Xtrabackup requires authentication parameters (see the my.cnf sketch below)
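A minimal sketch of an xtrabackup SST setup, assuming a dedicated backup user (the user name, password and privilege list here are illustrations, not taken from the slides):

# in my.cnf on every node
[mysqld]
wsrep_sst_method=xtrabackup
wsrep_sst_auth=sstuser:sstpassword

# and create the matching MySQL user on the donor, for example:
# GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 'sstpassword';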


Loadbalancer

Let's configure a loadbalancer to connect to our cluster


HAProxy (needs clustercheck)
glb

Install and configure glb on node2


Loadbalancer - glb
glbd --daemon --threads 6 \
     --control 127.0.0.1:4444 \
     192.168.70.2:3307 \
     192.168.70.2:3306 \
     192.168.70.3:3306
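Once glbd is running, a quick sanity check could look like this (a sketch; it assumes a MySQL account that is allowed to connect through 192.168.70.2:3307):

# every connection through the balancer should land on one of the cluster nodes
mysql -h 192.168.70.2 -P 3307 -u root -p -e "SELECT @@hostname;"
# repeat it a few times: the reported hostname should change as connections are spread over the backends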


Quorum and split brain

PXC checks for quorum to avoid split-brain situations



Stop the communication between node2 and node3 (see the iptables sketch below)
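One way to simulate that network cut (a sketch with iptables; the addresses follow the lab layout, run each rule on the node indicated and remove the rules afterwards):

# on node2: drop all traffic coming from node3
iptables -A INPUT -s 192.168.70.3 -j DROP
# on node3: drop all traffic coming from node2
iptables -A INPUT -s 192.168.70.2 -j DROP

# restore the communication later (same nodes as above, -D instead of -A)
# on node2:
iptables -D INPUT -s 192.168.70.3 -j DROP
# on node3:
iptables -D INPUT -s 192.168.70.2 -j DROP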


BAD solution :-(
wsrep_provider_options = "pc.ignore_quorum=true"
...and the GOOD solution: next slide!


Quorum and split brain (2)

Galera Arbitration Daemon (garbd)




Run garbd on node1 (see the command sketch below)

Test the following:
Stop mysql on node3: what's happening?
Stop garbd on node1: what's happening?
Start garbd on node1 and mysql on node3, block the communication between node2 and node3: what's happening this time?
Block the communication between node1 and node3: what's happening?
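A sketch of starting the arbitrator on node1 (garbd ships with Galera; the group name and addresses come from the cluster configuration used earlier in this tutorial):

garbd --group=pluk2k12 \
      --address="gcomm://192.168.70.2,192.168.70.3" \
      --daemon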


To remember
3 nodes is the recommended minimum!
An odd number of nodes is always better
You can use a "fake" node (garbd) and even replicate through it!


Incremental State Transfer (IST)


Used to avoid a full SST (uses the gcache)
gcache.size can be specified using wsrep_provider_options (see the sketch below)
Now works even after a crash if the state is consistent
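For example in my.cnf (the 256M value is only an illustration, size the gcache to your write volume and the outage window you want to cover):

[mysqld]
wsrep_provider_options = "gcache.size=256M"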


stop mysql on node3, run pluk.py on node2, restart node3

Production Migration

Backup the production server (node1) and restore it on node2 (see the xtrabackup sketch below)
Run pluk.py on node1
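A sketch of that backup and restore with Percona XtraBackup (the 2012-era innobackupex wrapper; the /root/backup path, the scp transfer and the timestamp directory are assumptions for this lab):

# on node1: take the backup and prepare it
innobackupex /root/backup/
innobackupex --apply-log /root/backup/<timestamp_dir>/

# copy it to node2 and restore it into an empty datadir
scp -r /root/backup/<timestamp_dir> root@192.168.70.2:/root/backup/
# on node2:
innobackupex --copy-back /root/backup/<timestamp_dir>/
chown -R mysql:mysql /var/lib/mysql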


Production Migration (2)


Start node3
Run pluk.py on node1
Start the async replication from node1 to node2 (see the CHANGE MASTER sketch below)
What about node3?
Run pluk.py on node1
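A minimal sketch of pointing node2 at node1 as an asynchronous slave (the replication user, password and binlog coordinates are examples; the real coordinates come from your backup, not from the slides):

-- on node2
CHANGE MASTER TO
  MASTER_HOST = '192.168.70.1',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'replpass',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS = 107;
START SLAVE;
SHOW SLAVE STATUS\G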



Production Migration (3)

Configure pluk.py to connect to the loadbalancer
Run pluk.py
Scratch the data on node1 and install PXC
Configure PXC on node1
Start the cluster on node1 (SST should be done with node3 as donor)
Run pluk.py and check the data on all nodes (see the sketch below)
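To verify that all three nodes are in the cluster, something like this can be run from any machine (it assumes the root account can connect to every node over the network, which may need adjusting in your lab):

for h in 192.168.70.1 192.168.70.2 192.168.70.3; do
    mysql -h $h -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_local_state_comment');"
done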



Online Schema Changes


Total Order Isolation (TOI)
Rolling Schema Upgrade (RSU)
wsrep_OSU_method (see the sketch below)
pt-online-schema-change
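A sketch of switching between the two methods for a single ALTER (wsrep_OSU_method is the server variable that selects the method; the actor table is the one created on the next slide):

-- default behaviour: the DDL runs in Total Order Isolation on the whole cluster
SET GLOBAL wsrep_OSU_method = 'TOI';
ALTER TABLE pluk.actor ADD COLUMN col_toi INT;

-- Rolling Schema Upgrade: the DDL is applied locally only and must be repeated on every node
SET GLOBAL wsrep_OSU_method = 'RSU';
ALTER TABLE pluk.actor ADD COLUMN col_rsu INT;
SET GLOBAL wsrep_OSU_method = 'TOI';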


Online Schema Changes (2)

Create a large table to modify


CREATE DATABASE pluk;
USE pluk;
CREATE TABLE `actor` (
  `actor_id` int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
                ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB;

INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM dual;
INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM actor;

Repeat the last INSERT until it takes 10 sec+



Online Schema Changes (3)

Use all three methods while running pluk.py against your new database, adding a new column each time (a pt-online-schema-change sketch follows)
Check the pluk.py output
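For the pt-online-schema-change run, a possible invocation (pt-osc comes from Percona Toolkit; the column name is arbitrary and the connection options must match your setup):

pt-online-schema-change \
  --alter "ADD COLUMN col_ptosc INT" \
  D=pluk,t=actor \
  --execute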


Thank you !

Visit: http://www.percona.com/live/london-2012/

Annual Percona Live MySQL Conference and Expo


The Hyatt Regency Hotel, Santa Clara, CA

April 22nd-25th, 2013
Registration and Call for Papers are Open!
Visit: http://www.percona.com/live/mysql-conference-2013/
