Percona XtraDB Cluster Internals
Who are we ?
Frédéric Descamps (@lefred), Percona Consultant, http://about.me/lefred, devops believer, managing MySQL since 3.23 (as far as I remember)
Seppo Jaakola (@codership), founder of Codership
www.percona.com
Agenda
What is PXC ?
Percona Server + Galera = Percona XtraDB Cluster
To remember
Important urls
http://www.codership.com/wiki/doku.php?id=galera_parameters
http://www.codership.com/wiki/doku.php?id=mysql_options_0.8
Present
Future
We want a 3-node PXC cluster
We want a smooth migration with minimal downtime
The Plan
1. Configure PXC on node2 and node3
2. Take a backup of node1
3. Restore the backup on node2
4. Play with the 2-node cluster
5. Set up the current production server as the 3rd node
Test the connection (ssh) to all your servers (node1, node2 and node3) login: root password: vagrant
ssh -p 2221 root@127.0.0.1   # node1
ssh -p 2222 root@127.0.0.1   # node2
ssh -p 2223 root@127.0.0.1   # node3
The production
We have a script (pluk.py) that simulates our production load
Run the script once on node1
while true; do
    pluk.py
    sleep 5
done
Install PXC
On node2 and node3, install Percona-XtraDB-Cluster-Server
Normally you should use yum (or apt)
We will use rpm, as the files are already downloaded in /usr/local/rpms
Configure PXC
[mysql]
prompt="node2 mysql> "

[mysqld]
datadir=/var/lib/mysql
user=mysql
log_error=node2-error.log
binlog_format=ROW
innodb_log_file_size=64M
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
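The settings above are only the generic MySQL part; the wsrep-specific options (Galera provider, cluster address, node identity) still have to be added before node2 can join a cluster. A minimal sketch, assuming the 192.168.70.x addresses used elsewhere in this tutorial; the library path and the cluster name "pluk" are illustrative and must match your own setup:

```ini
# Hypothetical wsrep section for node2; adjust paths and addresses.
[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name=pluk
wsrep_cluster_address=gcomm://192.168.70.2,192.168.70.3
wsrep_node_name=node2
wsrep_node_address=192.168.70.2
```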
To remember
Disable SELinux
wsrep_cluster_address now supports multiple entries; wsrep_urls in [mysqld_safe] is deprecated
The SST method is defined in my.cnf
When wsrep_node_address is used, we can omit wsrep_sst_receive_address, wsrep_node_incoming_address and ist.recv_addr
Check the MySQL error log: what do we see?
Check the variables and status related to PXC:
SHOW GLOBAL VARIABLES LIKE 'wsrep%'; SHOW GLOBAL STATUS LIKE 'wsrep%';
To remember
wsrep = 'Write Set Replication'
Settings are available with SHOW GLOBAL VARIABLES LIKE 'wsrep%';
Status counters are available with SHOW GLOBAL STATUS LIKE 'wsrep%';
Important for checking cluster status:
- wsrep_local_state_comment
- wsrep_cluster_size
- wsrep_cluster_status
- wsrep_connected
- wsrep_ready
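A quick health check on any node could then look like this. A sketch only: the values in the comments are what a healthy 2-node cluster would be expected to report, not captured output:

```sql
-- Check the key status counters on one node
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- Synced
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- 2
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- Primary
SHOW GLOBAL STATUS LIKE 'wsrep_ready';                -- ON
```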
SST = full copy of cluster data to a specific node (from DONOR to JOINER)
The donor can be chosen with wsrep_sst_donor
Multiple SST methods are available
No problem with mysqldump ?
To remember
SST methods are not all the same
You can specify a donor per node
Xtrabackup doesn't freeze the donor for the complete SST period
Xtrabackup requires an authentication parameter
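In my.cnf, the method and credentials from this list would be set roughly as follows. A sketch: the user and password values are placeholders, not from the slides, and must match an account that exists on the donor:

```ini
[mysqld]
# Choose the SST method: mysqldump, rsync or xtrabackup
wsrep_sst_method=xtrabackup
# xtrabackup needs MySQL credentials on the donor ("user:password")
wsrep_sst_auth=sstuser:sstpassword
# Optionally prefer a specific donor node (by wsrep_node_name)
wsrep_sst_donor=node3
```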
Loadbalancer
Loadbalancer - glb
# first non-option address is where glb listens, the rest are the backends
glbd --daemon --threads 6 \
     --control 127.0.0.1:4444 \
     192.168.70.2:3307 \
     192.168.70.2:3306 \
     192.168.70.3:3306
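The control socket (127.0.0.1:4444 above) accepts simple text commands, so backends can be added or drained at runtime without restarting the daemon. A sketch, assuming the daemon from this slide is running; "address:port:weight" follows glb's control protocol, where a negative weight removes a backend:

```shell
# Add node1 as a backend (e.g. after the migration)
echo "192.168.70.1:3306:1" | nc 127.0.0.1 4444
# Remove node2 from the pool
echo "192.168.70.2:3306:-1" | nc 127.0.0.1 4444
# Dump the current routing table and statistics
echo "getinfo" | nc 127.0.0.1 4444
```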
BAD solution :-( wsrep_provider_options = "pc.ignore_quorum=true"
and the GOOD solution... next slide !
To remember
3 nodes is the recommended minimum !
An odd number of nodes is always better
You can use a "fake" node (garbd), and even replicate through it !
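The arithmetic behind these rules can be sketched in a few lines of shell (an illustration, not part of the tutorial scripts): a partition keeps Primary status only while it holds a strict majority of the previous cluster size, which is why a 2-node cluster cannot survive losing a node and why even splits lose quorum.

```shell
# quorum ALIVE TOTAL: does a partition of ALIVE nodes out of TOTAL keep quorum?
quorum() {
    local alive=$1 total=$2
    if [ $(( alive * 2 )) -gt "$total" ]; then
        echo "Primary"        # strict majority: keeps accepting writes
    else
        echo "non-Primary"    # no majority: refuses writes
    fi
}

quorum 1 2   # -> non-Primary (why 2 nodes are not enough)
quorum 2 3   # -> Primary     (3 nodes survive one failure)
quorum 2 4   # -> non-Primary (an even split loses quorum)
```

Adding garbd as a third, data-less member turns a 2-node setup into the 2-of-3 case above.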
IST (Incremental State Transfer) is used to avoid a full SST (using the gcache)
gcache.size can be specified using wsrep_provider_options
Now works even after a crash if the state is consistent
Stop mysql on node3, run pluk.py on node2, then restart node3
Production Migration
Back up the production server and restore the backup on node2
Run pluk.py on node1
1. Start node3
2. Run pluk.py on node1
3. Start the async replication from node1 to node2
4. What about node3 ? Run pluk.py on node1
1. Configure pluk.py to connect to the loadbalancer
2. Run pluk.py
3. Scratch the data on node1 and install PXC
4. Configure PXC on node1
5. Start the cluster on node1 (SST should be done with node3)
6. Run pluk.py and check the data on all nodes
- Total Order Isolation (TOI)
- Rolling Schema Upgrade (RSU)
- wsrep_OSU_method
- pt-online-schema-change
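Switching between the two built-in methods is done per session via wsrep_OSU_method before issuing the DDL. A sketch; the column names are illustrative, only the actor table comes from the exercise:

```sql
-- TOI (the default): the DDL is replicated in total order and runs
-- on all nodes at the same logical point
SET SESSION wsrep_OSU_method = 'TOI';
ALTER TABLE actor ADD COLUMN note1 VARCHAR(10);

-- RSU: the DDL runs only on the local node, which desyncs from the
-- cluster while altering; repeat it on each node in turn
SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE actor ADD COLUMN note2 VARCHAR(10);
```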
INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM dual;
INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM actor;
Use all three methods while running pluk.py against your new database, adding a new column each time
Check pluk.py's output
Thank you !
Visit: http://www.percona.com/live/london-2012/
April 22nd-25th, 2013 Registration and Call for Papers are Open!
Visit: http://www.percona.com/live/mysql-conference-2013/