Channel: Percona Database Performance Blog

How to Use ProxySQL 2 on Percona XtraDB Cluster for Failover


If you are thinking of using ProxySQL in your Percona XtraDB Cluster environment, I’ll explain how to use ProxySQL 2 for failover tasks.

How to Test

ProxySQL uses the “weight” column to define which node is the WRITER. For this example, I’ll use the following list of hostnames and IPs for reference:

+-----------+----------------+
| node_name | ip             |
+-----------+----------------+
| pxc1      | 192.168.88.134 |
| pxc2      | 192.168.88.125 |
| pxc3      | 192.168.88.132 |
+-----------+----------------+

My current WRITER node is the “pxc1” node, but how can I see which node is the current WRITER? It’s easy: just run the following query:

proxysql> select hostgroup_id, comment, hostname, status, weight from runtime_mysql_servers;

This is the output:

+--------------+---------+----------------+--------+--------+
| hostgroup_id | comment | hostname       | status | weight |
+--------------+---------+----------------+--------+--------+
| 11           | pxc2    | 192.168.88.125 | ONLINE | 100    |
| 12           | pxc3    | 192.168.88.132 | ONLINE | 100    |
| 12           | pxc2    | 192.168.88.125 | ONLINE | 100    |
| 10           | pxc1    | 192.168.88.134 | ONLINE | 1000   | <--- WRITER
| 11           | pxc1    | 192.168.88.134 | ONLINE | 1000   |
| 11           | pxc3    | 192.168.88.132 | ONLINE | 100    |
+--------------+---------+----------------+--------+--------+

Now, for maintenance reasons, I need to fail over to the “pxc2” node, because on “pxc1” I need to make some hardware changes (for example, increasing the physical memory or growing a disk partition). There are three steps.

1. Move the WRITER role to the “pxc2” node. We need to decrease the “weight” value on the current WRITER and increase the “weight” value on “pxc2”, the new WRITER:

proxysql> update mysql_servers set weight=100 where hostgroup_id=11 and hostname='<IP_PXC1>';
proxysql> update mysql_servers set weight=1000 where hostname='<IP_PXC2>';

proxysql> LOAD MYSQL SERVERS TO RUNTIME;

Note: I used the placeholders <IP_PXC1> and <IP_PXC2> for readability; replace them with the actual IPs.
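One caveat worth noting: LOAD MYSQL SERVERS TO RUNTIME only changes the running configuration. If you also want the new weights to survive a ProxySQL restart, persist them to disk as well:

```sql
proxysql> SAVE MYSQL SERVERS TO DISK;
```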

2. Take the “pxc1” node out of rotation so it stops receiving SELECTs, because after the failover it would otherwise become a READER node:

mysql> set global wsrep_reject_queries=all;

3. This step is optional. If you need to stop/start MySQL and want to keep the node out of rotation across the restart, persist the setting in the configuration file:

$ vim /PATH/TO/my.cnf

wsrep_reject_queries=all

Finally, this should be the output after the failover:

proxysql> select hostgroup_id, comment, hostname, status, weight from runtime_mysql_servers;
+--------------+---------+----------------+--------+--------+
| hostgroup_id | comment | hostname       | status | weight |
+--------------+---------+----------------+--------+--------+
| 11           | pxc2    | 192.168.88.125 | ONLINE | 1000   |
| 12           | pxc3    | 192.168.88.132 | ONLINE | 100    |
| 14           | pxc1    | 192.168.88.134 | ONLINE | 100    | <--- OFFLINE GROUP
| 10           | pxc2    | 192.168.88.125 | ONLINE | 1000   | <--- NEW WRITER
| 11           | pxc3    | 192.168.88.132 | ONLINE | 100    |
+--------------+---------+----------------+--------+--------+

The “pxc2” node is the new WRITER, and the “pxc1” node was moved to hostgroup 14 (the offline_hostgroup), but it is still online and in sync, just not receiving any queries.

If you restarted MySQL (step 3 above), the node stays out of ProxySQL’s rotation; to let it accept SELECTs again, run:

mysql> set global wsrep_reject_queries=none;

And don’t forget to remove the setting from your my.cnf file (step 3 above), if you added it there.

Observations

Why would you need to move the WRITER node? Because sometimes you need to take a node out for maintenance, retire a deprecated server, and so on.

On ProxySQL 1.x, you had to configure a scheduler to run an external script that performed the backend health checks and updated the database servers’ state. The “scheduler” table still exists in ProxySQL 2 for consistency with previous versions, but you no longer need to define a scheduler, because this is now supported natively for Galera Cluster and Percona XtraDB Cluster.
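For contrast, under ProxySQL 1.x the health checks were typically wired up through the scheduler, roughly like the sketch below. The script path and arguments here are illustrative (based on the commonly used proxysql_galera_checker script), so adapt them to your installation:

```sql
-- ProxySQL 1.x style: poll the backends every 3 seconds with an external
-- checker script (path and arguments are examples, not a fixed convention)
proxysql> INSERT INTO scheduler (id, active, interval_ms, filename)
          VALUES (1, 1, 3000, '/usr/bin/proxysql_galera_checker');
proxysql> LOAD SCHEDULER TO RUNTIME;
proxysql> SAVE SCHEDULER TO DISK;
```

With ProxySQL 2’s native support, none of this is required.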

Another thing to keep in mind: this new feature checks MySQL health by monitoring the following status variables and settings:

read_only
wsrep_desync
wsrep_reject_queries
wsrep_sst_donor_rejects_queries
wsrep_local_state
wsrep_local_recv_queue
wsrep_cluster_status
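If you want to see the raw values ProxySQL collects for these checks, ProxySQL 2 records them in its monitor schema. A quick way to peek at the most recent samples (table and column names as documented for ProxySQL 2; verify against your version):

```sql
proxysql> SELECT hostname, wsrep_local_state, wsrep_local_recv_queue,
                 read_only, wsrep_reject_queries
          FROM monitor.mysql_server_galera_log
          ORDER BY time_start_us DESC
          LIMIT 3;
```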

The “pxc_maint_mode” variable is no longer used by ProxySQL 2.

A bug was reported earlier about the “pxc_maint_mode” parameter in ProxySQL 2, since we used it heavily with ProxySQL 1; now I recommend using “wsrep_reject_queries” instead to remove a node from rotation when you need to work on a particular server.

Also, if the current WRITER goes down, ProxySQL will fail over to another MySQL server automatically, choosing the next-highest value in the “weight” column; when the original server comes back, the WRITER role moves back to it.

Last but not least, here are more details on the table behind this new feature:

CREATE TABLE mysql_galera_hostgroups (
writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1,2)) NOT NULL DEFAULT 0,
max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
comment VARCHAR,
UNIQUE (reader_hostgroup),
UNIQUE (offline_hostgroup),
UNIQUE (backup_writer_hostgroup));

There are a few settings in this table; I’ll explain them below.

writer_hostgroup: contains the writer nodes (read_only=0 in a master/slave topology) or the Percona XtraDB Cluster writers; how many nodes land here depends on the “max_writers” setting
backup_writer_hostgroup: the hostgroup that contains the candidate writer servers
reader_hostgroup: contains the reader nodes (read_only=1 in a master/slave topology) or the Percona XtraDB Cluster readers; whether the writer also appears here depends on the “writer_is_also_reader” setting
offline_hostgroup: contains all the nodes deemed not usable
active: 1/0, whether this configuration is in use or not
max_writers: how many nodes can receive writes at the same time; you can set it up to the number of nodes, and the default is 1
writer_is_also_reader: values 0/1/2; per the table definition above, the default is 0
max_transactions_behind: based on the wsrep_local_recv_queue status variable; if a node exceeds “max_transactions_behind”, it is marked as SHUNNED and stops receiving traffic
comment: a short description of this configuration
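As a minimal sketch, here is how this table could be populated to match the hostgroup numbers seen in the outputs above (10 = writer, 11 = backup writer, 12 = reader, 14 = offline). Treat the specific values, particularly max_transactions_behind and the comment, as examples rather than the post’s original configuration:

```sql
proxysql> INSERT INTO mysql_galera_hostgroups
          (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
           offline_hostgroup, active, max_writers, writer_is_also_reader,
           max_transactions_behind, comment)
          VALUES (10, 11, 12, 14, 1, 1, 0, 100, 'pxc failover example');
proxysql> LOAD MYSQL SERVERS TO RUNTIME;
proxysql> SAVE MYSQL SERVERS TO DISK;
```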

Summary

This new feature makes it easy to manage/configure Percona XtraDB Cluster/Galera nodes. You only need to update the “weight” column, load the change to the runtime table, and check the “runtime_mysql_servers” table to make sure the WRITER node was changed. I hope you find this post helpful!

Check out some of my previous blogs:

ProxySQL Experimental Feature: Native ProxySQL Clustering

How to Add More Nodes to an Existing ProxySQL Cluster

How to Manage ProxySQL Cluster with Core and Satellite Nodes

