Testing Percona XtraDB Cluster 8.0 Using Vagrant

As Alkin and Ramesh have shown us in their Testing Percona XtraDB Cluster 8.0 with DBdeployer post, it is now possible to easily deploy an environment to test the features provided by the brand-new release of Percona XtraDB Cluster 8.0.

We have also worked on creating a testing environment for those who use Vagrant instead. Whether it is simply the tool you are used to working with, or you want a proper VM for each instance in particular, you can use the following commands to easily deploy a three-node cluster.

Requirements

Vagrant runs on Linux, macOS, and Windows; you just need to have the packages installed. Visit Installing Vagrant if you haven’t done so already.
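
For instance, on a Debian- or Ubuntu-based host you could install Vagrant and the VirtualBox provider roughly as follows (package names and versions depend on your distribution's repositories, so adjust as needed):

shell> sudo apt-get update
shell> sudo apt-get install -y vagrant virtualbox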

Apart from this, the only other special mention in this section is the CPU and memory requirements for each node. Make sure to tune the Vagrantfile if needed; by default, the project will use two CPUs and 4GB of RAM per node:

vb.customize ["modifyvm", :id, "--memory", "4096"]
vb.customize ["modifyvm", :id, "--cpus", "2"]

Download and Setup

To get the project:

shell> cd ~/path/to/your/vagrant/projects/
shell> git clone -b pxc80-testing \
       https://github.com/guriandoro/vagrant_machines.git
shell> cd vagrant_machines/pxc/

Then, to start the nodes:

shell> vagrant up
Bringing machine 'node-1' up with 'virtualbox' provider...
Bringing machine 'node-2' up with 'virtualbox' provider...
Bringing machine 'node-3' up with 'virtualbox' provider...
...

This will result in three nodes fully configured and ready for us to log in. We can do so with the following commands:

shell> vagrant ssh node-1
shell> vagrant ssh node-2
shell> vagrant ssh node-3

After we are logged in, we can access the cluster nodes by simply using the MySQL command-line interface:

[vagrant@node1 ~]$ mysql
...
Server version: 8.0.18-9 Percona XtraDB Cluster (GPL), Release rel9, Revision a34c3d3, WSREP version 26.4.3
...
PXC: root@localhost ((none)) > show status like 'wsrep_cluster_s%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | 940f2b45-7dd5-11ea-9e3b-6e859b322b70 |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
3 rows in set (0.01 sec)

We have also included a basic wrapper script that runs sysbench so we can generate some load (it will run the cleanup first, then the initialization, and finally the OLTP insert workload). It can be run as either the vagrant or the root OS user:

[vagrant@node1 ~]$ ~/run_sysbench.sh
sysbench 1.0.19 (using bundled LuaJIT 2.1.0-beta2)
...
Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'
...
Threads started!
[ 1s ] thds: 1 tps: 262.06 qps: 262.06 (r/w/o: 0.00/262.06/0.00) lat (ms,95%): 7.56 err/s: 0.00 reconn/s: 0.00
...

In the provision.sh script, we created a user with the mysql_native_password authentication plugin for use with sysbench, to circumvent the issues sysbench has with MySQL 8.0’s default caching_sha2_password plugin.
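
The exact statement lives in provision.sh, but creating such a user boils down to something along these lines (the user name, host, and password here are illustrative, not necessarily the ones the script uses):

PXC: root@localhost ((none)) > CREATE USER 'sbtest'@'%' IDENTIFIED WITH mysql_native_password BY 'sbtest';
PXC: root@localhost ((none)) > GRANT ALL PRIVILEGES ON sbtest.* TO 'sbtest'@'%';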

The sysbench execution will leave ten tables in the sbtest database:

PXC: root@localhost (sbtest) > SHOW TABLES;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest1          |
| sbtest10         |
| sbtest2          |
| sbtest3          |
| sbtest4          |
| sbtest5          |
| sbtest6          |
| sbtest7          |
| sbtest8          |
| sbtest9          |
+------------------+
10 rows in set (0.00 sec)

If you want to have more control over what the sysbench script does, feel free to edit the run_sysbench.sh file to your liking.
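
As a rough sketch of what you might adjust there (the script's actual options and credentials may differ), a sysbench OLTP insert run typically looks like this:

shell> sysbench oltp_insert \
       --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
       --tables=10 --table-size=10000 \
       --threads=1 --time=60 \
       run

Changing --tables, --table-size, --threads, or --time is usually enough to shape the kind of load you want.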

Cleaning Up

To stop all running VMs, execute:

shell> vagrant halt
==> node-3: Attempting graceful shutdown of VM...
==> node-2: Attempting graceful shutdown of VM...
==> node-1: Attempting graceful shutdown of VM...

To terminate and delete all the running VMs and go back to a clean state, execute:

shell> vagrant destroy -f
==> node-3: Forcing shutdown of VM...
==> node-3: Destroying VM and associated drives...
==> node-2: Forcing shutdown of VM...
==> node-2: Destroying VM and associated drives...
==> node-1: Forcing shutdown of VM...
==> node-1: Destroying VM and associated drives...

Network Addendum

To emulate network delay and packet loss between the nodes, we can use the following tc commands (as root). In this case, the eth1 interface was used for communication between the nodes, but double-check with the ip a command in case it’s different in your setup.

To add a 150-millisecond delay with 5 milliseconds of jitter (uniformly distributed):

shell> tc qdisc add dev eth1 root netem delay 150ms 5ms

To add 25 percent packet loss:

shell> tc qdisc add dev eth1 root netem loss 25%

Note that using 100 percent packet loss, in particular, is a good way to emulate a “pull the network plug” kind of situation.
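
That would look like this (same eth1 assumption as above):

shell> tc qdisc add dev eth1 root netem loss 100%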

To add both delays and packet loss:

shell> tc qdisc add dev eth1 root netem delay 150ms 5ms loss 25%

And finally, to remove it:

shell> tc qdisc del dev eth1 root netem

Setting Up More Nodes

To change the number of nodes used in the cluster, just edit the Vagrantfile and set the following variable to however many nodes you want:

# the number of pxc nodes
number_of_nodes=3
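
After editing, running vagrant up again should create and provision any machines that do not exist yet (whether the new nodes join the running cluster automatically depends on the project's provisioning, so re-check the cluster size afterwards):

shell> vagrant up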

Using Ansible

If you are an Ansible user, you may find the following project interesting, too: https://github.com/nethalo/pxc8

The README file has information on how to deploy this environment. In this case, after the nodes are created (with vagrant up) you will need to manually bootstrap the first node and then start the mysqld service on the remaining two nodes. It takes some additional steps, but it lets you be more involved in the process and helps you understand other aspects of the operations side. A rough sketch of those manual steps is shown below.
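
Assuming the systemd unit names that ship with the Percona XtraDB Cluster 8.0 packages (check that project's README for the exact steps), the manual part looks roughly like this:

# On the first node only, bootstrap the cluster:
shell> sudo systemctl start mysql@bootstrap.service

# On each of the remaining nodes, start the regular service:
shell> sudo systemctl start mysql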

Summary

We have seen how to quickly deploy a three-node Percona XtraDB Cluster using Vagrant, how to access the nodes, how to execute MySQL commands, and how to run a sysbench script to generate load. Happy PXC testing!

