
We have a quickstart guide for installing Percona Distribution for MySQL Operator on minikube. Installing the minimal version works well, as described in the guide. After that, we have one HAProxy and one Percona XtraDB Cluster (PXC) node to work with.
Minikube provides a local Kubernetes. One can use it to try more advanced scenarios, such as the one described here.
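Bringing that minimal environment up starts with something like the following (a sketch; the exact CPU and memory flags depend on your machine, just make sure minikube has enough resources for MySQL):

$ minikube start --cpus=3 --memory=4g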
Following that guide, everything works well until we get to the part where we deploy a cluster with deploy/cr.yaml.
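That deployment step is the same command we will use again later in this post:

$ kubectl apply -f deploy/cr.yaml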
Even after that, things seemingly work:
$ kubectl get pods
NAME                                               READY   STATUS              RESTARTS   AGE
cluster1-haproxy-0                                 0/2     ContainerCreating   0          5s
cluster1-pxc-0                                     0/3     Init:0/1            0          5s
percona-xtradb-cluster-operator-77bfd8cdc5-rcqsp   1/1     Running             1          62s
That is, until the second PXC pod is created. That pod will be stuck in the Pending state forever.
$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 1/2     Running   0          93s
cluster1-pxc-0                                     3/3     Running   0          93s
cluster1-pxc-1                                     0/3     Pending   0          10s
percona-xtradb-cluster-operator-77bfd8cdc5-rcqsp   1/1     Running   1          2m30s
When checking the cluster1-pxc-1 pod with

$ kubectl describe pod cluster1-pxc-1

the reason becomes clear:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  66s (x2 over 66s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  63s                default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.
Anti-affinity rules are specified for the different pods in the cluster, which normally makes sense: one wants the different PXC instances in different failure domains, so we can have actual fault tolerance. I could have made this work by relaxing the anti-affinity rules in cr.yaml (see the sketch below), which would have been suitable for testing purposes, but I was wondering if there is a better way to have a more capable local k8s setup. Kind can give that, and it is an ideal playground for following the second guide with a full setup.
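For completeness, the relaxation would look roughly like this in deploy/cr.yaml; a sketch based on the operator's antiAffinityTopologyKey option, where "none" disables the constraint:

pxc:
  affinity:
    # the default "kubernetes.io/hostname" forces each PXC pod onto a
    # different node; "none" removes the anti-affinity constraint
    antiAffinityTopologyKey: "none"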
In this example, I am using macOS and Docker Desktop for Mac; kind can be installed via Homebrew.
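Installing it is a single command:

$ brew install kind

With kind installed, we can describe the desired cluster layout in a small config file.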
$ cat kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
This way, I have one control-plane node and three worker nodes (running kubelet). A redundant control plane is also supported, but it is not needed for this testing. With this configuration, the cluster can be created.
$ kind create cluster --name k8s-playground --config kind-config.yaml
Creating cluster "k8s-playground" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining worker nodes
Set kubectl context to "kind-k8s-playground"
You can now use your cluster with:

kubectl cluster-info --context kind-k8s-playground

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
Each node will be a Docker container.
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                       NAMES
6d404954433e   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                               k8s-playground-worker2
93a293dfc423   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   127.0.0.1:64922->6443/tcp   k8s-playground-control-plane
e531e10b0384   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                               k8s-playground-worker
383a89f6d9f8   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                               k8s-playground-worker3
From this point on, kubectl is configured, and we can follow the second guide for the Percona Distribution for MySQL Operator.
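That guide's install steps boil down to getting the operator sources and applying the operator bundle; a sketch, assuming the v1.9.0 release that was current at the time:

$ git clone -b v1.9.0 https://github.com/percona/percona-xtradb-cluster-operator
$ cd percona-xtradb-cluster-operator
$ kubectl apply -f deploy/bundle.yaml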
After that, we create the cluster and wait a while for it to come up.
$ kubectl apply -f deploy/cr.yaml
perconaxtradbcluster.pxc.percona.com/cluster1 created
$ kubectl get pods
NAME                                            READY   STATUS              RESTARTS   AGE
cluster1-haproxy-0                              0/2     ContainerCreating   0          4s
cluster1-pxc-0                                  0/3     Init:0/1            0          4s
percona-xtradb-cluster-operator-d99c748-d5nq6   1/1     Running             0          21s
After a few minutes, the cluster will be running as expected.
$ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                              2/2     Running   0          5m5s
cluster1-haproxy-1                              2/2     Running   0          3m20s
cluster1-haproxy-2                              2/2     Running   0          2m55s
cluster1-pxc-0                                  3/3     Running   0          5m5s
cluster1-pxc-1                                  3/3     Running   0          3m32s
cluster1-pxc-2                                  3/3     Running   0          119s
percona-xtradb-cluster-operator-d99c748-d5nq6   1/1     Running   0          5m22s
$ kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- mysql -h cluster1-haproxy -uroot -proot_password -e "show global status like 'wsrep_cluster_size'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
pod "percona-client" deleted
For that last check, I used the default root password from secret.yaml. If you changed it, use the new password instead.
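If you don't remember the password, it can be read back from the cluster's secret; a sketch, assuming the default secret name my-cluster-secrets from cr.yaml:

$ kubectl get secret my-cluster-secrets -o jsonpath='{.data.root}' | base64 --decode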
Kind works on macOS out of the box like this and is a simple solution. To try Percona software in local playgrounds (on Linux or in a Linux virtual machine), you can also check out anydbver, created and maintained by Nickolay Ihalainen.
At the end of the experiments, the kind cluster can be destroyed.
$ kind delete cluster --name k8s-playground
Deleting cluster "k8s-playground" ...