
Using Percona Kubernetes Operators With K3s Part 1: Installation

Recently, Peter provided an extensive tutorial on how to use Percona Kubernetes Operators on minikube: Exploring MySQL on Kubernetes with Minikube. Minikube is a great option for local deployment and for getting familiar with Kubernetes on a single developer's box.

But what if we want to get experience with setups that are closer to production Kubernetes and use multiple servers for deployments?

I think there is an alternative that is also easy to set up and to get started with: K3s. In fact, it is so easy that this first part will be very short; there is just one requirement we need to resolve, and it, too, is easy.

So let’s assume we have four servers we want to use for our Kubernetes deployments: one master and three worker nodes. In my case:

beast-node7-ubuntu
beast-node8-ubuntu (master)
beast-node5-ubuntu
beast-node6-ubuntu

For step one, on the master we execute:

curl -sfL https://get.k3s.io | sh -
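
If you want to sanity-check the result before moving on, the installer registers K3s as a systemd service and bundles kubectl, so (assuming the default systemd-based install) the following should show the master as Ready:

# Check that the k3s service came up
sudo systemctl status k3s

# K3s bundles kubectl; at this point only the master is in the cluster
sudo k3s kubectl get nodes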

For step two, we need to prepare a script that takes two parameters: the IP address of the master and the master's token. Finding the token is probably the most complicated part of this setup; it is stored on the master in the file /var/lib/rancher/k3s/server/node-token.
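
On the master, the token can be read directly from that file (root privileges are required):

sudo cat /var/lib/rancher/k3s/server/node-token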

Having these parameters, the script for the other nodes is:

k3s_url="https://10.30.2.34:6443"
k3s_token="K109a7b255d0cf88e75f9dcb6b944a74dbca7a949ebd7924ec3f6135eeadd6624e9::server:5bfa3b7e679b23c55c81c198cc282543"
curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh -
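
Rather than logging in to each worker by hand, the same snippet can be pushed out over SSH. A minimal sketch, assuming passwordless SSH and sudo on the workers (hostnames are the ones from my setup):

#!/bin/bash
# Join each worker to the cluster; the master (beast-node8-ubuntu) is excluded
k3s_url="https://10.30.2.34:6443"
k3s_token="K109a7b255d0cf88e75f9dcb6b944a74dbca7a949ebd7924ec3f6135eeadd6624e9::server:5bfa3b7e679b23c55c81c198cc282543"

for node in beast-node7-ubuntu beast-node5-ubuntu beast-node6-ubuntu; do
  ssh "${node}" "curl -sfL https://get.k3s.io | K3S_URL=${k3s_url} K3S_TOKEN=${k3s_token} sh -"
done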

After executing this script on the other nodes, we will have our Kubernetes cluster running:

kubectl get nodes
NAME                 STATUS   ROLES                  AGE    VERSION
beast-node7-ubuntu   Ready    <none>                 125m   v1.24.6+k3s1
beast-node8-ubuntu   Ready    control-plane,master   23h    v1.24.6+k3s1
beast-node5-ubuntu   Ready    <none>                 23h    v1.24.6+k3s1
beast-node6-ubuntu   Ready    <none>                 23h    v1.24.6+k3s1

This is sufficient for a basic Kubernetes setup, but for our Kubernetes Operators, we need an extra step: Dynamic Volume Provisioning, because our Operators request volumes to store data.

Actually, after further research, it turns out that the Operators can simply use the local-path provisioner that K3s installs by default, which satisfies their storage requirements, so no extra setup is needed.
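
You can verify this yourself: K3s ships with Rancher's local-path provisioner and marks it as the default StorageClass, which is what the Operators' PersistentVolumeClaims fall back to when no storage class is specified explicitly.

# The default StorageClass should be backed by rancher.io/local-path
kubectl get storageclass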

After this, the K3s cluster should be ready to deploy our Operators, which we will review in the next part of this series. Stay tuned!

