
In this blog post, I’d like to share some experiences in setting up a Vitess environment for local tests and development on OSX/macOS. I previously presented How To Test and Deploy Kubernetes Operator for MySQL(PXC) in OSX/macOS; this time, I will show how to run Vitess on Kubernetes.
Since running Kubernetes on a laptop is only experimental, I ran into several issues while following the straightforward installation steps and had to apply a few workarounds to the environment. This setup involves only minimal customization.
For a high-level overview of Vitess, please visit Part I of this series, Introduction to Vitess on Kubernetes for MySQL.
Housekeeping items needed during installation:
- Use and update Homebrew
- Install minikube
- Install etcd operator
- Install helm
- Install mysql-client
- Install go 1.12+
- Install Vitess Client
Installation and Configuration
Minikube Installation
One of the main challenges I faced was that the latest Kubernetes version wasn’t compatible with the existing Vitess deployment files. The issue is filed here in GitHub, so we start with an earlier version rather than the default.
$ minikube start -p vitess --memory=4096 --kubernetes-version=1.15.2
😄  [vitess] minikube v1.5.2 on Darwin 10.14.6
✨  Automatically selected the 'virtualbox' driver
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.2 on Docker '18.09.9' ...
💾  Downloading kubelet v1.15.2
💾  Downloading kubeadm v1.15.2
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver
🏄  Done! kubectl is now configured to use "vitess"
E1127 15:56:46.076308   30453 start.go:389] kubectl info: exec: exit status 1
Verify that minikube is initialized and running.
$ kubectl -n kube-system get pods
NAME                               READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-2zwsf           1/1     Running   0          1m
coredns-5c98db65d4-qmslc           1/1     Running   0          1m
etcd-minikube                      1/1     Running   0          34s
kube-addon-manager-minikube        1/1     Running   0          43s
kube-apiserver-minikube            1/1     Running   0          41s
kube-controller-manager-minikube   1/1     Running   0          28s
kube-proxy-wrc5k                   1/1     Running   0          1m
kube-scheduler-minikube            1/1     Running   0          45s
storage-provisioner                1/1     Running   0          1m
Installation of etcd Operator
The next item on the list is to get the etcd operator running. At this point, we still need to clone the etcd-operator repository to a local directory to have access to its files.
$ git clone https://github.com/coreos/etcd-operator.git
The issue reported here describes a workaround of replacing the deployment.yaml file. Once that’s done, we can proceed with the installation.
Under /Users/[username]/Kubernetes/etcd-operator, run:
$ ./example/rbac/create_role.sh
Creating role with ROLE_NAME=etcd-operator, NAMESPACE=default
clusterrole "etcd-operator" created
Creating role binding with ROLE_NAME=etcd-operator, ROLE_BINDING_NAME=etcd-operator, NAMESPACE=default
clusterrolebinding "etcd-operator" created

$ kubectl create -f example/deployment.yaml
deployment "etcd-operator" created

$ kubectl get customresourcedefinitions
NAME                                    KIND
etcdclusters.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
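Before moving on, it’s worth a quick sanity check that the operator actually came up; the deployment name comes from the output above:

$ kubectl get deployment etcd-operator
$ kubectl get pods | grep etcd-operator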
If the above steps don’t work, you may alternatively install etcd-operator via Helm.
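For reference, with Helm 2 and the stable charts repository of the time, the equivalent would be roughly the following (a sketch, not verified against this exact setup):

$ helm install stable/etcd-operator --name etcd-operator

Installed this way, the chart registers the backup and restore CRDs alongside the cluster CRD: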
$ kubectl get customresourcedefinitions
NAME                                    KIND
etcdbackups.etcd.database.coreos.com    CustomResourceDefinition.v1beta1.apiextensions.k8s.io
etcdclusters.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
etcdrestores.etcd.database.coreos.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
Installation of helm
$ brew install helm
Another issue I faced with Helm is that it cannot find Tiller. Running helm init installs the server-side component:
$ helm init
$HELM_HOME has been configured at /Users/askdba/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
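If Helm keeps reporting that it cannot find Tiller even after helm init, it helps to confirm that the Tiller pod actually started; for example:

$ kubectl -n kube-system get pods | grep tiller
$ helm version

Once Tiller is reachable, helm version reports both the client and the server version.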
Installation of Vitess Client
Install the Vitess client using go:
$ go get vitess.io/vitess/go/cmd/vtctlclient
go: finding vitess.io/vitess v2.1.1+incompatible
go: downloading vitess.io/vitess v2.1.1+incompatible
go: extracting vitess.io/vitess v2.1.1+incompatible
go: downloading golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: downloading github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: finding github.com/youtube/vitess v2.1.1+incompatible
go: extracting github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: downloading github.com/youtube/vitess v2.1.1+incompatible
go: extracting golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: extracting github.com/youtube/vitess v2.1.1+incompatible
go: downloading github.com/golang/protobuf v1.3.2
go: downloading google.golang.org/grpc v1.24.0
go: extracting github.com/golang/protobuf v1.3.2
go: extracting google.golang.org/grpc v1.24.0
go: downloading google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: downloading golang.org/x/text v0.3.2
go: downloading golang.org/x/sys v0.0.0-20191028164358-195ce5e7f934
go: extracting golang.org/x/sys v0.0.0-20191028164358-195ce5e7f934
go: extracting golang.org/x/text v0.3.2
go: extracting google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: finding github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
go: finding golang.org/x/net v0.0.0-20191028085509-fe3aa8a45271
go: finding github.com/golang/protobuf v1.3.2
go: finding google.golang.org/grpc v1.24.0
go: finding google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
go: finding golang.org/x/text v0.3.2
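Note that go get places the binary under $GOPATH/bin, which is not necessarily on the PATH. Assuming the default GOPATH, the following makes vtctlclient callable from anywhere:

$ export PATH=$PATH:$(go env GOPATH)/bin
$ vtctlclient --help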
Configuration of Vitess Cluster
Now we’re ready to launch our Vitess test cluster, which includes the sample database schema. In the next post, we will build on this by creating a single keyspace and sharding it across instances using Vitess.
$ helm install ../../helm/vitess -f 101_initial_cluster.yaml
NAME:   steely-owl
LAST DEPLOYED: Wed Nov 27 16:46:00 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME       AGE
vitess-cm  2s

==> v1/Job
NAME                                AGE
commerce-apply-schema-initial       2s
commerce-apply-vschema-initial      2s
zone1-commerce-0-init-shard-master  2s

==> v1/Pod(related)
NAME                                      AGE
commerce-apply-schema-initial-fzm66       2s
commerce-apply-vschema-initial-j9hzb      2s
vtctld-757df48d4-vbv5z                    2s
vtgate-zone1-5cb4fcddcb-fx8xd             2s
zone1-commerce-0-init-shard-master-zd7vs  2s
zone1-commerce-0-rdonly-0                 2s
zone1-commerce-0-replica-0                1s
zone1-commerce-0-replica-1                1s

==> v1/Service
NAME          AGE
vtctld        2s
vtgate-zone1  2s
vttablet      2s

==> v1beta1/Deployment
NAME          AGE
vtctld        2s
vtgate-zone1  2s

==> v1beta1/PodDisruptionBudget
NAME                      AGE
vtgate-zone1              2s
zone1-commerce-0-rdonly   2s
zone1-commerce-0-replica  2s

==> v1beta1/StatefulSet
NAME                      AGE
zone1-commerce-0-rdonly   2s
zone1-commerce-0-replica  2s

==> v1beta2/EtcdCluster
NAME         AGE
etcd-global  2s
etcd-zone1   2s

NOTES:
Release name: steely-owl

To access administrative web pages, start a proxy with:
  kubectl proxy --port=8001

Then use the following URLs:

vtctld: http://localhost:8001/api/v1/namespaces/default/services/vtctld:web/proxy/app/
vtgate: http://localhost:8001/api/v1/namespaces/default/services/vtgate-zone1:web/proxy/

$ kubectl describe service vtgate-zone1
Name:              vtgate-zone1
Namespace:         default
Labels:            app=vitess
                   cell=zone1
                   component=vtgate
Selector:          app=vitess,cell=zone1,component=vtgate
Type:              NodePort
IP:                10.109.14.82
Port:              web  15001/TCP
NodePort:          web  30433/TCP
Endpoints:         172.17.0.7:15001
Port:              grpc  15991/TCP
NodePort:          grpc  30772/TCP
Endpoints:         172.17.0.7:15991
Port:              mysql  3306/TCP
NodePort:          mysql  31090/TCP
Endpoints:         172.17.0.7:3306
Session Affinity:  None
No events.
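The tablet pods take a while to initialize, so before attempting any connections it’s worth watching the pods until everything reports Running:

$ kubectl get pods --watch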
Here we face another issue: even though our cluster is up and running, we aren’t able to access this environment from the laptop. Under /Users/askdba/go/src/vitess.io/vitess/examples/helm there is a script called kmysql.sh that figures out the hostname and port number for this cluster, but it fails due to the above-mentioned issue.
This script returns an error as follows:
$ ./kmysql.sh
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
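For context, the script has to resolve a host and port for the vtgate service before invoking the client; a hand-rolled equivalent using minikube’s NodePort would look roughly like this (a hypothetical sketch based on the vitess profile and service names shown above):

$ HOST=$(minikube ip -p vitess)
$ PORT=$(kubectl get service vtgate-zone1 -o jsonpath='{.spec.ports[?(@.name=="mysql")].nodePort}')
$ mysql -h "$HOST" -P "$PORT"

On this setup, however, the connection attempt runs into the same access problem.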
To fix this, we’ll create a pod inside the minikube cluster from which we can access the Vitess services:
$ kubectl run -i --rm --tty percona-client --image=percona:5.7 --restart=Never -- bash -il
Waiting for pod default/percona-client to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
This gives us the required access to the cluster.
$ mysql -h vtgate-zone1 -P 3306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.10-Vitess Percona Server (GPL), Release 23, Revision 500fcf5

Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.7.26-29, for Linux (x86_64) using  6.2

Connection id:          1
Current database:       commerce
Current user:           vt_app@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.5.10-Vitess Percona Server (GPL), Release 23, Revision 500fcf5
Protocol version:       10
Connection:             vtgate-zone1 via TCP/IP
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
--------------

mysql> show tables;
+--------------------+
| Tables_in_commerce |
+--------------------+
| corder             |
| customer           |
| product            |
+--------------------+
3 rows in set (0.01 sec)
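A nice side effect of connecting through vtgate is that tablet types can be targeted directly from the session. As a small example (the @replica targeting syntax is standard Vitess; the query itself is only illustrative):

mysql> USE `commerce@replica`;
mysql> SELECT COUNT(*) FROM customer;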
Summary of issues:
- Kubernetes version compatibility (documented)
- etcd-operator “unable to recognize deployment.yaml” error (documented)
- Helm says Tiller is installed yet cannot find Tiller (documented)
- Connecting to the minikube cluster from the local laptop (documented)
- Running the vtctld console on OSX fails due to xdg-console incompatibility (not resolved)
- Accessing the local file system via the Kubernetes operator
Part III of this series will be published shortly, so please stay tuned. Read Part I of this series: Introduction to Vitess on Kubernetes for MySQL – Part I
Credits
- Daniel Guzman Burgos – Technical Lead (MySQL)
- Mykola Marzhan – Director of Server Engineering
- Sergey Kuzmichev – Support Engineer