Set Up a Kubernetes Cluster with Ansible
Although all large cloud providers nowadays offer managed Kubernetes clusters, I prefer to have access to a local cluster, especially during development.
In this post, we will set up a Kubernetes cluster using Ansible and kubeadm. The cluster will consist of a single master node and two (or more) worker nodes.
Most of the work done here is based on a tutorial by bsder1.
I will use three Ubuntu 18.04 LTS (Bionic Beaver) servers, each with 4 GB RAM and 2 CPUs; you should also be fine with 1 GB RAM.
All servers have been updated to the latest packages, and an SSH key for access has been deployed.
All playbooks are available in the GitHub repository2 and should be cloned first.
The inventory file contains the nodes' hostnames and should match your servers. You can also add further nodes if you want to build a larger cluster.
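For illustration, a minimal inventory could look like the sketch below. The group names and hostnames here are assumptions — the actual group names are dictated by the playbooks in the repository, so adjust accordingly:

```ini
; Hypothetical inventory layout -- hostnames and group names must
; match your servers and the repository's playbooks.
[master]
kubernetes

[nodes]
kubernetes-node-1
kubernetes-node-2
```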
A good test to see if the inventory is configured correctly is:
ansible all -i inventory -m ping
This should return a pong from each node.
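If everything is wired up correctly, Ansible reports a SUCCESS result per host, roughly like this (hostnames will match your inventory):

```
kubernetes | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
kubernetes-node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
kubernetes-node-2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```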
Install required software
Run the kube-install-software playbook to perform the required steps:
ansible-playbook -i inventory kube-install-software.yml -K
...
PLAY RECAP *********************************************************************
kubernetes                 : ok=5    changed=3    unreachable=0    failed=0
kubernetes-node-1          : ok=5    changed=3    unreachable=0    failed=0
kubernetes-node-2          : ok=5    changed=3    unreachable=0    failed=0
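For reference, the core of such a playbook typically boils down to tasks like the following sketch. This is an illustration of the usual kubeadm prerequisites, not the literal contents of the repository's playbook:

```yaml
# Sketch of typical kubeadm preparation steps (assumption, not the
# exact playbook from the repository).
- hosts: all
  become: yes
  tasks:
    - name: Disable swap (kubelet refuses to start with swap enabled)
      command: swapoff -a

    - name: Install container runtime and Kubernetes packages
      apt:
        name:
          - docker.io
          - kubelet
          - kubeadm
          - kubectl
        state: present
        update_cache: yes
```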
Now that all nodes fulfill the basic requirements, we are ready to set up the cluster. We will use kubeadm here and rely on flannel6 as the network fabric.
All steps described below are also included in the kube-setup-cluster playbook:
ansible-playbook -i inventory kube-setup-cluster.yml -K
It’s surprisingly simple to create a Kubernetes cluster with kubeadm:
kubeadm init --pod-network-cidr=10.244.0.0/16
Kubeadm uses the network 10.96.0.0/12 by default, but flannel expects 10.244.0.0/16, which we need to pass as an option.
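Once the cluster is up, you can check which pod CIDR was actually assigned to each node; podCIDR is a standard field in the node spec:

```shell
# List each node together with its assigned pod CIDR.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```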
The command output provides a very detailed explanation of what we need to do to get access to our cluster:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.9.88:6443 --token se28o1.ljmh27sev5umdz9x --discovery-token-ca-cert-hash sha256:6c9f49f5fed776e19aabe2b3f8f938c15f3ddb30519d63acded61cd4397e8f85
As we picked flannel, we can install it with this command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
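You can verify that the flannel daemonset rolled out on every node. The daemonset name kube-flannel-ds matches the pod names shown in the output further below, though it may differ in other versions of the manifest:

```shell
# Show the flannel daemonset; DESIRED and READY should equal the node count.
kubectl -n kube-system get daemonset kube-flannel-ds
```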
Finally, we can add the nodes to the cluster by running the kubeadm join command on each node:
kubeadm join 10.0.9.88:6443 --token se28o1.ljmh27sev5umdz9x --discovery-token-ca-cert-hash sha256:6c9f49f5fed776e19aabe2b3f8f938c15f3ddb30519d63acded61cd4397e8f85
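The join token printed by kubeadm init expires after a while (24 hours by default). If you add a node later, you can generate a fresh join command on the master node:

```shell
# Create a new token and print the full matching join command.
kubeadm token create --print-join-command
```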
Now we can SSH into our Kubernetes master node, become root, and check the status of our cluster:
root@kubernetes:~# kubectl get nodes
NAME                STATUS     ROLES    AGE     VERSION
kubernetes          NotReady   master   10m     v1.13.1
kubernetes-node-1   Ready      <none>   9m31s   v1.13.1
kubernetes-node-2   Ready      <none>   9m27s   v1.13.1
After some time the nodes should become ready:
root@kubernetes:~# kubectl get nodes
NAME                STATUS   ROLES    AGE    VERSION
kubernetes          Ready    master   27m    v1.13.1
kubernetes-node-1   Ready    <none>   27s    v1.13.1
kubernetes-node-2   Ready    <none>   5m9s   v1.13.1
The pods should look similar to this:
root@kubernetes:~# kubectl get pods --all-namespaces -owide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8zwph             1/1     Running   0          9m9s    10.244.1.3   kubernetes-node-2   <none>           <none>
kube-system   coredns-86c58d9df4-tr9rh             1/1     Running   0          9m8s    10.244.1.2   kubernetes-node-2   <none>           <none>
kube-system   etcd-kubernetes                      1/1     Running   1          26m     10.0.9.88    kubernetes          <none>           <none>
kube-system   kube-apiserver-kubernetes            1/1     Running   1          26m     10.0.9.88    kubernetes          <none>           <none>
kube-system   kube-controller-manager-kubernetes   1/1     Running   1          26m     10.0.9.88    kubernetes          <none>           <none>
kube-system   kube-flannel-ds-42cht                1/1     Running   0          2m19s   10.0.9.90    kubernetes-node-2   <none>           <none>
kube-system   kube-flannel-ds-gb2rm                1/1     Running   0          2m19s   10.0.9.88    kubernetes          <none>           <none>
kube-system   kube-flannel-ds-mzgxm                1/1     Running   0          50s     10.0.9.89    kubernetes-node-1   <none>           <none>
kube-system   kube-proxy-8rx9z                     1/1     Running   0          5m32s   10.0.9.90    kubernetes-node-2   <none>           <none>
kube-system   kube-proxy-hrz6p                     1/1     Running   0          50s     10.0.9.89    kubernetes-node-1   <none>           <none>
kube-system   kube-proxy-qb5pc                     1/1     Running   1          27m     10.0.9.88    kubernetes          <none>           <none>
kube-system   kube-scheduler-kubernetes            1/1     Running   1          26m     10.0.9.88    kubernetes          <none>           <none>
Install k8s-self-hosted-recovery (optional, recommended)
Since version 1.8 of kubeadm, a new limitation for self-hosted Kubernetes clusters was introduced: such clusters no longer recover from a reboot without manual intervention7.
On GitHub, a small script can be found8 that creates a service performing all required steps, so your cluster survives reboots.
The service is included in the kube-self-hosted-recovery playbook:
ansible-playbook -i inventory kube-self-hosted-recovery.yml -K
Here we go: you should now have a Kubernetes cluster with two worker nodes, deployed and ready for further exploration.
I do not recommend this installation for a public system, as it does not contain any security-related hardening. Please consult the official documentation if you want to use it in production.