Although all large cloud providers nowadays offer managed Kubernetes clusters, I prefer to have access to a local cluster, especially during development.
In this post, we will set up a Kubernetes cluster using Ansible and kubeadm. The cluster will consist of a single master node and two (or more) worker nodes.
Most of the work done here is based on a tutorial by bsder.
I will use three Ubuntu 18.04 LTS (Bionic Beaver) servers, each with 4 GB of RAM and 2 CPUs; 1 GB of RAM should also be fine.
All servers have been updated to the latest packages, and an SSH key for access has been deployed.
All playbooks are available in the GitHub repository and should be cloned first.
The inventory file contains the nodes’ hostnames and should match your servers. You can also add more nodes if you want to build a larger cluster.
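A minimal inventory could look like the sketch below. The hostnames and group names here are hypothetical placeholders; check the repository's inventory file for the exact group names its playbooks expect, and substitute your own hosts:

```ini
# Hypothetical example inventory - replace hostnames with your own servers.
[master]
k8s-master

[nodes]
k8s-node-1
k8s-node-2
```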
A good test to see if the inventory is configured correctly is an Ansible ping. Each node should return a pong.
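The check can be run like this, assuming the inventory file sits in the current working directory:

```shell
# Ping every host listed in the inventory; each should answer "pong".
ansible all -i inventory -m ping
```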
Install required software
All nodes need a basic set of software, namely Docker, kubelet, and kubeadm. Docker is available in the official Ubuntu repositories, but for Kubernetes we need to add the Kubernetes package repository first. Run the kube-install-software playbook to perform the required steps:
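The playbook run looks roughly like this; the `.yml` extension and the inventory path are assumptions, so check the repository for the exact file name:

```shell
# Install Docker, kubelet, and kubeadm on all nodes via the playbook.
ansible-playbook -i inventory kube-install-software.yml
```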
Now that all nodes fulfill the basic requirements, we are ready to set up the cluster. We will use
kubeadm here and rely on
flannel as the network fabric.
All steps described below are also included in a playbook in the GitHub repository.
It’s surprisingly simple to create a Kubernetes cluster with kubeadm. Kubeadm uses the network 10.96.0.0/12 by default, but flannel expects 10.244.0.0/16, which we need to pass as an option.
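On the master node, the init step with the pod network option is a single command (`--pod-network-cidr` is the standard kubeadm flag; the exact output varies by kubeadm version):

```shell
# Initialize the control plane with the pod network CIDR flannel expects.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```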
The command prints a very detailed explanation of what we need to do to get access to the cluster:
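For a regular user, the instructions that kubeadm prints boil down to copying the admin kubeconfig into place:

```shell
# Make the cluster's admin credentials available to kubectl for this user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```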
As we picked flannel, we can install it with this command:
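Applying the flannel manifest typically looks like this; the URL below is the upstream manifest location from the coreos/flannel repository at the time this tutorial was written and may have moved since:

```shell
# Deploy flannel as the pod network (manifest URL may change over time).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```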
Finally we can add the nodes to the cluster by running the
kubeadm join command on each node:
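The exact join command, including a freshly generated token, is printed at the end of `kubeadm init`. Its general shape is shown below with placeholders, not real values:

```shell
# Run on each worker node; substitute the values printed by kubeadm init.
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```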
Now we can SSH into our Kubernetes master node, become root, and check the status of our cluster:
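A sketch of that check, assuming root uses the admin kubeconfig directly (the `<master-host>` placeholder stands for your master's hostname or IP):

```shell
ssh <master-host>
sudo -i
# Point kubectl at the cluster's admin credentials, then list the nodes.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```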
After some time, the nodes should become ready:
The pods should look similar to this:
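The pod overview comes from listing pods across all namespaces, where the flannel and kube-system pods should all reach the Running state:

```shell
# Show the system pods (kube-apiserver, flannel, coredns/kube-dns, ...).
kubectl get pods --all-namespaces
```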
Install k8s-self-hosted-recovery (optional, recommended)
Since version 1.8 of kubeadm, a new limitation for self-hosted Kubernetes clusters was introduced: those clusters no longer recover from a reboot without manual intervention.
On GitHub, a small script can be found that creates a service performing all required steps, so your cluster survives reboots.
The service is included in the playbooks of this repository.
Here we go: you should now have a Kubernetes cluster with two worker nodes, deployed and ready for further exploration.
I do not recommend this installation for a public system, as it does not contain any security-related hardening. Please consult the official documentation if you want to use Kubernetes in production.