This example focuses on Fluentd1 as the log collector and Logz.io2 as the backend.
Fluentd is an open-source collector for log files and a good choice when deployed into Kubernetes. We will create a DaemonSet3 and use the fluentd-kubernetes-daemonset4 Docker image. A DaemonSet ensures that the configured pods run on each node in the cluster and that new nodes are provisioned automatically.
The image is available in multiple tags that provide support for different backends (e.g. Elasticsearch, CloudWatch or Stackdriver).
As a backend we will use Logz.io, which is built on the ELK stack and offers a free Community plan with 3 GB of data daily.
All required Kubernetes files are available in the GitHub repository5 and should be cloned first.
To separate logging from the existing namespaces, we first create a new namespace, kube-logging.
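The namespace can be created directly with kubectl; a minimal sketch (requires a configured cluster and kubectl context):

```shell
# Create a dedicated namespace for the logging components
kubectl create namespace kube-logging

# Verify that the namespace exists
kubectl get namespaces
```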
Logz.io requires a token and a type as authentication parameters. You can find both values in your settings after a successful registration.
Both values must be base64-encoded and added to fluentd-secret.yml.
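The encoding can be done with the base64 CLI. The values below are placeholders, not real Logz.io credentials:

```shell
# printf avoids the trailing newline that echo would add,
# which would otherwise corrupt the encoded value
printf '%s' 'example-token' | base64   # ZXhhbXBsZS10b2tlbg==
printf '%s' 'example-type' | base64
```

The resulting strings go into the `data` section of fluentd-secret.yml.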
Now we can create the secret in our cluster:
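A sketch of the command, assuming the manifest does not already pin a namespace in its metadata:

```shell
# Apply the secret manifest into the logging namespace
kubectl apply -f fluentd-secret.yml -n kube-logging

# Confirm that the secret was created
kubectl get secrets -n kube-logging
```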
DaemonSet, ServiceAccount and ClusterRole
The last step is to create a ServiceAccount used by the fluentd pods and a matching ClusterRole.
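A minimal sketch of what such a manifest typically looks like; names and rules are illustrative, the actual files are in the repository:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  # fluentd enriches log records with pod and namespace metadata,
  # so it needs read access to these resources
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
```

The ClusterRoleBinding connects the ServiceAccount to the ClusterRole so the fluentd pods can read metadata cluster-wide.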
You can now verify the process of the DaemonSet deployment:
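A sketch of the verification commands; the resource name `fluentd` is an assumption, check it with `kubectl get ds -n kube-logging`:

```shell
# Watch the rollout of the DaemonSet across the nodes
kubectl rollout status daemonset/fluentd -n kube-logging

# Desired and ready pod counts should match the number of nodes
kubectl get daemonset -n kube-logging
```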
And get the Pods:
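For example:

```shell
# One fluentd pod per node is expected for a DaemonSet;
# -o wide also shows which node each pod runs on
kubectl get pods -n kube-logging -o wide
```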
Inside the logs of fluentd you should see the parsed log files:
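A sketch for tailing the logs; the label selector `app=fluentd` is an assumption and may differ in the actual manifests:

```shell
# Pick the first fluentd pod and follow its log output
POD=$(kubectl get pods -n kube-logging -l app=fluentd \
  -o jsonpath='{.items[0].metadata.name}')
kubectl logs -f "$POD" -n kube-logging
```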
If everything works, the first logs should appear in the Kibana Dashboard6 within seconds.