Managing several Raspberry Pi can be a lot of work. This article will teach you how Kubernetes and Docker can help.
Foreword
I own four Raspberry Pi, and I got interested in Kubernetes when I grew tired of managing them by hand and keeping track of what was installed and running on which machine.
Using Docker containers lets me make sure my applications are packaged with their dependencies. Kubernetes is a platform that manages containers across a set of hosts: it allocates the containers on the available Raspberry Pi. I can pull one of them out to do something else, and when I’m done, I add it back into the cluster.
Install Kubernetes on Raspberry Pi
For this step, I won’t reinvent the wheel. A tutorial already exists for installing Kubernetes on the Raspberry Pi; the gist is updated on a regular basis to keep up with any breaking changes.
This can be tedious because you have to repeat some steps (burning the SD cards, installing Docker, etc.) for each Raspberry Pi. It took me around an hour for four machines.
Read carefully: some steps are to be performed on the master node only, and others on all the nodes (or all the slave nodes).
The only thing that didn’t work exactly as described in the tutorial was getting 3/3 Running on the kube-dns pods. They only showed 3/3 Running after I launched the following command:
$ kubectl apply -f \
"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
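If you want to keep an eye on those pods while you wait, a command along these lines should show their status (the kube-system namespace and the k8s-app=kube-dns label are the usual defaults, but check your own setup):
# Watch the DNS pods until the READY column shows 3/3
$ kubectl get pods --namespace kube-system -l k8s-app=kube-dns --watch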
Take your time to get it working. I’ll wait for you.
If you don’t have a cluster of Raspberry Pi available but you still want to try Kubernetes on one machine, you can use Minikube.
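For reference, and assuming Minikube and kubectl are already installed on your machine, spinning up a local single-node cluster is roughly:
# Start a local single-node Kubernetes cluster
$ minikube start
# Check that the node is up and Ready
$ kubectl get nodes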
Hello world example
Maybe you didn’t notice, but if you followed the tutorial to install Kubernetes on Raspbian, you already ran an elaborate Hello world example when you executed kubectl create -f function.yml.
Take a look at the function.yml file. It describes two objects. You created a service that maps port 8080 of your pod to port 31118 of your cluster. You also created a deployment of one pod from the Docker image functions/markdownrender:latest-armhf, exposing an API on port 8080. Your API is now available on port 31118 from outside the cluster.
You interact with Kubernetes through the kubectl command. Here, you read a configuration file and create the objects it describes.
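I won’t copy the tutorial’s exact file here, but a minimal sketch of such a service-plus-deployment pair could look like the one below, written as a heredoc so it can be applied in one go. The markdownrender names and labels are illustrative choices of mine, and the apps/v1 apiVersion assumes a reasonably recent cluster; only the image and the two ports come from the example above.
$ cat <<EOF | kubectl apply -f -
# Service: exposes port 8080 of the pod on port 31118 of every node
apiVersion: v1
kind: Service
metadata:
  name: markdownrender
spec:
  type: NodePort
  selector:
    app: markdownrender
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31118
---
# Deployment: runs one pod from the markdownrender image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: markdownrender
spec:
  replicas: 1
  selector:
    matchLabels:
      app: markdownrender
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
        - name: markdownrender
          image: functions/markdownrender:latest-armhf
          ports:
            - containerPort: 8080
EOF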
Pods are the base unit of Kubernetes. A pod can contain containers and volumes. All containers within a pod share the same port space and IP address (they can find each other on localhost).
Deployments are a way to manage multiple pods together (for example, replicas of the same pod). They are the preferred way of managing the state of the cluster (e.g. “I want 3 replicas of this pod from that image”).
Services are the network interfaces of the pods. They map the ports of the pods to external ports, and they also act as load balancers when you use replicas. There are a lot of other objects corresponding to different functionalities (jobs, configuration, replication, etc.).
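If you want to poke at these objects on your own cluster, the standard kubectl verbs list and inspect them; the markdownrender name below is just a placeholder for whatever your service and deployment are called.
$ kubectl get pods                                       # the running pods
$ kubectl get deployments                                # the deployments managing them
$ kubectl get services                                   # the services exposing them
$ kubectl describe service markdownrender                # ports, selector and endpoints of one service
$ kubectl scale deployment markdownrender --replicas=3   # ask the deployment for 3 replicas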
Use cases
Home server
I use my cluster as a home server to host applications. For example, it serves as a backend for my humidity and temperature monitoring sensors, set up after water damage in my apartment. I log the data in InfluxDB and plot it with Grafana. Kubernetes answers my problem because:
- InfluxDB and Grafana each run in a pod with the default settings from the default Docker images. I had nothing to configure to set them up (except using InfluxDB as the data source in the Grafana GUI).
- I can use my NAS as an NFS volume that I mount on my pods, so I don’t run the risk of wiping the data through an accidental SD card wipe (see the sketch after this list).
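As a rough sketch of that NAS setup (the server address, export path and image tag below are placeholders, and on a Raspberry Pi you may need an ARM-specific InfluxDB image), a pod can mount an NFS share directly:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: influxdb
spec:
  containers:
    - name: influxdb
      image: influxdb:1.8              # placeholder tag; pick an image built for your architecture
      volumeMounts:
        - name: influx-data
          mountPath: /var/lib/influxdb # default InfluxDB 1.x data directory
  volumes:
    - name: influx-data
      nfs:
        server: 192.168.1.10           # placeholder: the NAS address
        path: /volume1/influxdb        # placeholder: the exported share
EOF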
I have also deployed some applications, like the burndown chart app I use to track my goals. Before that, I was using Heroku on the free tier, but the application was slow to start and it was public. Now:
- I can run the app privately without having to implement an auth system.
- Again, my data is on my NAS and I don’t run the risk of losing it.
Experiments with distributed systems
My other use case for the cluster is experimenting with distributed systems. What happens if I launch two MySQL pods on the same data volume? How many messages per second can I push through RabbitMQ? Is Consul easy to set up? How fast is eventual consistency in Cassandra?
You can deploy an image in a few lines of configuration and set up your experiments.
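For example, standing up a throwaway RabbitMQ for a throughput test can be done entirely from the command line with a recent kubectl; the name and image tag below are just what I would pick, not something from the tutorial:
# Create a deployment running one RabbitMQ pod
$ kubectl create deployment rabbitmq --image=rabbitmq:3
# Expose it as a service reachable from the nodes
$ kubectl expose deployment rabbitmq --port=5672 --type=NodePort
# Clean everything up once the experiment is over
$ kubectl delete deployment,service rabbitmq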
Overall, Kubernetes answers my needs: I can host my applications without having to manage individual machines. There are also some bonuses:
- Kubernetes automatically restarts my pods in the right state after a power failure.
- It detects when I unplug a Raspberry Pi from the cluster and moves its pods to healthy nodes (see below for doing the same thing on purpose).
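When I pull a Pi out on purpose rather than just unplugging it, kubectl can do it cleanly; the node name below is whatever shows up in kubectl get nodes:
# Evict the pods from the node and mark it unschedulable
$ kubectl drain raspberrypi-3 --ignore-daemonsets
# Later, when the Pi is plugged back in, make it schedulable again
$ kubectl uncordon raspberrypi-3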
It’s a bit over-engineered for my needs, but even though it’s designed to run a huge number of pods across a huge number of hosts, it works really well on my small setup.
Thanks to Flavian Hautbois, Alexandre Sapet, and Vincent Quagliaro.
If you are looking for Data Engineering experts, don't hesitate to contact us!