Adaptive Kind

Kubernetes Raspberry Pi cluster with k3s

Published on May 25, 2024 by Ian Homer

I had a couple of Raspberry Pis hanging around in my office, and having been using k3s recently on my local laptop, I thought I'd spin up a Kubernetes cluster on the Pis with k3s. Doing this has been on my mind for a while as a way to build a deeper understanding of Kubernetes.

Installing the master node

One of my Pis will be the master node, which I've called home. First we need to edit /boot/cmdline.txt and add cgroup_enable=memory, since k3s needs cgroups to start the systemd service. I didn't set cgroup_memory=1 as mentioned in the docs, because I read that it is redundant now, and it all worked fine without it. After the edit, the first line of this file will read something like:

console=tty1 root=PARTUUID=00000000-02
rootfstype=ext4 fsck.repair=yes
rootwait cgroup_enable=memory
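If you'd rather script the edit, here's a minimal sketch of an idempotent append. Since cmdline.txt must stay a single line, the flag goes on the end of line one rather than on a new line. The sketch runs against a throwaway copy; on the Pi itself, CMDLINE would be /boot/cmdline.txt and the sed would run under sudo.

```shell
# Sketch: append cgroup_enable=memory to a cmdline-style file, only if missing.
# A temporary copy stands in for /boot/cmdline.txt here; on the Pi, point
# CMDLINE at the real file and run the sed with sudo.
CMDLINE=$(mktemp)
echo 'console=tty1 root=PARTUUID=00000000-02 rootfstype=ext4 fsck.repair=yes rootwait' > "$CMDLINE"
grep -q cgroup_enable=memory "$CMDLINE" ||
  sed -i '1s/$/ cgroup_enable=memory/' "$CMDLINE"
cat "$CMDLINE"
```

Running it a second time changes nothing, which is the point of the grep guard.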

Reboot the Pi for this to take effect:

sudo reboot now

Then install k3s:

curl -sfL https://get.k3s.io | sh -

And check that the node is up.

$ sudo k3s kubectl get node
NAME   STATUS   ROLES                  AGE   VERSION
home   Ready    control-plane,master   2m    v1.29.4+k3s1

View the token for the master node in the node-token file. We'll need this to register another node.

sudo cat /var/lib/rancher/k3s/server/node-token

Installing an additional node

I installed another Pi afresh to act as another node in the cluster. I wrote an OS image to an SD card with the Raspberry Pi Imager, selecting the lightweight Raspberry Pi OS Lite (64-bit) and customising the install with an admin username and password, and enabling the SSH service so that I could quickly SSH onto the server once it started up. I didn't set up Wi-Fi on this node (or the master node): both Pis are on a local wired network, since I wanted the cluster to have a faster network between the nodes. I do have wireless access to that network from my laptop, so I can administer the nodes and access the cluster remotely.

With the SD card in the Pi, I turned the Raspberry Pi on, logged in and was ready to provision the node.

Again, cgroup_enable=memory needs to be added to cmdline.txt. This is a Pi 5 and the file is now located at /boot/firmware/cmdline.txt.

I installed k3s on this node and connected it to the master node by setting the K3S_URL and K3S_TOKEN environment variables, as documented in the k3s environment variables reference.

# K3S_TOKEN is the token read from the master node earlier
curl -sfL https://get.k3s.io |
  K3S_URL=https://192.168.178.72:6443 \
  K3S_TOKEN=K___::server:0___ \
  sh -

Running kubectl from the master node, we can see the new node is up and running.

$ sudo k3s kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
home          Ready    control-plane,master   62m   v1.29.4+k3s1
raspberrypi   Ready    <none>                 42s   v1.29.4+k3s1

Accessing the cluster from a local laptop

I have SSH keys set up to access the master (a.k.a. home) node, so I can readily fetch the cluster config from the master node and register it as the default cluster context.

ssh admin@home sudo k3s kubectl config view --raw |
  sed 's/127.0.0.1/home/' \
  > ~/.kube/config
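The sed step only rewrites the API server address. A small illustration of what it does to the server line of the fetched config, assuming home resolves from the laptop (e.g. via /etc/hosts):

```shell
# The fetched kubeconfig points at 127.0.0.1, which is only reachable on the
# master itself; the rewrite swaps in a hostname the laptop can resolve.
echo 'server: https://127.0.0.1:6443' | sed 's/127.0.0.1/home/'
# prints: server: https://home:6443
```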

Now from my local machine I can view the running nodes.

❯ kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
home          Ready    control-plane,master   72m   v1.29.4+k3s1
raspberrypi   Ready    <none>                 10m   v1.29.4+k3s1

Next steps

I'm considering setting up Argo CD on this cluster and configuring an "app of apps" pattern to make it easier to manage service deployments.
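As a sketch of the app-of-apps pattern: one root Argo CD Application points at a Git directory that contains a child Application manifest per service, so adding a service is just a commit. The repo URL and path below are hypothetical placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps                      # the root "app of apps"
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config   # hypothetical repo
    targetRevision: main
    path: apps                    # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```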

I'm also thinking about persistent storage: in particular, setting up storage for Grafana, Prometheus, and Loki, for observability on the cluster and to collect metrics from elsewhere. To help with this, I'll possibly set up a network file system with csi-driver-nfs.
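If I do go the csi-driver-nfs route, the driver gets paired with a StorageClass naming the NFS export; a sketch, with a hypothetical server and share on the wired network:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io       # provisioner name registered by csi-driver-nfs
parameters:
  server: 192.168.178.10          # hypothetical NFS server on the wired network
  share: /srv/nfs/k8s             # hypothetical exported path
reclaimPolicy: Retain             # keep metrics data even if a claim is deleted
volumeBindingMode: Immediate
```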

And thirdly, I wouldn't mind experimenting with some chaos engineering: pulling out wires and corrupting deployments, all with the intention of testing my ability to recover the system.

Clean up

For now though, I'm going to clean this up and shut it all down to save on energy. First de-register the node from the cluster.

kubectl drain raspberrypi --ignore-daemonsets
kubectl delete node raspberrypi

On the agent node uninstall k3s.

/usr/local/bin/k3s-agent-uninstall.sh

Shut down the node.

sudo shutdown now

I've also got this Pi on a Tapo P100 Mini Smart Wi-Fi Socket so I can see how much energy it is using. This also allows me to turn off the power from my mobile phone, without going near the Raspberry Pi or unplugging any wires.

And on the master node let's uninstall k3s:

/usr/local/bin/k3s-uninstall.sh

Finally, let's remove my local kube config file, since the keys are no longer valid.

rm ~/.kube/config