Deploying Kubernetes on a Raspberry Pi cluster with k3s
K3s is a fantastic lightweight Kubernetes distribution that is remarkably quick to install on a collection of Raspberry Pis. I had a couple of Raspberry Pis sitting idle, so I had a go at spinning up a Kubernetes cluster on them. It's a great way to get a deeper understanding and hone your Kubernetes skills.
Installing a control plane node
One of my Pis will be a control plane node. First we need to edit /boot/firmware/cmdline.txt and add cgroup_enable=memory, since K3s needs cgroups to start the systemd service. I didn't set cgroup_memory=1 as mentioned in the docs, because I read that it's redundant now, and it all worked fine without it. After the edit, the first line of this file will read something like:
console=tty1 root=PARTUUID=00000000-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=memory
Reboot the Pi for this to take effect:
sudo reboot now
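Once the Pi is back up, a quick way to confirm the change took effect is to check the kernel command line and that the memory cgroup is enabled:
grep cgroup /proc/cmdline
cat /proc/cgroups   # the memory row should show 1 in the enabled column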
Then install k3s:
curl -sfL https://get.k3s.io | sh -
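The install script sets k3s up as a systemd service, so we can check it started cleanly:
sudo systemctl status k3s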
And check that the node is up:
$ sudo k3s kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
control   Ready    control-plane,master   2m    v1.29.4+k3s1
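It can also be reassuring to watch the components that k3s bundles (CoreDNS, Traefik, the local-path provisioner and so on) come up:
sudo k3s kubectl get pods -A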
View the token for the master node in the node-token file. We'll need this to register another node.
sudo cat /var/lib/rancher/k3s/server/node-token
Installing an additional node
I set up another Pi afresh to act as another node in the cluster. I wrote an OS image to an SD card with the Raspberry Pi Imager, selecting the lightweight Raspberry Pi OS Lite (64-bit) and customising the install by setting an admin username and password, along with enabling the SSH service so that I could quickly SSH onto the server once it started up. I didn't set up Wi-Fi on this node (or the master node). Both Pis are on a local wired network, since I wanted the cluster to have a faster network between the nodes. I do have wireless access to that network from my laptop, so I can administer the nodes and access the cluster remotely.
With the SD card in the Pi, I turned the Raspberry Pi on, logged in and was ready to provision the node.
Again, cgroup_enable=memory needs to be added to cmdline.txt. This is a Pi 5, where the file is located at /boot/firmware/cmdline.txt (on older Raspberry Pi OS releases it lived at /boot/cmdline.txt).
I installed k3s on this node and connected it to the master node with the environment variables K3S_URL and K3S_TOKEN, as documented in the k3s environment variables documentation.
# K3S_TOKEN is the token from the master node's node-token file
curl -sfL https://get.k3s.io |
  K3S_URL=https://192.168.178.72:6443 \
  K3S_TOKEN=K___::server:0___ \
  sh -
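On this node the service is called k3s-agent rather than k3s, so if the node fails to register, its status and logs are the first place to look:
sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f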
Running kubectl from the master node, we can see the new node is up and running.
$ sudo k3s kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
control   Ready    control-plane,master   62m   v1.29.4+k3s1
worker    Ready    <none>                 42s   v1.29.4+k3s1
Accessing the cluster from a local laptop
I have SSH keys set up to access the control plane node, so I can readily get the cluster config from the master node and register it as the default cluster context.
ssh admin@control sudo k3s kubectl config view --raw |
  sed 's/127.0.0.1/control/' \
  > ~/.kube/config
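This overwrites any existing kubeconfig, which was fine for me. If you have other clusters registered, here's a rough sketch of merging instead (assuming kubectl is installed locally, and using /tmp/k3s.yaml as a scratch file):
ssh admin@control sudo k3s kubectl config view --raw | sed 's/127.0.0.1/control/' > /tmp/k3s.yaml
KUBECONFIG=~/.kube/config:/tmp/k3s.yaml kubectl config view --flatten > /tmp/merged.yaml
mv /tmp/merged.yaml ~/.kube/config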
Now from my local machine I can view the running nodes:
❯ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
control   Ready    control-plane,master   72m   v1.29.4+k3s1
worker    Ready    <none>                 10m   v1.29.4+k3s1
(Optional) Install with an Ansible collection
Added 13th August 2024.
As hinted at in the summary below, I went on to include the k3s cluster install in an Ansible collection for a research lab, which can install, re-install and upgrade a cluster with ease. It builds on a collection provided by the k3s team that's worth mentioning here: the manual steps above have all been wrapped up into the convenient k3s-ansible collection.
In its simplest form, we can create the inventory.yaml file:
k3s_cluster:
  children:
    server:
      hosts:
        control:
    agent:
      hosts:
        worker:
  vars:
    ansible_port: 22
    ansible_user: admin
    k3s_version: v1.30.2+k3s1
    token: "changeme!"
    api_endpoint: ""
    extra_server_args: ""
    extra_agent_args: ""
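A note on api_endpoint: I've shown it empty here, but depending on the collection version it may need to be set to the control plane's address (e.g. 192.168.178.72) so the agents know where to register.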
Replace the token variable with the output from:
openssl rand -base64 64
Install the k3s-ansible collection:
ansible-galaxy collection install git@github.com:k3s-io/k3s-ansible.git
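Before running the playbook, it's worth confirming Ansible can reach both nodes:
ansible all -i inventory.yaml -m ping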
And run the playbook:
ansible-playbook k3s.orchestration.site -i inventory.yaml
The playbook takes care of the manual steps we went through earlier, including updating the /boot/firmware/cmdline.txt file, restarting the nodes and wiring up the tokens.
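To upgrade later, the idea is to bump k3s_version in the inventory and run the collection's upgrade playbook (the playbook name here is my assumption from the collection's layout; check the k3s-ansible README):
ansible-playbook k3s.orchestration.upgrade -i inventory.yaml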
Next steps
I'm considering setting up Argo CD on this cluster and configuring an "app of apps" configuration to make it easier to manage service deployments. Edit - This is now covered in Deploying Kubernetes Resources with the app of apps pattern.
I'm also thinking about persistent storage. In particular, I'd like to set up storage for Grafana, Prometheus and Loki for observability on the cluster, and to collect metrics from elsewhere. To help with this, I'll possibly set up a network file system with csi-driver-nfs.
And thirdly, I wouldn't mind experimenting with some chaos engineering, pulling out wires and corrupting deployments, all with the intention of testing my ability to recover the system.
Clean up
For now though, I'm going to clean this up and shut it all down to save on energy. First, de-register the worker node from the cluster.
kubectl drain worker --ignore-daemonsets
kubectl delete node worker
On the agent node, uninstall k3s.
/usr/local/bin/k3s-agent-uninstall.sh
Shut down the node.
sudo shutdown now
I've also got this Pi on a Tapo P100 Mini Smart Wi-Fi Socket so I can see how much energy it is using. This also lets me turn off the power from my mobile phone, without being near the Raspberry Pi or unplugging any wires.
And on the master node let's uninstall k3s:
/usr/local/bin/k3s-uninstall.sh
Finally, let's remove my local k8s config file, since the keys are no longer valid.
rm ~/.kube/config