Comparing Local k8s stacks: k3d, minikube and MicroK8s
I explored spinning up a local cluster with k3d recently, but realised I should have compared k3d with other approaches. Let's compare kind, minikube, and MicroK8s and see how these different k8s stacks help with local development.
Preparation
I'll be comparing how to spin up local k8s stacks with different tools. The installation steps are based on macOS, although the documentation for each tool provides guidance on how to install in other environments.
Most of these tools run the cluster in Docker containers by default, so ensure you have Docker installed locally before starting up each cluster.
Minikube
Install and start a Minikube cluster
brew install minikube
minikube start
Which starts up with:
minikube v1.33.1 on Darwin 14.4.1 (arm64)
Automatically selected the docker driver
Using Docker Desktop driver with root privileges
Starting "minikube" primary control-plane node in "minikube" cluster
Pulling base image v0.0.44 ...
Downloading Kubernetes v1.30.0 preload ...
> preloaded-images-k8s-v18-v1...: 319.81 MiB / 319.81 MiB 100.00% 39.89 M
> gcr.io/k8s-minikube/kicbase...: 435.76 MiB / 435.76 MiB 100.00% 22.72 M
Creating docker container (CPUs=2, Memory=4000MB) ...
Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
Configuring bridge CNI (Container Networking Interface) ...
Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
View pods deployed:
> kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7db6d8ff4d-7k7cj 1/1 Running 0 29s
kube-system etcd-minikube 1/1 Running 0 43s
kube-system kube-apiserver-minikube 1/1 Running 0 43s
kube-system kube-controller-manager-minikube 1/1 Running 0 43s
kube-system kube-proxy-fwg69 1/1 Running 0 29s
kube-system kube-scheduler-minikube 1/1 Running 0 43s
kube-system storage-provisioner 1/1 Running 1 (17s ago) 41s
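With the cluster up, a quick smoke test before tearing it down is deploying something small. A minimal nginx Deployment sketch (the `nginx-smoke` name is just illustrative, not from the original post):

```yaml
# nginx-smoke.yaml — illustrative smoke-test workload for any local cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-smoke
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-smoke
  template:
    metadata:
      labels:
        app: nginx-smoke
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
```

Apply with `kubectl apply -f nginx-smoke.yaml` and wait for it with `kubectl rollout status deployment/nginx-smoke`.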
Clean up once done, with:
minikube delete --all
Kind
Install and start a kind Kubernetes cluster
brew install kind
kind create cluster
Which starts up with:
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.30.0)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
View pods deployed:
> kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7db6d8ff4d-lnlhl 1/1 Running 0 12s
kube-system coredns-7db6d8ff4d-tgwfp 1/1 Running 0 12s
kube-system etcd-kind-control-plane 1/1 Running 0 29s
kube-system kindnet-tl7c5 1/1 Running 0 13s
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 29s
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 29s
kube-system kube-proxy-7dn4m 1/1 Running 0 13s
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 29s
local-path-storage local-path-provisioner-988d74bc-dgltm 1/1 Running 0 12s
And clean up when done:
kind delete cluster
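kind can also create multi-node clusters from a declarative config file, which is handy when a single-node cluster isn't representative enough. A minimal sketch, assuming a hypothetical `kind-config.yaml` file name (the node counts are illustrative):

```yaml
# kind-config.yaml — one control-plane node plus two workers (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create it with `kind create cluster --config kind-config.yaml`; `kubectl get nodes` should then list three nodes.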
K3s with multipass
K3s doesn't support macOS directly, but it can be installed in a multipass VM:
brew install multipass
multipass launch --name k3s --memory 2G --disk 20G
multipass shell k3s
curl -sfL https://get.k3s.io | sh -
sudo kubectl get pods -A
And clean up when done:
multipass delete k3s
multipass purge
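If you'd rather run kubectl from the macOS host instead of shelling into the VM, the kubeconfig that k3s writes to /etc/rancher/k3s/k3s.yaml can be copied out, with its server address rewritten from 127.0.0.1 to the VM's IP. A hedged sketch; the helper name is mine, and the paths and command output parsing are assumptions based on the k3s and multipass docs:

```shell
# Rewrite a k3s kubeconfig (read from stdin) so the server address points at
# the VM's IP rather than 127.0.0.1; writes the adjusted config to stdout.
point_kubeconfig_at() {
  sed "s/127\.0\.0\.1/$1/g"
}

# Usage (assumed commands; run on the host):
#   IP=$(multipass info k3s | awk '/IPv4/ {print $2}')
#   multipass exec k3s -- sudo cat /etc/rancher/k3s/k3s.yaml \
#     | point_kubeconfig_at "$IP" > k3s.yaml
#   kubectl --kubeconfig k3s.yaml get pods -A
```

This keeps the VM untouched; only the local copy of the kubeconfig is modified.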
K3s with docker compose
K3s can also be installed with Docker, as shown in the example docker-compose.yaml below. I've removed the persistent volumes from this example since we're creating an ephemeral environment, but if you want a stack that you can stop and restart, you can add those back in.
services:
  server:
    image: "rancher/k3s:latest"
    command: server
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_TOKEN=${K3S_TOKEN:-not-secret}
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - .:/output
    ports:
      - 80:80
      - 6443:6443
      - 443:443
  agent:
    image: "rancher/k3s:latest"
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_TOKEN=${K3S_TOKEN:-not-secret}
Start up with docker compose in daemon mode:
K3S_TOKEN=$(uuidgen) docker compose up -d
This will write the kubeconfig.yaml file into the local directory, which we can use to connect to the cluster and view the pods. It may take a few seconds for this to show any resources, but you can view the logs of the stack starting up with docker compose logs -f. Once up, we can run:
kubectl --kubeconfig kubeconfig.yaml get pods -A
to show the pods that are running in this cluster.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576bfc4dc7-zt4nl 1/1 Running 0 114s
kube-system local-path-provisioner-75bb9ff978-mhbfp 1/1 Running 0 114s
kube-system helm-install-traefik-crd-pbngp 0/1 Completed 0 114s
kube-system svclb-traefik-40cd1bb7-vc5xk 2/2 Running 0 102s
kube-system svclb-traefik-40cd1bb7-rd9q8 2/2 Running 0 102s
kube-system helm-install-traefik-vf88h 0/1 Completed 1 114s
kube-system traefik-5fb479b77-drdfn 1/1 Running 0 102s
kube-system metrics-server-557ff575fb-f66hf 1/1 Running 0 114s
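Since the pods take a little while to appear, a small polling helper saves re-running the command by hand. A minimal sketch; the helper name, retry budget, and sleep interval are all arbitrary choices of mine:

```shell
# Retry a command until it succeeds or the attempt budget runs out.
# $1 = number of attempts, remaining args = the command to run.
wait_until() {
  local tries=$1; shift
  local i
  for i in $(seq "$tries"); do
    if "$@"; then return 0; fi
    sleep 2
  done
  return 1
}

# Usage (assumed): block until the cluster API starts answering
#   wait_until 30 kubectl --kubeconfig kubeconfig.yaml get pods -A
```

The helper returns the usual shell success/failure status, so it composes with `&&` for follow-up commands.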
Once done, clean up the stack, and also remove the local kubeconfig.yaml file since it's no longer of use.
docker compose down
rm kubeconfig.yaml
K3s with k3d
See the previous blog on k3d; essentially, we can start up a cluster with:
brew install k3d
k3d cluster create my-cluster
kubectl get pods -A
Then clean up:
k3d cluster delete my-cluster
MicroK8s
MicroK8s can be spun up with:
brew install multipass
brew install ubuntu/microk8s/microk8s
microk8s install
multipass exec microk8s-vm -- sudo snap install microk8s --classic
microk8s start
mkdir ~/.microk8s
microk8s config > ~/.microk8s/config
microk8s status --wait-ready
microk8s kubectl get pod -A
Which, after a little wait (~60 seconds), showed the pods running:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-77bd7c5b-6tzdd 1/1 Running 0 2m47s
kube-system calico-node-tbcdg 1/1 Running 0 2m47s
kube-system coredns-864597b5fd-r8qxb 1/1 Running 0 2m47s
Then clean up:
microk8s reset
multipass delete microk8s-vm
multipass purge
Which approach to use?
Each of the tools make it straightforward to install a local stack. When it comes to k3s it is interesting to be able to install it directly, however I'll focus on k3d for comparison purposes since it is a convenient wrapper of k3s. For Microk8s, although I got it working, installation on my environment wasn't as smooth as the others and the clean reset and cold restart took much longer than the others (> 60 seconds), making me think I may need to find a quicker and easier way to spin it up. I may come back to Microk8s at a later date, since it looks like it may some handy addons to make certain aspects of k8s easier to set up.
This gave me 3 approaches to compare minikube, kind, and k3d.
To align k3d's defaults with kind and minikube, for these benchmarks I disabled traefik on creation of the cluster with:
k3d cluster create my-cluster --k3s-arg '--disable=traefik@server:*'
On my MacBook M1 Pro (16GB), start-up and tear-down times for all three are comparable, with minikube marginally slower. To give a ballpark sense of speed, I measured the average time for the create-cluster command to complete, for all the service pods to be running, for a deployment of nginx to complete, and for clean-up of the cluster.
| | create | running | deployment | clean |
|---|---|---|---|---|
| minikube | 17s | 31s | 36s | 39s |
| kind | 14s | 29s | 34s | 35s |
| k3d | 14s | 26s | 31s | 32s |
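The timings above come from simple wall-clock measurement. Roughly, a helper along these lines does the job; the function name is mine, and whole-second resolution via `date +%s` is coarse but enough for a ballpark:

```shell
# Print how many wall-clock seconds a command took, discarding its output.
measure() {
  local start end
  start=$(date +%s)
  "$@" >/dev/null 2>&1
  end=$(date +%s)
  echo "$1: $((end - start))s"
}

# Usage (assumed): measure k3d cluster create my-cluster
```

Averaging a few runs smooths out image-pull caching effects, which dominate the very first run of each tool.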
Resource usage a few minutes after spin-up, according to docker stats, was again similar for all three, with k3d coming in with slightly less CPU and memory usage.
| | CPU | MEM |
|---|---|---|
| minikube | 15% | 600 MiB |
| kind | 15% | 580 MiB |
| k3d | 12% | 520 MiB |
Minikube is a more comprehensive k8s stack, and I was pleasantly surprised that, on brief inspection, it doesn't add much overhead over k3d. I've historically tended toward k3d for quick local experiments, but given how easy it is to spin up a minikube cluster locally, and that it doesn't have as much overhead as I expected, I'll start to use minikube more for local work.
All in all, minikube, k3d, and kind are all great options for local Kubernetes development. Take your pick and enjoy them all.