Comparing local k8s stacks

Published on May 19, 2024

I recently explored spinning up a local cluster with k3d, but realised I should have compared it with other approaches, in particular kind, minikube and MicroK8s. Let's compare these k8s stacks for local development.


I'll be comparing how to spin up local k8s stacks with various tools. The installation steps below are for macOS, but each tool provides guidance on how to install in other environments.

Most of these tools run the cluster in a docker container by default, so ensure you have docker installed locally before starting up each cluster.
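Before creating any clusters, a quick sanity check that the docker daemon is reachable can save some head-scratching (a small sketch; `docker info` exits non-zero when the daemon isn't running or the binary is missing):

```shell
# Check whether the docker daemon is reachable; docker info exits
# non-zero (or the binary is missing entirely) when it is not.
if docker info >/dev/null 2>&1; then
  echo "docker is running"
else
  echo "docker is not running"
fi
```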


Install and start a Minikube cluster

brew install minikube
minikube start

Which starts up with:

😄  minikube v1.33.1 on Darwin 14.4.1 (arm64)
✨  Automatically selected the docker driver
📌  Using Docker Desktop driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.44 ...
💾  Downloading Kubernetes v1.30.0 preload ...
    > preloaded-images-k8s-v18-v1...:  319.81 MiB / 319.81 MiB  100.00% 39.89 M
    >  435.76 MiB / 435.76 MiB  100.00% 22.72 M
đŸ”Ĩ  Creating docker container (CPUs=2, Memory=4000MB) ...
đŸŗ  Preparing Kubernetes v1.30.0 on Docker 26.1.1 ...
    â–Ē Generating certificates and keys ...
    â–Ē Booting up control plane ...
    â–Ē Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    â–Ē Using image
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

View pods deployed:

> kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-7db6d8ff4d-7k7cj           1/1     Running   0             29s
kube-system   etcd-minikube                      1/1     Running   0             43s
kube-system   kube-apiserver-minikube            1/1     Running   0             43s
kube-system   kube-controller-manager-minikube   1/1     Running   0             43s
kube-system   kube-proxy-fwg69                   1/1     Running   0             29s
kube-system   kube-scheduler-minikube            1/1     Running   0             43s
kube-system   storage-provisioner                1/1     Running   1 (17s ago)

Clean up once done, with:

minikube delete --all


Install and start a kind Kubernetes cluster

brew install kind
kind create cluster

Which starts up with:

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) đŸ–ŧ
 ✓ Preparing nodes đŸ“Ļ
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹ī¸
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"

View pods deployed:

❯ kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-7db6d8ff4d-lnlhl                     1/1     Running   0          12s
kube-system          coredns-7db6d8ff4d-tgwfp                     1/1     Running   0          12s
kube-system          etcd-kind-control-plane                      1/1     Running   0          29s
kube-system          kindnet-tl7c5                                1/1     Running   0          13s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          29s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          29s
kube-system          kube-proxy-7dn4m                             1/1     Running   0          13s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          29s
local-path-storage   local-path-provisioner-988d74bc-dgltm        1/1     Running   0          12s

And clean up when done:

kind delete cluster

k3s with multipass

k3s doesn't support macOS directly, but it can be installed inside a multipass VM:

brew install multipass
multipass launch --name k3s --memory 2G --disk 20G
multipass shell k3s
curl -sfL https://get.k3s.io | sh -
sudo kubectl get pods -A

And clean up when done:

multipass delete k3s
multipass purge

k3s with docker compose

k3s can also be installed with docker, as shown in the example docker-compose.yaml below. I've removed the persistent volumes from this example since we're creating an ephemeral environment; if you do want a stack that you can stop and start, you can add those back in.

services:
  server:
    image: "rancher/k3s:latest"
    command: server
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_TOKEN=${K3S_TOKEN:-not-secret}
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    volumes:
      - .:/output
    ports:
      - 80:80
      - 6443:6443
      - 443:443

  agent:
    image: "rancher/k3s:latest"
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_TOKEN=${K3S_TOKEN:-not-secret}
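A note on the `${K3S_TOKEN:-not-secret}` entries: this is standard shell-style default expansion, which docker compose also interpolates in compose files, so the stack still starts if no token is exported:

```shell
# ${VAR:-default} expands to the default when VAR is unset or empty
unset K3S_TOKEN
echo "${K3S_TOKEN:-not-secret}"    # prints: not-secret

K3S_TOKEN="my-generated-token"
echo "${K3S_TOKEN:-not-secret}"    # prints: my-generated-token
```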

Start up with docker compose in detached mode:

K3S_TOKEN=$(uuidgen) docker compose up -d

This will write the kubeconfig.yaml file into the local directory, which we can use to connect to the cluster and view the pods. It may take a few seconds for any resources to show, but you can follow the stack starting up with docker compose logs -f. Once up, we can run:

kubectl --kubeconfig kubeconfig.yaml get pods -A

This shows the pods that are running in the cluster:

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-576bfc4dc7-zt4nl                  1/1     Running     0          114s
kube-system   local-path-provisioner-75bb9ff978-mhbfp   1/1     Running     0          114s
kube-system   helm-install-traefik-crd-pbngp            0/1     Completed   0          114s
kube-system   svclb-traefik-40cd1bb7-vc5xk              2/2     Running     0          102s
kube-system   svclb-traefik-40cd1bb7-rd9q8              2/2     Running     0          102s
kube-system   helm-install-traefik-vf88h                0/1     Completed   1          114s
kube-system   traefik-5fb479b77-drdfn                   1/1     Running     0          102s
kube-system   metrics-server-557ff575fb-f66hf           1/1     Running     0          114s
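Rather than passing `--kubeconfig` on every invocation, the generated file can be exported for the current shell session (assuming a POSIX shell, run from the directory containing kubeconfig.yaml):

```shell
# Point kubectl at the generated kubeconfig for this shell session
export KUBECONFIG="$(pwd)/kubeconfig.yaml"

# Subsequent commands then target the compose-based cluster, e.g.:
# kubectl get pods -A
```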

Once done, clean up the stack, and remove the local kubeconfig.yaml configuration file since it's no longer needed.

docker compose down
rm kubeconfig.yaml

k3s with k3d

See my previous blog post on k3d; essentially we can start up a cluster with:

brew install k3d
k3d cluster create my-cluster
kubectl get pods -A

Then clean up:

k3d cluster delete my-cluster


MicroK8s with multipass

MicroK8s can be spun up with:

brew install multipass
brew install ubuntu/microk8s/microk8s
microk8s install
multipass exec microk8s-vm -- sudo snap install microk8s --classic
microk8s start
mkdir ~/.microk8s
microk8s config > ~/.microk8s/config
microk8s status --wait-ready
microk8s kubectl get pod -A

Which, after a short wait (~60 seconds), showed the pods running:

NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77bd7c5b-6tzdd   1/1     Running   0          2m47s
kube-system   calico-node-tbcdg                        1/1     Running   0          2m47s
kube-system   coredns-864597b5fd-r8qxb                 1/1     Running   0          2m47s
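Since the steps above wrote the cluster config to ~/.microk8s/config, the host's own kubectl (if installed) can also talk to the VM-hosted cluster by pointing KUBECONFIG at that file:

```shell
# Use the host kubectl against the MicroK8s cluster in the VM
export KUBECONFIG="$HOME/.microk8s/config"

# e.g.:
# kubectl get pod -A
```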

Then clean up:

microk8s reset
multipass delete microk8s-vm
multipass purge

Which approach to use?

Each of the tools makes it straightforward to install a local stack. For k3s it's interesting to be able to install it directly, but I'll focus on k3d for comparison purposes since it's a convenient wrapper around k3s. For MicroK8s, although I got it working, installation in my environment wasn't as smooth as the others, and the clean reset and cold restart took much longer (> 60 seconds), making me think I need to find a quicker and easier way to spin it up. I may come back to MicroK8s at a later date, since it looks like it has some handy addons that make certain aspects of k8s easier to set up.

This gave me three approaches to compare: minikube, kind, and k3d.

To align k3d defaults with kind and minikube, for these benchmarks I have disabled traefik on creation of the cluster with:

k3d cluster create my-cluster --k3s-arg '--disable=traefik@server:*'

On my MacBook M1 Pro (16GB), start-up and tear-down times for all three are comparable, with minikube marginally slower. To give a ballpark feel for spin-up and tear-down speed, I measured the average time for the create-cluster command to complete, for all the service pods to be running, for a deployment of nginx to complete, and for clean-up of the cluster.

           create   running   deployment   clean
minikube   17s      31s       36s          39s
kind       14s      29s       34s          35s
k3d        14s      26s       31s          32s
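Timings like these can be gathered with a simple wall-clock wrapper along the following lines (a hypothetical sketch; the cluster commands are illustrative, assume the relevant tool is installed, and are shown commented out):

```shell
# Hypothetical helper: run a command and report elapsed wall-clock seconds
measure() {
  start=$(date +%s)
  "$@" >/dev/null 2>&1
  end=$(date +%s)
  echo "$(( end - start ))s"
}

# Illustrative usage (assumes kind and kubectl are installed):
# measure kind create cluster
# measure kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=120s
# measure kubectl create deployment nginx --image=nginx
# measure kind delete cluster
measure sleep 1
```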

Resource usage, a few minutes after spin up according to docker stats, was again similar for all three, with k3d coming in with slightly less CPU and memory usage.

           CPU    Memory
minikube   15%    600 MiB
kind       15%    580 MiB
k3d        12%    520 MiB

Minikube is a more comprehensive k8s stack, and I was pleasantly surprised that, on brief inspection, it doesn't add much overhead over k3d. I've historically tended toward k3d for quick local experiments, but given how easy it is to spin up a minikube cluster locally, and that it doesn't carry as much overhead as I expected, I'll start using minikube more for local work.

All in all, minikube, kind, and k3d are all great options for local Kubernetes development. Take your pick and enjoy.