Local Kubernetes Stack with k3d in Seconds
Kubernetes runs containerized applications at scale, but you don't need complex infrastructure to test many aspects of a Kubernetes cluster. You can spin up a cluster with k3d on your local machine in seconds and start experimenting with Kubernetes.
k3s is a lightweight Kubernetes distribution, less than 100MB, that spins up with ease on low-spec machines. Beyond providing the foundations for a local cluster, k3s is also useful for deployments onto small devices, for example onto a Raspberry Pi. (edit: Which I explored in a later blog on deploying a k3s cluster on a Pi.)
k3d is a lightweight wrapper that runs k3s in docker. It makes creating and starting a Kubernetes cluster quick. If you're familiar with docker and already have it installed, this is a good choice. Kind is a similarly impressive tool with many of the same goals if you wish to look at an alternative.
Using k3d can help demystify some aspects of Kubernetes, allowing you to learn and experiment in a safe local environment. It can also help you validate changes locally before deploying to a shared environment, giving a quicker and more efficient feedback loop on your changes.
Installing k3d
Before starting, make sure you have docker installed to provide the runtime environment for k3d. You will also need the Kubernetes command line tool kubectl.
With those foundations in place, you can install k3d and start experimenting with a local cluster. On macOS you can install with brew.
brew install k3d
Or you can use chocolatey on Windows.
choco install k3d
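If you're not using a package manager, the k3d project also documents an install script. At the time of writing it can be run as below, but check the k3d docs for the current URL.
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash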
Starting up a cluster
With docker running on your local machine, create a new Kubernetes cluster with the k3d command.
k3d cluster create my-cluster
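If you'd like to confirm the cluster is up before going any further, k3d can list the clusters it manages and kubectl can show basic details of the cluster it is pointing at.
k3d cluster list
kubectl cluster-info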
You can now interact with your locally running cluster. For example, list the running pods.
$ kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-6c86858495-sj6gh     1/1     Running     0          2m27s
kube-system   coredns-6799fbcd5-4tzq4                     1/1     Running     0          2m27s
kube-system   helm-install-traefik-crd-z25w2              0/1     Completed   0          2m27s
kube-system   svclb-traefik-76abe76e-2f2d2                2/2     Running     0          2m18s
kube-system   helm-install-traefik-hjph9                  0/1     Completed   1          2m27s
kube-system   traefik-f4564c4f4-78vn8                     1/1     Running     0          2m18s
kube-system   metrics-server-54fd9b65b-p744s              1/1     Running     0          2m27s
This is a great place to start if you are new to Kubernetes and you are learning more about the kubectl commands. Have a look at the kubectl command reference for more info.
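For example, here are a few standard kubectl commands worth trying against this throwaway cluster. The pod name is a placeholder, so substitute one from the listing above.
kubectl get nodes
kubectl get namespaces
kubectl describe pod <POD_NAME> -n kube-system
kubectl logs <POD_NAME> -n kube-system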
k3d has updated your kube config file, ~/.kube/config, and the Kubernetes current context should be referencing that cluster configuration. It is this configuration that defines which cluster you are working with. You can check this with the kubectl config current-context command.
kubectl config current-context
If all's well, this should return k3d-my-cluster, which is the name of the newly created cluster with k3d. If you already have other cluster contexts configured locally, you can view them with get-contexts and change context with use-context.
kubectl config get-contexts
kubectl config use-context <CONTEXT_NAME>
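For example, to switch back to the cluster created above:
kubectl config use-context k3d-my-cluster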
Create a lightweight image we can test with
Let's create a docker image that we can deploy. This is just a lightweight nginx service serving some static HTML, and is enough for us to create a deployment and test the Kubernetes routing.
docker build --tag my/hello-world - <<EOF
FROM nginx:alpine
RUN echo "<html><body><p>Hello, World!</p></body></html>" > /usr/share/nginx/html/index.html
EOF
You can see the docker image that you have just built in your local docker image list.
$ docker image ls
REPOSITORY       TAG       IMAGE ID       CREATED          SIZE
my/hello-world   latest    3c86fe6a3708   10 seconds ago   49.7MB
We can run this image directly with docker:
docker run -p 8080:80 my/hello-world
And see the web page at http://localhost:8080/. However, for the purpose of this exercise we want to instead deploy this into our Kubernetes cluster.
We'll need to make this image available to the cluster. We can do this by creating a k3d private registry:
k3d registry create my-registry --port 5111
And then pushing our image to this registry:
docker tag my/hello-world localhost:5111/my/hello-world:latest
docker push localhost:5111/my/hello-world:latest
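If you want to check the push succeeded, the registry created by k3d speaks the standard Docker registry v2 API, so listing its catalogue should show the repository (assuming the registry is still listening on port 5111).
curl http://localhost:5111/v2/_catalog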
In the next step, a deployment will pull the image from this registry before running it.
Deploying our test service
To allow the cluster to pull from this local registry we need to reference the registry in the cluster. This can be done at creation time in k3d. First we need a k3s registries configuration file. Let's call it my-registries.yaml and set up the registry as below.
cat <<EOF > my-registries.yaml
mirrors:
  "localhost:5111":
    endpoint:
      - http://k3d-my-registry:5111
EOF
Then delete the current cluster, to allow us to recreate it.
k3d cluster delete my-cluster
Recreate the cluster with the registry configuration referenced. We'll also start up this cluster with the local port 8080 mapped onto the internal port 80 of the load balancer in the cluster, so that we can route onto the service we will deploy.
k3d cluster create my-cluster --registry-use k3d-my-registry:5111 \
-p "8080:80@loadbalancer" \
--registry-config my-registries.yaml
This has created a service with type LoadBalancer in the cluster, which allows services in the cluster to be exposed externally.
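In a default k3s setup this externally exposed load balancer is the Traefik service in the kube-system namespace, and you can see its type and ports with kubectl.
kubectl get service traefik -n kube-system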
Create a deployment from our published image.
kubectl create deployment my-hello-world \
--image k3d-my-registry:5111/my/hello-world
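It can be worth waiting for the deployment to roll out and confirming that the pod pulled the image from the registry successfully. kubectl create deployment labels the pods with app=my-hello-world, so we can select on that.
kubectl rollout status deployment/my-hello-world
kubectl get pods -l app=my-hello-world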
Create a service with type ClusterIP so that the app my-hello-world is available, internally in the cluster, with the host name my-hello-world.
kubectl create service clusterip my-hello-world --tcp=80:80
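A quick way to confirm the service has picked up the deployment's pod is to check its endpoints.
kubectl get endpoints my-hello-world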
Apply the ingress route onto our my-hello-world service.
kubectl create ingress my-hello-world --rule="/=my-hello-world:80"
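You can inspect the resulting ingress to see the rule and the backend service it routes to.
kubectl describe ingress my-hello-world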
And then access the deployed "Hello, World!" service at http://localhost:8080/.
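Either open it in a browser or curl it from the command line; the response should be the static page we baked into the image.
$ curl http://localhost:8080/
<html><body><p>Hello, World!</p></body></html>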
Summary
We've created a local Kubernetes cluster with k3d and deployed a lightweight docker image into the cluster. We then exposed the service so that it can be accessed from outside the cluster. From this foundation, we can experiment with Kubernetes and learn more about the Kubernetes stack.
Kubernetes can be hard to get into at first. In the early days I found spinning up a local k8s environment genuinely hard. This made learning and experimentation slower, and could deter other members of the team from gaining skills with Kubernetes.
With k3d, and other similar tools like kind, these skills became much more accessible, helping break down some knowledge divides. It also opens up the potential for helping with quality control, embedding in CI/CD processes, troubleshooting, and validating approaches before you try them in the wild.
In subsequent blogs I'll be using this locally deployed k3d cluster to explore other core concepts and tooling of the Kubernetes ecosystem.
Clean up
Once you are done, you can remove the local cluster and local registry that you created.
k3d cluster delete my-cluster
k3d registry delete k3d-my-registry