Adaptive Kind
 

Routing k3d deployments with a traefik reverse proxy

Published on May 28, 2024 by Ian Homer

I started writing a blog post that needed a clean way to expose two services from the cluster, and it got me wondering about the best way to do this. I often aim to write blogs on this site in a zero-to-goal manner, so they can be read in isolation. k3d uses traefik by default, which is pretty neat when it comes to low configuration, and it led me to experiment with lightweight ways of exposing services that I could reuse in future blogs. Let's have a look at some of the options.

Create our k3d cluster

First, let's create our cluster with a local registry configured, so we can deploy a custom app and see how traefik behaves as a reverse proxy. We'll also expose ingress on port 8080, the port on which we'll access services in the cluster.

k3d registry create my-registry --port 5111
cat <<EOF > my-registries.yaml
mirrors:
  "localhost:5111":
    endpoint:
      - http://k3d-my-registry:5111
EOF
k3d cluster create my-cluster --registry-use k3d-my-registry:5111 \
  -p "8080:80@loadbalancer"                                       \
  --registry-config my-registries.yaml

The cluster should now be up, and we're ready to explore our options. Read spinning up a local k8s cluster with k3d if you want more of the details behind that spin up.
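
Optionally, we can check that the cluster is ready and the bundled traefik pod has come up before moving on:

kubectl get nodes
kubectl get pods --namespace kube-system

The traefik pod in the kube-system namespace should reach a Running state.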

Create a test app to inspect headers

I wanted a quick app that shows the request headers received by the service, since traefik injects some extra headers I wanted to inspect.

We can create a small Python Flask app, app.py, that outputs the request headers. Flask is a powerful micro web framework, perfect for a small web app (as well as robust production deployments). For our case we'll just have one route that returns the request headers to the browser as JSON.

from flask import Flask, request, make_response

app = Flask(__name__)

# Single route that returns the incoming request headers as JSON
@app.route("/")
def hello_world():
    return make_response(dict(request.headers), {"Content-Type": "application/json"})

# Listen on all interfaces so the app is reachable from outside the container
app.run(debug=True, host='0.0.0.0', port=8080)

The host is set to 0.0.0.0 to serve the app on any IP address; otherwise you'll get a 403 forbidden error. We can wrap this app in a Dockerfile.

FROM python:alpine
RUN pip install flask
COPY app.py .
CMD [ "python", "app.py" ]

Build the image.

docker build --tag my/responder .

And push to the registry so that our cluster can pull the image.

docker tag my/responder localhost:5111/my/responder:latest
docker push localhost:5111/my/responder:latest
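
Optionally, we can confirm the push with the standard registry catalog endpoint, which should list my/responder:

curl http://localhost:5111/v2/_catalog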

Deploy the test service to our cluster

Now we can create a deployment in our cluster:

kubectl create deployment my-responder \
  --image k3d-my-registry:5111/my/responder
kubectl create service clusterip my-responder --tcp=8080:8080
kubectl create ingress my-ingress --rule="/=my-responder:8080"
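
The pod may take a few seconds to pull the image and start, so an optional wait before testing:

kubectl rollout status deployment/my-responder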

With this app deployed and ingress set up, we can access the deployed responder service at http://localhost:8080/. This shows us the headers injected by traefik:

{
  "X-Forwarded-For": "10.42.0.1",
  "X-Forwarded-Host": "localhost:8080",
  "X-Forwarded-Port": "8080",
  "X-Forwarded-Proto": "http",
  "X-Forwarded-Server": "traefik-f4564c4f4-mqngc",
  "X-Real-Ip": "10.42.0.1"
}
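
The same check from a terminal, assuming jq is installed for pretty printing:

curl -s http://localhost:8080/ | jq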

Reverse proxy on alternative paths

We've got the app on the root path here, however for the purpose of this blog and other use cases we need to serve up multiple services from the cluster. First, delete the single ingress onto the root path that we previously created.

kubectl delete ingress my-ingress

traefik uses middleware resources to customise ingress behaviour. Create a file middleware.yaml that defines a traefik middleware resource to strip a prefix from the path, allowing us to mount the service onto a sub-path. We're using the StripPrefix middleware to be explicit about which paths to support. We could use the StripPrefixRegex middleware if we wanted a more generic approach (see the sketch after the apply step below).

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: proxy
spec:
  stripPrefix:
    prefixes:
      - /foo

Apply this proxy middleware.

kubectl apply -f middleware.yaml
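
As an aside, the more generic StripPrefixRegex middleware mentioned above might look something like this (a sketch we won't apply here; the proxy-regex name and the regex pattern are illustrative):

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: proxy-regex
spec:
  stripPrefixRegex:
    regex:
      - "/[a-z0-9-]+"

For this blog we'll stick with the explicit StripPrefix middleware applied above.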

And then deploy the application onto a different path by binding the traefik router.middlewares annotation on the kubernetes ingress.

kubectl create ingress my-ingress-foo \
  --annotation "traefik.ingress.kubernetes.io/router.middlewares=default-proxy@kubernetescrd" \
  --rule="/foo/=my-responder:8080"

Now when we access http://localhost:8080/foo/, we reach the responder service and get an extra header in the response, alongside the ones we saw before:

{
  "X-Forwarded-Prefix": "/foo"
}

Setting routes for Grafana and Prometheus

We've seen the basics of traefik routing; however, we can learn more by trying it with a real application, such as running Grafana behind a reverse proxy. Let's install a Grafana stack with helm to try this out.

helm repo add prometheus-community \
  https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --set 'grafana.grafana\.ini.server.root_url=%(protocol)s://%(domain)s:%(http_port)s/grafana/' \
  --set 'grafana.grafana\.ini.server.serve_from_sub_path=true'

In this Helm chart install, we set some of the grafana.ini configuration to explicitly serve Grafana from a sub-path. Reading the documentation on how to run Grafana behind a proxy, I had hoped I could set up a route to Grafana without this explicit configuration. I might work that out at a later date, but for now this explicit configuration is good enough for my purpose.
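
For reference, the same settings could be kept in a values file instead of --set flags; a sketch of the equivalent values.yaml for the kube-prometheus-stack chart:

grafana:
  grafana.ini:
    server:
      root_url: "%(protocol)s://%(domain)s:%(http_port)s/grafana/"
      serve_from_sub_path: true

This would then be applied with helm install prometheus prometheus-community/kube-prometheus-stack -f values.yaml.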

We can edit the traefik middleware proxy configuration in place and add a prefix /grafana.

kubectl edit middleware.traefik.io/proxy
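
After the edit, the middleware spec should list both prefixes, along these lines:

spec:
  stripPrefix:
    prefixes:
      - /foo
      - /grafana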

Create an ingress so we can access the Grafana dashboard on the prefix /grafana. Note that in this rule we have a *, which leads to the ingress being set up with a path type of Prefix instead of Exact.

kubectl create ingress prometheus-grafana \
  --annotation "traefik.ingress.kubernetes.io/router.middlewares=default-proxy@kubernetescrd" \
  --rule="/grafana/*=prometheus-grafana:80"

Now we have both http://localhost:8080/grafana/ serving the Grafana dashboard and http://localhost:8080/foo/ serving our responder app.

Using an IngressRoute to match on path

We can achieve a similar result with a traefik IngressRoute CRD, which saves us from having to set up the middleware and helps keep each ingress route atomically deployable.

Create the IngressRoute resource file route.yaml.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  namespace: default
  name: my-ingress-route
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/grafana`)
      kind: Rule
      services:
        - name: prometheus-grafana
          port: 80

And apply.

kubectl apply -f route.yaml

This also gives us Grafana on http://localhost:8080/grafana/ with the same results as before.

Mapping route based on requested host

An alternative approach is to control the routes via host matching, reducing the need for any explicit routing configuration in Grafana.

To demonstrate this, let's reinstall Grafana without this configuration.

helm uninstall prometheus
helm install prometheus prometheus-community/kube-prometheus-stack

Change the route.yaml to:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-ingress-route
spec:
  routes:
    - match: Host(`grafana.local`)
      kind: Rule
      services:
        - name: prometheus-grafana
          port: 80

And apply.

kubectl apply -f route.yaml

With this in place, we can register the grafana.local host in our /etc/hosts file by adding the following line.

127.0.0.1 grafana.local

Now we can access Grafana at http://grafana.local:8080/, and we haven't had to rewrite any paths, nor veer from the default Grafana configuration.
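
As a quick check that doesn't rely on the /etc/hosts entry, we can also pass the Host header directly, which should typically respond with a redirect to the Grafana login page:

curl -si -H "Host: grafana.local" http://localhost:8080/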

Summary

The aim of this blog was to find a way to quickly spin up, from zero, a cluster with multiple routes onto multiple services, to help with blogs on topics where access to multiple services is required.

The IngressRoute with host mapping is probably the cleanest, and in many cases closest to what we might be doing in a production environment. In a local environment we could use dnsmasq to set up local wildcard mapping, so that once dnsmasq is running we wouldn't need any edits to the /etc/hosts file. However, that is not a service I want to keep mentioning in future blogs, and in an extended team it's debatable whether it's a practice everyone would want to adopt. In general, both /etc/hosts configuration and dnsmasq are extra cognitive load I don't want to bring into blogs where we need multiple routes.
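
For reference, the dnsmasq wildcard mapping I have in mind is a single configuration line such as the following, which resolves any *.local hostname, including grafana.local, to the loopback address:

address=/local/127.0.0.1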

With this in mind, I find IngressRoute with PathPrefix the easiest to contain. It does have the side effect that the underlying service, Grafana in this case, needs extra configuration. I wonder if that is a misconfiguration on my part; if I work out how to drop it, I will.

All that said, I'll use the IngressRoute with PathPrefix approach in future blogs, since I can spin up multiple routes with a few lines of code and clean up at the end by deleting the cluster.

EDIT: I'm swinging towards preferring IngressRoute with Host matching. It allows a common naming pattern to form, makes TLS termination a possibility, helps with a local password manager, and allows the backend service to be agnostic to the ingress routing.

Clean up our cluster

Once all done, delete the cluster.

k3d cluster delete my-cluster

If you edited the /etc/hosts file, remove the host line you added as well.