Adaptive Kind

Monitoring Kubernetes metrics with Grafana and Prometheus


Grafana is an open-source observability platform that gives your team a centralised view of the health and behaviour of your system. We'll use Kubernetes metrics to quickly spin up some dashboards and provide a foundation to experiment and learn more about Grafana.

Prometheus with kube-state-metrics

To start with, let's spin up a Grafana stack that reads from a Prometheus data source, which in turn ingests metrics from kube-state-metrics. We can use these metrics to explore the visualisations available on our Grafana dashboards.

Start Docker locally and create our k3d cluster:

k3d cluster create my-cluster -p "8080:80@loadbalancer"

Deploying a local k3d cluster was covered in a previous blog post if you'd like to read more on k3d.

Install the kube-prometheus-stack Helm chart to get a ready-baked set of deployments, including Prometheus (the metric store), Grafana (for visualisation) and kube-state-metrics (to generate and expose cluster-level metrics).

Add the prometheus-community repository and install the chart:

helm repo add prometheus-community \
  https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
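The chart's defaults are fine for this walkthrough, but its behaviour can be customised with a values file. As a minimal sketch (these are standard kube-prometheus-stack values, but verify the keys against the values.yaml of your chart version):

```yaml
# values.yaml - a sketch; check keys against your chart version
grafana:
  adminPassword: my-local-password  # set your own password instead of digging it out of a secret
prometheus:
  prometheusSpec:
    retention: 24h                  # a day of metrics is plenty for a throwaway cluster
```

Pass it at install time with `helm install prometheus prometheus-community/kube-prometheus-stack -f values.yaml`.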

We should now have the following pods running:

❯ kubectl get pods -o custom-columns="NAME:.metadata.name,STATUS:.status.phase"
NAME                                                     STATUS
prometheus-prometheus-node-exporter-6rqcj                Running
prometheus-kube-prometheus-operator-f556cf9c6-wsgm8      Running
prometheus-kube-state-metrics-547454f49d-r766v           Running
alertmanager-prometheus-kube-prometheus-alertmanager-0   Running
prometheus-prometheus-kube-prometheus-prometheus-0       Running
prometheus-grafana-d5679d5d7-69sgw                       Running

We can create an ingress route to the Grafana service so we can access the Grafana dashboard from our browser. It could also be exposed via a kubectl port-forward tunnel if you prefer.

kubectl create ingress prometheus-grafana --rule="/*=prometheus-grafana:80"
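The imperative command above is roughly equivalent to applying an Ingress manifest along these lines (a sketch of what kubectl create ingress generates for a "/*" rule):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-grafana
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix       # the "*" suffix in the rule maps to Prefix matching
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80
```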

Copy the Grafana admin user password to our clipboard (pbcopy is macOS-specific; on Linux, pipe to xclip -selection clipboard instead):

kubectl get secret prometheus-grafana -o jsonpath="{.data.admin-password}" |
  base64 --decode | pbcopy
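Secrets store their values base64-encoded, which is why the decode step is needed. For example, the chart's default Grafana admin password is prom-operator (unless you've overridden it), and decoding any base64 string works the same way:

```shell
# Decode a base64 string the same way the command above decodes the secret.
# "cHJvbS1vcGVyYXRvcg==" is the encoding of the chart's default password.
echo -n "cHJvbS1vcGVyYXRvcg==" | base64 --decode
# prints: prom-operator
```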

Then log into the Grafana dashboard at http://localhost:8080/ with the admin user and the password from your clipboard to view the dashboards.

Grafana dashboards from Helm chart

The kube-prometheus-stack installation should have created several Kubernetes metrics dashboards for you. Click on the navigation hamburger icon and then on Dashboards to see what has been set up.

Grafana Dashboards Menu Item

Drill down into one of the dashboards, for example Kubernetes / Compute Resources / Node (Pods), to see the CPU usage of each pod.

Grafana Compute Node
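Panels like these are driven by PromQL queries over the metrics Prometheus has scraped. As a hedged example (this is the kind of cadvisor-based query the compute dashboards use, not necessarily the exact one), per-pod CPU usage can be expressed as:

```promql
# CPU seconds consumed per pod, as a per-second rate over a 5-minute window
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod)
```

You can paste queries like this into Grafana's Explore view to experiment before building your own panels.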

What next?

This blog post is a light introduction to Grafana. I wanted to get to the point where we have a Grafana stack running, can see metrics, and can start playing with the dashboards.

In a more realistic setup we would want to configure data persistence, since in this stack all the collected data is lost when the pods are terminated. We may also want to deploy Grafana into a dedicated namespace to encapsulate its concerns, and we're likely to need more control over the deployments than the kube-prometheus-stack Helm chart gives us.
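As a sketch of the persistence point, the chart exposes values for backing both Grafana and Prometheus with persistent volumes (again, verify these keys against your chart version):

```yaml
grafana:
  persistence:
    enabled: true                  # back Grafana's state with a PersistentVolumeClaim
    size: 1Gi
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:         # give Prometheus its own volume for the TSDB
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
```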

That said, it's great to know that we can get a fresh instance of Grafana up and running locally if we want to check something quickly about Grafana and/or Prometheus in an isolated environment.

Clean up

Once done, you can clean up by deleting the cluster, getting yourself back to a clean state:

k3d cluster delete my-cluster