Adaptive Kind

Deploying Kube Resources with the Argo CD App of Apps Pattern


Now that I have my local kube stack with K3s on a couple of Raspberry Pis, the next task I wanted to tackle was to set up an App of Apps structure so that I could (repeatedly) go from an empty kube stack to my desired set of applications deployed into the cluster. The App of Apps pattern in Argo CD helps define the apps that we want deployed, all driven from a Git repository that describes a desired state.

Argo CD App of Apps

What I would like is:

  1. a regular set of apps installed into a kube cluster, such as Prometheus, Grafana, Loki and cert-manager.
  2. the apps configured in a specific way, for example with trusted ingress https routes and a collection of dashboards set up by default.
  3. for it to be quick to make changes over time to experiment with other applications and configurations.

I'll take you through this spin up.

Argo CD install

Starting with a running k3s cluster, we can install Argo CD in the cluster with Helm:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd --namespace argocd --create-namespace

I introduced Argo CD in a previous getting started blog, but to recap, Argo CD gets the state of a Kubernetes cluster into a desired state as defined in a Git repository, or a collection of Git repositories. When the Git repository is updated, Argo CD will reflect those changes in the cluster, applying and changing resources as needed.

Install App of Apps

Argo CD deployments start with an Application. I'll capitalise the word Application to indicate that it is a Kubernetes resource defined by a CRD. As with any kube CRD, we can describe it to see all the properties available.

kubectl describe crd applications.argoproj.io

When we deploy an Application, Argo CD will get to work on deploying it into the cluster. In the case of an App of Apps, it'll also, in turn, apply all contained Applications and other Kubernetes resources found in the Git repository into the cluster.

We can apply an Application into our Kubernetes context with the kubectl command.

cat <<EOF | kubectl create -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: base
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: env/lab
    repoURL: https://github.com/adaptivekind/app-of-apps.git
  syncPolicy:
    automated: {}
    syncOptions:
    - Prune=true
EOF

This Application is telling Argo CD that we'd like to install the resources found in the path env/lab in the given Git repository.

This resource is hosted in a public Git repository that I've created, so it can also be applied directly from a URL to the resource file in the "adaptivekind/app-of-apps" repository. This gives the same result as the in-line Application above, so do one or the other, not both.

kubectl apply -f https://raw.githubusercontent.com/adaptivekind/app-of-apps/main/env/base.yaml

This Application references other Applications that each, in turn, install resources from another path in a given Git repository. In this case, the referenced Git repository is the same App of Apps repository, but it could be a different one, perhaps owned by another team. This continues recursively through all the contained Applications in a given Application.

This can be thought of as a tree of resources that we want deployed. The rough structure of this App of Apps, with apps grouped into a boot configuration and a monitoring set of apps, looks like:

  • base
    • boot
      • cert-manager
      • Argo CD extras
        • Argo CD certificate
        • Argo CD ingress route
    • kubernetes dashboard
    • monitoring
      • Grafana
        • Grafana operator
        • Grafana
      • Loki
      • Prometheus
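
Each node in this tree is itself an Application. As a sketch, a child Application for the monitoring group might look roughly like the following; the path and repository layout here are assumptions for illustration, not necessarily the exact contents of the repository.

```yaml
# Hypothetical child Application referenced by the base app;
# it pulls in everything found under an assumed apps/monitoring path.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/monitoring
    repoURL: https://github.com/adaptivekind/app-of-apps.git
  syncPolicy:
    automated: {}
    syncOptions:
    - Prune=true
```

Because each group is just another Application pointing at a path, adding or removing a whole branch of the tree is a one-file change in Git.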

Whilst Argo CD is spinning up, let's register local host names for Argo CD and Grafana in /etc/hosts so that we can access both those dashboards when they are ready.

<ip of cluster> argocd.local
<ip of cluster> grafana.local

Back to the spin up: we can check on the progress of the applications by inspecting their status in the cluster.

$ kubectl -n argocd get applications
NAME                    SYNC STATUS   HEALTH STATUS
...
argocd-app              Synced        Healthy
...

Give it a few minutes to finish the deployments. If some of the applications remain in an unhealthy state, in particular the argocd application, then read the section below on accessing the Argo CD dashboard with a port forward to troubleshoot.

If you wish, you can inspect any unhealthy applications and see any error messages that have been reported by describing the resource.

kubectl describe -n argocd applications base

If all is well, the applications should all end up in a healthy state and we can continue to log into the Argo CD dashboard.

Trusting the CA certificate

Before logging in, we can trust the Argo CD TLS certificate. We don't have to do this, but if we don't, we will get certificate trust warnings when we connect.

Connections to the Argo CD dashboard, and to the API from the Argo CD command line, are made over HTTPS. By default, Argo CD creates a self-signed certificate for this connection. The App of Apps installed above modifies the default: it uses cert-manager to create a self-signed CA (certificate authority) certificate, uses that CA to issue a new CA-signed certificate for Argo CD, and configures the TLS certificate in a secret for Argo CD to use. Trusting this CA certificate has the advantage that other certificates issued from it will also be trusted, such as the TLS certificate for access to the Grafana dashboard.
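
As a sketch, the cert-manager side of this chain can be expressed with a self-signed bootstrap issuer, a CA Certificate, and a CA-backed issuer along these lines; the issuer and certificate names here are illustrative, though the secret name matches the root-secret we download below.

```yaml
# Bootstrap issuer that signs its own certificates
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
# CA certificate, stored in root-secret, issued by the bootstrap issuer
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: lab-cluster-ca
  secretName: root-secret
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
# Issuer that signs leaf certificates (e.g. for Argo CD and Grafana) with the CA
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-secret
```

Leaf Certificates for Argo CD and Grafana then reference ca-issuer, so trusting the single CA covers them all.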

To trust the CA certificate download it from the cert-manager root secret:

kubectl get secret -n cert-manager root-secret -o jsonpath="{.data.ca\.crt}" |
  base64 -d > /tmp/lab-cluster-ca.crt

And trust the downloaded certificate on your local machine. On macOS you can trust a certificate with the security command.

sudo security add-trusted-cert -d -r trustRoot \
  -k "/Library/Keychains/System.keychain"      \
  /tmp/lab-cluster-ca.crt

Log in to the Argo CD dashboard

We can get the initial admin password for Argo CD from the Argo CD command line. Install the Argo CD CLI, for example with Homebrew.

brew install argocd

Get the initial password

argocd admin initial-password -n argocd

It's a good idea to change this initial admin password and we can do this with the Argo CD command line. Log in to the Argo CD instance with the command line and change the admin password to something more secret.

argocd login argocd.local --username admin
argocd account update-password

Now log in to the Argo CD dashboard at https://argocd.local and we should see a dashboard indicating the applications are installed and healthy.

Argo CD healthy apps

Access the Grafana dashboard

One of the applications included in this App of Apps is Grafana. Before the Grafana installation can complete, it needs a password to be set. Set the password via a kube secret, for example:

kubectl create -n monitoring secret generic grafana-password \
  --from-literal=admin-password=super-secret                 \
  --from-literal=admin-user=admin

The Grafana application should then finish the install, and you can log in at https://grafana.local/. You should be able to see one of the dashboards loaded from the App of Apps collection.
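
These default dashboards are provisioned declaratively through the Grafana operator. A minimal sketch of such a resource, assuming the operator's v1beta1 API and a Grafana instance labelled `dashboards: grafana` (both label and dashboard content are illustrative), looks like:

```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: example-dashboard
  namespace: monitoring
spec:
  # Selects which Grafana instance(s) the dashboard is loaded into;
  # the label here is an assumption for illustration.
  instanceSelector:
    matchLabels:
      dashboards: grafana
  # Inline dashboard JSON; in practice this could also reference a URL.
  json: |
    {
      "title": "Example dashboard",
      "panels": []
    }
```

Because dashboards are just resources in the Git repository, adding one to every cluster spin-up is a commit rather than a manual import.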

Grafana Dashboard

Asides

Before I wrap up the blog, I wanted to run through a few related topics that might help you troubleshoot any issues you may have with this spin up.

Hiding secrets from command line

I'm never keen on writing a password directly on the command line, since it exposes the secret in the command history in plain text. Often I use direnv to set environment variables in my shell when I enter a given directory. Then I can set the Grafana password from an environment variable with:

kubectl create -n monitoring secret generic grafana-password \
  --from-literal=admin-password=$GRAFANA_ADMIN_PASSWORD      \
  --from-literal=admin-user=admin
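
The corresponding .envrc in the project directory is a one-liner; the value below is a placeholder.

```shell
# .envrc — loaded by direnv when entering the project directory
# The value here is a placeholder; keep real secrets out of version control.
export GRAFANA_ADMIN_PASSWORD="change-me"
```

Run `direnv allow` once after creating or editing the file so direnv will load it.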

Dashboard via port forward

If for some reason the Argo CD dashboard is not available via the ingress route, then you can access the dashboard via a port forward to troubleshoot.

kubectl port-forward -n argocd svc/argocd-server 8443:443

Then you can access the Argo CD dashboard through localhost at https://localhost:8443.

Namespace stuck terminating

Whilst developing this App of Apps, I deleted and reinstalled the stack many times. The stack can be deleted by deleting the top-level app in the Argo CD dashboard (or with the kubectl command line) and then deleting the namespaces. Sometimes when I deleted the argocd namespace, it would get stuck in a terminating state due to a finalizer deadlock. To unblock this, the finalizers on the Application resources can be removed. Once they have been removed, the namespace deletion should complete.

kubectl -n argocd get application -o name |
  xargs -I {} kubectl -n argocd patch {} --type json \
  --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'

Summary

The App of Apps pattern in Argo CD offers a powerful and efficient way to manage the deployment of Kubernetes applications. Grouping applications together encapsulates the detailed configuration and composition of the applications.

The App of Apps used in this blog allows me to experiment rapidly with different applications and different configurations, whilst maintaining a baseline of the common configuration I need. In this case, I want observability services (Grafana, Prometheus etc) and certificate management spun up in a consistent way, and I want it just working at the press of a button. I can readily spin up cluster variations, experiment, do destructive testing and tear them down when done.

I intend to keep updating the App of Apps Git repository as I continue with experiments, and I will be doing this on the trunk. Feel free to fork the repository if you want stability or if you want to try out your own variations. If you want to keep your fork private, you can read the instructions on how to connect Argo CD to a private GitHub repository with a GitHub App.