Setting up Persistent Volumes for Storage in a k3d Cluster
To set up any service that needs to retain data, we need to make sure the storage persists through pod and cluster restarts. In Kubernetes we can set up a PersistentVolume (PV) to define the storage resource, and a PersistentVolumeClaim (PVC) to request and use that PersistentVolume.
A single-node k3d cluster is a good place to see these concepts in action before moving on to provisioning storage in a multi-node cluster.
Starting up a k3d cluster with a volume
Create a k3d cluster and define a volume with the argument --volume [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]].
mkdir -p ~/local/storage/k3d
k3d cluster create my-cluster \
--volume $HOME/local/storage/k3d:/var/storage
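k3d nodes run as Docker containers, so if you want to confirm the bind mount made it into the node, you can list the directory from the server container. It will be empty for now, but the path should exist. Note the node name here assumes k3d's default naming.
# node name assumes k3d's default pattern: k3d-<cluster>-server-0
docker exec k3d-my-cluster-server-0 ls /var/storage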
Apply a PersistentVolume resource
Let's create a PersistentVolume with 1Gi of storage. A PersistentVolume is a cluster resource that, once provisioned, acts as storage available to the cluster.
cat > my-persistent-volume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/storage
EOF
kubectl apply -f my-persistent-volume.yaml
We can now see this PV has been created.
$ kubectl get pv
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS
my-persistent-volume   1Gi        RWO            Retain           Available
Apply a PersistentVolumeClaim resource
Let's create a PVC requesting just 10Mi of storage, which we'll use to mount a volume from a pod.
cat > my-persistent-volume-claim.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
EOF
kubectl apply -f my-persistent-volume-claim.yaml
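Assuming the storageClassName matches, the claim should bind to the PV straight away. A quick check with kubectl get pvc should show something like this (columns trimmed):
$ kubectl get pvc
NAME     STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS
my-pvc   Bound    my-persistent-volume   1Gi        RWO            manual
Note the claim reports the full 1Gi of the PV it bound to, even though we only requested 10Mi.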
Test our storage with a pod
We can run an interactive pod and mount a volume using this claim. Apologies, the command is a little verbose, but it spins up an interactive pod that is deleted on exit, which is a quick way to get in and out of the cluster.
kubectl run -it alpine --image alpine --rm --overrides='
{
  "spec": {
    "containers": [{
      "name": "alpine", "image": "alpine",
      "args": [ "sh" ],
      "stdin": true, "stdinOnce": true, "tty": true,
      "volumeMounts": [{ "mountPath": "/mnt/storage", "name": "my-volume" }]
    }],
    "volumes": [{
      "name": "my-volume",
      "persistentVolumeClaim": { "claimName": "my-pvc" }
    }]
  }
}'
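Before writing anything, we can sanity check from inside the pod that the claim has been mounted. One way to do that (output will vary by setup):
# confirm the volume backed by our claim is mounted where we expect
mount | grep /mnt/storage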
The mount directory /mnt/storage is currently empty, but we can quickly write some output to a file in that directory.
date > /mnt/storage/dates.txt
Exit the pod. With the --rm argument we provided on the command line, the pod will be deleted on exit.
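If you want to double check that behaviour, listing the pods should show no alpine pod left behind:
kubectl get pods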
To give us confidence that this file is persistent, we can see it is now available on our local file system.
❯ cat ~/local/storage/k3d/dates.txt
Sun May 26 07:25:19 UTC 2024
Persistence through cluster recreation
This storage is persistent: we can delete the cluster, recreate it, spin up the pod again, and the file will still be there.
k3d cluster delete my-cluster
k3d cluster create my-cluster \
--volume $HOME/local/storage/k3d:/var/storage
kubectl apply -f my-persistent-volume.yaml
kubectl apply -f my-persistent-volume-claim.yaml
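Before jumping back into a pod, a quick check confirms the PV and PVC have been recreated and bound again:
kubectl get pv,pvc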
Run our interactive pod again:
kubectl run -it alpine --image alpine --rm --overrides='
{
  "spec": {
    "containers": [{
      "name": "alpine", "image": "alpine",
      "args": [ "sh" ],
      "stdin": true, "stdinOnce": true, "tty": true,
      "volumeMounts": [{ "mountPath": "/mnt/storage", "name": "my-volume" }]
    }],
    "volumes": [{
      "name": "my-volume",
      "persistentVolumeClaim": { "claimName": "my-pvc" }
    }]
  }
}'
And we can see that the old date output is still there:
# cat /mnt/storage/dates.txt
Sun May 26 07:25:19 UTC 2024
We can create a new entry:
date >> /mnt/storage/dates.txt
Exit the pod and check the file from the local machine again; both date entries are logged and available on our local file system.
❯ cat ~/local/storage/k3d/dates.txt
Sun May 26 07:25:19 UTC 2024
Sun May 26 08:13:25 UTC 2024
Storage persists through cluster recycles
Using k3d we can quickly get our hands on a PersistentVolume and a PersistentVolumeClaim and see how we can mount the storage onto a container. This is a single-node cluster, but I'll be looking at how to extend this to a multi-node cluster, such as a k3s cluster on a couple of Raspberry Pis, in a future blog.
Clean up
All done, let's delete the cluster:
k3d cluster delete my-cluster
And remove the dates file.
rm ~/local/storage/k3d/dates.txt