All the code for this chapter (and other chapters) is available at https://github.com/param108/kubernetes101. Check the directory 020.

In the last post we talked about Volumes. The problem with Volumes is that they are deleted along with the Pod. What do we do if we need the data to persist?

One option we did see was to use the node’s filesystem to store the data. The problem with that is that you are never certain which node your pod will be scheduled on.

If the design of your cluster involves assumptions about which node your pod will be scheduled on, then you probably need to revisit your design.

Persistent Volumes

Kubernetes has your back here though! Using Persistent Volumes, you can create disk space in the cloud (say) and hook it into a pod when it is scheduled.

Here is the spec for a Persistent Volume (also available in the code repo at 020/pv.yml).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
  labels:
    name: nfsvolume
spec:
  volumeMode: Filesystem
  storageClassName: slow
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.42
    path: "/mnt/nfs"

You can learn how to set up an NFS export here. Make sure to mark the export as insecure, as shown below, because Kubernetes will connect from a port greater than 1024.

$ cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#		to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/mnt/nfs 192.168.0.0/16(rw,sync,no_subtree_check,insecure)
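
Once /etc/exports has been edited, re-export the directories and check that the export is visible. Roughly, on the NFS server:

# re-read /etc/exports and re-export everything
$ sudo exportfs -ra

# list what the server is exporting (can be run from any machine on the network)
$ showmount -e 192.168.0.42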

Persistent Volume Claim

A Persistent Volume Claim is the resource that wraps a Persistent Volume so that it can be used by a Pod. The claim is bound to a matching Persistent Volume (with a pre-created volume like ours, this happens as soon as the claim is created) and the claim is then attached to the Pod as a volume.

Here is a Persistent Volume Claim specification. It is available in the code repo at 020/pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfsclaim
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Mi
  storageClassName: slow
  selector:
    matchLabels:
      name: "nfsvolume"

Pod Specification

apiVersion: "v1"
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: ubuntu-pod
    version: v5
    role: backend
spec:
  containers:
  - name: ubuntu-container
    image: plainubuntu:latest
    imagePullPolicy: Never
    command: ["/bin/bash"]
    args: ["-c", "while [ \"a\" = \"a\" ]; do echo \"Hi\"; sleep 5; done" ]
    volumeMounts:
      - mountPath: /v1
        name: nfs-volume
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: nfsclaim

Play

After applying all 3 configs, we can see that the ubuntu-pod has a directory /v1 which points to the nfs exported directory.
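
If you want to confirm the claim was attached before exec-ing into the pod, kubectl describe shows it; the Volumes section should list nfs-volume with ClaimName: nfsclaim.

$ kubectl describe pod ubuntu-pod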

I was able to create a file in the nfs exported directory and see it in the pod.

# /mnt/nfs is the nfs directory on the server
$ ls /mnt/nfs
man  man2

# I can see it in the ubuntu-pod at /v1
$ kubectl exec -it pods/ubuntu-pod -- /bin/bash
root@ubuntu-pod:/# ls /v1
man  man2
root@ubuntu-pod:/# 
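
The real point, of course, is that the data outlives the Pod. A quick sketch (assuming the pod spec above is saved as 020/pod.yml — adjust the path to wherever you keep it):

# delete the pod and recreate it from the same spec
$ kubectl delete pod ubuntu-pod
$ kubectl apply -f 020/pod.yml

# the files created earlier (man, man2) should still be there
$ kubectl exec -it pods/ubuntu-pod -- ls /v1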

Learnings

Persistent Volumes and Persistent Volume Claims allow the Pod specification to be independent of the underlying storage hardware.

The number of options is huge. Make sure you set up the storage exactly the way you want it.

You can also control the manner of access to the storage. Is it ReadOnlyMany, ReadWriteOnce or ReadWriteMany?

Conclusion

Using PVCs and PVs we can persist the data directory for our postgres and hence provide a modicum of resilience. For high availability, though, you would need a solution like stolon.

This ends the section on volumes. In the next post we will close out the section on scaling by looking at Deployments, which make CD really easy for our postgres.