NFS Mount for Flamenco Pods

The Cluster nodes, predominantly the Worker Nodes running Flamenco Worker Pods, need access to the same NFS Server. There are a few ways to do this in Kubernetes, e.g. by mounting volumes directly in a Pod spec or through a StorageClass.
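
For comparison, a minimal sketch of the direct approach (not used in this project): an inline nfs volume in a Pod spec, pointing at the Gateway Server export mentioned below. The Pod name is hypothetical, for illustration only.

apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-example      # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: nfs-direct
          mountPath: /media/shared/flamenco
  volumes:
    - name: nfs-direct
      nfs:                      # core v1 NFS volume type, no provisioner involved
        server: 192.168.57.30   # the Gateway Server
        path: /nfs/share/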

In this project, the NFS Server is mounted using the nfs-subdir-external-provisioner Helm chart, which is installed as part of the cluster-foundation Ansible Playbook.

This is a bit of overkill, since the provisioner is primarily intended for creating dynamic per-claim NFS folders for Pods. In this project, however, the base path for all Pods is the same and points to the NFS Server, where Blender (running in the Pods) can read and write.
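
A sketch of how that install could look inside the cluster-foundation playbook, assuming it uses the kubernetes.core.helm module; the server and path values match the provisioner Pod environment shown further below:

- name: "helm: Install nfs-subdir-external-provisioner"
  kubernetes.core.helm:
    kubeconfig: /home/kube/.kube/config
    name: nfs-subdir-external-provisioner
    chart_repo_url: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
    chart_ref: nfs-subdir-external-provisioner
    release_namespace: nfs-subdir-external-provisioner
    create_namespace: true
    values:
      nfs:
        server: nfs.cluster.home   # NFS_SERVER in the Pod description below
        path: /nfs/share/          # NFS_PATH in the Pod description below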

NFS Client on the Worker Node

The requirement for using this chart is that the k8s nodes have nfs-common (the NFS client) installed, which is done via the k8s-install playbook.
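
A sketch of the corresponding task, assuming Debian-based nodes managed via the apt module:

- name: "apt: Install NFS client on all k8s nodes"
  ansible.builtin.apt:
    name: nfs-common        # provides the NFS client (and showmount, used below)
    state: present
    update_cache: true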

NFS PVCs inside Cluster

The cluster uses the dynamic StorageClass provided by https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner.

Pods can use this StorageClass to mount NFS volumes. The underlying export is located on the Gateway Server (e.g. 192.168.57.30:/nfs/share/).
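
To check from a node that the export is actually reachable, showmount (shipped with nfs-common) can query the server; the address is the Gateway Server example above:

#> showmount -e 192.168.57.30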

Mounting NFS Location in a Pod

Since there is an NFS StorageClass, Pods can use it by mounting a ReadWriteMany PVC.

For example, the install-flamenco-workers.yaml Ansible Role creates such a PVC:

- name: "k8s: Flamenco-Worker PVC to NFS PV"
  kubernetes.core.k8s:
    kubeconfig: /home/kube/.kube/config
    state: present
    definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: flamenco-worker-pvc
        namespace: flamenco-worker
        labels:
          storage.k8s.io/name: nfs
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: nfs-client
        resources:
          requests:
            storage: 1Gi  # required by the API, but not enforced by the NFS provisioner

The Flamenco Worker Deployment can then reference it:

- name: "k8s: Deployment Flamenco-Worker"
  kubernetes.core.k8s:
    kubeconfig: /home/kube/.kube/config
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: flamenco-worker
        namespace: flamenco-worker
...
          spec:
            volumes:
              - name: nfs-pvc
                persistentVolumeClaim:
                  claimName: flamenco-worker-pvc 
            containers:            
              - name: flamenco
...
                volumeMounts:
                  - name: nfs-pvc
                    mountPath: /media/shared/flamenco          
...

The Flamenco Worker then has the /media/shared/flamenco mount point inside the Container and seamlessly accesses the NFS Server export.

The actual mapping between the NFS Server and different Workstations (Operating Systems, mount points, etc.) is discussed in more detail in Blender / Setup / NFS.

The Flamenco Workers in the Cluster are considered the Flamenco linux platform type, and their path is /media/shared/flamenco.
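
As a purely illustrative sketch (the variable name and the Windows drive mapping are hypothetical; see Blender / Setup / NFS for the real values), a two-way variable in flamenco-manager.yaml could map this path per platform:

variables:
  my_storage:                          # hypothetical variable name
    is_twoway: true
    values:
      - platform: linux
        value: /media/shared/flamenco  # the path used by the Worker Pods
      - platform: windows
        value: 'S:\flamenco'           # assumed drive mapping on a workstation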

Cluster Resources

NFS Subdir External Provisioner

#> kubectl -n nfs-subdir-external-provisioner describe pod nfs-subdir-external-provisioner-6556cc98c8-4pvbp

Name:             nfs-subdir-external-provisioner-6556cc98c8-4pvbp
Namespace:        nfs-subdir-external-provisioner
Priority:         0
Service Account:  nfs-subdir-external-provisioner
Node:             k8s-node7/192.168.57.37
Start Time:       Wed, 29 Mar 2023 17:58:02 +0000
Labels:           app=nfs-subdir-external-provisioner
                  pod-template-hash=6556cc98c8
                  release=nfs-subdir-external-provisioner
Annotations:      cni.projectcalico.org/containerID: ba2c60ed22c9af5471cfacd058e599f8ce6944d7e28b54d8f961cd72da5bd5a9
                  cni.projectcalico.org/podIP: 10.244.189.19/32
                  cni.projectcalico.org/podIPs: 10.244.189.19/32
Status:           Running
IP:               10.244.189.19
IPs:
  IP:           10.244.189.19
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-6556cc98c8
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   containerd://9e368cf0b48d1ebc6150a93937eaed0a6237ba39dc7217f10f884f1688913f85
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 30 Mar 2023 09:50:55 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 30 Mar 2023 09:50:08 +0000
      Finished:     Thu, 30 Mar 2023 09:50:38 +0000
    Ready:          True
    Restart Count:  2
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        nfs.cluster.home
      NFS_PATH:          /nfs/share/
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mb4m6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    nfs.cluster.home
    Path:      /nfs/share/
    ReadOnly:  false
  kube-api-access-mb4m6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From     Message
  ----     ------          ----               ----     -------
  Normal   SandboxChanged  12m                kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         11m                kubelet  Back-off restarting failed container
  Normal   Pulled          11m (x2 over 11m)  kubelet  Container image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal   Created         11m (x2 over 11m)  kubelet  Created container nfs-subdir-external-provisioner
  Normal   Started         11m (x2 over 11m)  kubelet  Started container nfs-subdir-external-provisioner

NFS Storage Class

#> kubectl -n nfs-subdir-external-provisioner get storageclass
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   16h
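
Optionally, nfs-client could be marked as the default StorageClass so PVCs without an explicit storageClassName also land on NFS; this is not required here, since the PVCs name the class explicitly:

#> kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'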

NFS PVs

#> kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
pvc-2a7f74e1-1d0f-4ac4-a19f-1345a5bc0e0b   1Gi        RWX            Delete           Bound    flamenco-manager/flamenco-manager-pvc   nfs-client              16h
pvc-ad4e2273-310c-4727-9968-871a80b015ea   1Gi        RWX            Delete           Bound    flamenco-worker/flamenco-worker-pvc     nfs-client              16h

NFS PVCs

Flamenco Manager

#> kubectl -n flamenco-manager get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
flamenco-manager-pvc   Bound    pvc-2a7f74e1-1d0f-4ac4-a19f-1345a5bc0e0b   1Gi        RWX            nfs-client     16h

#> kubectl -n flamenco-worker get pvc
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
flamenco-worker-pvc   Bound    pvc-ad4e2273-310c-4727-9968-871a80b015ea   1Gi        RWX            nfs-client     16h

NFS Mount inside Worker Pod

#> kubectl -n flamenco-worker exec -it flamenco-worker-7ff45d9fc5-4drkv -- /bin/bash

root@flamenco-worker-7ff45d9fc5-4drkv:/code/flamenco# ls /media/shared/flamenco/output/mike-redcubes-flamenco/
2023-03-14_184957  2023-03-14_192805