# prefect-kubernetes
y
hi, any idea how I can mount EFS (PV) into the Kubernetes flow pods (only for the selective flows that need it)? Once I add this to my base job template:
"volumeMounts": {
                  "mountPath": "{{ mount_path }}",
                  "name": "{{ volume_name }}",
                  "subPath": "{{ sub_path }}"
                 }
how do I specify it for when a specific flow does not need the extra mount? Also saw this https://discourse.prefect.io/t/how-to-mount-a-folder-on-my-local-filesystem-to-a-flow-run-pod-on-kubernetes/441 when I was searching. I don't want to bake the file into my image because it will be huge and I don't want to download it repetitively every time the job runs.
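One way to handle the per-flow toggle is a sketch like the following: Prefect deployments can override work pool template variables via `job_variables` in `prefect.yaml`, so flows that need the volume set the mount variables and the others leave them at the template defaults. All deployment names, entrypoints, and values below are hypothetical:

```yaml
# prefect.yaml (sketch; names and values are made up for illustration)
deployments:
  - name: needs-efs
    entrypoint: flows/big_model.py:train   # hypothetical entrypoint
    work_pool:
      name: my-k8s-pool                    # hypothetical work pool name
      job_variables:
        volume_name: efs-volume
        volume_claim_name: efs-claim
        mount_path: /mnt/efs
        sub_path: models

  - name: no-extra-mount
    entrypoint: flows/simple.py:etl        # hypothetical entrypoint
    work_pool:
      name: my-k8s-pool
      # no job_variables here, so the template defaults apply
```

For the no-mount case you would still need the template defaults to render into something Kubernetes accepts (for example, empty-string defaults that your cluster tolerates, or a second work pool whose template has no volume section at all) — verify the rendered manifest against your Prefect version.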
hello, any idea on this? How do I mount a PVC on a Prefect flow pod, since we don't have direct access to the pod YAML file? I have added this to my base job template.json in the 🧵
"job_configuration": {
    "env": "{{ env }}",
    "name": "{{ name }}",
    "labels": "{{ labels }}",
    "command": "{{ command }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "volumes": [
              {
                "name": "{{ volume_name }}",
                "persistantVolumeClaim": {
                  "claimName": "{{ volume_claim_name }}"
                }
              }
            ],
            "containers": [
              {
                "env": "{{ env }}",
                "args": "{{ command }}",
                "name": "prefect-job",
                "image": "{{ image }}",
                "resources": {
                  "limits": {
                    "<http://nvidia.com/gpu|nvidia.com/gpu>": "{{ gpu_enabled }}"
                  },
                  "requests": {
                    "cpu": "{{ request_cpu }}",
                    "memory": "{{ request_memory }}"
                  }
                },
                "volumeMounts": [
                  {
                    "name": "{{ volume_name }}",
                    "subPath": "{{ sub_path }}",
                    "mountPath": "{{ mount_path }}"
                  }
                ],
                "imagePullPolicy": "{{ image_pull_policy }}"
              }
            ],
            "completions": 1,
            "parallelism": 1,
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}"
          }
        },
        "backoffLimit": 0,
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      },
      "metadata": {
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}",
        "generateName": "{{ name }}-"
      },
      "apiVersion": "batch/v1"
    },
    "stream_output": "{{ stream_output }}",
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
  }
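For the placeholders in that `job_configuration` to resolve, the base job template also needs matching entries in its `variables` section. A minimal sketch covering just the custom placeholders added above (the defaults here are assumptions; the stock base job template already defines `env`, `name`, `image`, and so on):

```json
"variables": {
  "type": "object",
  "properties": {
    "volume_name": { "type": "string", "default": "" },
    "volume_claim_name": { "type": "string", "default": "" },
    "sub_path": { "type": "string", "default": "" },
    "mount_path": { "type": "string", "default": "" },
    "gpu_enabled": { "type": "integer", "default": 0 },
    "request_cpu": { "type": "string", "default": "500m" },
    "request_memory": { "type": "string", "default": "512Mi" }
  }
}
```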
so apparently it was a typo: it should be persistentVolumeClaim (not persistantVolumeClaim) in the job template
o
Hi @Ying Ting Loo! I was looking for a way to use GPUs and found your example. Is it really that simple that you just edit the base template and set the
gpu_enabled
variable in
prefect.yaml
?
y
this is not prefect.yaml; this is the base job template of the Prefect Kubernetes worker's work pool that starts the flow pods. Yeah, it works for me with
"nvidia.com/gpu": "{{ gpu_enabled }}"
👍 1