esgn
07/29/2025, 12:33 PM
Marvin
07/29/2025, 12:34 PM
```yaml
# values.yaml
worker:
  extraVolumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc
      # Or any other volume type supported by Kubernetes
  extraVolumeMounts:
    - name: my-volume
      mountPath: /path/in/container
      readOnly: false  # optional
```
Key points to note:
1. The `extraVolumes` section follows the standard Kubernetes volume specification format.
2. The `extraVolumeMounts` section follows the standard Kubernetes volumeMount specification.
3. The `name` in `extraVolumeMounts` must match the corresponding volume name in `extraVolumes`.
You can use any valid Kubernetes volume type in the `extraVolumes` section, such as:
- `persistentVolumeClaim`
- `configMap`
- `secret`
- `hostPath`
- `emptyDir`
Example with multiple volumes:
```yaml
worker:
  extraVolumes:
    - name: config-volume
      configMap:
        name: my-config
    - name: data-volume
      persistentVolumeClaim:
        claimName: data-pvc
  extraVolumeMounts:
    - name: config-volume
      mountPath: /etc/config
      readOnly: true
    - name: data-volume
      mountPath: /data
```
These volumes will be mounted into the worker pod itself. Make sure that any PersistentVolumeClaims or ConfigMaps referenced in your configuration exist in the cluster before deploying the worker.
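For example, a quick pre-flight check might look like the following (the namespace, release name, and resource names are placeholders, and the chart reference assumes you've added the PrefectHQ Helm repo under the name `prefect`):
```bash
# Confirm the referenced resources exist before deploying/upgrading the worker
kubectl get pvc my-pvc -n prefect
kubectl get configmap my-config -n prefect

# Apply values.yaml to the worker's Helm release
helm upgrade --install prefect-worker prefect/prefect-worker \
  --namespace prefect \
  -f values.yaml
```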
Would you like me to provide examples for any specific type of volume you're trying to mount?
esgn
07/29/2025, 1:06 PM
Marvin
07/29/2025, 1:07 PM
1. Include the volume configuration in your work pool's base job template, for example:
```json
{
  "spec": {
    "volumes": [
      {
        "name": "my-volume",
        "persistentVolumeClaim": {
          "claimName": "my-pvc"
        }
      }
    ],
    "containers": [
      {
        "name": "prefect-job",
        "volumeMounts": [
          {
            "name": "my-volume",
            "mountPath": "/path/in/container"
          }
        ]
      }
    ]
  }
}
```
2. Create or update your work pool with this configuration:
```bash
prefect work-pool create "my-pool" \
  --type kubernetes \
  --base-job-template path/to/your/template.json
```
Or if you're using the UI, you can paste the JSON configuration in the "Base Job Template" section when creating/editing the work pool.
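If the pool already exists, you don't have to recreate it. As a rough sketch (double-check these flags against your Prefect CLI version), you can start from the default Kubernetes template and update the pool in place:
```bash
# Dump the default Kubernetes base job template as a starting point
prefect work-pool get-default-base-job-template --type kubernetes > template.json

# Edit template.json to add the volumes/volumeMounts shown above, then apply it
prefect work-pool update "my-pool" --base-job-template template.json
```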
The key difference is:
- Worker `values.yaml`: configures the worker deployment itself
- Work pool configuration: configures how the flow run pods are created
This is why the volume configuration is accepted by the Helm deployment (it's valid YAML) but never appears in your flow run pods: those pods get their configuration from the work pool, not from the worker's `values.yaml`.
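To confirm this on a live run, you can look at an actual flow run pod and check whether the volume made it into its spec (the pod name and namespace below are placeholders):
```bash
# List recent pods and pick the flow run pod (its name usually derives from the flow run)
kubectl get pods -n prefect --sort-by=.metadata.creationTimestamp

# Show the volumes and volume mounts the flow run pod actually received
kubectl get pod <flow-run-pod-name> -n prefect \
  -o jsonpath='{.spec.volumes}{"\n"}{.spec.containers[0].volumeMounts}{"\n"}'
```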
Would you like me to provide more specific examples for your particular volume configuration? Or would you like to see how to verify that your work pool configuration is correctly set up?
esgn
07/29/2025, 1:27 PM
Marvin
07/29/2025, 1:27 PM
```
values.yaml configuration
          ↓
     Worker Pod
          ↓
Creates Flow Run Pods → Uses Work Pool Configuration
                        (base job template)
```
3. Why This Design Makes Sense:
- Isolation: Each flow run gets a clean, isolated environment
- Flexibility: Different flows in the same work pool can have different configurations
- Security: The worker pod (which has permissions to create other pods) can have different security contexts than the flow run pods
4. Correct Configuration Locations (a quick way to compare the two layers is shown after this list):
- Use `values.yaml` for: Worker pod configuration, worker resources, worker security context
- Use work pool configuration for: Flow run pod configuration, volume mounts, environment variables, resource requests
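For example, you can inspect both layers side by side (the release, namespace, and pool names here are assumptions):
```bash
# Worker side: the values the Helm release was deployed with
helm get values prefect-worker -n prefect

# Flow run side: the work pool's current base job template
prefect work-pool inspect "my-pool"
```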
This is why your volume mounts started working when you moved them to the work pool configuration - you were putting the configuration where the flow run pods actually look for it, rather than in the worker pod's configuration which doesn't get propagated.
Think of it like a hotel: the hotel manager (worker) doesn't share their office configuration with each guest room (flow run pods). Instead, each room is set up according to a room template (work pool configuration).
Would you like me to explain any specific aspect of this architecture in more detail?