Andrea Ceruti
03/07/2025, 4:29 PM

Marvin
03/07/2025, 4:29 PM
You can set the ttlSecondsAfterFinished field in your Job spec. This field specifies the time to live (TTL) for a finished Job, after which it becomes eligible for garbage collection.
Here is a brief example of how you might configure this in a job spec:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["echo", "Hello World"]
      restartPolicy: Never
  ttlSecondsAfterFinished: 100  # Deletes the Job 100 seconds after it completes
```
In this example, the Job will be deleted 100 seconds after it completes. Ensure that your Kubernetes version supports this feature; ttlSecondsAfterFinished has been stable since Kubernetes 1.23 and is not available in much older versions.
For more details, you can refer to the Kubernetes documentation on the TTL Controller for Finished Resources.

Andrea Ceruti
03/07/2025, 4:30 PM

Marvin
03/07/2025, 4:30 PM
The ttlSecondsAfterFinished setting is not configured on the worker or engine; rather, it is specified directly in the Kubernetes manifest for the Job. This field is part of the Job specification and is managed by the Kubernetes control plane itself, specifically by the TTL Controller, which is responsible for cleaning up finished resources after the specified TTL period.
Here's a quick breakdown of how it works:
1. Job Specification: You include the ttlSecondsAfterFinished field in the Job spec when you define it, as in the example above. This tells Kubernetes how long to wait after the Job has finished before deleting it.
2. Kubernetes Control Plane: The TTL Controller, which is part of the Kubernetes control plane (it runs in the kube-controller-manager), monitors finished Jobs. Once the specified TTL has elapsed, the controller deletes the Job along with its Pods.
3. No Worker Configuration Needed: You don't need to configure this setting on individual workers or nodes; it's managed at the cluster level by Kubernetes.
By specifying this field in your Job spec, you tell Kubernetes directly how to handle completed resources.

Andrea Ceruti
03/07/2025, 4:31 PM

Marvin
03/07/2025, 4:32 PM
There are a few ways to influence how pods are named in Kubernetes:
1. Directly Created Pods: If you create a Pod directly (not through a controller), you can set its name explicitly in the metadata:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-custom-pod-name
spec:
  containers:
  - name: my-container
    image: my-image
```
2. Controller-Generated Names: When using controllers like Deployments, ReplicaSets, or Jobs, you typically specify a name for the controller, and Kubernetes derives pod names from it. For example, if you name a Deployment my-deployment, its pods might be named my-deployment-<replicaset-hash>-<random-suffix> (see the Deployment sketch after this list).
3. Customizing Names with Labels and Annotations: While you can't directly control the suffix added to pod names by controllers, you can use labels and annotations to add additional metadata that can help identify pods. This doesn't change the pod name but can make it easier to organize and manage pods.
4. StatefulSets: If you need more control over pod naming, consider using a StatefulSet. StatefulSets provide stable, unique pod names that follow a predictable pattern: <statefulset-name>-<ordinal>, for example web-0, web-1 (see the StatefulSet sketch after this list). This is useful for applications that require stable network identities.
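For point 2, here is a rough sketch of how controller-generated naming plays out (the names my-deployment, my-app, my-container, my-image and the team annotation are hypothetical): pods created by this Deployment get names such as my-deployment-<replicaset-hash>-<random-suffix>, and the labels/annotations illustrate the metadata approach from point 3.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment            # pods inherit this as the base of their generated names
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app              # labels let you select pods without relying on their generated names
      annotations:
        team: data-platform      # hypothetical annotation carrying extra metadata
    spec:
      containers:
      - name: my-container
        image: my-image
```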
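For point 4, a minimal StatefulSet sketch (hypothetical names; it assumes a headless Service called web-headless already exists) whose pods get the predictable names web-0, web-1, web-2:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless      # assumed headless Service governing this StatefulSet
  replicas: 3                    # pods are named web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-image
```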
In summary, while you can't fully customize the random suffix added to pod names by controllers, you can influence the base name and use other Kubernetes features like StatefulSets for more predictable naming patterns.

Andrea Ceruti
03/07/2025, 4:32 PM

Marvin
03/07/2025, 4:32 PM
In Prefect, you can influence this through the work pool's base job template:
1. Base Job Template: Each work pool has a base job template made up of a job_configuration section and a variables section.
2. Job Configuration: The job_configuration section can include placeholders that reference values provided in the variables section, or hard-coded values. You can use this to influence the naming convention by setting up a pattern or including specific identifiers.
3. Customization: The variables section allows deployment creators to customize worker behavior per deployment or flow run. This can include setting custom fields that may be used in the job name.
To implement a custom naming convention, you would define a job template with specific placeholders or hard-coded values that dictate how the job names are structured (see the sketch below).
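As a simplified sketch of that idea (this is not the actual Kubernetes worker template; the name_prefix and namespace variables and their defaults are hypothetical, and Prefect stores base job templates as JSON, shown here as YAML for readability):
```yaml
job_configuration:
  # {{ ... }} placeholders are resolved from the variables section below,
  # or from per-deployment overrides supplied at deployment time.
  name: "{{ name_prefix }}"
  namespace: "{{ namespace }}"
variables:
  type: object
  properties:
    name_prefix:
      type: string
      default: "data-pipeline"   # hypothetical default prefix for job names
    namespace:
      type: string
      default: "default"
```
Deployments that use this work pool could then override name_prefix (for example through their job_variables) to get a per-deployment naming pattern.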
For more details on configuring work pools and job templates, you can refer to the Prefect documentation on Work Pools.