Arthur
09/10/2024, 10:07 AM

Marvin
09/10/2024, 10:07 AM

Arthur
09/10/2024, 10:07 AM

Marvin
09/10/2024, 10:08 AM
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-agent
  template:
    metadata:
      labels:
        app: prefect-agent
    spec:
      containers:
        - name: agent
          image: prefecthq/prefect:2.x  # update to your specific Prefect version
          args: ["prefect", "agent", "start", "-q", "<YOUR_WORK_QUEUE>"]
          livenessProbe:
            httpGet:
              path: /api/health
              port: 8080  # assumes the agent container actually exposes a health endpoint on this port
            initialDelaySeconds: 30
            periodSeconds: 10
          env:
            - name: PREFECT_API_KEY
              value: "<YOUR_API_KEY>"
            - name: PREFECT_API_URL
              value: "https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"
            - name: PREFECT_LOGGING_LEVEL
              value: "INFO"  # or DEBUG, based on your requirement
```
### Key Parts:
1. `httpGet`: Defines the endpoint for the liveness probe.
2. `initialDelaySeconds`: How long to wait before performing the first probe.
3. `periodSeconds`: How often to perform the probe thereafter.
For further guidance, you might want to refer to the official Kubernetes documentation on liveness probes.
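To confirm the probe is actually passing, inspect the probe configuration and events on the agent pod (the pod name is a placeholder):
```sh
# Show the configured liveness probe and any "Liveness probe failed" events
kubectl get pods -l app=prefect-agent
kubectl describe pod <prefect-agent-pod-name> | grep -i -A 5 liveness
```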
### Additional Links:
- Prefect GitHub Issue #3882: Discusses similar liveness probe issues.
If you need more specific details or run into issues, feel free to ask!

Arthur
09/10/2024, 10:11 AM

Marvin
09/10/2024, 10:11 AM
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  labels: "{{ labels }}"
  namespace: "{{ namespace }}"
  generateName: "{{ name }}-"
spec:
  ttlSecondsAfterFinished: "{{ finished_job_ttl }}"
  template:
    spec:
      parallelism: 1
      completions: 1
      restartPolicy: Never
      serviceAccountName: "{{ service_account_name }}"
      containers:
        - name: prefect-job
          image: prefecthq/prefect:2.20.0
          livenessProbe:
            httpGet:
              path: /api/health
              port: 8080  # assumes the flow-run container serves a health endpoint on this port
            initialDelaySeconds: 30
            periodSeconds: 10
          env:
            - name: PREFECT_API_URL
              value: "https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"
            - name: PREFECT_LOGGING_LEVEL
              value: "INFO"
          # (Add other necessary configuration such as volume mounts, resources, etc.)
```
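If you run flows through a Kubernetes work pool, a customized manifest like this usually lives in the pool's base job template (the liveness probe goes under the template's job manifest section). A rough sketch, assuming a pool named `my-k8s-pool`; exact flags can vary by Prefect version:
```sh
# Dump the default Kubernetes base job template, edit it to add the livenessProbe,
# then create the work pool from the customized template
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
# ...edit base-job-template.json...
prefect work-pool create "my-k8s-pool" --type kubernetes --base-job-template base-job-template.json
```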
### References:
- Prefect Kubernetes Worker Documentation
- Prefect Guides on Kubernetes Deployment
By including the `livenessProbe` section within the container spec, Kubernetes will periodically check the health of the Prefect job container. If the probe fails, Kubernetes kills the container and, depending on the pod's restartPolicy, restarts it.

Arthur
09/10/2024, 10:14 AM

Marvin
09/10/2024, 10:15 AM
You can deploy the Prefect worker with the Prefect Helm chart and customize its `values.yaml` file to include a liveness probe:
### Step-by-Step Instructions:
1. Add the Prefect Helm repository:
```sh
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
```
2. Create a namespace for Prefect:
```sh
kubectl create namespace prefect
```
3. Create a Kubernetes secret for the Prefect API key:
```sh
kubectl create secret generic prefect-api-key \
  --namespace=prefect --from-literal=key=your-prefect-cloud-api-key
```
4. Configure Helm chart values (`values.yaml`):
```yaml
worker:
  cloudApiConfig:
    accountId: <target account ID>
    workspaceId: <target workspace ID>
  config:
    workPool: <target work pool name>
  livenessProbe:
    httpGet:
      path: /api/health
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
```
5. Deploy the Helm chart:
```sh
helm install prefect-worker prefect/prefect-worker -f values.yaml --namespace prefect
```
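After the install, it's worth confirming the worker pod started and is polling its work pool; the resource names below are assumptions based on the release name used above:
```sh
# Check the release and the worker pod, then tail the worker logs
helm list --namespace prefect
kubectl get pods --namespace prefect
kubectl logs --namespace prefect deployment/prefect-worker --follow
```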
### Example `values.yaml` Configuration:
Here's a full example including the liveness probe configuration:
```yaml
worker:
  cloudApiConfig:
    accountId: "your-account-id"
    workspaceId: "your-workspace-id"
  config:
    workPool: "your-work-pool"
  livenessProbe:
    httpGet:
      path: /api/health
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
```
By following these steps, you should be able to deploy a Prefect Kubernetes worker with a liveness probe using Helm.
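If you adjust `values.yaml` later, the same file can be re-applied to the existing release with an upgrade:
```sh
# Apply updated values to the running release
helm upgrade prefect-worker prefect/prefect-worker -f values.yaml --namespace prefect
```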
### Reference:
- Prefect Kubernetes Worker Documentation
- Prefect Guide on Kubernetes Deployment using Helm
If you encounter any issues or have further questions, feel free to ask!