Deceivious
04/20/2023, 1:42 PM
prefect2-worker-dev [image below]. When I try to execute a flow run in that infrastructure, it gives out the following error:
Submission failed. kubernetes.client.exceptions.ApiException: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Audit-Id': 'e6<<SOME STUFF HERE>>37:14 GMT', 'Content-Length': '330'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"jobs.batch is forbidden: User \"system:serviceaccount:prefect2-worker-dev:prefect-worker\" cannot create resource \"jobs\" in API group \"batch\" in the namespace \"default\"","reason":"Forbidden","details":{"group":"batch","kind":"jobs"},"code":403}
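The 403 body itself names both sides of the mismatch: the ServiceAccount's home namespace and the namespace the Job was actually submitted to. A quick stdlib sketch pulling them out (the `body` string below is a trimmed transcription of the error above):

```python
import json
import re

# Trimmed 403 response body from the worker's ApiException.
body = (
    '{"kind":"Status","apiVersion":"v1","status":"Failure",'
    '"message":"jobs.batch is forbidden: User '
    '\\"system:serviceaccount:prefect2-worker-dev:prefect-worker\\" '
    'cannot create resource \\"jobs\\" in API group \\"batch\\" '
    'in the namespace \\"default\\"","reason":"Forbidden","code":403}'
)

status = json.loads(body)
# The SA path has the form system:serviceaccount:<sa-namespace>:<sa-name>;
# the last quoted word is the namespace the Job was created in.
m = re.search(
    r'serviceaccount:([^:]+):([^"]+)".*in the namespace "([^"]+)"',
    status["message"],
)
sa_namespace, sa_name, job_namespace = m.groups()
print(sa_namespace, sa_name, job_namespace)
# → prefect2-worker-dev prefect-worker default
```

So the worker's ServiceAccount lives in `prefect2-worker-dev`, but the Job is being created in `default`, where the Role grants it nothing.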
I am unsure as to what actually determines which namespace the job is created in. [prefect v2.10.4 on python 3.10]
redsquare
04/20/2023, 1:50 PM
Deceivious
04/20/2023, 2:12 PM
redsquare
04/20/2023, 2:16 PM
Deceivious
04/20/2023, 2:44 PM
It ends up in the default NS and not the one I explicitly provided; how do I change it?
redsquare
04/20/2023, 2:47 PM
Deceivious
04/20/2023, 2:49 PM
redsquare
04/20/2023, 2:51 PM
Deceivious
04/20/2023, 3:46 PM
It always picks the default NS no matter what I put in the KubernetesJob during deployment.
With the deployment being the same, when I turn down the helm worker and switch to the manifest worker, the SA automatically picks the NS specified in the KubernetesJob during deployment.
So either there's something wrong in the helm chart, or I am not deploying the helm chart correctly. I use
helm install --namespace=prefect2-worker-dev --values=./worker.dev.yaml prefect-worker prefect/prefect-worker
to deploy.
jawnsy
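For context on why the Job can land in `default` regardless of where the worker pod runs: the target namespace comes from the Job manifest that gets submitted, not from the worker's own namespace, and it typically falls back to `default` when unset. A minimal sketch of the relevant corner of the manifest (the `generateName` value is made up for illustration):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: flow-run-          # hypothetical name prefix
  namespace: prefect2-worker-dev   # if omitted, the Job is typically created in "default"
```

The `helm install --namespace` flag only controls where the worker's own resources are installed; it does not flow into the Job manifests the worker submits.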
04/20/2023, 5:05 PM
Deceivious
04/20/2023, 5:34 PM
from prefect.infrastructure.kubernetes import KubernetesJob
I am using this as infrastructure.
alex
04/21/2023, 3:07 PM
If you're using a KubernetesJob infrastructure block, then you'll need to use an agent instead of the Kubernetes worker. You can use the prefect agent helm chart to deploy an agent instead of the worker helm chart.
Deceivious
04/21/2023, 3:21 PM
alex
04/21/2023, 3:35 PM
Agents use an infrastructure block like KubernetesJob and poll for flow runs from prefect-agent typed work pools. Workers are a newer concept (they're still in beta), but they are like an agent and an infrastructure block combined, and poll for flow runs from typed work pools.
All this means that you either need to use an agent + infra block or a worker.
Deceivious
04/21/2023, 3:46 PM
sjammula
04/21/2023, 7:24 PM
Deceivious
04/21/2023, 8:41 PM
When you run prefect kubernetes manifest agent, you need to specify which namespace to run it in [there's a CLI parameter for it]. And when you deploy the flow, you need to ensure that the KubernetesJob has the same namespace.
The prefect kubernetes manifest command works with the correct queue, and the helm deployment fails. If both workers are equivalent despite the deployment method, either both should work or both should fail. Unsure about the change in behavior based on deployment method.
alex
04/21/2023, 8:59 PM
prefect kubernetes manifest generates a manifest for an agent deployment. We haven't added a worker manifest to that CLI command yet. You might be deploying an agent with one method and a worker with the other, depending on which helm chart you're using.
Deceivious
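That explains the difference in behavior: the two deployment methods were never running the same thing. An abridged sketch of the kind of output the manifest CLI produces for an agent (shape assumed for illustration; regenerate locally to see the real manifest), showing where the namespace parameter ends up:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-agent
  namespace: prefect2-worker-dev   # baked in via the CLI's namespace parameter
```

Because the namespace is written directly into the generated resources, the manifest path honors it, while the worker helm chart follows its own values.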
04/24/2023, 7:37 AM
redsquare
04/24/2023, 7:44 AM
Joshua Greenhalgh
06/06/2023, 11:29 AM
jawnsy
06/06/2023, 1:51 PM
alex
06/06/2023, 1:54 PM
Joshua Greenhalgh
06/06/2023, 2:08 PM
jawnsy
06/06/2023, 2:10 PM
redsquare
06/06/2023, 2:27 PM
alex
06/06/2023, 2:32 PM
Joshua Greenhalgh
06/06/2023, 2:40 PM
---
# Source: prefect-worker/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prefect-worker
  namespace: "prefect"
---
# Source: prefect-worker/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-worker
  namespace: "prefect"
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/status"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Source: prefect-worker/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker
  namespace: "prefect"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prefect-worker
subjects:
  - kind: ServiceAccount
    name: prefect-worker
    namespace: "prefect"
---
# Source: prefect-worker/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
  namespace: "prefect"
  labels:
    app: prefect-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-worker
  template:
    metadata:
      labels:
        app: prefect-worker
    spec:
      serviceAccountName: prefect-worker
      securityContext:
        fsGroup: 1001
        runAsNonRoot: true
        runAsUser: 1001
      containers:
        - name: prefect-worker
          image: "prefecthq/prefect:2.10.12-python3.9-kubernetes"
          imagePullPolicy: IfNotPresent
          command:
            - prefect
            - worker
            - start
            - --type
            - kubernetes
            - --pool
            - default-agent-pool
            - --work-queue
            - default
          workingDir: /home/prefect
          env:
            - name: HOME
              value: /home/prefect
            - name: PREFECT_AGENT_PREFETCH_SECONDS
              value: "10"
            - name: PREFECT_AGENT_QUERY_INTERVAL
              value: "5"
            - name: PREFECT_API_ENABLE_HTTP2
              value: "true"
            - name: PREFECT_API_URL
              value: "http://host.docker.internal:4200/api"
            - name: PREFECT_KUBERNETES_CLUSTER_UID
              value: ""
            - name: PREFECT_DEBUG_MODE
              value: "false"
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 256Mi
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1001
          volumeMounts:
            - mountPath: /home/prefect
              name: scratch
              subPathExpr: home
            - mountPath: /tmp
              name: scratch
              subPathExpr: tmp
      volumes:
        - name: scratch
          emptyDir: {}
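Note that the Role and RoleBinding above are namespaced to `prefect`, so the ServiceAccount has no permissions anywhere else; this is exactly the shape that produces the 403 when a Job targets `default`. If the worker genuinely must create Jobs in another namespace, standard Kubernetes RBAC allows it: create a Role in the target namespace and bind it to the ServiceAccount from the worker's namespace. A sketch, reusing the names from the manifest above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-worker-jobs
  namespace: default            # the namespace where Jobs are being created
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker-jobs
  namespace: default            # binding lives in the target namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prefect-worker-jobs
subjects:
  - kind: ServiceAccount
    name: prefect-worker
    namespace: prefect          # the namespace the worker runs in
```

Usually, though, the better fix is the one discussed in the thread: make the submitted Job target the worker's own namespace rather than widening the ServiceAccount's permissions.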