Thomas Ronnet
06/21/2023, 10:44 AM
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eu1-dev-prefect-install
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prefecthq.github.io/prefect-helm
    chart: prefect-worker
    targetRevision: 2023.06.16
    helm:
      values: |
        worker:
          cloudApiConfig:
            accountId: 86be2a24-XXXX-XXXX-XXXX-XXXXXXXXXXXX
            workspaceId: f48edd49-XXXX-XXXX-XXXX-XXXXXXXXXXXX
          apiKeySecret:
            name: prefect-cloud-api-key
            key: prefect.cloud.api.key
          config:
            workPool: eu1dev-cluster-pool
  destination:
    namespace: prefect
    name: data-platform-eu1-dev
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
I have a secret with the api-key which is deployed in the correct namespace:
Name:         prefect-cloud-api-key
Namespace:    prefect
Labels:       app.kubernetes.io/instance=eu1dev-prefect
Annotations:  reconcile.external-secrets.io/data-hash: f3c2b73ee3f7058fb4f645cb64664080
Type:         Opaque
Data
====
prefect.cloud.api.key:  40 bytes
But from the worker pod, I get this error:
prefect.exceptions.PrefectHTTPStatusError: Client error '403 Forbidden' for url 'https://api.prefect.cloud/api/accounts/86be2a24-b6d3-4192-801b-d4c029ad12bd/workspaces/f48edd49-c3dd-45b1-b63a-a203dd02ec26/work_pools/eu1dev-cluster-pool'
Response: {'detail': 'Unauthorized'}
The secret looks correct, and I can use the API key from my local machine with the Prefect CLI.
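One thing worth ruling out here is whitespace corruption in the stored key: a trailing newline (easy to introduce by creating the secret with `echo`) is invisible in `kubectl describe` but makes Prefect Cloud reject the key with a 403. A sketch of the byte-level check, simulated with a placeholder value rather than a live cluster (against the real cluster you would decode `.data` from `kubectl get secret prefect-cloud-api-key -n prefect -o jsonpath=...` the same way):

```shell
# Simulated check: a key piped through `echo` picks up a trailing "\n",
# while `printf '%s'` (or `kubectl create secret --from-literal`) does not.
bad=$(echo 'pnu_placeholder' | base64)         # echo appends a newline
good=$(printf '%s' 'pnu_placeholder' | base64)

# Decode and count bytes: the "bad" value is one byte longer than the key.
printf '%s' "$bad"  | base64 -d | wc -c
printf '%s' "$good" | base64 -d | wc -c
```

If the decoded byte count is one more than the key's character count, the stored value has a stray newline and the secret needs to be recreated.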
Any help hugely appreciated 🙂

Thomas Ronnet
06/21/2023, 10:47 AM
Name: prefect-worker-5d559448f-fkc57
Namespace: prefect
Priority: 0
Node: ip-10-225-9-99.eu-west-1.compute.internal/10.225.9.99
Start Time: Wed, 21 Jun 2023 12:35:32 +0200
Labels: app.kubernetes.io/component=worker
        app.kubernetes.io/instance=eu1-dev-prefect-install
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=prefect-worker
        helm.sh/chart=prefect-worker-2023.06.16
        pod-template-hash=5d559448f
        prefect-version=2.10.15-python3.11-kubernetes
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 10.225.10.63
IPs:
IP: 10.225.10.63
Controlled By: ReplicaSet/prefect-worker-5d559448f
Containers:
prefect-worker:
Container ID: containerd://80341c84c76d5fe25f52e84c447a8974b26a3f0bc88b335da3872bfbb39e6184
Image: prefecthq/prefect:2.10.15-python3.11-kubernetes
Image ID: docker.io/prefecthq/prefect@sha256:49423e30320ef90aed5b0c5631477853776ac3182bf03030513de0780862d950
Port: <none>
Host Port: <none>
Command:
prefect
worker
start
--type
kubernetes
--pool
eu1dev-cluster-pool
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 21 Jun 2023 12:41:47 +0200
Finished: Wed, 21 Jun 2023 12:41:49 +0200
Ready: False
Restart Count: 6
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 100m
memory: 256Mi
Environment:
HOME: /home/prefect
PREFECT_AGENT_PREFETCH_SECONDS: 10
PREFECT_AGENT_QUERY_INTERVAL: 5
PREFECT_API_ENABLE_HTTP2: true
PREFECT_API_URL: https://api.prefect.cloud/api/accounts/86be2a24-XXXX-XXXX-XXXX-XXXXXXXXXXXX/workspaces/f48edd49-XXXX-XXXX-XXXX-XXXXXXXXXXXX
PREFECT_KUBERNETES_CLUSTER_UID:
PREFECT_API_KEY: <set to the key 'prefect.cloud.api.key' in secret 'prefect-cloud-api-key'> Optional: false
PREFECT_DEBUG_MODE: false
Mounts:
/home/prefect from scratch (rw)
/tmp from scratch (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fdcrz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
scratch:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-fdcrz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m57s default-scheduler Successfully assigned prefect/prefect-worker-5d559448f-fkc57 to ip-10-225-9-99.eu-west-1.compute.internal
Normal Pulling 7m57s kubelet Pulling image "prefecthq/prefect:2.10.15-python3.11-kubernetes"
Normal Pulled 7m48s kubelet Successfully pulled image "prefecthq/prefect:2.10.15-python3.11-kubernetes" in 8.572369991s
Normal Created 6m2s (x5 over 7m48s) kubelet Created container prefect-worker
Normal Started 6m2s (x5 over 7m48s) kubelet Started container prefect-worker
Normal Pulled 6m2s (x4 over 7m42s) kubelet Container image "prefecthq/prefect:2.10.15-python3.11-kubernetes" already present on machine
Warning BackOff 2m49s (x23 over 7m39s) kubelet Back-off restarting failed container
Nate
06/21/2023, 12:26 PM
apiVersion: v1
stringData:
  key: pnu_XXXXX
kind: Secret
metadata:
  name: prefect-api-key
  namespace: default
type: Opaque
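Nate's example uses `stringData`, which takes the key as plain text and lets the API server do the base64 encoding that the `data` field would require, so there is no chance of a hand-encoding mistake. A minimal sketch of the equivalence, using the placeholder value from the example rather than a real credential:

```shell
key='pnu_XXXXX'                            # placeholder, not a real key

# What Kubernetes would store under .data for this stringData value:
encoded=$(printf '%s' "$key" | base64)

# Decoding recovers the original key byte-for-byte.
decoded=$(printf '%s' "$encoded" | base64 -d)
test "$decoded" = "$key" && echo 'round-trip ok'
```

Note that the names in this example (secret `prefect-api-key`, key `key`, namespace `default`) differ from the chart config earlier in the thread; the secret would need to be named `prefect-cloud-api-key` with key `prefect.cloud.api.key` in the `prefect` namespace to match the `apiKeySecret` values there.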
Jey Farmer
06/21/2023, 12:28 PM
Thomas Ronnet
06/21/2023, 12:29 PM
Thomas Ronnet
06/21/2023, 12:30 PM
Thomas Ronnet
06/21/2023, 12:49 PM
Jey Farmer
06/21/2023, 12:53 PM
Thomas Ronnet
06/21/2023, 1:00 PM
Jeff Hale
06/29/2023, 1:07 PM