# prefect-getting-started
Thomas:
Hi, looking for some help deploying prefect-worker or agent to k8s (EKS cluster v1.24). I've tried to follow the guide to deploy a worker using the Helm chart, deploying it with ArgoCD. Here's what my config looks like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eu1-dev-prefect-install
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prefecthq.github.io/prefect-helm
    chart: prefect-worker
    targetRevision: 2023.06.16
    helm:
      values: |
        worker:
          cloudApiConfig:
            accountId: 86be2a24-XXXX-XXXX-XXXX-XXXXXXXXXXXX
            workspaceId: f48edd49-XXXX-XXXX-XXXX-XXXXXXXXXXXX
            apiKeySecret:
              name: prefect-cloud-api-key
              key: prefect.cloud.api.key
          config:
            workPool: eu1dev-cluster-pool
  destination:
    namespace: prefect
    name: data-platform-eu1-dev
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
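A rough way to sanity-check values like these before ArgoCD syncs them (a sketch, assuming helm is installed locally, the values block above is saved as values.yaml, and using an arbitrary repo alias and release name) is to render the chart and inspect the PREFECT_API_URL it builds from accountId and workspaceId:
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
# Render the same chart/version the Application points at and check the API URL
# the worker container will be given:
helm template eu1-dev-prefect-install prefect/prefect-worker \
  --version 2023.06.16 \
  --namespace prefect \
  -f values.yaml \
  | grep -A 1 PREFECT_API_URL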
I have a secret with the API key deployed in the correct namespace:
Name:         prefect-cloud-api-key
Namespace:    prefect
Labels:       app.kubernetes.io/instance=eu1dev-prefect
Annotations:  reconcile.external-secrets.io/data-hash: f3c2b73ee3f7058fb4f645cb64664080

Type:  Opaque

Data
====
prefect.cloud.api.key:  40 bytes
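A quick way to confirm the in-cluster value really matches the key that works locally (a sketch, assuming kubectl access to the prefect namespace and the secret/key names shown above):
# Decode the key the worker will receive; it should print the same pnu_... token
# that works with the CLI on your machine.
kubectl -n prefect get secret prefect-cloud-api-key \
  -o jsonpath='{.data.prefect\.cloud\.api\.key}' | base64 -d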
But from the worker pod, I get this error:
prefect.exceptions.PrefectHTTPStatusError: Client error '403 Forbidden' for url 'https://api.prefect.cloud/api/accounts/86be2a24-XXXX-XXXX-XXXX-XXXXXXXXXXXX/workspaces/f48edd49-XXXX-XXXX-XXXX-XXXXXXXXXXXX/work_pools/eu1dev-cluster-pool'
Response: {'detail': 'Unauthorized'}
The secret looks correct, and I can use the API key on my local machine with the Prefect CLI. Any help hugely appreciated 🙂
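The 403 can also be reproduced outside the cluster, which helps separate a bad key from wrong account/workspace IDs. A sketch, assuming the locally working key is exported as PREFECT_API_KEY and using the work pool URL from the error above (the IDs are placeholders):
# A valid key with the correct IDs should return 200 (or 404 if the pool doesn't exist yet);
# a valid key with the wrong account ID returns 403, matching what the worker sees.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $PREFECT_API_KEY" \
  "https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>/work_pools/eu1dev-cluster-pool"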
Here's the pod description if that could help too:
Name:         prefect-worker-5d559448f-fkc57
Namespace:    prefect
Priority:     0
Node:         ip-10-225-9-99.eu-west-1.compute.internal/10.225.9.99
Start Time:   Wed, 21 Jun 2023 12:35:32 +0200
Labels:       app.kubernetes.io/component=worker
              app.kubernetes.io/instance=eu1-dev-prefect-install
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=prefect-worker
              helm.sh/chart=prefect-worker-2023.06.16
              pod-template-hash=5d559448f
              prefect-version=2.10.15-python3.11-kubernetes
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.225.10.63
IPs:
  IP:           10.225.10.63
Controlled By:  ReplicaSet/prefect-worker-5d559448f
Containers:
  prefect-worker:
    Container ID:  containerd://80341c84c76d5fe25f52e84c447a8974b26a3f0bc88b335da3872bfbb39e6184
    Image:         prefecthq/prefect:2.10.15-python3.11-kubernetes
    Image ID:      docker.io/prefecthq/prefect@sha256:49423e30320ef90aed5b0c5631477853776ac3182bf03030513de0780862d950
    Port:          <none>
    Host Port:     <none>
    Command:
      prefect
      worker
      start
      --type
      kubernetes
      --pool
      eu1dev-cluster-pool
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 21 Jun 2023 12:41:47 +0200
      Finished:     Wed, 21 Jun 2023 12:41:49 +0200
    Ready:          False
    Restart Count:  6
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  256Mi
    Environment:
      HOME:                            /home/prefect
      PREFECT_AGENT_PREFETCH_SECONDS:  10
      PREFECT_AGENT_QUERY_INTERVAL:    5
      PREFECT_API_ENABLE_HTTP2:        true
      PREFECT_API_URL:                 https://api.prefect.cloud/api/accounts/86be2a24-XXXX-XXXX-XXXX-XXXXXXXXXXXX/workspaces/f48edd49-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      PREFECT_KUBERNETES_CLUSTER_UID:  
      PREFECT_API_KEY:                 <set to the key 'prefect.cloud.api.key' in secret 'prefect-cloud-api-key'>  Optional: false
      PREFECT_DEBUG_MODE:              false
    Mounts:
      /home/prefect from scratch (rw)
      /tmp from scratch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fdcrz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  scratch:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-fdcrz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  7m57s                   default-scheduler  Successfully assigned prefect/prefect-worker-5d559448f-fkc57 to ip-10-225-9-99.eu-west-1.compute.internal
  Normal   Pulling    7m57s                   kubelet            Pulling image "prefecthq/prefect:2.10.15-python3.11-kubernetes"
  Normal   Pulled     7m48s                   kubelet            Successfully pulled image "prefecthq/prefect:2.10.15-python3.11-kubernetes" in 8.572369991s
  Normal   Created    6m2s (x5 over 7m48s)    kubelet            Created container prefect-worker
  Normal   Started    6m2s (x5 over 7m48s)    kubelet            Started container prefect-worker
  Normal   Pulled     6m2s (x4 over 7m42s)    kubelet            Container image "prefecthq/prefect:2.10.15-python3.11-kubernetes" already present on machine
  Warning  BackOff    2m49s (x23 over 7m39s)  kubelet            Back-off restarting failed container
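For reference, the full traceback behind the CrashLoopBackOff is easiest to read from the previous container attempt (pod name taken from the description above):
# Prints the stack trace from the last failed run of the worker container.
kubectl -n prefect logs prefect-worker-5d559448f-fkc57 --previous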
Nate:
Hmm, I don't see anything immediately wrong (that I can tell). If it helps, when I used the Helm chart directly I didn't change the expected secret name for the key and just defined it like this. Maybe you could try not altering the expected key name?
apiVersion: v1
stringData:
  key: pnu_XXXXX
kind: Secret
metadata:
  name: prefect-api-key
  namespace: default
type: Opaque
Jey:
Hey Thomas, unsure if you're encountering the same issue as I did yesterday. For me, it was that my user's account ID from the UI wasn't the account ID the Helm chart wants. When you're in the UI / your workspace, you should use the account ID in the URL bar, i.e. https://app.prefect.cloud/account/<ACCOUNT_ID>/workspace/<WORKSPACE_ID>/flow-runs
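One way to grab exactly those IDs without squinting at the URL bar (a sketch, assuming the Prefect 2 CLI that already works locally behaves as described here): after logging in, the profile's PREFECT_API_URL embeds the account and workspace IDs the Helm chart expects:
prefect cloud login --key pnu_XXXXX     # select the target workspace when prompted
prefect config view | grep PREFECT_API_URL
# .../api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>  <- the IDs to use in the chart values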
Thomas:
Hi Jey, well, it looks like I made the same mistake: the ID in my Helm chart isn't the same as the one in my URL.
I'll try that
And that worked... Thanks a lot @Jey Farmer. @Nate maybe the docs could be improved on how to retrieve the account ID, since the account ID we get from the UI seems to be the user's account ID or something. 🙂
Jey:
Nice! Agreed on the docs, I struggled with that longer than I'd care to admit. Had to come here to learn that the account ID the chart wants is really more of a tenant ID 🤷
Thomas:
Hehe, well, same here. I'd gotten really frustrated, and you made my day 😄
j
Thanks, all. Sorry for the confusion. A PR to show how to get the Account ID & Workspace ID in the guide is open.