# ask-community
r
Hello everyone, I'm having an issue deploying a Kubernetes Prefect 2 agent that I'm stuck on. After running a
kubectl -n <my_namespace> apply -f prefect2_agent.yaml
the deployment launches but keeps returning
Response: {'detail': 'Not Found'}
when attempting to connect to the work-queue. Even after manually exec'ing into the deployment pod and running a
prefect cloud login -k $PREFECT_API_KEY
receiving a login confirmation, and choosing the correct workspace, I still cannot interact with any resources. For example, running
prefect cloud workspace ls
raises the error
AttributeError: 'NoneType' object has no attribute 'groups'
even though the workspace was correctly set in the cloud login step. If anyone could shed some light on where my issue may be, I'd be grateful.
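One quick sanity check, sketched here with a hypothetical pod name, is to exec into the agent pod and print the Prefect variables between delimiters so a stray quote or space becomes visible:
# <agent-pod-name> is a placeholder for the running agent pod in <my_namespace>
kubectl -n <my_namespace> exec -it <agent-pod-name> -- sh -c 'printf "[%s]\n" "$PREFECT_API_URL" "$PREFECT_API_KEY"'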
r
you sure the URL env var is correct?
r
Yes, just double-checked to be sure, ran
prefect config view
locally and confirmed the URL is the same as the env var set as PREFECT_API_URL inside the deployment YAML file; also running a
echo $PREFECT_API_URL
inside the deployment pod returns the correct URL
just double checking as I have seen people get mixed up with what they should be
👍 1
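To compare what Prefect itself resolves, not just the raw env var, the same command can be run on both sides (the pod name is a placeholder):
# locally
prefect config view
# inside the agent pod
kubectl -n <my_namespace> exec -it <agent-pod-name> -- prefect config view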
r
Yes,
👍 1
r
assuming you're using the cloud
which image on the agent?
r
prefecthq/prefect:2.4.5-python3.9
followed this Discourse page
r
oh that's old old
2.8.7 now
I suggest using something much more recent
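If the stock image works for you, one minimal way to roll the newer tag onto the running deployment would be roughly the following (the deployment name is a placeholder, and the availability of the -python3.9 variant of that tag is an assumption):
# assumes the container is named "agent", as in the deployment template later in the thread
kubectl -n <my_namespace> set image deployment/<agent-deployment> agent=prefecthq/prefect:2.8.7-python3.9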
r
updated to 2.8.7 but the same error occurs, except now the deployment stops executing when receiving the error instead of looping once a minute
r
is your account fine in the UI 🤔
r
yup, no issues on the UI, local install also works perfectly
It's weird since doing a manual login shows the account correctly, so the pod is able to connect to Prefect Cloud, but any operation fails
r
my command is ["prefect", "agent", "start", "<WORK_QUEUE>"]
not sure if that matters compared to the article
r
same thing without -q, in fact it tells me I should probably add the -q flag 🙃
😁 1
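The -q form it is nudging you toward looks like this (the queue name is whatever <WORK_QUEUE> expands to):
# CLI form; in the deployment spec this maps to command: ["prefect", "agent", "start", "-q", "<WORK_QUEUE>"]
prefect agent start -q "<WORK_QUEUE>"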
r
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-agent-<AGENT_NAME>-<COUNTRY>-<ENV>
  namespace: data-prefect-<COUNTRY>-<ENV>
spec:
  selector:
    matchLabels:
      app: prefect-agent-<AGENT_NAME>-<COUNTRY>-<ENV>
  replicas: 1
  template:
    metadata:
      labels:
        app: prefect-agent-<AGENT_NAME>-<COUNTRY>-<ENV>
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
      containers:
        - name: agent
          image: xxxprefect-agent:<IMAGE_VERSION>
          command: ["prefect", "agent", "start", "<WORK_QUEUE>"]
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - mountPath: /.prefect
              name: prefectscratch
          env:
            - name: ENVIRONMENT
              value: "<ENV>"
            - name: COUNTRY
              value: <COUNTRY>
            - name: IMAGE_VERSION
              value: "<IMAGE_VERSION>"
            - name: WORK_QUEUE
              value: "<WORK_QUEUE>"
            - name: "AGENT_NAME"
              value: "<AGENT_NAME>"
          envFrom:
            - secretRef:
                name: data-prefect-agent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: [ALL]
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: true
            runAsUser: 1000
      imagePullSecrets:
        - name: azure-docker
      volumes:
        - emptyDir:
            sizeLimit: 100Mi
            medium: Memory
          name: prefectscratch
that's my agent deploy template
try connecting to the work pool instead?
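Since the template pulls its Prefect credentials from the data-prefect-agent secret via envFrom, a hedged way to recreate and then inspect that secret (the key names and the Cloud URL shape are assumptions) is:
# create/update the secret with explicit PREFECT_* keys
kubectl -n data-prefect-<COUNTRY>-<ENV> create secret generic data-prefect-agent \
  --from-literal=PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>" \
  --from-literal=PREFECT_API_KEY="<API_KEY>" \
  --dry-run=client -o yaml | kubectl apply -f -
# decode what is actually stored, to catch stray characters
kubectl -n data-prefect-<COUNTRY>-<ENV> get secret data-prefect-agent -o jsonpath='{.data.PREFECT_API_URL}' | base64 -d; echo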
r
exec'ing into the pod and running a manual login attempt shows me as already logged in, but doing a
prefect cloud workspace ls
raises an exception
r
I'd be inclined to try a fresh pod
r
god dammit my API URL had an extra quote at the end... I was blind to it, I'll just go make some more coffee
it's working now, thanks for the patience
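For reference, that '.groups' AttributeError is most likely the client failing to parse the account and workspace IDs out of the malformed URL; a well-formed Prefect Cloud URL (IDs are placeholders) ends without any extra quote:
export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"   # no trailing quote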
r
haha
cool, glad you're up & running
👀 1