Mohamed Alaa
08/11/2022, 12:59 PM

Oscar Björhn
08/11/2022, 1:50 PM

Alex Shea
08/11/2022, 2:36 PM
User "system:serviceaccount:prefect:prefect-agent" cannot create resource "jobs" in API group "batch" in the namespace "prefect"
As you might be able to tell, I have the Prefect agent running in the prefect namespace. I have a service account for it called prefect-agent. I have used the Role and RoleBinding from the Prefect documentation, but updated the namespace on both and the service account name from default to prefect-agent.
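
A minimal sketch of the setup being described, assuming the official kubernetes Python client; the Role name, rules, and verb list here are illustrative placeholders rather than the exact manifests from the Prefect docs:
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()
namespace = "prefect"

# Role granting the agent permission to manage flow-run Jobs in its own namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "prefect-agent", "namespace": namespace},
    "rules": [
        {"apiGroups": ["batch"], "resources": ["jobs"], "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"]},
        {"apiGroups": [""], "resources": ["pods", "pods/log"], "verbs": ["get", "list", "watch"]},
    ],
}

# RoleBinding attaching that Role to the prefect-agent service account.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "prefect-agent", "namespace": namespace},
    "subjects": [{"kind": "ServiceAccount", "name": "prefect-agent", "namespace": namespace}],
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io", "kind": "Role", "name": "prefect-agent"},
}

rbac.create_namespaced_role(namespace=namespace, body=role)
rbac.create_namespaced_role_binding(namespace=namespace, body=role_binding)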

Nelson Griffiths
08/11/2022, 2:56 PM

James Brady
08/11/2022, 3:31 PM
prefect agent kubernetes install
but that seems to be v1.x only…

Bigya Man Pradhan
08/11/2022, 3:36 PM
I'm using the prefect deployment apply ... and prefect deployment build ... CLI commands to deploy and build flows. Is there a Python equivalent to these commands?
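
One possible Python-side equivalent, as a sketch: it assumes the Deployment.build_from_flow API from later Prefect 2 releases, and the flow, queue, and block names are placeholders rather than anything from this thread:
from prefect.deployments import Deployment
from prefect.filesystems import S3

from myflow import myflow  # the @flow-decorated function (placeholder module)

deployment = Deployment.build_from_flow(
    flow=myflow,                 # what `prefect deployment build myflow.py:myflow` points at
    name="myflow-deployment",
    work_queue_name="default",
    storage=S3.load("dev"),      # roughly the -sb / --storage-block option
    tags=["dev"],
)
deployment.apply()               # roughly `prefect deployment apply`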

Sam Garvis
08/11/2022, 4:05 PM

Alexander Belikov
08/11/2022, 4:28 PM
I've run a) prefect deployment build ... and b) prefect deployment apply ..., and prefect deployment run ... runs successfully.
I've created the work queue and the agent is running, but scheduled flows are not being executed; instead, at time X they change their state from Scheduled to Pending.
The work queue is empty (!). What could be the problem?
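
A rough diagnostic sketch for this situation, assuming Prefect 2's Orion client API; it only lists recent flow runs and their states to confirm where they are getting stuck:
import asyncio
from prefect.client import get_client

async def main():
    # Read the most recent flow runs and print their current state types
    # (SCHEDULED, PENDING, RUNNING, ...) to see which transition is failing.
    async with get_client() as client:
        for run in await client.read_flow_runs(limit=20):
            print(run.name, run.state_type)

asyncio.run(main())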

Chandrashekar Althati
08/11/2022, 4:46 PM
https://infinitelambda.com/wp-content/uploads/2021/02/prefect-steps-2048x643.png

Mars
08/11/2022, 4:58 PM
prefect deployment build gives me a deployment YAML, but is no longer producing the manifest JSON file. It’s only producing the deployment YAML. How can I debug this?
Here is the build command used:
prefect deployment build myflow.py:myflow -n myflow-deployment -t dev -ib kubernetes-job/k8dev -sb remote-file-system/dev

John Kang
08/11/2022, 6:12 PM

Chandrashekar Althati
08/11/2022, 6:36 PM

Jon Ruhnke
08/11/2022, 7:11 PM

Mars
08/11/2022, 7:19 PM
There's EXTRA_PIP_PACKAGES for the Docker executor. Is there something similar for the k8s job executor?
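
One possible sketch, assuming Prefect 2's KubernetesJob infrastructure block passes environment variables through to the flow-run job and that the image's entrypoint honors EXTRA_PIP_PACKAGES; the package list and block name are placeholders:
from prefect.infrastructure import KubernetesJob

k8s_job = KubernetesJob(
    namespace="prefect",
    env={"EXTRA_PIP_PACKAGES": "pandas==1.4.3 s3fs"},  # installed at container start, if the image supports it
)
k8s_job.save("k8dev", overwrite=True)  # then referenced as -ib kubernetes-job/k8dev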

Yardena Meymann
08/11/2022, 7:53 PM

Slackbot
08/11/2022, 8:19 PM

Slackbot
08/11/2022, 8:28 PM

Jeff LaPorte
08/11/2022, 8:30 PM

Joe Goldbeck
08/11/2022, 8:48 PM
Is there a prefect register --dry-run? I would like to add a CI step that ensures that we won't fail after a merge when we try to register flows in CD.
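
If a dry-run flag isn't available, one rough idea for such a CI step, assuming Prefect 1.x flows; the module imports are placeholders, and the check simply exercises validation and serialization locally without registering anything:
import sys

from my_flows.etl import flow as etl_flow          # placeholder import
from my_flows.reports import flow as reports_flow  # placeholder import

failed = False
for flow in (etl_flow, reports_flow):
    try:
        flow.validate()              # check the task graph for structural problems
        flow.serialize(build=False)  # exercise serialization without building storage
    except Exception as exc:
        print(f"{flow.name}: {exc}")
        failed = True

sys.exit(1 if failed else 0)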

Neil Natarajan
08/11/2022, 8:59 PM

Edmondo Porcu
08/11/2022, 11:58 PM

Ilya Galperin
08/12/2022, 12:37 AM
We're weighing two options:
1. Running deployment build and apply commands from a parent directory containing multiple flows. This way, we use only one storage block, but if multiple developers are working in the same shared bucket (like in a staging environment), the blast radius is bigger. For example, if a developer is working on only a single flow, there is a risk of overwriting someone else’s in-development code if you accidentally commit code from another flow/folder.
deployment build flow_a/flow.py:entry_point -n flow_a_deployment --storage-block s3/universal-storage-block
flows/ <working directory>
  flow_a/
  flow_b/
2. Every flow gets its own storage block, and we run the deployment build and apply commands from that flow’s root directory. This will obviously require us to use more storage blocks, but it seems to decrease the blast radius.
deployment build ./flow.py:entry_point -n flow_a_deployment --storage-block s3/individual-storage-block
flow_a/ <working directory>
It seems to me like option 2 is more optimal, but are there any disadvantages or limitations we should be aware of in using multiple storage blocks?
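
For option 2, a minimal sketch of registering one storage block per flow, assuming Prefect 2's S3 filesystem block; the bucket paths and block names are placeholders:
from prefect.filesystems import S3

# One S3 block per flow, each scoped to its own prefix in the shared bucket.
for flow_dir, block_name in (("flow_a", "flow-a-storage"), ("flow_b", "flow-b-storage")):
    S3(bucket_path=f"my-staging-bucket/{flow_dir}").save(block_name, overwrite=True)

# Each deployment then points at its own block, e.g.:
#   deployment build ./flow.py:entry_point -n flow_a_deployment --storage-block s3/flow-a-storage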

Dean Magee
08/12/2022, 12:59 AM

Lennert Van de Velde
08/12/2022, 8:00 AM

Marcin Grzybowski
08/12/2022, 9:07 AM

Jonathan Mathews
08/12/2022, 12:14 PM

James Brady
08/12/2022, 12:50 PM
I'm running prefect deployment apply … after updating Python files, but I can see in the S3 bucket that the Python files aren't being updated to match what's on my local computer.

Tomas Knoetze
08/12/2022, 1:00 PM

Sean Malone
08/12/2022, 1:30 PM

Andrei Tulbure
08/12/2022, 1:50 PM

Andrei Tulbure
08/12/2022, 1:50 PM

Anna Geller
08/12/2022, 1:54 PM

Andrei Tulbure
08/12/2022, 3:07 PM

Anna Geller
08/12/2022, 4:38 PM