Nate
07/09/2024, 3:52 PM
`prefect worker start --pool my-docker-pool`, and then you can write and deploy flows from anywhere, assign them to that work pool, and the worker doesn't need to know anything about which deployments it can run - it's just any deployments using that work pool at a given point in time
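e.g. a rough sketch of what "deploy from anywhere" can look like (the repo URL, entrypoint, and names here are placeholders):
```python
from prefect import flow

# the worker only needs the work pool name; it never sees this code directly
flow.from_source(
    source="https://github.com/your-org/your-repo",  # placeholder repo
    entrypoint="flows/etl.py:my_flow",  # placeholder path:function
).deploy(
    name="etl",
    work_pool_name="my-docker-pool",
    image="prefecthq/prefect:2-python3.11",  # prebuilt base; code is pulled from source at runtime
    build=False,  # nothing to build, the image already exists
    push=False,   # nothing to push either
)
```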
> keep forgetting that the code needs to be hosted somewhere
yep! prefect doesn't know anything about your source code 🙂 except where it lives
but! you can bake your flow code into your images in a situation where you have dynamic infra (ecs, docker, k8s work pool etc)
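a rough sketch of that (the registry/image name is a placeholder) - calling `.deploy` on a local flow with an `image` builds and pushes an image with your flow code copied in:
```python
from prefect import flow

@flow(log_prints=True)
def my_flow():
    print("hello from inside the image")

if __name__ == "__main__":
    # build an image with the flow code baked in, push it, and register
    # the deployment against the docker work pool
    my_flow.deploy(
        name="baked-image",
        work_pool_name="my-docker-pool",
        image="my-registry/my-flow:latest",  # placeholder registry/tag
    )
```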
Nate
07/10/2024, 2:12 AM
there's an env var you can set, e.g.
```python
# kwarg in .deploy or run_deployment
job_variables=dict(env=dict(EXTRA_PIP_PACKAGES="pandas numpy"))
```
to install packages on top of your provided image when you're in a bind
but yes, i agree new images for individual deployments with different deps would be best if possible; there can be weirdness with installing packages at runtime.
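for example, the same override at trigger time might look like this (the "flow/deployment" name is a placeholder):
```python
from prefect.deployments import run_deployment

# EXTRA_PIP_PACKAGES is installed in the container at startup, before the
# flow run begins - handy as a stopgap, with the runtime-install caveats above
run_deployment(
    name="my-flow/my-deployment",  # placeholder "flow/deployment" name
    job_variables=dict(env=dict(EXTRA_PIP_PACKAGES="pandas numpy")),
)
```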
> as long as my code dependencies … don't change
the docker worker is nice because it doesn't care at all about the flow run's python runtime deps, it just has to be able to submit the run as a container on the host docker engine. so even if your dependencies do change, that one worker can submit them (in contrast to a process worker that will need all of them installed)
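a sketch of what that buys you (repo, entrypoints, and images are placeholders) - two deployments with totally different deps, served by one worker:
```python
from prefect import flow

# both deployments target the same work pool; the lone docker worker can
# submit either one as a container without having pandas or torch installed
for entrypoint, image in [
    ("flows/pandas_flow.py:pandas_flow", "my-registry/pandas-flow:latest"),
    ("flows/torch_flow.py:torch_flow", "my-registry/torch-flow:latest"),
]:
    flow.from_source(
        source="https://github.com/your-org/your-repo",
        entrypoint=entrypoint,
    ).deploy(
        name=entrypoint.split(":")[-1],  # e.g. "pandas_flow"
        work_pool_name="my-docker-pool",
        image=image,
        build=False,  # assume the images were built and pushed elsewhere
        push=False,
    )
```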