# ask-community
Peter
Hello, looking to migrate from Prefect v1 to Prefect v2 and have a question related to Dask on k8s. In Prefect v1, when you ran a flow with the Dask executor on k8s, it would spin up a Prefect pod plus n+1 Dask pods, where n is the number of workers configured. This was run via the map function. Now in Prefect v2, when I run a flow with the DaskTaskRunner, it appears that the cluster is being spun up inside the Prefect pod. Is it correct that the cluster is being launched inside the pod? If so, is there any way to have each worker run in its own pod? If a pod dies I do not want it to impact the others or the main flow. Thanks
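For context, this is roughly the setup being described: a flow using the default DaskTaskRunner, which starts a temporary local Dask cluster inside the flow-run pod rather than separate worker pods. A minimal sketch (the task and flow names are illustrative):
```python
from prefect import flow, task
from prefect_dask import DaskTaskRunner

@task
def double(x):
    return x * 2

# With no arguments, DaskTaskRunner spins up a temporary local Dask
# cluster in the same process/pod that runs the flow.
@flow(task_runner=DaskTaskRunner())
def my_flow():
    return double.map([1, 2, 3])
```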
Chris
Hey Peter! That is accurate - you can connect to external clusters by providing a scheduler address to the `DaskTaskRunner`, but Prefect does not currently spin up a full cluster per-run
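A minimal sketch of the scheduler-address approach; the address below is a placeholder for a Dask scheduler that is already running on the cluster:
```python
from prefect import flow
from prefect_dask import DaskTaskRunner

# Connect to an already-running external Dask cluster instead of
# starting one inside the flow-run pod (placeholder address).
@flow(task_runner=DaskTaskRunner(address="tcp://dask-scheduler:8786"))
def my_flow():
    ...
```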
Peter
Thanks. I liked that Prefect spun up (and managed) its own cluster if one was not provided. Is there any plan to bring this back?
I also liked the fact that each flow essentially handled its own cluster, and the cluster only persisted when needed. Not all flows require a Dask cluster, but sometimes more than one flow run requires a cluster. "Prefect does not currently spin up a full cluster per-run" - can you explain what this means?
Thanks for getting back to me Chris!
šŸ™ 1
Chris
It's something we could definitely look into, but it will most likely be lower priority so I can't make a promise on any timelines; here are a few options for you:
• open an issue on our prefect-dask repository requesting this feature; if other people chime in it would help us prioritize
• you could consider using a hosted Dask service like Coiled that is optimized for observability, performance and on-demand clusters; they run within your VPC and we work closely with their team (e.g., https://docs.coiled.io/user_guide/labs/prefect-cli.html)
• if you're up for it, you could consider building this yourself - task runners can be fully customized, and you could look at the details of our v1 integration and try to port it over (much of the logic should be identical); under the hood we were using dask-kubernetes (a rough sketch of that approach follows below)
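A rough sketch of one way to approximate the v1 behavior yourself: create a dask-kubernetes cluster up front, then hand its scheduler address to the DaskTaskRunner so each worker runs in its own pod. The image name and worker count are placeholders, and KubeCluster / make_pod_spec follow the classic dask-kubernetes API, which may differ between versions, so treat the details as assumptions rather than a recipe:
```python
from dask_kubernetes import KubeCluster, make_pod_spec
from prefect import flow, task
from prefect_dask import DaskTaskRunner

pod_spec = make_pod_spec(image="daskdev/dask:latest")  # worker pod template
cluster = KubeCluster(pod_spec)  # launches scheduler + worker pods on k8s
cluster.scale(3)                 # n workers, each in its own pod

@task
def double(x):
    return x * 2

# Point the task runner at the cluster you just created; if a single
# worker pod dies, the rest of the cluster and the flow-run pod survive.
@flow(task_runner=DaskTaskRunner(address=cluster.scheduler_address))
def my_flow():
    return double.map([1, 2, 3])
```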
Peter
Thanks again Chris! Will discuss with my team.
šŸ™ 1