# ask-community
d
There's a blurb in the Prefect 3.0 announcement about heterogeneous workloads:
> Second, the diversity of modern data processes requires flexible automation capable of reliably handling heterogeneous workloads - a capability existing tools lack. Third, the heterogeneous compute resources needed to run data processes have exploded, creating deep coordination friction that slows businesses.
I'm having a hard time finding any info in the 3.0 docs about this aspect, though. I was hoping this meant that Prefect 3.0 deployments could potentially specify resources (CPU, GPU, memory, disk, Docker image, etc.) at the task level rather than at the workflow level. Is this not the case?
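For context, my current understanding is that in Prefect 3 the image and resources get attached to the deployment when you call `.deploy()` against a work pool, so every task in the flow run shares them. A rough sketch of what I mean (names are made up, the exact resource knobs depend on the work pool's base job template, and I may be misreading the API):

```python
from prefect import flow, task

@task
def align_reads(sample: str) -> str:
    # Placeholder for a CPU/GPU-heavy alignment step.
    return f"{sample}.bam"

@flow
def genomics_flow(sample: str):
    align_reads(sample)

if __name__ == "__main__":
    # The container image (and any resource-related job variables) lives on
    # the deployment / work pool, so every task in the run gets the same ones.
    genomics_flow.deploy(
        name="genomics",                          # made-up deployment name
        work_pool_name="k8s-pool",                # made-up Kubernetes work pool
        image="ghcr.io/example/genomics:latest",  # one image for the whole flow
        build=False,
        push=False,
    )
```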
b
I'm curious about this as well. I have 10 different Docker images with incompatible dependencies. After looking extensively, I don't think Prefect supports that type of workflow, and I haven't found anything in 3.0 to suggest it has improved in that area. If you want to chat, let me know, though I don't have a good solution. I'm still exploring tools, trying to figure out what would work best for my use case.
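The closest thing to a workaround I can think of is having each task shell out to `docker run` with its own image, roughly like the sketch below (image names and paths are made up). But then Prefect isn't actually managing the containers or their resources, which is a big part of why I don't count it as a good solution:

```python
import subprocess
from prefect import flow, task

@task
def run_in_image(image: str, command: list[str], data_dir: str = "/data"):
    # Each task launches its own container via the Docker CLI, so tasks can use
    # mutually incompatible images -- but Prefect only sees a subprocess and has
    # no visibility into (or control over) the container's resources.
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{data_dir}:{data_dir}", image, *command],
        check=True,
    )

@flow
def heterogeneous_flow():
    run_in_image("ghcr.io/example/bwa:0.7.17", ["bwa", "index", "/data/ref.fa"])
    run_in_image("ghcr.io/example/samtools:1.20", ["samtools", "faidx", "/data/ref.fa"])
```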
d
I'm doing computational genomics and most of the tools in my workflows are not Python-based, so in my case the better alternatives are WDL or Nextflow. Sticking with Python, Flyte might be the best option out there, but it's K8S-based and deploying a cluster myself was really complex and beyond my abilities. Metaflow might also be worth exploring.
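For what it's worth, Flyte is the one I found that gets closest to task-level resources: if I'm reading the flytekit docs correctly, each task can declare its own container image and resource requests, roughly like this (image names and sizes are made up):

```python
from flytekit import Resources, task, workflow

# As far as I understand flytekit, each task can pin its own container image
# and resource requests instead of inheriting them from the workflow.
@task(container_image="ghcr.io/example/bwa:0.7.17",
      requests=Resources(cpu="8", mem="16Gi"))
def align(sample: str) -> str:
    return f"{sample}.bam"

@task(container_image="ghcr.io/example/gatk:4.5",
      requests=Resources(cpu="4", mem="32Gi"))
def call_variants(bam: str) -> str:
    return f"{bam}.vcf"

@workflow
def pipeline(sample: str) -> str:
    return call_variants(bam=align(sample=sample))
```

The catch, as I said, is that you then have to run the Kubernetes cluster underneath it.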
b
I'm doing computational genomics as well, and I came to the same conclusion about Flyte. Right now I'm looking at how hard it might be to modify Prefect's worker code to allow the Docker image to be passed on the fly. It looks like it probably isn't that hard, but I'm not thrilled about doing it because of the obvious downsides: no support, difficulty keeping my fork up to date, etc.
I sent you a DM, hope you don't mind.
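For anyone reading this later: before forking the worker, I'm going to check whether per-run job variable overrides already cover this. If I understand the docs correctly, recent Prefect versions let you pass `job_variables` when triggering a deployment run, which should override the worker's image for just that run, roughly like this (the deployment name and images are made up, and I haven't verified the exact parameter name):

```python
from prefect.deployments import run_deployment

# Hypothetical per-tool images; the deployment itself stays generic.
IMAGES = {
    "bwa": "ghcr.io/example/bwa:0.7.17",
    "gatk": "ghcr.io/example/gatk:4.5",
}

def run_step(tool: str, sample: str):
    # If job_variables behaves the way I think it does, this overrides the
    # Docker/Kubernetes worker's image for this one flow run only, so each
    # step can use a different image without forking the worker.
    return run_deployment(
        name="genomics/step-runner",                 # made-up "flow/deployment" name
        parameters={"tool": tool, "sample": sample},
        job_variables={"image": IMAGES[tool]},
    )
```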