Having read through the Prefect docs, I love the lightweight philosophy, Python-friendliness, and improvements over Airflow. But the documentation is lacking; are there plans to expand it? For our computationally heavy use case, we would want each task to run in its own Docker container, independently, on a collection of worker nodes (e.g., as a k8s Job). The documentation doesn't address this common use case: there is a bare-bones k8s API reference, but no conceptual material or examples. The closest thing I can find is
https://docs.prefect.io/core/tutorials/dask-cluster.html which says "Take this example to the next level by storing your flow in a Docker container and deploying it with Dask on Kubernetes using the excellent dask-kubernetes project! Details are left as an exercise to the reader. 😉" Ideally this exercise would not be left to the reader. Beyond that, we don't want to store the entire flow in a single Docker container; rather, each task should get its own container, since each task has different computational requirements (CPU-heavy vs. RAM-heavy vs. I/O-heavy vs. needing access to a large reference DB vs. ...), and parallel tasks should be able to run on different workers. A rough sketch of what we'd like to express is below. Please advise.
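For concreteness, here's a minimal sketch of the kind of flow we have in mind. Only the plain `@task` / `Flow` usage is the actual Prefect Core API; the `image=` / `resources=` annotations in the comments (and the image names) are purely hypothetical and just illustrate the per-task container scheduling we're asking about:

```python
# Minimal sketch of what we'd like to express. Only the plain @task / Flow usage
# is real Prefect Core API; the image= / resources= kwargs in the comments are
# HYPOTHETICAL, illustrating the per-task container scheduling we want.
from prefect import task, Flow

@task  # hypothetically: @task(image="ourorg/preprocess:latest", resources={"cpu": 8})
def preprocess(path):
    ...  # CPU-heavy

@task  # hypothetically: @task(image="ourorg/align:latest", resources={"memory": "64Gi"})
def heavy_compute(data):
    ...  # RAM-heavy

@task  # hypothetically: @task(image="ourorg/annotate:latest"), image has the reference DB baked in
def annotate(result):
    ...  # needs access to a large reference DB

with Flow("per-task-containers") as flow:
    data = preprocess("s3://bucket/input")  # ideally each call becomes its own k8s Job
    result = heavy_compute(data)
    annotate(result)
```

As far as we can tell, the DaskExecutor + dask-kubernetes route hinted at in the tutorial runs every task inside the same worker image, which is exactly what we're trying to avoid.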
Also: Prefect Cloud sounds appealing as a persistence solution, but do we have the safety net of being able to implement our own persistence? Are there API hooks to support that?
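To clarify what we mean by hooks: something along these lines would be enough of a safety net for us. This is entirely hypothetical; `on_task_state_change` is not an API we've found in the docs, it's just the shape of callback we'd hope to be able to register:

```python
# HYPOTHETICAL sketch of the persistence hook we're asking about.
# on_task_state_change is not (as far as we know) a real Prefect API;
# it just shows where we'd want to plug in storage we control.
import sqlite3

def persist_to_our_db(task_name, old_state, new_state):
    """Record every task state transition in a database we control."""
    conn = sqlite3.connect("flow_runs.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS task_states (task TEXT, old TEXT, new TEXT)"
    )
    conn.execute(
        "INSERT INTO task_states VALUES (?, ?, ?)",
        (task_name, type(old_state).__name__, type(new_state).__name__),
    )
    conn.commit()
    conn.close()

# hypothetical registration point:
# flow.on_task_state_change(persist_to_our_db)
```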