Thank you, @Matt Conger, for the answer. I checked the docs, but I'm still a bit confused about how to connect things and the ways to use Prefect with a cloud like GCP. The storage part is clear to me, I think: for example, I can store data (the Prefect database and other metadata) locally or on GCS.
I just don't understand the running part. Here is my understanding and my questions:
• We can keep the code on our local machine and create a VM in the cloud. This VM can host the Orion server. Then we can point the local code at the VM to run the flow there.
• In this case, the VM has to keep running all the time so the Orion server stays up.
• If one of the tasks needs a GPU (the training task, for example), the VM needs a GPU, which is expensive to keep running all the time when we only use it a few hours a month.
• Is it possible to create one cheap VM for the server and a second VM with a GPU for the training tasks, launching the GPU VM only when needed (say, once a month, to keep costs down), keep the code locally, and connect all of this together? Or do we have to go to Kubernetes? (I'd rather avoid that, as I haven't worked with it.)
• Given all these difficulties and confusions, what is the benefit compared to Vertex AI workflows and ML training services? Using the serverless services there doesn't seem expensive.
• I'm now looking for some example architectures. I know you mentioned a lot of flexibility, but that flexibility just adds to the confusion.
• I also use MLflow inside the training task for experiment tracking. As far as I understand, the MLflow data gets saved wherever the task or flow happens to run. How should I manage this? Does Prefect offer any help with that?
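To make the first bullets concrete, here is a minimal sketch of what I imagine the server/local split looks like, assuming Prefect 2.x (Orion) and its default port 4200; the VM address is a placeholder:

```shell
# On the cloud VM: start the Orion server, listening on all interfaces
# so it is reachable from outside the VM (assumes Prefect 2.x).
prefect orion start --host 0.0.0.0

# On the local machine: point the local Prefect client at the VM's API.
# <VM_EXTERNAL_IP> is a placeholder for the VM's external address.
prefect config set PREFECT_API_URL="http://<VM_EXTERNAL_IP>:4200/api"
```

Is this roughly the intended setup, and would the GPU VM then just be another machine running an agent pointed at the same `PREFECT_API_URL`?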
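For the MLflow question, my current idea would be to point MLflow at one central tracking location instead of the local filesystem, so it doesn't matter which machine executes the training task. A sketch of what I have in mind (the host and bucket names are placeholders):

```shell
# On whatever machine runs the training task: send runs to a shared
# tracking server instead of the local ./mlruns directory.
# <TRACKING_HOST> is a placeholder.
export MLFLOW_TRACKING_URI="http://<TRACKING_HOST>:5000"

# On a small always-on machine: run the tracking server itself,
# keeping run metadata in SQLite and artifacts on GCS.
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root gs://<my-bucket>/mlflow-artifacts
```

Would that be the recommended pattern here, or does Prefect have a more integrated way to handle this?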