Hi Everyone.
We have a project where we need to run inference on 100k images, and we have to complete it within a week. We've built a flow and a deployment of the algorithm that these 100k images will run through. Can anyone help with how we can implement this and process them smoothly? We can't use Prefect Cloud.
We have Prefect hosted in Docker on GCP, and we can use the prefect-kubernetes block to run each flow in GKE and let it scale.
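To frame the question, here's a rough sketch of how we're thinking about batching (plain Python, no Prefect dependency shown). The idea is that each batch would become one flow run submitted through the prefect-kubernetes block, rather than one run per image. The function names, batch size, and bucket path below are just placeholders:

```python
# Hypothetical sketch: split 100k image paths into fixed-size batches so
# each flow run (one GKE job) processes a chunk, instead of launching
# 100k individual runs. `plan_runs` and the gs:// paths are made up.

def make_batches(items, batch_size):
    """Yield successive fixed-size batches from a list of items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def plan_runs(image_paths, batch_size=1000):
    """Return batches; each would map to one flow run / Kubernetes job."""
    return list(make_batches(image_paths, batch_size))

if __name__ == "__main__":
    images = [f"gs://my-bucket/img_{i}.jpg" for i in range(100_000)]
    batches = plan_runs(images, batch_size=1000)
    print(len(batches))  # 100 batches of 1000 images each
```

Is something like this (each batch as a parameterized flow run on GKE) a reasonable shape, or is there a better pattern for this volume?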
Any thoughts on this would be very helpful.