We are working on ML applied to computer vision, so our pipelines take a long time and deal with "_heavy_" data that's best kept close
to the training code. So far we have been babysitting the processing and training pipelines, but that is of course not scalable, and it is starting to take too much of our time. I am currently investigating how we can make our pipelines fully automated, reproducible, and traceable. However, I am not finding many examples of Prefect applied to image processing and computer vision on the internet. My concern is that images are batch-processed, but most often we actually want to stream, process, and load them to another location. Using Prefect for this seems like a good choice, but won't it make the Flow Runs UI cluttered and unusable? In your experience, how do you see people solving this problem?