Ah ok. I’m just wondering — at my last job we had a parallelized grid search that ran daily. We would review the metrics and plots and then decide how to alter the grid for the next run. It wasn’t quite “mid-run,” though.
There is a use case, though, for Bayesian-optimization-style approaches that are iterative: each step builds on the last result, and the process can continue indefinitely. I suppose you could pause, check whether model training had converged, and resume if not. But even then, I’m not sure the parameters would actually change mid-run; you’d have to start a new run.
I think as long as the experiment results are persisted somewhere like MLflow and you have the full history, kicking off new Prefect flow runs will work.
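A minimal sketch of that pattern: each "run" loads the persisted trial history, proposes new parameters based on the best result so far, evaluates, and appends to the history before exiting. Here a JSON file stands in for an MLflow tracking store, the perturbation-based `propose_params` stands in for a real Bayesian optimizer, and `evaluate` is a placeholder objective — all three are illustrative assumptions, not part of the original discussion.

```python
import json
import random
from pathlib import Path

# Stand-in for an MLflow tracking store (hypothetical filename).
HISTORY = Path("trial_history.json")

def load_history():
    # Each entry: {"params": {...}, "score": float}
    if HISTORY.exists():
        return json.loads(HISTORY.read_text())
    return []

def propose_params(history):
    # Naive "build on the last result": perturb the best params seen
    # so far, or start from a default if there is no history yet.
    # A real setup would use a Bayesian optimizer here.
    if not history:
        return {"lr": 0.1}
    best = max(history, key=lambda t: t["score"])
    return {"lr": best["params"]["lr"] * random.uniform(0.5, 1.5)}

def evaluate(params):
    # Placeholder objective; a real run would train a model here.
    return -abs(params["lr"] - 0.05)

def run_iteration():
    # Each call is a fresh run (e.g., a new Prefect flow run) that
    # resumes from persisted history instead of mutating an
    # in-flight run.
    history = load_history()
    params = propose_params(history)
    score = evaluate(params)
    history.append({"params": params, "score": score})
    HISTORY.write_text(json.dumps(history))
    return params, score

if __name__ == "__main__":
    for _ in range(3):
        print(run_iteration())
```

Because all state lives in the persisted history rather than in the orchestrator, "changing parameters mid-run" just becomes "start the next run" — which matches the point above about new Prefect flows working fine.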