Hi, does anyone have a best-practice code example of an ML pipeline in Prefect, covering data ingestion from an external source, cleaning, feature generation, model fitting, and subsequent use of the model? I'm wondering whether to split these stages into tasks or individual flows, and whether to store the interim results, taking into account that debugging might be easier if they're flows.
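For concreteness, here is a minimal sketch of the pipeline being asked about, assuming Prefect 2.x. The stage names, the scikit-learn model, and the source URL are all illustrative placeholders, not an established best practice; it structures each stage as a task inside a single flow and uses input-hash caching as one lightweight way to avoid recomputing earlier stages while debugging later ones.

```python
# Illustrative sketch only (assumes Prefect 2.x); stage names, the model,
# and the data source are placeholders, not a recommended layout.
from datetime import timedelta

import pandas as pd
from prefect import flow, task
from prefect.tasks import task_input_hash
from sklearn.linear_model import LogisticRegression


@task(retries=3, retry_delay_seconds=10)
def ingest(source_url: str) -> pd.DataFrame:
    # Pull raw data from the external source; retries guard against flaky I/O.
    return pd.read_csv(source_url)


@task(cache_key_fn=task_input_hash, cache_expiration=timedelta(hours=1))
def clean(raw: pd.DataFrame) -> pd.DataFrame:
    # Caching on the input hash lets reruns skip this step while debugging
    # downstream tasks, without explicitly storing interim files.
    return raw.dropna()


@task
def build_features(clean_df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical feature step: keep only numeric columns.
    return clean_df.select_dtypes("number")


@task
def fit(features: pd.DataFrame, target_col: str) -> LogisticRegression:
    # Fit a simple placeholder model on the generated features.
    model = LogisticRegression()
    model.fit(features.drop(columns=[target_col]), features[target_col])
    return model


@task
def predict(model: LogisticRegression, features: pd.DataFrame, target_col: str) -> pd.Series:
    # Use the fitted model on the same feature frame (placeholder for real scoring data).
    return pd.Series(model.predict(features.drop(columns=[target_col])))


@flow
def ml_pipeline(source_url: str, target_col: str = "label"):
    # One flow with one task per stage; results are passed in memory between tasks.
    raw = ingest(source_url)
    clean_df = clean(raw)
    features = build_features(clean_df)
    model = fit(features, target_col)
    return predict(model, features, target_col)


if __name__ == "__main__":
    ml_pipeline("https://example.com/data.csv")  # placeholder URL
```

Each stage could instead be promoted to its own subflow (a `@flow` called from a parent flow) if you want separately observable runs per stage; the task-based layout above is just one of the options the question raises.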