# ask-community
a
Hey guys, I am exploring Prefect in order to implement the various parts of an ML application (with multiple models) as a flow. Here are my queries regarding Prefect tasks:
1. Are Prefect flows a wise choice for productionizing online, streaming-style ML models? If yes, what's the recommendation on mirroring the business logic as flows: is a flow of mapped flows standard, or is creating a master flow that orchestrates other flow runs a wiser choice?
2. Implementing an online streaming ML model as a task: right now I've been able to load the model inside of a task for prediction, but model loading takes up most of the task run's time. Does Prefect have a mechanism for persisting the ML model in memory? (A sketch of what I have now is below.)
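To make point 2 concrete, this is roughly the pattern I mean — a minimal sketch, assuming Prefect 1.x's Flow/task API and a pickled model at a hypothetical path `model.pkl`; the slow model load sits inside the task body, so it is repeated on every task run:

```python
import pickle

from prefect import Flow, Parameter, task


@task
def predict(features):
    # The model is deserialized on every run of this task, which
    # dominates the task's runtime when the model is large.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)
    return model.predict([features])[0]


with Flow("online-prediction") as flow:
    features = Parameter("features")
    predict(features)
```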
k
Hey @Abhas P, Prefect is mainly for batch processing and not meant for streaming purposes, since there is overhead that comes from wrapping Python code as tasks and monitoring their state. For example, there are at least 3 API calls with each task run to update its state (starting, running, finished). For number 2, no, Prefect does not. You would have to spin up a Flask API independently of Prefect and send requests to that API. Prefect spins batch jobs up and down, so it doesn't have a mechanism for keeping models in memory. You could bake the model into a Docker image, but the container would still have to be downloaded.
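A rough sketch of that approach (hypothetical file name, path, and endpoint): a small Flask service that deserializes the model once at startup and keeps it in memory, so Prefect tasks can call it over HTTP instead of reloading the model on every run.

```python
# serve.py — run as a separate, long-lived process, independent of Prefect
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Loaded once when the service starts, not once per prediction.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict([features]).tolist()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A Prefect task would then just `requests.post` its features to that `/predict` endpoint rather than loading the model itself.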
💡 1
a
Thank you for addressing my queries, Kevin 😊