# ask-community
l
Hi all, I’m currently looking into data processing frameworks. One of the requirements my team has is to be able to integrate with an external service in a processing pipeline. For example, if we have a TensorFlow model being served with TensorFlow Serving or something like that, how can a call to such a service be made within a Prefect workflow? T1 -> TF Serving -> T3
k
Hey @Lucas Giger, this is a bit tricky because Prefect is oriented toward batch processing, and I think what you’re really asking is how to keep that model server running. You might check whether it can be spun up in a detached mode; then Prefect could launch that container by calling a Python or shell script. The goal would be for Prefect to spin it up as a separate process.
l
I see, thank you for the quick response. In the end, what we want to avoid is the overhead of spinning up a container, loading models into memory, and only then running inference. With a separate, horizontally scaled inference server this becomes a lot faster. What would be nice in our case is if Prefect could just send a request to the inference service instead of spinning up a container. But maybe our use case doesn’t fit what Prefect offers very well. I’ll have to read a bit more about it and ask better questions, sorry.
k
I think sending an API request is doable inside a Prefect task. If you can do it in Python, it should be doable.
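To illustrate that last point, here is a minimal sketch of calling an inference service from plain Python, the same code you could wrap in a Prefect `@task`. It assumes TensorFlow Serving’s REST API (`POST /v1/models/<name>:predict` with an `{"instances": [...]}` body); the host, port, and model name are hypothetical placeholders for your deployment.

```python
import json
import urllib.request

# Hypothetical endpoint: adjust host, port, and model name for your deployment.
TFSERVING_URL = "http://localhost:8501/v1/models/my_model:predict"


def build_predict_payload(instances):
    """Build the JSON body TensorFlow Serving's REST API expects."""
    return json.dumps({"instances": instances}).encode("utf-8")


def call_inference_service(instances, url=TFSERVING_URL):
    """POST a batch of inputs to the inference server and return predictions.

    Inside a Prefect workflow, this function body would live in a task
    (e.g. decorated with Prefect's @task), so the pipeline becomes
    T1 -> call_inference_service -> T3 without spinning up a container.
    """
    req = urllib.request.Request(
        url,
        data=build_predict_payload(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

The key design point is that the task only holds an HTTP client, so model loading and memory stay on the horizontally scaled server side, and Prefect just orchestrates the request.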