# ask-community
Hi everyone, I need help implementing a specific objective.

Context: We are using Prefect for our inferencing pipeline, where different algorithms have their own respective flows. For example, we have three algorithms (*a1*, *a2*, and *a3*), and for each we create a separate flow (*f1*, *f2*, and *f3*). The issue is that every time we create a new flow, we need to update the Docker image (we are running Prefect using Docker on a local VM). However, most of the code in all these flows is the same, with only a few differences that can be handled through environment variables.

Objective: I want to simplify this process so that if a new algorithm (a4) is introduced and requires a new flow (f4), I can simply run a bash script that:
• Automatically creates the new flow (f4)
• Handles the necessary deployment
• Does not require rebuilding or changing the Docker image

This is one approach I could think of, but I'm open to better solutions.

Key requirement: I do not want to modify or rebuild the Docker image each time a new flow is added. If anyone has suggestions or alternative solutions, I would greatly appreciate them! Hope I was able to explain my issue clearly. Thanks!
hi @Nimesh Kumar - this is a common pattern among prefect users today, where you have common deps built into your `Dockerfile` and then your business logic (that changes frequently) is stored in a remote filesystem like github that you pull in a pull step at runtime. so that's either `git_clone` or `pull_from_s3` etc if you're using `prefect.yaml`, or instead `from_source` if you're using the python deployment interface
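As an illustration of the `from_source` route (a minimal sketch, not from the thread: the repo URL, entrypoint, work pool name, image tag, and `ALGORITHM` env var are all hypothetical placeholders), registering a new flow f4 could look like this, with no image rebuild involved:

```python
from prefect import flow

if __name__ == "__main__":
    # Pull the flow's source code from a remote repo at runtime
    # instead of baking it into the Docker image.
    flow.from_source(
        source="https://github.com/your-org/inference-flows.git",  # hypothetical repo
        entrypoint="flows/f4.py:f4",  # path/to/file.py:flow_function
    ).deploy(
        name="f4-deployment",
        work_pool_name="docker-pool",      # hypothetical docker work pool
        image="your-org/inference-base:latest",  # shared image with common deps
        build=False,  # reuse the existing image, no rebuild
        push=False,   # nothing new to push
        job_variables={"env": {"ALGORITHM": "a4"}},  # per-flow config via env vars
    )
```

A bash script could then onboard any new algorithm by invoking a script like this with a different entrypoint and env var, which matches the original requirement of never touching the image.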
So as far as I understood, and please correct me if my understanding is wrong here: what you meant is, we put the new flows on a remote filesystem and Docker will pull from there and create deployment of same?
i’m not sure what you mean by “create deployment of same”, and i’m also not clear on whether your flows f1, f2 are separate deployments as well, but i assume they likely are. in general, what i’m saying is this:
• build one image that contains the common deps all your flows will need
• build separate deployments for each flow; each can specify the same common image, but you can pull down the source code specific to that flow at runtime, whether your remote filesystem is github or somewhere else
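To make the one-image/many-deployments idea concrete (again a hypothetical sketch, not from the thread; the repo, image tag, and entrypoint mapping are assumptions), a single registration script can create a separate deployment per flow while every deployment references the same common image:

```python
from prefect import flow

# Hypothetical mapping of algorithms to their flow entrypoints in one repo.
FLOWS = {
    "a1": "flows/f1.py:f1",
    "a2": "flows/f2.py:f2",
    "a3": "flows/f3.py:f3",
    "a4": "flows/f4.py:f4",  # new algorithm: just add a line here
}

if __name__ == "__main__":
    for algo, entrypoint in FLOWS.items():
        flow.from_source(
            source="https://github.com/your-org/inference-flows.git",
            entrypoint=entrypoint,
        ).deploy(
            name=f"{algo}-deployment",
            work_pool_name="docker-pool",
            image="your-org/inference-base:latest",  # one shared image for all flows
            build=False,
            push=False,
            job_variables={"env": {"ALGORITHM": algo}},
        )
```

Each deployment pulls its own flow code at run time, so adding f4 is a one-line change to the mapping rather than a new Docker build.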