# prefect-getting-started
Ihsan Topaloglu:
Looking for guidance on how to structure a small setup. Prefect is great for big projects, but is there a way to run it like n8n, where I can schedule 3-5 small scripts to run every so often on a single container? I'm having a rough time with multiple containers, each of them running a single deployment.
Nate:
hi @Ihsan Topaloglu - have you checked out the serve utility yet? you could place some number of flows in a single file (like `serve(*[flow.to_deployment(…) for flow in flow_objects])`) and then specify this file as the ENTRYPOINT for a container that has prefect and your deps installed. then you can just run that container and all of those flows will run when scheduled. if that sounds interesting i can whip up an example
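A minimal sketch of that pattern, assuming two toy flows; the flow names, schedules, and file name are illustrative, not anything from this thread:

```python
# serve_flows.py - illustrative sketch of serving several flows from one process
from prefect import flow, serve

@flow
def pull_orders():
    """Pull new rows from the orders database."""
    ...

@flow
def low_stock_alert():
    """Send a slack notification for orders below a threshold."""
    ...

if __name__ == "__main__":
    # serve() blocks and runs every deployment it was given on its schedule,
    # all inside this single process/container
    serve(
        pull_orders.to_deployment(name="pull-orders", cron="*/15 * * * *"),
        low_stock_alert.to_deployment(name="low-stock-alert", interval=3600),
    )
```

The container's ENTRYPOINT would then just run this file (e.g. `ENTRYPOINT ["python", "serve_flows.py"]`) on top of an image that has prefect and the scripts' dependencies installed.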
Ihsan Topaloglu:
Hi @Nate, that's a very good approach, thank you. I can practically combine all the flows into a single container and run it like that. I wish there was a way to add a flow to a running container, just like how you can define a new workflow with n8n.
Oh, also, thank you for the offer. I do remember serving multiple flows from the tutorials section, so I think I can manage it. I keep falling into the fallacy that Prefect is a workflow system where I can just push the code and let it run, and I keep forgetting that the code needs to be hosted somewhere and pulled by a worker, or served directly by another container, etc. Prefect is more like an orchestrator for tracking status from a single location.
Nate:
> I wish there was a way to add a flow to a running container

this is one virtue of the work pool paradigm. if you have a docker work pool, you can just have that running in a container:

```bash
prefect worker start --pool my-docker-pool
```

and then you can write and deploy flows from anywhere and assign them to that work pool. the worker doesn't need to know anything about which deployments it can run - it's just any deployments using that work pool at a given point in time

> keep forgetting that the code needs to be hosted somewhere

yep! prefect doesn't know anything about your source code 🙂 except where it lives. but! you can bake your flow code into your images in a situation where you have dynamic infra (ecs, docker, k8s work pool etc)
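Deploying a flow to that pool from anywhere might look like the sketch below; the pool name matches the command above, while the flow, image name, and schedule are made-up assumptions (this takes the "bake your flow code into your images" route):

```python
# deploy_to_pool.py - illustrative sketch, assuming the flow code is baked into the image
from prefect import flow

@flow
def low_stock_alert():
    ...

if __name__ == "__main__":
    low_stock_alert.deploy(
        name="low-stock-alert",
        work_pool_name="my-docker-pool",      # the pool the worker above is polling
        image="my-registry/my-flows:latest",  # assumed image with flow code and deps baked in
        build=False,                          # assume the image was built separately
        push=False,                           # and is already available to the docker host
        cron="0 * * * *",
    )
```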
Ihsan Topaloglu:
Oh yeah, @Nate keeps delivering great solutions. Here's how I imagine things would work; I hope you can tell me it's a feasible approach. I can create a docker setup with an orchestrator, a work pool, and a location to store the python code that can be accessed by the workers, either a volume or object storage (minio maybe). As long as my code dependencies (pandas, sqlalchemy, etc.) don't change, I should be able to add new flows and deploy them into the work pool. Thinking of dependencies, now I see why there's an option to create a new docker container for each deployment: at a particular scale, that's the most efficient way. My use case is more like pulling some data from a database and sending slack notifications on new rows, or doing some conditional checks for orders below a certain quantity. Let's see what I can do with the scaled-down approach. Thank you for your time, Nate!
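For the "add new flows and deploy them into the work pool" part, one hedged sketch, assuming the code lives in a git repo (an s3-compatible bucket like minio could play a similar role via Prefect's remote storage support); the repo URL, entrypoint, image, and names are all made up:

```python
# add_flow.py - illustrative only; run from anywhere that can reach the Prefect API
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/my-org/my-flows.git",
        entrypoint="flows/low_stock.py:low_stock_alert",
    ).deploy(
        name="low-stock-alert",
        work_pool_name="my-docker-pool",
        image="my-registry/prefect-deps:latest",  # assumed base image with pandas/sqlalchemy preinstalled
        build=False,                              # no new image; the code is pulled from source at run time
        cron="0 8 * * *",
    )
```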
Nate:
that sounds like a good approach! in case it's useful, there's a special env var you can set, e.g.

```python
# kwarg in .deploy or run_deployment
job_variables=dict(env=dict(EXTRA_PIP_PACKAGES="pandas numpy"))
```

to install packages on top of your provided image in a bind. but yes, i agree - new images for individual deployments with different deps would be best if possible; there can be weirdness with installing packages at runtime.
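For context, a hedged sketch of the same kwarg used on a per-run basis (the deployment name is made up):

```python
# illustrative: override the runtime env for a single run of an existing deployment
from prefect.deployments import run_deployment

run_deployment(
    name="low-stock-alert/low-stock-alert",
    job_variables={"env": {"EXTRA_PIP_PACKAGES": "pandas sqlalchemy"}},
)
```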
> as long as my code dependencies … don't change
the docker worker is nice because it doesn't care at all about the flow run's python runtime deps; it just has to be able to submit the run as a container on the host docker engine. so even if your dependencies do change, that one worker can submit them (in contrast to a process worker, which needs all of them installed)