# ask-community
j
We are attempting to move our infrastructure into docker containers… Currently we just run our prefect worker on the same server as the Django application where all the code resides. I have already moved from Agent to Worker (on the server). When building a docker container and setting the work pool type to `docker`, it seems like it is going to want to spin up its own docker container within the container. I don't need it to spin up another container within itself. Am I missing something?
k
the work pool type is about what kind of execution you want for your flow runs, so it'll expect to boot up the image you specify for each flow run
could it be you want to run a process type worker, just inside a container?
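A "process worker, just inside a container" could look something like the sketch below. This is only an illustration, not anything from the thread: the base image tag, the `requirements.txt` file, and the `my_process_pool` pool name are all assumptions.

```dockerfile
# Sketch only: image tag, paths, and pool name are assumptions.
FROM prefecthq/prefect:2-python3.11

# A process worker runs flows in-place, so the image needs every
# dependency any potential flow might use.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . /opt/app
WORKDIR /opt/app

# Start a worker that polls a *process* work pool and executes each
# flow run as a subprocess inside this same container.
CMD ["prefect", "worker", "start", "--pool", "my_process_pool"]
```

With this setup, no extra containers are created per flow run; each run is just a child process of the worker.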
j
Well we’ve been running a process type worker… so that’s a good question. Where would it spin up the docker image if we went with a pre-built image?
k
if you had a container that was running a docker worker, each flow run would be a container inside that container
which imo is perfectly valid
j
If you were taking our process worker from the current method and "dockerizing" it… would you want Prefect to spin up a container for each run, or just run the job as a process within an already running container?
I know that may be a question for our architect… but I'm not sure he has the answer, as our company was just brought on board and they are trying to save on AWS costs by moving us over to Azure, where they run containers for everything.
As in our company was bought out … and we’re being moved over to the new owner’s existing infrastructure for his other company.
I hope that all makes sense.
k
my personal preference is to have images for my flows so their requirements are well defined and isolated. but if you think a process worker with all the requirements for any potential flow is sufficient, you don't need to change much aside from the fact that your worker is now running inside a container
j
We run a daily scheduled flow (which runs any number of flows it finds within the directory structure that matches the maintenance keys) … we run some flows on demand from our web interface.
Mostly using this to offload tasks from the web to an outside process and it uses the full code from the web worker. We used the process worker on a secondary web server initially.
So for now, our process and needs for the worker are simple. It has all the pre-requisites to run the full web application. It's just doing maintenance or out-of-web-request work whose results are then sent via email.
Kinda like a job queue…
So … I think to get toward your preference, which would allow us in future to run other Prefect flows that may not be directly related to our web application… I would need to ensure that my new docker container also installed docker, it would run `prefect worker … my_docker_pool` as its command … then when Prefect Cloud indicated there was work available, the `worker_container` would spin up a new container within itself to do the actual work?
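One note on the setup described above, as a hedged sketch rather than a prescription: the worker container usually doesn't need a full Docker install inside it so much as access to a Docker daemon. A common pattern is mounting the host's Docker socket, in which case flow-run containers are started as siblings on the host rather than truly nested containers. The image name below is hypothetical; `my_docker_pool` comes from the thread.

```shell
# Sketch: let a containerized docker worker use the host's daemon by
# mounting the Docker socket (flow-run containers become siblings,
# not containers-within-the-container).
# "my_worker_image" is a hypothetical image name.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e PREFECT_API_URL="$PREFECT_API_URL" \
  -e PREFECT_API_KEY="$PREFECT_API_KEY" \
  my_worker_image \
  prefect worker start --pool my_docker_pool
```

Whether socket-mounting is acceptable is a security call for the architect; it grants the worker container broad control over the host daemon.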
I feel like that might be going a bit deep for our initial use-case but trying to understand the options so I can run that by our architect…
n
@Jarvis Stubblefield you might be interested in using `uv` in a `run_shell_script` `pull` step, or just running the environment ephemerally for each flow. uv docs are good (in general!) at installing different sets of requirements and managing them centrally
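For concreteness, the suggestion above might look something like this in the `pull` section of a `prefect.yaml`. This is a sketch under assumptions: the repository URL, step `id`, and requirements file are hypothetical; `git_clone` and `run_shell_script` are real Prefect deployment steps.

```yaml
# Sketch of a prefect.yaml pull section (repo URL and names are hypothetical).
pull:
  - prefect.deployments.steps.git_clone:
      id: clone-step
      repository: https://example.com/our-org/our-repo.git
  - prefect.deployments.steps.run_shell_script:
      directory: "{{ clone-step.directory }}"
      # uv installs the flow's requirements into the run environment
      script: |
        uv pip install -r requirements.txt
```

This keeps per-flow requirements out of the worker image itself: each flow run pulls its code and installs what it needs at run time.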
j
Well I have a `prefect worker` running within a docker container. It should grab the work and run it in a local process (of the docker container?) … we have been doing this on our secondary prod server… this was simple and worked well. Now when I'm trying to use `from prefect.deployments import Deployment` I'm getting a warning that `Deployment` is not in the `__all__` declaration.
@Nate I'll peek into `uv` … I've not seen it before. Just trying to keep this part simple for now, and that seems like another tool to learn, etc.
We are using Prefect differently than most… we realize.