
Chris Gunderson

08/02/2023, 6:48 PM
Hi Prefect Team - We are able to start a Docker container on our Linux box to act as an agent. The agent starts and the work queue can be started as well. Should we start this Docker container from systemd so that it restarts if it crashes?
I attempted this, but it failed to start the docker container
[attachment: image.png]
Is that the proper way to start the container, or should we start it like this, with --restart always?
sudo docker run -it --restart always prefecthq/prefect:2.10.20-python3.10 /bin/sh -c "prefect cloud login -k *****; prefect agent start -q dev-agent-ny4"
I think this will work when creating a work pool and worker:
sudo docker run -t -d --restart always prefecthq/prefect:2.10.20-python3.10 /bin/sh -c "prefect cloud login -k pnb_******; prefect worker start -t docker -p dev-test-ny4-worker --install-policy if-not-present"
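As an aside, one thing that can trip up docker run when launched from systemd is the -it combination, since there is no TTY attached in that context. A minimal sketch of a detached, non-interactive variant, assuming the API key and workspace are supplied through the PREFECT_API_KEY and PREFECT_API_URL environment variables instead of prefect cloud login (the account/workspace IDs and key are placeholders, not values from this thread):

    sudo docker run -d --restart unless-stopped \
      -e PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-id>" \
      -e PREFECT_API_KEY="pnb_******" \
      prefecthq/prefect:2.10.20-python3.10 \
      /bin/sh -c "prefect worker start -t docker -p dev-test-ny4-worker --install-policy if-not-present"

Note that a Docker-type worker still needs access to a Docker engine to submit flow runs, which comes up later in this thread.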

Nate

08/02/2023, 9:11 PM
hi @Chris Gunderson - this might be helpful. Yeah, systemd is typically what we recommend if you're running a worker (or agent) on a VM. If you're just getting set up, I'd recommend a worker, as they are more configurable and designed to be the next-gen replacement for agents.
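For reference, a minimal sketch of a systemd unit for a worker running directly on the VM; the service name, prefect executable path, and API URL/key values are illustrative assumptions, not taken from this thread:

    # /etc/systemd/system/prefect-worker.service
    [Unit]
    Description=Prefect worker
    After=network-online.target
    Wants=network-online.target

    [Service]
    Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-id>
    Environment=PREFECT_API_KEY=<api-key>
    ExecStart=/usr/local/bin/prefect worker start -t docker -p dev-test-ny4-worker
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

Then sudo systemctl daemon-reload followed by sudo systemctl enable --now prefect-worker would start the worker and restart it if it crashes.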

Chris Gunderson

08/03/2023, 12:42 PM
@Nate Thanks, Nate. I was using this example. I wanted to use a Docker container so that we can point to two different workspaces on the same Linux box.

Nate

08/03/2023, 4:08 PM
i think you could have two different systemd services, each with a different API URL, that run Docker workers. So they'd be regular systemd processes, but the flow runs they submit (for their respective workspaces) would be containers.
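A sketch of how the two services might differ, assuming the unit file above and that each workspace has its own API URL, key, and work pool (all placeholders):

    # /etc/systemd/system/prefect-worker-workspace-a.service
    [Service]
    Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-a-id>
    Environment=PREFECT_API_KEY=<key-for-workspace-a>
    ExecStart=/usr/local/bin/prefect worker start -t docker -p <pool-in-workspace-a>

    # /etc/systemd/system/prefect-worker-workspace-b.service
    [Service]
    Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-b-id>
    Environment=PREFECT_API_KEY=<key-for-workspace-b>
    ExecStart=/usr/local/bin/prefect worker start -t docker -p <pool-in-workspace-b>

Both workers would talk to the same local Docker engine; only the workspace each one polls differs.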

Chris Gunderson

08/03/2023, 7:51 PM
@Nate I tried that, but got the failures above.

Nate

08/03/2023, 8:27 PM
I don't often see people run the worker process itself as a container like that - is there a reason you're doing that? Because you have
prefect worker start -t docker
, each flow run will be submitted as a new container to the Docker engine available to the worker process, so I'm actually not sure how that would work if the worker process itself is running as a container. Usually, running the worker as a systemd process that submits flow runs as containers to the VM's Docker engine is what I see people doing.
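For what it's worth, when people do run a container-submitting process inside a container, the common workaround (not specific to Prefect) is to mount the host's Docker socket so that the containers it starts land on the VM's Docker engine; a rough sketch, with placeholder values as above:

    sudo docker run -d --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e PREFECT_API_URL=<workspace-api-url> \
      -e PREFECT_API_KEY=<api-key> \
      prefecthq/prefect:2.10.20-python3.10 \
      /bin/sh -c "prefect worker start -t docker -p dev-test-ny4-worker --install-policy if-not-present"

That said, the systemd-managed worker described above is the simpler and more commonly seen setup.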