# ask-community
v
Hi there - getting up to speed on setting up a local test server. I have my flows running successfully, but confused about deployment and workers. In the docs for v2, I don't see build and apply for deployments being used. Here is what I have now - has this changed or is there a newer way to do this?
prefect deployment build test.py:test_flow -n test-deployment
prefect deployment apply test-deployment.yaml
prefect worker start -t process -p local-pool
n
hey @Victor - there have been some changes. basically `prefect.yaml` is the way to declaratively define deployments, which you then deploy with `prefect deploy` (as opposed to `deployment.yaml` + `prefect deployment build / apply`) https://docs.prefect.io/latest/guides/upgrade-guide-agents-to-workers/?h=age if you wanna see some examples
psst also `prefect init` in a fresh repo will be helpful
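For reference, a minimal `prefect.yaml` along those lines might look like this - a sketch only, where the project name, entrypoint, and pool name are placeholders rather than anything from the thread:

```yaml
# prefect.yaml (v2-style declarative deployment definition)
name: my-project          # placeholder project name
prefect-version: 2.16.5

deployments:
  - name: test-deployment
    entrypoint: test.py:test_flow   # path/to/file.py:flow_function
    work_pool:
      name: local-pool              # the process work pool a worker listens to
```

With that in place, `prefect deploy` from the repo root registers the deployment against the server.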
v
@Nate So something like this? Do I need to create the work pool in the UI first?
prefect init --recipe local 
prefect deploy --prefect-file prefect.yaml --name test-deployment
prefect worker start -p <work pool name>
n
you don't need to create the work pool in the UI first, you can just `prefect worker start --type process --pool my-pool` but that pool would be created with all defaults, so only if that's what you want. the UI is nice for tweaking config on the work pool later but yeah
`prefect init` is just a nice way to bootstrap a repo so you don't find yourself worrying about yaml formatting so much
v
All set. Now testing scheduling. If I set up a schedule, then close my terminal where the worker process is running, will this interrupt the schedule?
n
scheduler is server side, worker is client side. if you shut off a worker, scheduled runs will pile up
which is a use case for automations, e.g. if a work pool becomes unhealthy, mark a deployment as inactive (i.e. stop scheduling new runs)
v
And automations are only available in the cloud version, correct? So what should I do to keep work pools active locally?
So to run locally looks like I have to have both the server and workers running constantly?
n
in general yeah, if you need to run work at any time and you're not using cloud, you'd need to run a server and at least 1 worker all the time
unless you want to use `.serve()`, in which case you just need to keep that process alive
and just to be clear, work pools are just config stored on the server about how deployments run. workers are the processes that you need to keep alive, which submit flow runs from deployments according to the config they find on the work pool
v
@Nate I am still a bit unclear about the worker processes running locally which need to be kept alive. I read the instructions on how to set up a systemd service (I am on an Ubuntu server). However, it requires using a work pool name. Does this assume there is just one main work pool for all locally running workers, which will use that single systemd worker service? Or is it required to create a separate systemd service for every single local work pool if there are multiple pools?
e
Hey @Nate I too had an old deployment like @Victor, but mine was using v2.10.x. Now on v2.16.5, I looked at the docs and they were missing the old process I was used to. I tried the new process of xxx.serve() but it half installed the deployment and I had to go into the database to make it right. From what I can see, nowhere in the new guides/docs do they mention yaml as the way to install deployments. https://docs.prefect.io/latest/tutorial/deployments/
n
sorry i lost track of this thread
> Does this assume there is just one main work pool for all local running workers which will use that single systemd worker service?
in general, workers listen to one work pool. you can have many workers listening to the same pool, but often that's not necessary in my experience. work pools should be 1:1 with the unique infrastructure types that you execute flows in, so if you have some VM running a process worker via systemd, you should have 1 work pool that represents that runtime environment. you can do job variable overrides to give different deployments different config while reusing the same work pool
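A systemd unit for that single-pool setup might look roughly like this - a sketch only, where the service account, paths, API URL, and pool name are placeholders and not taken from the Prefect docs:

```ini
# /etc/systemd/system/prefect-worker.service (hypothetical path)
[Unit]
Description=Prefect process worker for local-pool
After=network.target

[Service]
User=prefect                       ; placeholder service account
Environment=PREFECT_API_URL=http://127.0.0.1:4200/api
ExecStart=/usr/local/bin/prefect worker start --type process --pool local-pool
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now prefect-worker`; following the 1:1 advice above, you would add one unit per work pool only if you later add more pools.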