Jessica Smith
01/15/2024, 6:33 PM
Is there a way to set the `limit` parameter of the `Runner` class when using `serve`? It seems like `Runner` accepts this parameter, but there is nowhere to actually provide a value for it.

Marvin
01/15/2024, 6:33 PM
I wasn't able to find specifics on the `limit` parameter of the `Runner` class when using the `serve` method in the Prefect 2.x documentation. The search results provided information on general deployment and API rate limits rather than specifics on configuring the `Runner` class.
To get a precise answer, it may require going through the API reference for Prefect 2.x or looking at the source code for the `Runner` class in the Prefect GitHub repository. Since this feature seems to be poorly documented, another efficient way to find an answer might be looking at the GitHub issues for Prefect to see if others have encountered the same question and found a solution.
Shall I look it up for you in the GitHub issues?

Jessica Smith
01/15/2024, 6:35 PM

Kevin Grismore
01/15/2024, 6:41 PM
There's a `PREFECT_RUNNER_PROCESS_LIMIT` env var or setting in your prefect profile: https://docs.prefect.io/latest/api-ref/prefect/settings/?h=settings#prefect.settings.PREFECT_RUNNER_PROCESS_LIMIT
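A minimal sketch of one way that setting might be applied, assuming the value is read from the environment when the serve process starts; the limit of 4 and the flow/deployment names are placeholders. Exporting the variable in the shell (or using `prefect config set PREFECT_RUNNER_PROCESS_LIMIT=4`) before launching the script is the more reliable route.
```python
import os

# Assumption: Prefect reads PREFECT_RUNNER_PROCESS_LIMIT when its settings are
# loaded, so the env var is set here before prefect is imported.
os.environ["PREFECT_RUNNER_PROCESS_LIMIT"] = "4"

from prefect import flow, serve


@flow
def my_flow():
    ...


if __name__ == "__main__":
    # The runner behind serve() caps concurrent flow run processes at the
    # configured limit.
    serve(my_flow.to_deployment("my-deployment"))
```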
Jessica Smith
01/15/2024, 6:46 PM
Using `serve` has been the easiest way to migrate without having to add those complications. But I can't find a way to do flow concurrency with `serve`, which we make use of a lot in V1.
Setting the runner limit at the serve level wouldn't be quite the same as what we have in V1, since it wouldn't let you combine concurrency limits, but it would work for us in the meantime.
I thought the global concurrency would have worked, but when I use that as a context manager in the flow body it just enters and exits immediately, I assume due to how tasks are submitted, so that option didn't work either.
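For illustration, a rough sketch of the context-manager pattern described above, assuming the `prefect.concurrency.sync.concurrency` helper and a global concurrency limit named `flow-slots` created beforehand (the limit name and the task are placeholders). Because `submit()` returns a future immediately, the slot is held only while the task is being submitted, not while it runs.
```python
from prefect import flow, task
from prefect.concurrency.sync import concurrency


@task
def work():
    ...


@flow
def my_flow():
    # The context manager only guards the code between enter and exit.
    # work.submit() returns right away, so the concurrency slot is released
    # as soon as submission completes rather than when the task run finishes.
    with concurrency("flow-slots", occupy=1):
        work.submit()


if __name__ == "__main__":
    my_flow()
```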
Kevin Grismore
01/15/2024, 6:47 PM

Jessica Smith
01/15/2024, 6:54 PM
One issue was that `flow.deploy` by default creates a docker image to run the flow in. I didn't see any guidance on how to use `.deploy` when wanting to just use a process worker, and the documentation specifically says you should probably just use `.serve` instead.
I think the other issue I ran into was that the code is stored locally, and when I tried `.deploy` it was failing to find my code, as it was taking the relative path and adding it to some other default path. I'll have to look into it again to get the specifics.
For reference, this is what I was reading when trying to use `.deploy`: https://docs.prefect.io/latest/guides/prefect-deploy/ I'd love if there was more guidance on how to use `.deploy` with just processes, since `.serve` is somewhat limited.

Kevin Grismore
01/15/2024, 6:55 PM
`.deploy` doesn't currently support process work pools, unfortunately. The path forward I'd recommend for now is deploying with a `prefect.yaml` and `prefect deploy` when using a process work pool.
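For reference, a hypothetical minimal `prefect.yaml` along those lines; the project name, entrypoint, and work pool name are placeholders, and the pool is assumed to be a process-type work pool created beforehand.
```yaml
# Hypothetical minimal prefect.yaml for deploying to a process work pool with
# `prefect deploy`; all names and paths below are placeholders.
name: my-project
prefect-version: 2.14.0

deployments:
  - name: my-deployment
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-process-pool
```
Running `prefect deploy` from the project root registers the deployment; a worker started with `prefect worker start --pool my-process-pool` would then pick up runs as local processes.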
Kevin Grismore
01/15/2024, 6:59 PM
`.serve` and a process worker are similar, but they don't have exactly the same features and control mechanisms. We're having discussions right now about how to present this information effectively, and what we can do to best support use cases like yours.

Nate
01/15/2024, 8:03 PM
You can pass `limit=<some int>` as a kwarg to `serve(*deployments)` (as opposed to `some_specific_flow.serve`), like this:
```python
from time import sleep

from prefect import flow, serve


@flow
def my_flow():
    sleep(10)


if __name__ == "__main__":
    serve(my_flow.to_deployment("my-deployment"), limit=1)
```
where i'll see some warning in the logs if i kick it off a bunch:
14:02:15.703 | INFO | prefect.runner - Flow run limit reached; 1 flow runs in progress. You can control this limit by adjusting the PREFECT_RUNNER_PROCESS_LIMIT setting.
Kevin Grismore
01/15/2024, 8:13 PM

Nate
01/15/2024, 8:16 PM
The limit is enforced by the `Runner` instance responsible for serving the flow(s).
So here:
> ran this multiple times on the same machine
as long as you mean "this script" by "this", you'd have a separate runner for each serve process and the limits would be enforced independently - does that answer your question?

Kevin Grismore
01/15/2024, 8:16 PM

Kevin Grismore
01/15/2024, 8:17 PM

Jessica Smith
01/16/2024, 1:00 PM
I hadn't noticed the `**kwargs` argument for serve, but that looks like exactly what I was looking for. Thanks! I'll start with this and go from there.