# ask-marvin
a
@Marvin is it possible to have dynamic concurrency limits on subflows being called using run_deployment?
m
To assist you with the question about dynamic concurrency limits on subflows being called using `run_deployment`, I need to determine the version of Prefect you're using. Could you please specify your Prefect version, or indicate whether you are using Prefect 2.x or 3.x? This will help me provide you with the most relevant information and resources.
a
@Marvin using prefect 2.
m
Currently, there isn't direct information specifying dynamic concurrency limits for subflows invoked using `run_deployment` in Prefect 2.x. The documentation describes how you can use `run_deployment` to treat deployments as upstream dependencies, which allows for orchestration and sequencing of tasks or flows (Prefect Docs).

Concurrency within Prefect is generally controlled at the level of the agent, where you can configure work queues to maintain a certain parallelism, but this applies to the overall task execution environment rather than dynamically to individual subflows or tasks within a flow. If you need dynamic control over concurrency for subflows, a potential workaround might involve using custom logic within tasks to manage queuing or control execution counts, although these would be custom implementations rather than built-in Prefect capabilities.

In the absence of issues or explicit documentation addressing this feature, I would recommend checking Prefect's forum or submitting an inquiry for a more tailored solution depending on your specific use case. Let me know if there's anything else you'd like help with!
n
hi @Aj Floersch - I think you could just set a concurrency limit on the deployment that you're triggering, as long as you have a recent enough version of Prefect where deployment concurrency limits are present. or what exactly do you mean by dynamic?
a
@Nate what Marvin suggested regarding building in some custom logic is generally what I expected; just figured I'd ask. I'm dealing with a scenario where the integration I'm writing will generally need to write approx. 500 records from one system to another every 15 minutes. However, there are spikes where sometimes it will need to process 50K within that window. I want to test an approach where under normal conditions it will only kick up 10 subflows to process, but at peak it will kick up 50 (max) subflows. I guess that's where I was hoping there might be some concurrency parameter I could modify on the fly, but I figured it might just be that I need to come up with a custom way to control the subflow triggering.
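A custom approach along the lines Aj describes could be sketched in plain Python: scale the number of subflows with the size of the backlog, between a floor and a hard cap. The constants mirror the numbers in Aj's message (500 records → 10 subflows, 50K → 50 subflows), and the names (`RECORDS_PER_SUBFLOW`, `plan_subflows`) are illustrative, not part of any Prefect API.

```python
import math

# Illustrative tuning knobs chosen to match the scenario above;
# these are not Prefect settings.
RECORDS_PER_SUBFLOW = 50  # target batch size per subflow
MIN_SUBFLOWS = 10         # floor under normal conditions
MAX_SUBFLOWS = 50         # hard cap even at peak load

def plan_subflows(records: list) -> list:
    """Split the backlog into batches, one batch per subflow to trigger.

    The batch count grows with the backlog but is clamped to
    [MIN_SUBFLOWS, MAX_SUBFLOWS]; each run_deployment call would
    then receive one batch.
    """
    if not records:
        return []
    wanted = math.ceil(len(records) / RECORDS_PER_SUBFLOW)
    n = max(MIN_SUBFLOWS, min(MAX_SUBFLOWS, wanted))
    size = math.ceil(len(records) / n)
    return [records[i:i + size] for i in range(0, len(records), size)]
```

With 500 records this yields 10 batches of 50; with 50,000 records it yields the capped 50 batches of 1,000 each.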
n
hmm ok, what I might do based on that description is set up a `with concurrency(...):` global concurrency limit (GCL) context manager in the flow you're triggering, and then pass `occupy: int` as a new parameter from your caller depending on whether you're in a "spiky time" or not, like `run_deployment(..., parameters={..., "n_slots_to_occupy": 42})` etc
(all of the below is perhaps unsolicited overthinking 🙂) erm, actually that would only work if you don't need to change resource allocations for the deployment you're triggering, like cpu / mem requests ... actually in that case you should be able to control what you need by passing dynamic values to `job_variables` as well
a
@Nate awesome - I will check this out. Thank you!