Nimesh Kumar

01/20/2023, 12:27 PM
"There is no limit to the number of concurrent flow runs or task runs you can have." Prefect supports unlimited concurrent execution of your flows. I have set up Prefect on a local machine with this configuration:
processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
stepping        : 9
microcode       : 0xea
cpu MHz         : 800.086
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 6
initial apicid  : 6
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
My questions:
1. Does the number of concurrent flows depend on hardware configuration (threads, cores, etc.)?
   a. If not, on what basis can I select the number of concurrent flows for smooth processing?
   b. Will there be any case where I get an out-of-memory error?
   c. Also, how can I select the number of parallel flows?
Thanks!

Giuliano Mega

01/20/2023, 12:39 PM
DISCLAIMER: I don't work for Prefect, but I'll try to answer this based on what I've seen.

a. Prefect does not attempt to cap concurrency based on hardware. That's a hard call to make because resource intensiveness is not determined by how many flows are running, but by the resources each flow run requires. Some flows are I/O-bound and may run on async libraries, meaning you can probably run thousands on a single core. Others are memory- and CPU-hungry, so you may only be able to run one per core. Figuring out your workload type and setting parameters accordingly is not part of Prefect's scope for now. It may be possible, to some extent, to delegate resource scaling to the underlying runtime with something like GKE Autopilot, but my experience with that hasn't been great so far.

b. Yes. Again, it depends on what you're running, but if you go overboard with memory usage your process might end up being killed by the kernel or by the runtime you're using (e.g. if you're spawning a JVM or something similar).
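To illustrate the I/O-bound point with a generic (non-Prefect) sketch: when each "flow" mostly waits, an event loop can overlap thousands of runs on one core. The function names here are hypothetical placeholders, and the sleep stands in for a network or disk call.

```python
import asyncio
import time

# Hypothetical I/O-bound "flow": it mostly waits (network, disk),
# so many runs can share a single core.
async def io_bound_flow(i: int) -> int:
    await asyncio.sleep(0.01)  # stand-in for an HTTP call or DB query
    return i

async def run_many(n: int) -> list:
    # All n runs wait concurrently; total wall time stays near 0.01 s,
    # not n * 0.01 s, because the event loop overlaps the waits.
    return await asyncio.gather(*(io_bound_flow(i) for i in range(n)))

start = time.perf_counter()
results = asyncio.run(run_many(1000))
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 2))
```

A CPU-bound flow would get no such benefit: with 4 physical cores, roughly 4 such runs can make progress at once, which is why the workload type matters more than the run count.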

Nimesh Kumar

01/20/2023, 12:49 PM
Hi, thanks for your prompt response. If I understand correctly, you mean "the less complex the flow, the better the concurrency", is that right? This is roughly my flow on Prefect: receive_request -> unzip -> inferencing -> receive_result_from_inferencing -> send the result back to where it came from. In your view, how many concurrent runs of this flow can be handled?
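(For reference, the stages above could be sketched as plain Python functions; the function names and payloads are hypothetical stand-ins, not the actual deployment code.)

```python
import io
import zipfile

def receive_request(payload: bytes) -> bytes:
    return payload  # e.g. the zipped image bundle from the caller

def unzip(zipped: bytes) -> list:
    # Extract every member of the uploaded archive into memory
    with zipfile.ZipFile(io.BytesIO(zipped)) as zf:
        return [zf.read(name) for name in zf.namelist()]

def run_inference(images: list) -> list:
    # Stand-in for the real model call
    return [f"result-for-{len(img)}-bytes" for img in images]

def send_result(results: list) -> list:
    return results  # send back to where the request came from

def pipeline(payload: bytes) -> list:
    return send_result(run_inference(unzip(receive_request(payload))))
```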

Peyton Runyan

01/20/2023, 1:07 PM
Howdy! There's really no way of knowing that without some empirical testing. Try at different orders of magnitude (10, 100, 1000) and then once you've found where performance drops, start binary searching your way through the boundaries. If you find things get bad at 1000 but are good at 100, try 500. If 500 is bad, try 250, etc.
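The search procedure above can be sketched generically. Here `works_at` is a hypothetical stand-in for an empirical load test (run N concurrent flows, check the system stays healthy), assumed monotone: once a level fails, all higher levels fail too.

```python
def find_max_concurrency(works_at, low: int, high: int) -> int:
    """Binary-search the largest concurrency level in [low, high]
    at which the empirical check `works_at(n)` still passes."""
    best = low
    while low <= high:
        mid = (low + high) // 2
        if works_at(mid):
            best = mid        # mid is fine; try higher
            low = mid + 1
        else:
            high = mid - 1    # mid overloads; try lower
    return best

# Example: testing showed 100 runs are fine but 1000 crash, and
# (unknown to us) the real limit is 370.
print(find_max_concurrency(lambda n: n <= 370, 100, 1000))  # prints 370
```

In practice each probe is a real load test, so each one costs a full run at that level; the binary search just minimizes how many such runs you need.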

Nimesh Kumar

01/20/2023, 1:11 PM
Yes, this sounds good. Thanks, I will post the insights here.
Hey Peyton, I am in exactly the situation I mentioned above. I have created a deployment with a queue assigned to it. Now when I start inferencing for 50 images, the queue becomes unhealthy and crashes. Can you please help me with this?
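One generic mitigation (a sketch, not Prefect's API) is to cap in-flight work with a semaphore, so submitting 50 images never runs more than a tuned number of inferences at once. `infer_one` and the limit of 5 are hypothetical placeholders.

```python
import asyncio

MAX_IN_FLIGHT = 5  # tune empirically, as suggested above

async def infer_one(image_id: int, sem: asyncio.Semaphore) -> str:
    async with sem:                 # at most MAX_IN_FLIGHT run at once
        await asyncio.sleep(0.001)  # stand-in for the real inference call
        return f"result-{image_id}"

async def infer_all(n_images: int) -> list:
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    return await asyncio.gather(*(infer_one(i, sem) for i in range(n_images)))

results = asyncio.run(infer_all(50))
print(len(results))
```

The idea transfers to an orchestrator: all 50 runs are accepted immediately, but only a bounded number consume memory/CPU at any moment, which is usually what keeps a worker from being killed under load.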