Hey Adam! We're currently evaluating different ways to implement this. There are a few use cases we want to make sure we either address or intentionally decide not to address, e.g. should subsequent flow runs block if one is already running, or fail immediately?
For the moment, you can use global concurrency limits to limit parallelism, e.g.:
    import asyncio
    import random

    from prefect import flow
    from prefect.concurrency.asyncio import concurrency

    GLOBAL_CONCURRENCY_LIMIT_NAME = "only-one-flow"

    @flow
    async def my_flow():
        # Occupy one slot on the shared limit so only one run does this work at a time.
        async with concurrency(GLOBAL_CONCURRENCY_LIMIT_NAME, occupy=1):
            for x, y in [(1, 2), (2, 3), (3, 4), (4, 5)]:
                await asyncio.sleep(random.randint(0, 5))
            return x + y
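If it helps to see the limiting in action, here's a minimal sketch that kicks off several runs of that flow concurrently from one process (assuming the limit is active with a single slot); the runs should end up executing the limited section one at a time:

    import asyncio

    async def main():
        # Start three runs at once; a single-slot limit serializes the guarded section.
        await asyncio.gather(*(my_flow() for _ in range(3)))

    if __name__ == "__main__":
        asyncio.run(main())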
Just be aware that the concurrency limit in this example is implicitly created in an
inactive state. You can activate it from the UI.
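If you'd rather not click through the UI, recent Prefect versions also expose a `prefect gcl` CLI for managing global concurrency limits. Something along these lines should work, but treat the exact subcommands and flags as assumptions and check `prefect gcl --help` on your version:

    # Create the limit explicitly up front with a single slot
    prefect gcl create only-one-flow --limit 1

    # Or enable one that was created implicitly in an inactive state
    prefect gcl enable only-one-flow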