Tom Klein

08/30/2023, 9:57 PM
is this really the best way to get the size of some work_queue from within a flow?
    from httpx import RequestError
    from prefect import get_client
    from prefect.exceptions import ObjectNotFound

    try:
        async with get_client() as client:
            queue = await client.read_work_queue_by_name("redacted", work_pool_name="redacted2")
            print(f"Concurrency limit: {queue.concurrency_limit}")
    except (ObjectNotFound, RequestError) as e:
        logger.warning(f"Could not read the concurrency limit, or it does not exist: {e}")
the motivation is that we’re launching subflows (async’ly), and we’d prefer not to launch more of them concurrently than the work_queue allows, so that we don’t unnecessarily create 100-200 “late” flow runs. basically we’d like to treat the queue’s concurrency limit like a semaphore. Does that make sense? are we missing something? is there some other way to prevent hundreds of “late” flow_runs?
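One way to sketch the “treat it like a semaphore” idea without any Prefect-specific machinery: gate the launches with an `asyncio.Semaphore` sized to the queue’s concurrency limit, so callers wait for a free slot instead of submitting runs that would sit in a “late” state. This is a hypothetical sketch; `launch_subflow` stands in for whatever call actually submits the subflow, and the limit of 5 is an assumed value you would read from the work queue as in the snippet above.

```python
import asyncio

# Hypothetical stand-in for the call that submits a subflow run.
# The tracker dict only exists to demonstrate that concurrency is capped.
async def launch_subflow(i: int, tracker: dict) -> None:
    tracker["active"] += 1
    tracker["peak"] = max(tracker["peak"], tracker["active"])
    await asyncio.sleep(0.01)  # simulate the subflow doing some work
    tracker["active"] -= 1

async def launch_all(n: int, limit: int) -> dict:
    # Semaphore sized to the (assumed) work-queue concurrency limit.
    sem = asyncio.Semaphore(limit)
    tracker = {"active": 0, "peak": 0}

    async def gated(i: int) -> None:
        async with sem:  # wait for a slot rather than over-submitting
            await launch_subflow(i, tracker)

    await asyncio.gather(*(gated(i) for i in range(n)))
    return tracker

if __name__ == "__main__":
    stats = asyncio.run(launch_all(50, limit=5))
    print(f"peak concurrency: {stats['peak']}")
```

The trade-off is that the semaphore only throttles submissions from this one flow run; it doesn’t coordinate across separate flow runs the way a server-side queue limit does.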