Kyle Clark
01/21/2025, 10:49 PM

Marvin
01/21/2025, 10:49 PM
prefect work-pool create CLI command with the --provision-infra flag, which provisions the necessary serverless cloud resources. This feature requires the relevant cloud provider's CLI to be installed and authenticated. Explore infrastructure examples.
3. Concurrency and Managed Execution: Prefect’s managed execution work pools have limitations based on your account tier, affecting the number of concurrent flow runs and managed execution work pools you can have. This is crucial for planning and right-sizing your infrastructure. Read more on managed infrastructure.
Given your preference for an imperative programming style, Prefect's flexibility in defining tasks and flows should align well with your coding approach. Remember that in Prefect 3.x, agents have been replaced by workers, and work pools replace the infrastructure blocks from earlier versions.
If you have more specific questions or need further assistance, feel free to ask!

Kyle Clark
01/21/2025, 10:52 PM

Nate
01/21/2025, 11:02 PM
01/21/2025, 11:02 PM@flow
def orchestrate_all_the_things(params: ...):
flow_run_a = run_deployment("flow-a/deployment-a", parameters=dict(...)
flow_run_b = run_deployment("flow-b/deployment-b", parameters=dict(...))
flow_run_c = run_deployment("flow-c/deployment-c", parameters=dict(...))
key thing is that run_deployment triggers the work remotely in whatever infra is implied by each deployment's work pool, and you can optionally add normal Python for conditionals and the like
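As a sketch of that conditional-orchestration pattern: the real prefect.deployments.run_deployment needs a running Prefect server, so the stand-in below just mimics its call shape; the deployment names and the row_count parameter are illustrative, not from this thread.

```python
# Stand-in for prefect.deployments.run_deployment: the real call submits
# a flow run to the deployment's work pool and returns a FlowRun object.
def run_deployment(name: str, parameters: dict) -> dict:
    return {"deployment": name, "parameters": parameters, "state": "Completed"}


def orchestrate_all_the_things(row_count: int) -> list[dict]:
    # Always kick off flow A on its deployment's infrastructure.
    runs = [run_deployment("flow-a/deployment-a", parameters={"rows": row_count})]
    # Plain Python conditionals decide whether downstream deployments run at all.
    if row_count > 0:
        runs.append(run_deployment("flow-b/deployment-b", parameters={"rows": row_count}))
    return runs
```

In the real flow, each run_deployment call executes on whatever infrastructure that deployment's work pool points at, while the orchestrator itself can run anywhere.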
• alternatively, you can declaratively chain these deployments with event triggers. for example
◦ flow B should run when flow A produces a flow-run.Completed event
◦ flow C should run when flow B produces a flow-run.Completed event AND flow A also emits flow-run.Completed
so in the latter world, you put a trigger on a deployment
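A sketch of what that declarative trigger could look like in a prefect.yaml (deployment names, entrypoints, the work pool, and the compound-trigger details here are all assumptions for illustration; note Prefect's emitted event names are prefixed, e.g. prefect.flow-run.Completed):

```yaml
deployments:
  - name: deployment-b
    entrypoint: flows.py:flow_b
    work_pool:
      name: my-work-pool
    triggers:
      # Run flow B whenever flow A completes.
      - type: event
        enabled: true
        expect:
          - prefect.flow-run.Completed
        match_related:
          prefect.resource.role: flow
          prefect.resource.name: flow-a
  - name: deployment-c
    entrypoint: flows.py:flow_c
    work_pool:
      name: my-work-pool
    triggers:
      # Run flow C only once BOTH flow A and flow B have completed
      # (compound trigger requiring all child triggers within a window).
      - type: compound
        require: all
        within: 3600
        triggers:
          - type: event
            expect:
              - prefect.flow-run.Completed
            match_related:
              prefect.resource.role: flow
              prefect.resource.name: flow-a
          - type: event
            expect:
              - prefect.flow-run.Completed
            match_related:
              prefect.resource.role: flow
              prefect.resource.name: flow-b
```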
other resources you might find useful
• template repo w boilerplate for deploying
• getting started yt series

Kevin Grismore
01/21/2025, 11:07 PM
run_deployment() calls can also accept a job_variables dict where you can specify those per the specification of your work pool type. So if your second deployment was determined to need 4GiB of memory just before it's kicked off, you could do something like
ecs_mem_allocation = determine_dataset_size()  # returns 4096, for example
flow_run_b = run_deployment("flow-b/deployment-b", parameters=dict(...), job_variables={"memory": ecs_mem_allocation})
Kyle Clark
01/21/2025, 11:21 PM