<@ULVA73B9P> I am calling multiple deployments in ...
# ask-community
k
@Marvin I am calling multiple deployments in sequential order from my main deployment. Can I show the dependency between these sub-deployments in the UI?
m
thought for 179.6 seconds
Short answer: yes. If you trigger those sub-deployments from inside your main flow using run_deployment(..., as_subflow=True), the UI will show them as child runs of the main flow run. You'll see the parent/child relationship on the flow run page (Related runs and timeline). The UI does not currently draw explicit edges between sibling sub-deployments, but the parent/child lineage and execution order are visible.

How to do it
- Call each sub-deployment from your main flow with as_subflow=True (it's the default in Prefect 3.x, but setting it explicitly makes the intent clear).
- Call them sequentially to enforce the order you want.

Example:
from prefect import flow
from prefect.deployments import run_deployment

@flow
def orchestrator():
    # Step 1
    fr1 = run_deployment(
        "ingest/production", 
        parameters={"date": "2025-10-09"},
        as_subflow=True,
        flow_run_name="ingest-2025-10-09"
    )

    # Step 2 (runs after Step 1 completes)
    fr2 = run_deployment(
        "transform/production", 
        parameters={"date": "2025-10-09"},
        as_subflow=True,
        flow_run_name="transform-2025-10-09"
    )

    # Step 3
    fr3 = run_deployment(
        "load/production", 
        parameters={"date": "2025-10-09"},
        as_subflow=True,
        flow_run_name="load-2025-10-09"
    )
What you'll see in the UI
- On the orchestrator run page: a list of child runs (the three sub-deployments), plus a timeline showing their order.
- On each child run: a link back to its parent run.
- If you need a single DAG-like graph, consider modeling these as subflows/tasks within one flow (which is essentially what as_subflow=True gives you, but the graph between siblings is not drawn as edges).

Notes
- If you trigger deployments from outside a flow (e.g., with the API, CLI, or an Automation), there won't be a parent/child link in the UI; use run_deployment from within a running flow to preserve lineage.
- You can improve readability with flow_run_name and tags.
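For example, a small sketch of labeling a triggered run (the deployment name, run name, and tag values are placeholders):

from prefect.deployments import run_deployment

# Label the triggered child run so it's easy to spot in the UI
fr = run_deployment(
    "ingest/production",                # placeholder deployment name
    as_subflow=True,
    flow_run_name="ingest-2025-10-09",  # shows up as the child run's name
    tags=["daily", "ingest"],           # extra tags applied to the child run
)

Tags also make it easy to filter the child runs on the Runs page.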
Related references:
- Deployments API (3.x)
- Workflow design patterns blog

If you can share how you're invoking the sub-deployments today (inside a flow with run_deployment vs. externally), I can suggest the exact pattern to get the lineage you want in the UI.
k
@Marvin Is it possible to see this with arrow marks in the timeline graph?
m
thought for 72.5 seconds
Short answer: no. Prefect's timeline view doesn't draw dependency arrows/edges. It shows timing and grouping (e.g., parent/child runs), but not arrows between runs.

What you can use instead
- Graph tab (per flow run): shows arrows between tasks/subflows inside a single run. If you call other flows as subflows (or tasks) and chain them (e.g., pass one's output to the next), you'll see edges in this graph.
- Parent/child lineage: if you use run_deployment(..., as_subflow=True), the UI links runs through parent/child "Related runs", but it won't draw arrows between sibling sub-deployments on the timeline.

If you need true arrowed visualization between the sub-deployments:
- Model them as subflows/tasks within one orchestrator flow and pass results between them to create explicit edges in the Graph view (see the sketch at the end of this message).
- Or keep using deployments for isolation/infra and accept parent/child lineage without arrows on the timeline.

References:
- Deployments API (3.x)
- Workflow design patterns blog

If you describe the level of detail you want (task-level vs. run-level) and whether you must keep separate deployments, I can suggest the exact pattern to get the clearest graph.
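Here's the sketch mentioned above for the first option, i.e. modeling the stages as subflows of one orchestrator (the flow names and return values are made up purely for illustration):

from prefect import flow

@flow
def extract():
    # hypothetical first stage
    return {"rows": 10}

@flow
def transform(data: dict):
    # hypothetical second stage; receiving the previous result is what chains the two
    return len(data)

@flow
def pipeline():
    raw = extract()   # runs as a subflow of pipeline
    transform(raw)    # passing raw chains transform after extract

Each stage still gets its own flow run, but they all live inside a single parent run where the Graph tab can show the chain.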
k
@Marvin So I have a main deployment named prime-main-deployment. In this I will call a get_json deployment; if its status is completed, I will call the mainq1 deployment; if it runs successfully, I will call the mainq2 deployment; if it is successful, I will call my client deployment. Since there is a dependency (a deployment will run only after the previous one has run successfully), I want to show the graph in the UI: from get_json to the mainq1 deployment, mainq1 to the mainq2 deployment, and mainq2 to client_deployment.
m
thought for 105.6 seconds
Got it: you want the UI to show a clear chain, get_json → mainq1 → mainq2 → client, and only advance when the previous stage finishes successfully.

What Prefect can show
- Timeline: no arrows.
- Graph tab (on the main run): you can get arrowed edges if you model each stage as a node with an explicit upstream dependency.

Two ways to get those arrows:

A) Keep using deployments (recommended for your setup)
- Wrap each run_deployment call in a small task and pass the previous task's output into the next task. This creates explicit edges in the Graph of your main run.
- Use as_subflow=True so each stage also shows as a child run with lineage.

Example:
from prefect import flow, task
from prefect.deployments import run_deployment

@task
def trigger(deployment_name: str, params: dict | None = None, upstream_id: str | None = None):
    # upstream_id is only to create a dependency edge; not used otherwise
    fr = run_deployment(deployment_name, parameters=params, as_subflow=True)
    return str(fr.id)  # returning something lets the next call depend on this

@flow(name="prime-main-deployment")
def orchestrate():
    s1 = trigger("get_json/production")
    s2 = trigger("mainq1/production", upstream_id=s1)
    s3 = trigger("mainq2/production", upstream_id=s2)
    s4 = trigger("client/production", upstream_id=s3)
What you'll see:
- In the Graph tab of the main run: arrows s1 → s2 → s3 → s4.
- Each node corresponds to the "trigger" task; each also creates a linked child run (the actual deployment) you can click into.

Behavior notes:
- run_deployment(..., as_subflow=True) links the child run to the parent and waits for it to finish before continuing. If a stage fails, the main flow will stop and downstream stages won't start (see the sketch at the end of this message if you want to check the child run's state explicitly).

B) If you can call flows directly (not via deployments)
- Import the underlying flows and call them as subflows, passing each result to the next. This gives very clean arrows between subflow nodes in the Graph.
- Tradeoff: they'll run on the orchestrator's infra, not the infra attached to each deployment.

Docs/refs:
- Run deployments from flows (3.x)
- Workflow design patterns blog

If you want, share the names/parameters for each deployment and I'll tailor the orchestrator snippet exactly to your setup.
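And here's the state check mentioned in the behavior notes, in case you want a stage to fail loudly when its child run doesn't complete. This is a sketch: the task name and error message are made up, and it assumes the FlowRun returned by run_deployment carries its final state:

from prefect import task
from prefect.deployments import run_deployment
from prefect.client.schemas.objects import StateType

@task
def trigger_and_check(deployment_name: str, upstream_id: str | None = None) -> str:
    """Trigger a deployment, wait for it, and raise if the child run did not complete."""
    # upstream_id only wires the dependency edge, same as in the example above
    fr = run_deployment(deployment_name, as_subflow=True)
    if fr.state is None or fr.state.type != StateType.COMPLETED:
        raise RuntimeError(f"{deployment_name} finished in state {fr.state!r}")
    return str(fr.id)

Swap it in for the trigger task above if you want downstream stages to be skipped whenever a stage doesn't complete.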
k
@Marvin

from prefect import flow, task
from prefect.deployments import run_deployment
from prefect import get_client
import os
import asyncio

print("starting getjson_prime_deployment deployment")

@flow(log_prints=True)
async def prime_main_flow():
    async with get_client() as client:
        get_json_flowrun = await run_deployment(
            name="get-json-flow/getjson_prime_deployment",
            job_variables={"cpu": 256, "memory": 1024},
            as_subflow=True,
            flow_run_name="get_json",
        )
        current_working_directory = os.getcwd()
        created_filepath = os.path.join(current_working_directory, "dummy_file.txt")
        # if get_json_flowrun.is_completed()
        get_json_flowrun_details = await client.read_flow_run(get_json_flowrun.id)
        get_json_flowrun_status = get_json_flowrun_details.state_type
        # async with get_client() as client:
        #     mainq2_flowrun_details = await client.read_flow_run(mainq2_flow_run.id)
        #     mainq2_flowrun_status = mainq2_flowrun_details.state_type
        print(f"get json run status is {get_json_flowrun_status} ")
        print(f"created path is {created_filepath}")
        # if get_json_flowrun_status == "COMPLETED" and os.path.exists(created_filepath):
        if get_json_flowrun_status == "COMPLETED":
            mainq1_flowrun = await run_deployment(
                name="mainq1-flow/mainq1_latest_deployment",
                job_variables={"cpu": 256, "memory": 1024},
                as_subflow=True,
                flow_run_name="mainq1",
            )
            mainq1_flowrun_details = await client.read_flow_run(mainq1_flowrun.id)
            mainq1_flowrun_status = mainq1_flowrun_details.state_type
            if mainq1_flowrun_status == "COMPLETED":
                mainq2_flowrun = await run_deployment(
                    name="mainq2/mainq2_latest_deployment",
                    job_variables={"cpu": 256, "memory": 1024},
                    as_subflow=True,
                    flow_run_name="mainq2",
                )
                mainq2_flowrun_details = await client.read_flow_run(mainq2_flowrun.id)
                mainq2_flowrun_status = mainq2_flowrun_details.state_type
                # client_tasks = [
                #     ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                #     ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                #     ["PS_LoanAccountSweeper"],
                #     "PS_Reporting",  # final task
                # ]
                # if mainq2_flowrun_status == "COMPLETED":
                #     for
                clients = {
                    "client_1": [
                        ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                        ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                        ["PS_LoanAccountSweeper"],
                        ["PS_Reporting"],  # final task
                    ],
                    "client_2": [
                        ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                        ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                        ["PS_LoanAccountSweeper"],
                        ["PS_Reporting"],  # final task
                    ],
                    "client_3": [
                        ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                        ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                        ["PS_LoanAccountSweeper"],
                        ["PS_Reporting"],  # final task
                    ],
                    "client_4": [
                        ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                        ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                        ["PS_LoanAccountSweeper"],
                        ["PS_Reporting"],  # final task
                    ],
                    "client_5": [
                        ["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"],
                        ["PS_Generate", "PS_MTM", "PS_Sweeping"],  # MTM will fail
                        ["PS_LoanAccountSweeper"],
                        ["PS_Reporting"],  # final task
                    ],
                }
                futures = []
                # clients
                for client in clients:
                    future = run_deployment(
                        name="client-flow/client_latest_deployment",
                        parameters={"client": client, "tasks": clients[client]},
                        job_variables={"cpu": 256, "memory": 1024},
                        as_subflow=True,
                        flow_run_name=client,
                    )
                    futures.append(future)
                await asyncio.gather(*futures)
                # print(f"triggered client {client}")
        else:
            print("reached else block")

How can I chain here?
m
thought for 91.3 seconds
Great question. To get a clear arrowed chain in the UI (get_json → mainq1 → mainq2 → client), wrap each run_deployment call in a Prefect task and pass the previous task's result into the next task. That creates explicit upstream/downstream edges in the Graph tab of your main run. Here's a cleaned-up version of your flow that does exactly that:
from prefect import flow, task
from prefect.deployments import run_deployment
from prefect.client.schemas.objects import StateType  # only needed if you add manual state checks (see notes below)
import asyncio

# Each stage is a task; passing the previous stage's result creates edges in the Graph

@task(name="get_json")
async def run_get_json():
    fr = await run_deployment(
        name="get-json-flow/getjson_prime_deployment",
        job_variables={"cpu": 256, "memory": 1024},
        as_subflow=True,
        flow_run_name="get_json",
    )
    return fr.id  # returning something to wire dependencies downstream

@task(name="mainq1")
async def run_mainq1(prev_id: str):
    fr = await run_deployment(
        name="mainq1-flow/mainq1_latest_deployment",
        job_variables={"cpu": 256, "memory": 1024},
        as_subflow=True,
        flow_run_name="mainq1",
    )
    return fr.id

@task(name="mainq2")
async def run_mainq2(prev_id: str):
    fr = await run_deployment(
        name="mainq2/mainq2_latest_deployment",
        job_variables={"cpu": 256, "memory": 1024},
        as_subflow=True,
        flow_run_name="mainq2",
    )
    return fr.id

@task(name="client")
async def run_client(prev_id: str, client: str, tasks: list[list[str]]):
    fr = await run_deployment(
        name="client-flow/client_latest_deployment",
        parameters={"client": client, "tasks": tasks},
        job_variables={"cpu": 256, "memory": 1024},
        as_subflow=True,
        flow_run_name=client,
    )
    return fr.id

@flow(log_prints=True, name="prime-main-deployment")
async def prime_main_flow():
    # Stage 1
    s1 = await run_get_json()
    # Stage 2 (runs after Stage 1 completes)
    s2 = await run_mainq1(s1)
    # Stage 3 (runs after Stage 2 completes)
    s3 = await run_mainq2(s2)

    # Optional fan-out to multiple clients AFTER mainq2
    clients = {
        "client_1": [["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"], ["PS_Generate", "PS_MTM", "PS_Sweeping"], ["PS_LoanAccountSweeper"], ["PS_Reporting"]],
        "client_2": [["PS_Extend_General_PSwaps", "PS_TPLSweep_General_PSwaps"], ["PS_Generate", "PS_MTM", "PS_Sweeping"], ["PS_LoanAccountSweeper"], ["PS_Reporting"]],
        # add more clients...
    }

    # Run client deployments concurrently, but each depends on mainq2
    await asyncio.gather(
        *[run_client(s3, client=c, tasks=clients[c]) for c in clients]
    )
Why this creates arrows in the Graph
- Each deployment trigger is a Prefect task node.
- Passing the previous task's output (even if unused) to the next creates explicit dependencies: get_json → mainq1 → mainq2 → client.
- With as_subflow=True, each task also creates a linked child run (clickable in the UI under Related runs).

Notes
- run_deployment(..., as_subflow=True) waits for the child run to finish; if a stage fails, the flow will stop and downstream nodes won't run.
- Prefer comparing states using the enum (e.g., StateType.COMPLETED) if you branch by state; however, the chain above enforces order without manual state checks (a small sketch follows at the end of this message).
- The timeline view still won't draw arrows; use the Graph tab on the main run to see the chain.

Docs for reference:
- Run deployments from flows (3.x)

If you share the exact deployment names/parameters you need, I can drop them into this template verbatim.
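For completeness, a minimal sketch of the enum comparison with the async client (the helper name is hypothetical; it mirrors the read_flow_run calls in your original code):

from prefect import get_client
from prefect.client.schemas.objects import StateType

async def child_completed(flow_run_id) -> bool:
    # Read the child flow run from the API and compare its state type against the enum
    async with get_client() as client:
        details = await client.read_flow_run(flow_run_id)
    return details.state_type == StateType.COMPLETED

Comparing against StateType.COMPLETED avoids the stringly-typed "COMPLETED" checks in the original flow.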