# best-practices
s
Hi all! I'm trying to read up on parameters and how to pass them to a flow run when triggered. In the documentation the flow is usually triggered from within the script with a static variable passed in, but what if I want to trigger a flow/deployment with a certain email, a list of dates, etc. to be used in the @flow?
a
Are you asking how to set up default parameter values, or how to override the defaults at runtime when triggering a run ad hoc? Are you scheduling this flow at all, or running it purely ad hoc with parameters?
s
The second bit. Some parameters would have defaults; others must be specified. This wouldn't be a scheduled flow, but one triggered ad hoc when a user requests data with an email, startdate, stopdate, area object, etc. 🙃
a
Here is an example; you'd need to pass those values within the (currently empty) parameters dictionary:
```python
import asyncio
from prefect.client import get_client
from prefect import flow, task, get_run_logger


@task
async def child_flow():
    # Despite the name, this task triggers a run of an existing deployment
    async with get_client() as client:
        depl_id = "71334806-93ff-4bdc-ba98-fde1f97d5622"
        response = await client.create_flow_run_from_deployment(
            depl_id, parameters=dict()
        )
        logger = get_run_logger()
        logger.info(response)


@flow
async def parent_flow():
    await child_flow()


if __name__ == "__main__":
    asyncio.run(parent_flow())
```
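To make the empty `parameters=dict()` concrete for the original question (an email plus dates): the keys must match the deployed flow function's signature, and defaults in that signature become the deployment's default parameter values. Since parameters travel over the API as JSON, date objects should be converted to strings first. The names below are illustrative, not from the thread's actual flow; a minimal sketch:

```python
from datetime import date


def build_parameters(email: str, dates: list[date]) -> dict:
    # Hypothetical parameter names; they must match the deployed
    # flow function's signature. Values are sent as JSON, so dates
    # are converted to ISO-format strings.
    return {"email": email, "dates": [d.isoformat() for d in dates]}


params = build_parameters("user@example.com", [date(2022, 1, 1), date(2022, 1, 31)])
print(params)
# {'email': 'user@example.com', 'dates': ['2022-01-01', '2022-01-31']}
```

The resulting dict would be passed as the `parameters` argument of `create_flow_run_from_deployment` in the example above.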
Or, if you run it from some other application:
```python
import asyncio
from prefect.client import get_client


async def main():
    async with get_client() as client:
        depl_id = "074db2e5-229a-460e-85ad-fca31b379fd2"
        response = await client.create_flow_run_from_deployment(
            depl_id, parameters=dict()
        )
        print(response)


if __name__ == "__main__":
    asyncio.run(main())
```
the UI integration for parametrized ad hoc runs is on the roadmap
s
Ok, thanks! How would I pass it as a dict from the command line if I wanted to execute a flow/deployment?
For a use-case example: I'd have a flow and two deployments of said flow. If a user requests a large amount of data, it would go to Deployment #1, where it would join a queue. If it's a small amount of data, I'd send it to Deployment #2, which would process it instantly.
a
no CLI integration yet
s
Ah, ok. But I could trigger it via the REST API then? That would be better.
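For reference, a REST call to a Prefect 2 (Orion) server would look roughly like the sketch below. The endpoint path and payload shape are my best understanding of the API, the server URL is the local default, and the deployment ID is taken from the example above; verify against your own server's API docs before relying on this:

```python
import json

# Assumptions: a local Orion server at the default address, and the
# deployment ID from the example above. Adjust both for your setup.
PREFECT_API_URL = "http://127.0.0.1:4200/api"
DEPLOYMENT_ID = "074db2e5-229a-460e-85ad-fca31b379fd2"

# POST /deployments/{id}/create_flow_run with a JSON body carrying the
# parameters; this mirrors what create_flow_run_from_deployment does.
url = f"{PREFECT_API_URL}/deployments/{DEPLOYMENT_ID}/create_flow_run"
body = json.dumps({"parameters": {"email": "user@example.com"}})
print(url)

# To actually send it (requires a running server):
# import urllib.request
# req = urllib.request.Request(
#     url, data=body.encode(), method="POST",
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```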
a
for your use case, subflows would be a much easier solution
s
Can you elaborate? I'm not sure I understand how subflows help me start a flow run in this case. If no CLI is available (yet?), then my only option is the REST API?
a
You said: if a user requests a large amount of data, you run flow 1; otherwise you run flow 2. This is something you could implement much more easily using if/else with subflows.
s
Aah I see. Let me clarify. Large amount of data = sent to the queue/Deployment for that flow which allows x amount of concurrent runs, while small amounts get sent to the queue which allows them to run instantaneously 🙂
a
This seems like an implementation detail, though; you can still use subflows for this use case.
It's an implementation detail plus a question of how you want to design it. Both are viable options, with subflows being much easier.
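To make the subflow suggestion concrete, the routing could look like the sketch below. Plain functions stand in for `@flow`-decorated subflows so the logic runs on its own; the function names, threshold, and request shape are all hypothetical:

```python
# Sketch of the if/else-with-subflows idea. In Prefect 2 each branch and the
# router would be @flow-decorated; plain functions are used here so the
# routing logic is runnable without a Prefect server.

def large_data_flow(request: dict) -> str:
    # Stand-in for a subflow that joins the concurrency-limited queue
    return f"queued {request['rows']} rows"


def small_data_flow(request: dict) -> str:
    # Stand-in for a subflow that processes the request immediately
    return f"processed {request['rows']} rows"


def parent_flow(request: dict, threshold: int = 10_000) -> str:
    # Route based on the size of the requested data (threshold is illustrative)
    if request["rows"] > threshold:
        return large_data_flow(request)
    return small_data_flow(request)


print(parent_flow({"rows": 50_000}))  # queued 50000 rows
print(parent_flow({"rows": 200}))     # processed 200 rows
```

The same if/else could instead choose between the two deployments' work queues, but keeping it inside one parent flow is the simpler design the thread suggests.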