01/06/2023, 5:14 PM
I use a builder pattern to generate multiple flows with different structures based on some configs. With Prefect 1.0 I was able to deploy and run the flows locally, but I'm having issues with the LocalStorage in Prefect 2.0. Details in thread:
from dataclasses import dataclass

from prefect import flow
from prefect.deployments import Deployment

@dataclass
class FlowBuildSpecs:
    name: str
    do_double: bool
    do_square: bool

def double(x):
    return 2 * x

def square(x):
    return x * x

def get_flow(specs):
    # Build a flow whose structure depends on the given specs.
    @flow
    def my_flow(x=5):
        val = x
        if specs.do_double:
            val = double(val)
        if specs.do_square:
            val = square(val)
        return val

    return my_flow

configs = [
    FlowBuildSpecs(name="a", do_double=True, do_square=True),
    FlowBuildSpecs(name="b", do_double=True, do_square=False),
]

for config in configs[:1]:
    built_flow = get_flow(config)

    deployment = Deployment.build_from_flow(flow=built_flow, name=config.name)
    _id = deployment.apply()
    print(f"Deployment ID: {_id}")
I get this error:
prefect.exceptions.MissingFlowError: Flow function with name 'my_flow' not found in ''.
Any ideas on how I can work with this pattern with Prefect 2.0?

Kalise Richmond

01/06/2023, 7:46 PM
Hi @alex, it looks like you are trying to create multiple deployments of my_flow(), and I don't think you actually need get_flow() with the configs. You can have multiple deployments for one flow, and since your flow's conditional logic is already driven by its inputs, you can create two deployments of the same flow with different parameters.


01/06/2023, 8:00 PM
Hi Kalise, thanks for your reply. This was just a simplified example; in practice I have jobs running for multiple clients, and each job has some configurations I use to disable certain tasks in the flow structure. The configurations rarely change and there are plenty of them, so I use this approach to avoid polluting the parameters and to keep only the ones we actually want to modify. My second concern is that I would like to monitor jobs for each client, and based on the UI right now, it is easier to get a high-level overview of historical progress when the jobs are separated as different flows rather than different deployments.

Samuel Hinton

03/01/2023, 11:05 PM
Hi @alex, I'm in the same boat on this one (trying to build flows dynamically at runtime). Did you ever manage to solve it?


03/01/2023, 11:21 PM
@Samuel Hinton unfortunately not, we had to create a single root-level flow and use the parameters to configure the steps for each deployment. There is an open issue you can follow for adding pickle storage, which is a requirement for being able to dynamically build flows.
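The single-root-flow workaround could look roughly like this. The client names, the ClientConfig fields, and the CLIENT_CONFIGS mapping are made up for illustration, and the @flow decorator is omitted so the conditional logic stands alone without a Prefect installation; in the real project the root function would carry the decorator and each client would get its own deployment with a different client parameter.

```python
from dataclasses import dataclass

# Hypothetical per-client settings, keyed by the deployment's `client` parameter.
@dataclass
class ClientConfig:
    do_double: bool
    do_square: bool

CLIENT_CONFIGS = {
    "acme": ClientConfig(do_double=True, do_square=True),
    "globex": ClientConfig(do_double=True, do_square=False),
}

# In the real project this function carries the @flow decorator; one deployment
# per client then just pins a different `client` parameter default.
def root_flow(client: str, x: int = 5):
    cfg = CLIENT_CONFIGS[client]
    val = x
    if cfg.do_double:
        val = 2 * val
    if cfg.do_square:
        val = val * val
    return val
```

Since the root flow is a single importable module-level function, local storage can find it, at the cost of the per-client runs showing up as deployments of one flow rather than as separate flows in the UI.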

Samuel Hinton

03/01/2023, 11:48 PM
Thanks for letting me know. I’ve added my use case for this to the issue at