# ask-community
s
Reposting this for hopefully some traction. Trying to speed up very slow deployments, code looks like:
```python
deployments = []
for our_flow in list_of_our_flows:
    flow_from_source = await our_flow.from_source(repo, entrypoint=entrypoint)   # Takes 2s to run
    deployments.append(await flow_from_source.to_deployment(...))                # Takes 4s to run
await prefect.deploy(*deployments, ...)
```
With about 100 flow/deployment combinations, that's 10 minutes of registering flows, all belonging to the same repo, and since we have a dev env before prod (i.e. we deploy twice), our pipeline to update a flow now takes over 20 minutes. Does anyone know why making a deployment takes multiple seconds, or whether there's a way of doing this differently, e.g. somehow moving the `flow_from_source.to_deployment` calls into a batch? I'm also not sure on best practices here: should I do what I found in one tutorial (`to_deployment` followed by `prefect.deploy`), or just `flow.deploy`, or something else? Our use case is one repo containing 100 or so flows, each with an RRule schedule on it, and we want one deployment per flow+schedule pair. It seems like cloning things 100 times isn't ideal, but it seems to be what the documentation suggests?
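A minimal sketch of batching this, assuming `from_source` / `to_deployment` are safe to run concurrently (`list_of_our_flows`, `repo`, `entrypoint`, and the elided `...` arguments are the same as in the snippet above):

```python
import asyncio
import prefect

async def build_deployment(our_flow):
    # Same two awaits as before, but each flow's work can overlap
    # with the others instead of running back to back.
    flow_from_source = await our_flow.from_source(repo, entrypoint=entrypoint)
    return await flow_from_source.to_deployment(...)

async def build_all():
    # Fan out the per-flow builds; wall time is roughly the slowest
    # single flow rather than the sum of all of them.
    deployments = await asyncio.gather(
        *(build_deployment(f) for f in list_of_our_flows)
    )
    await prefect.deploy(*deployments, ...)
```

Whether this actually helps depends on where the time goes: overlapping works well if it is mostly API round-trips, less so if each `from_source` call is cloning the repo to disk.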
w
Not sure about the slowness. But it's possible to attach multiple schedules to the same deployment: https://github.com/PrefectHQ/prefect/issues/12092 The only catch is that these schedules must use the same parameters. For different parameters per schedule there is an open issue: https://github.com/PrefectHQ/prefect/issues/14524
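For reference, a rough sketch of what that looks like, assuming the `schedules` parameter on `to_deployment` and Prefect's `RRuleSchedule` (the deployment name and rrule strings here are placeholders):

```python
from prefect.client.schemas.schedules import RRuleSchedule

# One deployment carrying several schedules; every scheduled run
# uses the deployment's default parameters.
deployment = my_flow.to_deployment(
    name="my-flow-multi-schedule",  # placeholder name
    schedules=[
        RRuleSchedule(rrule="FREQ=DAILY;BYHOUR=6;BYMINUTE=0"),
        RRuleSchedule(rrule="FREQ=WEEKLY;BYDAY=MO;BYHOUR=12;BYMINUTE=0"),
    ],
)
```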
n
hi @Samuel Hinton - would you be willing to make an issue that says roughly what you said above about the slowness?
s
Ah yeah, we really only have one schedule per flow; we just have lots of flows
@Nate https://github.com/PrefectHQ/prefect/issues/15428 And not to necro an old post, but did you have any further thoughts on https://prefect-community.slack.com/archives/CL09KU1K7/p1724300792655309?thread_ts=1724127210.012189&cid=CL09KU1K7? Right now we're putting all the networking info into a Prefect secret and relying on the downstream flows to configure themselves using it, which is highly error-prone.
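For context, the current workaround looks roughly like this (a sketch assuming the networking info lives in a Prefect Secret block as a JSON string; the block name and keys are placeholders):

```python
import json
from prefect.blocks.system import Secret

# Each downstream flow loads the shared secret and configures its own
# networking from it, which is where the error-proneness comes in.
network_config = json.loads(Secret.load("network-config").get())
db_host = network_config["db_host"]  # placeholder key
```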