<@ULVA73B9P> - How to define optional parameters f...
# ask-community
m
@Marvin - How to define optional parameters for a deployment, such that those parameters are shown under the Optional tab in the UI while running a Custom Run from the UI? Right now, the parameter_openapi_schema is being updated on all the deployments with a single parameter which is defined in only 1 flow. Need to know what the clean way is of defining optional parameters for flows, and how the yaml should be updated so that each flow / deployment has its own optional parameters in the UI
m
thought for 364.9 seconds
Short answer
- In Prefect 3.x, a parameter is shown under Optional in the Custom Run dialog if it has a default value in your flow signature (e.g., `param: str | None = None` or `param: int = 5`). Parameters without defaults are Required.
- You generally should not edit `parameter_openapi_schema` yourself; Prefect computes and stores it per deployment from the flow’s signature. Each deployment gets its own schema.
- Per-deployment default values belong in the deployment’s `parameters` field (in `prefect.yaml` or in `.deploy(...)`). This does not change required vs optional; it just sets the defaults used when launching from that deployment.

Clean pattern to define optional params

1) Define optionality in the flow function signature
Copy code
from typing import Optional
from prefect import flow, get_run_logger

@flow
def etl(required_date: str, lookback_days: Optional[int] = None, dry_run: bool = False):
    logger = get_run_logger()
    logger.info(f"{required_date=}, {lookback_days=}, {dry_run=}")
- `required_date` is Required (no default)
- `lookback_days` and `dry_run` are Optional (have defaults). Using `Optional[...] = None` is recommended when “no value” is meaningful.

2) Give each deployment its own entrypoint and parameters

- With the Python API
Copy code
from prefect import flow

flow.from_source(
    "<https://github.com/org/repo>",
    "flows/etl.py:etl",
).deploy(
    name="etl-daily",
    work_pool_name="k8s",
    parameters={"dry_run": False, "lookback_days": 1},  # per-deployment defaults
)

flow.from_source(
    "<https://github.com/org/repo>",
    "flows/notify.py:send_notifications",
).deploy(
    name="notify-critical",
    work_pool_name="k8s",
    parameters={"severity": "critical"},
)
- With prefect.yaml and CLI
Copy code
# prefect.yaml
name: my-project
prefect-version: ">=3.0.0"

deployments:
  - name: etl-daily
    entrypoint: flows/etl.py:etl
    work_pool:
      name: k8s
    parameters:
      dry_run: false
      lookback_days: 1

  - name: notify-critical
    entrypoint: flows/notify.py:send_notifications
    work_pool:
      name: k8s
    parameters:
      severity: "critical"
Then run:
Copy code
prefect deploy
Docs: prefect.yaml guide and Deployments concepts

About `parameter_openapi_schema` showing up “for all deployments”
- Each deployment stores its own `parameter_openapi_schema`, derived from the specific flow entrypoint’s signature at deployment time. It should not “bleed” to other deployments unless:
  - The deployments in your `prefect.yaml` all point to the same entrypoint, or
  - You are programmatically updating `parameter_openapi_schema` on multiple deployments, or
  - You have placed parameter settings at the wrong scope (e.g., top-level instead of per-deployment).
- The clean approach is to:
  - Ensure each deployment entry references the correct `entrypoint` for that flow.
  - Not manually set `parameter_openapi_schema`; let Prefect generate it.
  - Re-deploy a specific deployment if its schema looks stale, so it’s recomputed from the flow signature.

A few tips
- Required vs Optional is controlled by the flow signature only. The `parameters` you supply on a deployment just set defaults for that deployment’s runs; they do not make a required param optional.
- Keep `enforce_parameter_schema=True` (the default) so type validation and UI rendering stay consistent with your annotations and defaults.

Can you share:
- Your Prefect version (assuming 3.x)?
- How you’re creating deployments (CLI with `prefect.yaml` or Python via `.deploy`)?
- A snippet of your `prefect.yaml` and one flow’s signature?
- Where/how you’re seeing `parameter_openapi_schema` updated “for all deployments”?

With that, I can spot the scoping issue and suggest the exact change.
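If you want to preview the schema Prefect would store without creating a deployment, you can call the schema generator directly. This is a sketch using Prefect's internal `parameter_schema` utility; the module name `my_flows` and flow `etl` are the example names from above, and the exact output shape of `model_dump_for_openapi()` may vary by Prefect version, so treat it as illustrative:
Copy code
from prefect.utilities.callables import parameter_schema

# assumes the example flow above lives in my_flows.py (hypothetical module)
from my_flows import etl

# Inspect the schema Prefect would generate from the flow's signature;
# parameters with defaults appear as optional, the rest as required
schema = parameter_schema(etl.fn)
print(schema.model_dump_for_openapi())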
m
@Marvin Okay, I will share those; let’s break down the issue. My first concern is why the openapi schema field is updated with the same config for all the deployments, though it was updated only for that particular flow. Now my prefect table has the same value of the openapi schema for all the deployments. How should I fix it?
m
thought for 184.6 seconds
Got it — thanks for narrowing the issue.

What should be happening
- In Prefect 3.x, each deployment stores its own `parameter_openapi_schema`, generated from that deployment’s flow entrypoint at deploy time.
- Changing parameters for one deployment should not change others’ schemas. If all deployments now have the same schema, it typically means one of:
  - A manual update (via UI or API) wrote the same schema to multiple deployment records.
  - Multiple deployments actually point to the same entrypoint (so they legitimately share a schema).
  - A script or CI step updated deployments in bulk.

How to diagnose quickly
- Confirm entrypoints and current schemas:
Copy code
prefect deployment ls
prefect deployment inspect "flow-name/deployment-name"
This will show `entrypoint` and the current `parameter_openapi_schema`. Docs: Deployments; the CLI inspect output shows the schema.

How to fix deployments that have the wrong schema

Pick one of these:

1) Reset to auto-generated in the UI (if you used the form builder)
- In the UI, open the problem deployment -> Run form/Parameters -> switch back to “auto-generated from flow signature” (or equivalent reset), then Save.
- Docs: Form building in Prefect 3

2) Re-deploy each affected deployment so Prefect recomputes the schema from its flow signature
- If you deploy via Python:
Copy code
from prefect import flow

flow.from_source("<https://github.com/org/repo>", "flows/etl.py:etl").deploy(
    name="etl-daily",
    work_pool_name="k8s",
    parameters={"dry_run": False, "lookback_days": 1},
    enforce_parameter_schema=True,  # ensure schema is (re)generated
)
- If you deploy with prefect.yaml:
Copy code
# prefect.yaml
deployments:
  - name: etl-daily
    entrypoint: flows/etl.py:etl
    work_pool:
      name: k8s
    parameters:
      dry_run: false
      lookback_days: 1
Then run:
Copy code
prefect deploy
Docs: prefect.yaml guide

3) Delete and recreate only the incorrect deployments
Copy code
prefect deployment inspect "flow-a/deploy-1"
prefect deployment delete "flow-a/deploy-1"
prefect deploy  # or your .deploy() calls
Preventing a repeat
- Do not manually set `parameter_openapi_schema` via API unless you intend a custom form. Let Prefect compute it.
- Ensure each deployment has the correct unique `entrypoint` for its flow in `prefect.yaml` or your `.deploy(...)` calls.
- Keep `enforce_parameter_schema=True` (default) during deploys so the schema matches your flow signature.
- Mark parameters optional in your flow signature to control the UI’s Optional tab (e.g., `x: int | None = None` or `y: str = "default"`). The deployment `parameters` field sets per-deployment default values but does not change required vs. optional.

Open questions to help me pinpoint the root cause
- Which Prefect version and environment (Cloud vs. self-hosted)?
- How did you “update” the schema for the one flow — via the UI form builder, API, or code?
- Are all your deployments pointing to unique entrypoints?

If possible, paste one or two `prefect deployment inspect` outputs showing `entrypoint` and the unexpected `parameter_openapi_schema`. If you can share one affected deployment name and how you’re deploying (prefect.yaml vs. Python), I’ll give you exact commands/steps to reset only those deployments.
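To confirm the duplication across many deployments at once (rather than inspecting one at a time), here is a small sketch using Prefect's Python client; it assumes Prefect 3.x and that your active profile points at the right API:
Copy code
import asyncio
import json
from collections import defaultdict

from prefect.client.orchestration import get_client

async def group_by_schema():
    # Group deployments by their stored schema to spot unexpected sharing
    groups = defaultdict(list)
    async with get_client() as client:
        for dep in await client.read_deployments():
            key = json.dumps(dep.parameter_openapi_schema or {}, sort_keys=True)
            groups[key].append(f"{dep.name} ({dep.entrypoint})")
    for schema, names in groups.items():
        print(f"{len(names)} deployment(s) share schema: {schema[:80]}...")
        for n in names:
            print(f"  - {n}")

asyncio.run(group_by_schema())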
m
@Marvin, @Kevin Grismore - I'm using self-hosted 3.x. This is how my parameter openapi schema looks for all the deployments, even though this value is set in only one deployment:

{"type": "object", "title": "Parameters", "properties": {"edix_custom_config": {"title": "edix_custom_config", "default": {}, "position": 0}}, "definitions": {}}

This is how the flow is defined:

@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,
    edix_custom_config: dict,
    config: Optional[DictConfig] = None,
) -> pd.DataFrame:

But I'm only seeing edix_custom_config in all the openapi schemas and not any other optional value. Since this is how the table is updated, the Custom Run UI shows only edix_custom_config as an optional variable in all deployments, even where it was not defined. Every row in the table has this exact same value (repeated identically for all ten deployments).
m
thought for 74.3 seconds
Thanks — that detail helps.

What I’m seeing
- Your DB has the same `parameter_openapi_schema` on every deployment:
{"type":"object","title":"Parameters","properties":{"edix_custom_config":{"title":"edix_custom_config","default":{},"position":0}},"definitions":{}}
- That schema only defines one property (`edix_custom_config`) with a default `{}` and no other parameters.

Why that’s a problem
- That JSON is a custom run-form schema. It’s not the auto-generated schema from your flow signature. If Prefect generated the schema from your example flow, it would include:
  - `tenant_id` (required, no default)
  - `edix_custom_config` (required if no default, optional if you add a default)
  - `config` (optional, because `Optional[...] = None`)
- The presence of a `position` field and a default `{}` strongly suggests a custom form was saved somewhere (UI or API) and then applied to multiple deployments. Prefect does not normally propagate a deployment’s schema to other deployments.

Likely causes
- A script or migration updated `parameter_openapi_schema` for many deployments.
- The wrong scope was used when updating via API (e.g., iterating deployments and setting the same schema).
- All deployments share a single edited schema object in code and you’re writing it repeatedly.
- Less likely: all deployments legitimately point to the same entrypoint and you customized the form for that deployment, then redeployed in a way that overwrote others. Still, you’d normally see the full param list even then.

How to fix it

Pick one path and verify with inspect afterward.

1) Reset in the UI (if you used the form builder)
- Open each deployment in the UI, go to the Run form/Parameters section, and reset to “auto-generated from flow signature” (wording may vary). Save.
- Docs: Form building in Prefect 3

2) Re-deploy each deployment so Prefect recomputes the schema from the flow signature
- Python API example:
Copy code
from prefect import flow

flow.from_source(
    "<https://github.com/org/repo>",
    "flows/forecaster.py:forecaster_evaluation_flow_run",
).deploy(
    name="forecaster-eval",
    work_pool_name="k8s",
    enforce_parameter_schema=True,  # recompute and store schema
)
- Or with `prefect.yaml`:
Copy code
# prefect.yaml
deployments:
  - name: forecaster-eval
    entrypoint: flows/forecaster.py:forecaster_evaluation_flow_run
    work_pool:
      name: k8s
Then run:
Copy code
prefect deploy
Docs: prefect.yaml guide and Deployments concepts

3) Delete and recreate the affected deployments (only if you prefer a clean slate)
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
prefect deployment delete "Forecaster Evaluation Flow/forecaster-eval"
prefect deploy  # or run your .deploy() code again
Make the flow’s optionality explicit

If `edix_custom_config` should be optional, avoid using `{}` as a default (mutable defaults). Use `None` and handle it inside the flow; also prefer standard typing so the schema generator can infer a good JSON schema:
Copy code
from typing import Optional, Mapping, Any
from prefect import flow

@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,
    edix_custom_config: Optional[Mapping[str, Any]] = None,
    config: Optional[Mapping[str, Any]] = None,  # consider avoiding DictConfig in the signature
):
    ...
- Using `DictConfig` in the type annotation can confuse schema generation. If you need it, annotate as `Mapping[str, Any] | None` and convert internally.

Verify after the fix
- Before and after, run:
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
- You should see `parameter_openapi_schema` include all three parameters with the right required/optional status.
A couple of questions to pinpoint the root cause
- Did you edit the Run form for one deployment in the UI recently?
- Do you have a script that calls the Prefect API to update deployments (and might be setting `parameter_openapi_schema`)?
- Are all your deployments created from a script/CI that could be reusing a single hard-coded schema?

If you share how you create deployments (prefect.yaml vs Python) and one `prefect deployment inspect` output for an affected deployment (showing its `entrypoint` and `parameter_openapi_schema`), I can give you exact, minimal steps to reset only those deployments.
m
@Marvin - but edix_custom_config is not optional, it is a mandatory one; that's why the yaml was updated with a default value. I actually rewrote the prefect table with the below query:

UPDATE deployment
SET parameter_openapi_schema = '{ "type": "object", "title": "Parameters", "properties": {}, "definitions": {} }';
m
thought for 52.6 seconds
Thanks — that explains the symptom. Since you directly updated the database, every deployment now carries the same custom run-form schema. Prefect won’t auto-recompute it just because the DB row changed; it only generates `parameter_openapi_schema` at deployment time (or when you explicitly edit the run form).

Key points
- Don’t write to the Prefect DB directly for deployments. It bypasses the logic that builds per-deployment schemas from the flow signature.
- Your edited schema includes a default `{}` for `edix_custom_config`, which makes it optional in the UI even though your flow signature makes it required. If a parameter should be required, do not give it a default in the schema or in the flow signature.

How to fix (clean and reliable)

1) Re-deploy each deployment so Prefect recomputes the schema from the flow’s signature
- If you manage deployments with `prefect.yaml`, run this in each project’s repo:
Copy code
prefect deploy
Docs: prefect.yaml guide
- If you deploy via Python:
Copy code
from prefect import flow

flow.from_source(
    "<https://github.com/org/repo>",
    "flows/forecaster.py:forecaster_evaluation_flow_run",
).deploy(
    name="forecaster-eval",
    work_pool_name="k8s",
    enforce_parameter_schema=True,  # recompute and store per-deployment schema
)
2) Or, reset per-deployment in the UI
- Open each deployment -> Run form/Parameters -> switch back to auto-generated from flow signature (remove the custom JSON), then Save. Docs: Advanced form building

3) Verify after the fix
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
You should see a schema that includes:
- `tenant_id` as required (no default)
- `edix_custom_config` as required (no default)
- `config` as optional (since it’s `Optional[...] = None`)

Prevent this going forward
- Avoid updating `deployment.parameter_openapi_schema` via SQL. If you need to change the run form, do it in the UI (per deployment) or re-deploy.
- Keep parameters required/optional strictly via the flow signature:
  - Required: no default (e.g., `edix_custom_config: dict`)
  - Optional: give a default (e.g., `config: Mapping[str, Any] | None = None`)
- If you previously used a default `{}` to force a “mandatory” field, remove it. A default means optional. For required, there must be no default.

If you share how you usually create deployments (prefect.yaml vs Python) and one `prefect deployment inspect` output for an affected deployment, I can give you the exact minimal set of steps to fix just those deployments.
m
@Marvin - You did not understand one thing here: I rewrote it because all the rows had the same edix_custom_config and I wanted to reset the value.
m
thought for 58.4 seconds
Got it — thanks for clarifying. You reset the column because all rows had the same bad schema. That makes sense.

One important behavior to call out: Prefect does not auto-regenerate `parameter_openapi_schema` when you change the DB directly. Whatever is in that column is what the UI uses. Setting it to an empty object won’t trigger regeneration; it just gives you a blank/custom form.

Ways to properly reset the schema so it’s generated per deployment from the flow signature
- Easiest: re-deploy the affected deployments so Prefect recomputes the schema from the entrypoint’s function signature.
  - If you use prefect.yaml, run this in each project repo:
Copy code
prefect deploy
Docs: prefect.yaml guide
  - If you deploy via Python:
Copy code
flow.from_source("<https://github.com/org/repo>", "flows/forecaster.py:forecaster_evaluation_flow_run").deploy(
      name="forecaster-eval",
      work_pool_name="k8s",
      enforce_parameter_schema=True,
  )
- UI-based per-deployment reset (no code changes): open each deployment in the UI → Run form/Parameters → switch back to “auto-generated from flow signature” (remove custom JSON) → Save. Docs: Form building
- If you need a bulk fix without redeploying code, we can script it against the API: for each deployment, compute the schema from the flow entrypoint (using Prefect’s schema generator) and update the deployment via the API. The generator you want is `prefect.utilities.callables.parameter_schema_from_entrypoint(entrypoint)`; then update that deployment’s `parameter_openapi_schema` via the Prefect API. A sketch follows below. If you want, I can help tailor it once you confirm where your deployments live (single repo vs many) and whether async Python is okay in your environment.
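For illustration, a rough sketch of that bulk fix. It assumes the deployments' entrypoints are importable from the directory you run it in, and that your server's PATCH /deployments/{id} endpoint accepts `parameter_openapi_schema` in its update body; verify both against your Prefect version before running anything like this:
Copy code
import asyncio

import httpx
from prefect.utilities.callables import parameter_schema_from_entrypoint

API_URL = "http://localhost:4200/api"  # your self-hosted API URL (assumption)

async def reset_schemas():
    async with httpx.AsyncClient(base_url=API_URL, timeout=30) as client:
        # Fetch all deployments; add filters to the JSON body to target a subset
        resp = await client.post("/deployments/filter", json={})
        resp.raise_for_status()
        for dep in resp.json():
            # Recompute the schema from the flow signature at this entrypoint
            schema = parameter_schema_from_entrypoint(dep["entrypoint"])
            patch = await client.patch(
                f"/deployments/{dep['id']}",
                json={"parameter_openapi_schema": schema.model_dump_for_openapi()},
            )
            patch.raise_for_status()
            print(f"reset schema for {dep['name']}")

asyncio.run(reset_schemas())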
What you should see after the reset, for your example flow:
- `tenant_id` — required (no default)
- `edix_custom_config` — required (no default). Do not add a default in the schema or in the flow if you want it required.
- `config` — optional (`Optional[...] = None`)

Verify with:
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
This prints the current `parameter_openapi_schema`.

Next steps/questions so I can give you the shortest path
- Is re-running `prefect deploy` feasible in your setup? If yes, that’s the cleanest fix.
- Roughly how many deployments need resetting? If it’s a lot, the sketch above can be adapted to iterate deployments, compute `parameter_openapi_schema` from their entrypoints, and update them via the API.
m
@Marvin - Let's take it this way: now I will change the table with my UPDATE command. Then I will define this for my flow. How should my prefect.yaml look?

@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,
    edix_custom_config: dict,
    config: Optional[DictConfig] = None,
) -> pd.DataFrame:
m
thought for 26.4 seconds
Perfect — if you’re going to reset the table with your UPDATE, the next step is to re-create each deployment so Prefect will regenerate the parameter schema from the flow signature. Given your flow:
Copy code
@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,
    edix_custom_config: dict,
    config: Optional[DictConfig] = None,
) -> pd.DataFrame:
    ...
Here’s a clean `prefect.yaml` that will produce a deployment with an auto-generated schema:
Copy code
name: forecasting-project
prefect-version: ">=3.0.0"

# Optional: project-level defaults
# build: false
# push: false

deployments:
  - name: forecaster-eval
    entrypoint: flows/forecaster.py:forecaster_evaluation_flow_run
    description: "Run the forecaster evaluation flow"
    work_pool:
      name: k8s  # or your pool
      # job_variables: {}  # if you need them
    # parameters here are per-deployment defaults; they DO NOT change required/optional
    # Leave out required parameters so the UI shows them as Required
    # parameters:
    #   config: null  # optional param; you can omit entirely if you want it blank
    tags:
      - forecasting
    # Optional schedule examples (pick one or omit)
    # schedules:
    #   - interval: 86400  # seconds
    #   - cron: "0 6 * * *"
What this does
- On `prefect deploy`, Prefect reads your flow signature from `entrypoint` and writes a per-deployment `parameter_openapi_schema`.
- With your signature:
  - `tenant_id` will be Required (no default)
  - `edix_custom_config` will be Required (no default)
  - `config` will be Optional (has a default of `None`)
- Since you’re resetting the DB schema to empty, running `prefect deploy` is what triggers regeneration.

Commands to run
Copy code
# From the project root containing prefect.yaml and the flows/forecaster.py file
prefect deploy

# Verify the regenerated schema
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
Notes and tips
- Do not put a default for `edix_custom_config` in the flow signature or in the custom form; if you add a default in the schema, the UI will treat it as optional.
- The `parameters` section in `prefect.yaml` just pre-fills defaults for runs launched from that deployment. It does not change required vs. optional.
- The schema generator may not fully understand `DictConfig`. If the UI form for `config` looks odd, consider annotating it as a standard mapping type and converting internally:
Copy code
from typing import Optional, Mapping, Any
config: Optional[Mapping[str, Any]] = None
Docs
- Prefect YAML: prefect.yaml guide
- Deployments overview: Deployments concepts
- Form building (if you later want to customize per-deployment forms): Advanced form building

If you share your real entrypoint path (file:func) and work pool name, I can tailor the exact `prefect.yaml` for your repo layout.
m
@Marvin - my edix_custom_config is an optional one and is not a mandatory one
m
thought for 22.2 seconds
Thanks for clarifying. If `edix_custom_config` is optional, make that clear in the flow signature and avoid a mutable default.

Recommended flow signature
Copy code
from typing import Optional, Mapping, Any
from prefect import flow

@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,
    edix_custom_config: Optional[Mapping[str, Any]] = None,  # optional
    config: Optional[Mapping[str, Any]] = None,               # optional
):
    ...
Notes:
- Using `= None` makes it optional in Prefect’s schema and shows it under the Optional tab in the UI.
- Avoid `edix_custom_config: dict = {}` (mutable defaults).
- If you must use `DictConfig`, still annotate as `Mapping[str, Any] | None` for schema generation, and convert to `DictConfig` inside the flow.

prefect.yaml for this flow
Copy code
name: forecasting-project
prefect-version: ">=3.0.0"

deployments:
  - name: forecaster-eval
    entrypoint: flows/forecaster.py:forecaster_evaluation_flow_run  # update to your real path
    description: "Run the forecaster evaluation flow"
    work_pool:
      name: k8s  # update to your real pool
    # parameters: {}           # omit; let the UI render from the flow signature
    tags:
      - forecasting
    # schedules:               # optional; add if needed
    #   - cron: "0 6 * * *"
After you run your SQL UPDATE (clearing schemas)
- Re-deploy so Prefect regenerates each deployment’s `parameter_openapi_schema` from the flow signature:
Copy code
prefect deploy
- Verify the regenerated schema includes `tenant_id` as Required and both `edix_custom_config` and `config` under Optional:
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval"
If you prefer not to re-deploy code
- You can reset per-deployment in the UI: open the deployment → Run form/Parameters → restore to auto-generated from flow signature → Save.
- Or we can script a bulk reset via the API by computing the schema from each deployment’s entrypoint and updating `parameter_openapi_schema` for each (see the sketch shared earlier).

Docs for reference:
- prefect.yaml guide
- Deployments concepts
- Form building and custom run forms

If you share your actual entrypoint path and work pool name, I’ll tailor the YAML exactly for your repo.
m
@Marvin - But I generally pass the value for tenant_id, as we create different deployments for different tenants for the same flow
m
thought for 40.0 seconds
Totally fine — for per-tenant deployments, keep `tenant_id` required in the flow signature and set it per deployment via parameters. Mark `edix_custom_config` optional in the signature so it shows under Optional in the UI.

Flow (make edix_custom_config optional with a safe default)
Copy code
from typing import Optional, Mapping, Any
from prefect import flow

@flow(name="Forecaster Evaluation Flow", flow_run_name="Forecaster Evaluation Run")
def forecaster_evaluation_flow_run(
    tenant_id: str,                                      # required
    edix_custom_config: Optional[Mapping[str, Any]] = None,  # optional
    config: Optional[Mapping[str, Any]] = None,              # optional
):
    ...
prefect.yaml with per-tenant deployments
Copy code
name: forecasting-project
prefect-version: ">=3.0.0"

deployments:
  - name: forecaster-eval-tenant-a
    entrypoint: flows/forecaster.py:forecaster_evaluation_flow_run  # update path
    description: "Forecaster evaluation for tenant A"
    work_pool:
      name: k8s   # update to your pool
    parameters:
      tenant_id: "tenant-a"         # pre-filled per deployment
      # edix_custom_config: null    # optional; omit or set to null
      # config: null                # optional; omit or set to null
    tags: [forecasting, tenant-a]

  - name: forecaster-eval-tenant-b
    entrypoint: flows/forecaster.py:forecaster_evaluation_flow_run
    description: "Forecaster evaluation for tenant B"
    work_pool:
      name: k8s
    parameters:
      tenant_id: "tenant-b"
    tags: [forecasting, tenant-b]
What you’ll see in the UI (Custom Run) after you reset the DB column and redeploy
- `tenant_id` remains Required (because the flow signature has no default). The deployment pre-fills it with the value you set in `parameters`, but it stays under Required.
- `edix_custom_config` and `config` appear under Optional (because they have `= None`).

Steps to regenerate schemas after your SQL reset
1) Run deploy in the repo with the prefect.yaml above:
Copy code
prefect deploy
2) Verify the schema is correct:
Copy code
prefect deployment inspect "Forecaster Evaluation Flow/forecaster-eval-tenant-a"
Notes
- Avoid mutable defaults like `{}` for `edix_custom_config`. Use `None` and handle it inside the flow.
- The deployment `parameters` section sets per-deployment defaults but does not change required vs optional; only the flow signature controls that.
- If you previously used the UI form builder for a deployment, make sure it’s set to auto-generate from the flow signature (per deployment) rather than a custom JSON form if you want this behavior.

Refs:
- prefect.yaml guide
- Deployments concepts
- Custom run forms and schema
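If the tenant list grows, the same pattern also works as a loop with the Python API. A sketch under the same assumptions as the yaml above (repo URL, entrypoint, and pool name are placeholders):
Copy code
from prefect import flow

TENANTS = ["tenant-a", "tenant-b", "tenant-c"]

forecaster = flow.from_source(
    "https://github.com/org/repo",  # your code repo
    "flows/forecaster.py:forecaster_evaluation_flow_run",
)

for tenant in TENANTS:
    forecaster.deploy(
        name=f"forecaster-eval-{tenant}",
        work_pool_name="k8s",
        parameters={"tenant_id": tenant},  # pre-filled default; stays Required in the UI
        tags=["forecasting", tenant],
    )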
m
@Marvin - I will try that. But there is this case, where the flow is defined like this:

@flow(
    name="General Forecast Flow Run",
    retries=3,
    retry_delay_seconds=10,
)
def general_forecast_flow_run(
    tenant_id: str,
    resolution_type: str,
    forecast_type: str,
    parent_run_info: Optional[dict] = None,
    override_config: Optional[dict] = None,
    edix_prefect_config: dict[str, Any] | None = None,
) -> dict:

but still none of these Optional parameters are shown as optional in the UI
m
thought for 24.3 seconds
Thanks for the example. Given that those fields have defaults, they should show under Optional — if they don’t, it’s almost certainly because the deployment still has a custom/blank `parameter_openapi_schema` stored from your DB edits (or an older custom form). The UI only renders what’s in that schema; it won’t regenerate it automatically.

What to do

1) Verify the schema on an affected deployment
Copy code
prefect deployment inspect "General Forecast Flow Run/<your-deployment-name>"
If you see an empty/custom schema or anything not matching your flow signature, that’s the reason the UI isn’t showing the Optional params.

2) Reset the run form schema per deployment

Pick one:
- UI: Deployment -> Run form/Parameters -> switch to “auto-generated from flow signature” -> Save.
- Or re-deploy so Prefect recomputes from the flow signature.

Your flow (keep optional params with None defaults)
Copy code
from typing import Optional, Mapping, Any
from prefect import flow

@flow(
    name="General Forecast Flow Run",
    retries=3,
    retry_delay_seconds=10,
)
def general_forecast_flow_run(
    tenant_id: str,                                  # required
    resolution_type: str,                            # required
    forecast_type: str,                              # required
    parent_run_info: Optional[Mapping[str, Any]] = None,   # optional
    override_config: Optional[Mapping[str, Any]] = None,   # optional
    edix_prefect_config: Optional[Mapping[str, Any]] = None,  # optional
) -> dict:
    ...
Notes:
- Avoid mutable defaults like `{}`; use `None`.
- Using `Mapping[str, Any] | None` helps schema generation more than custom types.

Example prefect.yaml for per-tenant deployments
Copy code
name: forecasting-project
prefect-version: ">=3.0.0"

deployments:
  - name: general-forecast-tenant-a
    entrypoint: flows/general_forecast.py:general_forecast_flow_run  # update path
    description: "General forecast for tenant A"
    work_pool:
      name: k8s  # your pool
    parameters:
      tenant_id: "tenant-a"
      resolution_type: "daily"
      forecast_type: "baseline"
    tags: [forecasting, tenant-a]

  - name: general-forecast-tenant-b
    entrypoint: flows/general_forecast.py:general_forecast_flow_run
    description: "General forecast for tenant B"
    work_pool:
      name: k8s
    parameters:
      tenant_id: "tenant-b"
      resolution_type: "daily"
      forecast_type: "baseline"
    tags: [forecasting, tenant-b]
Then run:
Copy code
prefect deploy
What you should see after redeploy/reset
- Required tab: `tenant_id`, `resolution_type`, `forecast_type` (because no defaults)
- Optional tab: `parent_run_info`, `override_config`, `edix_prefect_config` (because they default to None)

If you prefer bulk fixing without redeploying code, we can script against the API to compute and write the correct `parameter_openapi_schema` for each deployment from its entrypoint. If that’s preferable, tell me roughly how many deployments and how you want to target them (by tag, name pattern, etc.), and I’ll share a targeted script.
m
@Marvin - Why not dict ?
m
thought for 18.7 seconds
Great question — dict is perfectly fine.

Why I suggested Mapping[str, Any] (but dict works too)
- Either of these will generate an “object” in the UI and be accepted as JSON:
  - `edix_prefect_config: dict | None = None`
  - `edix_prefect_config: dict[str, Any] | None = None`
  - `edix_prefect_config: Mapping[str, Any] | None = None`
- Mapping is just a bit more flexible as an input type (accepts any mapping-like object, including things like OmegaConf’s DictConfig), while not committing your API to a concrete dict. But semantically for Prefect parameters, both map to JSON objects and work the same in practice.

What actually controls “Optional” in the UI
- Optional vs Required is determined only by the flow signature’s defaults, not by choosing dict vs Mapping.
  - Required: no default (e.g., `tenant_id: str`)
  - Optional: has a default (e.g., `edix_prefect_config: dict | None = None`)
- If your UI is not showing your optional params under Optional, it’s because the deployment’s stored `parameter_openapi_schema` is stale or custom — not because you used dict.

Safety tip
- Avoid mutable defaults like `{}`:
Copy code
# bad: mutable default
edix_prefect_config: dict = {}

# good: optional with safe default
edix_prefect_config: dict | None = None
- If you’re using `DictConfig` in your code, prefer annotating as `Mapping[str, Any] | None` (or `dict[str, Any] | None`) and convert internally — Prefect’s schema generator understands standard types better than custom ones (a sketch of that conversion follows below).

If you reset the DB schema and re-deploy, a parameter declared as:
Copy code
edix_prefect_config: dict | None = None
will appear under Optional in the Custom Run form.

References:
- Form building and schemas: Form building
- Deployments and prefect.yaml: prefect.yaml guide
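For completeness, a minimal sketch of the "convert internally" pattern mentioned above; it assumes OmegaConf is installed, and the flow name is hypothetical:
Copy code
from typing import Any, Mapping, Optional

from omegaconf import DictConfig, OmegaConf
from prefect import flow

@flow
def my_flow(edix_prefect_config: Optional[Mapping[str, Any]] = None) -> None:
    # The parameter stays a plain JSON object in the schema and UI;
    # convert to a DictConfig only inside the flow body
    cfg: DictConfig = OmegaConf.create(dict(edix_prefect_config or {}))
    print(cfg)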
m
@Marvin - Give the update command to reset the table
m
thought for 24.3 seconds
Got it. If your goal is to truly “reset” the stored run-form schema so Prefect will stop using the stale/custom JSON, set `parameter_openapi_schema` to NULL. Then re-deploy (or reset per-deployment in the UI) so Prefect regenerates it from the flow signature.

Postgres-safe snippets (pick one)

- Reset ALL deployments
Copy code
UPDATE deployment
SET parameter_openapi_schema = NULL;
- Reset only rows that currently contain the edix_custom_config-only schema (broad match)
Copy code
UPDATE deployment
SET parameter_openapi_schema = NULL
WHERE parameter_openapi_schema::text LIKE '%"edix_custom_config"%';
- Reset only rows that exactly match the known bad JSON (exact match with dollar-quoting)
Copy code
UPDATE deployment
SET parameter_openapi_schema = NULL
WHERE parameter_openapi_schema::text = $json$
{"type":"object","title":"Parameters","properties":{"edix_custom_config":{"title":"edix_custom_config","default":{},"position":0}},"definitions":{}}
$json$;
- Reset for a single deployment by name (plus flow name)
Copy code
UPDATE deployment d
SET parameter_openapi_schema = NULL
FROM flow f
WHERE d.flow_id = f.id
  AND f.name = 'Forecaster Evaluation Flow'
  AND d.name = 'forecaster-eval-tenant-a';
Recommended workflow
1) Run one of the UPDATEs above (wrap in a transaction if you like).
2) Re-deploy the affected deployments so Prefect recomputes the schema from the flow signature:
Copy code
prefect deploy
3) Verify:
Copy code
prefect deployment inspect "Your Flow Name/your-deployment-name"
Note
- Leaving the column as an empty JSON object will not trigger regeneration; setting it to NULL is the clean reset.
- Optional vs required is still controlled by your flow signature defaults. After redeploy, your Optional params (with `= None`) will show under the Optional tab.
m
@Marvin - I have a question, my prefect.yaml is in a different repo and it refers to the code from github. Would it be able to point to the actual code and get the optional parameters from the actual flow and show it in the UI ?
m
thought for 255.1 seconds
Short answer: Yes—Prefect can show your flow’s parameters (including optional defaults) in the UI, but only if the deployment process can actually import the flow function to infer its signature. If your `prefect.yaml` lives in a different repo than the code, you need to ensure the code is importable at deploy time.

How it works in Prefect 3.x
- The parameter schema that appears in the UI is captured when you create/update the deployment (e.g., via `prefect deploy` or `flow.deploy(...)`).
- Prefect infers parameters by importing the flow function and inspecting its signature. If the flow cannot be imported at deploy time, Prefect cannot infer parameters, and the UI won’t show a structured parameter form.

What this means for your setup
- If repo A has `prefect.yaml` and repo B has the flow code: if you run `prefect deploy` from repo A without a way to import the flow in repo B, Prefect will not infer parameters.
- You have two reliable options to make parameter inference work:

1) Run a Python-based deploy that references the code source and can import the flow:
- Run this in an environment that can fetch/clone or otherwise import the code (and has access tokens for private repos).
- Example:
Copy code
from prefect.flows import Flow

flow = Flow.from_source(
    source="https://github.com/<owner>/<repo>",
    entrypoint="path/to/flows.py:my_flow",
)

flow.deploy(
    name="prod",
    work_pool_name="my-pool",
)
This approach will only capture parameters if that script is able to resolve/import `my_flow`. If the environment cannot import it, parameters won’t be inferred.

2) Use `prefect deploy` but make the code importable first:
- In CI or locally, clone repo B (or install the project as a package), then run `prefect deploy` from a project that points to that entrypoint so the CLI can import the flow.
- If your `prefect.yaml` is in repo A, consider running the deploy job from a workspace where both repos are available (e.g., check out both repos into the job) or switch to the Python-based approach above.

Notes and limitations
- Private GitHub repos: provide credentials (e.g., GITHUB_TOKEN or SSH key) so the deploy environment can access the code.
- If the flow can’t be imported during deploy, the deployment will still work at runtime (the worker can pull the code), but the UI won’t show a structured parameters form; you may just see an empty/default JSON editor.
- You can still set deployment-level default values in `prefect.yaml` under `deployments[].parameters`, but this does not replace full schema inference from the flow’s signature.

Helpful links
- prefect.yaml reference
- Deploy a flow
- Store flow code for deployments
- Flow.from_source API

A couple of quick questions to make sure I point you to the best setup:
- How are you creating the deployment today: `prefect deploy` from repo A, or a Python script calling `flow.deploy()`?
- Is repo B (the code) public or private?
- Are you running this locally or in CI (and can that environment clone/import repo B)?
m
@Marvin - Let me share a snippet of my yaml:

prefect-version: 2.14.12
pull:
  # single worker image
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: # repo link
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: clone
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

deployments:
  - name: Inventory Forecaster Pipeline
    version:
    tags: ["Inventory Intelligence", ""]
    description: "Run the forecaster pipeline"
    entrypoint: pipelines/forecaster_pipeline.py:forecaster_training_flow_run
    parameters:
      tenant_id: '
    work_pool:
      name: ecs-work-pool
      job_variables:
        # image: '{{ build_image.image }}'
        vpc_id: ""
        cluster: ""
        task_definition_arn: ""  # no revision number
        env:  # use of env config per flow
    schedule:
      cron: 0 4 * * *
      timezone: UTC
      day_or: true

There is a separate workflow file which deploys using the prefect deploy command from a script:

echo $filename
prefect --no-prompt deploy --all --prefect-file $filename
m
thought for 45.6 seconds
Thanks for sharing the yaml and how you’re deploying — that helps a lot.

You’re on Prefect 2.14.12. In 2.x, the UI will only show your flow’s parameters (including optional defaults) if the deploy process can import the flow function. In your current setup, all the Git clone + install actions are under pull steps. Pull steps only run on the worker at runtime, not during `prefect deploy`. That means when `prefect deploy --all --prefect-file $filename` runs (from the repo that just contains the prefect.yaml), it cannot import `pipelines/forecaster_pipeline.py:forecaster_training_flow_run`, so Prefect cannot infer parameters. The deployment will work at runtime, but the UI won’t have the structured parameter form.

Two reliable ways to fix this in 2.x:

Option A: Add build steps so the deploy process can import the flow
- Clone and install your flow repo in build steps (these run during `prefect deploy`), set the working directory so the `entrypoint` resolves, then keep your pull steps for runtime. Example (adjust repo URL, branch, and install commands as needed):
Copy code
prefect-version: 2.14.12

build:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: code
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ code.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: clone
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

deployments:
  - name: Inventory Forecaster Pipeline
    tags: ["Inventory Intelligence"]
    description: "Run the forecaster pipeline"
    entrypoint: pipelines/forecaster_pipeline.py:forecaster_training_flow_run
    parameters:
      # You can set defaults here; schema still comes from the flow signature
      tenant_id: null
    work_pool:
      name: ecs-work-pool
      job_variables:
        vpc_id: ""
        cluster: ""
        task_definition_arn: ""
        env: {}
    schedule:
      cron: "0 4 * * *"
      timezone: UTC
      day_or: true
Notes:
- Ensure Poetry is available in the environment where `prefect deploy` runs, otherwise the build step will fail.
- With the working directory set to the cloned repo, your `entrypoint` remains `pipelines/forecaster_pipeline.py:forecaster_training_flow_run`.
- I removed the empty tag `""` and fixed a stray quote on `tenant_id`.

Option B: Checkout the code in your CI before running `prefect deploy`
- In your workflow, checkout both repos (or just the repo that contains the flow code), then run:
Copy code
prefect --no-prompt deploy --all --prefect-file $filename
- Since the code is present locally, Prefect can import the flow during deploy and infer the parameter schema for the UI.

What happens if you don’t do either?
- The deployment still works at runtime (because your worker executes the pull steps and can fetch/install the code), but the UI won’t show the flow’s structured parameter form; you’ll see a plain JSON editor instead.

If you’re open to Prefect 3.x in the future, `Flow.from_source(...).deploy(...)` makes this pattern more straightforward, but your current 2.14.12 setup will work well with one of the two options above.

Questions to confirm:
- Is the flow repo private (i.e., will the build environment have valid credentials for cloning)?
- Do you prefer adding build steps in prefect.yaml or checking out the code in CI before calling `prefect deploy`?

Helpful reading (3.x pages, but the principle about needing to import the flow to infer parameters applies in 2.x as well):
- prefect.yaml reference
- Store flow code for deployments
m
@Marvin - I'm open to using Prefect 3.1.0
m
thought for 126.2 seconds
Awesome — moving to Prefect 3.1.0 will simplify this.

Short answer: Yes, Prefect 3 will show your flow’s parameters (including optional defaults) in the UI as long as the deployment process can load the flow function. With 3.x the cleanest pattern is to deploy via Python using `Flow.from_source(...)` that points to your GitHub repo.

Recommended approach (Prefect 3.1.0)
- Use a small deploy script that references your code repo and entrypoint, then calls `deploy()`.
- Run this in CI (or locally) with access to the repo (e.g., GITHUB_TOKEN for private repos). This allows Prefect to import the flow during deploy and capture the parameter schema for the UI.

Example deploy script
Copy code
from prefect.flows import Flow

# Source: your code repo; Entrypoint: path to file and flow function
flow = Flow.from_source(
    source="<https://github.com/<owner>/<repo>>",
    entrypoint="pipelines/forecaster_pipeline.py:forecaster_training_flow_run",
)

# Create or update the deployment
flow.deploy(
    name="inventory-forecaster-prod",
    work_pool_name="ecs-work-pool",
    tags=["Inventory Intelligence"],
    cron="0 4 * * *",  # optional schedule
    job_variables={
        "vpc_id": "",
        "cluster": "",
        "task_definition_arn": "",
        "env": {},  # your env map
    },
    parameters={
        # optional deploy-time defaults; schema comes from the flow signature
        "tenant_id": None,
    },
)
Key points
- Parameter schema is inferred at deploy time by importing the flow. If the deploy environment can’t access the repo/entrypoint, Prefect cannot infer parameters; the UI will show a plain JSON editor instead of a structured form.
- Using `Flow.from_source(...)` records where to fetch code at runtime too, so workers can pull the code when running.
- If your repo is private, ensure your CI has credentials to access it (e.g., `GITHUB_TOKEN`).

If you prefer the CLI
- In 3.x, `prefect deploy` expects to run from a project that can import the entrypoint. That usually means checking out the code repo in the same job before running `prefect deploy`. Since your YAML lives in a different repo, the Python approach above is typically smoother.

Docs
- Deploy a flow
- Flow.from_source API
- Store flow code for deployments

Quick checks
- Is the code repo public or private?
- Will you run the deploy from CI (so we can set up the Git credentials there)?
m
@Marvin - but I still want to use my prefect.yaml only, as I don't want to make too many changes to the current structure
m
thought for 138.5 seconds
Totally fine to keep using prefect.yaml. In Prefect 3.1.0 you can keep your current structure—just make sure the deploy process can import the flow so the UI can infer parameters. You can do that with either of these minimal-change options:

Option 1: Add build steps (minimal yaml change)
- Duplicate your existing pull steps into a new build section so the code is available during prefect deploy. This lets Prefect import the flow and capture the parameter schema for the UI.

Example (updated for 3.1.0, with tiny cleanups):
Copy code
prefect-version: 3.1.0

build:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: code
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ code.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: clone
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

deployments:
  - name: Inventory Forecaster Pipeline
    tags: ["Inventory Intelligence"]
    description: "Run the forecaster pipeline"
    entrypoint: pipelines/forecaster_pipeline.py:forecaster_training_flow_run
    parameters:
      tenant_id: null
    work_pool:
      name: ecs-work-pool
      job_variables:
        vpc_id: ""
        cluster: ""
        task_definition_arn: ""
        env: {}
    schedule:
      cron: "0 4 * * *"
      timezone: UTC
      day_or: true
Notes:
- Keep running prefect deploy from the directory containing this prefect.yaml. Your CI command can remain nearly the same; if possible, run it from the repo with the yaml to avoid needing special flags.
- Ensure your deploy environment has git and poetry available (build steps run at deploy time).
- Private repos: your `access_token` block must be resolvable during deploy.

Option 2: No yaml changes — check out code in CI before deploy
- In your workflow, clone the code repo (the one with pipelines/forecaster_pipeline.py) before running:
Copy code
prefect --no-prompt deploy --all
- Since the code is present locally during deploy, Prefect can import your flow and infer parameters for the UI. This keeps your prefect.yaml untouched.

Why this matters
- Prefect infers parameters by importing your flow during deployment. If the code is only fetched in pull steps (runtime), the UI won’t show a structured parameter form. Either making the code available in build steps or checking it out in CI solves that.

Helpful docs
- prefect.yaml reference
- Store flow code for deployments

Want me to adapt your exact yaml (with your repo URL and any private token handling) and your CI snippet step-by-step?
m
@Marvin - Will the build step create a docker image now ?
m
thought for 249.6 seconds
Short answer: No. The build steps we added only prep your code so Prefect can import the flow and infer parameters. They do not build a Docker image.

How to handle Docker images with prefect.yaml in 3.1.0:
- If you use ECS job_variables.task_definition_arn: ECS will use that task definition as-is. Prefect will not build or push any image. Make sure the task definition already references the image you want.
- If you want to use ECS job_variables.image: Prefect/ECS will launch a task using that image, but Prefect will not build it for you. You must ensure the image exists in your registry.

Ways to build/push the image while keeping prefect.yaml:

1) Build in your CI before prefect deploy (recommended)
- Have CI run docker build/push to ECR (or your registry), then run:
Copy code
prefect --no-prompt deploy --all --prefect-file prefect.yaml
- Set the resulting image in `work_pool.job_variables.image`.

2) Build via a build step (shell) in prefect.yaml
- Add a step that runs docker build/push. Your deploy environment must have Docker and registry creds. Example:
Copy code
build:
  - prefect.deployments.steps.run_shell_script:
      script: |
        aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin <acct>.dkr.ecr.${AWS_REGION}.amazonaws.com
        docker build -t <repo>:${GIT_SHA} .
        docker tag <repo>:${GIT_SHA} <acct>.dkr.ecr.${AWS_REGION}.amazonaws.com/<repo>:${GIT_SHA}
        docker push <acct>.dkr.ecr.${AWS_REGION}.amazonaws.com/<repo>:${GIT_SHA}

deployments:
  - name: Inventory Forecaster Pipeline
    work_pool:
      name: ecs-work-pool
      job_variables:
        image: "<acct>.dkr.ecr.<region>.amazonaws.com/<repo>:${GIT_SHA}"
        env: {}
- Note: This runs at deploy time, not on the worker.

ECS precedence reminder:
- If `task_definition_arn` is set, ECS uses it and ignores other container settings like `image` in job_variables.
- If no ARN, and `image` is set, Prefect’s ECS worker will register/use a task definition for that image.

Docs:
- prefect.yaml reference
- ECS work pool guide

Do you want me to wire up a concrete build step for your registry (ECR/GCR/ACR) and update your yaml accordingly, or will you handle the image build in CI?
m
@Marvin - right now there is this sync.sh which actually creates the edix_custom_config for all of them:

filename=$1
cd $(dirname $filename)
filename=$(basename $filename)
prefect config set PREFECT_API_URL=$PREFECT_API_URL
for tbl in $(yq '.deployments.[].entrypoint' $filename)
do
  flowdirname=$(dirname $(echo $tbl | cut -d':' -f1))
  flowpyname=$(basename $(echo $tbl | cut -d':' -f1))
  flowfunc=$(echo $tbl | cut -d':' -f2)
  mkdir -p $flowdirname
  cat << EOF > $(echo $tbl | cut -d':' -f1)
from prefect import Flow

@Flow
def $flowfunc(edix_custom_config=dict()):
    print("Sync")
EOF
done
echo $filename
prefect --no-prompt deploy --all --prefect-file $filename
rm -rf ~/.prefect/*

so if I remove the parameter here and add the build step in my yaml, prefect should be able to recognize my optional parameters, right?
m
thought for 58.8 seconds
Yes—with one important tweak: stop generating those placeholder flow files entirely. If you add build steps that clone/install your real code before `prefect deploy` runs, Prefect will import the actual flow functions from your repo and infer their parameters (including optional defaults) for the UI.

Concretely:
- Remove the part of `sync.sh` that writes dummy files like:
Copy code
from prefect import Flow

@Flow
def <flow>(edix_custom_config=dict()):
    ...
Those placeholders override your real code and prevent accurate schema inference.
- Add build steps in your prefect.yaml that:
  - git clone your code repo
  - set the working directory to the cloned repo
  - install dependencies (e.g., poetry install)

This ensures that when `prefect deploy` runs, the entrypoints in your yaml point to real files, and Prefect can import your real flow functions and see the true parameter signatures.

Example shape (3.1.0):
Copy code
prefect-version: 3.1.0

build:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: code
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ code.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/<owner>/<repo>
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: main
      id: clone
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.run_shell_script:
      script: |
        poetry install --no-root

deployments:
  - name: Inventory Forecaster Pipeline
    entrypoint: pipelines/forecaster_pipeline.py:forecaster_training_flow_run
    tags: ["Inventory Intelligence"]
    description: "Run the forecaster pipeline"
    parameters:
      # Optional: deployment-level defaults. Schema still comes from the flow signature.
      tenant_id: null
    work_pool:
      name: ecs-work-pool
      job_variables:
        image: "<your prebuilt image or leave ARN>"
        # OR use task_definition_arn if you prefer a fixed TD
        env: {}
    schedule:
      cron: "0 4 * * *"
      timezone: UTC
      day_or: true
Then your `sync.sh` can be simplified to just:
Copy code
filename=$1
prefect config set PREFECT_API_URL=$PREFECT_API_URL
prefect --no-prompt deploy --all --prefect-file $filename
Requirements for this to work: - The deploy environment must be able to clone the repo (access token present) and install dependencies (git/poetry available). - The
entrypoint
paths in your yaml must resolve to the actual files in the cloned repo. - Keep the
pull
steps so workers fetch/install the code at runtime as well. If those conditions are met, Prefect will import your real flows during deploy and the UI will show your actual optional parameters and defaults. Links for reference: - prefect.yaml reference - Store flow code for deployments
m
@Marvin - Would this be okay ?

filename=$1
cd $(dirname $filename)
filename=$(basename $filename)
prefect config set PREFECT_API_URL=$PREFECT_API_URL
for tbl in $(yq '.deployments.[].entrypoint' $filename)
do
  flowdirname=$(dirname $(echo $tbl | cut -d':' -f1))
  flowpyname=$(basename $(echo $tbl | cut -d':' -f1))
  flowfunc=$(echo $tbl | cut -d':' -f2)
  mkdir -p $flowdirname
  cat << EOF > $(echo $tbl | cut -d':' -f1)
EOF
done
echo $filename
prefect --no-prompt deploy --all --prefect-file $filename
rm -rf ~/.prefect/*
m
thought for 239.3 seconds
Short answer: partly okay, but a few important fixes. What’s okay - In Prefect 3.x, these are valid: - Global flag:
prefect --no-prompt ...
(verified with
prefect --help
) -
prefect deploy --prefect-file <file>
-
prefect deploy --all
-
entrypoint
format should be
path/to/file.py:flow_function_name
. -
prefect deploy
should be run from the project root where your
prefect.yaml
lives, so the
cd
is fine. What needs attention - Creating empty Python files for each entrypoint will fail.
prefect deploy
imports the entrypoint at deploy time to validate the callable. An empty file without a
@flow
function will cause an import/attribute error. - Your
yq
invocation will likely include quotes and split on whitespace. Use raw output and a safe read loop to avoid breakage. -
prefect config set PREFECT_API_URL=$PREFECT_API_URL
is fine, but make sure
PREFECT_API_KEY
is set too if you’re targeting Prefect Cloud. Also note this mutates your current profile. -
rm -rf ~/.prefect/*
is destructive. It deletes profiles, settings, and local caches. Only do this in an ephemeral CI environment, not on a dev machine. Safer version of your script
Copy code
#!/usr/bin/env bash
set -euo pipefail

PREFECT_FILE="${1:-prefect.yaml}"
cd "$(dirname "$PREFECT_FILE")"
PREFECT_FILE="$(basename "$PREFECT_FILE")"

# Optional: set API URL (and ensure API key is present for Cloud)
if [[ -n "${PREFECT_API_URL:-}" ]]; then
  prefect config set "PREFECT_API_URL=${PREFECT_API_URL}"
fi

# Validate entrypoints exist and are importable (avoid creating empty stubs)
if command -v yq >/dev/null 2>&1; then
  # Use raw output (-r) to avoid quoted strings
  while IFS= read -r entry; do
    # Expect format path/to/file.py:flow_func
    file="${entry%%:*}"
    func="${entry##*:}"

    if [[ ! -f "$file" ]]; then
      echo "Error: entrypoint file '$file' not found for '$entry'" >&2
      exit 1
    fi
    # Optional: basic importability check
    python - <<PY
import importlib.util, sys
from pathlib import Path
p = Path("$file")
spec = importlib.util.spec_from_file_location(p.stem, p)
m = importlib.util.module_from_spec(spec)
spec.loader.exec_module(m)  # may raise
assert hasattr(m, "$func"), "Module has no attribute '$func'"
PY
  done < <(yq -r '.deployments[].entrypoint' "$PREFECT_FILE")
fi

prefect --no-prompt deploy --all --prefect-file "$PREFECT_FILE"

# Only do this in CI/ephemeral environments
# rm -rf ~/.prefect/*
Notes and alternatives - If your flows live in a different repo and you’re trying to avoid checking them out locally, don’t create empty files. Either: - Run
prefect deploy
in a checkout that actually contains the flows referenced by
entrypoint
, or - Switch to the Python-first pattern and deploy from source: - In Prefect 3.x:
flow.from_source("<git-url>@<ref>", entrypoint="path/to/file.py:flow").deploy(...)
-
prefect.yaml
reference and options: - Prefect 3.x
prefect.yaml
guide: Prefect YAML (3.x) - Entrypoint type/format: Entrypoint type - Settings overview: Manage settings Quick check: Is your goal to deploy from a CI job without having the flow code present at deploy time? If so, I can suggest the cleanest approach for your setup (CLI vs. Python
from_source
, and whether you’re using a git pull step in your runtime).
m
@Marvin - If you look at the previous messages, my prefect yaml is in a different repo, and i have updated the build step to pull the code from GitHub. Now i have a deployment.yaml which calls this sync.sh file, which creates a dummy python file with a flow inside, and that is messing things up as the actual flow is not referenced. What would be the best way to navigate this issue? I want the flow to be referenced during deployment and the parameters to be picked up.
m
thought for 286.8 seconds
Got it — since your prefect.yaml lives in a separate “deploy” repo and your build step pulls the flow code from GitHub, the right fix is to stop creating dummy files and ensure the real flow code is present and importable at deploy time. That way Prefect can load the entrypoint to infer parameters. Two solid approaches: Option A — YAML-first with a clone in build and pull - In build: clone the flows repo so
prefect deploy
can import the entrypoint and infer parameters - In pull: clone the same repo so workers fetch the code at runtime - Do not create any placeholder Python files Example prefect.yaml
Copy code
name: my-project
prefect-version: ">=3.0.0"

build:
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        rm -rf flows-repo
        git clone --depth 1 https://github.com/your-org/your-flows-repo.git flows-repo

deployments:
  - name: my-deployment
    entrypoint: flows-repo/path/to/flow.py:my_flow
    # Optional: defaults to show in the UI and used when scheduling
    parameters:
      foo: "bar"
      num_rows: 100
    work_pool:
      name: process-pool
    tags: [prod]

# Runtime prep for workers: fetch the same code before execution
pull:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      branch: main
      # or, if private:
      # access_token: ${GITHUB_TOKEN}
  - prefect.deployments.steps.pull.set_working_directory:
      directory: flows-repo
Why this works -
prefect deploy
imports
flows-repo/path/to/flow.py:my_flow
during deploy, so it can read the function signature and infer parameters (defaults appear in the UI). Any values you set under
parameters:
become the deployment’s defaults. - The
pull
steps ensure your workers clone the same repo at run time so the entrypoint is importable on the agent/worker side as well. Option B — Python-first with Flow.from_source (avoids cloning during deploy) - If you’d rather not clone during deploy, define deployments in a small Python script that points to your repo:
Copy code
from prefect.flows import Flow

flow = Flow.from_source(
    "<https://github.com/your-org/your-flows-repo.git@main>",
    entrypoint="path/to/flow.py:my_flow",
)
flow.deploy(
    name="my-deployment",
    work_pool_name="process-pool",
    parameters={"foo": "bar", "num_rows": 100},
)
- Run this in CI to create/update the deployment. Prefect will use the source reference and entrypoint without needing the code locally. - This is the 3.x replacement pattern for old 2.x “build from flow” commands. Clean up your current script - Remove the part that creates empty/dummy Python files. That’s what’s breaking parameter inference and entrypoint validation. - If you keep a
sync.sh
, it should only
git clone
(and maybe set working directory), nothing else. Docs for reference - Prefect YAML (3.x): prefect.yaml guide - Entrypoint format: Entrypoint type - Managing settings: Manage settings Quick check - Do you prefer to keep the YAML-first approach with
prefect deploy
, or would you like a ready-to-run Python
from_source().deploy(...)
script for each deployment? I can tailor either to your exact repo paths and work pool.
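If you want to preview the schema a deployment will get before deploying, you can inspect the flow signature locally. A minimal sketch, assuming parameter_schema is importable from prefect.utilities.callables (the helper Prefect uses internally when it fills in parameter_openapi_schema):
Copy code
from prefect import flow
from prefect.utilities.callables import parameter_schema  # assumption: internal helper

@flow
def my_flow(foo: str = "bar", num_rows: int = 100):
    ...

# Parameters with defaults are the ones the UI lists as Optional
schema = parameter_schema(my_flow.fn)
print(schema.model_dump())  # assumes Prefect 3.x (pydantic v2 models)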
m
@Marvin - I will go with the option 1, so how should my sync file look now ? Is the below correct ?

filename=$1
cd $(dirname $filename)
filename=$(basename $filename)
prefect config set PREFECT_API_URL=$PREFECT_API_URL
for tbl in $(yq '.deployments.[].entrypoint' $filename)
do
  flowdirname=$(dirname $(echo $tbl | cut -d':' -f1))
  flowpyname=$(basename $(echo $tbl | cut -d':' -f1))
  flowfunc=$(echo $tbl | cut -d':' -f2)
  mkdir -p $flowdirname
  cat << EOF > $(echo $tbl | cut -d':' -f1)
EOF
done
echo $filename
prefect --no-prompt deploy --all --prefect-file $filename
rm -rf ~/.prefect/*
m
thought for 75.6 seconds
Great choice. With Option 1, your sync script should not touch or generate any Python files. Let the build/pull steps in prefect.yaml clone the real repo so
prefect deploy
can import the true entrypoints and infer parameters. Use this minimal, safe sync.sh
Copy code
#!/usr/bin/env bash
set -euo pipefail

PREFECT_FILE="${1:?Usage: sync.sh path/to/prefect.yaml}"

cd "$(dirname "$PREFECT_FILE")"
PREFECT_FILE="$(basename "$PREFECT_FILE")"

# Optional: set API URL for this run (and ensure PREFECT_API_KEY is set for Cloud)
if [[ -n "${PREFECT_API_URL:-}" ]]; then
  prefect config set "PREFECT_API_URL=${PREFECT_API_URL}"
fi

# Do not create dummy files. Build/pull steps in prefect.yaml will fetch the real code.
prefect --no-prompt deploy --all --prefect-file "$PREFECT_FILE"

# Only do this in ephemeral CI containers. Never on a developer machine.
# rm -rf ~/.prefect/*
Make sure your prefect.yaml does the cloning - Build: clone repo so deploy-time import/parameter inference works - Pull: clone repo so workers can import at runtime - Reference the entrypoint path relative to the project root after build (e.g., include the folder name you clone into) Example prefect.yaml
Copy code
name: my-project
prefect-version: ">=3.0.0"

build:
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        rm -rf flows-repo
        git clone --depth 1 https://github.com/your-org/your-flows-repo.git flows-repo

deployments:
  - name: my-deployment
    entrypoint: flows-repo/path/to/flow.py:my_flow
    parameters:
      foo: "bar"
      num_rows: 100
    work_pool:
      name: process-pool
    tags: [prod]

pull:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      branch: main
      # If private:
      # access_token: ${GITHUB_TOKEN}
  - prefect.deployments.steps.pull.set_working_directory:
      directory: flows-repo
Notes - Remove the loop that writes empty Python files — that’s what breaks parameter inference. - Ensure the entrypoint path in prefect.yaml matches the location after the build clone (e.g.,
flows-repo/...
). - For private repos, pass a token via
access_token
in the
git_clone
step using an env var (e.g.,
${GITHUB_TOKEN}
). Refs: - Prefect YAML guide: prefect.yaml (3.x) - Entrypoint format: Entrypoint type - Settings: Manage settings If you share your exact repo URL and entrypoint path, I can double-check the YAML and script for you.
m
@Marvin - The sync file step is clear, below is my pull step, how should my build step look ?

pull: # single worker image
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.git_clone:
      repository: # repo link
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: clone
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.run_shell_script: # install missing packages
      script: |
        poetry install --no-root
m
thought for 57.6 seconds
Great — your pull step is clear. Make the build step mirror the same clone + working-directory setup so the entrypoint resolves at deploy time. Also, use the fully qualified step paths. Key fixes - Use these step names: - git_clone:
prefect.deployments.steps.pull.git_clone
- set_working_directory:
prefect.deployments.steps.pull.set_working_directory
- run_shell_script:
prefect.deployments.steps.utility.run_shell_script
- Keep paths consistent between build and pull. If you
set_working_directory
into the repo directory in pull, do the same in build and make the
entrypoint
path relative to the repo root (no top-level repo dir prefix). Example build step
Copy code
build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: build_clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ build_clone.directory }}"

  # Optional: sanity check importability at deploy time
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        python - <<'PY'
        import importlib.util, pathlib, sys
        p = pathlib.Path("path/to/flow.py")
        assert p.exists(), f"Missing {p}"
        spec = importlib.util.spec_from_file_location(p.stem, p)
        m = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(m)
        assert hasattr(m, "my_flow"), "Flow function 'my_flow' not found"
        PY
Then your deployment should use an entrypoint relative to the repo root:
Copy code
deployments:
  - name: my-deployment
    entrypoint: path/to/flow.py:my_flow
    parameters:
      foo: bar
      num_rows: 100
Your pull step (adjusted to correct step paths) looks like:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: clone
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        poetry install --no-root
Notes - Ensure
poetry
is available in your worker image, or consider using
prefect.deployments.steps.utility.pip_install_requirements
if you have a requirements file. - If the repo is private, the
access_token
in both build and pull steps must be valid in those environments. Docs: - YAML steps and schema: prefect.yaml (3.x) - Entrypoint format: Entrypoint type
m
@Marvin - Why doesn't the build step have /opt/prefect as the working directory like in the pull step ?
m
thought for 74.6 seconds
Short answer: because build steps run on the machine that executes prefect deploy (your CI/runner), not inside your worker container. The absolute path /opt/prefect is meaningful inside your worker image at runtime, but it likely doesn’t exist (or isn’t writable) on your CI host during deploy. Details - Build steps are only there to make the entrypoint importable so Prefect can infer parameters at deploy time. You don’t need a fixed absolute path; just clone and set the working directory to the cloned repo’s directory. - Pull steps run on the worker before each flow run. There you can safely use /opt/prefect because you control the worker image/layout. So for build, prefer relative/current workspace paths:
Copy code
build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: build_clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ build_clone.directory }}"

  # optional: sanity check imports
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        python - <<'PY'
        import importlib.util, pathlib
        p = pathlib.Path("path/to/flow.py")
        assert p.exists(), f"Missing {p}"
        spec = importlib.util.spec_from_file_location(p.stem, p)
        m = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(m)
        assert hasattr(m, "my_flow"), "Flow function 'my_flow' not found"
        PY
And keep your pull steps using /opt/prefect:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: clone
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        poetry install --no-root
This separation keeps deploy-time logic portable to any CI/host while runtime logic targets your worker filesystem layout. Docs: prefect.yaml (3.x)
m
@Marvin - Got this error Deploying all flows with an existing deployment configuration... ╭──────────────────────────────────────────────────────────────────────────────╮ │ Deploying forecasting branch items - Synthetic │ ╰──────────────────────────────────────────────────────────────────────────────╯ Defining a schedule via the
schedule
key in the deployment has been deprecated. It will not be available in new releases after Sep 2024. Please use
schedules
instead by renaming the
schedule
key to
schedules
and providing a list of schedule objects. Traceback (most recent call last): File "<frozen importlib._bootstrap_external>", line 936, in exec_module File "<frozen importlib._bootstrap_external>", line 1073, in get_code File "<frozen importlib._bootstrap_external>", line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/prefect-manifests/prefect-manifests/environments/dev/dbf2939a-42f1-48a2-aec0-8c2ed9ed3cd0/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py'
m
thought for 60.4 seconds
Thanks for the details. You’re hitting two separate things: 1) Deprecation warning - Replace
schedule:
with
schedules:
and provide a list. Example:
Copy code
deployments:
  - name: my-deployment
    schedules:
      - cron: "0 2 * * *"
2) FileNotFoundError during deploy - Prefect is trying to import your entrypoint at deploy time and can’t find the file: /home/runner/work/prefect-manifests/prefect-manifests/environments/dev/dbf2939a-42f1-48a2-aec0-8c2ed9ed3cd0/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py This means the entrypoint path in your deployment does not match where the file actually exists after your build step runs. Since you’re cloning the flows repo in build, you need your entrypoint to be relative to the cloned repo root (and the build working directory should be set accordingly). Do this in your build
Copy code
build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: build_clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ build_clone.directory }}"

  # Optional: debug what’s on disk
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        pwd
        ls -R
        python -c "import pathlib; p=pathlib.Path('edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py'); print('Exists?', p.exists())"
Then update the entrypoint to match repo layout - If the file is at repo_root/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py and your build working dir is set to the repo root (as above), use:
Copy code
deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:your_flow_function
Why the error showed environments/dev/...? - That path is coming from your current entrypoint. It likely still points to a previous local path you used to generate dummy files. Now that dummy-file step is removed, that path doesn’t exist. Point it to the path inside the cloned repo instead. Sanity checks - Ensure the file exists at that path in the 1.2.0 branch/tag you’re cloning. - Keep your pull step aligned with build. You’re already doing:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: clone
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        poetry install --no-root
- With this, workers will also import the flow at the same relative path at runtime. If you paste the deployment’s
entrypoint:
and the repo’s actual file path, I can confirm the exact string you should use.
m
@Marvin - The below step is failing:

- name: Syncing Dev changed files
  run: |
    # Loop through each changed file
    envdir='environments/dev'
    for FILE in $(git diff --name-only HEAD^ HEAD); do
      is_temp=$(echo $FILE | grep Template | wc -l)
      if [[ "$is_temp" -eq 0 && "$FILE" == "$envdir/"* ]]; then
        echo "syncing file: $FILE"
        PREFECT_API_URL="http://localhost:${PREFECT_PORT}/api" ./environments/sync-file.sh $FILE
      fi
    done

Please find the logs below:

syncing file: environments/dev/dbf2939a-42f1-48a2-aec0-8c2ed9ed3cd0/forecaster.yaml /opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/pydantic_settings/main.py:425 UserWarning: Config key
pyproject_toml_table_header
is set in model_config but will be ignored because no PyprojectTomlConfigSettingsSource source is configured. To use this config key, add a PyprojectTomlConfigSettingsSource source to the settings sources via the settings_customise_sources hook. self._settings_warn_unused_config_keys(sources, self.model_config) /opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/pydantic_settings/main.py:425 UserWarning: Config key
toml_file
is set in model_config but will be ignored because no TomlConfigSettingsSource source is configured. To use this config key, add a TomlConfigSettingsSource source to the settings sources via the settings_customise_sources hook. self._settings_warn_unused_config_keys(sources, self.model_config) Set 'PREFECT_API_URL' to 'http://localhost:4200/api'. PREFECT_API_URL is also set by an environment variable which will override your config value. Run
unset PREFECT_API_URL
to clear it. Updated profile 'ephemeral'. /opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/pydantic_settings/main.py:425 UserWarning: Config key
pyproject_toml_table_header
is set in model_config but will be ignored because no PyprojectTomlConfigSettingsSource source is configured. To use this config key, add a PyprojectTomlConfigSettingsSource source to the settings sources via the settings_customise_sources hook. self._settings_warn_unused_config_keys(sources, self.model_config) /opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/pydantic_settings/main.py:425 UserWarning: Config key
toml_file
is set in model_config but will be ignored because no TomlConfigSettingsSource source is configured. To use this config key, add a TomlConfigSettingsSource source to the settings sources via the settings_customise_sources hook. self._settings_warn_unused_config_keys(sources, self.model_config) Deploying all flows with an existing deployment configuration... ╭──────────────────────────────────────────────────────────────────────────────╮ │ Deploying forecasting branch items - Synthetic │ ╰──────────────────────────────────────────────────────────────────────────────╯ Defining a schedule via the
schedule
key in the deployment has been deprecated. It will not be available in new releases after Sep 2024. Please use
schedules
instead by renaming the
schedule
key to
schedules
and providing a list of schedule objects. Traceback (most recent call last): File "<frozen importlib._bootstrap_external>", line 936, in exec_module File "<frozen importlib._bootstrap_external>", line 1073, in get_code File "<frozen importlib._bootstrap_external>", line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/prefect-manifests/prefect-manifests/environments/dev/dbf2939a-42f1-48a2-aec0-8c2ed9ed3cd0/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/flows.py", line 1742, in load_flow_from_entrypoint flow = import_object(entrypoint) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 205, in import_object module = load_script_as_module(script_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 168, in load_script_as_module raise ScriptError(user_exc=exc, path=path) from exc prefect.exceptions.ScriptError: Script at 'edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py' encountered an exception: FileNotFoundError(2, 'No such file or directory') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/cli/_utilities.py", line 42, in wrapper return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/cli/_types.py", line 153, in sync_fn return asyncio.run(async_fn(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/cli/deploy.py", line 438, in deploy await _run_multi_deploy( File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/cli/deploy.py", line 877, in _run_multi_deploy await _run_single_deploy(deploy_config, actions, prefect_file=prefect_file) File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/client/utilities.py", line 103, in with_injected_client return await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/cli/deploy.py", line 501, in _run_single_deploy flow = load_flow_from_entrypoint(deploy_config["entrypoint"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/flows.py", line 1751, in load_flow_from_entrypoint flow = safe_load_flow_from_entrypoint(entrypoint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/flows.py", line 2001, in safe_load_flow_from_entrypoint func_def, source_code = 
_entrypoint_definition_and_source(entrypoint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/flows.py", line 2235, in _entrypoint_definition_and_source source_code = Path(path).read_text() ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/pathlib.py", line 1058, in read_text with self.open(mode='r', encoding=encoding, errors=errors) as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/pathlib.py", line 1044, in open return io.open(self, mode, buffering, encoding, errors, newline) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: 'edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py' An exception occurred.
m
thought for 54.3 seconds
Thanks for the logs — this confirms the entrypoint is being imported before your code exists at that path in CI. What’s happening - Build steps run during prefect deploy on your CI host. Your entrypoint points to environments/dev/.../edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py, but you aren’t creating that path in build anymore (and you shouldn’t). So the import fails. Fix it in two parts 1) Add a build section that clones your flows repo and sets the working directory to the repo root
Copy code
build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: build_clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ build_clone.directory }}"

  # Optional: debug to ensure your file is present
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        echo "CI build PWD:"
        pwd
        echo "Repo tree:"
        ls -R | head -n 200
        python - <<'PY'
        import pathlib
        p = pathlib.Path("edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py")
        print("Exists?", p.exists(), "->", p.resolve())
        PY
2) Update the deployment entrypoint to be relative to the cloned repo root - If your file is at repo_root/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py and the flow function is my_flow:
Copy code
deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:my_flow
    # replace schedule with schedules (list)
    schedules:
      - cron: "0 2 * * *"
    # keep any parameters you need here as defaults
- Do not include environments/dev/... in the entrypoint — that was the old placeholder path and won’t exist after build. Align your pull steps (runtime) with the same repo layout - Your pull section is good conceptually; just ensure you use fully-qualified step names:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/your-org/your-flows-repo.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.2.0
      id: clone
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        poetry install --no-root
Why we don’t set /opt/prefect in build - Build runs on your CI host; /opt/prefect is a path inside your worker image. Use the cloned repo directory in build; use /opt/prefect in pull. Action items for you - Add the build section above. - Change the entrypoint to the repo-relative path. - Replace schedule with schedules as shown. - Re-run deploy. If it still fails, paste the
pwd/ls
output from the build step and your exact entrypoint so I can pinpoint the path. Refs - prefect.yaml (3.x) - Entrypoint format
m
@Marvin - But are you sure the build step is working ? I feel the indentation is not correct and i don't see it printing anything. My entry point is always like this entrypoint: edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run please find my build and pull steps indentation below build: - prefect.deployments.steps.pull.git_clone: repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git access_token: "{{ prefect.blocks.secret.ghb-tkn }}" branch: 1.5.0 id: build_clone - prefect.deployments.steps.pull.set_working_directory: directory: "{{ build_clone.directory }}" # Optional: debug what's on disk - prefect.deployments.steps.utility.run_shell_script: script: | pwd ls -R python -c "import pathlib; p=pathlib.Path('edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py'); print('Exists?', p.exists())" pull: #single Worker image - prefect.deployments.steps.set_working_directory: directory: /opt/prefect
m
thought for 33.1 seconds
You’re right to double-check. Two things to fix: 1) Step names and indentation - Use the fully-qualified step names: - git_clone:
prefect.deployments.steps.pull.git_clone
- set_working_directory:
prefect.deployments.steps.pull.set_working_directory
- run_shell_script:
prefect.deployments.steps.utility.run_shell_script
- Indentation under
pull:
must be indented as a list (your snippet shows it at the root). Same idea for
build:
. 2) Make the build step noisy and fail fast if the file isn’t present - Explicitly stream output and add a hard failure if the flow file can’t be found. This will prove the build step is running and that the path is correct. Try this exact structure
Copy code
name: my-project
prefect-version: ">=3.0.0"

build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: build_clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ build_clone.directory }}"

  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "Build PWD: $(pwd)"
        echo "Repo contents (top-level):"
        ls -la
        python - <<'PY'
        import pathlib, sys
        p = pathlib.Path("edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py")
        print("Flow file exists?", p.exists(), "->", p.resolve())
        if not p.exists():
            sys.exit("ERROR: flow file not found at expected path during build")
        PY

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect

  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: clone

  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/{{ clone.directory }}

  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "Runtime PWD: $(pwd)"
        poetry install --no-root

deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
    # migrate away from 'schedule:' to 'schedules:' per deprecation
    # schedules:
    #   - cron: "0 2 * * *"
Notes - Make sure the repo actually has the file at that path on branch/tag 1.5.0. If you’re targeting a tag, using
branch: "1.5.0"
will typically work, but if not, switch to
commit_sha:
with the tag’s SHA. - The entrypoint should be repo-relative (no environments/dev prefix). - Your earlier
pull
used
prefect.deployments.steps.set_working_directory
(missing
.pull.
) and the list item was not indented under
pull:
— that would be ignored. Extra debugging if needed - You can force more logs by setting
PREFECT_LOGGING_LEVEL=DEBUG
in your CI job when running
prefect deploy
. - If you still don’t see the “Build PWD”/“Repo contents” lines, it means the build block isn’t being parsed (usually due to YAML structure). Paste the top of your prefect.yaml (name/prefect-version/build/pull/deployments) and I’ll validate the structure.
m
@Marvin - Do you think something should be changed here ?

set -euo pipefail

PREFECT_FILE="${1?Usage sync.sh path/to/prefect.yaml}"

cd "$(dirname "$PREFECT_FILE")"
PREFECT_FILE="$(basename "$PREFECT_FILE")"

# Optional: set API URL for this run
if [[ -n "${PREFECT_API_URL:-}" ]]; then
  prefect config set "PREFECT_API_URL=${PREFECT_API_URL}"
fi

# Prefect Deploy command
prefect --no-prompt deploy --all --prefect-file "$PREFECT_FILE"

# Only do this in ephemeral CI containers
rm -rf ~/.prefect/*
m
thought for 43.6 seconds
Your sync.sh is almost perfect. I’d make 3 small tweaks: - Don’t call prefect config set when PREFECT_API_URL is already an env var (it just warns and the env var wins anyway). Either rely on the env var or unset it first. - Make the cleanup conditional on CI so you don’t accidentally wipe local profiles. - Optional: add a bit of debug to confirm which file is being used and enable more logs during deploy. Suggested version
Copy code
#!/usr/bin/env bash
set -euo pipefail

PREFECT_FILE="${1:?Usage: sync.sh path/to/prefect.yaml}"

cd "$(dirname "$PREFECT_FILE")"
PREFECT_FILE="$(basename "$PREFECT_FILE")"

echo "Using prefect file: $PREFECT_FILE"
# Prefer env vars over config set to avoid warning/no-op
# If you really want to store it in the profile instead of env, unset first:
# if [[ -n "${PREFECT_API_URL:-}" ]]; then
#   unset PREFECT_API_URL
#   prefect config set "PREFECT_API_URL=${YOUR_URL}"
# fi

# Optional: more verbose logs while debugging build/pull steps
# export PREFECT_LOGGING_LEVEL=DEBUG

prefect --no-prompt deploy --all --prefect-file "$PREFECT_FILE"

# Only do this in CI
if [[ "${CI:-}" == "true" ]]; then
  rm -rf ~/.prefect/*
fi
Notes - The warning you saw (“PREFECT_API_URL is also set by an environment variable…”) is because env vars override profile settings. Rely on the env var and skip config set to avoid it. - Ensure your prefect.yaml build/pull sections have the fully qualified step names and proper indentation so you see the debug output from your build steps. - YAML guide: prefect.yaml (3.x) - Confirm that
entrypoint: edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
is correct relative to the repo root after the build step sets the working directory.
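To see which value actually wins, you can ask Prefect for the effective setting. A minimal sketch, assuming PREFECT_API_URL is still importable from prefect.settings with a value() accessor, as in current 3.x releases:
Copy code
import os

from prefect.settings import PREFECT_API_URL  # assumption: 3.x settings object

# The resolved value reflects precedence (env var > profile > default),
# which is why `prefect config set` warned that the env var overrides it.
print("env:      ", os.environ.get("PREFECT_API_URL"))
print("effective:", PREFECT_API_URL.value())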
m
@Marvin - 10:44:48.631 | DEBUG | prefect.profiles - Using profile 'ephemeral' Deploying all flows with an existing deployment configuration... ╭──────────────────────────────────────────────────────────────────────────────╮ │ Deploying forecasting branch items - Synthetic │ ╰──────────────────────────────────────────────────────────────────────────────╯ 10:44:48.816 | DEBUG | prefect.client - Connecting to API at http://localhost:4200/api/ Defining a schedule via the
schedule
key in the deployment has been deprecated. It will not be available in new releases after Sep 2024. Please use
schedules
instead by renaming the
schedule
key to
schedules
and providing a list of schedule objects. 10:44:48.840 | DEBUG | prefect.client - Connecting to API at http://localhost:4200/api/ 10:44:48.865 | DEBUG | prefect.client - Connecting to API at http://localhost:4200/api/ Traceback (most recent call last): File "<frozen importlib._bootstrap_external>", line 936, in exec_module File "<frozen importlib._bootstrap_external>", line 1073, in get_code File "<frozen importlib._bootstrap_external>", line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/prefect-manifests/prefect-manifests/environments/dev/dbf2939a-42f1-48a2-aec0-8c2ed9ed3cd0/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/flows.py", line 1742, in load_flow_from_entrypoint flow = import_object(entrypoint) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 205, in import_object module = load_script_as_module(script_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 168, in load_script_as_module raise ScriptError(user_exc=exc, path=path) from exc prefect.exceptions.ScriptError: Script at 'edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py' encountered an exception: FileNotFoundError(2, 'No such file or directory') How do I know whether the build step is pulling the code from GitHub ?
m
thought for 70.2 seconds
You're right to suspect the build step. Two points:
- Entrypoint import is resolved relative to the project root (where prefect.yaml lives), not the build step's working directory. So even if you cd into the clone during build, Prefect will still look for the entrypoint under the project root.
- To prove the build step ran and to make the import succeed, clone the repo into a predictable subfolder under the project root (e.g., flows-repo), then point entrypoint at that path. Also print and fail fast if the file is missing.

How to verify build is running and fix the path
- Add a noisy run_shell_script to the build section with stream_output: true that:
  - prints pwd
  - shows the directory tree
  - moves the cloned repo into a known folder (flows-repo)
  - asserts the flow file exists under that folder

Example build and pull, plus entrypoint
Copy code
name: my-project
prefect-version: ">=3.0.0"

build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: clone

  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "BUILD PWD: $(pwd)"
        echo "Cloned dir: {{ clone.directory }}"
        # Normalize path to a fixed folder under project root so entrypoint resolves
        rm -rf flows-repo
        mv "{{ clone.directory }}" "flows-repo"
        echo "Top-level after move:"
        ls -la
        echo "Checking for flow file..."
        python - <<'PY'
        import pathlib, sys
        p = pathlib.Path("flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py")
        print("Flow file exists?", p.exists(), "->", p.resolve())
        if not p.exists():
            sys.exit("ERROR: flow file not found during build")
        PY

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: pull_clone
  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "RUNTIME PWD: $(pwd)"
        rm -rf flows-repo
        mv "{{ pull_clone.directory }}" "flows-repo"
        echo "Using flows-repo as runtime working dir"
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/flows-repo
  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        poetry install --no-root

deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
    # Migrate to 'schedules:' (list) instead of 'schedule:'
    # schedules:
    #   - cron: "0 2 * * *"
How to confirm build ran
- With stream_output: true, you should see the "BUILD PWD", "Cloned dir", and "Flow file exists?" lines in your CI logs.
- If you don't see them, the build section is not being parsed (usually a YAML structure issue, or you're pointing at a different prefect.yaml).
- Echo the file being used in your sync script (you already print it). You can also run `grep -n "build:" "$PREFECT_FILE"` before running deploy to confirm the build block is present.

Why your earlier entrypoint failed
- Prefect tried to import edix_forecasting_intelligence/... relative to the project root, but your clone wasn't under that path. By normalizing the clone into flows-repo and referencing flows-repo/... in the entrypoint, the importer can find the file.

Optional debugging
- Keep `PREFECT_LOGGING_LEVEL=DEBUG` set for deploy runs so step logs are visible.
- If "1.5.0" is a tag rather than a branch, consider using commit_sha instead of branch.

Docs
- YAML steps and schema: prefect.yaml (3.x)
- Entrypoint format: Entrypoint type
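You can also confirm what the API actually stored for the deployment. This is a hedged sketch against the Prefect 3.x Python client; the deployment name is from this thread and flow names default to the function name with dashes, so adjust both to yours:
Copy code
# Sketch: inspect the registered deployment (assumes Prefect 3.x client API).
import asyncio
from prefect import get_client

async def main() -> None:
    async with get_client() as client:
        # Format is "<flow name>/<deployment name>"; flow names default to
        # the function name with dashes (general_forecast_flow_run -> general-forecast-flow-run).
        dep = await client.read_deployment_by_name(
            "general-forecast-flow-run/forecasting branch items - Synthetic"
        )
        print("entrypoint:", dep.entrypoint)
        print("pull steps:", dep.pull_steps)

asyncio.run(main())
If the stored entrypoint or pull steps don't match your prefect.yaml, you deployed a different file than you think.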
m
@Marvin - Let's break down the steps and talk about the flow happening now: from the deployment/CI file calling the sync file, to running prefect deploy, to executing the build and pull steps.
m
thought for 48.6 seconds
Absolutely — here's the end-to-end flow broken down, from your CI calling the sync script through Prefect deploy, build/pull steps, and execution. I'll also call out where things run and how paths are resolved.

High-level roles
- CI machine (where prefect deploy runs): executes your sync.sh, reads prefect.yaml, runs build steps, imports the entrypoint to infer parameters, and registers/upserts the deployment.
- Worker machine (where flow runs happen): executes pull steps before each run, then imports and executes the flow.

End-to-end sequence

1) CI triggers and calls your sync script
- Your CI job detects changes in environments/dev/... and runs:
  PREFECT_API_URL=... ./environments/sync-file.sh environments/dev/.../forecaster.yaml
- sync.sh cds into the prefect.yaml's directory and runs:
  prefect --no-prompt deploy --all --prefect-file forecaster.yaml

2) prefect deploy loads your prefect.yaml on the CI machine
- Reads the deployments defined in that file.
- For each deployment, in order: executes build steps (on the CI machine), imports the entrypoint to infer parameters and validate the flow, then creates/updates the deployment in the API with that entrypoint, parameters, schedules, tags, work pool, and pull steps.

3) Build steps (CI-time) prepare the code for import
- These steps run on your CI host, not inside your worker image. They must make the entrypoint path resolvable from the project root (the directory containing prefect.yaml).
- Recommended pattern for reliable paths:
  - Clone the flows repo into a stable subfolder under the project root (e.g., flows-repo)
  - Do NOT rely on changing the process working directory for import resolution — entrypoint is resolved relative to the project root
  - Make the build step noisy and fail fast if the flow file is missing

Example build that proves it ran and normalizes the path
Copy code
build:
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: clone

  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "BUILD PWD: $(pwd)"
        echo "Cloned dir: {{ clone.directory }}"
        rm -rf flows-repo
        mv "{{ clone.directory }}" "flows-repo"
        echo "Top-level after move:"
        ls -la
        echo "Checking for flow file..."
        python - <<'PY'
        import pathlib, sys
        p = pathlib.Path("flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py")
        print("Flow file exists?", p.exists(), "->", p.resolve())
        if not p.exists():
            sys.exit("ERROR: flow file not found during build")
        PY
4) Entrypoint import and parameter inference (still on CI)
- After build, Prefect imports your entrypoint. Key rule: the entrypoint path is evaluated relative to the project root (where prefect.yaml lives).
- Choose one consistent pattern:
  - If you move the repo to flows-repo/, set entrypoint to flows-repo/...
  - Or keep entrypoint as edix_forecasting_intelligence/... and move the repo contents under the project root so that path exists.
- Prefect reads the flow function signature to infer parameters; your deployment defaults under parameters: are applied and visible in the UI (a sketch of this import-and-inspect step follows the example below).

Example deployment with a consistent entrypoint
Copy code
deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
    # migrate away from 'schedule:'
    # schedules:
    #   - cron: "0 2 * * *"
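For intuition, "importing the entrypoint to infer parameters" is roughly: load the module from the entrypoint's file path, find the flow, and inspect its signature. A rough illustration (not Prefect's actual implementation) that also shows why a missing file fails at exactly this step:
Copy code
# Rough illustration only - not Prefect's internal code.
import importlib.util
import inspect

def inspect_entrypoint(path: str, func_name: str) -> None:
    spec = importlib.util.spec_from_file_location("entrypoint_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # raises FileNotFoundError if the path is wrong
    obj = getattr(module, func_name)
    fn = getattr(obj, "fn", obj)  # @flow wraps the function; the original is on .fn
    for name, param in inspect.signature(fn).parameters.items():
        if param.default is inspect.Parameter.empty:
            print(f"{name}: required")
        else:
            print(f"{name}: optional (default={param.default!r})")

inspect_entrypoint(
    "flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py",
    "general_forecast_flow_run",
)
This is also why the FileNotFoundError in your traceback surfaced during deploy, before anything reached a worker.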
5) Deployment is created/updated in the API
- The deployment record now contains your pull steps. These do not run during deploy.

6) A worker picks up a flow run and executes pull steps (runtime)
- On the worker machine, before running the flow, Prefect executes the pull steps to put code in the right place for the entrypoint path.
- Keep pull steps consistent with how you set entrypoint.

Example pull that mirrors the build normalization
- Since entrypoint uses flows-repo/..., keep the working directory at /opt/prefect and ensure that folder exists at runtime:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: pull_clone
  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        rm -rf flows-repo
        mv "{{ pull_clone.directory }}" "flows-repo"
        echo "RUNTIME PWD: $(pwd)"
        # If you need deps:
        # poetry install --no-root
- Now the entrypoint flows-repo/edix_forecasting_intelligence/... is valid at runtime as well.

How to know build ran
- You'll see the "BUILD PWD", "Cloned dir", and "Checking for flow file" lines in your CI logs (we set stream_output: true).
- If you don't see them:
  - The build block may not be in the same prefect.yaml you're deploying
  - YAML indentation may be off
  - You may not be passing the intended file to sync.sh
- In your sync.sh, print the file you use and enable debug logging:
Copy code
echo "Using prefect file: $PREFECT_FILE"
export PREFECT_LOGGING_LEVEL=DEBUG
Common pitfalls to avoid
- Using absolute paths in build (e.g., /opt/prefect) — that path exists in your worker image, not on CI.
- Using entrypoint paths that don't exist under the project root after build completes.
- Relying on working-directory changes to alter how entrypoint is resolved — keep entrypoint consistent with the project-root file layout.
- Still using schedule instead of schedules (you're seeing the deprecation warning).

References
- Prefect YAML (3.x): prefect.yaml guide
- Entrypoint format: Entrypoint type
- Settings: Manage settings

If you paste your exact top-level prefect.yaml (name, prefect-version, build, pull, deployments), I can sanity-check indentation and confirm the final entrypoint string for your repo layout.
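To catch the first two pitfalls before they reach a worker, a small pre-deploy check can verify every deployment's entrypoint file exists under the project root once your build-style clone is in place. A sketch, assuming PyYAML is installed and that it runs from the directory containing prefect.yaml:
Copy code
# Hypothetical pre-deploy check: verify each deployment's entrypoint file exists.
import sys
from pathlib import Path
import yaml

config = yaml.safe_load(Path("prefect.yaml").read_text()) or {}
missing = []
for dep in config.get("deployments", []):
    file_part = dep.get("entrypoint", "").split(":", 1)[0]
    if not file_part or not Path(file_part).is_file():
        missing.append((dep.get("name", "<unnamed>"), file_part or "<no entrypoint>"))

for name, path in missing:
    print(f"deployment {name!r}: entrypoint file missing -> {path}")
sys.exit(1 if missing else 0)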
m
@Marvin - Based on what you have said, I don't think the build step happened, as I don't see any statements in the logs.
m
thought for 64.7 seconds
If you're not seeing any of your "build" prints, it almost certainly means the build block is not being parsed/executed. Let's verify and fix this systematically.

Fast checks to confirm what Prefect is actually using
- Confirm the exact file being deployed in CI. In sync.sh, before deploy:
Copy code
echo "Deploying with prefect file: $PREFECT_FILE"
test -f "$PREFECT_FILE" || { echo "File not found"; exit 1; }
- Dump the build section that Prefect will read. Add:
Copy code
echo "Build steps from $PREFECT_FILE:"
yq -C '.build' "$PREFECT_FILE" || true
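If yq isn't available on the CI image, the same check works with a few lines of Python (assuming PyYAML is installed; point it at the file you pass to sync.sh):
Copy code
# Print the build section Prefect will read, or flag its absence.
import json
from pathlib import Path
import yaml

config = yaml.safe_load(Path("forecaster.yaml").read_text())  # the file you deploy
build = config.get("build") if isinstance(config, dict) else None
print(json.dumps(build, indent=2) if build else "NO BUILD SECTION FOUND")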
- If either check prints null or nothing, Prefect has no build steps in the file you're passing.

Force-proof that build runs
- Put an intentionally noisy, fail-fast step at the top of build. If deploy still jumps to an entrypoint import error without showing this, build is not running.
Copy code
build:
  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "BUILD START – this should print"
        # exit 123  # uncomment to prove build executes before entrypoint import

  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/Intelmatix-AI/edix-forecasting-intelligence.git
      access_token: "{{ prefect.blocks.secret.ghb-tkn }}"
      branch: 1.5.0
      id: clone

  - prefect.deployments.steps.utility.run_shell_script:
      stream_output: true
      script: |
        set -euo pipefail
        echo "BUILD PWD: $(pwd)"
        echo "Cloned dir is: {{ clone.directory }}"
        rm -rf flows-repo
        mv "{{ clone.directory }}" "flows-repo"
        ls -la
        python - <<'PY'
        import pathlib, sys
        p = pathlib.Path("flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py")
        print("Flow file exists?", p.exists(), "->", p.resolve())
        if not p.exists():
            sys.exit("ERROR: flow file not found during build")
        PY
Common reasons build doesn't run
- Wrong file: the file your CI passes to sync.sh isn't the one you edited. Your logs show "syncing file: environments/dev/.../forecaster.yaml" — make sure that file contains the build block exactly as above.
- YAML structure/indentation: `build:` must be a top-level key (same level as `name`, `prefect-version`, `pull`, `deployments`). The list items under it must be indented two spaces.
- Step names must be fully qualified:
  - `prefect.deployments.steps.pull.git_clone`
  - `prefect.deployments.steps.pull.set_working_directory`
  - `prefect.deployments.steps.utility.run_shell_script`
- Entrypoint path vs project root: Prefect resolves `entrypoint:` relative to the project root (the directory of the prefect file), not relative to any cd in build. That's why we move the clone into a known folder like `flows-repo/` under the project root and set:
Copy code
entrypoint: flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
Also update schedules
- You're seeing a deprecation warning for `schedule:`. Rename it to `schedules:` and provide a list:
Copy code
deployments:
  - name: forecasting branch items - Synthetic
    entrypoint: flows-repo/edix_forecasting_intelligence/pipelines/main_forecasting_pipeline.py:general_forecast_flow_run
    schedules:
      - cron: "0 2 * * *"
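To find any deployments still carrying the deprecated key, a quick scan works. This is a sketch assuming PyYAML and reusing the filename from this thread:
Copy code
# Flag deployments still using the deprecated 'schedule' key.
from pathlib import Path
import yaml

config = yaml.safe_load(Path("forecaster.yaml").read_text()) or {}  # the file you deploy
for dep in config.get("deployments", []):
    if "schedule" in dep:
        name = dep.get("name", "<unnamed>")
        print(f"deployment {name!r}: rename 'schedule' to 'schedules' (list)")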
Extra visibility
- In sync.sh, enable debug logging:
Copy code
export PREFECT_LOGGING_LEVEL=DEBUG
- After adding the noisy build step with `stream_output: true`, your CI logs should show:
  - "BUILD START – this should print"
  - "BUILD PWD: …"
  - "Cloned dir is: …"
  - "Flow file exists? True …"

If you don't see these lines, Prefect is not reading your build block (wrong file or YAML shape). If you can share the top of the exact file you're deploying (from the first line through the end of the build/pull/deployments sections), I'll sanity-check indentation and the step paths.