<@ULVA73B9P> I am getting the error Downloading fl...
# ask-marvin
p
@Marvin I am getting the error Downloading flow code from storage at None.
m
thought for 197.6 seconds
Thanks for the details — that message is actually an INFO log, not an exception. It means "I'm about to fetch your flow code, but the deployment has no storage/source configured," i.e. Prefect is trying to load your entrypoint from the worker's local working directory. If your code isn't present there (or the entrypoint is wrong), the run will fail shortly after.

Common causes:
- The deployment has no source/storage configured (so it logs `None`)
- A wrong or missing `entrypoint`
- The worker runs on a different machine than where the code lives (local paths won't work)
- In Prefect 2.x specifically: a misconfigured storage block/path

How to fix (Prefect 3.x): pick one of these patterns so your deployment knows where to get your code.

1) CLI (recommended)
```shell
# From your project root (where your flow file is), run:
prefect deploy
```
This records the project source and entrypoint in the deployment so workers can pull code correctly.

2) Python API
```python
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source(
    "https://github.com/ORG/REPO",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="process-pool",
)
```
You can also use a local directory:
```python
my_flow.from_source(
    ".",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(name="local", work_pool_name="process-pool")
```
Notes:
- If your worker is on a different machine/container, don't rely on local paths. Use a Git repo or remote storage that the worker can reach.
- If you intentionally want to run from the local filesystem, make sure the worker's working directory is set to your project root and the `entrypoint` path is correct relative to that directory.

If you're on Prefect 2.x: this log line is common when the deployment has no storage block or an incorrect `path`. Configure a storage block (Local/RemoteFileSystem), set `path` and `entrypoint` correctly, or consider upgrading to 3.x for the newer `from_source(...).deploy(...)` workflow.

References:
- Example issues showing this exact log and root cause:
  - Deployment via Python API logs "Downloading flow code from storage at None" when entrypoint not found
  - LocalFileSystem/path bug in 2.x showing the same log
  - Another 2.x thread showing the log and downstream failure
- `prefect deploy` help (you can run this locally to see options):
```shell
prefect deploy --help
```
- Prefect 3.x notes about new deployment patterns:
  - Prefect 3.2 release notes (introduces `from_source`)
  - Source code reference (shows `from_source` usage in 3.x): prefect/flows.py

To help you faster, could you share:
- Your Prefect version (output of `prefect version`)
- How you created the deployment (CLI vs Python), and the `entrypoint` you used
- Where your worker is running relative to your code (same machine? container? different host?)
- Any follow-up error after the "storage at None" log (e.g., FileNotFoundError)
p
@Marvin for remote runners the YAML works perfectly, but the same deployment using the .deploy method fails
m
thought for 105.9 seconds
Got it — if the same deployment works via project/YAML but fails when you call `.deploy`, it almost always means the Python-based deployment didn't include a source for your code. The worker then logs "Downloading flow code from storage at None" and tries to load the entrypoint from its local working directory (which usually doesn't have your code), so the run fails.

Why YAML works:
- `prefect deploy` (project/YAML) records a source for the deployment (e.g., your Git repo or local project directory), so remote workers know where to fetch the code.

Why `.deploy` fails:
- Calling `my_flow.deploy(...)` on a plain `@flow` doesn't bind any source/storage by default. Remote workers don't have your code, so the source is `None`.

How to fix with `.deploy`: use one of these patterns.

1) Bind a source first, then deploy
```python
from prefect import flow

@flow
def my_flow():
    ...

# GitHub repo
my_flow.from_source(
    "https://github.com/ORG/REPO",
    entrypoint="path/to/file.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="YOUR_POOL",
)
```
Or a local directory (only works if the worker can see this path — usually not recommended for remote workers):
```python
my_flow.from_source(
    ".",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(name="prod", work_pool_name="YOUR_POOL")
```
2) Package code in a Docker image (no source download needed)

If you're using a Docker/Kubernetes work pool, build/push an image in `.deploy`:
```python
from prefect import flow
from prefect.docker import DockerImage

@flow
def my_flow():
    ...

my_flow.deploy(
    name="prod",
    work_pool_name="kubernetes-pool",
    image=DockerImage(
        name="ghcr.io/your-org/your-app:{{ sha }}",
        dockerfile="Dockerfile",
        platform="linux/amd64",
    ),
    build=True,
    push=True,
)
```
Then ensure your work pool is configured to use that image (or lets the job specify it).

Quick checklist:
- Does your Python-based deployment show a non-empty Source in the UI? If it's None, workers will fail remotely.
- Is the `entrypoint` correct and relative to the source root (e.g., `repo_subdir/flow_file.py:flow_func`)?
- If not using images, can the worker actually reach the source (Git repo/remote storage), and does it have auth set up?

References:
- Flow methods (including `from_source` and `deploy`): prefect/flows.py
- Prefect 3.x notes introducing `from_source`: Prefect 3.2 release notes

If you share:
- Your exact `.deploy` call
- The work pool type (process, docker, kubernetes, etc.)
- The first error after "storage at None"
I can pinpoint the minimal change to make your Python deployment behave the same as your YAML one.
p
@Marvin
```python
dresdensued_ui_calculation.deploy(
    name="dresdensued-ui-calculation",
    work_pool_name="senec-stage",  # TODO: use the correct work pool
    work_queue_name="default",
    image=DockerImage(
        name="n0vum/dresdensued-monitoring",
        tag=tag,
        dockerfile="./ddsued-storage/docker/Dockerfile.soh",
        buildargs={
            "GITLAB_TOKEN_NAME": "prefect-token",
            "GITLAB_ACCESS_TOKEN": Secret.load("gitlab-token").get(),  # type: ignore
        },  # TODO: use a proper token
    ),
    job_variables={"image_pull_policy": "Always", "network_mode": "host", "auto_remove": True},
    # parameters={"resource_config": resource_config},
    tags=[
        "UI",
        "Dresden Süd",
        "Storage Guard",
        f"branch:{branch}",
        f"commit:{commit}",
        f"version:{ddsued_version}",
        f"battery_guradian:{batter_guardian_version}",
    ],
)
```
this is my deploy method, and
```dockerfile
FROM n0vum/prefect-python:3.13-slim
RUN pip3 install --no-cache-dir poetry

ARG GITLAB_TOKEN_NAME
ARG GITLAB_ACCESS_TOKEN

# Copy the entire project
COPY ./ddsued-storage /app/ddsued-storage
COPY ./soc /app/soc
COPY ./soh /app/soh

WORKDIR /app/ddsued-storage

RUN git config --global url."https://${GITLAB_TOKEN_NAME}:${GITLAB_ACCESS_TOKEN}@gitlab.novum-engineering.com".insteadOf "https://gitlab.novum-engineering.com"
RUN poetry config virtualenvs.create false && poetry install -v --without dev --no-interaction
```
is the dockerfile, and the error is
```
FileNotFoundError: [Errno 2] No such file or directory: '/app/ddsued-storage/None'
```
m
thought for 101.4 seconds
Thanks — the error points to a missing entrypoint/source on the Python-based deployment. Because `.deploy(...)` didn't bind a source, Prefect records `entrypoint=None`, and at runtime it tries to resolve the flow from the container's working directory, ending up with "/app/ddsued-storage/None".

What to change: bind a source and set an explicit entrypoint before calling deploy. This is required even when you package code into a Docker image.

Example fix, if you want to use the local project as the source (and you're calling deploy from the repo root):
```python
dresdensued_ui_calculation.from_source(
    ".",  # project root relative to where you're running this
    entrypoint="ddsued-storage/<path-to-your-flow-file>.py:dresdensued_ui_calculation",
).deploy(
    name="dresdensued-ui-calculation",
    work_pool_name="senec-stage",
    image=DockerImage(
        name="n0vum/dresdensued-monitoring",
        tag=tag,
        dockerfile="./ddsued-storage/docker/Dockerfile.soh",
        buildargs={
            "GITLAB_TOKEN_NAME": "prefect-token",
            "GITLAB_ACCESS_TOKEN": Secret.load("gitlab-token").get(),
        },
    ),
    job_variables={"image_pull_policy": "Always", "network_mode": "host", "auto_remove": True},
    tags=[...],
)
```
Or bind your Git repo:
```python
dresdensued_ui_calculation.from_source(
    "https://gitlab.novum-engineering.com/<group>/<repo>.git",
    entrypoint="ddsued-storage/<path-to-your-flow-file>.py:dresdensued_ui_calculation",
).deploy(...)
```
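As a side note, the odd-looking `None` at the end of the failing path is just the missing entrypoint stringified onto the container's working directory. A rough illustration (not Prefect's actual code):

```python
import os.path

# When the deployment records no entrypoint, joining the working
# directory with str(None) yields exactly the path from the traceback.
workdir = "/app/ddsued-storage"
entrypoint = None  # what the deployment recorded
resolved = os.path.join(workdir, str(entrypoint))
print(resolved)  # /app/ddsued-storage/None
```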
Important details:
- The entrypoint must be a path to the file that defines your flow plus `:function_name`, relative to the code root Prefect will use at runtime.
- In your Dockerfile, WORKDIR is `/app/ddsued-storage`. If your flow file actually lives under `/app/soh/...`, the entrypoint must reflect that relative to WORKDIR (e.g., `../soh/path/to_file.py:func`), or change WORKDIR to `/app` and use `soh/path/to_file.py:func`. Right now, if the file isn't under `/app/ddsued-storage`, a relative path like `soh/...` will be wrong.
- After deploying, check the deployment in the UI: you should see a non-empty "Source" and the correct "Entrypoint". If either is None/missing, remote runs will fail.

Why the YAML/project flow works:
- `prefect deploy` writes the project's source and entrypoint into the deployment config so workers can load the code. A bare `.deploy(...)` call does not infer this in your setup, so the entrypoint stays None.

References:
- Flow API (from_source, deploy): prefect/flows.py
- Related discussion where Python deployments lacked an entrypoint and logged "storage at None": #9159

If you can share the exact repo layout and the path to the file where `dresdensued_ui_calculation` is defined, I can suggest the precise `entrypoint` string to use.
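Working out the right entrypoint for a given WORKDIR is plain path arithmetic. A small sketch (the file paths here are illustrative, not taken from your repo):

```python
import os.path

def entrypoint_for(workdir: str, flow_file: str, flow_func: str) -> str:
    # Build the "path/to/file.py:function" string Prefect expects,
    # with the file path expressed relative to the runtime working directory.
    return f"{os.path.relpath(flow_file, start=workdir)}:{flow_func}"

# Flow file under the WORKDIR itself:
print(entrypoint_for("/app/ddsued-storage",
                     "/app/ddsued-storage/workflows/ui_flow.py",
                     "dresdensued_ui_calculation"))
# workflows/ui_flow.py:dresdensued_ui_calculation

# Flow file in a sibling directory: the entrypoint must climb out of WORKDIR.
print(entrypoint_for("/app/ddsued-storage",
                     "/app/soh/flows/soh_flow.py",
                     "my_flow"))
# ../soh/flows/soh_flow.py:my_flow
```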
p
@Marvin when I run it with from_source as you said, I get this error: [Errno 2] No such file or directory: '/home/parvez/Downloads/21-0278_upside'
m
thought for 112.7 seconds
That error means the path you gave to `from_source(...)` doesn't exist in the environment where you're running `.deploy(...)`. In other words, Prefect is trying to read `/home/parvez/Downloads/21-0278_upside` at deploy time, but that directory isn't there (common if you're running deploy inside a container/CI or from a different working directory).

How to fix it: pick one of these patterns and make sure the path is resolvable where `.deploy()` runs.

A) Use a Git source (recommended for remote runners); it works anywhere without depending on a local path.
```python
dresdensued_ui_calculation.from_source(
    "https://gitlab.novum-engineering.com/<group>/<repo>.git",
    entrypoint="ddsued-storage/<path-to-flow-file>.py:dresdensued_ui_calculation",
).deploy(
    name="dresdensued-ui-calculation",
    work_pool_name="senec-stage",
    work_queue_name="default",
    image=DockerImage(
        name="n0vum/dresdensued-monitoring",
        tag=tag,
        dockerfile="./ddsued-storage/docker/Dockerfile.soh",
        buildargs={
            "GITLAB_TOKEN_NAME": "prefect-token",
            "GITLAB_ACCESS_TOKEN": Secret.load("gitlab-token").get(),
        },
    ),
    job_variables={"image_pull_policy": "Always", "network_mode": "host", "auto_remove": True},
    tags=[...],
)
```
Ensure the entrypoint is correct relative to the repo root.

B) Use the local project path correctly. Run the deploy script from the project root so `.` resolves:
```shell
cd /home/parvez/Downloads/21-0278_upside
python your_deploy_script.py
```
Then:
```python
dresdensued_ui_calculation.from_source(
    ".",  # must be the real project root on THIS machine
    entrypoint="ddsued-storage/<path-to-flow-file>.py:dresdensued_ui_calculation",
).deploy(...)
```
If you're running this inside Docker/CI, make sure the project directory is actually present/mounted at that same path; otherwise, use the Git source in A.

Important notes about your Dockerfile:
- Your WORKDIR is `/app/ddsued-storage`. The entrypoint path must be correct relative to that at runtime. If your flow lives under `/app/soh/...`, either change WORKDIR to `/app` and use `soh/...:flow_func`, or keep WORKDIR and use a relative path like `../soh/...:flow_func`.
- Building with `DockerImage(dockerfile="./ddsued-storage/docker/Dockerfile.soh")` uses your current working directory as the build context. Ensure you run `.deploy()` from a directory that contains the `ddsued-storage`, `soh`, and `soc` folders so the Docker COPY instructions succeed.

Quick verification checklist:
- After deploying, open the deployment in the Prefect UI:
  - Source should NOT be None (it should show your Git URL or a local path).
  - Entrypoint should point to your flow file and function.
- If you still see "Downloading flow code from storage at None" or FileNotFoundError, the Source/Entrypoint are still misconfigured.

If you can share:
- The exact `from_source(...)` call you used
- Where you ran it from (host path vs inside a container/CI)
- The relative path to the file that defines `dresdensued_ui_calculation`
I'll give you the precise `entrypoint` string and the best source setting for your setup.

Reference for the flow API (from_source/deploy): prefect/flows.py
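Since both failures in this thread came from a path that didn't exist where the code ran, a small pre-flight check before calling `.deploy(...)` can catch them early. A minimal sketch (the helper name and logic are mine, not Prefect API):

```python
import os.path

def check_source(source_dir: str, entrypoint: str) -> list:
    """Return a list of problems with a local source dir + entrypoint pair.
    Entrypoint format: 'relative/path/to/file.py:function_name'."""
    problems = []
    if not os.path.isdir(source_dir):
        return [f"source directory does not exist: {source_dir}"]
    path, sep, func = entrypoint.partition(":")
    if not sep or not func:
        problems.append(f"entrypoint is missing ':function_name': {entrypoint}")
    if not os.path.isfile(os.path.join(source_dir, path)):
        problems.append(f"flow file not found under source: {path}")
    return problems

# e.g., run from your project root just before deploying:
# assert not check_source(".", "ddsued-storage/<path-to-flow-file>.py:dresdensued_ui_calculation")
```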
p
@Marvin I want to run the deployment on a work pool on a different server, and the deployment definition is present in the Docker image that is pushed. So I want the work pool to first pull the image and then go to a specific workflow folder to find its content
m
thought for 82.8 seconds
Got it — you want an image-only deployment: the remote work pool should pull your Docker image and load the flow code that's already inside the image at a specific folder. For that to work reliably with `.deploy(...)`, you must give Prefect a valid entrypoint but tell it there is no external storage to download from.

Use this pattern:
- Package your code into the image (you're already doing this in your Dockerfile).
- Set the container WORKDIR to a stable location.
- Bind the deployment to a "null" storage so Prefect won't try to download code, and provide an entrypoint path that points to the flow file inside the image.

Example:
```python
from prefect import flow
from prefect.docker import DockerImage
from prefect.filesystems import NullFileSystem

@flow
def dresdensued_ui_calculation():
    ...

# IMPORTANT: the entrypoint must point to the file+function INSIDE the container,
# relative to the container WORKDIR. Given your Dockerfile sets:
#   WORKDIR /app/ddsued-storage
# and you COPY your code to:
#   /app/ddsued-storage, /app/soh, /app/soc
# choose the correct relative path. For example, if your flow lives at:
#   /app/ddsued-storage/workflows/ui_flow.py  (function: dresdensued_ui_calculation)
# then entrypoint="workflows/ui_flow.py:dresdensued_ui_calculation".
# If it lives under /app/soh/..., use "../soh/....py:dresdensued_ui_calculation".
dresdensued_ui_calculation.from_source(
    NullFileSystem(),  # do not fetch any source at runtime
    entrypoint="workflows/ui_flow.py:dresdensued_ui_calculation",
).deploy(
    name="dresdensued-ui-calculation",
    work_pool_name="senec-stage",
    work_queue_name="default",
    image=DockerImage(
        name="n0vum/dresdensued-monitoring",
        tag=tag,
        dockerfile="./ddsued-storage/docker/Dockerfile.soh",
        buildargs={
            "GITLAB_TOKEN_NAME": "prefect-token",
            "GITLAB_ACCESS_TOKEN": Secret.load("gitlab-token").get(),
        },
    ),
    build=True,   # ensure we build with the code baked in
    push=True,    # and push so the remote worker can pull it
    job_variables={
        "image_pull_policy": "Always",
        # Be careful: these are Docker-runner options; they are ignored on Kubernetes
        "network_mode": "host",
        "auto_remove": True,
    },
    tags=[...],
)
```
Key points:
- The entrypoint is required even when using images. Without it, you'll see paths like "/app/ddsued-storage/None".
- NullFileSystem tells Prefect "don't download from storage"; it will just import your flow from the container filesystem using the entrypoint.
- Make sure the entrypoint is correct relative to the container WORKDIR. If your flow file is not under `/app/ddsued-storage`, either change WORKDIR to `/app` and use `soh/path/to_file.py:func`, or keep the current WORKDIR and use a relative path like `../soh/path/to_file.py:func`.
- Build context: when you call `.deploy(...)`, the current working directory must include `ddsued-storage`, `soh`, and `soc` so your Dockerfile's COPY lines succeed.
- Work pool type: if it's Kubernetes, remove Docker-specific `job_variables` like `network_mode` and `auto_remove`; instead configure resources in the pool/job template.

Validate: after deploying, open the deployment in the UI. "Source" should show "NullFileSystem" (or similar) and "Entrypoint" should be your file:func path. There should be no "storage at None" and no FileNotFoundError if the entrypoint path is correct in the image.

If you can share where the `dresdensued_ui_calculation` function actually lives inside the repo (full path), I'll give you the exact `entrypoint` string relative to your chosen WORKDIR.

References:
- Flow API (from_source, deploy): prefect/flows.py
p
@Marvin how to set a working directory in pull step
m
thought for 229.3 seconds
Great question. The "pull step" downloads source code into the container's current working directory. So to control where Prefect writes/looks for your flow code and evaluates the entrypoint, set the working directory for the job that your work pool launches. Here's how to do it for each work pool type.

Option A: image-only (no pull)
If you're baking code into the image and using NullFileSystem so there's no pull at runtime, there is no download step. Just set the container working directory:
- In your Dockerfile, set WORKDIR to where your code lives:
```dockerfile
WORKDIR /app/ddsued-storage
```
- Or override it at runtime for Docker work pools with job variables:
```python
job_variables={"container_create_kwargs": {"working_dir": "/app/ddsued-storage"}}
```
Make sure your deployment's entrypoint points to a file relative to that working directory.

Option B: you want Prefect to pull source into the container
Prefect will download into the container's CWD. Set the CWD per work pool type:

1) Docker work pool
- Easiest: set WORKDIR in the Dockerfile as shown above.
- Or set it per deployment with job variables:
```python
my_flow.from_source(
    "https://gitlab.novum-engineering.com/<group>/<repo>.git",
    entrypoint="ddsued-storage/path/to_flow.py:flow_func",
).deploy(
    ...,
    job_variables={
        "container_create_kwargs": {"working_dir": "/app/ddsued-storage"},
        "image_pull_policy": "Always",
        # other docker vars like network_mode, auto_remove if needed
    },
)
```
- You can also bake this into the work pool's base job template by adding container_create_kwargs. To inspect the default:
```shell
prefect work-pool get-default-base-job-template --type docker --file docker.json
```
Edit docker.json to set container_create_kwargs, then:
```shell
prefect work-pool update <your-pool> --base-job-template docker.json
```
2) Kubernetes work pool
- Add workingDir to the container in the job manifest. Fetch the default template:
```shell
prefect work-pool get-default-base-job-template --type kubernetes --file kube.json
```
- Edit kube.json and add workingDir under the container:
```
job_manifest.spec.template.spec.containers[0].workingDir = "/app/ddsued-storage"
```
- Apply it:
```shell
prefect work-pool update <your-pool> --base-job-template kube.json
```
You can also put the path behind a variable if you prefer, but editing the template is the simplest.

3) Process work pool
- This one supports a working_dir job variable directly:
```python
my_flow.from_source("https://...git", entrypoint="...").deploy(
    ...,
    job_variables={"working_dir": "/app/ddsued-storage"},
)
```

Important tips:
- The entrypoint must be correct relative to the working directory you set. If WORKDIR is `/app/ddsued-storage` and your flow file is under `/app/soh`, either change WORKDIR to `/app` and use `soh/...` in your entrypoint, or keep WORKDIR and use `../soh/...` in your entrypoint.
- If you use the image-only pattern with NullFileSystem, there is no pull; Prefect will import directly from the container filesystem. You still must set a correct entrypoint and a valid working directory (via the Dockerfile or Docker job variables).

Useful references:
- Customize job variables: https://docs-3.prefect.io/v3/how-to-guides/deployments/customize-job-variables
- Custom workers / job templates: https://docs-3.prefect.io/v3/advanced/developing-a-custom-worker
- Inspect the default job templates:
  - Docker: `prefect work-pool get-default-base-job-template --type docker`
  - Kubernetes: `prefect work-pool get-default-base-job-template --type kubernetes`
  - Process: `prefect work-pool get-default-base-job-template --type process`

If you tell me your exact work pool type and where your flow file lives in the image, I'll suggest the exact job_variables and entrypoint to use.
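If you'd rather script the Kubernetes template edit than do it by hand, it's simple JSON surgery. A sketch, assuming a template shaped like the exported default (verify against the JSON your pool actually exports; the stand-in below only shows the fields being touched):

```python
import json

# Stripped-down stand-in for the exported kube.json; a real base job
# template has many more fields.
template = {
    "job_configuration": {
        "job_manifest": {
            "spec": {"template": {"spec": {"containers": [{"image": "{{ image }}"}]}}}
        }
    }
}

# Add workingDir to the first (flow-run) container.
container = (template["job_configuration"]["job_manifest"]
             ["spec"]["template"]["spec"]["containers"][0])
container["workingDir"] = "/app/ddsued-storage"

print(json.dumps(template, indent=2))
```

Write the result back to kube.json and apply it with `prefect work-pool update` as above.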
p
@Marvin NullFileSystem doesn't work
m
thought for 116.1 seconds
Thanks for the context. If NullFileSystem "doesn't work," you don't actually need it for image-only deployments. The cleanest way to run purely from the Docker image is:
- Do not call `from_source(...)`
- Make sure your flow is defined in a real .py module that is copied into the image
- Import that flow function in your deploy script (don't define it inline)
- Call `.deploy(...)` with your image settings
- Set the container working directory so the entrypoint path resolves at runtime

Why this works: Prefect records the entrypoint for your flow when it can locate the file where the flow is defined. With an entrypoint set, Prefect does not need external storage; it will import the flow from the container filesystem. The "Downloading flow code from storage at None" log can still appear, but it won't try to download anything if the entrypoint is present and correct.

Concrete steps

1) Ensure the flow is defined in a file that's baked into the image
- Example file: `/app/ddsued-storage/workflows/ui_flow.py`
- Function name: `dresdensued_ui_calculation`

Dockerfile:
```dockerfile
FROM n0vum/prefect-python:3.13-slim
RUN pip3 install --no-cache-dir poetry

ARG GITLAB_TOKEN_NAME
ARG GITLAB_ACCESS_TOKEN

COPY ./ddsued-storage /app/ddsued-storage
COPY ./soc /app/soc
COPY ./soh /app/soh

# Set WORKDIR so entrypoint paths resolve relative to here
WORKDIR /app/ddsued-storage

RUN git config --global url."https://${GITLAB_TOKEN_NAME}:${GITLAB_ACCESS_TOKEN}@gitlab.novum-engineering.com".insteadOf "https://gitlab.novum-engineering.com"
RUN poetry config virtualenvs.create false && poetry install -v --without dev --no-interaction
```
2) Import the flow from its module, then deploy without from_source
```python
from prefect.docker import DockerImage
from prefect.blocks.system import Secret

# Import from the file that is baked into the image
from ddsued_storage.workflows.ui_flow import dresdensued_ui_calculation

dresdensued_ui_calculation.deploy(
    name="dresdensued-ui-calculation",
    work_pool_name="senec-stage",
    work_queue_name="default",
    image=DockerImage(
        name="n0vum/dresdensued-monitoring",
        tag=tag,
        dockerfile="./ddsued-storage/docker/Dockerfile.soh",
        buildargs={
            "GITLAB_TOKEN_NAME": "prefect-token",
            "GITLAB_ACCESS_TOKEN": Secret.load("gitlab-token").get(),
        },
    ),
    build=True,
    push=True,
    # Ensure the container's working directory matches your Dockerfile, or override it here
    job_variables={
        "image_pull_policy": "Always",
        "container_create_kwargs": {"working_dir": "/app/ddsued-storage"},
        # docker-only extras if you need them:
        # "network_mode": "host",
        # "auto_remove": True,
    },
    tags=[...],
)
```
3) Verify in the UI
- The deployment should show an Entrypoint like `ddsued-storage/workflows/ui_flow.py:dresdensued_ui_calculation`
- Source can be None — that's OK for image-only runs
- On the remote server, the worker pulls the image and imports the flow from that path in the image

If this still fails, the usual culprit is a missing or wrong entrypoint. The earlier "/app/ddsued-storage/None" error indicates the entrypoint was None; importing the flow function from a real module (as shown) lets Prefect infer it. Also ensure your working directory and entrypoint are consistent:
- With WORKDIR /app/ddsued-storage, the entrypoint should be relative to that directory (e.g., `workflows/ui_flow.py:func`)
- If your flow is under `/app/soh/...`, either change WORKDIR to `/app` and use `soh/...:func`, or keep WORKDIR and use `../soh/...:func`

If you still want to try NullFileSystem, share the exact error and your Prefect version. NullFileSystem should act as "no-op storage" when used like:
```python
from prefect.filesystems import NullFileSystem

my_flow.from_source(NullFileSystem(), entrypoint="workflows/ui_flow.py:func").deploy(...)
```
But if you hit BlockMissingCapabilities or another error, I can reproduce with your version and suggest a fix.

References:
- Flow API (deploy/from_source): prefect/flows.py
- Inspect default job templates (to set working_dir etc.):
  - Docker: `prefect work-pool get-default-base-job-template --type docker`
  - Kubernetes: `prefect work-pool get-default-base-job-template --type kubernetes`
  - Process: `prefect work-pool get-default-base-job-template --type process`

If you send:
- The module path to the flow function file inside the image
- Your work pool type
- The exact NullFileSystem error and Prefect version
I'll give you the exact entrypoint string and job_variables to use.