@Marvin I'd like to use a local container as ...
# ask-marvin
k
@Marvin I'd like to use a local container as the container to run the flows in:
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: "fraio-trigger"
  version: "local"
  tags: [] 
  description: null
  schedule: {}
  flow_name: "fraio-trigger"
  entrypoint: fraio_algorithms.fraio_trigger_flow:trigger_flow
  parameters: {}
  work_pool:
    name: fraio
    work_queue_name: null
  infrastructure:
    type: docker
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
    use_local_image: true
But when I try:
Copy code
✗ prefect deploy --prefect-file ./deployments/docker-deployment.yaml --name fraio-trigger            

? Would you like to build a custom Docker image for this deployment? [y/n] (n): n
? Your Prefect workers will need access to this flow's code in order to run it. Would you like your workers to pull your flow code from a remote 
storage location when running this flow? [y/n] (y): n
Your Prefect workers will attempt to load your flow from: /workspace/python/fraio/fraio_orchestrator/fraio_algorithms.fraio_trigger_flow. To see 
more options for managing your flow's code, run:

        $ prefect init
It looks for code in /workspace/python/fraio/fraio_orchestrator. That's not the aim. I have a separate fraio_orchestrator module, which should be run.
m
thought for 470.1 seconds
n
Copy code
infrastructure:
    type: docker
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
    use_local_image: true
this is not the correct structure for a yaml deployment definition
k
🙂 - so it seems. I'm still trying to figure out how to do it properly (have been reading and trying different things - also invalid ones 🙂). Main question is still: how to make the worker start the 'container with the flow definitions' and run the entrypoint from that container (which is in Python module style, not directories)?
m
You can absolutely run your flow from a pre-built Docker image and import the flow from a Python package that's already installed in that image. In Prefect 3.x, the key is to use a module entrypoint and a Docker work pool; no storage or local path syncing is required.

What's happening now
- The interactive deploy is warning where it would look for code if it had to load from the local filesystem. That's not what you want, and it's fine to say "no" to both the Docker build and remote storage prompts.
- Your config still points Prefect at a file-based entrypoint; switch to a module entrypoint and ensure the image contains your package.

What to change

1) Use a module entrypoint and remove the old 2.x key
- In Prefect 3, you don't need use_local_image (that was a 2.x concept). Prefect 3 uses the image you specify in job_variables.image for Docker work pools.
- Update your prefect.yaml to use your package module as the entrypoint. From your description, it sounds like the full module path should include fraio_orchestrator. For example:
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: "fraio-trigger"
  version: "local"
  tags: []
  description: null
  schedule: {}
  flow_name: "fraio-trigger"
  # Use a module path here (adjust to your real package path)
  entrypoint: fraio_orchestrator.fraio_algorithms.fraio_trigger_flow:trigger_flow
  # If your Prefect version supports it, you can be explicit:
  # entrypoint_type: module
  parameters: {}
  work_pool:
    name: fraio
    work_queue_name: null
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
      # optional: set env or other docker options if needed
      # env:
      #   PYTHONPATH: /usr/local/lib/python3.11/site-packages
2) Make sure your image contains the package
- Your Docker image must have fraio_orchestrator (and its submodules) installed so Python can import it inside the container.
- Quick sanity check:
Copy code
docker run --rm -it harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local \
  python -c "import fraio_orchestrator.fraio_algorithms.fraio_trigger_flow as m; print(m)"
3) Use a Docker work pool and start a worker
- Create the pool if you haven't:
Copy code
prefect work-pool create fraio --type docker
- Start a worker pointing at that pool:
Copy code
prefect worker start -p fraio
4) Deploy using your YAML
- Then deploy (answering "n" to both prompts is fine):
Copy code
prefect deploy --prefect-file ./deployments/docker-deployment.yaml --name fraio-trigger
- Even if the CLI prints a local path after you say "no" to storage, the Docker worker will import your flow from the module inside the container at runtime.

Notes and docs
- Prefect 3 prefect.yaml and deployment settings: prefect.yaml reference
- Customizing Docker job variables: Customize job variables
- Storing flow code (not needed if your image has the package, but useful background): Store flow code

A couple of clarifying questions to make sure this works first try:
- Is fraio_orchestrator installed in the Docker image (e.g., via pip install . or pip install fraio_orchestrator-*.whl)?
- Is the correct module path for your flow fraio_orchestrator.fraio_algorithms.fraio_trigger_flow:trigger_flow? If not, what's the exact import path you use in Python?
- You are running a Docker worker (not a process worker), right?
k
right, but that entrypoint is weird: that's not the entrypoint I'd use in the container I'm using. Inside the 'worker/flow container', I don't have 'orchestrator' code. Only algorithm code.
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: "fraio-trigger"
  version: "local"
  tags: [] 
  description: null
  schedule: {}
  flow_name: "fraio-trigger"
  entrypoint: fraio_algorithms.fraio_trigger_flow:trigger_flow
  parameters: {}
  work_pool:
    name: fraio
    work_queue_name: trigger_queue
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
Copy code
prefect work-pool create fraio --type docker --overwrite
prefect worker start --pool fraio
Updated work pool 'fraio'!

To start a worker for this work pool, run:

        prefect worker start --pool fraio

└── UUID: 329c8a52-b564-49e7-8433-8dab09d92b34
└── Type: docker
└── Description: None
└── Status: Ready
└── URL: <http://127.0.0.1:4200/work-pools/work-pool/fraio>

Discovered type 'docker' for work pool 'fraio'.
Worker 'DockerWorker c05e9f81-76bc-4fb7-b78a-7efa639e13b9' started!
works fine
Copy code
✗ prefect deploy --prefect-file ./deployments/docker-deployment.yaml --name fraio-trigger

? Would you like to build a custom Docker image for this deployment? [y/n] (n): n
? Your Prefect workers will need access to this flow's code in order to run it. Would you like your workers to pull your flow code from a remote 
storage location when running this flow? [y/n] (y): n
Your Prefect workers will attempt to load your flow from: /workspace/python/fraio/fraio_orchestrator/fraio_algorithms.fraio_trigger_flow. To see 
more options for managing your flow's code, run:
Why workspace/...? My entrypoint is a Python module: fraio_algorithms.fraio_trigger_flow:trigger_flow, which is in /workspace/fraio_algorithms/fraio_trigger_flow.py:trigger_flow (I'm trying to understand what's happening - I fail to understand what Prefect's doing here).
or:
Copy code
✗ docker run --rm -it harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local \
  python -c "import fraio_orchestrator.fraio_algorithms.fraio_trigger_flow as m; print(m)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'fraio_orchestrator'
doesn't work, as expected: the algorithm module doesn't have a fraio_orchestrator directory.
@Nate does this mean I need to have both the 'orchestration' code and 'flow code' in the same module, or the 'flow code' should be a submodule of 'orchestration'? (I wanted to decouple both, which looks cleaner?)
- Is fraio_orchestrator installed in the Docker image (e.g., via pip install . or pip install fraio_orchestrator-*.whl)? No, fraio_algorithms is installed in the Docker image; that's where the flow code lives. Orchestration is only about, well... orchestration of the flows. The flows themselves live in the algorithms package.
- Is the correct module path for your flow fraio_orchestrator.fraio_algorithms.fraio_trigger_flow:trigger_flow? If not, what's the exact import path you use in Python? fraio_algorithms.fraio_trigger_flow:trigger_flow
Copy code
✗ docker run --rm -it harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local \
  python -c "import fraio_algorithms.fraio_trigger_flow as m; print(m)" 
<module 'fraio_algorithms.fraio_trigger_flow' from '/app/fraio_algorithms/src/fraio_algorithms/fraio_trigger_flow.py'>
- You are running a Docker worker (not a process worker), right? Not sure what you mean; I guess this answers your question?
Copy code
✗    prefect work-pool create fraio --type docker --overwrite
prefect worker start --pool fraio
Updated work pool 'fraio'!

To start a worker for this work pool, run:

        prefect worker start --pool fraio

└── UUID: 329c8a52-b564-49e7-8433-8dab09d92b34
└── Type: docker
└── Description: None
└── Status: Ready
└── URL: <http://127.0.0.1:4200/work-pools/work-pool/fraio>

Discovered type 'docker' for work pool 'fraio'.
Worker 'DockerWorker c05e9f81-76bc-4fb7-b78a-7efa639e13b9' started!
@Marvin check above: I don't get why the entrypoint changes from fraio_algorithms.fraio_trigger_flow:trigger_flow (as set in the Prefect template) to fraio_orchestrator/fraio_algorithms.fraio_trigger_flow (when running prefect deploy).
m
thought for 78.4 seconds
Great question: the "entrypoint changing" is just the Prefect CLI interpreting your entrypoint as a filesystem path.

What you're seeing
- You set entrypoint: fraio_algorithms.fraio_trigger_flow:trigger_flow in your template.
- During prefect deploy, since you answered "no" to remote storage, the CLI assumes the part before : is a file path relative to the project root. It joins that to your current directory and prints /workspace/.../fraio_orchestrator/fraio_algorithms.fraio_trigger_flow.
- That odd-looking mix of slashes and dots happens because the CLI expects a file path like path/to/file.py:flow_func, not a module path. The printed path is not an actual module resolution; it's just telling you where it would try to read a file from if it had to load your flow from the local filesystem.

How to run from a package inside your Docker image (no code sync)
- For a pre-built image that already contains your package, you want a module entrypoint. Today, the Prefect 3 CLI prefers file-based entrypoints. The most reliable way to declare a module entrypoint is to create the deployment programmatically with Flow.deploy and set the entrypoint type to module.

Minimal example
- This assumes your Docker image already has fraio_orchestrator installed and contains your flow.
Copy code
from fraio_orchestrator.fraio_algorithms.fraio_trigger_flow import trigger_flow
from prefect.types.entrypoint import EntrypointType

trigger_flow.deploy(
    name="fraio-trigger",
    work_pool_name="fraio",  # must be a Docker work pool
    image="harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local",
    build=False,
    push=False,
    version="local",
    entrypoint_type=EntrypointType.MODULE_PATH,  # key bit: treat entrypoint as a module import
)
Then:
- Ensure you have a Docker work pool and worker running:
Copy code
prefect work-pool create fraio --type docker
prefect worker start -p fraio
Why this fixes it
- Module entrypoints tell Prefect to import fraio_orchestrator.fraio_algorithms.fraio_trigger_flow:trigger_flow inside the container using Python import mechanics, not to read a .py file from a local path. No storage or syncing is needed as long as the package is installed in the image.

If you must use the CLI-only path
- Then you need to give a file-based entrypoint that points to an actual .py file available to the worker at runtime. With a pre-built image containing a package, that usually means pointing at the file inside site-packages (which is brittle/undesirable) or using code storage (Git/S3/etc.). For your goal (package-only, no storage), programmatic deploy with a module entrypoint is the right fit.

Docs for reference
- Prefect YAML and deployments: prefect.yaml reference
- Customizing Docker job variables: Customize job variables
- Storing flow code (background): Store flow code

Quick checks
- Your image should import the flow:
Copy code
docker run --rm harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local \
  python -c "import fraio_orchestrator.fraio_algorithms.fraio_trigger_flow as m; print(m.trigger_flow)"
If you'd prefer to stick with prefect.yaml, I can propose a small Python "deploy script" you run once (to register the deployment) while still keeping your YAML for the rest of the project. Would you like that?
k
@Marvin yes please
m
thought for 30.7 seconds
Perfect: here's a small, one-time "deploy script" you can run to register the deployment with a module entrypoint, so the Docker worker will import your flow from the package inside your image (no code sync).

Prereqs (once)
- Make sure you can import your orchestrator package locally when you run this script (e.g., install it in your venv), for example pip install -e /path/to/fraio_orchestrator, or install the wheel.
- Ensure a Docker work pool exists and a worker is (or will be) running:
Copy code
prefect work-pool create fraio --type docker
prefect worker start -p fraio
Deploy script
- Save this as deploy_fraio.py and run it with your Prefect 3 env active.
Copy code
from prefect.types.entrypoint import EntrypointType

# Import your flow from the package (same import the container will use)
from fraio_orchestrator.fraio_algorithms.fraio_trigger_flow import trigger_flow

if __name__ == "__main__":
    deployment_id = trigger_flow.deploy(
        name="fraio-trigger",
        work_pool_name="fraio",  # Docker work pool
        image="harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local",
        build=False,              # don't build an image
        push=False,               # don't push an image
        version="local",
        tags=[],
        description=None,
        # Optional Docker job variables (env, volumes, etc.)
        # job_variables={"env": {"MY_ENV": "value"}},
        entrypoint_type=EntrypointType.MODULE_PATH,  # key: treat entrypoint as a module import
        print_next_steps=True,
    )
    print(f"Created/updated deployment: {deployment_id}")
Run it
Copy code
python deploy_fraio.py
What this does
- Registers a deployment that tells Prefect to import fraio_orchestrator.fraio_algorithms.fraio_trigger_flow:trigger_flow as a Python module inside your Docker image.
- No remote storage is configured or needed; the worker will just import the module in the container's Python environment.
- The image used at runtime is exactly your harbor...:local image.

Why this avoids the path confusion
- The prefect deploy CLI prompt you saw assumes a file-based entrypoint when you say "no" to storage and tries to show a local filesystem path. The script above explicitly sets the entrypoint type to "module," so Prefect stores and uses the module import path at runtime.

Docs (for reference)
- Prefect YAML reference: prefect.yaml reference
- Docker job variables: Customize job variables
- Storing flow code (background only; not needed for this approach): Store flow code

If you prefer, we can keep your prefect.yaml for other project metadata and use this script solely to register the deployment with the correct module entrypoint. Want me to adapt the script to include parameters, schedules, or tags from your YAML?
k
@Marvin Let's keep it all in the template file, so yeah, the entry point type as well?
👍 1
n
oh, I would not do that - you shouldn't really have to set
Copy code
from prefect.types.entrypoint import EntrypointType
m
thought for 75.2 seconds
Short answer: today, prefect.yaml does not expose a field to set "module" as the entrypoint type. The CLI treats entrypoint as a file path (path/to/file.py:flow_func) and there isn't an entrypoint_type: module knob in the template. That's why prefect deploy echoes a filesystem-y path. Docs for reference: prefect.yaml reference.

Given that constraint, here are your options if you want to keep everything driven by the template:

Option A (template-only, no remote storage): add a tiny shim file to your image
- Bake a small Python file into your Docker image that imports your flow from the installed package and exposes it as flow. Then point the template's entrypoint at that file.
- Example shim inside your image at `/app/entrypoints/fraio_trigger.py`:
Copy code
from fraio_orchestrator.fraio_algorithms.fraio_trigger_flow import trigger_flow as flow
- Your prefect.yaml:
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: fraio-trigger
  version: local
  flow_name: fraio-trigger
  entrypoint: app/entrypoints/fraio_trigger.py:flow  # file path inside the container
  parameters: {}
  work_pool:
    name: fraio
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
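- For illustration, a rough sketch of the Dockerfile side of this option (a sketch only, not your actual image: the base image, install step, and /app paths are assumptions to adapt):
Copy code
FROM python:3.10-slim

# Install the package that contains the flow so the shim can import it
# (assumes the package source is available in the build context)
COPY fraio_orchestrator /opt/src/fraio_orchestrator
RUN pip install --no-cache-dir /opt/src/fraio_orchestrator

# Bake the shim into the image at a stable, known path
COPY entrypoints/fraio_trigger.py /app/entrypoints/fraio_trigger.py

# The template's entrypoint (app/entrypoints/fraio_trigger.py:flow) is resolved
# relative to the working directory at runtime, so keep them consistent
WORKDIR /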
- Notes:
  - The path in entrypoint must match the file's path relative to the container's working directory. Set the container's WORKDIR in your Dockerfile accordingly.
  - No code storage is required; the file ships with your image.

Option B (template + pull step): clone your repo at runtime into the container
- If you can allow a runtime pull, add a pull step to fetch your repo and then use a file-based entrypoint from that checkout. This still avoids building an image at deploy time, but does fetch code at runtime.
- Example (pseudo, adapt to your source):
Copy code
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/your-org/your-repo.git
      branch: main
      directory: ./app  # where to clone inside the container

deployments:
- name: fraio-trigger
  entrypoint: app/path/to/file.py:trigger_flow
  work_pool:
    name: fraio
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
Option C (recommended if you truly want a module entrypoint): one-line programmatic deploy
- The only way to register a module-style entrypoint today is programmatically (it sets the entrypoint type to module under the hood). You can still keep prefect.yaml for everything else; this script just registers the deployment once.
- Script (from my prior message) keeps your image-only, package-import approach and avoids any shim or pull step.

Why the path looks weird during prefect deploy
- The CLI is not changing your entrypoint; it's just telling you where it would try to load code from if it had to read a local file. Since it expects a file path, it pretty-prints a path even if you typed a module-like string.

If you want to stick to the template-only route, Option A (shim baked into the image) is the most reliable while avoiding any code storage. Do you want me to help sketch the Dockerfile changes for Option A (set WORKDIR and copy the shim) based on your current image?
k
@Nate oh, why not? Prefect doesn't really play well with EntrypointType?
n
no, I just mean it's not really necessary in normal use. It's not incorrect, it just seems like a low-level detail that you don't actually need in order to address your question
like setting entrypoint: path/to/file.py:fn in yaml seems like what you want
k
@Nate yes, but it is relative to the wrong path, somehow:
Copy code
entrypoint: fraio_algorithms.fraio_trigger_flow:trigger_flow
in the template file translates to:
Copy code
Your Prefect workers will attempt to load your flow from: /workspace/python/fraio/fraio_orchestrator/fraio_algorithms.fraio_trigger_flow. To see
So /workspace/python/fraio/fraio_orchestrator is added by Prefect somehow?
Copy code
docker run --rm -it harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local \
  python -c "import fraio_algorithms.fraio_trigger_flow as m; print(m)" 
<module 'fraio_algorithms.fraio_trigger_flow' from '/app/fraio_algorithms/src/fraio_algorithms/fraio_trigger_flow.py'>
works fine, so I'd like to run this. Ah, let me try that absolute path.
k
aah, in that case, deploy doesn't work because it can't find that file on my local system (which has a different structure than the container):
Copy code
✗ prefect deploy --prefect-file ./deployments/docker-deployment.yaml --name fraio-trigger

Traceback (most recent call last):
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/app/fraio_algorithms/src/fraio_algorithms/fraio_trigger_flow.py'
OK, let me check that one
hmmm, so I need to align the directories somehow. Doesn't feel very resilient.
Copy code
FROM python:3.10.12-bookworm

RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade pip && pip install uv

COPY docker/base_requirements.txt .
RUN pip install --no-cache-dir -r base_requirements.txt

ADD "<https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h>" skipcache

COPY python/fraio/fraio_algorithms app/fraio_algorithms

RUN --mount=type=cache,target=/root/.cache/pip,id=pip-build-cache \
    --mount=type=cache,target=/root/.cache/uv,id=uv-build-cache \
    uv pip install --system -e app/fraio_algorithms
I'm installing the module here - but that doesn't matter, since we look at the source files, not at 'python modules'?
n
I'd use a WORKDIR like the example I just linked.
"Doesn't feel very resilient."
If you organize your Docker image, it works! We tend towards giving users freedom to do things the way they need.
Also, unless you're doing multi-stage builds, I don't see a good reason to use both pip and uv. I'd just use uv.
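something like this, as a rough sketch (the uv image tag, paths, and requirements file are assumptions - adapt to your layout):
Copy code
FROM python:3.10.12-bookworm

RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

# bring in the uv binary from the official image instead of installing it with pip
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

COPY docker/base_requirements.txt .
RUN uv pip install --system --no-cache -r base_requirements.txt

COPY python/fraio/fraio_algorithms /app/fraio_algorithms
RUN uv pip install --system -e /app/fraio_algorithms

# so a relative entrypoint like fraio_algorithms/fraio_trigger_flow.py:trigger_flow
# resolves against the package source inside the container
WORKDIR /app/fraio_algorithms/src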
k
yeah, last comment, totally valid, we're currently moving from pip to uv, but it's not fully done 🙂
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: "fraio-trigger"
  version: "local"
  tags: [] 
  description: null
  schedule: {}
  flow_name: "fraio-trigger"
  entrypoint: fraio_algorithms/fraio_trigger_flow.py:trigger_flow
  parameters: {}
  work_pool:
    name: fraio
    work_queue_name: trigger_queue
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
relative path to the algorithms
Copy code
FROM python:3.10.12-bookworm

RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade pip && pip install uv

COPY docker/base_requirements.txt .
RUN pip install --no-cache-dir -r base_requirements.txt

ADD "<https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h>" skipcache

COPY python/fraio/fraio_algorithms app/fraio_algorithms

RUN --mount=type=cache,target=/root/.cache/pip,id=pip-build-cache \
    --mount=type=cache,target=/root/.cache/uv,id=uv-build-cache \
    uv pip install --system -e app/fraio_algorithms

WORKDIR /app/fraio_algorithms/src
WORKDIR set
Copy code
Traceback (most recent call last):
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/python/fraio/fraio_orchestrator/fraio_algorithms/fraio_trigger_flow.py'
so - I guess we might already have established that: the flow code needs to be in the same package as the 'orchestration' package. Or rather, I should put the template in the same directory/module where the flows are defined? That means moving the flows to 'orchestration', or moving the template file to the algorithms directory.
ok, so @Nate
Copy code
name: prefect.fraio-trigger
prefect-version: 3.4.18

deployments:
- name: "fraio-trigger"
  version: "local"
  tags: [] 
  description: null
  schedule: {}
  flow_name: "fraio-trigger"
  entrypoint: src/fraio_orchestrator/fraio_trigger_flow.py:trigger_flow
  parameters: {}
  work_pool:
    name: fraio
    work_queue_name: trigger_queue
    job_variables:
      image: harbor.mgmt.vintecc.cloud/vintecc-ags-hbm/fraio_algo:local
Copy code
FROM python:3.10.12-bookworm

RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --upgrade pip && pip install uv

COPY docker/base_requirements.txt .
RUN pip install --no-cache-dir -r base_requirements.txt

ADD "<https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h>" skipcache

COPY python/fraio/fraio_orchestrator app/fraio_orchestrator
COPY python/fraio/fraio_algorithms app/fraio_algorithms


RUN --mount=type=cache,target=/root/.cache/pip,id=pip-build-cache \
    --mount=type=cache,target=/root/.cache/uv,id=uv-build-cache \
    uv pip install --system -e app/fraio_algorithms \
    -e app/fraio_orchestrator

WORKDIR /app/fraio_orchestrator
but still, I guess it's just because the directory structure is different (relatively, it's the same):
Copy code
Worker 'DockerWorker 3cbf0e54-f0c4-4225-a39a-1db1ba19a18c' submitting flow run '07bca085-0634-414f-b9e6-e620a3881caa'  (08:09:39 PM, prefect.flow_runs.worker)
Completed submission of flow run '07bca085-0634-414f-b9e6-e620a3881caa'  (08:09:39 PM, prefect.flow_runs.worker)
Opening process...  (08:09:48 PM, prefect.flow_runs.runner)
 > Running set_working_directory step...  (08:09:50 PM, prefect.flow_runs)
[Errno 2] No such file or directory: '/workspace/python/fraio/fraio_orchestrator'  (08:09:50 PM, prefect.flow_runs)
Unexpected exception encountered when trying to load flow
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/prefect/deployments/steps/core.py", line 166, in run_steps
    step_output = await run_step(step, upstream_outputs)
  File "/usr/local/lib/python3.10/site-packages/prefect/deployments/steps/core.py", line 137, in run_step
    result = await from_async.call_soon_in_new_thread(
  File "/usr/local/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 374, in aresult
    return await asyncio.wrap_future(self.future)
  File "/usr/local/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 399, in _run_sync
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/prefect/deployments/steps/pull.py", line 35, in set_working_directory
    os.chdir(directory)
FileNotFoundError: [Errno 2] No such file or directory: '/workspace/python/fraio/fraio_orchestrator'

The above exception was the direct cause of the following exception:
I still fail to see why, suddenly, there's this /workspace/python/fraio in there. OK, let's recap:
1. I'm running prefect deploy from a devcontainer, in the directory /workspace/python.
2. I have a Docker container in which the code to be executed lives, in /app/...
3. Is it true that, because the dirs are not exactly the same (from the root level), I can't deploy locally from a template?
4. If I can deploy locally from a template, given the difference in root directory structure, how?