# ask-marvin
e
@Marvin I have local docker containers that run my postgres DB, prefect, and prefect worker. I am developing flows locally and attempting to deploy this code using my prefect.yaml file. Here is a snippet from my docker-compose.yaml file that describes how the worker is created:

```yaml
worker:
  image: prefecthq/prefect:3.4.19.dev2-python3.9
  depends_on:
    prefect:
      condition: service_healthy
  volumes:
    - ./src:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
  restart: always
```

And here is a snippet from my prefect.yaml file that describes the deployment:

```yaml
deployments:
  pull:
    - prefect.deployments.steps.set_working_directory:
        directory: /opt/prefect/flows
deployments:
  - name: socials
    description: Pull all socials data from TikTok, FB, and Instagram
    tags: ["testing", "scrape", "daily"]
    # path to your file and flow function
    entrypoint: src/tfs_flows/socials/socials.py:socials
    flow_name: Pull Socials Data
    parameters:
      for_date: null
    schedule:
      cron: "0 5 * * *"
      timezone: "Europe/London"
    work_pool:
      name: "dev-pool"
```

The deployment succeeds, but when I attempt to run the flow, I get an error:

```
[Errno 2] No such file or directory: 'C:\\Projects\\prefect-testing\\tfs-etl-dev-workflow'
```

This is a local filepath and not the docker container filepath. Can you help me successfully deploy and run my flow?
m
thought for 192.1 seconds
Thanks for the details: this is a classic "host path vs. container path" mismatch.

What's happening
- Your deployment is trying to set its working directory to a Windows host path (e.g., `C:\Projects\...`) inside the Linux worker container, which doesn't exist; hence the `[Errno 2] No such file or directory`.
- In Prefect 3.x, the worker executes the deployment's pull steps (like `set_working_directory`) inside the container. Those paths must be valid inside the container.

How to fix it
Align these three things:
1) What you mount into the worker container
2) The deployment's `pull` step `set_working_directory`
3) The `entrypoint` path (relative to the working directory inside the container)

Given your compose file mounts `./src -> /opt/prefect/flows`, the files inside the container live under `/opt/prefect/flows/...`. So:
- The `pull` step should set `directory: /opt/prefect/flows`
- The `entrypoint` should be relative to `/opt/prefect/flows` and should NOT include `src/` (because you already mounted `src` to that path)

Example configs

docker-compose (worker)
```yaml
worker:
  image: prefecthq/prefect:3.4.19-python3.11
  depends_on:
    prefect:
      condition: service_healthy
  volumes:
    - ./src:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
  restart: always
```
prefect.yaml

Make sure `pull` is a top-level key (only once) and `deployments` is defined once. For your layout it should look like:
```yaml
name: tfs-etl-dev-workflow
prefect-version: 3.4.19

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/flows

deployments:
  - name: socials
    description: Pull all socials data from TikTok, FB, and Instagram
    tags: ["testing", "scrape", "daily"]

    # NOTE: no "src/" here because /opt/prefect/flows already IS your src
    entrypoint: tfs_flows/socials/socials.py:socials
    flow_name: Pull Socials Data

    parameters:
      for_date: null

    schedule:
      cron: "0 5 * * *"
      timezone: "Europe/London"

    work_pool:
      name: "dev-pool"
```
Redeploy and run
1) Redeploy so the corrected `pull` step is baked into the deployment. From your project root, run:

```bash
prefect deploy
```

Or, if you keep your prefect file elsewhere:

```bash
prefect deploy --prefect-file prefect.yaml
```

(Docs: prefect.yaml guide)

2) Start a new flow run. In the worker logs you should see the pull step running with the container path, e.g. "set_working_directory: directory=/opt/prefect/flows".

Alternative: keep entrypoint with src
If you prefer to keep `entrypoint: src/tfs_flows/...`, then mount the whole project root instead:
- Change your volume to `- ./:/opt/prefect/flows`
- Keep `pull` with `directory: /opt/prefect/flows`
- Keep `entrypoint: src/tfs_flows/socials/socials.py:socials`

Common pitfalls to double-check
- Ensure there is only one `pull:` block in your `prefect.yaml` and that it's top-level. Duplicate or nested keys can cause the wrong path (your Windows host path) to be used at runtime.
- Make sure the work pool type matches your intent (typically "process" when running code directly on the worker container).

If this doesn't resolve it, please share:
- The full `prefect.yaml`
- The exact log line from the worker that shows the pull step path it's trying to use

Helpful docs
- Using prefect.yaml for deployments
- Pull steps (set_working_directory)
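If it helps to see the mount arithmetic, here is a tiny illustrative Python helper (the function is made up for this explanation; it is not a Docker or Prefect API) showing where a project-relative file lands inside the container for a given compose volume:

```python
from pathlib import PurePosixPath

def map_to_container(host_rel: str, host_dir: str, container_dir: str) -> str:
    """Where a project-relative file ends up inside the container, given a
    compose volume `host_dir:container_dir`. Illustration only."""
    host_dir = host_dir.removeprefix("./") or "."
    rel = PurePosixPath(host_rel)
    if host_dir != ".":
        # The file must live under the mounted directory to be visible at all;
        # relative_to raises ValueError if it does not.
        rel = rel.relative_to(host_dir)
    return str(PurePosixPath(container_dir) / rel)

# With your current mount (./src -> /opt/prefect/flows), the src/ prefix disappears:
print(map_to_container("src/tfs_flows/socials/socials.py", "./src", "/opt/prefect/flows"))
# -> /opt/prefect/flows/tfs_flows/socials/socials.py
```

This is exactly why the in-container entrypoint must drop `src/` when only `./src` is mounted.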
e
@Marvin I updated my prefect.yaml file and removed the src, but now I get a new error when deploying: `FileNotFoundError: [Errno 2] No such file or directory: 'tfs_flows\\socials\\socials.py'`
m
thought for 69.9 seconds
Got it: that error is happening at deploy time on your host, not at runtime in the container.

Why this is happening
- `prefect deploy` resolves the `entrypoint` on the machine where you run the command, relative to the directory containing your `prefect.yaml` (your project root).
- After you removed `src/`, the CLI is looking for `tfs_flows/socials/socials.py` in your project root, but your file is actually at `src/tfs_flows/socials/socials.py`. Hence the FileNotFoundError during deploy.

You have two consistent ways to fix this. Pick one.

Option A (recommended): Keep entrypoint with src, mount the project root
- Keep `entrypoint` as `src/tfs_flows/socials/socials.py:socials`
- Mount your whole project so that the runtime path exists inside the container at `/opt/prefect/flows/src/...`

docker-compose
```yaml
worker:
  image: prefecthq/prefect:3.4.19-python3.11
  depends_on:
    prefect:
      condition: service_healthy
  volumes:
    - ./:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
  restart: always
```
prefect.yaml
```yaml
name: tfs-etl-dev-workflow
prefect-version: 3.4.19

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/flows

deployments:
  - name: socials
    description: Pull all socials data from TikTok, FB, and Instagram
    tags: ["testing", "scrape", "daily"]
    entrypoint: src/tfs_flows/socials/socials.py:socials
    flow_name: Pull Socials Data
    parameters:
      for_date: null
    schedule:
      cron: "0 5 * * *"
      timezone: "Europe/London"
    work_pool:
      name: "dev-pool"
```
Option B: Keep mounting only ./src, make entrypoint not include src, and deploy from src
- Keep your existing volume: `./src:/opt/prefect/flows`
- Keep `entrypoint: tfs_flows/socials/socials.py:socials`
- Move `prefect.yaml` into the `src/` directory (so deploy resolves paths relative to `src`), or run `prefect deploy` from inside the `src` directory. Otherwise the deploy step won't find the file on your host.

After changing, redeploy and run
1) Redeploy from your project root (or from `src` for Option B):
```bash
prefect deploy
```
2) Restart the worker container if you changed volumes.
3) Start a new flow run. The worker logs should show `set_working_directory` using `/opt/prefect/flows`.

Key takeaways
- Deploy-time paths must exist on your host relative to the directory containing `prefect.yaml`.
- Runtime paths must exist inside the worker container; that's controlled by your Docker volume and the `pull` step working directory.

If you're still stuck, let me know:
- Where `prefect.yaml` is located
- Your current `entrypoint`
- Your exact project tree (where `src/` and `tfs_flows/` live)

Docs: Using prefect.yaml for deployments
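The deploy-time half can be sketched too. Here is a hypothetical helper (my illustration, not Prefect's actual implementation) mimicking how the CLI resolves an entrypoint relative to the directory containing prefect.yaml:

```python
from pathlib import Path

def resolve_entrypoint(project_root: str, entrypoint: str) -> Path:
    """Mimic deploy-time entrypoint resolution: the file part (before the
    ':') must exist relative to the prefect.yaml directory on the HOST.
    Illustrative sketch only, not Prefect's code."""
    file_part = entrypoint.split(":", 1)[0]
    candidate = Path(project_root) / file_part
    if not candidate.is_file():
        raise FileNotFoundError(2, "No such file or directory", file_part)
    return candidate
```

With the file at `src/tfs_flows/socials/socials.py`, only an entrypoint that keeps the `src/` prefix resolves from the project root, which is precisely the Option A vs. Option B trade-off above.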
e
@Marvin This helped resolve the filepath problem, but now I am getting errors relating to packages not being installed on the docker container. Is this really the best way for local development? How can I streamline this process?
m
thought for 228.7 seconds
Short answer: you're hitting the classic "my code is mounted, but my Python deps aren't" problem. For local dev, you have a few good ways to streamline this. Pick the one that matches your workflow speed vs. reproducibility needs.

Option 1 (fast local dev): Custom worker image with your deps, bind-mount your code
- Build a small image that has all your Python packages preinstalled.
- Keep mounting your code so you don't have to rebuild for code changes; only rebuild when deps change.

Dockerfile
```dockerfile
FROM prefecthq/prefect:3.4.19-python3.11
WORKDIR /opt/prefect/flows
COPY requirements.txt .
RUN pip install -U pip && pip install -r requirements.txt
# If you need OS libs (e.g., for psycopg2), add:
# RUN apt-get update && apt-get install -y libpq-dev gcc && rm -rf /var/lib/apt/lists/*
```
docker-compose (worker)
```yaml
worker:
  build: .
  image: yourorg/prefect-worker:dev
  depends_on:
    prefect:
      condition: service_healthy
  volumes:
    - ./src:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
  restart: always
```
- Your existing prefect.yaml stays the same (working dir and entrypoint as we fixed earlier).
- Result: no more import errors; you still get instant code iteration via the volume mount.

Option 2 (more reproducible): Docker work pool + per-run containers with your image
- Create a Docker work pool (the worker process will launch a fresh container for each flow run using your image).
- Bake your deps (and optionally your code) into an image. If the worker runs in a container, mount the Docker socket into it.

Create a Docker work pool
```bash
prefect work-pool create "dev-pool" --type docker
```
Worker (in docker-compose, with Docker socket):
```yaml
worker:
  image: prefecthq/prefect:3.4.19-python3.11
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
```
- Configure your deployment to use your image (either via a base job template on the pool or in the deployment's infrastructure settings), and update the image as you change dependencies.
- This is the "cleanest" setup for parity with non-local environments.
- Reference: prefect.yaml guide, Integrations

Option 3 (quick-and-dirty): Install deps at container startup
- Not my favorite long-term, but great for fast experiments: update your worker command to install from requirements before starting the worker.

docker-compose (worker)
```yaml
worker:
  image: prefecthq/prefect:3.4.19-python3.11
  volumes:
    - ./src:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["bash", "-lc", "pip install -U pip && pip install -r /opt/prefect/flows/requirements.txt && prefect worker start --pool dev-pool"]
```
- Works because your requirements.txt is bind-mounted into the container.
- Trade-off: slower container startup; you'll see installs run each time the container restarts.

Notes and tips
- Match Python versions between your dev env and the image tag (e.g., `prefecthq/prefect:3.4.19-python3.11`). Avoid dev tags unless you need them.
- If you use the Docker worker inside a container, don't forget the Docker socket mount (`/var/run/docker.sock`) so it can launch containers.
- Prefect has docker build steps you can incorporate into your prefect.yaml if you want to automate image builds during deploys (see `prefect_docker.deployments.steps.build_docker_image`): Integrations.

If you want, tell me:
- Your preferred workflow (fast iteration vs. reproducibility), and
- Your current requirements (Python packages and any OS libs),
and I'll give you a tailored compose + Dockerfile + prefect.yaml setup.
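One way to fail fast instead of mid-flow: a small preflight check you could run in the worker container (an illustrative snippet using only the standard library, not a Prefect feature) that reports which packages are missing from the container's environment:

```python
import importlib.util

def missing_packages(names: list[str]) -> list[str]:
    """Return the subset of top-level package names that are not importable
    in the current interpreter. Useful as a quick preflight check inside
    the worker container before flows start failing on imports."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: check whatever your flows need (these package names are placeholders)
print(missing_packages(["pandas", "sqlalchemy", "psycopg2"]))
```

Running this with `docker compose exec` against the worker container tells you immediately whether Option 1/2/3 actually delivered the dependencies.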
e
@Marvin My production environment is on Windows, and my local dev environment will use Docker. Is this a good idea?
m
thought for 106.2 seconds
Short answer: yes, developing in Docker while production is Windows can work, but you'll be happiest if you aim for environment parity. The more your prod matches your dev (same OS, same image), the fewer surprises you'll hit with dependencies, paths, and compiled wheels.

Recommended paths forward (pick one):

1) Best parity: use Docker in prod too (Docker work pool)
- Build a small image that includes your Python deps.
- Use a Docker work pool so each flow run spins up your image.
- This gives you "it runs the same everywhere."
- Create the pool:

```bash
prefect work-pool create "dev-pool" --type docker
```

- Worker (can run on Windows with Docker Desktop using Linux containers):
```yaml
worker:
  image: prefecthq/prefect:3.4.19-python3.11
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
```
- Bake your deps into an image (used by the pool and your deployments). You can automate image builds with the `prefect_docker` build step:

```yaml
pull:
  - prefect.deployments.steps.utility.pip_install_requirements:
      requirements_file: requirements.txt   # optional if you bake deps

# Or use docker build steps in build/push
# See integrations: https://docs-3.prefect.io/integrations/use-integrations
```
2) Fast local iteration: custom worker image + bind-mount code
- Build an image that has your deps; bind-mount your code for instant edits.
- Only rebuild when deps change.

Dockerfile:
```dockerfile
FROM prefecthq/prefect:3.4.19-python3.11
WORKDIR /opt/prefect/flows
COPY requirements.txt .
RUN pip install -U pip && pip install -r requirements.txt
```
docker-compose:
```yaml
worker:
  build: .
  image: yourorg/prefect-worker:dev
  volumes:
    - ./src:/opt/prefect/flows
  environment:
    PREFECT_API_URL: http://prefect:4200/api
  command: ["prefect", "worker", "start", "--pool", "dev-pool"]
```
- Use the same image in prod via a Docker work pool for parity.

3) If prod must be Windows "process" (no containers)
- Keep Docker for local dev, but ensure your code and deps are cross-platform.
- Use a runtime install step so the Windows worker can self-provision deps from your repo:
```yaml
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/your-org/your-repo.git
      branch: main
  - prefect.deployments.steps.utility.pip_install_requirements:
      requirements_file: requirements.txt
  - prefect.deployments.steps.set_working_directory:
      directory: .
```
- Pros: easy to roll out. Cons: slower start times, more variance (Windows wheels, system libs).
- For Windows, prefer pure-Python deps or wheels available for Windows (e.g., use `psycopg2-binary` instead of `psycopg2`). Avoid Linux-only libs.

General guidance
- Aim for parity: Docker in dev and Docker in prod removes OS differences (path separators, case sensitivity, compiled libs).
- If Windows in prod is non-negotiable, invest in:
  - Pinning versions in `requirements.txt`
  - CI that runs tests on Windows
  - Using `pip_install_requirements` in pull steps to converge environments
- Avoid absolute host paths in deployments. Either mount the project and set the container working directory, or use `git_clone` in pull steps to make paths consistent across OSes.

Docs to reference
- Using prefect.yaml for deployments
- Prefect integrations (Docker, git steps, utility steps)

If you can share constraints for prod (Can you run Docker? Must it be Windows-only? Any OS-level deps?), I can give you a precise setup (prefect.yaml + Dockerfile + compose + pool config) tailored to your case.
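To make the path-separator point concrete (plain Python stdlib, nothing Prefect-specific): the same string that is a drive-anchored, four-component path on Windows is a single opaque filename on Linux, which is exactly why the original `C:\\Projects\\...` working directory could never exist inside the worker container:

```python
from pathlib import PureWindowsPath, PurePosixPath

raw = "C:\\Projects\\prefect-testing\\tfs-etl-dev-workflow"

win = PureWindowsPath(raw)
print(win.drive, win.parts)   # drive 'C:' and four components on Windows

posix = PurePosixPath(raw)
print(posix.parts)            # one component: backslashes are not separators on POSIX
```

This is the cross-OS hazard that mounting the project and setting a container-side working directory (or using `git_clone`) sidesteps.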