@Marvin I'm trying to start a deployment using...
# ask-marvin
s
@Marvin I'm trying to start a deployment using a docker worker pool. It starts the run but then crashes:
Running set_working_directory step...
11:23:49 AM prefect.flow_runs [Errno 2] No such file or directory: '/home/scox/prefect_test'
The prefect server is also running in its own container, if that matters. I'm using the prefecthq/prefect:3-latest docker image for the worker & none of my code directly refers to my repo directory. I'm not sure what I'm missing.
m
thought for 376.4 seconds
Thanks for the details — this error is almost certainly coming from your deployment’s pull steps, not your flow code.

What’s happening
- The “Running set_working_directory step…” log is from the Prefect pull step prefect.deployments.steps.pull.set_working_directory.
- Your deployment is trying to chdir to /home/scox/prefect_test inside the Docker job container. That path came from the machine where you ran prefect deploy (or from your prefect.yaml), and it doesn’t exist in the container — hence “[Errno 2] No such file or directory”. This is unrelated to your Prefect server running in a separate container.

How to fix it (pick one)

1) Mount your local directory into the job container
- If you really want the job container to use files from /home/scox/prefect_test, add a volume mount in your Docker work pool’s base job template so the path exists in the container.
- Get the default template:
Copy code
prefect work-pool get-default-base-job-template --type docker --file docker-template.json
- Edit docker-template.json and add your volume:
Copy code
{
  "job_configuration": {
    "image": "prefecthq/prefect:3-latest",
    "volumes": [
      "/home/scox/prefect_test:/home/scox/prefect_test:rw"
    ],
    ... keep the other fields/placeholders ...
  },
  "variables": { ... }
}
- Update your pool:
Copy code
prefect work-pool update "<your-docker-pool-name>" --base-job-template docker-template.json
- Then re-run your flow. The path will now exist in the container.

2) Remove or change the set_working_directory pull step
- Open your prefect.yaml and look for a pull: section like:
Copy code
pull:
    - prefect.deployments.steps.pull.set_working_directory:
        directory: /home/scox/prefect_test
- Either remove that step entirely, or change the directory to something that exists in the container (e.g. /opt/prefect/flows) and make sure your code is available there (via a git clone pull step or other remote storage pull step).
- Example pull steps that work well in containers:
Copy code
pull:
    - prefect.deployments.steps.pull.set_working_directory:
        directory: /opt/prefect/flows
    - prefect.deployments.steps.pull.git_clone:
        repository: https://github.com/<org>/<repo>
        branch: main
        # access_token: "{{ prefect.blocks.secret.my_github_pat }}"
3) Use flow.from_source(...).deploy(...) so Prefect clones your repo at runtime
- This avoids hard-coding a host path in the deployment:
Copy code
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source(
    source="https://github.com/<org>/<repo>",
    entrypoint="path/to/flow.py:my_flow",
    # access_token="..."  # if needed
).deploy(
    name="docker-git",
    work_pool_name="<your-docker-pool-name>",
    job_variables={"image": "prefecthq/prefect:3-latest"},
)
- No need to mount your host directory; Prefect will clone into a temp directory inside the container.

Confirm what your deployment is doing
- Run:
Copy code
prefect deployment inspect "<flow-name>/<deployment-name>"
- Look for the “pull_steps” section; you should see the set_working_directory entry and the directory it’s trying to use.

A few links you may find helpful
- API: set_working_directory pull step (prefect.deployments.steps.pull.set_working_directory)
- Deployments overview (Prefect 3.x)
- Docker work pools how-to (Docker work pools and job templates)

If you can share:
- how you created the deployment (CLI vs Python),
- the relevant pull: section from your prefect.yaml (if using one), or
- the output of prefect deployment inspect "<flow>/<deployment>",
I can suggest the most direct change for your setup.
s
@Marvin here is my prefect.yml:

version: 1
# Project metadata (optional but useful)
name: ansible-runner
prefect-version: 3.0.0

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /root/.prefect

deployments:
  - name: ansible-run
    # path to your flow function
    entrypoint: flows/my_flow.py:my_flow
    # point it at your docker worker pool
    work_pool:
      name: ansible-docker-runner
      # job_variables tell the worker which image/envs to use
      job_variables:
        # image: "prefecthq/prefect:3-latest"
        env:
          PREFECT_API_URL: "http://10.5.2.165:4200/api"
          ENGINE_NAME: "ENGINE #"
          PLAYBOOK: "test_playbook.yml"
          "PREFECT_HOME": "/root/.prefect"

This now complains that:

File "/usr/local/lib/python3.12/site-packages/prefect/utilities/importtools.py", line 122, in load_script_as_module
    raise ScriptError(user_exc=exc, path=path) from exc
prefect.exceptions.ScriptError: Script at 'flows/my_flow.py' encountered an exception: FileNotFoundError(2, 'No such file or directory')

What edits do I need to make for this to run? (Ignore the fact it says ansible; the flow is just a print output to confirm the flow has run for now.)
m
thought for 68.6 seconds
Thanks — the error now is because the job container can’t find your entrypoint file. Your deployment is telling Prefect to import flows/my_flow.py:my_flow, but nothing in the container has created a flows/ directory. Right now you only set the working directory; you’re not pulling any code into it.

You’ve got two good options:

Option A: Pull your repo at runtime with a git pull step (recommended)
- Edit your prefect.yaml to add a git_clone pull step and point the working directory to a neutral path (not PREFECT_HOME). For example:
Copy code
version: 1
name: ansible-runner
prefect-version: 3.0.0

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/flows
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/<org>/<repo>
      branch: main
      # If private, add one of:
      # access_token: "{{ prefect.blocks.secret.github_pat }}"
      # or credentials: "{{ prefect.blocks.github-credentials.my-creds }}"
  # Optional if you need extra deps:
  # - prefect.deployments.steps.utility.pip_install_requirements:
  #     directory: /opt/prefect/flows
  #     requirements_file: requirements.txt

deployments:
  - name: ansible-run
    entrypoint: flows/my_flow.py:my_flow    # see note below
    work_pool:
      name: ansible-docker-runner
      job_variables:
        image: "prefecthq/prefect:3-latest"
        env:
          PREFECT_API_URL: "<http://10.5.2.165:4200/api>"
          ENGINE_NAME: "ENGINE #"
          PLAYBOOK: "test_playbook.yml"
          PREFECT_HOME: "/root/.prefect"
Notes:
- Depending on how git_clone populates files, you may need to adjust `entrypoint`:
  - If the repo clones directly into /opt/prefect/flows, entrypoint: flows/my_flow.py:my_flow is correct.
  - If it clones into a subfolder (e.g., /opt/prefect/flows/<repo-name>), change the entrypoint to <repo-name>/flows/my_flow.py:my_flow.
- If you only want a subset of the repo, you can use directories to copy specific folders into the working dir:
Copy code
- prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/<org>/<repo>
      branch: main
      directories:
        - flows
With that, entrypoint: flows/my_flow.py:my_flow will be correct.

Option B: Mount your local repo into the job container
- Keep your current entrypoint: flows/my_flow.py:my_flow, but mount your repo into the container and set the working directory there.
- Update your Docker work pool’s base job template to include a volume mount. Example:

1) Get the default template
Copy code
prefect work-pool get-default-base-job-template --type docker --file docker-template.json
2) Edit docker-template.json to add a volumes entry mapping your host path to the container path, e.g.:
Copy code
"volumes": [
       "/path/on/host/your-repo:/opt/prefect/flows:rw"
     ]
3) Update the pool
Copy code
prefect work-pool update "ansible-docker-runner" --base-job-template docker-template.json
4) In your prefect.yaml, point the working dir at `/opt/prefect/flows`:
Copy code
pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/flows

Now flows/my_flow.py will exist inside the job container.

Quick sanity checks
- Inspect your deployment to confirm the pull steps being used:
Copy code
prefect deployment inspect "ansible-runner/ansible-run"
- If your repo is private, make sure you provide credentials to the git_clone step (a minimal credential sketch is shown at the end of this message).

Docs for reference
- Pull steps API: prefect.deployments.steps.pull
- Docker work pools and job templates: Docker work pools

If you share your repo location (and whether it’s private) I can give you the exact git_clone stanza and the correct entrypoint value.
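For a private repo, a minimal sketch of wiring up a token via a Secret block (the block name github-pat and the use of a GitHub PAT are assumptions, not something confirmed in this thread):
Copy code
# run once, from any environment pointed at your Prefect server
from prefect.blocks.system import Secret

# store the token server-side so the pull step can resolve it at runtime
Secret(value="<your GitHub PAT>").save("github-pat")
The git_clone step can then reference it as access_token: "{{ prefect.blocks.secret.github-pat }}", along the lines of the commented-out line in the earlier example.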
s
@Marvin if I wanted to do option b, but I only want to mount the flows directory not the whole home directory
m
thought for 102.8 seconds
Great — you can mount just your flows directory into the job container via the Docker work pool’s base job template. Here’s a minimal setup that keeps your existing `entrypoint: flows/my_flow.py:my_flow`:

1) Mount only the flows folder
- Export the default Docker job template:
Copy code
prefect work-pool get-default-base-job-template --type docker --file docker-template.json
- Edit docker-template.json and add a bind mount for your host flows folder to a path inside the job container, e.g.:
Copy code
{
  "job_configuration": {
    ... keep all the defaults/placeholders ...,
    "volumes": [
      "/absolute/path/on/host/your-repo/flows:/opt/prefect/flows:ro"
    ]
  },
  "variables": { ... }
}
- Update your pool:
Copy code
prefect work-pool update "ansible-docker-runner" --base-job-template docker-template.json
2) Update prefect.yaml to point the working directory at the parent of flows
- Since your entrypoint is flows/my_flow.py:my_flow, set the working directory to /opt/prefect so that flows/my_flow.py exists relative to it:
Copy code
version: 1
name: ansible-runner
prefect-version: 3.0.0

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect

deployments:
  - name: ansible-run
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: ansible-docker-runner
      job_variables:
        image: "prefecthq/prefect:3-latest"
        env:
          PREFECT_API_URL: "<http://10.5.2.165:4200/api>"
          ENGINE_NAME: "ENGINE #"
          PLAYBOOK: "test_playbook.yml"
          PREFECT_HOME: "/root/.prefect"
Important notes
- The left side of the volume (/absolute/path/on/host/your-repo/flows) must be a path on the Docker HOST, not inside the worker container. If your code is only inside the worker container, bind-mount it from the host into the worker as well, or point the job volume at a host path that actually has the flows.
- If you prefer to keep your working directory set to /opt/prefect/flows, then change your entrypoint to my_flow.py:my_flow instead.

Quick verify
- Inspect your deployment to confirm pull steps:
Copy code
prefect deployment inspect "ansible-runner/ansible-run"
- Trigger a run and confirm the container has /opt/prefect/flows/my_flow.py (a one-off check using plain docker run is sketched at the end of this message).

Docs
- Docker work pools and job templates: Docker work pools
- Pull steps API: prefect.deployments.steps.pull

If you share the exact host path to your flows folder, I can fill in the volume line precisely.
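One way to sanity-check the bind mount outside of Prefect is to run the same image with the same volume and list the directory; the host path below is a placeholder you would swap for your real flows folder:
Copy code
docker run --rm \
  -v /absolute/path/on/host/your-repo/flows:/opt/prefect/flows:ro \
  prefecthq/prefect:3-latest \
  ls -la /opt/prefect/flows
If my_flow.py shows up in that listing, the same host path in the work pool's volumes entry should behave identically for flow runs.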
s
@Marvin when I run "prefect deployment run 'my-flow/ansible-run'", how do I update the docker envs passed to the docker worker?
m
thought for 192.9 seconds
Great question. In Prefect 3.x, the Docker worker’s environment for a specific flow run can be overridden at trigger time using job variables on the deployment run.

One-off (this run only)
- Use --job-variable to set the env map for the Docker job:
Copy code
prefect deployment run 'my-flow/ansible-run' --job-variable env='{"ANSIBLE_HOST_KEY_CHECKING":"False","PREFECT_LOGGING_LEVEL":"DEBUG"}'
Notes:
- The value after env= must be valid JSON.
- Quoting tips:
  - Bash/zsh: --job-variable env='{"FOO":"bar"}'
  - PowerShell: --job-variable env="{\"FOO\":\"bar\"}"
- In the default Docker job template, these env values are applied on top of pool/deployment defaults; keys you provide take precedence.

Persist for all future runs
- Put them in your deployment’s job variables (prefect.yaml):
Copy code
deployments:
- name: ansible-run
  work_pool:
    name: my-docker-pool
  job_variables:
    env:
      ANSIBLE_HOST_KEY_CHECKING: "False"
      PREFECT_LOGGING_LEVEL: "DEBUG"
- Or in Python when deploying:
Copy code
flow.from_source("<https://github.com/owner/repo>", entrypoint="path:flow.py:flow").deploy(
  name="ansible-run",
  work_pool_name="my-docker-pool",
  job_variables={"env": {"ANSIBLE_HOST_KEY_CHECKING": "False", "PREFECT_LOGGING_LEVEL": "DEBUG"}}
)
- Or at the work pool level by editing the base job template’s defaults (via UI or CLI) so all deployments using that pool inherit them.

Docs for deeper reference:
- Customize job variables
- Manage work pools
- Work pools and workers blog

Would you like these envs applied just for this one run, or baked into the deployment/work pool going forward?
s
@Marvin this is my current flow python file:

# flows/my_flow.py
import os
from prefect import flow, get_run_logger

# Get environment variables
engine = os.getenv("ENGINE_NAME", "Unknown Engine")
playbook = os.getenv("PLAYBOOK", "default_playbook")

@flow(name=f"{engine} - My Flow")
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from inside my_flow!")
    logger.info("Running with PREFECT_API_URL, ENGINE_NAME, and PLAYBOOK env vars.")
    # You can access env vars if you want:
    import os
    logger.info("testing env vars...")
    logger.info(f"PREFECT_API_URL={os.getenv('PREFECT_API_URL')}")
    logger.info(f"ENGINE_NAME={os.getenv('ENGINE_NAME')}")
    logger.info(f"PLAYBOOK={os.getenv('PLAYBOOK')}")
    # Do your work here
    return "done"

When I run this the name is my-flow rather than name=f"{engine} - My Flow".
m
thought for 230.2 seconds
Short answer: you’re seeing “my-flow” because the flow’s name is resolved when the flow is imported, and your deployment is still targeting the original flow object (whose name defaulted to the function name). Also, making the flow’s entity name dynamic via env vars is not recommended; use a static flow name and a dynamic flow run name instead.

What’s happening
- The name= on @flow is evaluated at import time. In a Docker worker this happens inside the container when your module is imported.
- Deployments point to a specific flow object (by ID). If you created the deployment when the flow’s name was the default, the deployment will continue to run that flow object and you’ll see “my-flow”.
- Changing the decorator later does not retroactively change the flow entity for existing deployments/runs.

Recommended pattern
- Keep the flow name stable.
- Put environment-specific detail (ENGINE_NAME, PLAYBOOK) into the flow run name so you can see it per run.

Example:
Copy code
import os
from prefect import flow, get_run_logger

@flow(
    name="My Flow",
    # dynamic run name from env vars at run time
    flow_run_name=lambda: f"{os.getenv('ENGINE_NAME', 'Unknown Engine')} - {os.getenv('PLAYBOOK', 'default_playbook')}"
)
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from inside my_flow!")
    logger.info("Running with PREFECT_API_URL, ENGINE_NAME, and PLAYBOOK env vars.")
    logger.info("testing env vars...")
    logger.info(f"PREFECT_API_URL={os.getenv('PREFECT_API_URL')}")
    logger.info(f"ENGINE_NAME={os.getenv('ENGINE_NAME')}")
    logger.info(f"PLAYBOOK={os.getenv('PLAYBOOK')}")
    return "done"
Alternatively, parameterize and template the run name:
Copy code
from prefect import flow, get_run_logger

@flow(name="My Flow", flow_run_name="{parameters.engine} - {parameters.playbook}")
def my_flow(engine: str = "Unknown Engine", playbook: str = "default_playbook"):
    logger = get_run_logger()
    logger.info(f"ENGINE_NAME={engine}")
    logger.info(f"PLAYBOOK={playbook}")
    return "done"
Then when you trigger a run:
Copy code
prefect deployment run 'my-flow/ansible-run' -p engine='"Ansible"' -p playbook='"site.yml"'
(values must be valid JSON)

If you truly want to change the flow’s entity name
- Re-deploy after changing the decorator so the deployment points to the new flow object.
- Ensure the env vars you rely on exist at import time inside the Docker container (you can set them via job variables):
Copy code
prefect deployment run 'my-flow/ansible-run' --job-variable env='{"ENGINE_NAME":"Ansible","PLAYBOOK":"site.yml"}'
Docs:
- Customize job variables
- Deployments overview

Quick check: Do you want the flow’s display name to change permanently, or do you just want each run to show ENGINE_NAME/PLAYBOOK? I can tailor the steps accordingly.
s
@Marvin can you give me a python code snippet that does the same as these 2 commands: prefect deployment run 'my-flow/ansible-run' --job-variable env='{"ENGINE_NAME": "LOEN1", "PLAYBOOK": "test_playbook"}' && prefect worker start --pool 'ansible-docker-runner' --run-once
m
thought for 44.9 seconds
Yep! Here are two Python ways to do the equivalent:

1) Simple and reliable: call the same CLI commands from Python
Copy code
import json
import subprocess

deployment = "my-flow/ansible-run"
pool = "ansible-docker-runner"
env_overrides = {"ENGINE_NAME": "LOEN1", "PLAYBOOK": "test_playbook"}

# 1) Create a flow run with job variables (env overrides)
subprocess.run(
    [
        "prefect", "deployment", "run", deployment,
        "--job-variable", f"env={json.dumps(env_overrides)}",
    ],
    check=True,
)

# 2) Start a worker for a single polling cycle
subprocess.run(
    [
        "prefect", "worker", "start",
        "--pool", pool,
        "--run-once",
    ],
    check=True,
)
2) Use the Prefect Python client for the run, then start the worker once
Copy code
import asyncio
import json
import subprocess
from prefect.client.orchestration import get_client

deployment = "my-flow/ansible-run"
pool = "ansible-docker-runner"
env_overrides = {"ENGINE_NAME": "LOEN1", "PLAYBOOK": "test_playbook"}

async def main():
    # 1) Create a flow run from a deployment with job variables
    async with get_client() as client:
        dep = await client.read_deployment_by_name(deployment)
        await client.create_flow_run_from_deployment(
            deployment_id=dep.id,
            job_variables={"env": env_overrides},
        )

    # 2) Start a worker for a single polling cycle
    subprocess.run(
        ["prefect", "worker", "start", "--pool", pool, "--run-once"],
        check=True,
    )

if __name__ == "__main__":
    asyncio.run(main())
Notes:
- job_variables={"env": {...}} matches the Docker job template’s env block and will override/add env vars for that run.
- The subprocess approach is the closest 1:1 to your original commands and works across platforms.

Refs:
- Customize job variables
- Deployments overview

Want me to tweak this to wait for the run to finish or to print the flow run ID?
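If you do want to print the run ID and block until the run finishes, here is a minimal sketch extending the client example above; it assumes the worker is already polling the pool (otherwise the wait will never return), and the exact fields read off the returned objects are a sketch rather than gospel:
Copy code
import asyncio
from prefect.client.orchestration import get_client
from prefect.flow_runs import wait_for_flow_run

async def run_and_wait():
    async with get_client() as client:
        dep = await client.read_deployment_by_name("my-flow/ansible-run")
        flow_run = await client.create_flow_run_from_deployment(
            deployment_id=dep.id,
            job_variables={"env": {"ENGINE_NAME": "LOEN1", "PLAYBOOK": "test_playbook"}},
        )
    print(f"Created flow run {flow_run.id}")

    # blocks until the run reaches a terminal state
    finished = await wait_for_flow_run(flow_run.id)
    print(f"Final state: {finished.state.name}")

if __name__ == "__main__":
    asyncio.run(run_and_wait())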
s
@Marvin is the my-flow part of this defined in the @flow(name="") of the flow file
m
thought for 16.1 seconds
Short answer: yes. The my-flow part is the flow name that your deployment was created with. That comes from the @flow(name="...") on your flow at deploy time (if you didn’t set it, Prefect uses the function name).

Key points:
- Deployments are bound to a specific flow entity. The “FLOW_NAME/DEPLOYMENT_NAME” you type in the CLI uses the flow name recorded on that deployment.
- Changing @flow(name=...) later does not change existing deployments—you’d need to redeploy for the new flow name to be reflected.
- Best practice: keep the flow name stable; use flow_run_name for dynamic info per run.

If you want my-flow to change:
1) Update the decorator, e.g. @flow(name="My New Flow")
2) Recreate/redeploy the deployment so it points to this (re)named flow (a small sketch follows below).

Refs:
- Deployments overview
- Name/slug behavior
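A minimal redeploy sketch under this thread's assumptions (the repo URL is a placeholder; the entrypoint, pool, and image are the ones discussed earlier):
Copy code
from prefect import flow

# after changing the decorator in flows/my_flow.py to @flow(name="My New Flow"),
# re-register the deployment so it points at the renamed flow
flow.from_source(
    source="https://github.com/<org>/<repo>",      # placeholder repo
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="ansible-run",
    work_pool_name="ansible-docker-runner",
    job_variables={"image": "prefecthq/prefect:3-latest"},
)
After redeploying, the FLOW_NAME part of "FLOW_NAME/DEPLOYMENT_NAME" in the CLI reflects the new flow name.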
s
@Marvin how do I start the prefect server with a predefined worker pool
m
thought for 61.1 seconds
Short answer: there isn’t a flag to create work pools as part of prefect server start. You start the server, then create the work pool via CLI (or a bootstrap script) once the API is up.

Two common patterns:

A) Local script (start server, wait for health, create pool)
Copy code
#!/usr/bin/env bash
set -euo pipefail

# 1) Start the local Prefect server
prefect server start &

# 2) Wait for the API to be healthy
until curl -sf http://127.0.0.1:4200/api/health > /dev/null; do
  echo "Waiting for Prefect server..."
  sleep 1
done
echo "Prefect server is up."

# 3) Create or update a Docker work pool with an optional base job template
#    (Remove --base-job-template if you don't need to customize defaults)
prefect work-pool create "ansible-docker-runner" \
  --type docker \
  --base-job-template ./docker-job-template.json \
  --overwrite || true

echo "Work pool ready."
Example docker-job-template.json (optional defaults for all runs in this pool):
Copy code
{
  "job_configuration": {
    "image": "python:3.11-slim",
    "env": {
      "PREFECT_LOGGING_LEVEL": "INFO",
      "ENGINE_NAME": "default-engine"
    }
  }
}
B) Docker Compose: add an “init” service that waits for the API, then creates the pool
Copy code
services:
  prefect-server:
    image: prefecthq/prefect:3-latest
    command: ["prefect", "server", "start", "--host", "0.0.0.0"]
    ports:
      - "4200:4200"

  prefect-init:
    image: prefecthq/prefect:3-latest
    depends_on:
      - prefect-server
    volumes:
      - ./docker-job-template.json:/docker-job-template.json:ro
    entrypoint: /bin/sh
    command: -c "
      until curl -sf http://prefect-server:4200/api/health > /dev/null; do echo 'waiting for server'; sleep 1; done &&
      prefect work-pool create 'ansible-docker-runner' --type docker --base-job-template /docker-job-template.json --overwrite
    "
Notes
- You can omit --base-job-template if you don’t need to preconfigure defaults; the pool will be created with Prefect’s default Docker template.
- Re-running the create command with --overwrite will update the pool if it already exists (idempotent bootstrap).
- After the pool exists, start a worker to pull from it:
Copy code
prefect worker start --pool "ansible-docker-runner"
Helpful docs
- Manage work pools
- Customize job variables

If you share your environment (local vs. Docker/Kubernetes) and the worker type you want (Docker, Kubernetes, Process, etc.), I can tailor the exact bootstrap snippet for you.
s
@Marvin I currently have a docker worker set up that uses a prefect.yml to point at a flow file & a work pool that it will use. I want to have this more dynamically assigned. I still want a docker work pool, but I want to be able to define flows during runtime that can execute using that work pool. Can you give an example of the flow file, prefect.yml if needed, etc
m
thought for 604.6 seconds
Great question — you can absolutely keep a single Docker work pool/worker and dynamically register new flows at runtime so they execute there. In Prefect 3.x, the cleanest way is to programmatically create deployments from source with Flow.from_source(...).deploy(...). This avoids hardcoding flows in a prefect.yml and lets you add/update deployments at any time. Below is a minimal, end-to-end example.

Assumptions
- You’re on Prefect 3.x.
- You want a Docker work pool.
- You’d prefer not to rebuild/publish a new Docker image for every new flow (we’ll point deployments at a Git repo or local path; the worker runs the image and loads code from source).

1) Create a Docker work pool (one-time)
Make sure the Docker extra is available when using Docker features:
Copy code
uv run --with 'prefect[docker]' prefect work-pool create my-docker-pool --type docker
2) Example flow file (in your repo)
Repo layout example:
- repo root/
  - flows/hello.py

Contents of flows/hello.py:
Copy code
from prefect import flow, get_run_logger

@flow
def hello(name: str = "world"):
    logger = get_run_logger()
    logger.info(f"Hello, {name}!")
3) Dynamically register deployments from source
This script can run in CI/CD or anywhere with Prefect auth to your server/cloud. It registers one or more deployments that will run on your Docker work pool. The image should be a stable “runtime” image that has Prefect and your dependencies installed — you don’t need to rebuild it as you add flows.

deploy_flows.py:
Copy code
from prefect.flows import Flow

# Source can be a Git repo URL or a local path
# If using Git, ensure your Docker worker can authenticate (e.g., SSH key or token)
SOURCE = "<https://github.com/your-org/your-prefect-repo>"
WORK_POOL = "my-docker-pool"
RUNTIME_IMAGE = "<http://ghcr.io/your-org/prefect-runtime:py3.11|ghcr.io/your-org/prefect-runtime:py3.11>"  # prebuilt image with deps

# Define any flows you want to deploy dynamically
TO_DEPLOY = [
    {
        "name": "hello-dev",
        "entrypoint": "flows/hello.py:hello",
        "parameters": {"name": "dev"},
        "tags": ["dev"]
    },
    {
        "name": "hello-prod",
        "entrypoint": "flows/hello.py:hello",
        "parameters": {"name": "production"},
        "tags": ["prod"]
    },
]

def main():
    for spec in TO_DEPLOY:
        flow = Flow.from_source(SOURCE, entrypoint=spec["entrypoint"])
        flow.deploy(
            name=spec["name"],
            work_pool_name=WORK_POOL,
            image=RUNTIME_IMAGE,        # runs in Docker via your work pool
            build=False,                # no image build step
            push=False,                 # no image push step
            parameters=spec.get("parameters"),
            tags=spec.get("tags"),
            # Optional: set Docker job variables like env, networks, volumes, etc.
            job_variables={
                "env": {"PREFECT_LOGGING_LEVEL": "INFO"},
                # "networks": ["my_network"],
                # "volumes": ["/host/path:/container/path:rw"],
                # "pull_policy": "ALWAYS",
            },
        )
    print("Deployments registered.")

if __name__ == "__main__":
    main()
Notes:
- For SOURCE as a Git repo, the Docker container running your flow needs access to clone/fetch. Provide credentials via env vars, SSH agent, or secrets.
- If you set SOURCE to a local path, that source must be accessible when the worker executes the flow (e.g., via remote storage or baked into the image). For dynamic flows, Git is usually the easiest.

4) Start your Docker worker and point it at the pool
Run this wherever you want your Docker workloads to execute:
Copy code
uv run --with 'prefect[docker]' prefect worker start -p my-docker-pool
5) Kick off a run of a new deployment
After running deploy_flows.py, you’ll see two deployments:
- hello/hello-dev
- hello/hello-prod

Trigger one:
Copy code
prefect deployment run "hello/hello-dev" --watch
Do I still need prefect.yml?
- Not for this pattern. prefect.yml is helpful if you want to use the CLI to deploy from a project recipe and/or build/push Docker images in a standardized way. For fully dynamic, programmatic deployments, you can skip it.

Two common variants
- Immutable base image + Git source (shown above): No image rebuilds when flows change. Make sure the container can read the repo.
- Build code into the image: If you prefer fully self-contained images (e.g., locked-down environments), use prefect deploy from a project with a prefect.yml that defines build/push steps and deployments. This is less “dynamic,” as it requires a rebuild for new flows. (A minimal sketch of such a prefect.yml is at the end of this message.)

Questions to tune this for you
- Where does your flow code live today (Git vs local path)? Do you want to avoid rebuilding/pushing Docker images whenever you add a flow?
- Do your containers have access to your repo (how do you prefer to authenticate)?
- Do you need specific Docker networks/volumes/env vars per deployment?

If you share those, I can tailor the deploy script or a minimal prefect.yml to your exact setup.
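For the build-code-into-the-image variant, a minimal prefect.yml sketch using the prefect-docker build/push steps (the image name and tag are assumptions, and the prefect-docker package must be installed wherever you run prefect deploy):
Copy code
name: ansible-runner
prefect-version: 3.0.0

build:
  - prefect_docker.deployments.steps.build_docker_image:
      id: build-image
      requires: prefect-docker
      image_name: ghcr.io/your-org/prefect-runtime   # assumption: your registry/repo
      tag: latest
      dockerfile: auto   # let Prefect generate a Dockerfile that copies the project in

push:
  - prefect_docker.deployments.steps.push_docker_image:
      requires: prefect-docker
      image_name: "{{ build-image.image_name }}"
      tag: "{{ build-image.tag }}"

deployments:
  - name: ansible-run
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: ansible-docker-runner
      job_variables:
        image: "{{ build-image.image }}"
With this layout, each prefect deploy rebuilds and pushes the image, which is why the answer above calls it less dynamic.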
s
@Marvin how do I define a flow that calls 2 flows that can run at the same time in python
m
thought for 123.5 seconds
You’ve got two good options in Prefect 3.x:

A) Run two subflows concurrently within a parent flow (same infra/process)
- Use child_flow.submit(...) to kick off subflows in parallel.
- Give the parent a task runner (e.g., ThreadPoolTaskRunner) so work can overlap.

Example:
Copy code
from prefect import flow, get_run_logger
from prefect.task_runners import ThreadPoolTaskRunner
from prefect.futures import as_completed

@flow(name="child-a")
def child_a(x: int) -> int:
    logger = get_run_logger()
    logger.info(f"child_a starting with {x}")
    return x * 2

@flow(name="child-b")
def child_b(y: int) -> int:
    logger = get_run_logger()
    logger.info(f"child_b starting with {y}")
    return y + 10

@flow(name="parent", task_runner=ThreadPoolTaskRunner())
def parent():
    # Launch both subflows concurrently
    f1 = child_a.submit(5)
    f2 = child_b.submit(7)

    # Option 1: wait individually
    r1 = f1.result()
    r2 = f2.result()
    return r1, r2

    # Option 2: stream results as they complete
    # for fut in as_completed([f1, f2]):
    #     print("Completed:", fut.result())

if __name__ == "__main__":
    print(parent())
B) Kick off two deployed flows in parallel (scale out on your workers)
- Useful if you want each flow to run on your Docker work pool as separate flow runs (potentially on different workers/machines).

Example orchestrator:
Copy code
from prefect import flow
from prefect.deployments import run_deployment
from prefect.flow_runs import wait_for_flow_run

@flow
def orchestrator():
    # Assumes you already have deployments registered:
    # - child-a/child-a-deployment
    # - child-b/child-b-deployment
    r1 = run_deployment(
        "child-a/child-a-deployment",
        parameters={"x": 5},
        as_subflow=True,  # set False if you want them as separate (non-subflow) runs
        timeout=0,        # return immediately so both runs are created and can execute in parallel
    )
    r2 = run_deployment(
        "child-b/child-b-deployment",
        parameters={"y": 7},
        as_subflow=True,
        timeout=0,
    )

    # Wait for both to finish (optional)
    wait_for_flow_run(r1.id)
    wait_for_flow_run(r2.id)

if __name__ == "__main__":
    orchestrator()
Which should you choose?
- Use A when you want a single parent run to orchestrate subflows locally/in the same container.
- Use B when you want true parallelism across your Docker work pool (each flow run scheduled to workers), or when each child is already a deployment.
- (An async variant using asyncio.gather is sketched below.)

If you share how you plan to run these (local vs via deployments on your Docker pool), I can tailor this to your setup (including deployment examples).
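If your subflows are async, another option (not covered in the answer above, so treat it as a sketch) is to await them together with asyncio.gather inside an async parent flow; this runs both subflows concurrently in the parent's event loop:
Copy code
import asyncio
from prefect import flow

@flow
async def child_a(x: int) -> int:
    return x * 2

@flow
async def child_b(y: int) -> int:
    return y + 10

@flow
async def parent_async():
    # gather schedules both subflow coroutines concurrently
    r1, r2 = await asyncio.gather(child_a(5), child_b(7))
    return r1, r2

if __name__ == "__main__":
    print(asyncio.run(parent_async()))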