Hi <@ULVA73B9P>, how can I configure Prefect so that...
# ask-marvin
o
Hi @Marvin, how can I configure Prefect so that my personal Artifactory is used instead of Docker Hub for pulling and pushing Docker images with .deploy? Note that my private Artifactory has the same routes as the public Docker Hub. My company only allows access to Docker Hub via the private Artifactory.
m
thought for 572.8 seconds
Great question — you can absolutely direct Prefect to build/push to Artifactory and have your workers pull from it. High-level - In Prefect 3.x, the registry used is determined by the image name you provide. Use your Artifactory hostname as the image prefix and Prefect will build/tag/push there. - For the push during deploy: the machine running
.deploy(...)
must be authenticated to your Artifactory registry (e.g., via
docker login
or CI secrets). - For workers to pull at run time: configure your work pool/worker with Docker registry credentials (via a Prefect block) or Kubernetes image pull secrets. 1) Point
.deploy()
at Artifactory Pass a fully-qualified image name (including your Artifactory host) to
Flow.deploy
. You can use either a string or
DockerImage
for extra build kwargs. Example:
Copy code
from prefect import flow
from prefect.docker import DockerImage

@flow
def my_flow():
    ...

# Source could be Git, local path, etc.
flow.from_source(
    source="<https://github.com/acme/prefect-repo>",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="docker-pool",
    image=DockerImage(
        name="<http://artifactory.myco.com/prefect/my-flow|artifactory.myco.com/prefect/my-flow>",  # <- your Artifactory host/repo
        tag="2025-09-30",
        dockerfile="Dockerfile",
        # Optional: pass Docker build args (e.g., for Artifactory PyPI mirrors, proxies, etc.)
        buildargs={
            "PIP_INDEX_URL": "<https://artifactory.myco.com/api/pypi/pypi/simple>"
        },
    ),
    build=True,
    push=True,
)
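If you don't need extra build kwargs, a plain string image name works too; the registry is still inferred from the image name prefix. A minimal sketch under the same assumptions (host, repo path, and tag are placeholders):
Copy code
from prefect import flow

# String variant: Prefect builds a default image and tags/pushes it to the
# registry named in the image prefix (placeholder host/repo/tag below).
flow.from_source(
    source="https://github.com/acme/prefect-repo",
    entrypoint="flows/my_flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="docker-pool",
    image="artifactory.myco.com/prefect/my-flow:2025-09-30",
    build=True,
    push=True,
)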
Notes: - Make sure the machine running this code can push (e.g.,
docker login artifactory.myco.com
). - If your Artifactory proxies Docker Hub and your company requires all pulls go through Artifactory, update your Dockerfile base image to use the proxied path, e.g.: - Instead of
FROM python:3.11-slim
, use
FROM artifactory.myco.com/docker-hub-proxy/library/python:3.11-slim
- Or configure your Docker daemon with a registry mirror that points to Artifactory (outside Prefect). 2) Authenticating the push (during deploy) Flow.deploy’s push step uses your local Docker credentials. Options: - Local: run
docker login artifactory.myco.com
before
.deploy(...)
. - CI: set
DOCKER_AUTH_CONFIG
or run a
docker login
step with CI secrets before invoking your deploy script. - If you’re using prefect.yaml with steps, you can explicitly call the push step with credentials:
Copy code
steps:
    - prefect_docker.deployments.steps.build_docker_image:
        image_name: artifactory.myco.com/prefect/my-flow
        tag: "{{ version }}"
    - prefect_docker.deployments.steps.push_docker_image:
        image_name: artifactory.myco.com/prefect/my-flow
        tag: "{{ version }}"
        credentials:
          username: "{{ prefect.variables.ARTIFACTORY_USER }}"
          password: "{{ prefect.variables.ARTIFACTORY_TOKEN }}"
          registry_url: "<http://artifactory.myco.com|artifactory.myco.com>"
See the step signatures here: - prefect-docker deployment steps 3) Authenticating workers for pulls (runtime) - Docker worker: - Create a DockerRegistryCredentials block and use it in your Docker work pool’s base job template (or per-deployment). - Example to create/save a credentials block:
Copy code
from prefect_docker.credentials import DockerRegistryCredentials

creds = DockerRegistryCredentials(
    username="svc-prefect",
    password="***",  # load from env/secret manager
    registry_url="artifactory.myco.com",
)
creds.save("my-artifactory-creds", overwrite=True)
- In your Docker work pool’s base job template, reference it:
Copy code
{
  "image": "{{ image }}",
  "env": "{{ env }}",
  "command": "{{ command }}",
  "registry_credentials": "{{ block('docker-registry-credentials', 'my-artifactory-creds') }}"
}
- Docs: prefect-docker and the block class: DockerRegistryCredentials - Kubernetes worker: - Configure imagePullSecrets in the work pool base job template or on the ServiceAccount in the namespace so the cluster can pull from Artifactory. - Docs: prefect-kubernetes A couple of clarifying questions to tailor this to you: - Which worker type are you using to run the flows (Docker or Kubernetes)? - Will you run
.deploy(...)
locally or in CI? - Do you already have an Artifactory-proxied path for your base images (FROM lines), or do you rely on a registry mirror in Docker? Happy to provide a concrete example (prefect.yaml or Python) once I know your worker type and where you run deploy. Helpful links - Docker integration docs: prefect-docker - Docker worker docs: Docker worker - Kubernetes integration: prefect-kubernetes
o
@Marvin File "/uvd-workflow-orchestrator-example-prefect/deployments/local_deployment.py", line 14, in <module> my_flow.deploy( ~~~~~~~~~~~~~~^ name="my-deployment", ^^^^^^^^^^^^^^^^^^^^^ ...<2 lines>... push=False # switch to True to push to your image registry ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/asyncutils.py", line 351, in coroutine_wrapper return run_coro_as_sync(ctx_call()) File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/asyncutils.py", line 207, in run_coro_as_sync return call.result() ~~~~~~~~~~~^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/_internal/concurrency/calls.py", line 365, in result return self.future.result(timeout=timeout) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/_internal/concurrency/calls.py", line 192, in result return self.__get_result() ~~~~~~~~~~~~~~~~~^^ File "/usr/local/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/home/labor/.local/lib/python3.13/site-packages/prefect/_internal/concurrency/calls.py", line 441, in _run_async result = await coro ^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/asyncutils.py", line 188, in coroutine_wrapper return await task ^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/asyncutils.py", line 341, in ctx_call result = await async_fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/flows.py", line 1561, in deploy deployment_ids = await deploy_coro ^^^^^^^^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/asyncutils.py", line 341, in ctx_call result = await async_fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/deployments/runner.py", line 1275, in deploy image.build() ~~~~~~~~~~~^^ File "/home/labor/.local/lib/python3.13/site-packages/prefect/docker/docker_image.py", line 74, in build build_image(**build_kwargs) ~~~~~~~~~~~^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.13/contextlib.py", line 85, in inner return func(*args, **kwds) File "/home/labor/.local/lib/python3.13/site-packages/prefect/utilities/dockerutils.py", line 195, in build_image raise BuildError(event["error"]) prefect.utilities.dockerutils.BuildError: Get "https://registry-1.docker.io/v2/": read tcp 10.50.36.19429594 &gt;52.22.88.186443: read: connection reset by peer
m
thought for 158.0 seconds
Thanks for sharing the traceback — the key part is at the end: prefect.utilities.dockerutils.BuildError: Get "https://registry-1.docker.io/v2/": read tcp ...443 read: connection reset by peer What’s happening: - Prefect 3 is invoking Docker to build your image (
DockerImage.build()
). - During the build, Docker needs to pull a base image from Docker Hub (registry-1.docker.io). - The Docker daemon can’t maintain the TLS connection to Docker Hub; the peer (or something on the network path) is resetting it. This is a network/connectivity issue, not a Prefect bug. Likely causes: - Corporate proxy or firewall intercepting/terminating TLS. - Docker Hub being blocked or rate-limited for anonymous pulls. - Intermittent network issues or DNS problems. - TLS inspection or MITM box that Docker doesn’t trust. - Docker daemon not configured for proxies or mirrors. Quick checks 1) Can the host reach Docker Hub?
Copy code
curl -v https://registry-1.docker.io/v2/
You should see a 401 Unauthorized (that’s expected) — if it fails, it’s a network/proxy issue. 2) Can Docker pull any image?
Copy code
docker pull alpine:3.20
docker pull hello-world
3) Are you logged in to Docker Hub (to avoid rate limits)?
Copy code
docker login
Fixes and workarounds - If you’re behind a proxy or corporate firewall: - Configure the Docker daemon to use your HTTP/HTTPS proxy and set NO_PROXY for internal addresses: - Linux/systemd: Configure Docker daemon proxy (systemd) - Build-time proxy for apt/pip in Dockerfile: Build-time proxies - Allowlist these hosts/ports if needed: registry-1.docker.io:443, auth.docker.io:443. - Use a Docker registry mirror (pull-through cache) to avoid hitting Docker Hub directly: - Configure
"registry-mirrors"
in `/etc/docker/daemon.json`: Docker daemon configuration (registry-mirrors) - Set up or use an existing mirror: Use a registry mirror / pull-through cache - Pre-pull the base image to prime the Docker cache: - Example:
Copy code
docker pull prefecthq/prefect:3-python3.12
Replace with the exact tag you use. Once it’s local,
docker build
won’t need to reach Docker Hub for the base layer unless you force pulls. - Use a custom Dockerfile that points at an internal/mirrored base image instead of Docker Hub: - Dockerfile example:
Copy code
FROM registry.internal.example.com/prefect/prefect:3-python3.12
# your additional steps...
- Then tell Prefect to use this file when deploying:
Copy code
from prefect import flow
from prefect.docker import DockerImage

image = DockerImage(
    name="registry.internal.example.com/your-team/my-flow",
    tag="2025-10-07",
    dockerfile="Dockerfile",
    # Optionally help Docker during build if needed:
    # build_kwargs={"network_mode": "host"}
)

my_flow.deploy(
    name="my-deployment",
    image=image,
    push=False,  # still builds locally, doesn’t push
)
- If you need to pass proxy env vars during build steps (apt/pip), you can do:
Copy code
image = DockerImage(
    name="registry.internal.example.com/your-team/my-flow",
    dockerfile="Dockerfile",
    build_kwargs={
        "buildargs": {
            "HTTP_PROXY": "http://proxy:3128",
            "HTTPS_PROXY": "http://proxy:3128",
            "NO_PROXY": "localhost,127.0.0.1,.internal.example.com"
        }
    },
)
Note: build args won’t affect pulling the base image, but they help in RUN steps.
- If a middlebox is doing TLS inspection, ensure trusted certs are installed for the Docker daemon or avoid inspection for Docker Hub. For internal registries with custom CAs, add them to Docker’s trust store or configure as an insecure registry if appropriate. - Optional: try
network_mode=host
for the build as a test:
Copy code
build_kwargs={"network_mode": "host"}
- Check Docker Hub status if this might be transient: Docker Hub status Notes about Prefect 3 - You’re on Prefect 3.x, and using
my_flow.deploy(...)
which can build images via
prefect.docker.DockerImage
. The failure is occurring before Prefect can finish the deployment because Docker cannot pull the base image. - If you don’t need a container image for this deployment, you can temporarily omit the
image=
argument to bypass the Docker build and run with a non-container infrastructure until network issues are resolved. A few questions to help me tailor the fix: - Are you behind a corporate proxy/VPN or on a locked-down network? - Can you successfully run
docker pull alpine:3.20
from this same host? - Are you using Prefect’s auto-generated Dockerfile or a custom one? - Which OS and Docker engine version are you running? If you share those details, I can suggest the most direct config changes for your environment.
o
@Marvin I think you lost context; the image is available at local-docker_client_kas/prefect/numpy-mul
m
thought for 70.8 seconds
Got it — thanks for the clarification. If your image already exists as
local-docker_client_kas/prefect/numpy-mul
, Prefect tried to build anyway because
Flow.deploy()
defaults to
build=True
, which triggers a Docker build (and pulls a base image from Docker Hub). Use the image directly and skip the build: - Set
image
to your existing image name - Set
build=False
(and keep
push=False
) Example:
Copy code
from prefect import flow

@flow
def my_flow():
    ...

my_flow.deploy(
    name="my-deployment",
    work_pool_name="your-docker-work-pool",
    image="local-docker_client_kas/prefect/numpy-mul:latest",  # or your specific tag
    build=False,  # <- prevents Prefect from building (no Docker Hub pull)
    push=False,   # <- prevents push
)
Notes: - Make sure the Docker worker that will run this deployment can pull/use that image name. If the worker runs on the same machine and the image is local, great. If
local-docker_client_kas
is a private registry hostname, ensure the worker host is logged in (
docker login <registry>
) and trusts the registry (add CA or mark as insecure if needed). - If
local-docker_client_kas
is a Docker context name (not a registry), Prefect will still use the worker’s Docker daemon, not your local context. In that case, either run the worker against that same daemon or push the image to a registry accessible to the worker. If you can share: - whether
local-docker_client_kas
is a registry or a Docker context - where your Docker worker is running (same host vs remote) I can tailor the exact configuration steps.
o
@Marvin Worker 'DockerWorker 04aec57c-a0c0-4db8-b971-78ba0ba73d1f' submitting flow run '9cf5e4bf-4bb4-4269-820c-abc0767e3043' 084808 AM prefect.flow_runs.worker Failed to submit flow run '9cf5e4bf-4bb4-4269-820c-abc0767e3043' to infrastructure. Traceback (most recent call last): File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 275, in _raise_for_status response.raise_for_status() File "/usr/local/lib/python3.12/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.51/containers/30dbc4cd8495aa397b89b866aa751031e8a5304b0111911054c23134437ce577/start The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.12/site-packages/prefect/workers/base.py", line 1281, in _submit_run_and_capture_errors result = await self.run( ^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 457, in run container, created_event = await run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 233, in run_sync_in_worker_thread result = await anyio.to_thread.run_sync( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2476, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 967, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 243, in call_with_mark return call() ^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 753, in _create_and_start_container container.start() File "/usr/local/lib/python3.12/site-packages/docker/models/containers.py", line 420, in start return self.client.api.start(self.id, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/docker/utils/decorators.py", line 19, in wrapped return f(self, resource_id, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/docker/api/container.py", line 1136, in start self._raise_for_status(res) File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 277, in _raise_for_status raise create_api_error_from_http_exception(e) from e ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/docker/errors.py", line 39, in create_api_error_from_http_exception raise cls(e, response=response, explanation=explanation) from e docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.51/containers/30dbc4cd8495aa397b89b866aa751031e8a5304b0111911054c23134437ce577/start: Bad Request ("failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: exec: "prefect": executable file not found in $PATH: unknown") 084808 AM prefect.flow_runs.worker Reported flow run '9cf5e4bf-4bb4-4269-820c-abc0767e3043' as crashed: Flow run could not be submitted to infrastructure: 
APIError(HTTPError('400 Client Error: Bad Request for url: http+docker://localhost/v1.51/containers/30dbc4cd8495aa397b89b866aa751031e8a5304b0111911054c23134437ce577/start')) 084808 AM prefect.flow_runs.worker
m
thought for 206.1 seconds
Thanks — that error is clear now: exec: "prefect": executable file not found in $PATH What it means: - The Docker worker starts your container by running the
prefect
CLI inside the image. - Your image
local-docker_client_kas/prefect/numpy-mul
doesn’t have the
prefect
CLI available (or an ENTRYPOINT is interfering). Quick verification - From the worker host, try:
Copy code
docker run --rm local-docker_client_kas/prefect/numpy-mul:latest prefect version
docker run --rm local-docker_client_kas/prefect/numpy-mul:latest which prefect
If these fail, the image doesn’t include Prefect. Two common fixes 1) Base your image on Prefect (recommended) - Rebuild your image FROM a Prefect base, or extend your existing image and install Prefect:
Copy code
# Option A: start from Prefect base
FROM prefecthq/prefect:3-python3.12
# add your code/deps here...

# Option B: extend your image and add Prefect
FROM local-docker_client_kas/prefect/numpy-mul:latest
RUN python -m pip install --no-cache-dir "prefect>=3,<4"
- Use that tag in your deployment, with build/push disabled:
Copy code
my_flow.deploy(
  name="my-deployment",
  work_pool_name="your-docker-work-pool",
  image="local-docker_client_kas/prefect/numpy-mul:with-prefect",
  build=False,
  push=False,
)
2) If your image has a custom ENTRYPOINT - Prefect sets the container command to
prefect ...
. If your image’s ENTRYPOINT wraps/overrides the command, you can clear it at runtime:
Copy code
my_flow.deploy(
  name="my-deployment",
  work_pool_name="your-docker-work-pool",
  image="local-docker_client_kas/prefect/numpy-mul:latest",
  build=False,
  push=False,
  job_variables={
    "container_create_kwargs": {"entrypoint": []}
  },
)
This only helps if
prefect
is actually installed in the image; otherwise you’ll see the same error. Also confirm - The worker is running on the same Docker host that has the image, or the image name is pullable from that host. You can enforce no pulls:
Copy code
job_variables={"image_pull_policy": "Never"}
- If
local-docker_client_kas
is a private registry, ensure the worker host is logged in:
docker login local-docker_client_kas
. If you can share: - Does your image currently include Prefect? - Does it set an ENTRYPOINT? - Is the worker on the same machine as the image? I’ll suggest the exact minimal change for your setup.
o
@Marvin Do I always need Prefect in the image?
m
thought for 24.4 seconds
Short answer: for Docker (and other containerized) workers, yes — the image must include Prefect. Why: the Docker worker starts your flow by executing the
prefect
CLI inside the container. If
prefect
isn’t on PATH in the image, you’ll get the “executable file not found” error you saw. Ways to satisfy this: - Base from a Prefect image:
Copy code
FROM prefecthq/prefect:3-python3.12
# add your code/deps...
- Or install Prefect into your existing image:
Copy code
FROM local-docker_client_kas/prefect/numpy-mul:latest
RUN python -m pip install --no-cache-dir "prefect>=3,<4"
- Verify:
Copy code
docker run --rm your-image prefect version
Only exception: if you don’t want Prefect in a container at all, use a non-container worker (e.g., Process worker) so Prefect runs from the host environment instead of inside an image. If your image has a custom ENTRYPOINT, make sure it doesn’t block the worker’s command, or clear it via deployment `job_variables`:
Copy code
job_variables={"container_create_kwargs": {"entrypoint": []}}
(This still requires Prefect to be installed in the image.)
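For the non-container route mentioned above, a minimal sketch: serving the flow runs it as subprocesses of the host environment, so no image (and no Prefect-in-the-image requirement) is involved. The deployment name is just an example:
Copy code
from prefect import flow

@flow(log_prints=True)
def my_flow():
    print("running outside of a container")

if __name__ == "__main__":
    # Creates a deployment and polls for scheduled runs in this process;
    # each flow run executes as a subprocess on this host.
    my_flow.serve(name="my-served-deployment")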
o
@Marvin OK, for control flow I understand that, but is that necessary for Docker subprocess calls? A container that is called with an entrypoint?
m
thought for 63.7 seconds
Great question — there are two different “containers” to think about: 1) The orchestration container (the one the Docker worker starts to execute your Prefect flow) - This container MUST have the Prefect CLI available because the worker runs the flow by invoking
prefect ...
inside it. - ENTRYPOINT notes: - Prefer not to override ENTRYPOINT. Let Prefect set the command. - If you do have a custom ENTRYPOINT, make sure it ultimately execs the command passed by the worker (which will be the
prefect ...
invocation). A safe pattern:
Copy code
# Dockerfile
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Copy code
# entrypoint.sh
#!/bin/sh
set -e
# your init steps here
exec "$@"  # <- runs the CMD passed by the worker, e.g. the Prefect command
- If your image’s ENTRYPOINT conflicts, you can clear it at deploy time:
Copy code
job_variables={"container_create_kwargs": {"entrypoint": []}}
(You still need Prefect installed in the image.) 2) Workload containers you launch from your flow (docker “subprocess” calls) - These do NOT need Prefect. They’re just containers you start from your flow code (e.g., with the Docker SDK or subprocess). - Your flow still runs inside the orchestration container (which needs Prefect), but the child containers can be any image with its own ENTRYPOINT/CMD. Example: run a child container from a Prefect flow running on a Docker worker
Copy code
from prefect import flow, task
import docker

@task
def run_child_container():
    client = docker.from_env()
    # Runs your image with its own entrypoint/CMD
    logs = client.containers.run(
        "local-docker_client_kas/prefect/numpy-mul:latest",
        detach=False,
        remove=True,
        environment={"FOO": "bar"},
    )
    print(logs.decode() if isinstance(logs, bytes) else logs)

@flow
def my_flow():
    run_child_container()

# Deploy using a Prefect-enabled image for the ORCHESTRATION container
my_flow.deploy(
    name="my-deployment",
    work_pool_name="your-docker-work-pool",
    image="prefecthq/prefect:3-python3.12",
    build=False,
    push=False,
    job_variables={
        # allow the flow to talk to the host Docker daemon
        "volumes": ["/var/run/docker.sock:/var/run/docker.sock"],
        # optionally set DOCKER_HOST if needed
        "env": {"DOCKER_HOST": "unix:///var/run/docker.sock"},
    },
)
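The same child container could also be launched with the docker CLI via subprocess, as mentioned above, provided the CLI is available in the orchestration image and the daemon socket is mounted. A minimal sketch reusing the image name from the example:
Copy code
import subprocess

from prefect import task

@task
def run_child_container_cli():
    # Requires the docker CLI inside the orchestration image and access to the Docker daemon
    result = subprocess.run(
        ["docker", "run", "--rm", "local-docker_client_kas/prefect/numpy-mul:latest"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)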
Key takeaways - Orchestration container (the one the worker starts): Prefect required. - Child containers you start from your flow: Prefect not required; your own ENTRYPOINT is fine. - If you have a custom ENTRYPOINT in the orchestration image, ensure it execs the CMD the worker supplies (or clear it via
container_create_kwargs.entrypoint=[]
).
o
@Marvin What does that mean?
Worker 'DockerWorker 04aec57c-a0c0-4db8-b971-78ba0ba73d1f' submitting flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9'  09:14:43 AM  prefect.flow_runs.worker
Completed submission of flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9'  09:14:43 AM  prefect.flow_runs.worker
Reported flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9' as crashed: Flow run infrastructure exited with non-zero status code 1.  09:14:45 AM  prefect.flow_runs.worker
Worker 'DockerWorker 04aec57c-a0c0-4db8-b971-78ba0ba73d1f' submitting flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9'  09:15:57 AM  prefect.flow_runs.worker
Completed submission of flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9'  09:15:58 AM  prefect.flow_runs.worker
Reported flow run 'f07c9471-72a7-45f1-a4d3-e62b41c34dc9' as crashed: Flow run infrastructure exited with non-zero status code 1.  09:16:00 AM  prefect.flow_runs.worker
m
thought for 78.9 seconds
Those messages mean the Docker worker successfully started your container for the flow run, but the process inside the container exited with a non-zero status (1). Prefect marks the flow run as “crashed” when the infrastructure (the container) exits with an error before or during engine startup. In short: - “Completed submission” = container created and started. - “Flow run infrastructure exited with non-zero status code 1” = the container’s main process failed quickly. Common causes - The image runs a custom ENTRYPOINT that doesn’t exec the command Prefect provides, so the Prefect command never runs or gets replaced. - The image doesn’t have Prefect installed or PATH isn’t correct. - The image can’t start due to import errors, missing code/dependencies, or Python/OS mismatches. - Prefect CLI inside the container fails to start (e.g., can’t reach API, crashes early). - Permissions/exec issues in entrypoint scripts (e.g., not executable, bad shebang). How to see the actual error 1) Check the container logs on the worker host: - Find the container:
Copy code
docker ps -a --no-trunc --filter "status=exited" --format "{{.ID}}\t{{.Names}}\t{{.Status}}"
- Inspect and view logs:
Copy code
docker inspect <container_id> --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
docker logs <container_id>
These logs will show why it exited with code 1. 2) Ensure Prefect logs are streamed: - Deployment/job variables can set this; it defaults to True for Docker worker:
Copy code
job_variables={"stream_output": True}
If the container dies before Prefect starts, you may still only see infra (docker) logs, so the host
docker logs
is the best source. Quick sanity checks - Verify Prefect is in the image and runnable:
Copy code
docker run --rm YOUR_IMAGE prefect version
docker run --rm --entrypoint '' YOUR_IMAGE prefect version   # bypass ENTRYPOINT if needed
- Verify your ENTRYPOINT script (if any) ends with:
Copy code
exec "$@"
so that Prefect’s command provided by the worker actually runs. - See exactly what the worker is trying to run by starting it with debug logs:
Copy code
PREFECT_LOGGING_LEVEL=DEBUG prefect worker start -p your-docker-work-pool
This will show the container create/start parameters. If you share the output of: -
docker inspect <container_id> --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
-
docker logs <container_id>
I can pinpoint the exact fix (ENTRYPOINT adjustment, install Prefect, dependency issue, etc.).
o
@Marvin what is the standard entrypoint? ENTRYPOINT [ "" ] ?
m
thought for 39.8 seconds
Short answer: there is no “standard” ENTRYPOINT you must set for Prefect. The safest default is to have no ENTRYPOINT at all and let the Docker worker provide the command it wants to run. Clarifications - Prefect’s Docker worker sets the container command to a
prefect ...
invocation at runtime. It does not require an ENTRYPOINT. - If you do have an ENTRYPOINT, it must pass through the command Prefect supplies. Use an exec pattern. Good patterns - No ENTRYPOINT in your Dockerfile:
Copy code
# No ENTRYPOINT; Prefect supplies the command at runtime
- Pass-through ENTRYPOINT:
Copy code
# Dockerfile
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Copy code
# entrypoint.sh
#!/bin/sh
set -e
# optional init work...
exec "$@"   # critical: runs the Prefect command that the worker passes
Resetting/clearing ENTRYPOINT at deploy time - If your image has an ENTRYPOINT you want to ignore for Prefect runs, clear it in your deployment:
Copy code
my_flow.deploy(
    name="my-deployment",
    work_pool_name="your-docker-work-pool",
    image="your/image:tag",
    build=False,
    push=False,
    job_variables={
      "container_create_kwargs": {
        "entrypoint": []   # clears ENTRYPOINT for this run
      }
    },
  )
About ENTRYPOINT [""] - In a Dockerfile,
ENTRYPOINT [""]
is not valid and will fail. - At runtime, tools may use an empty value to clear the entrypoint; with Prefect we recommend the explicit array form
entrypoint: []
via
container_create_kwargs
. Rule of thumb - Prefer no ENTRYPOINT or a pass-through one that ends with
exec "$@"
. - Ensure the image has the
prefect
CLI installed; otherwise the worker’s command will fail even with a perfect ENTRYPOINT. Verify with:
Copy code
docker run --rm your/image:tag prefect version
o
@Marvin I do not get it. At first you tell me there is a wrong entrypoint, then you tell me I do not need to define one in my Dockerfile. What is now correct?
m
thought for 27.6 seconds
You’re right to call that out — let me clarify precisely. Two separate requirements are at play: 1) The image must include the Prefect CLI - The Docker worker starts your flow by running a Prefect command inside the container. - If Prefect isn’t installed in the image, it fails with “exec: 'prefect': executable file not found in $PATH” — this happens regardless of ENTRYPOINT. - Test:
docker run --rm your-image prefect version
2) ENTRYPOINT is optional, but if present it must be compatible - You do not need to set an ENTRYPOINT in your Dockerfile. The simplest and most reliable setup is: no ENTRYPOINT, Prefect installed, and let the worker provide the command. - If you do have an ENTRYPOINT, it must pass through the command Prefect supplies. In practice, your entrypoint script should end with:
Copy code
exec "$@"
so the Prefect command (the CMD) actually runs. - If your ENTRYPOINT ignores or overrides the CMD, the container can exit with code 1 before Prefect starts. Putting it together: - Correct statement: You don’t need to define an ENTRYPOINT at all. If you choose to define one, it has to be a pass-through entrypoint that ultimately execs the CMD the worker provides. - The earlier “wrong entrypoint” comment means: if you have an ENTRYPOINT and it doesn’t exec "$@", it can cause the “exited with status 1” behavior. That’s separate from the “prefect not found” error, which is solved by installing Prefect in the image. Helpful knobs during deployment: - To ignore a problematic entrypoint at runtime:
Copy code
job_variables={"container_create_kwargs": {"entrypoint": []}}
- To verify what the worker is doing, start it with debug logs:
Copy code
PREFECT_LOGGING_LEVEL=DEBUG prefect worker start -p your-docker-work-pool
If you share your current Dockerfile’s ENTRYPOINT (if any) and the result of
docker run --rm your-image prefect version
, I can give you the exact minimal change.
o
@Marvin the container crashes Worker 'DockerWorker 76f5b556-2528-43a0-ba2f-efd005f28ede' submitting flow run '7b52d87f-a8f6-4684-9fc3-70ea5523b893' prefect.flow_runs.worker Completed submission of flow run '7b52d87f-a8f6-4684-9fc3-70ea5523b893' prefect.flow_runs.worker Reported flow run '7b52d87f-a8f6-4684-9fc3-70ea5523b893' as crashed: Flow run infrastructure exited with non-zero status code 1. prefect.flow_runs.worker docker logs granite-parakeet 093243.161 | DEBUG | prefect.profiles - Using profile 'ephemeral' 093243.371 | DEBUG | prefect.runner - Starting runner... 093243.387 | DEBUG | prefect.events.clients - Connecting to ws://prefect-server:4200/api/events/out 093243.388 | DEBUG | prefect.events.clients - Reconnecting... Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/prefect/cli/_utilities.py", line 44, in wrapper return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/cli/_types.py", line 156, in sync_fn return asyncio.run(async_fn(*args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/cli/flow_run.py", line 412, in execute await runner.execute_flow_run(id) File "/usr/local/lib/python3.11/site-packages/prefect/runner/runner.py", line 601, in execute_flow_run async with context: File "/usr/local/lib/python3.11/site-packages/prefect/runner/runner.py", line 1550, in aenter self._cancelling_observer = await self._exit_stack.enter_async_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/contextlib.py", line 650, in enter_async_context result = await _enter(cm) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/runner/_observers.py", line 157, in aenter self._events_subscriber = await self._exit_stack.enter_async_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/contextlib.py", line 650, in enter_async_context result = await _enter(cm) ^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/events/clients.py", line 555, in aenter await self._reconnect() File "/usr/local/lib/python3.11/site-packages/prefect/events/clients.py", line 566, in _reconnect self._websocket = await self._connect.__aenter__() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/websockets/asyncio/client.py", line 587, in aenter return await self ^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/_internal/websockets.py", line 135, in _proxy_connect proto = await self.__await_impl__() ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/websockets/asyncio/client.py", line 541, in __await_impl__ self.connection = await self.create_connection() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/websockets/asyncio/client.py", line 467, in create_connection _, connection = await loop.create_connection(factory, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1046, in create_connection infos = await self._ensure_resolved( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1420, in _ensure_resolved return await loop.getaddrinfo(host, port, family=family, type=type, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/asyncio/base_events.py", line 868, in getaddrinfo return await self.run_in_executor( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/socket.py", line 974, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ socket.gaierror: [Errno -2] Name or service not known An exception occurred.
m
thought for 84.9 seconds
Thanks for the logs — they explain the crash. What it means - Inside your flow-run container, Prefect is trying to connect to the API/events at ws://prefect-server:4200/api/events/out - The error socket.gaierror: [Errno -2] Name or service not known means the hostname prefect-server cannot be resolved from that container. - So the container starts fine, but Prefect crashes early because it cannot reach your Prefect Server. Why this happens - The API URL in the container is set to use the hostname prefect-server. That name is only resolvable if the container is on the same Docker network as the server (typically a user-defined compose network). Your flow-run container is not on that network, so DNS fails. - Alternatively, your worker is configured with PREFECT_API_URL=http://prefect-server:4200/api and passes that into the job, but the job cannot resolve that name. Fix options (pick one) 1) Point PREFECT_API_URL at an address the job container can reach - Use the host’s IP/hostname or host.docker.internal (on Docker Desktop) with the server’s published port:
Copy code
# Start your worker with a resolvable URL
PREFECT_API_URL=http://<server-host-or-ip>:4200/api prefect worker start -p your-docker-work-pool
- Or set it per deployment:
Copy code
my_flow.deploy(
  ...,
  job_variables={
    "env": {"PREFECT_API_URL": "http://<server-host-or-ip>:4200/api"}
  },
)
2) Put the job container on the same Docker network as the Prefect Server - Find the server container’s network(s):
Copy code
docker inspect <prefect-server-container-name> --format '{{json .NetworkSettings.Networks}}' | jq .
- Add that network to your deployment so the job can resolve prefect-server:
Copy code
my_flow.deploy(
  ...,
  job_variables={
    "networks": ["<the-server-network-name>"]
  },
)
3) If you run the worker in the same compose stack/network as the server - You can keep using the hostname prefect-server, but ensure your worker submits jobs that join that same network (see option 2). Quick tests - From the worker host, check that the API is reachable by the address you plan to use:
Copy code
curl -sS http://<server-host-or-ip>:4200/api/health
- From a test container on the intended network:
Copy code
docker run --rm --network <the-server-network-name> curlimages/curl -sS http://prefect-server:4200/api/health
Once PREFECT_API_URL resolves from inside the flow-run container, the “infrastructure exited with non-zero status code 1” should go away. If you can share: - How your Prefect Server is running (docker compose? container name/network?) - How you start the worker (any PREFECT_API_URL set?) I can give the exact command and network name to use.
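In the meantime, putting options 1 and 2 together, a minimal sketch; the network name and API URL are placeholders for your compose network and server address:
Copy code
my_flow.deploy(
    name="my-deployment",
    work_pool_name="your-docker-work-pool",
    image="local-docker_client_kas/prefect/numpy-mul:latest",
    build=False,
    push=False,
    job_variables={
        # attach the flow-run container to the server's compose network (placeholder name)
        "networks": ["prefect-server-network"],
        # an API URL that resolves on that network
        "env": {"PREFECT_API_URL": "http://prefect-server:4200/api"},
    },
)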
o
@Marvin so this looks to me that the code was not baked into the container? Worker 'DockerWorker 76f5b556-2528-43a0-ba2f-efd005f28ede' submitting flow run '8a7769a6-0e64-4ef9-9f6c-b7cb244d561f' 115023 AM prefect.flow_runs.worker Completed submission of flow run '8a7769a6-0e64-4ef9-9f6c-b7cb244d561f' 115024 AM prefect.flow_runs.worker Opening process... 115025 AM prefect.flow_runs.runner Downloading flow code from storage at '.' 115026 AM prefect.flow_runs Importing flow code from 'deployments/local_deployment_image_average.py:cluster_image_and_process' 115026 AM prefect.flow_runs Unexpected exception encountered when trying to load flow Traceback (most recent call last): File "<frozen importlib._bootstrap_external>", line 936, in exec_module File "<frozen importlib._bootstrap_external>", line 1073, in get_code File "<frozen importlib._bootstrap_external>", line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/app/deployments/local_deployment_image_average.py' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/prefect/engine.py", line 112, in <module> flow: "Flow[..., Any]" = load_flow(flow_run) ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/flow_engine.py", line 150, in load_flow flow = run_coro_as_sync( ^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 207, in run_coro_as_sync return call.result() ^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 365, in result return self.future.result(timeout=timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 192, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 441, in _run_async result = await coro ^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 188, in coroutine_wrapper return await task ^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/client/utilities.py", line 69, in wrapper return await func(client, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/flows.py", line 2826, in load_flow_from_flow_run flow = await run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 233, in run_sync_in_worker_thread result = await anyio.to_thread.run_sync( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2485, in run_sync_in_worker_thread return await future ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 976, in run result = context.run(func, *args) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/asyncutils.py", line 243, in call_with_mark return call() ^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/flows.py", line 2503, in load_flow_from_entrypoint 
flow: Flow[P, Any] = import_object(entrypoint) # pyright: ignore[reportRedeclaration] ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 157, in import_object module = load_script_as_module(script_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/prefect/utilities/importtools.py", line 122, in load_script_as_module raise ScriptError(user_exc=exc, path=path) from exc prefect.exceptions.ScriptError: Script at 'deployments/local_deployment_image_average.py' encountered an exception: FileNotFoundError(2, 'No such file or directory') 115026 AM prefect.flow_runs Process for flow run 'military-capybara' exited with status code: 1 115027 AM prefect.flow_runs.runner Reported flow run '8a7769a6-0e64-4ef9-9f6c-b7cb244d561f' as crashed: Flow run process exited with non-zero status code 1. 115027 AM prefect.flow_runs.runner Loading flow to check for on_crashed hooks
m
thought for 171.2 seconds
You’re reading it correctly: Prefect is trying to load your flow code from the container filesystem and can’t find it. That means your code was not available at the expected path inside the container. Key lines - Downloading flow code from storage at '.' - Importing flow code from 'deployments/local_deployment_image_average.py:cluster_image_and_process' - FileNotFoundError: No such file or directory: '/app/deployments/local_deployment_image_average.py' What this means in Prefect 3.x - By default, when you deploy from a local directory without specifying remote storage, Prefect uses the local path '.' as storage metadata. At runtime, the Docker worker needs to make that code available inside the container so it can import the entrypoint. - If the image doesn’t already contain your project files at the same relative path (e.g., /app/deployments/local_deployment_image_average.py) and you didn’t mount them as a volume, the file won’t be there and you get this error. You have three good options. Pick one. Option A: Bake the code into the image - In your Dockerfile, copy your project into the image at /app and set WORKDIR:
Copy code
FROM prefecthq/prefect:3-python3.12

WORKDIR /app
COPY . /app

# install your deps if needed
# RUN pip install -r requirements.txt
- Build/tag/push or make available locally, then deploy using that image and disable build/push in Prefect:
Copy code
my_flow.deploy(
  name="image-average",
  work_pool_name="your-docker-work-pool",
  image="your-registry/your-image:with-code",
  build=False,
  push=False,
)
- Ensure your deployment’s entrypoint matches the path inside the image, e.g.
deployments/local_deployment_image_average.py:cluster_image_and_process
. Option B: Mount your code into the container at runtime - Keep your existing image, and mount the project directory into /app with job_variables.volumes:
Copy code
my_flow.deploy(
  name="image-average",
  work_pool_name="your-docker-work-pool",
  image="your-registry/your-image:tag",
  build=False,
  push=False,
  job_variables={
    "volumes": [
      "/abs/path/on/worker/host/project:/app"  # must exist on the worker host
    ],
    # optional: set working dir if needed
    "container_create_kwargs": {"working_dir": "/app"}
  },
)
- The path on the left must exist on the worker’s machine (not your laptop if the worker is remote). Option C: Use Flow.from_source with a remote source (recommended for portability) - Put your code in a Git repo or package and let Prefect pull it at runtime:
Copy code
from prefect import flow

flow = flow.from_source(
    source="<https://github.com/your-org/your-repo>",
    entrypoint="deployments/local_deployment_image_average.py:cluster_image_and_process",
)

flow.deploy(
  name="image-average",
  work_pool_name="your-docker-work-pool",
  image="prefecthq/prefect:3-python3.12",
  build=False,
  push=False,
)
- You can also point
source
to a local path that’s accessible to the worker, but remote git is best for distributed workers. Sanity checks - Verify where Prefect expects the file: it uses the entrypoint path relative to the working directory inside the container. With WORKDIR /app and COPY . /app, the file should be at /app/deployments/local_deployment_image_average.py. - Confirm the file exists by running the image interactively:
Copy code
docker run --rm -it your-image:with-code ls -l /app/deployments
Given your log, the quickest fix is Option A (COPY your code) or Option B (mount your project) depending on whether you want immutable images or live-mount for development.
o
@Marvin why do i not see any prints that are in my flow when deployed via docker? Finished in state Completed() 125958 PM process_kernel-cdf prefect.task_runs Finished in state Completed() 125958 PM process_kernel-a35 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-df4 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-ea5 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-aab prefect.task_runs Finished in state Completed() 125958 PM process_kernel-f80 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-3c4 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-ba2 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-72d prefect.task_runs Finished in state Completed() 125958 PM process_kernel-b60 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-686 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-be3 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-040 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-c7c prefect.task_runs Finished in state Completed() 125958 PM process_kernel-178 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-345 prefect.task_runs Finished in state Completed() 125958 PM process_kernel-e26 prefect.task_runs
m
thought for 29.0 seconds
Short answer: print() isn’t logged by default to the Prefect UI unless you enable it or stream container output. Do this to see your prints: - Capture prints as Prefect logs (recommended) - Add
log_prints=True
on your flow and any tasks that call `print`:
Copy code
from prefect import flow, task

@task(log_prints=True)
def my_task():
    print("hello from task")

@flow(log_prints=True)
def my_flow():
    print("hello from flow")
    my_task()
- Or set it globally via config/env:
Copy code
# in code (a minimal sketch: temporarily enables print logging for runs started inside the context)
from prefect.settings import PREFECT_LOGGING_LOG_PRINTS, temporary_settings

with temporary_settings({PREFECT_LOGGING_LOG_PRINTS: True}):
    my_flow()
# or set the env var before starting the worker/run: PREFECT_LOGGING_LOG_PRINTS=true
- Ensure the Docker worker streams container stdout/stderr - It’s on by default, but if you’ve overridden it, re-enable:
Copy code
my_flow.deploy(
    ...,
    job_variables={"stream_output": True}
  )
- Without streaming, raw stdout won’t be forwarded to the UI. - Avoid output buffering - Python may buffer prints in containers. Either: - Use
print(..., flush=True)
, or - Set
PYTHONUNBUFFERED=1
in the container env:
Copy code
my_flow.deploy(
      ...,
      job_variables={"env": {"PYTHONUNBUFFERED": "1"}}
    )
What to expect: - With
log_prints=True
, print output appears as Prefect logs under the corresponding flow/task runs. - With only
stream_output=True
(and no
log_prints
), you may see raw container output attached to the flow run logs, but not attributed to specific tasks. - If you see “Finished in state Completed()” but still no prints, check the container logs directly to confirm the prints are happening:
Copy code
docker logs <container_id>
If you share how your flow/tasks are defined and your current deployment’s
job_variables
, I can suggest the minimal change to surface your prints in the UI.
o
@Marvin like this?
cluster_image_and_process.deploy(
    name="my-seventh-docker-deployment",
    work_pool_name="docker-worker-pool-in-compose-on-brekasdev06",
    image="infra-repos.ohb-system.de:443/local-docker_client_kas/prefect/image_average:0.0.3",  # <- your Artifactory host/repo
    job_variables={
        # these are environment variables you can submit to each ephemeral container
        "env": {"PREFECT_API_URL": "http://localhost:4200/api", "PYTHONUNBUFFERED": "1", "PREFECT_LOGGING_LOG_PRINTS": True},
        "stream_output": True,
    },
    build=False,
    push=False,
)
m
thought for 20.9 seconds
You’re very close. A few important tweaks: - Image name: do not include a URL scheme or Slack link formatting. Use a plain Docker image reference like: -
infra-repos.ohb-system.de:443/local-docker_client_kas/prefect/image_average:0.0.3
- PREFECT_API_URL: don’t use
localhost
inside the container unless your server is actually on the same container namespace. Point to something the container can reach: - If your server is a container on a compose network:
http://prefect-server:4200/api
and add that network to the job - Or use the host IP/hostname:
http://<host-ip>:4200/api
- On Docker Desktop, you can use
http://host.docker.internal:4200/api
- Env values must be strings. Use
"true"
for booleans. - Keep
stream_output=True
(it’s default, but fine to be explicit). - Optionally also set
log_prints=True
on your flow/tasks for attribution. Corrected example:
Copy code
cluster_image_and_process.deploy(
    name="my-seventh-docker-deployment",
    work_pool_name="docker-worker-pool-in-compose-on-brekasdev06",
    image="<http://infra-repos.ohb-system.de:443/local-docker_client_kas/prefect/image_average:0.0.3|infra-repos.ohb-system.de:443/local-docker_client_kas/prefect/image_average:0.0.3>",
    build=False,
    push=False,
    job_variables={
        "env": {
            "PREFECT_API_URL": "<http://prefect-server:4200/api>",   # or http://<host-ip>:4200/api
            "PYTHONUNBUFFERED": "1",
            "PREFECT_LOGGING_LOG_PRINTS": "true"
        },
        "stream_output": True,
        # If your Prefect Server runs on a compose network, add it here:
        # "networks": ["<server_network_name>"]
    },
)
And in code (recommended for clarity):
Copy code
from prefect import flow, task

@task(log_prints=True)
def process_kernel():
    print("hello from task")

@flow(log_prints=True)
def cluster_image_and_process():
    print("hello from flow")
    process_kernel()
If prints still don’t show: - Confirm the container can reach the API (inside container):
curl -sS $PREFECT_API_URL/health
- Check the container logs directly on the worker host:
docker logs <container_id>
to confirm the prints are emitted - Start the worker with debug logs to verify env was passed: -
PREFECT_LOGGING_LEVEL=DEBUG prefect worker start -p docker-worker-pool-in-compose-on-brekasdev06
If you share how your Prefect Server is hosted (compose service name/network) I can suggest the exact
PREFECT_API_URL
and
networks
value.