@Marvin Show all properties and documentation...
# ask-marvin
t
@Marvin Show all properties and documentation for defining deployments using YAML.
m
Error: No module named 'prefect.projects'
thought for 209.2 seconds
t
@Marvin Show all properties and documentation for defining deployments using YAML.
m
Error: No module named 'prefect.projects'
thought for 251.2 seconds
Quick check: are you on Prefect 3.x and talking about the `prefect.yaml` file at your project root? I'll assume yes. If not, say so and I'll recalibrate. Here's a practical reference for defining deployments in YAML in Prefect 3.x, including the common keys, how they're used, and examples. Prefect 3 uses `prefect deploy` to read this file and create deployments. The older Prefect 2.x `prefect deployment build` and `Deployment.build_from_flow()` are gone.

Top-level structure of prefect.yaml
- name: Optional project name (informational)
- variables: Optional key/value map for templating in steps (e.g., tokens, image tags)
- build: Optional list of "build steps" run at deploy time (e.g., docker build, pip install)
- pull: Optional list of "pull steps" run by the worker before executing your flow (e.g., git clone)
- deployments: Required list of deployment definitions

Deployment object (each item under deployments)
- name (required): The deployment name
- entrypoint (required): Path to the flow entrypoint in the form ./path/to/file.py:flow_func_name
- description (optional): Free text
- tags (optional): List of strings
- parameters (optional): Dict of default flow parameters
- work_pool (required):
  - name (required): Name of the work pool
  - work_queue_name (optional): Target a specific work queue in the pool
  - job_variables (optional): Dict of infrastructure settings for the worker type
    - Process worker examples: env, working_dir, command
    - Docker worker examples: image, env, networks, volumes, auto_remove, command
    - Kubernetes worker examples: image, namespace, env, service_account_name, image_pull_secrets, pod_spec overrides, etc.
- schedules or schedule (optional): One or many schedules for this deployment
  - cron: Use a cron expression; fields:
    - cron (required)
    - timezone (optional, default UTC)
    - day_or (optional, bool)
    - active (optional, bool; controls enabling at creation)
  - interval: Run at a fixed interval; fields:
    - interval (required; in seconds, e.g., 3600, or a dict of hours/minutes/seconds)
    - anchor_date (optional)
    - timezone (optional)
    - active (optional)
  - rrule: iCal RRULE; fields:
    - rrule (required; e.g., "FREQ=DAILY;INTERVAL=1")
    - timezone (optional)
    - active (optional)
- version (optional): String to help you track/compare deployments
- concurrency_limit (optional): Max concurrent flow runs for this deployment (if enabled in your environment)

Build steps
- Executed by `prefect deploy` on your machine/CI at deploy time. Typical uses:
  - Install dependencies
  - Build and push a Docker image
  - Run any shell setup required to bake artifacts
- Built-in step functions you can call by name:
  - prefect.deployments.steps.utility.run_shell_script
  - prefect.deployments.steps.utility.pip_install_requirements
- You can run arbitrary shell, so anything is fair game. Outputs from earlier steps can be referenced in later steps via templating.

Pull steps
- Executed by the worker before the flow run. Typical uses:
  - Set the working directory
  - Pull code from Git
  - Pull a storage block
- Built-in step functions:
  - prefect.deployments.steps.pull.set_working_directory
  - prefect.deployments.steps.pull.git_clone
  - prefect.deployments.steps.pull.agit_clone (async variant)
  - prefect.deployments.steps.pull.pull_from_remote_storage
  - prefect.deployments.steps.pull.pull_with_block

Recommended patterns and examples

1) Minimal local process deployment (no build/pull)
```
name: my-project

deployments:
  - name: etl-dev
    entrypoint: ./flows/etl.py:etl
    description: Local dev run
    tags: ["dev"]
    parameters:
      dataset: "customers"
      full_refresh: false
    work_pool:
      name: process-pool
      job_variables:
        env:
          PREFECT_LOGGING_LEVEL: INFO
        working_dir: .
```
2) Git pull + Docker worker with a cron schedule

```
name: my-docker-project

variables:
  IMAGE_TAG: "2025-08-12"
  GH_TOKEN: "{{ env.GH_TOKEN }}"

build:
  - prefect.deployments.steps.utility.run_shell_script:
      script: |
        docker build -t my-org/my-image:${IMAGE_TAG} .
        docker push my-org/my-image:${IMAGE_TAG}

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /opt/prefect/flows
  - prefect.deployments.steps.pull.git_clone:
      repository: https://github.com/my-org/my-repo.git
      branch: main
      access_token: "{{ variables.GH_TOKEN }}"

deployments:
  - name: etl-prod
    entrypoint: flows/etl.py:etl
    description: Production ETL via Docker
    tags: ["prod"]
    parameters:
      dataset: "customers"
      full_refresh: true
    work_pool:
      name: docker-pool
      job_variables:
        image: "my-org/my-image:{{ variables.IMAGE_TAG }}"
        env:
          PREFECT_LOGGING_LEVEL: INFO
        auto_remove: true
        networks: ["default"]
    schedules:
      - cron: "0 * * * *"   # every hour
        timezone: "UTC"
        active: true
```

3) Kubernetes worker with an interval schedule

```
name: my-k8s-project

deployments:
  - name: hourly-reporting
    entrypoint: ./flows/report.py:report
    description: Reporting job in k8s
    tags: ["reporting", "k8s"]
    work_pool:
      name: kubernetes-pool
      job_variables:
        image: "my-org/reporting:latest"
        namespace: "data-jobs"
        env:
          PREFECT_LOGGING_LEVEL: INFO
        service_account_name: "prefect-runner"
        image_pull_secrets: ["regcred"]
    schedules:
      - interval: 3600
        timezone: "UTC"
        active: true
```

Notes, caveats, and deprecations
- In Prefect 3.x, do not use:
  - `prefect deployment build` (removed); use `prefect deploy`
  - `Deployment.build_from_flow()` (removed); the programmatic alternative is `flow.from_source(...).deploy(...)` if you need Python instead of YAML
- Work queues still exist; set `work_pool.work_queue_name` if you route to a specific queue.
- Job variables are specific to the worker type (process, docker, kubernetes). Check the worker's docs for the full set of fields it accepts.
- Schedules can be defined as a single `schedule:` object or a list under `schedules:`; most teams use `schedules:` to maintain multiple schedules on one deployment.
- Pull steps run on the worker at runtime; build steps run locally/CI at deploy time.
- The `entrypoint` path is relative to the working directory the code runs in. If you git clone to a subdir, make sure `entrypoint` lines up with that checkout path (see the sketch after this list).
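To make that last bullet concrete, here is a minimal sketch; the repository URL and paths are placeholders. Giving `git_clone` an `id` exposes its outputs, so `set_working_directory` can consume the checkout directory and a relative `entrypoint` resolves inside the clone:

```
pull:
  - prefect.deployments.steps.pull.git_clone:
      id: clone-step                             # expose this step's outputs
      repository: https://github.com/my-org/my-repo.git
      branch: main
  - prefect.deployments.steps.pull.set_working_directory:
      directory: "{{ clone-step.directory }}"    # the directory git_clone checked out into

deployments:
  - name: etl-prod
    entrypoint: flows/etl.py:etl                 # relative to the working directory set above
    work_pool:
      name: docker-pool
```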
CLI you'll use with this file
- Initialize a starter file: `prefect init` (prompts for a recipe to generate a sample prefect.yaml)
- Create deployments from the file: `prefect deploy` (run from the project root; you can pass `--name` to filter patterns)

Helpful docs
- Deploying flows and prefect.yaml overview: Prefect Deploy (prefect.yaml) docs
- Worker types and job variables:
  - Process: Process worker
  - Docker: Docker worker
  - Kubernetes: Kubernetes worker
- Schedules: Schedules

If you need a fully validated, exhaustive list per worker type (every `job_variables` key), tell me which worker you're targeting and I'll enumerate the accepted fields.
Assuming Prefect 3.x. Here's the practical schema and field-by-field guide to defining deployments in a prefect.yaml. Yes, YAML. Because why be cheerful when you can be indented.

Top-level (project) keys
- name: String. Your project's name.
- prefect-version: String. Pin to a Prefect version or range so deploys use a compatible client.
- variables: Mapping. Optional key/values you can reference in step templates with Jinja, e.g. `{{ variables.MY_VAR }}`.
- build: List of step calls to run at build time (local CI/CD).
- push: List of step calls to publish artifacts (e.g., Docker image, files to storage).
- pull: List of step calls the worker runs before executing your flow (e.g., git clone, pip install).
- deployments: List of deployment specs (see below).

Common step syntax (build/push/pull)
Each step is a single-item mapping where the key is a callable path and the value is its arguments. Examples:
```
pull:
  - prefect.deployments.steps.pull.git_clone:
      repository: "<https://github.com/acme/my-repo>"
      branch: "main"
      # or commit_sha: "abc123"
      include_submodules: false
      # one of:
      access_token: "{{ env.GITHUB_TOKEN }}"
      # or a credentials block via pull_with_block

  - prefect.deployments.steps.utility.pip_install_requirements:
      requirements: "requirements.txt"    # or a list
      # extra pip args supported by the step

  - prefect.deployments.steps.pull.set_working_directory:
      directory: "src"

build:
  - prefect.deployments.steps.utility.run_shell_script:
      script: "echo building && make package"
Useful built-in step callables
- prefect.deployments.steps.pull.git_clone(repository, branch=None, commit_sha=None, include_submodules=False, access_token=None, credentials=None, directories=None)
- prefect.deployments.steps.pull.set_working_directory(directory)
- prefect.deployments.steps.pull.pull_from_remote_storage(...) and pull_with_block(...) for Block-based storage
- prefect.deployments.steps.utility.pip_install_requirements(requirements=...)
- prefect.deployments.steps.utility.run_shell_script(script=..., env=..., cwd=...)

Deployment objects (under deployments:)
For each item in deployments:
- name: String. Required. Deployment name.
- entrypoint: String. Required unless provided by CLI pattern. Format `path/to/file.py:flow_function_name`.
- description: String. Optional.
- version: String. Optional. Your code/app version tag.
- tags: List[str]. Optional. For filtering and organization.
- parameters: Mapping. Default parameter values for the flow.
- enforce_parameter_schema: Bool. Default true. Validate parameter types at runtime.
- paused: Bool. If true, the deployment is created paused.
- concurrency_limit: Integer. Optional per-deployment cap of concurrent runs.
- concurrency_options: Object. Advanced controls for concurrency behavior (queueing, timeouts).
- work_pool:
  - name: String. Required. Target work pool.
  - work_queue_name: String. Optional. Specific queue within the pool.
  - job_variables: Mapping. Optional per-run overrides to the pool's job template (varies by worker type). Common examples:
    - Process worker: `env` (mapping of environment variables), `working_dir`
    - Docker worker: `image`, `env`, `labels`, `networks`, `pull_policy`
    - Kubernetes worker: `image`, `env`, `pod`, `namespace`, `service_account_name`, `image_pull_secrets`, `cpu`, `memory`, etc.
    - Only keys supported by your pool's worker type are applied.
- schedule or schedules:
  - Use a single schedule with schedule: {...} or multiple with schedules: [ {...}, {...} ]
  - Supported schedule types:
    - Cron
      - cron: String (e.g., "0 0 * * *")
      - timezone: String (e.g., "UTC")
      - day_or: Bool (cron day-of-month/day-of-week semantics; default true)
      - active: Bool (default true)
      - parameters: Mapping (optional param overrides for runs from this schedule)
      - slug: String (name/id for the schedule)
    - Interval
      - interval: Number of seconds (int or float)
      - anchor_date: Datetime string (optional)
      - timezone: String
      - active: Bool
      - parameters: Mapping
      - slug: String
    - RRule
      - rrule: String (e.g., "RRULE:FREQ=DAILY;INTERVAL=1")
      - timezone: String
      - active: Bool
      - parameters: Mapping
      - slug: String
- triggers: List of event/metric triggers that can create runs for this deployment (advanced; maps to Prefect's deployment triggers).
- storage: Advanced. Runner storage config if you're packaging code as an artifact instead of pulling source. Most users will use pull steps or deploy from source.

Minimal full example
```
name: my-project
prefect-version: ">=3.0.0,<4.0.0"

variables:
  DEFAULT_BRANCH: main

pull:
  - prefect.deployments.steps.pull.git_clone:
      repository: "https://github.com/acme/my-repo"
      branch: "{{ variables.DEFAULT_BRANCH }}"
  - prefect.deployments.steps.utility.pip_install_requirements:
      requirements: "requirements.txt"

deployments:
  - name: nightly-etl
    entrypoint: flows/etl.py:etl
    description: "Run ETL nightly"
    tags: ["etl", "nightly"]
    parameters:
      full_refresh: false
    enforce_parameter_schema: true
    version: "1.4.2"
    work_pool:
      name: "process"                # your work pool name
      work_queue_name: null
      job_variables:
        env:
          DB_CONN_STRING: "{{ env.DB_CONN_STRING }}"
        working_dir: "src"
    schedule:
      cron: "0 2 * * *"
      timezone: "UTC"
      active: true

  - name: ad-hoc-etl
    entrypoint: flows/etl.py:etl
    description: "Manual / on-demand ETL"
    tags: ["etl", "adhoc"]
    parameters:
      full_refresh: true
    work_pool:
      name: "docker-pool"
      job_variables:
        image: "<http://ghcr.io/acme/etl:1.4.2|ghcr.io/acme/etl:1.4.2>"
        env:
          PREFECT_LOGGING_LEVEL: "INFO"
    schedules:
      - interval: 86400           # daily
        timezone: "UTC"
        active: false             # defined but disabled
        slug: "disabled-daily"
Notes on schedules (mapping to Prefect's API)
- Cron(cron, timezone=None, day_or=True, active=True, parameters=None, slug=None)
- Interval(interval, anchor_date=None, timezone=None, active=True, parameters=None, slug=None)
- RRule(rrule, timezone=None, active=True, parameters=None, slug=None)

Environment variables
- For your job/runtime: set them in work_pool.job_variables.env.
- For templating in steps or fields: use `{{ env.MY_VAR }}` or `{{ variables.MY_VAR }}`.
- Project-level environment exports for steps are not global; prefer templating into step args or use a shell step with env.
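To make the schedule and env notes concrete, a small sketch (pool names and the variable are placeholders) contrasting the single `schedule:` form with a `schedules:` list, with one env var templated at deploy time and passed to the runtime:

```
deployments:
  - name: one-schedule
    entrypoint: flows/etl.py:etl
    work_pool:
      name: process
      job_variables:
        env:
          DB_CONN_STRING: "{{ env.DB_CONN_STRING }}"  # templated when you run prefect deploy
    schedule:                     # exactly one schedule
      cron: "0 2 * * *"
      timezone: "UTC"

  - name: many-schedules
    entrypoint: flows/etl.py:etl
    work_pool:
      name: process
    schedules:                    # any number of schedules
      - cron: "0 8 * * 1-5"
        timezone: "UTC"
      - interval: 3600
        active: false
        slug: "hourly-disabled"
```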
Storage and code retrieval
- Prefer pulling source at runtime via `pull` steps (e.g., `git_clone`) or deploy from source with `flow.from_source(...).deploy(...)`.
- Do not use GitHub storage blocks in 3.x; they're removed. Use source URLs or pull steps instead.

CLI you'll actually use
- Validate and create deployments from a project:

```
prefect deploy --help
```

This reads your prefect.yaml, applies steps, and registers deployments.

Key differences from Prefect 2.x you should not trip over
- `Deployment.build_from_flow()` is gone. Use `flow.from_source(...).deploy(...)` or `prefect deploy` with YAML.
- The `prefect deployment build` CLI is gone. Use `prefect deploy`.
- GitHub storage blocks are gone. Use `.from_source("https://github.com/owner/repo")` or `pull` steps.
- Infrastructure blocks are superseded by work pools + job_variables.

If you want me to cross-link the exact docs pages for your version, say the word and I'll fetch them. My last attempt to pull external docs had all the enthusiasm of a Vogon poet.
t
@Marvin Add no-cache for the build, and do I need to add credentials to pull from Artifact Registry?
```
build:
- prefect.deployments.steps.run_shell_script:
    id: get-commit-hash
    script: git rev-parse HEAD
    stream_output: false
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: us-central1-docker.pkg.dev/xxx
    tag: '{{ get-commit-hash.stdout }}'
    dockerfile: Dockerfile.prefect
    platform: linux/amd64

# push section allows you to manage if and how this project is uploaded to remote locations
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /app
```
m
thought for 51.3 seconds
Short answer:
- No cache: add `no_cache: true` to the docker build step (or call `docker build --no-cache ...` via a shell step).
- Registry creds: yes, you need auth wherever the image is pushed and wherever it's pulled. Prefect doesn't magically log in for you.

A cleaned-up prefect.yaml snippet with no-cache and correct step paths:
```
build:
  - prefect.deployments.steps.utility.run_shell_script:
      id: get-commit-hash
      script: git rev-parse HEAD
      stream_output: false

  - prefect_docker.deployments.steps.build_docker_image:
      id: build_image
      requires: "prefect-docker>=0.3.1"
      image_name: "us-central1-docker.pkg.dev/xxx"
      tag: "{{ get-commit-hash.stdout }}"
      dockerfile: "Dockerfile.prefect"
      platform: "linux/amd64"
      no_cache: true

push:
  - prefect_docker.deployments.steps.push_docker_image:
      requires: "prefect-docker>=0.3.1"
      image_name: "{{ build_image.image_name }}"
      tag: "{{ build_image.tag }}"

pull:
  - prefect.deployments.steps.pull.set_working_directory:
      directory: /app
```
If your `prefect-docker` step version doesn't support `no_cache`, use a shell step instead:
```
- prefect.deployments.steps.utility.run_shell_script:
    id: build_image_no_cache
    script: |
      docker build --no-cache \
        --platform linux/amd64 \
        -t "us-central1-docker.pkg.dev/xxx:{{ get-commit-hash.stdout }}" \
        -f Dockerfile.prefect .
```
About credentials for Artifact Registry (GAR)
- For pushing (your CI/build machine): ensure Docker is logged in before the push step. Examples:
  - With a service account JSON:
```
- prefect.deployments.steps.utility.run_shell_script:
    id: docker_login_gar
    script: |
      echo "$GCP_SA_JSON" > key.json
      cat key.json | docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev
    env:
      GCP_SA_JSON: "{{ env.GCP_SA_JSON }}"
```
  - Or with gcloud:
```
- prefect.deployments.steps.utility.run_shell_script:
    script: |
      gcloud auth activate-service-account --key-file=key.json
      gcloud auth configure-docker us-central1-docker.pkg.dev -q
```
- For pulling (the worker runtime):
  - Docker worker: the host running the worker needs to be logged in (`docker login us-central1-docker.pkg.dev`) or otherwise configured (DOCKER_CONFIG with creds). Prefect won't log in for you.
  - Kubernetes worker: set `image_pull_secrets` on the job and create the secret in the namespace:
```
work_pool:
  name: kubernetes-pool
  job_variables:
    image: "us-central1-docker.pkg.dev/xxx:{{ build_image.tag }}"
    image_pull_secrets: ["gar-regcred"]
```
Then create the secret with `kubectl create secret docker-registry gar-regcred ...` (or apply the manifest sketched below).
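Equivalently, a declarative sketch of that secret; the name and namespace are just examples, and `.dockerconfigjson` is the base64-encoded Docker config produced by a successful login:

```
apiVersion: v1
kind: Secret
metadata:
  name: gar-regcred
  namespace: data-jobs              # must match the namespace your jobs run in
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
```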
Related docs if you enjoy reading more than YAML:
- Deploying with prefect.yaml
- Docker worker
- Kubernetes worker

If you tell me which worker you're using (Docker vs Kubernetes), I can give you the exact `job_variables` or secret commands you'll need.
t
@Marvin Fix this error
```
Aug 12th, 2025
INFO
Worker 'DockerWorker b33dd079-824c-40bf-add1-bb1c7c18c4ec' submitting flow run '9f4272b1-5b0c-430b-93b4-ef11de4e3a5a'
01:52:26 PM
prefect.flow_runs.worker
ERROR
Failed to submit flow run '9f4272b1-5b0c-430b-93b4-ef11de4e3a5a' to infrastructure.
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/docker/auth.py", line 252, in _resolve_authconfig_credstore
    data = store.get(registry)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/credentials/store.py", line 32, in get
    data = self._execute('get', server)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/credentials/store.py", line 73, in _execute
    raise errors.StoreError(
docker.credentials.errors.StoreError: docker-credential-gcloud not installed or not available in PATH

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/prefect/workers/base.py", line 1262, in _submit_run_and_capture_errors
    result = await self.run(
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 457, in run
    container, created_event = await run_sync_in_worker_thread(
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 233, in run_sync_in_worker_thread
    result = await anyio.to_thread.run_sync(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 967, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 243, in call_with_mark
    return call()
           ^^^^^^
  File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 727, in _create_and_start_container
    self._pull_image(docker_client, configuration)
  File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 876, in _pull_image
    return docker_client.images.pull(image, tag)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/models/images.py", line 464, in pull
    pull_log = self.client.api.pull(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/api/image.py", line 410, in pull
    header = auth.get_config_header(self, registry)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/auth.py", line 44, in get_config_header
    authcfg = resolve_authconfig(
              ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/auth.py", line 312, in resolve_authconfig
    return authconfig.resolve_authconfig(registry)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/auth.py", line 223, in resolve_authconfig
    cfg = self._resolve_authconfig_credstore(registry, store_name)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/docker/auth.py", line 268, in _resolve_authconfig_credstore
    raise errors.DockerException(
docker.errors.DockerException: Credentials store error: StoreError('docker-credential-gcloud not installed or not available in PATH')
01:52:26 PM
prefect.flow_runs.worker
INFO
Reported flow run '9f4272b1-5b0c-430b-93b4-ef11de4e3a5a' as crashed: Flow run could not be submitted to infrastructure:
DockerException("Credentials store error: StoreError('docker-credential-gcloud not installed or not available in PATH')")
01:52:26 PM
prefect.flow_runs.worker
```
m
thought for 45.8 seconds
Your Docker worker can't pull from Artifact Registry because its Docker client is configured to use the gcloud credential helper, but `docker-credential-gcloud` isn't installed on the worker host. You've got three solid ways to fix it. Pick one.

Option A: Install gcloud and use the helper (cleanest if the worker is long-lived)
1) Install the Google Cloud SDK on the worker host and authenticate:
```
gcloud auth activate-service-account --key-file /path/to/key.json
gcloud auth configure-docker us-central1-docker.pkg.dev -q
```
2) Verify the helper is present:
```
which docker-credential-gcloud
docker pull us-central1-docker.pkg.dev/xxx:<tag>
```
This keeps Docker using the `credHelpers` entry in ~/.docker/config.json.

Option B: Use a static docker login (no gcloud dependency)
1) Log in with a service account JSON (writes a plain auth entry to Docker config):
```
cat /path/to/key.json | docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev
```
2) Alternatively, use a short‑lived token (needs periodic refresh):
```
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev
```
3) If your ~/.docker/config.json has `"credHelpers": { "us-central1-docker.pkg.dev": "gcloud" }`, either keep it (and use Option A) or remove that mapping so Docker uses the plain `auths` entry you just created.

Option C: Point the worker at a custom Docker config dir
1) Pre-create a Docker config with valid auth (via Option B's `docker login`) in, say, /opt/prefect/docker-config/config.json.
2) Start your Prefect Docker worker with `DOCKER_CONFIG=/opt/prefect/docker-config` in its environment so the Python Docker client reads that file.

Notes
- This is a worker host configuration problem; changing prefect.yaml won't fix it. The worker pulls the image before your flow runs.
- After fixing auth, run `docker pull us-central1-docker.pkg.dev/xxx:<tag>` on the worker host to confirm, then retry the flow.
- If you're running workers inside containers, bake the auth into the worker image or mount the Docker config directory into the worker container (see the compose sketch below).

Docs, if you like long reads:
- Artifact Registry Docker auth: Google Artifact Registry auth
- gcloud helper setup: Configure Docker with gcloud
- Prefect Docker worker: Docker worker

If you tell me how your worker is deployed (bare VM, container, Kubernetes), I'll give you the exact commands to apply there.
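For that containerized-worker case, a minimal docker-compose sketch under assumed names; the image tag, pool name, and host paths are placeholders, and the worker image is assumed to have `prefect-docker` installed:

```
services:
  prefect-docker-worker:
    image: prefecthq/prefect:3-latest           # assumes prefect-docker is baked in or pip-installed
    command: prefect worker start --pool docker-pool
    environment:
      PREFECT_API_URL: "${PREFECT_API_URL}"
      PREFECT_API_KEY: "${PREFECT_API_KEY}"
      DOCKER_CONFIG: /home/worker/.docker       # point the Docker SDK at the mounted config
    volumes:
      - /opt/prefect/docker-config:/home/worker/.docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
```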
t
@Marvin After running flows on a Docker work pool, I ran `docker ps -a` and see many containers from completed flow runs. I think they should be deleted after the run.
m
thought for 124.8 seconds
Yep, Docker doesn't tidy up unless you ask it to. Your Docker worker is leaving stopped containers because auto-removal isn't enabled. Do one (or both) of these:

Option 1: Set auto_remove in your deployment (quickest)
```
deployments:
  - name: your-deployment
    ...
    work_pool:
      name: docker-pool
      job_variables:
        auto_remove: true
```
Option 2: Make it the pool default (applies to all deployments)
- Inspect the current template:
```
prefect work-pool inspect "docker-pool"
```
- Or start from the default template and edit:
```
prefect work-pool get-default-base-job-template --type docker --file template.json
# edit template.json to set auto_remove default to true, then:
prefect work-pool update "docker-pool" --base-job-template template.json
```
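The edit in the middle step usually amounts to flipping one default in template.json. A hedged fragment (JSON, as the file itself; the surrounding schema is elided and the exact shape can vary by Prefect version, so diff against what the first command wrote):

```
{
  "variables": {
    "properties": {
      "auto_remove": {
        "type": "boolean",
        "default": true
      }
    }
  }
}
```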
Clean up the existing leftovers once:
```
docker container prune -f
```
Notes
- auto_remove deletes the container when it exits (even if the worker crashes after starting it). Volumes don't get pruned; manage those separately if needed.
- If you're also building/pushing images, that's unrelated; this is about the Docker worker's run config.

Docs, in case you enjoy container housekeeping:
- Docker worker