# ask-marvin
f
@Marvin I work with Prefect 2 on Prefect Cloud and GCP. Which work pool is the best for my usage?
m
thought for 338.7 seconds
Great question. Since you’re on Prefect 2, Prefect Cloud, and GCP, here’s a concise way to choose the best work pool type:

- Cloud Run work pool (serverless, managed): Best default for most GCP users who want minimal ops, autoscaling, and containerized runs without managing VMs or clusters. Runs each flow as a Cloud Run Job. Good for stateless work, typical CPU/memory needs, and durations within Cloud Run Jobs limits.
  - Docs: Cloud Run worker, GCP worker guide (IAM, setup)
- Kubernetes work pool (GKE): Best if you already have GKE or need advanced scheduling, GPUs, custom node pools, or tight control over runtime and networking. More ops overhead but maximum flexibility.
  - Docs: Kubernetes worker (GKE)
- Docker work pool (VM-based): Good middle ground if you want container isolation and reproducible images but do not want to run Kubernetes. You run a Docker worker on a GCE VM or MIG and it launches containers per flow. You manage VM scaling yourself.
  - Docs: Docker worker
- Process work pool (single machine): Simplest and good for local/dev or a single VM where you don’t need containerization. Not ideal for multi-tenant or production in the cloud.
  - Concept: Work pools concept

A quick recommendation:
- If you want fully managed, minimal ops, and can containerize your flows: choose Cloud Run.
- If you already operate GKE, need GPUs or specialized scheduling/networking: choose Kubernetes.
- If you prefer containers but want to keep it simple on a VM: choose Docker.
- For development or a single controlled VM: Process.

What this looks like (examples)

Cloud Run:
```
prefect work-pool create "gcp-cloud-run" --type cloud-run
prefect worker start -p gcp-cloud-run
```
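If it helps, deployment-level job variables for a Cloud Run pool might look like the sketch below. The exact key names come from your pool's base job template and may differ by worker version, so verify them in the Prefect UI; the values here are placeholders:

```
# prefect.yaml excerpt (a sketch; verify key names against your pool's
# base job template before using)
work_pool:
  name: gcp-cloud-run
  job_variables:
    image: REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
    region: europe-west1                                           # same region as your registry
    service_account_name: runner@PROJECT.iam.gserviceaccount.com   # placeholder
    vpc_connector_name: my-connector                               # optional; omit if unused
```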
You’ll provide image, region, service account, and optional VPC connector via variables in the pool or your deployment. See IAM and setup in the GCP guide: GCP worker guide.

Kubernetes (GKE):
```
prefect work-pool create "gke-pool" --type kubernetes
prefect worker start -p gke-pool
```
Ensure the worker has kubeconfig/cluster access and your job template matches your cluster config. Docs: Kubernetes worker.

Docker (on a GCE VM):
```
prefect work-pool create "gce-docker" --type docker
prefect worker start -p gce-docker
```
The worker launches each flow in a container on that VM. Docs: Docker worker.

A few questions to help me tailor the recommendation:
- Do you already run GKE, or would you prefer fully managed/serverless?
- Can all of your flows run in containers (and are you comfortable building/pushing images)?
- Any requirements for GPUs, very large memory, or long-running jobs?
- Do you need VPC access to private resources (e.g., Cloud SQL, private APIs)?
- Rough scale: how many flow runs/day and typical runtime per flow?

If you share those details, I can recommend a specific pool type (and a starter config) for your setup.
f
@Marvin actually I use a cloud-run-v2:push work pool. But the startup time is too long for my needs. What's the best solution to run my flows immediately? My code is stored with its environment in a Docker image on GCP
m
thought for 97.3 seconds
Thanks — that helps. Cloud Run push pools are convenient, but they pay a cold-start tax each time a run is created (provisioning a Cloud Run Job plus pulling your image). If “immediate” means seconds instead of tens of seconds, the most reliable fix is to move off the push pool to a long-lived worker that keeps your runtime warm. Two good paths, given you already package code in a Docker image on GCP:

Option A: Docker work pool on a GCE VM (fastest, simple, containerized)
- Why: A pull-style worker is always running and your image can be pre-pulled and cached locally, so flows start in 1–3s.
- How (high level):
  - Create a small GCE VM with Docker installed and access to Artifact Registry.
  - Pre-pull your flow image(s) on boot so they’re warm.
  - Create a Docker work pool and point deployments to it:
```
prefect work-pool create "gce-docker" --type docker
prefect worker start -p gce-docker
```
- Run the worker under systemd or a startup script so it restarts on VM reboot.
- Docs: Docker worker

Option B: Kubernetes (GKE) work pool with warm nodes
- Why: If you already run GKE or need advanced scheduling/GPUs, a GKE worker gives low, consistent startup. Keep nodes warm and pre-pull your images with a DaemonSet.
- How:
```
prefect work-pool create "gke-pool" --type kubernetes
prefect worker start -p gke-pool
```
- Docs: Kubernetes worker (GKE)

If you prefer to stay on Cloud Run push, you can trim startup time, but it won’t be “instant”:
- Shrink and simplify the image (use a slim/distroless base and multi-stage builds; avoid runtime installs) — see the Dockerfile sketch at the end of this message.
- Keep Artifact Registry and Cloud Run in the same region; ensure fast private access if used.
- Avoid a VPC connector unless required (it adds cold-start latency).
- Increase CPU/memory in the job template to speed initialization and image decompression.
- Minimize initialization code in your entrypoint.
- References: Cloud Run worker, GCP worker guide

Quick recommendation:
- For near-immediate starts and minimal ops: Docker work pool on a small GCE VM with pre-pulled images.
- If you already have GKE and need more control: Kubernetes work pool with warm nodes + image pre-pull.

Would you like a short setup guide for the Docker work pool on a GCE VM (including a systemd unit and a startup script that pre-pulls your Artifact Registry image)? Also, what does “immediate” mean for you (e.g., <2s, <5s)?
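For reference, here is a minimal multi-stage Dockerfile in the spirit of the image-slimming tips above. It is a sketch only, assuming a Python flow with a requirements.txt; adjust paths and base images to your project:

```
# Build stage: install dependencies into a virtualenv
FROM python:3.11-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install --no-cache-dir -r requirements.txt

# Runtime stage: copy only the venv and your code; no build tools in the final image
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /venv /venv
COPY . .
ENV PATH="/venv/bin:$PATH"
```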
f
@Marvin with my current configuration (cloud-run-v2:push), startup time is 1 minute. I want my flow to run <10s after the schedule
m
thought for 44.6 seconds
Got it — a 1-minute cold start from a Cloud Run push pool is typical. To get <10s after schedule, switch to a pull-style worker that keeps your runtime warm. Two options that reliably meet <10s startup:

1) Docker work pool on a small GCE VM (fastest and simple)
- Why: The worker is always running and your image can be pre-pulled and cached; flow containers start in ~1–5s.
- How:
  - Create a GCE VM with Docker installed and Artifact Registry access.
  - Pre-pull your flow image(s) on boot.
  - Create and start a Docker work pool/worker:
```
prefect work-pool create "gce-docker" --type docker
prefect worker start -p gce-docker --name docker-worker --concurrency 4
```
- Point deployments at the `gce-docker` pool and set the image in variables or deployment config.
- Docs: Docker worker

Example: VM bootstrap + systemd

Startup script to pre-pull your image:
```
#!/bin/bash
set -euo pipefail
# Authenticate Docker to Artifact Registry
gcloud auth configure-docker REGION-docker.pkg.dev -q
# Pre-pull your image(s)
docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
```
- systemd unit for the worker:
```
[Unit]
Description=Prefect Docker Worker
After=network-online.target docker.service
Wants=network-online.target

[Service]
Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account_id>/workspaces/<workspace_id>
Environment=PREFECT_API_KEY=<your_api_key>
ExecStart=/usr/local/bin/prefect worker start -p gce-docker --name docker-worker --concurrency 4
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
2) Kubernetes (GKE) work pool with warm nodes
- Why: If you already run GKE or need GPUs/custom scheduling. Keep nodes warm and pre-pull images for ~5–10s starts.
- How:
```
prefect work-pool create "gke-pool" --type kubernetes
prefect worker start -p gke-pool
```
- Tips: Use a small always-on node pool, `imagePullPolicy: IfNotPresent`, pre-pull via a DaemonSet (see the sketch below), and set a job priority class.
- Docs: Kubernetes worker (GKE)

If you must stay on Cloud Run push, you can trim latency, but sub-10s is unlikely: use slim/distroless images, keep Artifact Registry and Cloud Run in the same region, avoid a VPC connector if possible, bump CPU/memory, minimize entrypoint init, and reuse images. Docs: Cloud Run worker, GCP worker guide

Recommendation for your case:
- Since your code is already in Docker on GCP and you need <10s, run a Docker work pool on a small GCE VM with pre-pulled images. It’s the simplest way to get near-instant starts.
- You can keep your Cloud Run push pool for bursty or non-latency-sensitive flows and route latency-sensitive ones to the Docker pool via work queues.

Want me to tailor a minimal GCE setup guide (machine type, scopes, firewall, startup script, systemd) using your Artifact Registry path?
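For the DaemonSet pre-pull pattern mentioned in the tips above, a minimal sketch (the flow image path is a placeholder; the init container pulls your image onto every node as a side effect of running it, and a tiny pause container keeps the pod alive):

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prefect-image-prepull
spec:
  selector:
    matchLabels:
      app: prefect-image-prepull
  template:
    metadata:
      labels:
        app: prefect-image-prepull
    spec:
      initContainers:
        # The pull happens when the kubelet starts this container;
        # the command exits immediately so the init step completes.
        - name: prepull-flow-image
          image: REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
          command: ["sh", "-c", "true"]
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
```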
f
@Marvin between solution 1 and solution 2, what's the difference? Which one is the cheapest solution? Which one is the easiest to maintain?
m
thought for 42.3 seconds
Here’s a clear comparison between the two options for getting <10s starts:

How they differ
- Docker work pool on a GCE VM
  - A single always-on Prefect worker runs on your VM and launches each flow in a local Docker container.
  - You can pre-pull your Artifact Registry image(s), so runs start as soon as the container launches.
  - Concurrency is limited by that VM’s CPU/RAM.
  - Single point of failure unless you add a MIG or a second VM.
- Kubernetes (GKE) work pool with warm nodes
  - A worker in your cluster launches each flow as a Kubernetes Job.
  - To get <10s reliably, you keep a small node pool warm and pre-pull images (e.g., via a DaemonSet).
  - Much better horizontal scaling, HA, and advanced features (GPUs, node pools, pod security, quotas).
  - More moving parts (cluster upgrades, node pools, RBAC, templates).

Which is cheapest
- Typically: Docker on a single small GCE VM is the cheapest way to get sub-10s starts. You pay for one VM (plus disk), and you can size it small if your concurrency is low.
- GKE Standard: You pay for the nodes you keep warm (and possibly control plane/cluster overhead depending on your GKE plan). To ensure <10s, you’ll keep at least one node on 24/7, which usually costs more than a single small VM.
- GKE Autopilot: You pay per pod resources when they run, but you can’t really pre-warm to guarantee <10s; cold image pulls and scheduling can still push you over target. It may be cost-effective for bursty jobs, but it won’t reliably meet your latency requirement.

Which is easiest to maintain
- Easiest: Docker on a GCE VM. One VM, Docker, and a Prefect worker as a systemd service. Pre-pull your image on boot, done. Great if your needs are modest and you want low ops.
- GKE is more to maintain (or learn), even with Autopilot. You gain powerful features but at the cost of complexity.

Startup time expectations
- Docker on a GCE VM (image cached): typically ~1–5s from schedule to container start.
- GKE with warm nodes + pre-pulled image: typically ~5–10s, depending on scheduling and cluster load.
- Cloud Run push: ~30–60s cold start is common due to job provisioning and image pull.

My recommendation for your goal (<10s) and current packaging (Docker image in GCP)
- Choose a Docker work pool on a small GCE VM. It’s usually the cheapest and easiest way to consistently hit your latency target.
- If you already run GKE and want enterprise features (HA, GPUs, quotas, multi-team), go with GKE and keep a minimal warm node pool with pre-pulled images.

Docs (for reference)
- Docker worker: Docker worker
- Kubernetes worker (GKE): Kubernetes worker

If you want, I can share a minimal GCE VM setup: machine type suggestions, a startup script to auth + pre-pull your Artifact Registry image, and a systemd unit to run `prefect worker start` so you’re up and running quickly.
f
@Marvin ok, let's go with Docker on a GCE VM. Explain step by step how to do that
m
thought for 69.0 seconds
Awesome — here’s a step-by-step guide to get a Docker work pool running on a GCE VM with sub-10s startup. This keeps a Prefect worker always on, pre-pulls your image from Artifact Registry, and starts each flow in a local Docker container fast.

High-level flow
- The GCE VM runs Docker + a Prefect Docker worker (pull-style).
- The worker polls your Docker work pool in Prefect Cloud.
- Your deployment specifies the Docker image (from Artifact Registry).
- The VM pre-pulls the image so runs start in seconds.

Prereqs
- Your flow code is already built into an Artifact Registry image, e.g. `REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG`
- Prefect Cloud account/workspace + API key
- A GCP service account attached to the VM with permission to pull from Artifact Registry

1) Create a small GCE VM
- Machine type: e2-small or e2-medium (adjust for your desired concurrency)
- OS: Debian 12 or Ubuntu LTS
- Region/zone: same region as your Artifact Registry for faster pulls
- Service account: attach one with the IAM role “Artifact Registry Reader” (and any other roles your flows need, e.g., Storage Object Viewer)
- Firewall: no inbound ports needed for the Prefect worker

2) Install Docker on the VM

Run on the VM (SSH):
```
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
# Re-login to apply docker group membership, or use sudo for docker commands
```
3) Install Google Cloud CLI and configure Docker auth
```
sudo apt-get install -y apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | \
  sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install -y google-cloud-cli
# Configure docker to auth to Artifact Registry
gcloud auth configure-docker REGION-docker.pkg.dev -q
```
4) Pre-pull your flow image(s)

Replace with your image path:
```
docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
```
Tip: Keep tags immutable (e.g., an image digest or a versioned tag) so the node’s cache stays hot and you get consistent <10s starts.

5) Install Prefect (2.x) and prefect-docker

Use a virtualenv or system Python (this example uses a venv):
```
sudo apt-get install -y python3-venv
python3 -m venv /opt/prefect-venv
/opt/prefect-venv/bin/pip install "prefect<3" prefect-docker
```
6) Log in to Prefect Cloud on the VM

Either log in interactively:
```
/opt/prefect-venv/bin/prefect cloud login
```
Or set env vars (useful for systemd):
- `PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account_id>/workspaces/<workspace_id>`
- `PREFECT_API_KEY=<your_api_key>`
7) Create a Docker work pool in Prefect

Run it once (locally or on the VM):
```
/opt/prefect-venv/bin/prefect work-pool create "gce-docker" --type docker
```
Optionally, set a default image on the pool in the UI (Work Pools -> gce-docker -> Variables) so most deployments don’t need to repeat it. Docs: Docker worker, Work pools concept

8) Run the worker under systemd (recommended)

Create a unit file at `/etc/systemd/system/prefect-docker-worker.service`:

```
[Unit]
Description=Prefect Docker Worker
After=network-online.target docker.service
Wants=network-online.target

[Service]
Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<account_id>/workspaces/<workspace_id>
Environment=PREFECT_API_KEY=<your_api_key>
# Pre-pull on each start (optional but keeps the cache warm if the VM restarts)
ExecStartPre=/usr/bin/docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
ExecStart=/opt/prefect-venv/bin/prefect worker start -p gce-docker --name docker-worker --concurrency 4
Restart=always
RestartSec=5
# If you use a non-root user:
# User=<youruser>
# Group=<youruser>

[Install]
WantedBy=multi-user.target
```

Enable and start it:

```
sudo systemctl daemon-reload
sudo systemctl enable --now prefect-docker-worker
sudo systemctl status prefect-docker-worker
```
9) Point a deployment at the Docker work pool
- In your deployment (via UI or `prefect deploy`), set:
  - Work pool: `gce-docker`
  - Job variables: `image=REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG` (if not set at the pool level)
- Keep your image tag immutable; update the deployment or pool when you ship a new image.

10) Test and validate startup time
- Create a tiny test flow and schedule it, or trigger a run.
- You should see container start within ~1–5 seconds when the image is cached.
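For example, a tiny test flow could be as simple as this sketch (hypothetical names; deploy it to the `gce-docker` pool and trigger a run):

```python
from prefect import flow

@flow(log_prints=True)
def startup_check():
    # If this prints within a few seconds of the scheduled time,
    # the worker and image cache are warm.
    print("container started")

if __name__ == "__main__":
    startup_check()
```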

Operational tips
- Concurrency: tune `--concurrency` based on VM CPU/RAM and your flow’s resource needs.
- Image cache: use versioned tags; avoid `:latest` if you need consistently fast starts.
- Housekeeping: periodically prune old images if disk fills:
```
docker image prune -f
```
- HA: if needed, run a second identical VM (each with a worker pointed at the same pool) for redundancy.

If you share your Artifact Registry image URL and workspace IDs, I can customize the systemd unit and a quick startup script for your exact setup.
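By the way, that housekeeping can be automated with a cron entry. A sketch, assuming a /etc/cron.d drop-in; tune the schedule to taste:

```
# /etc/cron.d/docker-prune (hypothetical path): weekly cleanup at 03:00 Sunday.
# Pruning only dangling images (-f skips the prompt) leaves the pre-pulled
# flow image in the cache, so starts stay fast.
0 3 * * 0 root docker image prune -f >/dev/null 2>&1
```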
f
@Marvin When I deploy a new version, I have a new Docker image. How can I update my work pool easily?
m
thought for 185.4 seconds
Short answer: keep your work pool generic and update the image at the deployment level. It’s faster and simpler than patching the pool template each release. Two practical ways to do it:

Option A — Update the deployment’s image (recommended)
- Prefect Cloud UI:
  - Deployments -> your deployment -> Edit -> Job variables -> set `image=REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG`
  - Save. Then warm the VM cache:
```
ssh <vm> "docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG"
```
- CLI (if you use `prefect deploy`):
  - Re-run deploy with the new image and the same pool:
```
prefect deploy ./path/to/flow.py:flow_fn \
  --name my-deploy \
  --pool gce-docker \
  --job-variable image=REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG
```
  - Warm the VM cache:
```
ssh <vm> "docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG"
```
- If you manage deployments via YAML, set it there:
```
work_pool:
  name: gce-docker
  job_variables:
    image: REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG
```
- For <10s starts, keep `image_pull_policy: IfNotPresent` and use immutable tags (e.g., 1.2.3). Always pre-pull the new tag on the VM before the first run.

Option B — Set the default image on the work pool (UI or CLI)
- UI: Work Pools -> select your pool -> Variables -> set `image=...`
- CLI: Update the base job template:
```
prefect work-pool get-default-base-job-template --type docker > docker.json
# edit docker.json: add a default under variables.properties.image.default
prefect work-pool update gce-docker --base-job-template docker.json
```
- Note: Pool-level defaults apply to all deployments using the pool unless the deployment overrides them. This is convenient if all deployments should use the same image, but most teams prefer per-deployment control.

Release workflow suggestion for speed and safety:
1) Build and push: `docker build` -> `docker push ...:1.2.3`
2) Pull on the VM: `ssh <vm> "docker pull ...:1.2.3"`
3) Update the deployment image (UI or `prefect deploy --job-variable image=...:1.2.3`)
4) Run a quick test flow; observe ~1–5s startup.

A short script tying those steps together is sketched at the end of this message.

Docs for reference:
- Docker worker: Docker worker
- Work pools concept: Work pools

If you share your deployment method (UI vs `prefect deploy`) and your image path, I can provide an exact command/script you can reuse each release.
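In the meantime, here is a generic release script you could adapt. A sketch only: the image path, VM name, zone, and flow entrypoint are placeholders to replace with your values:

```
#!/bin/bash
set -euo pipefail

# Placeholders — replace with your real values
IMAGE="REGION-docker.pkg.dev/PROJECT/REPO/IMAGE"   # hypothetical image path
TAG="${1:?usage: release.sh <tag>}"                # e.g. 1.2.3
VM="prefect-docker-vm"                             # hypothetical VM name
ZONE="europe-west1-b"                              # hypothetical zone

# 1) Build and push the new image
docker build -t "${IMAGE}:${TAG}" .
docker push "${IMAGE}:${TAG}"

# 2) Warm the VM's Docker cache before the first run
gcloud compute ssh "${VM}" --zone "${ZONE}" --command "docker pull ${IMAGE}:${TAG}"

# 3) Re-deploy pointing at the new tag
prefect deploy ./path/to/flow.py:flow_fn \
  --name my-deploy \
  --pool gce-docker \
  --job-variable image="${IMAGE}:${TAG}"
```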
f
@Marvin ok, so what are the steps, the first time, to create the machine and the work pool?
m
thought for 68.4 seconds
Perfect — here’s a first-time, end-to-end setup to run a Prefect 2 Docker work pool on a GCE VM with fast (<10s) starts.

What you’ll do
- Create a small GCE VM with a service account that can pull from Artifact Registry.
- Install Docker and Prefect, and configure Docker auth for Artifact Registry.
- Create a Docker work pool in Prefect Cloud and run a worker as a systemd service.
- Point your deployment at this pool and your image.

Prereqs to gather
- PROJECT_ID, REGION, ZONE
- Artifact Registry image: REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
- Prefect Cloud: ACCOUNT_ID, WORKSPACE_ID, API key

Step 1) Create a service account with Artifact Registry read access
```
export PROJECT_ID=<your-project>
gcloud config set project $PROJECT_ID

gcloud iam service-accounts create prefect-docker-worker \
  --display-name="Prefect Docker Worker"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:prefect-docker-worker@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
# Add more roles your flows need (e.g., Storage, BigQuery) as appropriate.
```
Step 2) Create a small GCE VM and attach the service account
- Suggested: e2-small or e2-medium, Debian 12, same region as your Artifact Registry.
```
export ZONE=<your-zone>
gcloud compute instances create prefect-docker-vm \
  --zone $ZONE \
  --machine-type e2-small \
  --image-family debian-12 \
  --image-project debian-cloud \
  --service-account prefect-docker-worker@${PROJECT_ID}.iam.gserviceaccount.com \
  --scopes https://www.googleapis.com/auth/cloud-platform
```
No inbound firewall rules are required for the worker.

Step 3) SSH in, install Docker, gcloud, and Prefect, and configure Docker auth
```
gcloud compute ssh prefect-docker-vm --zone $ZONE
```
On the VM:
```
# Install Docker
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
# Re-login or use sudo for docker

# Install Google Cloud CLI
sudo apt-get install -y apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | \
  sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install -y google-cloud-cli

# Configure docker to use gcloud as a credential helper for Artifact Registry
REGION=<your-region>  # e.g. europe-west1, us-central1
sudo gcloud auth configure-docker ${REGION}-docker.pkg.dev -q
```
Note: On a GCE VM, `gcloud` can use the attached VM service account. If you ever see auth issues pulling images, run `gcloud auth login` once interactively, or use a JSON key temporarily and then remove it.

Step 4) Pre-pull your image to warm the cache
```
docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
```
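If you want a fully immutable reference, you can also pull by digest. A sketch; the digest value is a placeholder:

```
docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE@sha256:<digest>
```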
Step 5) Install Prefect 2 and the Docker integration
```
sudo apt-get install -y python3-venv
python3 -m venv /opt/prefect-venv
/opt/prefect-venv/bin/pip install "prefect<3" prefect-docker
```
Step 6) Configure Prefect Cloud credentials on the VM

Either log in interactively:
```
/opt/prefect-venv/bin/prefect cloud login
```
Or set environment variables (useful for systemd):
- `PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>`
- `PREFECT_API_KEY=<your_api_key>`

Step 7) Create a Docker work pool

You can do this on the VM or locally:
```
/opt/prefect-venv/bin/prefect work-pool create "gce-docker" --type docker
```
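To confirm the pool exists, you can list your work pools from the same venv:

```
/opt/prefect-venv/bin/prefect work-pool ls
```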
Optional: In the Prefect UI, set pool variables to default your image and `image_pull_policy=IfNotPresent`.

Step 8) Run the worker as a systemd service (recommended)

Create the unit file, then enable and start it:

```
sudo tee /etc/systemd/system/prefect-docker-worker.service >/dev/null <<'EOF'
[Unit]
Description=Prefect Docker Worker
After=network-online.target docker.service
Wants=network-online.target

[Service]
Environment=PREFECT_API_URL=https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>
Environment=PREFECT_API_KEY=<YOUR_API_KEY>
# Optional: pull the image tag before the worker starts to keep the cache warm
ExecStartPre=/usr/bin/docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
ExecStart=/opt/prefect-venv/bin/prefect worker start -p gce-docker --name docker-worker --concurrency 4
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now prefect-docker-worker
sudo systemctl status prefect-docker-worker
```
Step 9) Point a deployment at the pool and your image
- In the UI: Deployments -> your deployment -> Edit -> set Work Pool to `gce-docker` and Job variables `image=REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG`
- Or via CLI:
```
prefect deploy ./path/to/flow.py:flow_fn \
  --name my-deploy \
  --pool gce-docker \
  --job-variable image=REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG
```
Step 10) Trigger a test run and confirm startup time
Your container should start in ~1–5s when the image is cached.

Operational tips
- Use immutable, versioned tags (e.g., 1.2.3). Pre-pull the new tag on the VM before the first run for sub‑10s starts:
```
ssh <vm> "docker pull REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:NEW_TAG"
```
- Tune worker `--concurrency` to match VM CPU/RAM.
- Periodically prune old images if disk fills:
```
docker image prune -f
```

References
- Docker worker: Docker worker
- Work pools: Work pools concept

Want me to generate a one-shot startup script (metadata startup-script) that bootstraps the VM end-to-end with your exact ACCOUNT_ID/WORKSPACE_ID and IMAGE URI? If you share those values, I’ll tailor it so the VM comes up fully configured and the worker is running.