@Marvin how do i create and start using a gcp...
# ask-marvin
a
@Marvin how do i create and start using a gcp cloud run work pool
m
thought for 414.8 seconds
n
idk if marvin will pull this out, but you can use the gcp push pool and that's probably the easiest way. you just auth your gcp cli and run the provision command here
that works if you don't have a requirement to run the worker yourself
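For reference, a rough sketch of that push-pool route (it assumes Prefect Cloud, since push pools are Cloud-only as noted later in this thread, plus an authed gcloud CLI; the pool name is a placeholder):
```bash
# authenticate the gcloud CLI that the provisioner will drive
gcloud auth login
gcloud config set project <your-project-id>

# create a *push* Cloud Run work pool and let Prefect provision the GCP infra for it
prefect work-pool create my-cloud-run-push-pool \
  --type cloud-run:push \
  --provision-infra
```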
m
Here’s the quickest path to a working GCP Cloud Run work pool in Prefect 3.x. Nothing could possibly go wrong… assuming credentials.

TL;DR steps
- Install GCP extras
- Create a Cloud Run work pool
- Start a worker (with GCP auth)
- Deploy your flow to that pool with region/service account/image set

1) Install GCP extras on the machine that will run the worker
```bash
pip install "prefect[gcp]"
# or with uv
uv pip install "prefect[gcp]"
```
2) Create the Cloud Run work pool
```bash
prefect work-pool create my-cloud-run-pool --type cloud-run
```
Optionally, inspect the default job template to see supported variables (cpu, memory, timeout, region, service_account_name, vpc connector, etc.):
```bash
prefect work-pool get-default-base-job-template --type cloud-run
```
3) Authenticate to GCP for the worker
Use Application Default Credentials on the worker machine. Either:
- Service account key file:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/sa.json
export GOOGLE_CLOUD_PROJECT=<your-project-id>
```
- Or `gcloud` ADC:
```bash
gcloud auth application-default login
gcloud config set project <your-project-id>
```
Ensure the service account has permissions to run Cloud Run Jobs and pull images (e.g. Cloud Run Admin or Developer, Service Account User, Artifact Registry Reader, and Token Creator on the worker’s SA if needed).
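As a rough sketch of those grants (the `prefect-runner` service account name and the exact roles here are assumptions to adapt to your setup):
```bash
PROJECT_ID="<your-project-id>"
SA_EMAIL="prefect-runner@${PROJECT_ID}.iam.gserviceaccount.com"  # placeholder service account

# allow creating/executing Cloud Run Jobs, acting as the runtime SA, and pulling images
for ROLE in roles/run.developer roles/iam.serviceAccountUser roles/artifactregistry.reader; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${SA_EMAIL}" \
    --role="$ROLE"
done
```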
4) Start the worker
```bash
prefect worker start -p my-cloud-run-pool
```
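If the worker points at a self-hosted Prefect server (as turns out to be the case later in this thread) rather than Prefect Cloud, it also needs to know where the API lives; a minimal sketch, assuming the default port and a placeholder hostname:
```bash
# point the worker at your Prefect server's API before starting it
prefect config set PREFECT_API_URL="http://<your-prefect-server>:4200/api"
```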
Keep this running. It will submit your flow runs to Cloud Run Jobs.

5) Deploy a flow to the pool (Python example)
- Build/push an image accessible by Cloud Run (Artifact Registry recommended; see the registry setup sketch after the code below).
- Set region and service account via job variables.
```python
from prefect import flow
from prefect.docker import DockerImage

@flow
def hello():
    print("Hello from Cloud Run")

# Example image in Artifact Registry:
image_ref = "us-central1-docker.pkg.dev/<project>/<repo>/prefect-flows:latest"

# This will build and push the image (build=True, push=True by default)
hello.deploy(
    name="hello-cloud-run",
    work_pool_name="my-cloud-run-pool",
    image=DockerImage(name=image_ref),
    job_variables={
        "region": "us-central1",
        "service_account_name": "prefect-runner@<project>.iam.gserviceaccount.com",
        # optional tuning:
        "cpu": "1",
        "memory": "2Gi",
        "timeout": 3600,
        # if you need VPC:
        # "vpc_connector_name": "projects/<project>/locations/us-central1/connectors/<connector>"
    },
)
```
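If you don’t have an Artifact Registry repo yet, a quick sketch of creating one and letting your local Docker client push to it (repo name, region, and project are placeholders):
```bash
# create a Docker-format Artifact Registry repo in the same region as your Cloud Run Jobs
gcloud artifacts repositories create <repo> \
  --repository-format=docker \
  --location=us-central1 \
  --project=<your-project-id>

# authenticate your local Docker client to that registry so deploy() can push the image
gcloud auth configure-docker us-central1-docker.pkg.dev
```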
Alternatively, if you prefer CLI deployments from a project, set the work pool and job variables in your project’s deployment config (prefect.yaml) and run:
```bash
prefect deploy ./path/to/flow.py:hello -n hello-cloud-run
```
(Note: in Prefect 3.x, use `prefect deploy` and not the removed `prefect deployment build` command.)

Helpful commands you can use
- See the default variables for the Cloud Run worker type:
```bash
prefect work-pool get-default-base-job-template --type cloud-run
```
- Inspect your pool:
```bash
prefect work-pool inspect my-cloud-run-pool
```
A few gotchas (you’re welcome)
- The worker machine must have `prefect[gcp]` installed; it’s the thing that actually calls Cloud Run (a quick check is sketched below).
- Your image must be accessible to Cloud Run (an Artifact Registry repo in the same project/region is safest).
- Set region and service account via `job_variables`, or by editing the pool’s base job template if you want globals.

Want me to tailor this to your exact project/region/registry? Tell me:
- GCP project ID
- Region for Cloud Run Jobs (e.g., us-central1)
- Service account email to run jobs
- Artifact Registry image URL you want to use
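For that first gotcha, a quick sanity check on the machine that will run the worker:
```bash
# both should succeed wherever `prefect worker start` will run
python -c "import prefect_gcp"
pip show prefect-gcp
```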
a
@Marvin why should i run the prefect work-pool locally?
m
thought for 14.2 seconds
Short answer: you don’t “run a work pool” anywhere. A work pool is just configuration in the Prefect API. You run a worker that polls the pool. Subtle, but important.

Why run the worker locally?
- Speedy development and debugging: instant logs in your terminal, easy breakpoints, no infra yak-shaving.
- Zero ops to start: no VM, no Kubernetes, no tickets to IT.
- Still works with remote execution: even for a Cloud Run pool, the local worker just calls the GCP API to launch Cloud Run Jobs; your flows still run in Cloud Run.
- Local access: if you’re using the Process worker (local execution), you can read local files and use your local environment.

Why not run it locally (for production)?
- It must be always-on. Laptops sleep. Coffee breaks happen.
- Reliability and HA: single point of failure, no scaling, no restart policy.
- Network/security: corporate VPNs, IP allowlists, and credential hygiene are easier on managed hosts.
- Separation of duties: better to run under a service account on stable cloud compute.

Where to run it instead (prod):
- A VM (GCE) under systemd or Docker, a container service, or Kubernetes (see the sketch below).
- Multiple workers for redundancy and throughput.
- For Cloud Run pools specifically: the worker can still live anywhere; it just needs GCP credentials and network access to GCP.

If you actually meant “should I run the worker locally for a Cloud Run pool?” — yes, for dev it’s great. For prod, park it on durable compute. Want recommendations for your environment? Tell me where you prefer to host the worker (GCE VM, GKE, etc.) and your GCP region.
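As one concrete way to park it on durable compute, here is a rough sketch of running the worker as a container on a VM. The `prefecthq/prefect:3-latest` tag, the `EXTRA_PIP_PACKAGES` install hook, and the key-file mount are assumptions to verify against your Prefect version; on a GCE VM with an attached service account you can usually drop the key file and rely on ADC.
```bash
# run the Cloud Run worker as an always-on container
docker run -d --name prefect-worker --restart unless-stopped \
  -e PREFECT_API_URL="http://<your-prefect-server>:4200/api" \
  -e EXTRA_PIP_PACKAGES="prefect-gcp" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa.json \
  -v /path/to/sa.json:/secrets/sa.json:ro \
  prefecthq/prefect:3-latest \
  prefect worker start -p my-cloud-run-pool
```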
a
@Nate why do i have to run "prefect worker start --pool my-cloud-run-pool" on my machine, shouldn't the work pool start on gcp
n
just to reiterate, marvin is correct here
> Short answer: you don’t “run a work pool” anywhere. A work pool is just configuration in the Prefect API. You run a worker that polls the pool. Subtle, but important.
the worker is the client-side process that polls the work pool (a work pool is roughly a pub/sub “topic” with an associated job template)

if you run the worker yourself, then you might want to run it as a container in your GCP

yep, but you could also run it locally from a place that's authed to the same project via some credentials JSON

alternatively, if you don't need the control that running your own worker offers, then you don't need to ever run `prefect worker start` and instead you just use the provisioner as i linked earlier to create a "push" work pool, where Prefect Cloud runs the worker on your behalf as needed (using creds you stick on the work pool)
a
okay so i need to create a cloud run instance and run it yes? and give the creds to prefect server so that it can access the cloud run instance
when i do prefect worker start locally, i am just emulating gcp cloud run rather than actually running it there
n
are you using prefect cloud or OSS
a
OSS
n
ok so push pools are cloud only, so your only choice with OSS is to run your own worker. then the choice is just where to run `prefect worker start`

this guide should apply
a
finally able to run the flow... now there is a new issue: ValueError: The existing repository at /opt/prefect/armor-adk-deployments-prefect does not match the configured repository https://[redacted-token]@github.com/[redacted-org]/[redacted-repo].git am using GitRepository for the deployment
@Marvin finally able to run the flow... now there is a new issue: ValueError: The existing repository at /opt/prefect/armor-adk-deployments-prefect does not match the configured repository https://[redacted-token]@github.com/[redacted-org]/[redacted-repo].git am using GitRepository for the deployment know why this is happening?
m
Error: No module named 'prefect_git'
Error: No module named 'prefect.storage'