Tu Vu (08/07/2025, 9:28 AM)
Marvin (08/07/2025, 9:29 AM)
1. Reduce the Polling Interval:
Set the PREFECT_WORKER_QUERY_SECONDS environment variable in your worker configuration. For example:
```yaml
env:
  PREFECT_WORKER_QUERY_SECONDS: "5" # Poll every 5 seconds instead of 15
```
2. Optimize Prefetch Time:
If your Cloud Run infrastructure takes longer to provision, you can increase the prefetch time:
```yaml
env:
  PREFECT_WORKER_PREFETCH_SECONDS: "20" # Start provisioning 20 seconds before scheduled time
```
3. Check Worker Health:
- Ensure your worker is actually running and healthy
- Monitor worker logs for any connection issues or errors
- Verify that the worker has the correct permissions to create Cloud Run jobs
4. Infrastructure Optimization:
- Consider using minimum instances in Cloud Run to keep some capacity warm
- Optimize your container image size to reduce startup time
- Use appropriate memory and CPU settings for your workload
### Best Practices
1. Monitor Worker Logs: Keep an eye on your worker logs to ensure it's properly polling and submitting jobs.
2. Balance Polling Frequency: While you can reduce the polling interval, setting it too low (e.g., 1 second) might cause unnecessary API calls. A value between 5-10 seconds is usually a good balance.
3. Use Work Pool Health Checks: Monitor your work pool's health in the Prefect UI to ensure workers are connected and operating correctly.
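To make the polling-frequency trade-off above concrete, here is a back-of-the-envelope sketch (illustrative arithmetic only, not Prefect internals): with a poll interval of N seconds, the worst-case pickup delay is roughly N seconds, the average is roughly N/2, and the worker issues about 3600/N poll requests per hour.

```python
# Back-of-the-envelope polling trade-off (illustrative only, not Prefect internals)
def polling_tradeoff(interval_s: float) -> dict:
    """Estimate pickup latency and poll-request volume for a given poll interval."""
    return {
        "worst_case_pickup_s": interval_s,    # run is scheduled just after a poll
        "avg_pickup_s": interval_s / 2,       # run lands uniformly between polls
        "polls_per_hour": 3600 / interval_s,  # steady-state request volume
    }

for interval in (1, 5, 15):
    est = polling_tradeoff(interval)
    print(f"{interval:>2}s poll -> avg pickup {est['avg_pickup_s']:.1f}s, "
          f"{est['polls_per_hour']:.0f} polls/hour")
```

At 1 second you cut average pickup latency to ~0.5s but make 3600 polls/hour; at 5 seconds you still average ~2.5s pickup with a fifth of the traffic, which is why 5-10 seconds is usually the sweet spot.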
### Additional Considerations
If you're still experiencing issues after implementing these solutions, you might want to:
1. Check if there are any quota limits or restrictions in your Google Cloud environment
2. Verify network connectivity between your worker and Prefect Cloud
3. Ensure your Cloud Run service account has all necessary permissions
Would you like me to provide more specific details about any of these aspects or help you troubleshoot further?
Tu Vu (08/07/2025, 9:32 AM)
# Welcome to your prefect.yaml file! You can use this file for storing and managing
# configuration for deploying your flows. We recommend committing this file to source
# control along with your flow code.

# Generic metadata about this project
name: prefect-pipeline
prefect-version: 3.4.11

# build section allows you to manage and build docker image
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: xxx
    tag: latest
    dockerfile: Dockerfile.prefect
    platform: linux/amd64

# push section allows you to manage if and how this project is uploaded to remote locations
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'

# pull section allows you to provide instructions for cloning this project in remote locations
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /app

# the deployments section allows you to provide configuration for deploying flows
deployments:
- name: xxx
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: xxx
  parameters:
    user_id: '1'
  work_pool:
    name: cloudrun-worker-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
Marvin (08/07/2025, 9:32 AM)
```yaml
# Welcome to your prefect.yaml file
name: prefect-pipeline
prefect-version: 3.4.11

# Build section for docker image
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: xxx
    tag: latest
    dockerfile: Dockerfile.prefect
    platform: linux/amd64

# Push section for docker image
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'

# Pull section
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /app

# Deployments section with optimized configuration
deployments:
- name: xxx
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: xxx
  parameters:
    user_id: '1'
  work_pool:
    name: cloudrun-worker-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
      # Add Cloud Run specific configurations
      cpu: 1
      memory: 2Gi # Adjust based on your needs
      env:
        PREFECT_WORKER_QUERY_SECONDS: "5" # More frequent polling
        PREFECT_WORKER_PREFETCH_SECONDS: "15" # Earlier job preparation
      # Cloud Run specific options for faster startup
      service_account_name: "your-service-account@your-project.iam.gserviceaccount.com" # Replace with your SA
      command: null # Let the container's CMD/ENTRYPOINT handle execution
      startup_timeout: "300" # 5 minutes max startup time
      execution_timeout: "3600" # 1 hour max execution time
      min_cpu_platform: "Intel Skylake" # Optional: specify CPU platform
      vpc_connector: null # Add if you need VPC access
      vpc_connector_egress: null # Add if needed
      cloudsql_connections: [] # Add if needed
```
Key changes and additions:
1. Worker Configuration:
- Added PREFECT_WORKER_QUERY_SECONDS: "5" to reduce the polling interval
- Added PREFECT_WORKER_PREFETCH_SECONDS: "15" for earlier job preparation
2. Resource Configuration:
- Added explicit cpu and memory settings
- You should adjust these based on your workload requirements
3. Cloud Run Optimizations:
- Added startup_timeout to ensure adequate time for container startup
- Added execution_timeout to prevent indefinite running
- Added a service_account_name placeholder (replace with your actual service account)
4. Optional Configurations (configure as needed):
- `vpc_connector`: add if you need VPC access
- `cloudsql_connections`: add if you need Cloud SQL access
- `min_cpu_platform`: specify if you need a particular CPU platform
To use this configuration:
1. Replace your-service-account@your-project.iam.gserviceaccount.com with your actual Google Cloud service account
2. Adjust the resource settings (cpu and memory) based on your workload requirements
3. If you need VPC access or Cloud SQL connections, configure those sections
4. The timeouts are set to reasonable defaults, but you can adjust them based on your needs:
- `startup_timeout`: how long to wait for the container to start
- `execution_timeout`: maximum runtime for the flow
This configuration should help reduce the pending time for your flow runs while providing better resource management and reliability.
Would you like me to explain any of these configurations in more detail or help you adjust them for your specific use case?
Tu Vu (08/08/2025, 7:23 AM)
raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__
prefect.exceptions.PrefectHTTPStatusError: Client error '401 Unauthorized' for url 'http://xxx:4200/api/work_pools/cloudrun-worker-pool'
Response: ***'exception_message': 'Unauthorized'***
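A 401 from the work-pool endpoint generally means the worker is reaching the API but not presenting credentials the server accepts. As a sketch (the setting names are standard Prefect settings; the values are placeholders you must fill in), the worker's own environment needs a matching API URL and valid auth:

```yaml
# Worker environment (sketch; placeholder values)
env:
  PREFECT_API_URL: "http://<server-host>:4200/api"  # must match the server the worker polls
  PREFECT_API_KEY: "<api-key>"                      # when authenticating to Prefect Cloud
  # For a self-hosted server with basic auth enabled, an auth string is used instead:
  # PREFECT_API_AUTH_STRING: "admin:<password>"
```

If the server was recently given an API key or basic-auth requirement, any worker started before that change will keep failing with 401 until it is restarted with the new credentials.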
Marvin (08/08/2025, 7:32 AM)
Tu Vu (08/08/2025, 7:36 AM)
# Welcome to your prefect.yaml file! You can use this file for storing and managing
# configuration for deploying your flows. We recommend committing this file to source
# control along with your flow code.

# Generic metadata about this project
name: prefect-pipeline
prefect-version: 3.4.11

# build section allows you to manage and build docker image
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: xxx
    tag: latest
    dockerfile: Dockerfile.prefect
    platform: linux/amd64

# push section allows you to manage if and how this project is uploaded to remote locations
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'

# pull section allows you to provide instructions for cloning this project in remote locations
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /app

# the deployments section allows you to provide configuration for deploying flows
deployments:
- name: xxx
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: xxx
  parameters:
    user_id: '1'
  work_pool:
    name: cloudrun-worker-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
Marvin (08/08/2025, 7:37 AM)
Tu Vu (08/08/2025, 7:48 AM)
Marvin (08/08/2025, 7:49 AM)
Tu Vu (08/08/2025, 7:56 AM)
Marvin (08/08/2025, 7:58 AM)
Tu Vu (08/08/2025, 8:39 AM)
Marvin (08/08/2025, 8:41 AM)
Tu Vu (08/08/2025, 9:14 AM)
# Welcome to your prefect.yaml file! You can use this file for storing and managing
# configuration for deploying your flows. We recommend committing this file to source
# control along with your flow code.

# Generic metadata about this project
name: prefect-pipeline
prefect-version: 3.4.11

# build section allows you to manage and build docker image
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: us-central1-docker.pkg.dev/xxx
    tag: {git_sha_tag}
    dockerfile: Dockerfile.prefect
    platform: linux/amd64

# push section allows you to manage if and how this project is uploaded to remote locations
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'

# pull section allows you to provide instructions for cloning this project in remote locations
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /app

# the deployments section allows you to provide configuration for deploying flows
deployments:
- name: company-portfolio-deploy
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: xxx:xxx_flow
  parameters:
    user_id: '1'
  work_pool:
    name: docker-worker-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
      env:
        PREFECT_WORKER_QUERY_SECONDS: "5" # More frequent polling
        PREFECT_WORKER_PREFETCH_SECONDS: "15" # Earlier job preparation
Marvin (08/08/2025, 9:14 AM)
📊 Tools used:
• check_cli_command (2x)
• explore_module_offerings (2x)
Marvin (08/08/2025, 9:18 AM)
Tu Vu (08/08/2025, 9:49 AM)
Worker 'DockerWorker 06435fbc-0fb3-45be-a935-a536aec356c0' submitting flow run 'e9d77bc9-24c7-4fb2-919a-e88e47748bc8'
04:44:11 PM
prefect.flow_runs.worker
Failed to submit flow run 'e9d77bc9-24c7-4fb2-919a-e88e47748bc8' to infrastructure.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 493, in _make_request
conn.request(
File "/usr/local/lib/python3.12/site-packages/urllib3/connection.py", line 494, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1333, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1093, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1037, in send
self.connect()
File "/usr/local/lib/python3.12/site-packages/docker/transport/unixconn.py", line 26, in connect
sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 841, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/urllib3/util/retry.py", line 474, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 787, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/urllib3/connectionpool.py", line 493, in _make_request
conn.request(
File "/usr/local/lib/python3.12/site-packages/urllib3/connection.py", line 494, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1333, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1093, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1037, in send
self.connect()
File "/usr/local/lib/python3.12/site-packages/docker/transport/unixconn.py", line 26, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 223, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/utils/decorators.py", line 44, in inner
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 246, in _get
return self.get(url, **self._set_request_timeout(kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/requests/adapters.py", line 682, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 629, in _get_client
docker_client = docker.from_env(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/client.py", line 94, in from_env
return cls(
^^^^
File "/usr/local/lib/python3.12/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 207, in __init__
self._version = self._retrieve_server_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 230, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/prefect/workers/base.py", line 1262, in _submit_run_and_capture_errors
result = await self.run(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 457, in run
container, created_event = await run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 233, in run_sync_in_worker_thread
result = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 967, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/prefect/utilities/asyncutils.py", line 243, in call_with_mark
return call()
^^^^^^
File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 712, in _create_and_start_container
docker_client = self._get_client()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 632, in _get_client
raise RuntimeError("Could not connect to Docker.") from exc
RuntimeError: Could not connect to Docker.
04:44:11 PM
prefect.flow_runs.worker
Reported flow run 'e9d77bc9-24c7-4fb2-919a-e88e47748bc8' as crashed: Flow run could not be submitted to infrastructure:
RuntimeError('Could not connect to Docker.')
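The root cause in the traceback above is the `FileNotFoundError` from `sock.connect(self.unix_socket)`: the Docker worker process has no Docker socket at `/var/run/docker.sock`, which is typical when the worker itself runs in a container without the host socket mounted. One common fix (a sketch only; the service name, pool name, and API URL are placeholders, and the worker image must have `prefect-docker` installed) is to mount the host's socket into the worker container:

```yaml
# docker-compose.yml sketch (placeholder names and values)
services:
  prefect-worker:
    image: prefecthq/prefect:3-latest        # must have prefect-docker available
    command: prefect worker start --pool my-docker-pool --type docker
    environment:
      PREFECT_API_URL: "http://<server-host>:4200/api"
    volumes:
      # Gives the worker access to the host Docker daemon so it can
      # create flow-run containers; without this, docker.from_env() fails
      # exactly as in the traceback above.
      - /var/run/docker.sock:/var/run/docker.sock
```

Note that mounting the Docker socket grants the worker effectively root-level control of the host's Docker daemon, so it should only be done on hosts you trust the worker to manage.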
Marvin (08/08/2025, 9:49 AM)
Tu Vu (08/08/2025, 9:51 AM)
File "/usr/local/lib/python3.12/site-packages/docker/api/client.py", line 230, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
File "/usr/local/lib/python3.12/site-packages/prefect_docker/worker.py", line 632, in _get_client
raise RuntimeError("Could not connect to Docker.") from exc
RuntimeError: Could not connect to Docker.
Marvin (08/08/2025, 9:51 AM)
📊 Tools used:
• explore_module_offerings (4x)
Marvin (08/08/2025, 9:52 AM)
Tu Vu (08/08/2025, 10:28 AM)
work_pool:
  name: my-docker-pool
  work_queue_name: default
  job_variables:
    image: '{{ build_image.image }}'
    image_pull_policy: IfNotPresent
    env:
      PREFECT_WORKER_QUERY_SECONDS: "5" # More frequent polling
      PREFECT_WORKER_PREFETCH_SECONDS: "15" # Earlier job preparation
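One caveat worth flagging about this snippet (based on how Prefect workers treat `job_variables`; verify against your Prefect version): `env` under `job_variables` is injected into the flow-run container, not into the worker process, so `PREFECT_WORKER_QUERY_SECONDS` set here will not change how often the worker polls. The worker settings need to live in the environment of the process that runs `prefect worker start`, roughly:

```yaml
# Environment of the worker process itself (sketch; not part of prefect.yaml),
# e.g. in the compose service or systemd unit that runs `prefect worker start`
environment:
  PREFECT_WORKER_QUERY_SECONDS: "5"      # how often this worker polls for runs
  PREFECT_WORKER_PREFETCH_SECONDS: "15"  # how early it starts provisioning
```

With that split, `job_variables.env` stays reserved for variables your flow code actually needs at runtime.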
Marvin (08/08/2025, 10:33 AM)