# marvin-in-the-wild
@Marvin I have this flow run from a deployment defined below failing with the following error:
```
06:41:31.085 | ERROR   | prefect.worker.kubernetes.kubernetesworker acc28481-964b-45e9-a84f-25cdaa0529a1 - Flow run 2d2e82e1-2e37-4bc1-b25d-7ffc51ee637e did not pass checks and will not be submitted for execution
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 855, in _submit_run
    await self._check_flow_run(flow_run)
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 841, in _check_flow_run
    raise ValueError(
ValueError: Flow run UUID('2d2e82e1-2e37-4bc1-b25d-7ffc51ee637e') was created from deployment 'hello_flow_deployment' which is configured with a storage block. Please use an agent to execute this flow run.
```
Can you point out how the deployment should be defined to work with my k8s work pool, with the flow being executed as a Cloud Run job?
```python
from prefect.deployments import Deployment
from prefect_gcp.cloud_storage import GcsBucket
from prefect_gcp.cloud_run import CloudRunJob
from prefect.client.schemas.schedules import CronSchedule
from dataflows.flows.hello_flow import hello_flow

storage = GcsBucket.load("prefect-storage")
infrastructure = CloudRunJob.load("cloud-run-default")


def deploy_hello_flow():
    deployment = Deployment.build_from_flow(
        flow=hello_flow,
        name="hello_flow_deployment",
        work_queue_name="default",
        storage=storage,
        path="hello_flow",
        tags=["staging"],
        infrastructure=infrastructure,
        schedule=CronSchedule(cron="0 12 1 * *", timezone="UTC"),
    )
    deployment.apply()
```
Your message was too long by 132 tokens - please shorten it and try again. For reference, here's your message at the allowed limit:
I have this flow run from a deployment defined below failing with the following error:
```
06:41:31.085 | ERROR   | prefect.worker.kubernetes.kubernetesworker acc28481-964b-45e9-a84f-25cdaa0529a1 - Flow run 2d2e82e1-2e37-4bc1-b25d-7ffc51ee637e did not pass checks and will not be submitted for execution
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 855, in _submit_run
    await self._check_flow_run(flow_run)
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 841, in _check_flow_run
    raise ValueError(
ValueError: Flow run UUID('2d2e82e1-2e37-4bc1-b25d-7ffc51ee637e') was created from deployment 'hello_flow_deployment' which is configured with a storage block. Please use an agent to execute this flow run.
```
Can you point how the deployment should be defined to work with my k8s work pool, having the flow being executed as a Cloud Run job? ```from prefect.deployments import Deployment from prefect_gcp.cloud_storage import GcsBucket from prefect_gcp.cloud_run import CloudRun```
@Marvin I have a deployment defined as:
```python
Deployment.build_from_flow(
    flow=hello_flow,
    name="hello_flow_deployment",
    work_queue_name="default",
    storage=storage,
    path="hello_flow",
    tags=["staging"],
    infrastructure=infrastructure,
    schedule=CronSchedule(cron="0 12 1 * *", timezone="UTC"),
)
```
It fails with:
```
ValueError: Flow run UUID('2d2e82e1-2e37-4bc1-b25d-7ffc51ee637e') was created from deployment 'hello_flow_deployment' which is configured with a storage block. Please use an agent to execute this flow run.
```
Can you help me with the deployment definition?
Based on the Prefect documentation and your error message, your deployment is configured with a storage block, which is part of the legacy block-based deployment model; runs created from such deployments can only be executed by an agent, not by a worker. Storage blocks are no longer recommended. The current best practice is to use the `serve` or runner-based Python creation methods, or workers and work pools with `prefect deploy` via the CLI. To fix the error, avoid storage blocks and make sure your flow code is configured so that workers can retrieve it from the specified location; if no storage is configured, Prefect falls back to local storage options. For further information, see the Prefect documentation pages on storage. If you need detailed guidance on adjusting your deployment definition to the recommended model, let me know and I would be happy to walk you through it.
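The worker/work-pool pattern described above can be sketched as follows. This is a minimal illustration, not the poster's actual setup: the work pool name, image URL, and flow body are placeholders, and it assumes a recent Prefect 2.x release where `Flow.deploy()` accepts `work_pool_name`, `image`, and `cron`:

```python
# Sketch: a worker-based deployment with no storage or infrastructure blocks.
# The flow code is baked into the image; the work pool is served by a worker.
from prefect import flow


@flow
def hello_flow():
    print("hello")


if __name__ == "__main__":
    hello_flow.deploy(
        name="hello_flow_deployment",
        work_pool_name="my-k8s-pool",          # placeholder: a pool polled by a worker
        image="my-registry/hello-flow:latest",  # placeholder: image the worker runs
        cron="0 12 1 * *",                      # replaces the CronSchedule argument
    )
```

Because the deployment targets a work pool rather than storage and infrastructure blocks, a worker (not an agent) picks up and submits the run.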
@Marvin I adapted the deployment to:
```python
hello_flow.deploy(
    name="hello_flow_k8s_deployment",
    work_pool_name="default-worker-pool",
    image="europe-docker.pkg.dev/cambium-earth/prefect-runners-prod/default:latest",
    tags=["staging"],
    schedule=CronSchedule(cron="0 12 1 * *", timezone="UTC"),
)
```
I get:
```
07:17:52.126 | INFO    | httpx - HTTP Request: GET ***/work_pools/default-worker-pool "HTTP/2 200 OK"
Successfully built image 
'europe-docker.pkg.dev/cambium-earth/prefect-runners-prod/default:latest'

prefect.utilities.dockerutils.PushError: denied: Unauthenticated request. Unauthenticated requests do not have permission "artifactregistry.repositories.uploadArtifacts" on resource "projects/cambium-earth/locations/europe/repositories/prefect-runners-prod" (or it may not exist)
```
If the image is already built, what is trying to push to the Artifact Registry? Do I need this?
The error indicates an authentication problem when pushing the Docker image to Google Cloud's Artifact Registry: "denied: Unauthenticated request" means the necessary credentials were not presented or are incorrect. Even though your image is pre-built, Prefect pushes the image to the registry during deployment to ensure the most up-to-date version is available, and that push requires authentication. To resolve the error:

1. Authenticate the environment you deploy from (your local machine or CI/CD pipeline) with Google Cloud. Use `gcloud auth login` for manual processes, or set up a service account with the necessary roles and download its JSON key for automated environments.
2. Configure Docker to use those credentials with `gcloud auth configure-docker`, which updates the Docker configuration so Docker commands authenticate with your Google Cloud credentials.
3. Make sure the account or service account being used has the `artifactregistry.repositories.uploadArtifacts` permission on the repository.
4. If you're using Prefect Cloud or Prefect Server, check that any credentials configured there are correct as well.

Make sure the correct configuration steps have been followed per the Google Artifact Registry authentication documentation. You may also find these useful: the guide on pushing work to Google Artifact Registry, the Prefect Cloud work pools page, and the Kubernetes integration guide. If you have already performed these steps and still see the error, troubleshoot the permissions or your CI/CD pipeline's environment settings.