# ask-community
v
Hi. Running a flow with an S3 storage block on workers leads to the following error log on the worker:
07:40:24.914 | ERROR   | prefect.worker.kubernetes.kubernetesworker f86aeb70-1ac4-44c6-ba05-2241b65aa414 - Flow run a68ec8fc-daac-41a9-9c92-7c69a601e36b did not pass checks and will not be submitted for execution
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 785, in _submit_run
    await self._check_flow_run(flow_run)
  File "/usr/local/lib/python3.11/site-packages/prefect/workers/base.py", line 771, in _check_flow_run
    raise ValueError(
ValueError: Flow run UUID('a68ec8fc-daac-41a9-9c92-7c69a601e36b') was created from deployment 'sample' which is configured with a storage block. Workers currently only support local storage. Please use an agent to execute this flow run.
But the flow run was submitted and completed without any issues. Could you please explain why I see such weird behaviour? Thanks
j
Hi! Workers are meant to work with the new version of deployments (defined in the prefect.yaml file) - they do not work with infra or storage blocks. See here for more details.
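For reference, a minimal worker-style deployment definition in prefect.yaml might look roughly like this (a sketch; the project name, entrypoint path, and work pool name are placeholders, not values from this thread):

# prefect.yaml (sketch with placeholder names)
name: my-project
prefect-version: 2.11.0                  # placeholder; match your installed Prefect version

deployments:
  - name: sample
    entrypoint: flows/sample.py:sample   # path/to/file.py:flow_function
    work_pool:
      name: my-k8s-pool                  # a Kubernetes work pool served by the worker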
v
But it actually works.
j
Where are you seeing the completed flow run logs? In the traceback you provided, I only see the errors raised.
v
This error is from the worker logs, but the flow run completed fine and I can see it in the UI with logs and a Completed status.
j
If it did succeed, that certainly isn’t expected behavior. You should migrate to use the new version of deployments.
Can you share the logs of the completed flow run?
v
2023-08-29 07:40:24.807 | INFO    | Flow run 'sample/sample (2023-01-02)' - Worker 'KubernetesWorker f86aeb70-1ac4-44c6-ba05-2241b65aa414' submitting flow run 'a68ec8fc-daac-41a9-9c92-7c69a601e36b'
2023-08-29 07:40:46.924 | INFO    | Flow run 'sample/sample (2023-01-02)' - Downloading flow code from storage at 'sample'
...my tasks logs
2023-08-29 07:49:10.784 | INFO    | Flow run 'sample/sample (2023-01-02)' - Finished in state Completed('All states completed.')
How can I pass a custom Docker image, for example for different deployments?
j
You can define job variables at the deployment level, including the image to use for the deployment.
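For example, in prefect.yaml the image can be set per deployment through the work pool's job variables (a sketch; the pool name and image tag below are placeholders):

deployments:
  - name: sample
    entrypoint: flows/sample.py:sample
    work_pool:
      name: my-k8s-pool
      job_variables:
        image: my-registry/sample-flow:1.0   # placeholder image; overrides the work pool default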
v
Is there a Python API to deploy the flow with the new approach?
Currently we use:
from prefect.deployments import Deployment

# Block-based deployment (agent-era API) using storage and infrastructure blocks
deployment = Deployment.build_from_flow(
    flow=flow,
    schedule=schedule,
    parameters=parameters,
    name=name,
    storage=storage,
    work_queue_name=work_queue.value,
    work_pool_name=work_pool.value,
    path=path,
    infrastructure=infrastructure,
    tags=tags,
)
deployment.apply()
j
Not currently. The deployment must be defined in YAML. There is a CLI workflow that will help generate your first deployment in the prefect.yaml file.
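For reference, the fields from the Deployment.build_from_flow call above map roughly onto a deployment entry in prefect.yaml like this (a sketch with placeholder values; the storage and infrastructure blocks are replaced by the pull step and the work pool's job variables, and running prefect deploy from the project root can help generate this interactively):

deployments:
  - name: sample                           # was name=
    entrypoint: flows/sample.py:sample     # replaces flow= and path=
    parameters: {}                         # was parameters=
    tags: []                               # was tags=
    schedule:
      cron: "0 6 * * *"                    # placeholder; was schedule=
    work_pool:
      name: my-k8s-pool                    # was work_pool_name=
      work_queue_name: default             # was work_queue_name=
      job_variables:
        image: my-registry/sample-flow:1.0 # replaces infrastructure=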
v
I see, let me read more about that. Thanks.
@Jamie Zieziula The error message says "Workers currently only support local storage". Is there a plan to add storage block support for workers?
j
Instead of storage blocks, you'll define where your code lives in the pull step of your deployment in the prefect.yaml file.
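For S3 specifically, a sketch of what that pull step might look like, assuming the prefect-aws collection is installed where the flow runs (the bucket, folder, and credentials block names below are placeholders):

pull:
  - prefect_aws.deployments.steps.pull_from_s3:
      bucket: my-bucket        # placeholder bucket holding the flow code
      folder: flows/sample     # placeholder folder within the bucket
      credentials: "{{ prefect.blocks.aws-credentials.my-creds }}"   # optional; placeholder block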
v
Thanks!
j
np!