# ask-marvin
a
@Marvin I am getting this error
```
Encountered exception during execution: Traceback (most recent call last):
  File "/.venv/lib/python3.10/site-packages/prefect/engine.py", line 2055, in orchestrate_task_run
    result = await call.aresult()
  File "/.venv/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 293, in aresult
    return await asyncio.wrap_future(self.future)
  File "/.venv/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 318, in _run_sync
    result = self.fn(*self.args, **self.kwargs)
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 1060, in extract_table_load_dw
    etl_dataframe(
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 1248, in etl_dataframe
    load_table_to_dw(
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 987, in load_table_to_dw
    dd_df.to_sql(
  File "/.venv/lib/python3.10/site-packages/dask/dataframe/core.py", line 1859, in to_sql
    return to_sql(
  File "/.venv/lib/python3.10/site-packages/dask/dataframe/io/sql.py", line 592, in to_sql
    dask_compute(result)
  File "/.venv/lib/python3.10/site-packages/dask/base.py", line 599, in compute
    results = schedule(dsk, keys, **kwargs)
  File "/.venv/lib/python3.10/site-packages/distributed/client.py", line 3168, in get
    results = self.gather(packed, asynchronous=asynchronous, direct=direct)
  File "/.venv/lib/python3.10/site-packages/distributed/client.py", line 2328, in gather
    return self.sync(
  File "/.venv/lib/python3.10/site-packages/distributed/utils.py", line 345, in sync
    return sync(
  File "/.venv/lib/python3.10/site-packages/distributed/utils.py", line 412, in sync
    raise exc.with_traceback(tb)
  File "/.venv/lib/python3.10/site-packages/distributed/utils.py", line 385, in f
    result = yield future
  File "/.venv/lib/python3.10/site-packages/tornado/gen.py", line 767, in run
    value = future.result()
  File "/.venv/lib/python3.10/site-packages/distributed/client.py", line 2191, in _gather
    raise exception.with_traceback(traceback)
distributed.scheduler.KilledWorker: Attempted to run task to_sql-7626bcf3-0ee5-4844-99bd-83c1ba5df0c1 on 3 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:38723. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
```
m
Your message was too long by 440 tokens - please shorten it and try again. For reference, here's your message at the allowed limit:
I am getting this error
```
Encountered exception during execution: Traceback (most recent call last):
  File "/.venv/lib/python3.10/site-packages/prefect/engine.py", line 2055, in orchestrate_task_run
    result = await call.aresult()
  File "/.venv/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 293, in aresult
    return await asyncio.wrap_future(self.future)
  File "/.venv/lib/python3.10/site-packages/prefect/_internal/concurrency/calls.py", line 318, in _run_sync
    result = self.fn(*self.args, **self.kwargs)
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 1060, in extract_table_load_dw
    etl_dataframe(
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 1248, in etl_dataframe
    load_table_to_dw(
  File "/home/appuser/prefect_project/flows/s3_to_dw.py", line 987, in load_table_to_dw
    dd_df.to_sql(
  File "/.venv/lib/python3.10/site-packages/dask/dataframe/core.py", line 1859, in to_sql
    return to_sql(
  File "/.
```
a
@Marvin
```
Finished in state Failed('Task run encountered an exception KilledWorker: Attempted to run task to_sql-7626bcf3-0ee5-4844-99bd-83c1ba5df0c1 on 3 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:38723. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.')
```
m
Your issue seems related to Dask's `KilledWorker` exception, which is raised when a task repeatedly fails on different workers. A common cause is the workers running out of memory. For more information, see this GitHub issue discussing the same problem and potential solutions: https://github.com/PrefectHQ/prefect/issues/12968

It's recommended to inspect your worker logs for additional detail on the failures. If necessary, you can debug the deployment locally by running `prefect worker start -p my-local-machine`.

For a more comprehensive understanding, see the docs on worker implementation and handling infrastructure teardown:
- Developing a new worker type: https://docs.prefect.io/latest/guides/deployment/developing-a-new-worker-type
- Overview of work pools and queues: https://docs.prefect.io/latest/concepts/work-pools

If you need specific ways to set up workers or handle exceptions, the links above should serve as a starting point.
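If the workers are indeed dying from memory exhaustion during `to_sql`, here is a minimal sketch (not from this thread) of one way to bound memory per worker: give each worker an explicit `memory_limit`, shrink partitions before writing, and batch the inserts. The dataframe source, table name, and connection URI are hypothetical placeholders; the traceback only tells us the call was `dd_df.to_sql(...)`.

```python
# Sketch: mitigating a likely-OOM KilledWorker during Dask's to_sql.
# `dd_df`, the S3 path, and the DSN are placeholders, not from the thread.
import dask.dataframe as dd
from distributed import Client, LocalCluster

# An explicit memory budget lets the nanny restart a worker gracefully
# (and log why) instead of the OS OOM-killing it mid-task.
cluster = LocalCluster(n_workers=2, threads_per_worker=1, memory_limit="4GiB")
client = Client(cluster)

dd_df = dd.read_parquet("s3://example-bucket/table/")  # placeholder source

# Smaller partitions mean each to_sql task materializes less data at once.
dd_df = dd_df.repartition(partition_size="100MB")

dd_df.to_sql(
    "target_table",
    uri="postgresql://user:pass@dw-host:5432/dw",  # placeholder DSN
    if_exists="append",
    index=False,
    chunksize=10_000,  # rows per INSERT batch within each partition
    parallel=False,    # the default: write partitions one at a time
)
```

With a capped `memory_limit`, a memory-terminated worker shows up clearly in the worker logs rather than dying silently, which makes the "Inspecting worker logs" step from the error message actually actionable.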