Luis Arias
09/11/2023, 2:49 PM
I cloned the prefect-aws repo and am creating a venv so I can start work on https://github.com/PrefectHQ/prefect-aws/issues/310. Do you have a recommended Python version?
Gabriel
11/14/2023, 6:25 PM
CannotPullContainerError: pull image manifest has been retried 5 time(s): failed to resolve ref docker.io/prefecthq/prefect:2-latest: failed to do request: Head "https://registry-1.docker.io/v2/prefecthq/prefect/manifests/2-latest": dial tcp 34.205.13.154:443: i/o timeout

I assume this is a connectivity issue caused by the ECS tasks being deployed in a random subnet: if the chosen subnet is private (no NAT gateway), the error occurs.
How do I specify a subnet for my tasks? I couldn't find it in the Base Job Template of my work pool. Is that even the right place?
Much appreciated if someone could point me in the right direction.
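A minimal sketch of one way to pin the task's subnets, assuming a prefect-aws version whose ECS worker exposes a network_configuration job variable (it also appears in the work pool's base job template if the variable is available; if it isn't, a newer prefect-aws may be needed). The deployment, pool, subnet, and security group names below are placeholders:

from prefect import flow

@flow(log_prints=True)
def healthcheck():
    print("hello from ECS")

if __name__ == "__main__":
    healthcheck.deploy(
        name="ecs-healthcheck",        # placeholder deployment name
        work_pool_name="my-ecs-pool",  # placeholder ECS work pool
        image="prefecthq/prefect:2-latest",
        build=False,   # use the stock image rather than building one
        push=False,
        job_variables={
            # Override the worker's default subnet discovery with subnets
            # that can reach Docker Hub, e.g. private subnets routed
            # through a NAT gateway.
            "network_configuration": {
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],
                    "securityGroups": ["sg-0123456789abcdef0"],
                    "assignPublicIp": "DISABLED",
                }
            }
        },
    )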
brokoli
11/15/2023, 1:14 PM
…pull_from_s3 step to work

Dylan
11/21/2023, 4:34 PM
Has anyone had issues with the task_customizations field when attempting to save an ECSTask block? I've followed the documentation, but when looking up the ECS block in Prefect Cloud, the JSON dictionary is empty.

from prefect_aws.ecs import ECSTask

ECSTask(
    command=["echo", "hello world"],
    vpc_id="vpc-01abcdf123456789a",
    task_customizations=[
        {
            "op": "add",
            "path": "/networkConfiguration/awsvpcConfiguration/securityGroups",
            "value": ["sg-d72e9599956a084f5"],
        },
    ],
).save("my-ecs-task")  # block name assumed; the original save call wasn't captured
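For what it's worth, a quick way to check whether the patch actually persisted, assuming the block was saved under the placeholder name "my-ecs-task": load it back and inspect the field directly rather than relying on the Cloud UI.

from prefect_aws.ecs import ECSTask

block = ECSTask.load("my-ecs-task")  # placeholder block name
# task_customizations is stored as a JSON patch; this should echo the
# "add" operation from the snippet above if it was saved correctly.
print(block.task_customizations)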
David Anderson
12/06/2023, 3:07 PM
work_pool:
  name: ecs-push-work-pool
  work_queue_name: null
  job_variables:
    env:
      EXTRA_PIP_PACKAGES: prefect-airbyte, prefect-hightouch, prefect-dbt

But now I'm getting a cryptic error message with little to go on:

Flow run infrastructure exited with non-zero status code:
Exited with non 0 code. (Error Code: 1)
This may be caused by attempting to run an image with a misspecified platform or architecture.

Anyone have any tips or suggestions?
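One thing worth ruling out: the Prefect base images pass EXTRA_PIP_PACKAGES straight to pip install at container start, so the value is expected to be space-separated. A comma-separated list like the one above would make pip fail and the container exit with code 1, which matches the error; if that's the cause, the fix is EXTRA_PIP_PACKAGES: prefect-airbyte prefect-hightouch prefect-dbt (no commas).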
Howard Cornwell
01/17/2024, 2:52 PM
I'm trying to set cpu/memory allocations on a per-flow basis.
Right now, the implementation includes:
• An ECS cluster with an ECS worker running on it
• A single Docker image containing all code for all flows
• A base job template on the ECS work pool that specifies the cluster and image
The ECS worker spawns flow runs on the cluster, but it always spawns them in containers with the default 1024/2048 cpu/memory allocations.
I've been trying to override these per-flow using Deployment.build_from_flow(). See the attached example. The commented-out sections are the ways I've tried overriding the cpu/memory allocations, but none have worked.
I'd prefer not to have to build each individual flow into a separate Docker image. We've got close to 100 flows that all run out of the same codebase, so a single image seems the sensible way to do things.
I feel like I'm missing something crucial. Any pointers would be much appreciated.
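A minimal sketch of the per-deployment override route, assuming the stock ECS worker base job template (whose variables include cpu and memory) and placeholder flow/pool names:

from prefect.deployments import Deployment
from flows.example import my_flow  # placeholder import from the shared codebase

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-flow-2cpu",           # placeholder deployment name
    work_pool_name="my-ecs-pool",  # placeholder ECS work pool
    # infra_overrides are matched against the work pool's base job template
    # variables; the default ECS worker template exposes "cpu" (CPU units)
    # and "memory" (MiB) as top-level variables.
    infra_overrides={"cpu": 2048, "memory": 4096},
)
deployment.apply()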
Romain Vincent
02/14/2024, 5:00 PM
Failed to submit flow run '5035828c-12ea-48c9-a73a-9f4e8441c8b6' to infrastructure.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 1557, in _create_task_run
return ecs_client.run_task(**task_run_request)["tasks"][0]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/prefect/workers/base.py", line 896, in _submit_run_and_capture_errors
result = await self.run(
File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 598, in run
) = await run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/prefect/utilities/asyncutils.py", line 91, in run_sync_in_worker_thread
return await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 761, in _create_task_and_wait_for_start
self._report_task_run_creation_failure(configuration, task_run_request, exc)
File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 757, in _create_task_and_wait_for_start
task = self._create_task_run(ecs_client, task_run_request)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0xffff50f7d3c0 state=finished raised IndexError>]
Has anyone encountered a similar situation? The error message is not very explicit.
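The IndexError at ecs_worker.py line 1557 means run_task returned an empty "tasks" list; boto3 doesn't raise in that case, it reports the reason in a parallel "failures" list instead. A quick way to surface that reason, assuming roughly the same parameters the worker uses (values below are placeholders):

import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",                 # placeholder
    taskDefinition="my-task-definition",  # placeholder
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
    },
)
# When placement fails (capacity, bad subnet, missing execution role, ...),
# "tasks" comes back empty and each entry in "failures" carries the
# resource ARN plus a machine-readable reason and human-readable detail.
for failure in response.get("failures", []):
    print(failure.get("arn"), failure.get("reason"), failure.get("detail"))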
Anh Pham
03/01/2024, 12:18 PM
TypeError: unhashable type: 'dict'
My hunch is that the problem was introduced by one of these PRs:
• #369
• #373
• #375
This is quite a breaking change and prevents me from upgrading to the latest version. Can someone take a look into this?