John Mizerany
09/26/2023, 4:53 PM
Internal Server Error as the response. Was not sure if this was a known issue.

Bob Colner
09/26/2023, 6:21 PM
slack_notifier from prefect.utilities.notifications. I've been using this with great success for years on data-pipeline, but I'm getting a JWT error when trying to set it up on a new Prefect 1.4 flow: "Missing Authorization header in JWT authentication mode". I have my Slack URL set up as a config.toml secret, just like in my working application. Any ideas about why I'm seeing this error?
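For reference, the usual Prefect 1.x wiring for slack_notifier looks roughly like the sketch below. This is not taken from the thread; it assumes the default SLACK_WEBHOOK_URL secret name and a local config.toml secret, which may differ from the setup being described.

# config.toml -- webhook URL stored as a local Prefect secret
# [context.secrets]
# SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."

from prefect import task, Flow
from prefect.engine.state import Failed
from prefect.utilities.notifications import slack_notifier

# Only notify on failures; slack_notifier reads the SLACK_WEBHOOK_URL secret by default
handler = slack_notifier(only_states=[Failed])

@task(state_handlers=[handler])
def might_fail():
    raise ValueError("boom")

with Flow("slack-notify-example", state_handlers=[handler]) as flow:
    might_fail()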
Eric
09/26/2023, 7:53 PM

Vang Xiong
09/26/2023, 10:17 PM
We enabled Marvin AI in our Prefect account to test it out. However, I am unable to disable or turn it off. I see a setting that looks to be disabled already, and I did try toggling it on and off, but we are still seeing the Marvin AI logs. There is no documentation on this anywhere. Does anyone here have any information to help us turn off this feature? Thanks

Henri
09/26/2023, 11:00 PM
from prefect.filesystems import S3
import os


def s3_storage() -> None:
    # Save an S3 block with QA credentials so deployments can reference it by name
    block = S3(
        bucket_path="prefect-portal-us-east-1-qa",
        aws_access_key_id=os.getenv("QA_AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.getenv("QA_AWS_SECRET_ACCESS_KEY"),
    ).save("ecs-s3-qa", overwrite=True)


if __name__ == "__main__":
    s3_storage()
An example:
from prefect_aws import AwsCredentials
from prefect_aws.ecs import ECSTask
from prefect import flow, task
from prefect.deployments.deployments import Deployment
from prefect.filesystems import S3

[...]


@flow(log_prints=True)
def cool_numbers(nums=[1, 2, 3, 5, 8, 13]):  # essentially map_flow
    print_nums(nums)
    squared_nums = square_num.map(nums)
    print_nums(squared_nums)


if __name__ == "__main__":
    aws_credentials_block = AwsCredentials.load("aws-credentials-qa")  # loads my aws cred
    ecs_task_block = ECSTask.load("ecs-cluster-agent-qa")  # refers to the aws ecs cluster
    s3_block = S3.load("ecs-s3-qa")  # doesn't really work
    deployment = Deployment.build_from_flow(
        flow=cool_numbers,
        name="ecs-cool-numbers",
        work_queue_name="default",
        infrastructure=ecs_task_block,
        path="/",
        apply=True,
        tags=["ecs-testing"],
    )
The tl;dr is that Prefect can't seem to upload to S3, and I can't execute the flow because it's looking in S3.
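One thing that stands out in the snippet above: the S3 block is loaded but never passed to the deployment, so nothing tells Prefect to upload the flow code to the bucket. A minimal sketch of what that might look like, assuming the block names used above (this is a guess at the fix, not something confirmed in the thread):

    # Hypothetical fix: pass the S3 block as the deployment's storage so that
    # build_from_flow uploads the flow code to the bucket and the ECS task
    # pulls it from there at runtime.
    deployment = Deployment.build_from_flow(
        flow=cool_numbers,
        name="ecs-cool-numbers",
        work_queue_name="default",
        infrastructure=ecs_task_block,
        storage=s3_block,  # upload/download flow code via the S3 block
        apply=True,
        tags=["ecs-testing"],
    )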
Harikrishna T.R
09/27/2023, 6:59 AM

Andreas
09/27/2023, 8:33 AM
"If the run does not terminate after a grace period (default of 30 seconds), the infrastructure will be killed, ensuring the flow run exits."
1. How can we change this default grace cancellation period of 30 seconds?
2. If an on_cancellation hook takes more time than this to complete, what will happen? Will it get killed?
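For context, an on_cancellation hook in Prefect 2 is attached on the flow decorator and receives the flow, flow run, and state. A minimal sketch (the hook body and flow name here are illustrative, not from the thread):

from prefect import flow


def notify_on_cancel(flow, flow_run, state):
    # Runs when the flow run is cancelled; keep this quick so it can finish
    # within the cancellation grace period.
    print(f"Flow run {flow_run.name} was cancelled with state {state.name}")


@flow(on_cancellation=[notify_on_cancel])
def long_running_flow():
    ...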
Oscar
09/27/2023, 2:12 PM
from prefect import flow, get_run_logger


@flow
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from ECS!!")
My flow configuration looks like:
###
### A complete description of a Prefect Deployment for flow 'my-flow'
###
name: my-flow
description: null
version: c11a68e0ce6739c76dc99de079edeb29
# The work queue that will handle this deployment's runs
work_queue_name: default
work_pool_name: ecs-pool
tags: []
parameters: {}
schedule: null
is_schedule_active: true
infra_overrides: {}

###
### DO NOT EDIT BELOW THIS LINE
###
flow_name: my-flow
manifest_path: null
infrastructure:
  type: ecs-task
  env: {}
  labels: {}
  name: null
  command: null
  aws_credentials:
    aws_access_key_id: null
    aws_secret_access_key: null
    aws_session_token: null
    profile_name: null
    region_name: null
    aws_client_parameters:
      api_version: null
      use_ssl: true
      verify: true
      verify_cert_path: null
      endpoint_url: null
      config: null
    block_type_slug: aws-credentials
  task_definition_arn: null
  task_definition: null
  family: null
  image: prefecthq/prefect:2-python3.10
  auto_deregister_task_definition: true
  cpu: null
  memory: null
  execution_role_arn: some-execution-role-arn
  configure_cloudwatch_logs: true
  cloudwatch_logs_options: {}
  stream_output: null
  launch_type: FARGATE
  vpc_id: some-vpc-id
  cluster: cluster-worker-arn
  task_role_arn: null
  task_customizations:
  - op: add
    path: /networkConfiguration/awsvpcConfiguration/securityGroups
    value:
    - some_security_group
  task_start_timeout_seconds: 120
  task_watch_poll_interval: 5.0
  _block_document_id: 99c69452-38cb-4ce0-a0f9-65035b2db175
  _block_document_name: default-ecs-job
  _is_anonymous: false
  block_type_slug: ecs-task
  _block_type_slug: ecs-task
storage: null
path: /opt/prefect/flows
entrypoint: minerva/test.py:my_flow
parameter_openapi_schema:
  title: Parameters
  type: object
  properties: {}
  required: null
  definitions: null
timestamp: '2023-09-27T14:08:05.642381+00:00'
triggers: []
enforce_parameter_schema: null
The error I’m getting:
Failed to submit flow run '180f4415-b052-40d4-9f7c-21c1b6220835' to infrastructure.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 1524, in _create_task_run
    return ecs_client.run_task(**task_run_request)["tasks"][0]
  File "/usr/local/lib/python3.10/site-packages/botocore/client.py", line 535, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.10/site-packages/botocore/client.py", line 980, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ClusterNotFoundException: An error occurred (ClusterNotFoundException) when calling the RunTask operation: Cluster not found.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 724, in _create_task_and_wait_for_start
    task = self._create_task_run(ecs_client, task_run_request)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f997b4445b0 state=finished raised ClusterNotFoundException>]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/prefect/workers/base.py", line 843, in _submit_run_and_capture_errors
    result = await self.run(
  File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 567, in run
    ) = await run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/prefect/utilities/asyncutils.py", line 91, in run_sync_in_worker_thread
    return await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 728, in _create_task_and_wait_for_start
    self._report_task_run_creation_failure(configuration, task_run_request, exc)
  File "/usr/local/lib/python3.10/site-packages/prefect_aws/workers/ecs_worker.py", line 819, in _report_task_run_creation_failure
    raise RuntimeError(
RuntimeError: Failed to run ECS task, cluster 'default' not found. Confirm that the cluster is configured in your region.
What am I missing? It seems like the task is trying to deploy into the default cluster, which doesn't exist, even though I have specified where I'd like the task to be deployed.
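Worth noting as a possible cause, though it is an assumption rather than something confirmed in the thread: the traceback comes from prefect_aws's ECS worker, and a worker builds the task from the work pool's job template rather than from the older ECSTask infrastructure block, so the cluster may need to be supplied as a job variable, e.g. via infra_overrides in the deployment YAML (the ARN below is a placeholder):

infra_overrides:
  cluster: arn:aws:ecs:us-east-1:111111111111:cluster/my-cluster  # placeholder ARN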
Lior Barak
09/27/2023, 2:49 PM

Hans Lellelid
09/27/2023, 3:30 PM

Mike Safruk
09/27/2023, 4:09 PM

Jeffrey Lam
09/27/2023, 4:41 PM

Kyle
09/27/2023, 7:40 PM

Liz
09/27/2023, 9:04 PM

Sivanandha Rajadurai
09/27/2023, 10:28 PM

Iryna
09/28/2023, 3:05 AM
Finished in state Failed('Flow run encountered an exception. shutil.Error: [(\'C:\\\\Users\\\\user_x\\\\AppData\\\\Local\\\\Temp\\\\tmpu8dj3ri0prefect\\\\.git\\\\objects\\\\pack\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.idx\', \'C:\\\\Users\\\\user_x\\\\AppData\\\\Local\\\\Temp\\\\tmpj3paxn4kprefect\\\\.git\\\\objects\\\\pack\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.idx\', "[Errno 13] Permission denied: \'C:\\\\\\\\Users\\\\\\\\user_x\\\\\\\\AppData\\\\\\\\Local\\\\\\\\Temp\\\\\\\\tmpj3paxn4kprefect\\\\\\\\.git\\\\\\\\objects\\\\\\\\pack\\\\\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.idx\'"), (\'C:\\\\Users\\\\user_x\\\\AppData\\\\Local\\\\Temp\\\\tmpu8dj3ri0prefect\\\\.git\\\\objects\\\\pack\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.pack\', \'C:\\\\Users\\\\user_x\\\\AppData\\\\Local\\\\Temp\\\\tmpj3paxn4kprefect\\\\.git\\\\objects\\\\pack\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.pack\', "[Errno 13] Permission denied: \'C:\\\\\\\\Users\\\\\\\\user_x\\\\\\\\AppData\\\\\\\\Local\\\\\\\\Temp\\\\\\\\tmpj3paxn4kprefect\\\\\\\\.git\\\\\\\\objects\\\\\\\\pack\\\\\\\\pack-dcb61de38801bc82e121d9ddd04df4adace353d8.pack\'")]\n')
Ben Muller
09/28/2023, 3:22 AM
pydantic > 2.0
and marvin requires >= 2.0
Any chance this will be resolved any time soon?

Slackbot
09/28/2023, 7:05 AM

Tyndyll
09/28/2023, 10:04 AM

Andreas Nord
09/28/2023, 10:13 AM

Panda
09/28/2023, 2:19 PM

Brian Newman
09/28/2023, 2:25 PM

Jason Motley
09/28/2023, 2:34 PM

Richard Freeman
09/28/2023, 3:20 PM

Jared Rhodes
09/28/2023, 3:26 PM

GoshDarnedHero
09/28/2023, 4:17 PM