Stephen Lloyd
04/14/2022, 7:48 AM
Traceback (most recent call last):
  File "/Users/slloyd/projects/dwbi-orchestration/.venv/lib/python3.8/site-packages/prefect/engine/task_runner.py", line 880, in get_task_run_state
    value = prefect.utilities.executors.run_task_with_timeout(
  File "/Users/slloyd/projects/dwbi-orchestration/.venv/lib/python3.8/site-packages/prefect/utilities/executors.py", line 468, in run_task_with_timeout
    return task.run(*args, **kwargs)  # type: ignore
  File "workable/src/flow.py", line 64, in fivetran_sync
    status = FivetranSyncTask.run(
TypeError: method() missing 1 required positional argument: 'self'
creds is passed in from a PrefectSecret task.
@task
def fivetran_sync(connector: str, creds: dict) -> dict:
    status = FivetranSyncTask.run(
        api_key=creds['api_key'],
        api_secret=creds['api_secret'],
        connector_id=connector
    )
    return status
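The TypeError above comes from calling run on the FivetranSyncTask class rather than on an instance, so self is never bound; in Prefect 1.x the task needs to be instantiated first, e.g. FivetranSyncTask().run(...). A minimal stand-in sketch, with a hypothetical DemoSyncTask in place of the real task class:

```python
# Hypothetical stand-in for prefect.tasks.fivetran.FivetranSyncTask,
# just to illustrate the bound-vs-unbound call:
class DemoSyncTask:
    def run(self, api_key: str, api_secret: str, connector_id: str) -> dict:
        return {"connector_id": connector_id, "succeeded": True}

# Calling run on the class omits `self`, which is exactly the error above:
try:
    DemoSyncTask.run(api_key="k", api_secret="s", connector_id="c")
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'self'

# Instantiating first binds `self`, so the call succeeds:
status = DemoSyncTask().run(api_key="k", api_secret="s", connector_id="c")
```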
Jason
04/14/2022, 4:39 PM
<https://github.com/PrefectHQ/prefect/blob/master/src/prefect/storage/docker.py#L610-L613>
Has anyone run into a similar exception? I'm successfully authed to ECR from my local docker login as well.
Aric Huang
04/14/2022, 6:19 PM
When using the @task(result=GCSResult(bucket=<bucket>))
method of configuring a task result, is the bucket path fixed at flow registration time? If so, is there a way it can be set dynamically at flow run time? What I'm hoping to do is have flows that can be registered to run on different clusters (using agent labels) and have their GCSResult bucket path configured via an env var on the cluster. That way we can re-use the same flow code across different clusters but have different results buckets depending on the cluster.
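One hedged sketch of the env-var approach: if the flow is stored as a script (so the module is re-executed on the agent at run time rather than pickled at registration), the bucket can be read from a cluster-level env var when the module runs. The PREFECT_RESULTS_BUCKET variable and the default name below are assumptions, not settings confirmed in this thread:

```python
import os

# Hypothetical per-cluster env var; the default bucket name is made up.
results_bucket = os.environ.get("PREFECT_RESULTS_BUCKET", "default-results-bucket")

# With script-based storage this line executes on the cluster, so each
# cluster's env var selects its own bucket:
# @task(result=GCSResult(bucket=results_bucket))
# def my_task(): ...
```

With pickle-based storage the module is evaluated at registration instead, so the env var would be baked in on the machine doing the registering.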
Philip MacMenamin
04/14/2022, 6:50 PM2022-04-14 09:42:06-0600] ERROR - prefect.TaskRunner | Task 'ShellTask[0]': Exception encountered during task execution!
Traceback (most recent call last):
  File "/blah/python3.9/site-packages/prefect/engine/task_runner.py", line 880, in get_task_run_state
    value = prefect.utilities.executors.run_task_with_timeout(
  File "/blah/python3.9/site-packages/prefect/utilities/executors.py", line 468, in run_task_with_timeout
    return task.run(*args, **kwargs)  # type: ignore
  File "/blah/python3.9/site-packages/prefect/utilities/tasks.py", line 456, in method
    return run_method(self, *args, **kwargs)
  File "/blah/python3.9/site-packages/prefect/tasks/shell.py", line 131, in run
    tmp.write(command.encode())
AttributeError: 'list' object has no attribute 'encode'
I have a couple of questions:
• Is there a way to tag shell tasks so the logs give some clue as to which one failed?
• Can I get a better description of the failure?
At the moment I have
shell_task = ShellTask(log_stderr=True, return_all=True, stream_output=True)
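On the AttributeError itself: the traceback shows ShellTask.run reaching command.encode(), which exists on str but not on list, so the task was most likely handed a list of commands. Joining them into a single shell string avoids it; a plain-Python sketch (the commands themselves are made up):

```python
cmds = ["cd /tmp", "ls -la"]  # hypothetical list that triggers the error

try:
    cmds.encode()  # what ShellTask effectively attempts with a list
except AttributeError as exc:
    print(exc)  # 'list' object has no attribute 'encode'

# A single string encodes fine:
command = " && ".join(cmds)
assert command.encode() == b"cd /tmp && ls -la"
```

As for telling the tasks apart, Prefect 1.x tasks accept a name at construction (ShellTask(name=...)) and, if memory serves, a per-call override via task_args={"name": ...} when invoking the task inside the flow, which shows up in the TaskRunner log line instead of ShellTask[0].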
Jason
04/14/2022, 7:16 PM
[14 April 2022 2:14pm]: An error occurred (InvalidParameterException) when calling the RunTask operation: No Fargate configuration exists for given values.
The weird thing is that 4096 appears to be a valid entry: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html. Is it possible I screwed up the definition of my Fargate cluster?

Jason
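For reference, that InvalidParameterException usually means the cpu/memory pair isn't one AWS allows, even when each value is individually valid; with cpu=4096, memory must be 8192-30720 MiB in 1024 MiB steps. A quick checker sketch (table transcribed from the AWS page linked above; the newer 8/16-vCPU sizes are omitted):

```python
# Allowed classic Fargate cpu (units) -> memory (MiB) combinations,
# per the AWS task-cpu-memory-error docs (8 and 16 vCPU sizes omitted).
VALID_FARGATE = {
    256:  {512, 1024, 2048},
    512:  set(range(1024, 4097, 1024)),
    1024: set(range(2048, 8193, 1024)),
    2048: set(range(4096, 16385, 1024)),
    4096: set(range(8192, 30721, 1024)),
}

def fargate_combo_ok(cpu: int, memory: int) -> bool:
    """True if ECS Fargate accepts this cpu/memory pairing."""
    return memory in VALID_FARGATE.get(cpu, set())

print(fargate_combo_ok(4096, 4096))  # False: memory too low for 4 vCPU
print(fargate_combo_ok(4096, 8192))  # True
```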
04/14/2022, 8:13 PM
docker_storage = Docker(
    registry_url=environ["REGISTRY_URL"],
    dockerfile="./Dockerfile",
    image_name="{edited}-prod-platform-prefect-{project}",
    image_tag="latest",
)
It seems like each flow could demand its own image in order to separate dependencies, which would mean creating an ECR repo for each workflow. I suppose this wouldn't be that difficult to script with GitHub Actions and the aws CLI, globbing a directory for workflow names?