Mansour Zayer
02/24/2023, 5:58 PMDavid Steiner Sand
02/24/2023, 5:59 PM--limit
flag in the agent, but that’s not ideal.
I thought it would be nice to have a --prefetch-limit
option to the agent (beyond the already present --prefetch-seconds
). This way, an agent could have a big --limit
but not fetch too many flow runs at once, therefore spreading the load a bit across the other agents.
What do you think?Jehan Abduljabbar
02/24/2023, 7:05 PMEmma Keil-Vine
02/24/2023, 7:24 PMCannot create flow run. Failed to reach API at https://api.prefect.cloud/api/accounts/a9c0f124-ca06-4646-a501-57a405ebf3c7/workspaces/43f1b8a7-ed9c-46d2-a88a-55eef95b8ef7/.
-- does anyone have any recommendations? TIA!jack
02/24/2023, 8:31 PMflow.set_dependencies(task_4, upstream_tasks=[task_2, task_3])
jack
02/24/2023, 9:52 PMJohn Horn
02/24/2023, 11:43 PM@task(timeout_seconds=5)
def test_task_timeout(uuid: str):
    logger = get_run_logger()
    logger.info(uuid)
    time.sleep(10)
    return 42

@flow(timeout_seconds=60)
def test_flow_timeouts(station_id: str):
    logger = get_run_logger()
    future = test_task_timeout.submit(uuid='uuid-123')
    result = future.result()
    logger.info(result)
    flow_context = context.get_run_context()
    prefect_state_handler(future_object=future, context=flow_context)
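Since test_task_timeout sleeps past its 5-second timeout, future.result() will raise once the task is marked as failed. A minimal sketch of inspecting the outcome without letting that exception propagate (prefect_state_handler above is a user-defined helper and is not shown here):
state = future.wait()                                # block until the task finishes and return its final State
if state.is_failed():
    outcome = future.result(raise_on_failure=False)  # the captured exception, not re-raised
else:
    outcome = future.result()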
Damien Dee Coureau
02/25/2023, 5:25 AMNimesh Kumar
02/25/2023, 10:52 AM@flow(name="Inferencing")
def start_inferencing(my_param, file_path):
    job_id = my_param
    algo_id = "402"
    print(">>>>>>>>>>>>>>>>>>>>>>>>>>>")
    print(algo_id, job_id, file_path)
    print(">>>>>>>>>>>>>>>>>>>>>>>>>>>")
    res_1 = generate_jobID.submit(algo_id, job_id)
    print("\n\n\n\n ***************************\n\n\n\n ", res_1, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_1)
    res_2 = get_file.submit(file_path)
    print("\n\n\n\n ***************************\n\n\n\n ", res_2, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_2)
    res_4 = choose_valid_file.submit(prev_task=res_2)
    print("\n\n\n\n ***************************\n\n\n\n ", res_4, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_4)
    res_5 = prepare_request_upload.submit(prev_task=res_4)
    print("\n\n\n\n ***************************\n\n\n\n ", res_5, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_5)
    res_6 = send_data_request.submit(prev_task=res_5)
    print("\n\n\n\n ***************************\n\n\n\n ", res_6, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_6)
    res_7 = prepare_predict_request.submit(prev_task=res_6)
    print("\n\n\n\n ***************************\n\n\n\n ", res_7, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_7)
    res_8 = send_predict_request.submit(prev_task=res_7)
    print("\n\n\n\n ***************************\n\n\n\n ", res_8, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_8)
    res_9 = extract_lunit_jobid.submit(prev_task=res_8)
    print("\n\n\n\n ***************************\n\n\n\n ", res_9, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_9)
    res_10 = prepare_fetch.submit(prev_task=res_9)
    print("\n\n\n\n ***************************\n\n\n\n ", res_10, "\n\n\n\n ****************************\n\n\n\n")
    call_on_failure_if_failed(res_10)
    res_11 = fetch_request.submit(prev_task=res_10)
    call_on_failure_if_failed(res_11)
    print("\n\n\n\n ***************************\n\n\n\n ", res_11, "\n\n\n\n ****************************\n\n\n\n")
    if res_11:
        res_12 = convert_output.submit(prev_task=res_11)
        call_on_failure_if_failed(res_12)
        res_13 = zip_file.submit(prev_task=res_12)
        call_on_failure_if_failed(res_13)
        res_14 = send_to_HGW.submit(prev_task=res_13)
        call_on_failure_if_failed(res_14)
    else:
        send_log(job_id=job_id, header="AI Gateway", subheader="Failed to Receive Inferencing Result", status="failed",
                 level=5, direction=True, send_ip=SEND_IP, msg_id=8)

if __name__ == "main":
    start_inferencing_402(parameters=dict_)
I want res_12, res_13, and res_14 to execute only if res_11 gives True. By the way, every function returns either True or False.
Problem:
1. Not even one print statement is executing.
2. Even when res_11 is False, res_12, res_13, and res_14 still get executed.
Can anyone please tell me what I am doing wrong?Miremad Aghili
02/25/2023, 12:04 PMVera Zabeida
02/25/2023, 2:23 PMPeter Nagy
02/25/2023, 6:26 PMg5.4xlarge
EC2s in an ASG with an Amazon ECS-optimized Linux GPU AMI. Here is how I am calling ECSTask:
from prefect_aws.ecs import ECSTask

ECSTask(
    command=["echo", "hello world"],
    image="xxxxxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com/my_repo:my_image",
    launch_type="EC2",
    cluster="my-cluster",
    stream_output=True,
    execution_role_arn="arn:aws:iam::xxxxxxxxxxxx:role/prefect-ecs-task-role",
).run()
If anyone could help me figure out what I could try to get this working, I would be really grateful.Volker L
02/26/2023, 10:24 AMFederico Zambelli
02/26/2023, 12:24 PMThet Naing
02/26/2023, 9:04 PMprefect.exceptions.PrefectHTTPStatusError: Client error '422 Unprocessable Entity' for url '<https://api.prefect.cloud/api/accounts/abc/workspaces/xyz/deployments/None>'
Response: {'exception_message': 'Invalid request received.', 'exception_detail': [{'loc': ['path', 'id'], 'msg': 'value is not a valid uuid', 'type': 'type_error.uuid'}], 'request_body': None}
For more information check: <https://httpstatuses.com/422>
16:00:39.141 | ERROR | prefect.agent - Failed to get infrastructure for flow run 'abc'. Flow run cannot be cancelled.
Traceback (most recent call last):
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/prefect/agent.py", line 322, in cancel_run
infrastructure = await self.get_infrastructure(flow_run)
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/prefect/agent.py", line 381, in get_infrastructure
deployment = await self.client.read_deployment(flow_run.deployment_id)
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/prefect/client/orchestration.py", line 1504, in read_deployment
response = await self._client.get(f"/deployments/{deployment_id}")
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/httpx/_client.py", line 1757, in get
return await self.request(
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/httpx/_client.py", line 1533, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/prefect/client/base.py", line 253, in send
response.raise_for_status()
File "/Users/thet/SystemInternal/information-extraction-pipeline/venv/lib/python3.10/site-packages/prefect/client/base.py", line 130, in raise_for_status
raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__
Rikimaru Yamaguchi
02/27/2023, 8:11 AMTorstein Molland
02/27/2023, 9:44 AMFREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR;BYHOUR=18;BYMINUTE=15;WKST=MO
. However, it seems to be scheduled twice within the same minute as can be seen in the screenshot. What could be the reason for this and how can it be solved? Thank you kindly!Dmytro Ponomarchuk
02/27/2023, 10:19 AMrun_deployment()
?Mark NS
02/27/2023, 11:10 AMState
object for a failed task, but I'm not sure how to do that when the task throws an exception. Any advice?
try:
    state: State = trigger_sync.with_options(
        name=connection.name
    )(
        airbyte_server=airbyte_server,
        connection_id=connection.connection_id
    )
except AirbyteSyncJobFailed as e:
    # state is undefined here
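One pattern that may help here, as a minimal sketch (assuming trigger_sync is a Prefect task or flow): passing return_state=True to the call makes it return the final State instead of raising the underlying exception, so a failed sync can be inspected directly.
state: State = trigger_sync.with_options(name=connection.name)(
    airbyte_server=airbyte_server,
    connection_id=connection.connection_id,
    return_state=True,   # return the final State rather than raising on failure
)
if state.is_failed():
    error = state.result(raise_on_failure=False)   # the captured exception, if needed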
flapili
02/27/2023, 1:46 PMadd(2+2)
at 10AM every day and add(3+3)
at 11AM every day?Marius Vollmer
02/27/2023, 4:15 PMjack
02/27/2023, 4:28 PMMaryam Veisi
02/27/2023, 4:51 PMSam Cook
02/27/2023, 7:31 PMPrasanth Kothuri
02/27/2023, 8:33 PMSarhan
02/28/2023, 1:56 AMbotocore.exceptions.ClientError: An error occurred (MalformedXML) when calling the PutObject operation: The XML you provided was not well-formed or did not validate against our published schema
It worked fine yesterday. I’ve tried deploying using Python as well as CLI. Both give the above error.
No permission changes on S3 and no changes on my Python packages. Any insights?Mrityunjoy Das
02/28/2023, 2:29 AMRussell Brooks
02/28/2023, 3:52 AMStephen Lloyd
02/28/2023, 5:46 AM@task
def some_task():
    ...
    return name, list

@task
def mapped_process():
    ...
    return ??

@flow
def my_flow():
    names = ['test1', 'test2']
    x = some_task.map(names)
    # x = [('test1', [1,2,3,4,5]), ('test2', [6,7,8,9,10])]
    lots_of_data = mapped_process.map()
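One way this often fits together, as a minimal sketch (the task bodies are placeholders): passing the list of futures from the first map() straight into the second map() runs mapped_process once per upstream result, and each run receives the resolved (name, values) tuple.
@task
def some_task(name: str):
    return name, [1, 2, 3, 4, 5]

@task
def mapped_process(pair):
    name, values = pair
    return name, sum(values)        # whatever per-item processing is needed

@flow
def my_flow():
    names = ['test1', 'test2']
    x = some_task.map(names)                 # list of PrefectFutures
    lots_of_data = mapped_process.map(x)     # one mapped run per upstream future
    return [f.result() for f in lots_of_data]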
Parul Goyal
02/28/2023, 6:44 AM