# prefect-community
wonsun
Hello experts~~ I'm using Prefect 1.0! Thank you so much for always answering my questions so quickly.💙 A flow that ran fine on small-scale data hit the client error below when it tried to receive a large amount of data from our web server as a parameter. I downloaded the flow's log from the Prefect Cloud UI and attached it (logs.json), and I'm pasting it here too.
```
Failed to set task state with error: ClientError([{'path': ['set_task_run_states'], 'message': 'State payload is too large.', 'extensions': {'code': 'INTERNAL_SERVER_ERROR'}}])
Traceback (most recent call last):
  File "/home/da/enviorments/bdi/lib/python3.10/site-packages/prefect/engine/cloud/task_runner.py", line 91, in call_runner_target_handlers
    state = self.client.set_task_run_state(
  File "/home/da/enviorments/bdi/lib/python3.10/site-packages/prefect/client/client.py", line 1604, in set_task_run_state
    result = self.graphql(
  File "/home/da/enviorments/bdi/lib/python3.10/site-packages/prefect/client/client.py", line 464, in graphql
    raise ClientError(result["errors"])
prefect.exceptions.ClientError: [{'path': ['set_task_run_states'], 'message': 'State payload is too large.', 'extensions': {'code': 'INTERNAL_SERVER_ERROR'}}]
```
Actually, that wasn't the first way this task failed: before this, it ran for about 3 minutes and then reported that no heartbeat was detected (`No heartbeat detected from the remote task; marking the run as failed.`). The workaround I found was to configure heartbeats to use threads instead of processes, which I set through the flow's run config in the .py file. With that change, the parameter-receiving task ran longer than the first time (1st try: 3 minutes running -> 2nd try: 12 minutes running), but it still failed. 😞 How can I solve this problem? What is wrong with my engineering? The flow may simply be written the wrong way, so I've also pasted the flow code below.
```python
# ... (other imports)
from prefect import task, Flow, Parameter
from prefect.run_configs import UniversalRun

def custom_function():
    '''some works'''
    return output 

@task
def parsing_waveforms(download):
    processing_target = download    
    '''some works by using above custom_function'''
    
with Flow('flow_waveforms') as flow:
    heir = Parameter('download')
    task1 = parsing_waveforms(download=heir)

# run heartbeats in a thread instead of a separate process
flow.run_config = UniversalRun(env={'PREFECT__CLOUD__HEARTBEAT_MODE': 'thread'})
flow.register(project_name='data_factory')
```
Mason Menges
Hey @wonsun, I think the error in this case is pretty descriptive. Depending on what data you're working with, if you can break the data into parts and process it in smaller batches, that may address the errors you're seeing here.
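A minimal sketch of that batching idea using Prefect 1.0's task mapping, assuming the parameter arrives as one large string; the task names and the `CHUNK_SIZE` value are illustrative, not from this thread:

```python
from prefect import task, Flow, Parameter

CHUNK_SIZE = 10_000  # assumption: tune so each task run's payload stays small

@task
def split_into_chunks(download):
    # Break the single large string into fixed-size pieces so no individual
    # task run carries a huge state payload.
    return [download[i:i + CHUNK_SIZE] for i in range(0, len(download), CHUNK_SIZE)]

@task
def parse_chunk(chunk):
    '''same parsing work as parsing_waveforms, applied to one piece'''

with Flow('flow_waveforms_batched') as flow:
    raw = Parameter('download')
    chunks = split_into_chunks(raw)
    # map() creates one parse_chunk run per chunk, so each state update
    # sent to Prefect Cloud stays small.
    parse_chunk.map(chunks)
```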
wonsun
Thanks for the quick reply @Mason Menges. Is it right to understand that the data received as the parameter from the web should be divided into n pieces, for example? In other flows I made, data of the same size was handled fine. It was the same Cloud environment, and the only difference was that the web server sent the data to be processed..
And if my understanding is correct, how do I get the data divided? Do I have to send small batches from the web server?
Also, I don't know whether this approach of splitting what we send is really a good fit for us. The data sent from the web server right now is not a file, just a str value (screenshot below), so why does it take so long to receive it? We communicate with the web server over a GraphQL API, and it only takes a few seconds to get a 200 status back from the web server.
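If changing what the web server sends is on the table, one alternative to chunking the payload itself is to send only a small reference (for example a URL) and let the task fetch the data, so the large string never passes through Prefect Cloud's state API as a parameter value. A hypothetical sketch; `requests`, the `data_url` parameter name, and the endpoint shape are assumptions, not anything confirmed in this thread:

```python
import requests
from prefect import task, Flow, Parameter

@task
def fetch_and_parse(data_url):
    # Download the payload inside the task instead of receiving it as a
    # Parameter value, so only the short URL string is stored in task state.
    response = requests.get(data_url)
    response.raise_for_status()
    processing_target = response.text
    '''same parsing work as parsing_waveforms'''

with Flow('flow_waveforms_by_reference') as flow:
    url = Parameter('data_url')  # the web server now sends just this small string
    fetch_and_parse(data_url=url)
```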