# ask-community
a
Hello everyone! How do I get task timeout value from GraphQL? When I query interactive API I get an error. My request:
query DMS {
  flow(
    where: {
      name: {_eq: "Daily metrics summary"}, archived: {_eq: false}
    }
  ) {
    id, version,
    tasks {
      id, name,
      task_runs(order_by: {start_time: desc}, limit: 1) {
        start_time, end_time
      },
      # Here I'm getting error (see below)
      timeout
    }
  }
}
k
Hey @Andrey Nikonov, I am not sure this is possible from the GraphQL API. I’ll double check with the team. Could you move the error into the thread to free up some space in the main channel when you get the chance? 🙂
You can’t, unfortunately. What are you trying to do with the timeout? Maybe we can find it some other way.
a
Error:
{
  "errors": [
    {
      "message": "Cannot query field \"timeout\" on type \"task\".",
      "extensions": {
        "code": "GRAPHQL_VALIDATION_FAILED"
      }
    }
  ]
}
@Kevin Kho Thanks for your reply! What I'm trying to do is set up a flow that will track when tasks start/end and what the current timeout is for each task. Basically I want to send these metrics to CloudWatch or prometheus-pushgateway for monitoring purposes. If processing time gets close to the timeout, I'm going to send alerts to Slack/PagerDuty.
k
I understand. I think you can send the alert when it times out without a problem, but before it does is a bit hard. If you had the timeout value, do you know how you would achieve this?
a
I believe if I had the timeout value (which is in seconds) I'd do something like:
from datetime import datetime, timedelta
import boto3

cloudwatch_client = boto3.client("cloudwatch")

# start_time / end_time are ISO-8601 strings from GraphQL; timeout is the task's timeout in seconds
processing_time = datetime.fromisoformat(end_time) - datetime.fromisoformat(start_time)

# Alert when the run finished within 5 minutes of the task's timeout
if processing_time + timedelta(minutes=5) >= timedelta(seconds=timeout):
    cloudwatch_client.put_metric_data(
        Namespace='incoming_timeout_failures',
        MetricData=[
            {'MetricName': 'processing_time', 'Value': processing_time.total_seconds(), 'Dimensions': [{'Name': 'prefect_task', 'Value': task_name}]},
            {'MetricName': 'current_timeout', 'Value': timeout, 'Dimensions': [{'Name': 'prefect_task', 'Value': task_name}]},
        ]
    )
In general, if the timeout has already happened, I will see the Prefect flow failure for sure. But what I want is to create a flow that gets all flow_runs from the current day and checks whether those runs are getting close to their timeouts. This flow will run after all data-processing flows have finished. The next day an engineer sees the alerts in Slack, increases the timeout, and doesn't have to worry that a timeout will happen that day.
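A minimal sketch of how such a monitoring flow might pull the day's task runs, assuming Prefect 1.x and its Client.graphql helper; the query name, the start-of-day filter, and the printed fields are illustrative and mirror the query shown earlier in the thread.
from datetime import datetime, timezone
from prefect.client import Client

client = Client()

# Midnight UTC today, as an ISO-8601 timestamp for the Hasura-style _gte filter
today_start = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0).isoformat()

result = client.graphql(
    """
    query TodaysTaskRuns($start: timestamptz!) {
      task_run(where: {start_time: {_gte: $start}}) {
        start_time
        end_time
        task { name }
      }
    }
    """,
    variables={"start": today_start},
)

# Each run carries its start/end times plus the task name for later comparison
for run in result.data.task_run:
    print(run["task"]["name"], run["start_time"], run["end_time"])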
k
Will check with the team on this
Unfortunately we don’t have a good way to do this. You can do two things (or a combination of both):
1. Use GraphQL to get tasks in the TimedOut state, then look for the timeout in the code and compare execution time.
2. Get the Flow object in Python code (maybe import it in). Get the timeouts using Flow.tasks, query runs for the flow, and compare against the timeout (see the sketch below).
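A rough sketch of option 2, assuming Prefect 1.x: the my_flows module is a placeholder for wherever the Flow object actually lives, and the 5-minute margin mirrors the example earlier in the thread.
from datetime import datetime, timedelta

# Hypothetical import: wherever the Flow object is defined in the project
from my_flows import flow

# Flow.tasks is a set of Task objects; Task.timeout is the configured timeout in seconds (or None)
timeouts = {task.name: task.timeout for task in flow.tasks if task.timeout is not None}

def close_to_timeout(start_time: str, end_time: str, task_name: str,
                     margin: timedelta = timedelta(minutes=5)) -> bool:
    """Compare a run's duration (ISO-8601 timestamps from GraphQL) against its task's timeout."""
    if task_name not in timeouts:
        return False
    duration = datetime.fromisoformat(end_time) - datetime.fromisoformat(start_time)
    return duration + margin >= timedelta(seconds=timeouts[task_name])
The start_time/end_time for each run can come from the task_runs query earlier in the thread, so the monitoring flow only needs the GraphQL results plus the imported Flow object.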
a
Thanks, Kevin. I think we'll choose the second way, since we want to prevent our flows from getting into that TimedOut state. So yeah, I guess we take the list of flows with their tasks and just get the timeout from the Python task object. Looks pretty easy.
👍 1