# prefect-community
d
Hey, does this just mean our EKS cluster hasn’t whitelisted api.prefect.io?
```
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.prefect.io', port=443): Max retries exceeded with url: / (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.prefect.io', port=443): Read timed out. (read timeout=15)"))
```
k
No, it could be that you have a big API call happening. You can increase that timeout by setting
```
prefect.context.config.cloud.request_timeout
```
as seen here
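A minimal sketch of that override, assuming Prefect 1.x where the config object is mutable at runtime; 60 seconds is an arbitrary example value:

```python
import prefect

# Sketch, Prefect 1.x: raise the Cloud API read timeout (defaults to 15s)
# before any Client calls are made. 60 is just an example value.
prefect.context.config.cloud.request_timeout = 60
```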
d
What’s the surface and size of the API calls?
And is there an environment variable for that setting? We’re running a Prefect Cloud agent, so I’m not using that config.toml.
Looking here for reference
k
Should be
```
PREFECT__CLOUD__REQUEST_TIMEOUT
```
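A minimal sketch of using that variable, assuming Prefect 1.x’s convention that `PREFECT__`-prefixed environment variables override config.toml. In Python it has to be set before `prefect` is imported; for a Kubernetes agent it would live in the container’s env instead:

```python
import os

# Sketch: PREFECT__-prefixed env vars override config.toml in Prefect 1.x.
# Set before importing prefect so the config picks it up at import time.
os.environ["PREFECT__CLOUD__REQUEST_TIMEOUT"] = "60"  # seconds; example value

import prefect  # imported after the override on purpose
```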
Size could depend on your Flow too. Do you have any GraphQL queries? (They could be under the hood of tasks too, like `create_flow_run`.)
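For reference, a sketch of the kind of task meant here, assuming Prefect 1.x’s built-in `create_flow_run` task; the flow and project names are hypothetical:

```python
from prefect import Flow
from prefect.tasks.prefect import create_flow_run

# create_flow_run issues a GraphQL mutation to the Cloud API under the
# hood, so a slow API response can surface as a Client read timeout.
with Flow("parent-flow") as flow:  # flow names are hypothetical
    child_run = create_flow_run(flow_name="child-flow", project_name="default")
```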
d
I don’t have any tasks that create their own flows; it’s pretty simple so far. Flow registration runs separately from flow execution, and the executing flows are simple tasks (not composing tasks from or within tasks, for example), though I am passing state between tasks. Could this error be masking an underlying connection error from a failed `prefect.engine.results.S3Result` I/O operation?
k
I don’t think so, because that would use the boto3 client; I think this is specifically the Prefect Client. Was this a one-time thing yesterday?
There was a spike in latency, but I don’t think it was an incident. Or does this happen with some regularity?
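A sketch illustrating that distinction, assuming Prefect 1.x; the bucket name is hypothetical:

```python
from prefect import task
from prefect.engine.results import S3Result

# S3Result performs its I/O with boto3, not the Prefect Client, so a
# failed upload/download would surface as a botocore exception rather
# than a MaxRetryError against api.prefect.io.
@task(result=S3Result(bucket="my-flow-results"))  # hypothetical bucket
def add(x: int, y: int) -> int:
    return x + y
```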
d
Seems to just be one occurrence, on 5/7.
k
There may have been increased latency on 5/7 around 6pm ET, so it might be a one-off thing too.