# ask-community
a
I'm doing some async requesting with asyncio, and even though I'm handling retries and not throwing exceptions, I'm getting
Crash detected! Execution was cancelled by the runtime environment.
with no context, no logging, nothing. I'll reply with the function in question; it's pretty benign. Anyone have any tips on how to diagnose this?
import asyncio
from typing import Any, Dict, Optional

import aiohttp
from aiohttp_retry import ExponentialRetry, RetryClient

# log_api_response and log_error are helpers defined elsewhere in api_utils.py

async def async_get_with_retry(url: str, auth: Optional[aiohttp.BasicAuth] = None) -> Optional[Dict[str, Any]]:
    max_client_error_attempts = 10
    attempt = 0
    while attempt < max_client_error_attempts:
        try:
            retries = ExponentialRetry(attempts=10, max_timeout=180, statuses={429, 500, 503})
            async with RetryClient(retry_options=retries) as client:
                async with client.get(url, auth=auth) as response:
                    # Log the API response
                    log_api_response(response.status, text=await response.text())

                    if response.status < 400:
                        return await response.json()

            # Status was still >= 400 after the RetryClient's own retries
            # (or was not a retryable status), so give up on this URL
            break

        except aiohttp.ClientError as e:
            # Log the exception here
            log_error(f"""Error in async_get_with_retry function in api_utils.py
            {attempt} out of {max_client_error_attempts} {url}
            {e}""")
            attempt += 1
            if attempt >= max_client_error_attempts:
                break
            await asyncio.sleep(1)  # Add delay before retrying

    return None
Ultimately I just want to be able to get JSON back from ~45,000 separate URLs and chase down their "next-token" if they have one. I'd be happy to gut this for something more "Prefect-onic".
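For the next-token part I mean roughly this shape (hypothetical sketch; the "next-token" key and the query parameter name are assumptions about my API, and it reuses async_get_with_retry from above):

from typing import Any, Dict, List, Optional

import aiohttp


async def get_all_pages(url: str, auth: Optional[aiohttp.BasicAuth] = None) -> List[Dict[str, Any]]:
    # Follow "next-token" pagination until the API stops returning a token
    results: List[Dict[str, Any]] = []
    next_url: Optional[str] = url
    while next_url is not None:
        page = await async_get_with_retry(url=next_url, auth=auth)
        if page is None:
            break
        results.append(page)
        token = page.get("next-token")  # assumed response key
        next_url = f"{url}?next-token={token}" if token else None  # assumed query parameter
    return results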
async_get_with_retry is called in chunks of 200 URLs from the overall 45k in a single task with:
tasks = [
    asyncio.ensure_future(async_get_with_retry(url=url, auth=auth))
    for url in batch
]
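and then the batch is awaited with asyncio.gather, roughly like this (a sketch only; the chunking helper and the gather call here are approximate, not the exact code):

import asyncio
from typing import Any, Dict, Iterator, List, Optional

import aiohttp


def chunked(urls: List[str], size: int) -> Iterator[List[str]]:
    # Simple chunking helper (assumed here; not part of the original snippet)
    for i in range(0, len(urls), size):
        yield urls[i:i + size]


async def fetch_all(urls: List[str], auth: Optional[aiohttp.BasicAuth] = None) -> List[Optional[Dict[str, Any]]]:
    # Approximate shape of the batching: build the futures for each chunk of
    # 200 URLs and wait for the whole chunk before moving on
    results: List[Optional[Dict[str, Any]]] = []
    for batch in chunked(urls, 200):
        tasks = [
            asyncio.ensure_future(async_get_with_retry(url=url, auth=auth))
            for url in batch
        ]
        results.extend(await asyncio.gather(*tasks))
    return results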
k
Do you have a stack trace you can share?
a
I don't 😞
I'm tripping the client error on a few things, but it's not being thrown; it just randomly dies.
k
How is the flow deployed/served?
a
It's deployed to a Prefect agent running in GKE.
Good point, I'll see if I can pull something from k8s.
No trace, nothing there.
k
hm
a
Any ideas on how to do this in a more Prefect-thonic™ way?
Like, do you recommend using asyncio, or do you push more toward 1 task per URL with concurrency limits?
Also, curiously, the same task randomly restarted later with no log or error.
k
Overall, being prescriptive is hard from our point of view because everyone's doing something different. Like, does having a zillion tasks on the graph in the UI actually serve your needs?
a
Likely not.
And it also feels like creating each task adds unnecessary cost.
k
Yeah, unless there's some really serious need to be able to recover from the failure of one of those URLs and pick back up in the same spot, an all-or-nothing or chunked approach is fine.
I think the Prefect-thonic way is to be as Pythonic as you can until there's a need in the orchestration layer to invoke Prefect functionality.
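Something shaped like this is what I mean (an untested sketch, assuming Prefect 2 async flows and tasks; the names and chunk size are made up): one task per chunk of URLs, with the per-URL fan-out staying plain asyncio inside the task.

import asyncio
from typing import Any, Dict, List, Optional

from prefect import flow, task


@task
async def fetch_chunk(urls: List[str]) -> List[Optional[Dict[str, Any]]]:
    # One task per chunk of ~200 URLs instead of one task per URL
    return await asyncio.gather(*(async_get_with_retry(url=url) for url in urls))


@flow
async def fetch_everything(urls: List[str]) -> List[Optional[Dict[str, Any]]]:
    results: List[Optional[Dict[str, Any]]] = []
    for i in range(0, len(urls), 200):
        results.extend(await fetch_chunk(urls[i:i + 200]))
    return results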
a
Hmm, any tips on how to get a decent stack trace here?
k
Maybe deliberately raise an error in your except block.
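e.g., something shaped like this (just a sketch, not your exact function): when the retries run out, re-raise instead of returning None, so the flow run fails with the real traceback.

import asyncio

import aiohttp


async def fetch_or_die(url: str, attempts: int = 10) -> str:
    # Sketch: same retry-on-ClientError idea, but the last failure is re-raised
    # so the original aiohttp traceback shows up in the flow run logs
    for attempt in range(attempts):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as response:
                    response.raise_for_status()
                    return await response.text()
        except aiohttp.ClientError:
            if attempt == attempts - 1:
                raise  # fail loudly instead of swallowing the error
            await asyncio.sleep(1)
    raise RuntimeError("unreachable for attempts >= 1")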
a
Hmm, okay, I'll give it a whirl. I was hoping you had a silver bullet.
k
Yeah, the except as it is may be eating an error with some useful info in it.
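For example (a sketch; the logger name is made up), logging.exception inside the except block would at least keep the full traceback:

import logging

logger = logging.getLogger("api_utils")  # hypothetical logger name


def log_client_error(url: str, attempt: int, max_attempts: int) -> None:
    # Sketch: call this from inside the `except aiohttp.ClientError:` block.
    # logger.exception logs at ERROR level and appends the full traceback of
    # the exception currently being handled, instead of just str(e).
    logger.exception("async_get_with_retry failed for %s (attempt %d of %d)",
                     url, attempt, max_attempts)
    # traceback.format_exc() gives the same traceback as a string if it needs
    # to go through the existing log_error() helper instead.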