Andy Irvine
02/12/2024, 11:42 AM
the debug logs of our worker show this error repeatedly. What is the cause and can it be resolved?

11:40:12.345 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Running call get(timeout=1.9999821899691597) in thread 'APILogWorkerThread'
11:40:12.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' RUNNING, runtime=0.00> entered
11:40:14.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' COMPLETED, runtime=2.00> exited
11:40:14.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Encountered exception in call get(<dropped>)
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 318, in _run_sync
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Marvin
02/12/2024, 11:43 AM
The error appears to be related to a known issue with a CapacityLimiter, where it encounters a problem when trying to acquire a token on behalf of a flow run that it is already holding a token for.
Based on the details extracted from the open GitHub issue, you're not alone with this issue; it may be related to multiple agents handling the same flow run concurrently, or to multiple flow runs causing contention.
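The failure mode described in that issue can be reproduced in isolation. Below is a minimal sketch, assuming the limiter involved behaves like anyio's CapacityLimiter (the library Prefect's concurrency utilities build on); the flow run identifier is invented for illustration:
```python
# Standalone sketch of CapacityLimiter contention (an assumption about the
# mechanism, not Prefect's actual code). Acquiring a second token for the same
# borrower raises RuntimeError -- roughly what happens when two agents try to
# hold the same flow run.
import anyio


async def main():
    flow_run_id = "example-flow-run"  # hypothetical borrower key
    limiter = anyio.CapacityLimiter(1)

    await limiter.acquire_on_behalf_of(flow_run_id)  # first acquisition succeeds
    try:
        await limiter.acquire_on_behalf_of(flow_run_id)  # second one fails
    except RuntimeError as exc:
        print(f"contention: {exc}")
    finally:
        limiter.release_on_behalf_of(flow_run_id)


anyio.run(main)
```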
To work towards a resolution:
1. Check that you're running the latest available version of Prefect, which may contain a fix for this issue (a quick way to confirm the installed version is sketched after this list).
2. Review the issue on GitHub to see if there are any suggested workarounds or if a fix is proposed, either in the form of a code change or a configuration update.
3. Participate in the conversation on the GitHub issue to provide your own insight or obtain more accurate advice or fixes from the maintainers or community.
4. Ensure that your agent scaling configuration isn't resulting in unexpected behavior where multiple agents might be trying to acquire the same flow runs.
5. Monitor the issue on GitHub for updates or patches that resolve the bug.
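For step 1, a minimal way to confirm the Prefect version actually installed in the worker image (the prefect version CLI command inside the container reports the same information, plus the API version):
```python
# Print the installed Prefect version to confirm what the worker is running.
import prefect

print(prefect.__version__)
```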
Would you like me to check the latest version of Prefect to ensure you're up to date?
Andy Irvine
02/12/2024, 11:43 AM
11:40:12.345 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Running call get(timeout=1.9999821899691597) in thread 'APILogWorkerThread'
11:40:12.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' RUNNING, runtime=0.00> entered
11:40:14.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - <WatcherThreadCancelScope, name='get' COMPLETED, runtime=2.00> exited
11:40:14.346 | DEBUG | APILogWorkerThread | prefect._internal.concurrency - Encountered exception in call get(<dropped>)
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/prefect/_internal/concurrency/calls.py", line 318, in _run_sync
    result = self.fn(*self.args, **self.kwargs)
Marvin
02/12/2024, 11:44 AM
The error originates in the prefect._internal.concurrency module. The log message indicates that an exception was encountered after a timeout occurred on a 'get' call in the APILogWorkerThread.
The specific cause of this error may be complicated, involving threading and asynchronous call management. It's quite difficult to determine the exact cause without more context about the Prefect configuration, the task being executed, and the complete stack trace.
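For orientation, one common pattern that produces this kind of DEBUG line is a worker thread polling a queue with a timeout. The sketch below is an assumption about the general mechanism, not Prefect's actual implementation:
```python
# Generic pattern behind a timed 'get' call in a log worker thread (assumed,
# not Prefect's code): an empty queue surfaces as an exception on the get.
import queue

log_queue: "queue.Queue[str]" = queue.Queue()

try:
    # Comparable to the get(timeout=1.9999...) call in the DEBUG output above.
    record = log_queue.get(timeout=2.0)
except queue.Empty:
    # Nothing arrived within the interval; at DEBUG verbosity this can appear
    # as "Encountered exception in call get(...)" even when it is routine.
    record = None
```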
To resolve such issues, here are some general steps:
1. Confirm that you are using a compatible version of Python and any relevant dependencies with your version of Prefect.
2. Ensure that the setup for running tasks (e.g., concurrency limits, timeouts) is appropriate for the workload and available resources (a quick settings check is sketched after this list).
3. Review the full stack trace to understand exactly where the error is occurring and which operation is failing.
4. Consult the Prefect documentation or seek support for configuration-specific issues that may relate to the worker or environment setup.
5. Consider updating to the latest stable version of Prefect if you are not already on it, as updates often include bug fixes.
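As a starting point for step 2, you can inspect the settings the worker is actually running with. PREFECT_LOGGING_LEVEL and PREFECT_API_REQUEST_TIMEOUT are real Prefect 2.x settings, but whether either needs adjusting for your workload is an assumption to verify (prefect config view shows the full set):
```python
# Inspect settings that govern logging verbosity and API call timeouts in the
# worker's environment; whether they need tuning depends on your workload.
from prefect.settings import PREFECT_API_REQUEST_TIMEOUT, PREFECT_LOGGING_LEVEL

print("API request timeout:", PREFECT_API_REQUEST_TIMEOUT.value())
print("Logging level:", PREFECT_LOGGING_LEVEL.value())
```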
Would you like to proceed with checking for existing GitHub issues that resemble this problem in more detail, or with getting information on the latest version of Prefect that might contain an important fix?
Andy Irvine
02/12/2024, 11:56 AM
prefect:2.14.17-python3.11-kubernetes
Marvin
02/12/2024, 11:56 AM
After searching for issues related to concurrency exceptions in Prefect 2.14.17 with the python3.11 and kubernetes tags, no open issues were found.
Since no existing issues match the error you're experiencing, the next steps could be:
1. Ensure the rest of your setup is compatible with Prefect version 2.14.17.
2. Check if this issue has been reported in closed issues, as it may have been resolved in a more recent version.
3. Look through the Prefect changelogs to see if a newer version includes relevant fixes.
4. If the issue persists and is not reported, consider opening a new GitHub issue with full details, including the complete logs, configuration, and environment setup (a quick way to gather the version details is sketched below).
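For step 4, a small, purely illustrative helper (not a Prefect utility) to gather the environment details worth pasting into a new issue alongside the full debug logs:
```python
# Collect basic environment details for a GitHub issue report.
import platform

import prefect

print("Prefect:", prefect.__version__)
print("Python:", platform.python_version())
print("Platform:", platform.platform())
```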
Would you like to check for the latest version of Prefect to ensure you're operating with the most recent updates and possible bug fixes?