# ask-marvin
k
@Marvin tell me about any knowledge you have of the following error:
<frozen runpy>:128: RuntimeWarning: 'prefect.engine' found in sys.modules after import of package 'prefect', but prior to execution of 'prefect.engine'; this may result in unpredictable behaviour
Additional context: this warning comes from a pod running a KubernetesJob that uses a Docker image built from the base Prefect image plus my flow code.
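For context on where this warning originates: it is emitted by CPython's `runpy` itself (the `<frozen runpy>` frame), not by Prefect. It fires whenever `python -m pkg.mod` discovers that `pkg.mod` was already imported as a side effect of importing its parent package, which is exactly what happens when `prefect` imports `prefect.engine` before the entrypoint runs it. A minimal, self-contained reproduction (the package names `pkg`/`pkg.mod` are invented for illustration):

```python
import os
import runpy
import sys
import tempfile
import warnings

# Build a throwaway package whose __init__.py eagerly imports its own
# submodule, mirroring how `import prefect` pulls in `prefect.engine`.
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, "pkg")
os.makedirs(pkgdir)
with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("from . import mod\n")  # package imports its submodule on import
with open(os.path.join(pkgdir, "mod.py"), "w") as f:
    f.write("x = 1\n")

sys.path.insert(0, tmp)

# Emulate `python -m pkg.mod`: runpy imports the parent package first, finds
# pkg.mod already in sys.modules, and issues the RuntimeWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    runpy.run_module("pkg.mod", run_name="__main__")

print(any("found in sys.modules" in str(w.message) for w in caught))
```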
m
It seems we've uncovered some information about the error you're encountering. Here are the insights step-by-step:

### Insights from Documentation:

1. Resource Warnings: The `<frozen runpy>:128` RuntimeWarning you've encountered could indicate resource warnings that arise when certain connections or resources are not properly closed. This might pertain to open connections to something like a Docker daemon or a Kubernetes cluster that weren't properly closed. (Prefect Troubleshooting Guide)
2. Installation Notes: This issue might be related to the Python environment or something in the Prefect library code. Ensuring that you have the correct Python version and have properly installed Prefect in a virtual environment could alleviate some issues. (Prefect Getting Started: Installation)

### Insights from GitHub Issues:

1. File Handle Issues: A similar issue was reported where improper handling of open files during flow cancellation caused problems. This could relate to the broader question of how resources and modules are handled in the environment. (Unable to handle 'Cancelling' events)
2. Cancellation with Dask: Another related issue appears when cancelling a flow running on Kubernetes with the `DaskTaskRunner`, where the flow pod is removed immediately without proper cleanup, leaving orphaned pods. This behavior may also provide clues about resource mismanagement that could lead to the runtime warning you're seeing. (Flow cancellation does not delete Dask Scheduler & Worker pods)
3. Cloning Error: A related issue during the cloning step in deployments shows the same warning when re-running deployments, specifically pointing to resource mismanagement that leads to unpredictable behavior or errors on subsequent runs. (Cloning Error during pull step)

### Conclusion:

The error you encountered might be due to improper handling and closing of resources or modules within the environment, possibly exacerbated by specific deployment setups such as Kubernetes or Docker. Ensuring proper closure of resources, validating your installation setup, and keeping Prefect and its dependencies up to date could mitigate such issues. If the issue persists, consider examining your flow's lifecycle management and how resources are handled during it, especially pod management in Kubernetes and connections established during the flow run.
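One hedged workaround, assuming the warning is benign in your case: since `runpy` emits it before any flow code executes, it cannot be filtered from inside the flow itself, but it can be silenced process-wide via the standard `PYTHONWARNINGS` environment variable, e.g. set in the image (`ENV PYTHONWARNINGS=ignore::RuntimeWarning` in a Dockerfile). A sketch demonstrating the effect with a stdlib module standing in for `prefect.engine`:

```python
import os
import subprocess
import sys

# Child process reproduces the warning: json.tool is imported first, then
# re-run as __main__ via runpy, just as `python -m prefect.engine` does after
# `prefect` has already imported `prefect.engine`.
child = "import json.tool, runpy; runpy.run_module('json.tool', run_name='__main__')"

# With the default filter the RuntimeWarning is printed to stderr.
noisy = subprocess.run(
    [sys.executable, "-c", child],
    input="{}", capture_output=True, text=True,
    env={**os.environ, "PYTHONWARNINGS": "default::RuntimeWarning"},
)

# With PYTHONWARNINGS set to ignore RuntimeWarning, it is suppressed.
quiet = subprocess.run(
    [sys.executable, "-c", child],
    input="{}", capture_output=True, text=True,
    env={**os.environ, "PYTHONWARNINGS": "ignore::RuntimeWarning"},
)

print("found in sys.modules" in noisy.stderr)  # warning present by default
print("found in sys.modules" in quiet.stderr)  # suppressed via env var
```

Note that ignoring the warning does not address the resource-cleanup concerns above; it only hides the message when you have verified the duplicate import is harmless.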
💩 1