# ask-community
x
Wondering if anyone has had the same problem. I can't make it short enough for Marvin.
n
hi @Xun Wang - this is likely related to anyio 4 somehow, since they now raise exception groups. Can you share the prefect version you have installed at runtime?
x
```
Version:             2.20.0
API version:         0.8.4
Python version:      3.10.6
Git commit:          15274df8
Built:               Thu, Aug 1, 2024 3:14 PM
OS/Arch:             linux/x86_64
Profile:             id90-cloud-profile
Server type:         cloud
```
It happens on the previous version as well, I mean 2.19.x.
n
can you share your anyio version as well?
x
```
Name: anyio
Version: 4.4.0
Summary: High level compatibility layer for multiple asynchronous event loop implementations
Home-page:
Author:
Author-email: Alex Grönholm <alex.gronholm@nextday.fi>
License: MIT
Location: /home/pentaho/anaconda3/envs/py310/lib/python3.10/site-packages
Requires: exceptiongroup, idna, sniffio, typing-extensions
Required-by: httpx, prefect, starlette
```
n
hmm, I'm surprised the behavior is consistent between 2.19.x and 2.20.0, since anyio 4 compat came with 2.20.0. Are you sure that you get the same behavior on 2.19.x?
i suspect you'll likely be able to get around this with `pip install 'anyio<4'` in your runtime, but I'm interested in figuring out why this is happening
x
I can try later to rollback to 2.19.x.
Some of the flows were running fine yesterday but now show the same error. I am sure that on 2.19.x I had the same issue for flows that use asyncio, but not for those without it.
n
can you share how you run your worker? e.g. is it process, docker etc
x
yes it is the process worker
n
do you do any ad-hoc `pip install` commands at runtime? like `pull` steps or `EXTRA_PIP_PACKAGES`?
x
prefect worker start --pool 'id90-process-pool' --type process &
I am trying to remove my agent work pool; it feels easier to migrate to a process pool first. I tried a managed pool, which requires EXTRA_PIP_PACKAGES.
```
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /home/pentaho/prefect2
```
That is the only pull step.
@Nate I thought I had rolled back to 2.19.9, but the prefect command shows version 2.20.1. Somehow the error has disappeared, though. Will try to test a few things tomorrow. Thanks for your quick response! Have a great Friday!
n
no problem! have a great weekend :catjam:
x
Just an update: so far I haven't seen this issue show up again after the Prefect 2.20.1 update for the flows that had the problem before. If I run into the same issue again with a certain flow, I will post the case. Again, thanks for the help!
n
thanks for the update! glad things are working for you
x
@Nate It is weird that the error comes back again without any change made: "An error occurred while monitoring flow run ..." It is the exact same error; a few seconds after the flow run completes, it shows up in the cloud log. I am wondering if this is related to something on the Prefect Cloud side trying to monitor the process flow running on my server. So far it seems to be annoying noise, and it'd be nice to get rid of it. Will see if a restart of the process work pool helps.
n
to clarify, it's the same error as you linked in your original message in this thread?
x
Yes
image.png
Restarted the worker pool and the error is gone for now. The last message in the console is "Process 3021410 exited cleanly." That was not for the run with the error, though.
n
any chance you can share the config of your work pool? feel free to DM it if you want
```
prefect work-pool inspect <your-work-pool>
```
i'm wondering if this has to do with streaming logs or something
x
Is it related to the setting "Stream Output (Optional)"? I have it enabled for the process pool, though I don't know what it does. 😞
n
that was my suspicion, can you try disabling that? I'm not sure what that actually means for a process worker 🙂
x
```
WorkPool(
    id='10b565af-671b-4dad-86c0-05605ed60644',
    created=DateTime(2024, 7, 29, 16, 32, 13, 360940, tzinfo=Timezone('+00:00')),
    updated=DateTime(2024, 8, 9, 21, 56, 43, 844284, tzinfo=Timezone('+00:00')),
    name='id90-process-pool',
    description='',
    type='process',
    base_job_template={
        'variables': {
            'type': 'object',
            'properties': {
                'env': {
                    'type': 'object',
                    'title': 'Environment Variables',
                    'description': 'Environment variables to set when starting a flow run.',
                    'additionalProperties': {'type': 'string'}
                },
                'name': {'type': 'string', 'title': 'Name', 'default': 'ID90WP-TP', 'description': 'Name given to infrastructure created by a worker.'},
                'labels': {'type': 'object', 'title': 'Labels', 'description': 'Labels applied to infrastructure created by a worker.', 'additionalProperties': {'type': 'string'}},
                'command': {
                    'type': 'string',
                    'title': 'Command',
                    'description': 'The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker.'
                },
                'working_dir': {
                    'type': 'string',
                    'title': 'Working Directory',
                    'format': 'path',
                    'description': 'If provided, workers will open flow run processes within the specified path as the working directory. Otherwise, a temporary directory will be created.'
                },
                'stream_output': {
                    'type': 'boolean',
                    'title': 'Stream Output',
                    'default': True,
                    'description': 'If enabled, workers will stream output from flow run processes to local standard output.'
                }
            }
        },
        'job_configuration': {
            'env': '{{ env }}',
            'name': '{{ name }}',
            'labels': '{{ labels }}',
            'command': '{{ command }}',
            'working_dir': '{{ working_dir }}',
            'stream_output': '{{ stream_output }}'
        }
    },
    status=WorkPoolStatus.READY,
    default_queue_id='7a3fab4f-beff-4116-af69-09d759a85b83'
)
```
Do I need to restart the worker after a setting change? Anyway, I have restarted the worker. Let's wait a few days and see if the error pops up again or not. Thanks and have a great evening!
n
🤞 no problem!