# prefect-server
Hi, tried to search for an answer in the history but failed 😞 We have `Set state` and `Cancel` options in the UI for each running flow.
• If I press `Cancel` - the flow is stuck in the Cancelling state forever, and what is more dangerous for us, the child process on the agent machine which actually executed this flow is stuck too and is never killed. It means it does not release resources. I can see 2 stuck processes (as in the picture) - 1 for the flow execution and 1 for its heartbeat.
• Setting the state manually seems to work well, but not if I set the state after the flow is cancelled. Could you tell me something about this behaviour? PS: I'm talking about the Cloud backend + Local Agent.
So cancellation is a best-effort attempt to kill execution initiated on your infrastructure. It's hard to say what the root cause is here. Can you share more about:
1. What is this flow doing? Perhaps you can share an example I could reproduce.
2. How did you start this stuck flow run?
3. Under what circumstances does it happen that cancellation doesn't work? Is it only 1 specific flow that causes issues?
@Anna Geller I just narrowed the problem down to the executor type. You can reproduce the problem with the code below if you press Cancel in the UI before the task is executed.
from time import sleep

from prefect import Flow, task
from prefect.executors import LocalDaskExecutor

@task
def sleep_task():
    # long-running task, so there is time to press Cancel before it finishes
    sleep(60)

with Flow(name='train-flow') as flow:
    sleep_task()

if __name__ == '__main__':
    flow.executor = LocalDaskExecutor(scheduler='processes', num_workers=8)
    flow.run()
But it works fine with LocalExecutor, for example. I think your 2nd and 3rd questions are not relevant anymore. I would appreciate any updates about this problem.
The problem with that is that it's hard to just cancel all the processes or threads that you started; you can experience the same issue when you use multithreading or multiprocessing with Python's concurrent.futures. We have some open issues about that:
• https://github.com/PrefectHQ/prefect/issues/4490
• https://github.com/PrefectHQ/prefect/issues/5198
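To illustrate the limitation mentioned above (not Prefect code, just a minimal standard-library sketch): in concurrent.futures, `Future.cancel()` only works on work that has not started yet; a task that is already executing cannot be killed, which is essentially the situation the stuck Dask worker processes are in.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def long_task():
    # stands in for a task that is already busy on a worker
    time.sleep(2)
    return "done"

with ThreadPoolExecutor(max_workers=1) as pool:
    running = pool.submit(long_task)
    time.sleep(0.1)                  # give the worker time to pick it up
    queued = pool.submit(long_task)  # waits in the queue behind `running`

    print(running.cancel())  # False - already executing, cannot be stopped
    print(queued.cancel())   # True  - still queued, so it can be cancelled
```

The same asymmetry applies to processes: once the interpreter has handed work to a child process, there is no safe, generic way to interrupt it from the outside, which is why cancellation is only "best effort".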
Sorry, I realized I added this to an internal repository, but all I wanted to say is: we are aware of the issue, and I linked your code to the open issue.