hi all, i'm working on a flow with a series of mapped tasks -- mapped task 1 feeds its results into mapped task 2. i'm observing some strange behavior:
• all child tasks of mapped task 1 complete successfully
• all child tasks of mapped task 2 are triggered and complete successfully
• BUT for some reason, prefect appears to re-trigger the (already completed) child tasks of mapped task 1 once all have completed. these re-runs end up failing (no heartbeat detected, presumably because the dask workers were released once the tasks completed)
technically the pipeline completes everything, but all of the children of mapped task 1 are marked as failures once bullet 3 hits ^^^ i'm running this in a gke autopilot cluster... working on testing in another environment now
any ideas on why prefect would even try to re-run a successful child task?
my current theory is that it's related to the environment (GKE Autopilot w/ DaskExecutor), but i'm not clear on what could go wrong w/ prefect to trigger the re-execution...
k
Kevin Kho
02/25/2022, 6:38 PM
If you sent tasks A -> B -> C to a Dask worker, and A and B finished but C did not, then when the Dask worker dies, Dask still wants to compute C; it sees A and B as upstream dependencies, so it automatically retries them
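the recovery behavior described above can be sketched with a toy scheduler model (pure python, not Dask's actual code): results live in the worker's memory, so when the worker dies they are lost, and the scheduler re-runs already-successful upstream tasks to satisfy the still-pending one

```python
# Toy model of Dask's recovery semantics (illustration only, not Dask code).
# A worker's results are held in memory; if the worker dies before a downstream
# task runs, its completed upstream results vanish and must be recomputed.

run_counts = {"A": 0, "B": 0, "C": 0}
deps = {"A": [], "B": ["A"], "C": ["B"]}

def run(task, results):
    for d in deps[task]:
        if d not in results:          # upstream result lost -> recompute it
            run(d, results)
    run_counts[task] += 1
    results[task] = f"{task}-result"

# first attempt: A and B finish on the worker...
worker_results = {}
run("A", worker_results)
run("B", worker_results)

# ...then the worker dies before C runs: its in-memory results vanish
worker_results = {}

# the scheduler still wants C, so A and B get re-executed as dependencies
run("C", worker_results)

print(run_counts)  # {'A': 2, 'B': 2, 'C': 1} -- A and B ran twice
```

in the thread's scenario, those re-executions of A and B are the "mystery" re-runs of mapped task 1's children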
are you on Prefect Server or Cloud?
l
Luke Segars
02/25/2022, 8:36 PM
prefect cloud!
and it's running A -> B successfully but then re-launches A for some reason. i'll try disabling retries though, that's a good thought
k
Kevin Kho
02/25/2022, 8:44 PM
Cloud has Version Locking though, which would stop this. Can you check if Version Locking is enabled? I’m also wondering if it’s a race condition from the autoscaling
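for context: Version Locking is an optimistic-concurrency guard -- each state update must present the version it last read, so a stale or duplicate run can't overwrite a task that already reached a finished state. here's a minimal sketch of the idea (hypothetical class and method names, not Prefect's actual API):

```python
# Simplified "version locking" sketch (illustrative, NOT Prefect internals).
# Each task run stores (state, version); a writer must present the version it
# last read, and stale writers are rejected instead of clobbering the state.

class StaleVersionError(Exception):
    pass

class TaskRunStore:
    def __init__(self):
        self.state = "Pending"
        self.version = 0

    def set_state(self, new_state, expected_version):
        if expected_version != self.version:   # stale writer -> reject
            raise StaleVersionError(
                f"expected v{expected_version}, store is at v{self.version}"
            )
        self.state = new_state
        self.version += 1

store = TaskRunStore()
store.set_state("Running", expected_version=0)
store.set_state("Success", expected_version=1)

# A duplicate/zombie run that read version 1 earlier now tries to flip the
# task back to Running -- with version locking the stale write is refused.
try:
    store.set_state("Running", expected_version=1)
except StaleVersionError:
    pass

print(store.state)  # still "Success"
```

that's why enabling it should stop the re-triggered children of mapped task 1 from overwriting their Success states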
l
Luke Segars
02/25/2022, 8:51 PM
whoa, didn't know about version locking! yes i'll try this out