Hi - I'm trying to orchestrate a multi-step job in a nice way. I'd like to run each step on its own Deployment's native infra/params/etc., but chain them into a kind of meta-Flow that runs the whole job, so there's one overarching view of it.
I could get the first part easily enough with run_deployment() in a Task. However, I am finding it tricky to ensure that if those subflows/sub-Deployments fail, the whole chaining meta-Flow fails as well.
Right now, they don't seem to - I have cases where a subtask has failed but the whole job shows as Completed, not Failed. I would expect this to operate as, essentially, a monad. I feel like I'm missing a trick somewhere.
Anyone here got any idea what it might be?
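In case it helps, here's a trimmed sketch of roughly what I have right now (deployment and step names are placeholders, not my real ones):

```python
from prefect import flow, task
from prefect.deployments import run_deployment


@task
def run_step(deployment_name: str, parameters: dict):
    # run_deployment blocks (by default) until the triggered flow run reaches
    # a terminal state and then returns the FlowRun object. Nothing here
    # inspects that state, so a Failed sub-run still leaves this task
    # Completed, and the parent meta-flow shows Completed too.
    return run_deployment(name=deployment_name, parameters=parameters)


@flow
def meta_flow():
    # Each step runs on its own deployment's infrastructure; the meta-flow
    # just chains them to give a single overarching view.
    run_step("my-job/step-1", {})
    run_step("my-job/step-2", {})
    run_step("my-job/step-3", {})


if __name__ == "__main__":
    meta_flow()
```

My best guess at a workaround is to inspect the returned FlowRun's state inside the task and raise if it isn't Completed, but that feels like something the framework should be doing for me.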
Thanks @Islam Otmani, but I'm afraid that we're invested in on-prem deployment at this point, and the behaviour I'm observing honestly almost seems like a bug