Sean Talia
06/08/2021, 3:37 PM

Jenny
06/08/2021, 3:41 PM

Kevin Kho
Kevin Kho
Sean Talia
06/08/2021, 3:48 PM
`FAIL` signal?" but at a glance that seems like an anti-prefect way of setting that up

Sean Talia
06/08/2021, 3:51 PM

Sean Talia
06/08/2021, 3:52 PM

Jenny
06/08/2021, 3:54 PM

```python
from prefect.engine import signals

def custom_trigger(upstream_states):
    # Run only when every upstream task except "A" finished successfully
    if not all(s.is_successful() for edge, s in upstream_states.items()
               if edge.upstream_task.name != "A"):
        raise signals.TRIGGERFAIL("Not all upstream states were successful.")
    return True
```
Sean Talia
06/08/2021, 3:54 PM

Sean Talia
06/08/2021, 3:57 PM

Sean Talia
06/08/2021, 4:06 PM
`T_1, T_2, ..., T_n` that were downstream of C that I might want to run regardless of the outcome of C, and having to attach a custom trigger to all of those downstream `T_x` tasks seems like overkill... it would be really nice to be able to do something like:

```
@task(always_continue=True)
def task_c(...):
    ...
```

I totally get that the behind-the-scenes implementation of this might not be so easy though 🙂

Jeremiah
`C` failed (meaning you want it to enter a Failed state), but you do not want your downstream tasks to treat it as they would a normal failure. I would propose that you add a new task to your graph that essentially converts `C`'s failure into a success:

C → new_task → T

If you put an `always_run` trigger on your `new_task` and have it simply `return True` (or return `C`'s data, as appropriate), then you will be able to observe `C`'s failure states without passing them to the trigger on `T`, as it will always receive `new_task`'s success state.

Jeremiah
`T1, T2, T3, T4` all downstream of `new_task` and all responding to its `Success` in the same way, such that you only have to define the dummy task once and reuse this behavior across all downstreams. For convenience, you could write a function that automatically generated the dummy task: `always_continue(C)`
Sean Talia
06/09/2021, 6:02 PM