# ask-community
h
Hi, I’m running a simple flow where get_logs is decorated with an always_run trigger, in case the container fails. In this case, the container fails, and get_logs correctly runs. But why is ping_slack_channel, which is set up with the default “all_successful” trigger, running as well?
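(A minimal sketch of the setup being described, in Prefect 1.x. The task names, signatures, and wiring below are assumptions pieced together from the thread, not the original code.)

from prefect import task, Flow
from prefect.triggers import always_run

@task
def collect_container_logs():
    # assumed: runs/monitors the container, raises if it fails,
    # otherwise returns the container's exit code
    return 0

@task(trigger=always_run)  # runs even when its upstream task fails
def get_logs():
    return "container logs"

@task  # default trigger: all_successful
def ping_slack_channel(key):
    return "slack response"

with Flow("container-flow") as flow:
    exit_code = collect_container_logs()
    logs = get_logs()
    logs.set_upstream(exit_code)        # get_logs still runs when the container task fails
    slack_response = ping_slack_channel("some-slack-key")
    slack_response.set_upstream(logs)   # the slack task's only immediate upstream is get_logs

# Because get_logs succeeds (thanks to always_run), ping_slack_channel's
# all_successful trigger is satisfied and it runs, even though
# collect_container_logs failed further upstream.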
k
Hey @Hugo Kitano, this is because it only sees the immediate upstream tasks. Are you intending for ping_slack_channel to fail here? I think one thing you can do is call set_upstream on collect_container_logs so that it won’t run if that task fails.
h
Ah that makes sense. I’m intending for ping_slack_channel to not run at all
k
Oh sorry, yeah, that’s what I meant. Yes, I would connect collect_container_logs to make it a direct upstream task.
h
So would I do something like slack_response.set_upstream(exit_code)?
k
Yes exactly
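(Sketch of the agreed fix, using the same assumed names as above: wiring the container task directly upstream of the Slack task means the default all_successful trigger sees its failure.)

with Flow("container-flow") as flow:
    exit_code = collect_container_logs()
    logs = get_logs()
    logs.set_upstream(exit_code)
    slack_response = ping_slack_channel("some-slack-key")
    slack_response.set_upstream(logs)
    # the extra edge below makes the failing container task an immediate
    # upstream of ping_slack_channel, so its all_successful trigger fails
    # and the Slack ping does not run
    slack_response.set_upstream(exit_code)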
h
One more question: I read in the docs you can use the task names in the set_upstream function instead of the return objects. I thought that this would be clearer to read, but I’m having trouble making this work. For example, in my original code screenshot, I tried to do get_logs.set_upstream(collect_container_result) instead of logs.set_upstream(exit_code) and got a completely different graph. What would be the correct way of doing this?
k
Try
slack_response = ping_slack_channel(key, upstream_tasks=[get_logs])
h
I keep getting this weird kind of error where the same task is instantiated twice in the graph.
k
Oh I see. Yeah, I guess it’s only good to do it this way if you use it like

with Flow("my-flow") as flow:
    get_logs()

Then you can use the name, but if you instantiate it with logs = get_logs(), then you’ll get the weird graph.
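(A rough illustration of that distinction, again with assumed names; the exact behaviour may vary between Prefect 1.x versions.)

# Referencing a task purely by name is fine when that task isn't also
# bound to a variable elsewhere in the flow:
with Flow("container-flow") as flow:
    ping_slack_channel("some-slack-key", upstream_tasks=[get_logs])

# But calling a task, e.g. logs = get_logs(), binds a *copy* of it into
# the flow. A later reference to get_logs by name then points at the
# original, unbound task and adds it to the graph as a second node,
# which is the "same task instantiated twice" problem:
with Flow("container-flow") as flow:
    exit_code = collect_container_logs()
    logs = get_logs()
    # this adds a second get_logs node; referencing the bound copy
    # (upstream_tasks=[logs]) or calling logs.set_upstream(...) avoids it
    slack_response = ping_slack_channel("some-slack-key", upstream_tasks=[get_logs])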
h
I see, makes sense.