However, now we want to map over multiple inputs to create multiple parallel pipelines of the above:
whole_tuple_result = my_tuple_task.map(inputs)
otherresult1 = othertask1.map(whole_tuple_result) # tasks must break up tuple inside the function
otherresult2 = othertask2.map(whole_tuple_result)
Is there a way to keep the elegant tuple destructuring while still mapping over the result? Trying to do it directly gives the error:
TypeError: Task is not iterable. If your task returns multiple results, pass nout to the task decorator/constructor, or provide a Tuple return-type annotation to your task.
(We've already set nout, which is why the first example works.)
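For context, here is a plain-Python sketch (no Prefect; the helper names are hypothetical) of the shapes involved: the mapped task yields one tuple per input, so the combined result is a list of tuples, and the current workaround is a small "selector" task per position that unpacks the tuple inside the function.

```python
def my_tuple_task(x):
    # Stands in for a task declared with nout=2: returns a 2-tuple per input.
    return x * 2, x * 3

def select(results, i):
    # Would be a @task in a real flow; pulls position i out of each tuple.
    return [r[i] for r in results]

inputs = [1, 2, 3]

# Analogous to my_tuple_task.map(inputs): a list of tuples, not a tuple of lists.
whole_tuple_result = [my_tuple_task(x) for x in inputs]

firsts = select(whole_tuple_result, 0)   # [2, 4, 6]
seconds = select(whole_tuple_result, 1)  # [3, 6, 9]
```

The direct destructuring fails because the mapped result is a single deferred object (a list of tuples at runtime), not something iterable at flow-definition time.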