or submit a single job and manage running the different models within that single Spark job.
package, which has you do everything through method calls on an authentication object. So you go
and point it at a special credentials file, and then everything goes through that object. Should I just pass it to Secrets? Is there any risk of sensitive credentials being cached somewhere if I define the
gc = gspread.service_account(filename=<filename>)
object within the Flow itself?
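One way to avoid having a credentials file on disk at all is to supply the service-account JSON through an environment variable and build the client from the parsed dict. This is a minimal sketch of that pattern, not gspread's or Metaflow's documented recipe; the variable name `GCP_SA_JSON` and the helper function are hypothetical, and it assumes your secrets mechanism (e.g. Metaflow's `@secrets` decorator) exposes the secret as an environment variable:

```python
import json
import os


def load_service_account_info(env_var="GCP_SA_JSON"):
    """Parse service-account credentials JSON from an environment
    variable (hypothetical name) instead of a file on disk, so no
    credentials file is written inside the Flow's working directory.
    """
    raw = os.environ[env_var]
    info = json.loads(raw)
    # Sanity-check fields a Google service-account key always carries.
    for key in ("type", "client_email", "private_key"):
        if key not in info:
            raise ValueError(f"credentials JSON missing {key!r}")
    return info


# The parsed dict can then be handed to gspread without touching disk,
# assuming a gspread version that supports dict-based auth:
#   import gspread
#   gc = gspread.service_account_from_dict(load_service_account_info())
```

The in-memory dict still lives in the process environment, so the main win is avoiding a credentials file that could be checked into a code package or cached with the Flow's artifacts.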
increments, and map the individual data packets across the remaining DAG. In a way this seems like a “workflow loop,” with the parameters for the first node constantly updating.