Hey William, based on what you've described, I think I have a proposal. You could have an orchestrator flow (or parent flow, if you will) that is scheduled to run on a regular basis (once a day, if you'd like). Whenever this parent flow executes, it can dynamically generate the start time and end time for requesting data from the API (maybe using something like pendulum). Once it has generated the start and end times, the orchestrator flow can trigger a child flow run using `run_deployment`, which takes a deployment name, parameters, and a scheduled run time as input.
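Here's a minimal sketch of what that parent flow could look like, assuming Prefect 2.x. The deployment name `api-extract/daily`, the parameter names, and the one-day window are all placeholders you'd swap for your own:

```python
import pendulum
from prefect import flow
from prefect.deployments import run_deployment


@flow
def orchestrator():
    # Compute the request window; here, the previous full UTC day
    # (adjust to whatever window your API expects).
    end_time = pendulum.now("UTC").start_of("day")
    start_time = end_time.subtract(days=1)

    # Trigger the child deployment with the computed window.
    # "api-extract/daily" is a hypothetical "flow-name/deployment-name".
    run_deployment(
        name="api-extract/daily",
        parameters={
            "start_time": start_time.to_iso8601_string(),
            "end_time": end_time.to_iso8601_string(),
        },
        scheduled_time=pendulum.now("UTC"),  # or a later time, if you want a delay
        timeout=0,  # return immediately instead of blocking until the child finishes
    )
```

Note the `timeout=0`; without it, `run_deployment` waits for the child flow run to complete before returning, which you probably don't want if the run is scheduled for later.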
After the parent calls `run_deployment()`, the result is a child flow run, set to run at the defined scheduled time with the parameters passed in by the parent. The child flow run can then request data from the API and drop the data into S3, along the lines of the sketch below.
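A rough sketch of the child flow, assuming the API can be hit with a plain `requests.get` and that you're writing to S3 with boto3. The endpoint, query parameter names, bucket, and key layout here are hypothetical:

```python
import boto3
import requests
from prefect import flow


@flow
def api_extract(start_time: str, end_time: str):
    # Hypothetical API endpoint and query parameters; adjust to your source.
    resp = requests.get(
        "https://api.example.com/data",
        params={"start": start_time, "end": end_time},
        timeout=30,
    )
    resp.raise_for_status()

    # Drop the raw payload into S3, keyed by the request window.
    # "my-bucket" is a placeholder bucket name.
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-bucket",
        Key=f"raw/{start_time}_{end_time}.json",
        Body=resp.content,
    )
```

Because the start and end times arrive as parameters, you can also re-run the child deployment by hand with any window you like, which makes backfills straightforward.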