configuration combined with
support parallel execution of mapped tasks? Currently I'm unable to get the ECS task to spawn any additional processes: all tasks run in sequence (first image). When running the same flow in the local Prefect environment, the tasks all run in parallel (second image). If I use
, the flow tasks are executed in parallel, but a threaded flow only covers part of our use cases.
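Not the asker's Prefect setup, but the underlying behavior can be illustrated with plain Python: a threaded executor runs mapped work concurrently, while the default path runs items one at a time. The function names here are hypothetical stand-ins for a mapped task.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def mapped_task(x: int) -> int:
    # Stand-in for a mapped task; sleeps to simulate real work.
    time.sleep(0.2)
    return x * 2

inputs = [1, 2, 3, 4]

# Sequential execution: total time is roughly n * 0.2s.
start = time.perf_counter()
sequential = [mapped_task(x) for x in inputs]
seq_elapsed = time.perf_counter() - start

# Threaded execution: all four items overlap, roughly 0.2s total.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(mapped_task, inputs))
thr_elapsed = time.perf_counter() - start
```

The results are identical either way; only the wall-clock time differs, which is what the two screenshots show.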
I tried out some already-deployed flows; they run fine. But if I re-deploy them, they start raising the same error. Running
Failed to load and execute Flow's environment: TypeError("default() got an unexpected keyword argument 'default_scopes'")
in the container and on the server. Any advice would be great!
for flow change detection in automated flow registration processes. I found that whenever a flow is registered with a particular task, let's say
, and then I update the values of the parameters passed to the task (imagine for instance
task_A(var='typo_string') --> task_A(var='correct_string')
will remain invariant, so the flow will not reflect the latest changes because no new version is bumped to the server. I'd like to know if there is a better way to do this, or whether I'm using the wrong approach.
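The failure mode can be illustrated with plain Python (hashlib only, no Prefect): a hash computed over the flow's structure alone is blind to a changed literal argument, while hashing the full source text is not. Both helper names below are hypothetical.

```python
import hashlib

def structural_hash(task_names):
    # Hashes only task identity/graph metadata: identical before and
    # after a call-site literal changes, so no new version is detected.
    joined = "|".join(sorted(task_names))
    return hashlib.sha256(joined.encode()).hexdigest()

def source_hash(source_text: str) -> str:
    # Hashes the raw source text: any edit, including a changed literal,
    # produces a different digest and would trigger re-registration.
    return hashlib.sha256(source_text.encode()).hexdigest()

before = 'task_A(var="typo_string")'
after = 'task_A(var="correct_string")'

structural_before = structural_hash(["task_A"])
structural_after = structural_hash(["task_A"])
```

Here `structural_before == structural_after` even though the source changed, while `source_hash(before) != source_hash(after)`, which is why a source-based key catches parameter edits that a structure-based key misses.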
Failed to load and execute Flow's environment: ModuleNotFoundError("No module named 'ecs_test'")
I would like to combine those into a single flow run, with shared parameters passed to the flow runs. That seems to be what's described in this documentation, which mentions registering flows to specific projects using the orchestration API. So far I've been running flows without that. Is there any way to create a flow of flows by importing flows locally, or do they have to be registered with the orchestration API to do so?
test-flow-of-flows:
	@echo 'Running flow A'
	@python flows/flow_a.py \
		param1=foo \
		param2=bar
	@python flows/flow_b.py \
		param1=foo \
		param2=bar
	@python flows/flow_c.py \
		param1=baz \
		param2=bar
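Without the orchestration API, one local-import approach is to treat each flow module as a callable and drive them from a parent function that forwards the shared parameters. The function names mirror the Makefile target above, but these are hypothetical stand-ins, not Prefect flow objects.

```python
# Hypothetical stand-ins for flows/flow_a.py etc.; each accepts the
# shared parameters and returns a marker string for illustration.
def flow_a(param1: str, param2: str) -> str:
    return f"A:{param1}:{param2}"

def flow_b(param1: str, param2: str) -> str:
    return f"B:{param1}:{param2}"

def flow_c(param1: str, param2: str) -> str:
    return f"C:{param1}:{param2}"

def parent_flow(param1: str, param2: str) -> list:
    # Shared parameters are forwarded to each subflow; flow_c overrides
    # param1, matching the Makefile invocation above.
    return [
        flow_a(param1=param1, param2=param2),
        flow_b(param1=param1, param2=param2),
        flow_c(param1="baz", param2=param2),
    ]

results = parent_flow(param1="foo", param2="bar")
```

This keeps everything in one process with plain imports; the trade-off versus API-registered subflows is that the orchestrator sees only one flow run, so the subflows get no independent run history or retries.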
into the schedule as part of
. I have hacked
to support this. 2. Something beyond my Python knowledge is causing
to do something weird when the yield returns. This causes the execution to drop directly out of the while loop and exit the method. Each new call to
winds up creating a new
and running the same event start_date again. If I build the clock directly and call its next() on the iterable returned
it works as I would expect.
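The generator pitfall described here can be reproduced in a few lines of plain Python (names hypothetical): rebuilding the event generator on every call restarts it at start_date, whereas holding onto a single iterator advances as expected.

```python
from datetime import datetime, timedelta

def clock_events(start_date: datetime):
    # Generator of scheduled event times, one per day from start_date.
    current = start_date
    while True:
        yield current
        current += timedelta(days=1)

start = datetime(2021, 1, 1)

# Pitfall: a fresh generator per call always yields start_date again.
first_call = next(clock_events(start))
second_call = next(clock_events(start))

# Fix: build the iterator once and keep advancing the same object.
events = clock_events(start)
first, second = next(events), next(events)
```

With the fresh-generator pattern, `first_call` and `second_call` are both the start date; with a single iterator, the second call advances to the next event, which matches the behavior difference described above.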
in Python uses its own bundle and set
. All good now.