Ilya Galperin — 08/12/2022, 12:37 AM
1. All flows share a single universal storage block, and we run

prefect deployment build flow_a/flow.py:entry_point -n flow_a_deployment --storage-block s3/universal-storage-block

commands from a parent directory containing multiple flows:

flows/ <working directory>
  flow_a/
  flow_b/

This way we use only one storage block, but if multiple developers are working in the same shared bucket (like in a staging environment), the blast radius is bigger. For example, if a developer is working on only a single flow, there is a risk of overwriting someone else's in-development code by accidentally committing code from another flow/folder.

2. Every flow gets its own storage block, and we run

prefect deployment build ./flow.py:entry_point -n flow_a_deployment --storage-block s3/individual-storage-block

commands from that flow's root directory:

flow_a/ <working directory>

This will obviously require us to use more storage blocks, but it seems to decrease the blast radius.

It seems to me like option 2 is more optimal, but are there any disadvantages or limitations we should be aware of in using multiple storage blocks?
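For illustration, option 2 (one storage block per flow) could be scripted like this. The flow names, entry point, and block naming scheme are hypothetical, not the thread's actual setup, and the prefect calls are only echoed (a dry run) so the sketch runs without a Prefect installation:

```shell
# Dry-run sketch: build one deployment per flow, each pointed at its own
# storage block. In real use you would cd into each flow's directory and
# drop the leading "echo".
for flow in flow_a flow_b; do
  # cd "$flow"   # run from the flow's root directory in real use
  echo prefect deployment build ./flow.py:entry_point \
    -n "${flow}_deployment" \
    --storage-block "s3/block-${flow}"
done
```

Each iteration uploads only that flow's directory to its own block, which is what keeps the blast radius small.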
2. Docker storage build - i.e. flow code baked into the container for all flows
prefect deployment build --path flows/flow1.py:one --path flows/flow2.py:two ...
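As a rough illustration of the Docker storage option above ("flow code baked into the container for all flows"), a minimal Dockerfile sketch might look like the following; the base image tag, paths, and flow layout are assumptions, not the thread's actual configuration:

```dockerfile
# Hypothetical sketch: bake every flow folder into the image at build time,
# so deployments need no separate remote storage block.
FROM prefecthq/prefect:2-python3.10

# Copy the parent directory containing all flow folders into the image
COPY flows/ /opt/prefect/flows/

WORKDIR /opt/prefect/flows
```

With this approach, updating flow code means rebuilding and re-pushing the image rather than re-uploading files to a bucket.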
Ilya Galperin — 08/12/2022, 1:08 AM
So the --path argument would only save that particular path to storage, even if the command is run from a parent directory? In other words, would this allow us to selectively save only specific flows that all live in different folders under a common parent folder, using a single storage block, without overwriting all the other flows?
John Kang — 08/12/2022, 2:52 PM
Ilya Galperin — 08/12/2022, 3:59 PM
John Kang — 08/12/2022, 4:00 PM