John Kang
08/11/2022, 6:12 PM

Chandrashekar Althati
08/11/2022, 6:36 PM

Jon Ruhnke
08/11/2022, 7:11 PM

Mars
08/11/2022, 7:19 PM
There's EXTRA_PIP_PACKAGES for the Docker executor. Is there something similar for the k8s job executor?
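Likely yes, assuming Prefect 2's KubernetesJob infrastructure block: the official Prefect images install whatever EXTRA_PIP_PACKAGES names at container startup, so passing it through the block's env should have the same effect. A minimal sketch (block name and packages are hypothetical):

from prefect.infrastructure import KubernetesJob

# EXTRA_PIP_PACKAGES is read by the entrypoint of the official
# Prefect images when the job container starts
k8s_job = KubernetesJob(env={"EXTRA_PIP_PACKAGES": "pandas s3fs"})
k8s_job.save("my-k8s-job", overwrite=True)

Yardena Meymann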
08/11/2022, 7:53 PM

Slackbot
08/11/2022, 8:19 PM

Slackbot
08/11/2022, 8:28 PM

Jeff LaPorte
08/11/2022, 8:30 PM

Joe Goldbeck
08/11/2022, 8:48 PM
Is there a prefect register --dry-run? I would like to add a CI step that ensures we won't fail after a merge when we try to register flows in CD.
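Not that I know of. One possible CI gate is a script that simply imports every flow module, so the import and serialization errors that prefect register would hit surface before merge. A sketch, assuming flows live under a flows/ directory:

import importlib.util
import pathlib
import sys

# Hypothetical layout: all flow definitions live under flows/
failed = False
for path in pathlib.Path("flows").rglob("*.py"):
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)  # surfaces import errors early
    except Exception as exc:
        print(f"{path}: {exc}")
        failed = True
sys.exit(1 if failed else 0)

Neil Natarajan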
08/11/2022, 8:59 PM

Edmondo Porcu
08/11/2022, 11:58 PM

Ilya Galperin
08/12/2022, 12:37 AM
1. Running deployment build and apply commands from a parent directory containing multiple flows. This way we use only one storage block, but if multiple developers are working in the same shared bucket, like in a staging environment, the blast radius is bigger. For example, if a developer is working on only a single flow, there is a risk of overwriting someone else's in-development code if you accidentally commit code from another flow/folder.
deployment build flow_a/flow.py:entry_point -n flow_a_deployment --storage-block s3/universal-storage-block
flows/ <working directory>
  flow_a/
  flow_b/
2. Every flow gets its own storage block, and we run deployment build and apply commands from that flow's root directory. This will obviously require more storage blocks, but it seems to decrease the blast radius.
deployment build ./flow.py:entry_point -n flow_a_deployment --storage-block s3/individual-storage-block
flow_a/ <working directory>
It seems to me like option 2 is better, but are there any disadvantages or limitations we should be aware of in using multiple storage blocks?
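For what it's worth, option 2 can also be scripted so the per-flow blocks stay cheap to maintain. A sketch, assuming Prefect 2's S3 filesystem block (bucket and block names are hypothetical):

from prefect.filesystems import S3

# One block per flow, each scoped to its own prefix in the shared bucket,
# so a build for flow_a can never overwrite flow_b's files
for flow_name in ["flow_a", "flow_b"]:
    S3(bucket_path=f"staging-bucket/{flow_name}").save(
        f"{flow_name}-storage", overwrite=True
    )

Dean Magee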
08/12/2022, 12:59 AM

Lennert Van de Velde
08/12/2022, 8:00 AM

Marcin Grzybowski
08/12/2022, 9:07 AM

Jonathan Mathews
08/12/2022, 12:14 PM

James Brady
08/12/2022, 12:50 PM
I'm running prefect deployment apply … after updating Python files, but I can see in the S3 bucket that the Python files aren't being updated to match what's on my local computer.

Tomas Knoetze
08/12/2022, 1:00 PM

Sean Malone
08/12/2022, 1:30 PM

Andrei Tulbure
08/12/2022, 1:50 PM

Andreas Nord
08/12/2022, 3:12 PMFlow could not be retrieved from deployment.
On Prefect 1 I would package all my python dependencies in an Docker image, as well as the flow code (using storage=Module()). What would be the equivalent for Prefect 2? I saw the option of putting the flow definitions on AWS but I don't really see the pointStéphan Taljaard
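One possible Prefect 2 equivalent, assuming you keep baking both dependencies and flow code into your own image: point a DockerContainer infrastructure block at that image (the image and block names here are hypothetical):

from prefect.infrastructure import DockerContainer

# Image built with both Python dependencies and flow code inside,
# playing the role that Docker storage + Module() played in Prefect 1
docker = DockerContainer(image="my-registry/my-flows:latest")
docker.save("my-flows-infra", overwrite=True)

Stéphan Taljaard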
08/12/2022, 4:08 PM

Philip MacMenamin
08/12/2022, 4:13 PM
Say I have a task task_name, which returns "some_value", and I do:
state = flow.run()
How do I get the return of task_name? (i.e. "some_value")
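For reference, the usual Prefect 1 pattern is to index the flow-run state's result dict by the task reference and read its .result. A self-contained sketch:

from prefect import Flow, task

@task
def task_name():
    return "some_value"

with Flow("example") as flow:
    ref = task_name()

state = flow.run()
# state.result maps each task to its end state; .result holds the return value
print(state.result[ref].result)  # "some_value"

Jimmy Le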
08/12/2022, 4:31 PM

Heather DeHaven
08/12/2022, 4:42 PM
from prefect import Flow, Parameter, unmapped
from prefect.tasks.prefect import create_flow_run

with Flow(name="my_parent_flow") as flow:
    name = Parameter('name')
    ...
    create_flow_run.map(
        flow_name=unmapped('child_flow'),
        ...
        run_name=unmapped("<use Parameter 'name' value> child run")
    )
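One way this might be done in Prefect 1, since a Parameter only has a value at runtime: build the run name inside a task and pass its output through (flow and task names from the snippet above; the helper task is hypothetical):

from prefect import Flow, Parameter, task
from prefect.tasks.prefect import create_flow_run

@task
def make_run_name(name):
    # Parameter values resolve at runtime, so format the string in a task
    return f"{name} child run"

with Flow("my_parent_flow") as flow:
    name = Parameter("name")
    create_flow_run(flow_name="child_flow", run_name=make_run_name(name))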
Alex Tam
08/12/2022, 5:02 PM

Sean Malone
08/12/2022, 5:05 PM

Deepak Pilligundla
08/12/2022, 6:23 PM
Failed to load and execute Flow's environment: FlowStorageError('An error occurred while unpickling the flow:\n ModuleNotFoundError("No module named \'redshift_connector\'")\nThis may be due to a missing Python module in your current environment. Please ensure you have all required flow dependencies installed.')
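If this is Prefect 1 with Docker storage, one possible fix is to declare the missing package so it gets installed into the flow's image at build time (flow name below is hypothetical):

from prefect import Flow
from prefect.storage import Docker

with Flow("redshift-flow") as flow:
    ...

# Bake the missing module into the image the flow runs in
flow.storage = Docker(python_dependencies=["redshift_connector"])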
Isaac Kargar
08/12/2022, 6:38 PM

Panos V.
08/12/2022, 6:41 PM