Chris Arderne
12/22/2022, 7:56 AM
(more details in thread!)
1. 🏗️ Structuring project: We're using v2 with a GCS Storage block, a KubernetesJob block, and a flow Deployment (all three defined in Python files; see the sketch after this list). Because we're using GCS storage, the flow files are not pip-installed, so we can only use relative imports in the flow, like `from .utils import foo`, which makes local development and debugging a bit more error-prone. We will probably shift to just using the Docker image's local files, but haven't experimented with this in Prefect v2 yet. Are there any best practices around this? (Project structure in thread.)
2. 🏃‍♂️ Running flows: Prefect v1 had a `prefect run ...` command in the CLI, which made it possible to run a flow locally. Prefect v2 doesn't seem to have this? We can't add an `if __name__ == "__main__"` block to the flow file, because its relative imports break when the file is run directly. As a workaround I've added a `run.py` in a parent directory that imports and runs the flow (see the sketch below the project tree), but this isn't very ergonomic, and in either case, if I want to pass parameters I have to build my own CLI for it... Have I missed something in the CLI, or is there a plan to add this? Or is there a better pattern I should be using instead?
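For concreteness, here's roughly what the Python-defined blocks look like (a minimal sketch using Prefect v2's built-in GCS and KubernetesJob blocks; the block names, bucket path, and image are placeholders for our real values):

```python
# src/blocks.py (sketch; names and paths are placeholders)
from prefect.filesystems import GCS
from prefect.infrastructure import KubernetesJob

# Storage block: the deployment build uploads the flow files here and the
# agent pulls them down at run time, which is why they aren't pip-installed
# and the flow has to use relative imports.
gcs = GCS(bucket_path="my-bucket/prefect-flows")  # placeholder bucket
gcs.save("gcs-flows", overwrite=True)

# Infrastructure block: each flow run executes as a Kubernetes Job using
# the image built from the Dockerfile.
k8s = KubernetesJob(image="gcr.io/my-project/flows:latest")  # placeholder image
k8s.save("k8s-flows", overwrite=True)
```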
```
├── Dockerfile
├── requirements.txt
└── src
    ├── blocks.py
    ├── deployments.py
    ├── flows
    │   ├── __init__.py
    │   ├── flow.py
    │   └── utils
    │       └── __init__.py
    └── run.py
```
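The `run.py` workaround from question 2 looks roughly like this (a sketch: the argparse wrapper is my own, and `my_flow` and the `--name` parameter are placeholder names):

```python
# src/run.py (sketch; my_flow and --name are placeholders)
import argparse

# An absolute import works here because run.py sits above the flows
# package, whereas running flows/flow.py directly breaks its relative
# imports.
from flows.flow import my_flow

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the flow locally")
    parser.add_argument("--name", default="world")
    args = parser.parse_args()

    # In Prefect v2 a flow is just a decorated function, so calling it
    # executes a flow run locally.
    my_flow(name=args.name)
```

So `python src/run.py --name foo` works, but every parameter has to be wired up by hand.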
On creating deployments: I don't love `prefect deployment ...`, as it means configuring a bunch of stuff in CLI arguments and seems like a messier CI/CD story; what am I missing there? Version-controlling the deployment YAMLs and editing them directly doesn't seem like an option, as they'll immediately get out of date with the blocks. What's the best pattern here? (What I'm doing just seems weird and inconsistent, doing the blocks via the CLI and the deployments by running a file, and it has implications for the kinds of imports each file can do.) Currently:
```
prefect block register -f src/blocks.py
python src/deployments.py
```
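And `deployments.py` does something like the following (again a sketch: it assumes the block names from the blocks.py sketch above, and `my_flow` is a placeholder):

```python
# src/deployments.py (sketch; block and flow names are placeholders)
from prefect.deployments import Deployment
from prefect.filesystems import GCS
from prefect.infrastructure import KubernetesJob

from flows.flow import my_flow

# Build the deployment in Python rather than via `prefect deployment
# build ...`, loading the saved blocks so the deployment can't drift out
# of sync with them the way a hand-edited YAML would.
deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-flow-k8s",  # placeholder deployment name
    storage=GCS.load("gcs-flows"),
    infrastructure=KubernetesJob.load("k8s-flows"),
)

if __name__ == "__main__":
    deployment.apply()
```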