# prefect-community
m
With the latest release, are the deployments from the examples here portable to the new version? In particular the second one where the flow file is baked into the image?
👀 1
m
If I'm understanding correctly, you're asking the same thing I did here, just phrased in terms of how it was done in previous versions. I was told it should be possible somehow, but I can't figure out which storage block I should choose in that case
m
Sorry, I overlooked that one 😛
Thanks for the reference!
a
> where the flow file is baked into the image?
it got so much better! You no longer have to bake your flow, or even your custom modules, into the image, as they get packaged during the deployment build. The only things that must be baked in are custom pip dependencies - example: https://discourse.prefect.io/t/how-can-i-run-my-flow-in-a-docker-container/64#docker-deployment-using-a-custom-dockerfile-3
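To make it concrete - a minimal sketch (file and flow names made up by me), assuming the 2.x workflow above where `prefect deployment build` uploads the flow file to the storage block:
```
# flow.py - a minimal sketch (hypothetical names): this file is uploaded
# to the storage block when you run `prefect deployment build`, so it
# never needs to be copied into the Docker image.
from prefect import flow


@flow
def my_flow() -> None:
    print("flow code packaged at deployment-build time, not baked into the image")


if __name__ == "__main__":
    my_flow()
```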
m
I don't want to be rude, but that's not an answer to @Matthias' question. In the shared example, some kind of remote storage is used, while in the repo Matthias shared, the flow is stored in the Docker image, so the image is the only artifact stored remotely. That makes it unnecessary to retrieve the flow code from S3/GCS/...
a
I see, we will provide CI recipes for packaging Docker images too
it's just that for now we are still focused on getting the core functionality nailed to "prefection" and documented/explained; then we'll focus on more advanced use cases such as CI
m
Yeah, I get that. I'm just confused because here you said that "it's already possible", and I don't know whether "we'll provide recipes for that" means it's already possible but not documented, or that 'the recipe' has yet to be implemented. I actually managed to 'hack' the system a little by searching for a solution, I think 😅 I did the following:
1. made a `LocalFileSystem` block with path `/app` (see the sketch after this list)
2. built a deployment using the Prefect CLI
3. built a Docker image using the following Dockerfile:
```
FROM prefecthq/prefect:2.0.0-python3.8

# Copy & install requirements
COPY requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt

# Bake the deployment artifacts and the flow code into the image
COPY deployment.yaml /app/deployment.yaml
COPY etl-manifest.json /app/etl-manifest.json
COPY flow.py /app/flow.py
```
4. changed the following in `deployment.yaml`:
◦ `image`: the image name from (3)
◦ `storage`: point at the storage block created in (1)
This makes it possible to bake everything into one Docker image, though I did ignore the comments in the deployment.yaml file. Guess I'm kind of a bad boy 😎😝
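For reference, step (1) looks roughly like this - a sketch assuming the Prefect 2.x block API, saved under the name I use further down:
```
# Sketch of step (1), assuming the Prefect 2.x block API: create a
# LocalFileSystem block whose basepath matches the /app directory
# from the Dockerfile above, and save it as "docker-storage".
from prefect.filesystems import LocalFileSystem

storage = LocalFileSystem(basepath="/app")
storage.save("docker-storage", overwrite=True)
```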
💯 2
a
> whether it's already possible but not documented or if 'the recipe' has yet to be implemented.
Both, actually. I love that hack! 💙 And it's not a hack at all, IMO - a local file system block is supposed to point to your flow code living in the execution environment. If the execution environment is a Docker container, pointing to a local path in the container doesn't sound like a hack at all. Thanks so much @Mathijs Carlu, I'll include that in one of the recipes! 🙌
🎉 1
🙏 1
m
Awesome @Anna Geller, that's really nice to hear! If I can give a little more feedback: it would be even nicer if I could just pass `-sb local-file-system/docker-storage` (with `docker-storage` being the block from (1)). Right now, this raises a NameError, because it looks for the `/app` path on my machine, which does not exist. Maybe this exception could be skipped when `infrastructure` is `KubernetesJob` or `DockerContainer`. That way, it wouldn't be necessary to modify the whole storage section in the deployment.yaml file
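To illustrate - a sketch assuming the Prefect 2.x block API: the slug resolves to the saved block, whose basepath only exists inside the container:
```
# Sketch, assuming the Prefect 2.x block API: "local-file-system/docker-storage"
# (the slug passed via -sb) resolves to the saved LocalFileSystem block.
from prefect.filesystems import LocalFileSystem

storage = LocalFileSystem.load("docker-storage")
print(storage.basepath)  # "/app" - valid inside the container, absent on my machine
```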
👍 1
a
good point
j
This seems very helpful. Is there maybe a recipe for it already? :)
a
we could release a hacky recipe that would already work, but we are working on an actual feature that will address this way more elegantly
j
That makes sense, thanks! @Mathijs Carlu do you by any chance have your hacky project in a GitHub repo?