# prefect-community
j
Currently, when I deploy multiple flows, I basically loop over them and, for each, build and apply the flow. I also found it in one of @Anna Geller’s posts/examples (https://github.com/anna-geller/dataflow-ops/blob/main/.github/workflows/main.yaml#L35). [I’ve put my version in the thread] This works fine, but I was wondering if this is the preferred method? My concern is that I basically duplicate the storage block (codebase) for each flow on the bucket. Is it possible to have a shared storage block? It seems a waste of deployment time to build the storage repeatedly. I understand that this approach makes sense if you have different versions or don’t deploy all flows at the same time.
```yaml
release:
  stage: release

  parallel:
    matrix:
      - FLOW_PATH: src/flows/a.py
        STORAGE_PATH: a
      - FLOW_PATH: src/flows/b.py
        STORAGE_PATH: b

  interruptible: true

  script:
    - python -m venv .venv
    - source .venv/bin/activate
    - pip install poetry
    - poetry install
    - prefect deployment build "$FLOW_PATH:main" --name "Production" --infra-block "kubernetes-job/default" --storage-block "gcs/flow-store/$STORAGE_PATH" --work-queue "kubernetes"
    - cat main-deployment.yaml
    - prefect deployment apply main-deployment.yaml
```
a
Very good question. Not sure whether you noticed, but the way I implemented it in this demo is that I only upload the code once using this maintenance flow; everything else uses the skip-upload flag to avoid any duplicated uploads. When you leverage the GitHub block instead of S3, this gets even easier, since that way we always skip the upload by default.
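For concreteness, here is a sketch of how the job above changes under that approach. The `--skip-upload` flag on `prefect deployment build` is real; the idea of a separate one-time upload step (standing in for the maintenance flow) and the shared `gcs/flow-store` block with no per-flow suffix are assumptions adapted from this thread:

```yaml
  script:
    # Assumes a separate one-time job (or the maintenance flow mentioned
    # above) has already uploaded the codebase to the shared storage block.
    # Each flow then builds against the SAME block ("gcs/flow-store", no
    # per-flow $STORAGE_PATH suffix) and skips the upload entirely.
    - prefect deployment build "$FLOW_PATH:main" --name "Production" --infra-block "kubernetes-job/default" --storage-block "gcs/flow-store" --work-queue "kubernetes" --skip-upload
    - prefect deployment apply main-deployment.yaml
```

With a GitHub storage block the upload step disappears altogether, since the block just points at the repository and nothing needs to be pushed to a bucket.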
j
Oh nice, thanks for the reply! I just quickly skimmed through it, so I missed that 🙊 But that is great, then I can just do that 🙂
Random: I also like the idea of the healthcheck flow. I’m going to add an additional step: a push to StatusCake to alert me in case none of the flows are running 🙂
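For anyone wiring this up: StatusCake push monitoring alerts when its push URL has *not* been hit within the configured period, so a scheduled flow that pings it gives exactly the "none of the flows are running" alert described above. A minimal sketch, assuming Prefect 2.x; the push URL is a placeholder for the per-check URL StatusCake generates:

```python
import httpx
from prefect import flow

# Placeholder: StatusCake generates a per-check push URL for push monitoring.
PUSH_URL = "https://push.statuscake.com/?PK=<key>&TestID=<id>&time=0"


@flow(name="healthcheck")
def healthcheck() -> None:
    # StatusCake fires an alert when this URL is NOT hit within the
    # configured period, so if scheduled runs stop, the alert triggers.
    httpx.get(PUSH_URL, timeout=10).raise_for_status()


if __name__ == "__main__":
    healthcheck()
```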