Jean-Michel Provencher
03/21/2024, 7:00 PM
flow.from_source(
    source=S3Bucket.load("a bucket block"),
    entrypoint="./a_path"
entrypoint="./a_path"
).deploy(...)
It tries to download the entire contents of the S3Bucket block before actually deploying the flow. Is there any way to work around that? I'm using a bucket containing hundreds of versions of hundreds of flows, and downloading the entire S3 bucket every time is not scalable. Also, creating one S3Bucket block per flow really seems like bad practice.
This is what's blocking me from switching from agents to work pools...
In my mind, I should be able to deploy a flow by targeting an S3 location, without having to download the flow from remote storage first. The previous behaviour of build_from_flow with a storage configuration seemed like a much better pattern than this one.
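For context, the old pattern I'm referring to looked roughly like this (a sketch from memory against the Prefect 2.x API; the flow name, deployment name, and path here are illustrative, and parameters like skip_upload may differ slightly between versions):

```python
from prefect import flow
from prefect.deployments import Deployment
from prefect_aws.s3 import S3Bucket


@flow
def my_flow():
    ...


# One shared storage block for the whole bucket, not one per flow.
storage = S3Bucket.load("a bucket block")

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-deployment",
    storage=storage,
    path="flows/my_flow",  # S3 prefix where the code already lives
    skip_upload=True,      # just point at existing storage, no download/upload
)
deployment.apply()
```

The key point is that this registered a deployment against an S3 location without pulling the whole bucket down at deploy time.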