Hey! Curious if anyone else has run into this. I have my Prefect 2 flows running on ECS tasks with S3 storage. We got a security warning that Amazon ECS containers should have read-only access to their root filesystems, which we need to address for our security protocols. I added
"readonlyRootFilesystem": true,
to the container definition, but then the flow fails to load. Looks like the error happens when it tries to make a directory and can't, which makes sense:
File "/usr/local/lib/python3.10/os.py", line 225, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/opt/prefect/./<flow_name>'
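For context, this is roughly where that flag sits in my container definition (the container name and image here are just placeholders, not my real values):

{
  "containerDefinitions": [
    {
      "name": "prefect-flow",
      "image": "<account>.dkr.ecr.<region>.amazonaws.com/prefect-flows:latest",
      "readonlyRootFilesystem": true
    }
  ]
}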
Claire Herdeman
07/11/2023, 5:39 PM
Happy to provide more context, but basically: is there a way to load the flows without writing to the root filesystem?
Christopher Boyd
07/11/2023, 6:17 PM
If you’re pulling it in from remote storage, I’d expect not
Christopher Boyd
07/11/2023, 6:18 PM
If you bake your flow into the image, then it being read-only shouldn’t matter
Christopher Boyd
07/11/2023, 6:18 PM
You could attach another volume + volume mount and load it to that?
Claire Herdeman
07/11/2023, 7:38 PM
Yup, it looks like that's the route! If anyone else deals with this, I'm happy to share more code, but the short version: add a "volumes" section to the ECS task definition, then a "mountPoints" section to the container definition with a "containerPath" outside the root filesystem (full task-definition sketch below). I called mine "/prefect2/storage", and created/referenced it in the Docker image:
# Create the mount point, declare it as a volume, and make it the
# working directory so pulled flow code lands on the mounted volume
# instead of the read-only root filesystem
RUN mkdir -p /prefect2/storage
VOLUME ["/prefect2/storage"]
WORKDIR /prefect2/storage
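And roughly what the task-definition side looks like (volume name, container name, and path are placeholders; a volume with only a "name" gave us plain ephemeral task storage, but EFS etc. would slot into the same place):

{
  "volumes": [
    { "name": "prefect-storage" }
  ],
  "containerDefinitions": [
    {
      "name": "prefect-flow",
      "readonlyRootFilesystem": true,
      "mountPoints": [
        {
          "sourceVolume": "prefect-storage",
          "containerPath": "/prefect2/storage",
          "readOnly": false
        }
      ]
    }
  ]
}

With WORKDIR pointing at the mount, the flow code pulled from S3 gets written to the writable volume, so the read-only root filesystem is no longer a problem.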