# prefect-community
Hi folks! I'm trying to determine best practices/idioms for deploying and executing code shared between Flows without having everything in a single file. I'm using ECS for running my flows, and everything works great with S3/CloudPickle serialization as long as the code is dead simple (i.e. hello-world), but everything breaks down if I reference another file in my code. I have yet to find an example that utilizes more than one file to define a Flow, but surely there must be a way to structure code more sanely for large, complicated Flows. The various
classes seem to be able to reference only a single file, not entire modules/directories. Are there any examples of a somewhat complicated suite of flows with code shared between them?
Hi @Dane Gardner when you have multiple pieces of your flow in separate files then those files need to be available when your flow goes to execute. There are a couple of ways you could do this depending on how you are packaging/deploying your flow.

If you are running using the Local agent you can set a LocalRun on your flow with a working directory. If that working directory has all of your necessary imports on it then it will be able to import the multiple files.

If you are using some containerized execution platform we recommend you package your dependencies inside of a flow’s Docker storage. This storage option lets you add files to the image at build time so they are available when your flow runs. If you don’t use Docker storage and are still using containers to execute your flow then you should build a base image that contains your dependencies. Many of the container-based run configs allow for setting an image to run your flow in.

I’m sure users in the community would have more concrete examples of how they set up their flow projects/repositories and I encourage them to jump in 🙂
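One way to get every module of a shared package into the image is to build a mapping of local paths to in-image paths and hand it to Docker storage's `files` argument. The sketch below is illustrative, assuming a local `mylib/` package; `package_files` is a hypothetical helper, not part of Prefect.

```python
# Sketch: build a {local_path: path_in_image} mapping for a shared package
# so every .py file ends up in the flow's image. `package_files` is a
# hypothetical helper; only the `files`/`env_vars` arguments in the
# commented-out Docker storage call are Prefect's.
from pathlib import Path


def package_files(pkg_dir: str, image_root: str = "/opt/prefect") -> dict:
    """Map every .py file under pkg_dir to a matching path inside the image."""
    mapping = {}
    for path in sorted(Path(pkg_dir).rglob("*.py")):
        mapping[str(path.resolve())] = f"{image_root}/{path.as_posix()}"
    return mapping


# The mapping can then be passed to Docker storage at build time, e.g.:
# flow.storage = Docker(
#     files=package_files("mylib"),
#     env_vars={"PYTHONPATH": "/opt/prefect"},  # make the package importable
# )
```

Setting `PYTHONPATH` to the image root means the flow can `import mylib` at run time without any install step.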
Ahh, I missed that Docker storage allows more than just each explicitly specified file
Thank you!
@josh Is Docker storage compatible with ECS/Fargate Flow runs? I'm getting the following error, and can't figure out if I'm doing something wrong, or if it's just not possible.
```
An error occurred (InvalidParameterException) when calling the RunTask operation: Task definition does not support launch_type FARGATE.
```
Nevermind, I believe this is an ECS Task error, not a Prefect Task error
@josh I'm not sure if this is a bug. I seem to be the only one having this issue, so I'm assuming PEBCAK. When the ECS Agent creates the ECS task definition for use with Fargate, it must include
in the definition if a private ECR repository is used; otherwise it gives a massively cryptic error (above). There does not seem to be a way to specify this manually in the ECSRun configuration or in the ECSAgent. I'm new to Prefect and ECS, so any direction here would be appreciated.
I did see in this video that
is used in the Agent initialization for 0.14.x, but it is commented out. Including that line causes errors in 0.14.2 for me:


Yeah, I think this is a bug. Specifying a task_definition in the RunConfig completely overwrites the Agent's settings for the container image. This seems to work, but I have to specify the taskRoleArn on the RunConfig instead of in the Agent constructor (!?):
```python
task_definition = yaml.safe_load(read_bytes_from_path(DEFAULT_TASK_DEFINITION_PATH))
task_definition.update({
    "executionRoleArn": "arn:aws:iam::[...]:role/[...]",
    "taskRoleArn": "arn:aws:iam::[...]:role/[...]",
})
```
I would have thought the behavior would be to use the Agent's task definition as the base and merge the RunConfig in on top of it, not completely ignore the Agent's config, which needs to dynamically specify the Docker image and tag
Anything set on the RunConfig will override the defaults set on the agent. This is by design: you can set defaults on the agent, but setting a value on the run config will always override those defaults.
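The precedence josh describes can be illustrated with a plain dict merge: fields set on the run config win, and anything left unset falls back to the agent's default. This is just a sketch of the semantics, not Prefect's actual merge code, and the field names/values are made up for illustration.

```python
# Illustration of run-config-over-agent precedence (not Prefect's code):
# run config values win; unset (None) fields fall back to agent defaults.
def effective_config(agent_defaults: dict, run_config: dict) -> dict:
    merged = dict(agent_defaults)  # start from the agent's defaults
    # only explicitly set run-config fields override
    merged.update({k: v for k, v in run_config.items() if v is not None})
    return merged


agent = {"image": "prefecthq/prefect:0.14.2", "taskRoleArn": "arn:agent-role"}
run = {"taskRoleArn": "arn:flow-role", "image": None}  # image left unset

merged = effective_config(agent, run)
# merged["taskRoleArn"] comes from the run config; merged["image"] from the agent
```

This also shows why a full task_definition on the RunConfig clobbers everything: it is one field, so overriding it replaces the whole definition rather than merging key by key.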
@Dane Gardner I've been running into this problem the last couple days as well. I'm very happy I came across this post because it wasn't obvious to me what the issue was given the error message. I ended up slightly modifying the default task definition to
```yaml
networkMode: awsvpc
cpu: 1024
memory: 2048
containerDefinitions:
  - name: flow
taskRoleArn: arn:aws:iam::12345678910:role/prefect-ecs
executionRoleArn: arn:aws:iam::12345678910:role/ecsTaskExecutionRole
```
and starting the agent with
```shell
prefect agent ecs start --cluster <cluster-arn> --token <token> --label uat --task-definition <path/to/file.yaml>
```
and it worked great without having to set anything on the run config of each flow.
@Carter Kwon @josh Please help me with this further if you can. I am getting the error:
```
ERROR - agent | Failed to infer default networkConfiguration, please explicitly configure using
ValueError: Failed to infer default networkConfiguration, please explicitly configure using
```