# ask-community
m
I'm trying to switch to S3 storage (from Docker, which was working). The flow shows up in cloud, but my local agent doesn't log anything about it. If I run a docker agent, it sees it, and complains about the storage type. I've reread all the docs on storage and agent types, and I don't know what I'm doing wrong.
z
Hi @Michael Warnock, could you share the error message you're seeing on your Docker agent?
A couple of possible pitfalls to note:
• Local agents cannot run flows that use Docker storage.
• Local agents automatically include their hostname as a label on startup. If your flow does not include the same hostname label as the agent (the flow would only have it if you were using Local storage, where Prefect adds it automatically), the agent will not try to run your flow. You can disable labels or manually add labels with CLI flags: https://docs.prefect.io/orchestration/agents/local.html#labels
m
Oh, I was wrong: it's not the storage it complains about; it's the run_config. If I use a DockerRun with S3 storage, it runs in a default container that doesn't work (old Python, for starters). I'm trying to run it in whatever makes sense for S3 storage and won't create a docker-in-docker situation when I deploy it to ECS for production use.
[2021-07-31 15:32:05,567] ERROR - agent | Flow run 04f3d831-52dd-455c-9823-75c777559dc3 has a `run_config` of type `LocalRun`, only `DockerRun` is supported
Your second 'pitfall' explains what's going on, but what kind of agent/run_config should I use for S3 storage and DaskExecutor (with Coiled) execution, running the agent locally for testing and on ECS for production?
Thanks for responding on a Saturday, btw! I have a time-sensitive project starting Monday and I want to have my current one (prefect PoC) tied up neatly.
Reading your response again: should I just run a local agent and use labels to make it work, i.e. just disable the hostname label? Trying it now.
Yep- that's the ticket! Thanks!!
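For anyone following along: the fix works because the agent compares its own labels against the flow run's labels before picking it up. A toy model of that matching (names are made up for illustration; the exact matching semantics have varied across Prefect versions, so treat this as a sketch of the behavior described in this thread, not Prefect's code):

```python
def agent_picks_up(agent_labels: set, flow_labels: set) -> bool:
    # Toy model (not Prefect's implementation): the agent runs a flow
    # only when the label sets line up. Real Prefect matching rules
    # are version-dependent.
    return agent_labels == flow_labels

# A local agent auto-labels itself with its hostname. A flow using S3
# storage has no hostname label, so it never gets picked up:
assert not agent_picks_up({"my-laptop"}, set())

# Disabling the hostname label leaves both sets empty, so it matches:
assert agent_picks_up(set(), set())
```

This is why either disabling the hostname label or adding the same explicit label to both the agent and the flow resolves the "agent doesn't log anything" symptom.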
k
I think it would be Docker Agent + DockerRun for local testing. The agent will spin up that container and then run the flow on top of it. When you move to ECS Agent + ECSRun, the agent will submit it as a task to ECS, so you don't get a container-in-container setup.
Ignore the above, because I think you're good now. I think I mistakenly told you a while back that a Docker agent with DaskExecutor on Coiled would lead to container-in-container. It's more like one computer sending instructions to the Coiled cluster.
m
Actually, I'm running into the executor-doesn't-serialize thing again; I guess I need to use `stored_as_script` - but I was hoping to have a generic storage definition I could share between flows, and with script storage I have to specify the flow script one way or another.
k
What is the error you see? Yeah, you would need to specify the location for all script-based storage.
m
It's not an error; it's just that the local agent is running the tasks instead of starting a Dask cluster. I guess I'll add that; it seems like something that could be inferred, so maybe I'll infer it myself.
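One way to infer the script location at registration time is with the `inspect` module. A minimal sketch, assuming you call it from the script that defines the flow (`infer_script_path` is a made-up helper, not a Prefect API, and as noted later in the thread, `__file__`-style tricks don't work when the agent re-executes the stored script):

```python
import inspect
import os

def infer_script_path(obj) -> str:
    # Hypothetical helper (not a Prefect API): find the file that
    # defines `obj` (a module, class, or function), e.g. to fill in a
    # script-based storage location when registering a flow.
    path = inspect.getsourcefile(obj)
    if path is None:
        raise RuntimeError(f"cannot infer a source file for {obj!r}")
    return os.path.abspath(path)

# Example: the os module itself is defined in os.py
print(infer_script_path(os))  # absolute path ending in "os.py"
```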
k
Ah ok I see what you mean.
m
`__file__` isn't defined when the agent loads the flow (stored as script), which is annoying in terms of inferring the script path, but after specifying it manually, everything is working as expected. Thanks Zach and Kevin!
👍 2