# ask-community
c
Just seeking some clarification: I was digging through old questions and came across this solution: [link: https://prefect-community.slack.com/archives/CL09KU1K7/p1615218708404000?thread_ts=1615022331.342300&cid=CL09KU1K7 ], and it says:
# You'd set a Secret in Prefect Cloud containing your credentials as a JSON
# dictionary of the following form:
{"ACCESS_KEY": "your aws_access_key_id here",
 "SECRET_ACCESS_KEY": "your aws_secret_access_key here"}

# For now the secret name is hardcoded, so you'd need to name the secret "AWS_CREDENTIALS".

# You'd then attach the secret to your `S3` storage as follows:
from prefect.storage import S3

flow.storage = S3(..., secrets=["AWS_CREDENTIALS"])
Assuming AWS_CREDENTIALS has been declared on the Prefect Cloud side, is this how we pass secrets into both storage=storage_type(secrets=['']) and KubernetesRun(image_pull_secrets=['']) for now? In other words, something like the sketch below?
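A minimal sketch, assuming Prefect 1.x APIs; the flow name, bucket, image URI, and "regcred" pull-secret name are placeholders, not values from this thread:

# Sketch assuming Prefect 1.x; all names below are placeholders.
from prefect import Flow
from prefect.run_configs import KubernetesRun
from prefect.storage import S3

with Flow("example-flow") as flow:
    pass  # tasks would go here

# "AWS_CREDENTIALS" must already exist as a Secret in Prefect Cloud.
flow.storage = S3(bucket="my-flow-bucket", secrets=["AWS_CREDENTIALS"])

# "regcred" is assumed to be a Kubernetes imagePullSecret in the agent's namespace.
flow.run_config = KubernetesRun(
    image="12345.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    image_pull_secrets=["regcred"],
)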
k
Hey @Charles Liu, yes, you can pass that value via Storage or run_config to populate your AWS_CREDENTIALS at runtime against your Cloud backend; more information on this can be found in the documentation.
c
It seems to work! (Kinda.) Two other flows ran fine after converting to S3 Storage/KubernetesRun. Unfortunately, one flow fails with "No heartbeat detected from the remote task; marking the run as failed.", even though the task it fails on runs successfully under a Docker Storage/Kubernetes configuration. Any ideas?
k
Hmm, usually if heartbeats aren't detected it indicates that the task is resource-starved, so I'd start debugging by checking your infrastructure and potentially allocating more resources.
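If it is resource starvation, one option is to raise the job's resource requests/limits on the run config. A sketch assuming Prefect 1.x KubernetesRun kwargs; the values are illustrative, and `flow` is the Flow object from before:

from prefect.run_configs import KubernetesRun

# Illustrative values only; tune them to the task's actual footprint.
flow.run_config = KubernetesRun(
    image="12345.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    image_pull_secrets=["regcred"],
    cpu_request="1",
    memory_request="2Gi",
    memory_limit="4Gi",
)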
c
Okay, that could mean our EC2 isn't scaling properly when attempting to run it in an S3 Storage/Kube configuration. Thanks! I'll take a look.