# ask-community
t
what's the proper way to make a Kubernetes agent (and a KubernetesRun) use S3 storage for the flows? we tried the k8s service account but it seems insufficient, and we noticed a warning in the docs about that but couldn't really decipher what it would mean for us, since we don't use these methods to define permissions:
a
the proper way would be IAM roles for service accounts if your cluster lives in AWS EKS - here is an example command you could use to attach S3 permissions:
eksctl create iamserviceaccount --cluster=<clusterName> --name=s3-read-only --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
eksctl docs explain it best here
and if you’re looking for the syntax, here is an example
t
our devops said that they don't use the eksctl tool and that they already attached a policy with permissions for S3 (and the particular bucket needed), and for some reason we're still getting AccessDenied 😞
@Anna Geller there is a service account attached (supposedly with S3 permissions, according to our devops) yet we’re still getting AccessDenied, any ideas? 😞 The AccessDenied happens when we run the flow, we have no problem registering it using S3 as storage
a
the issue is that you need IAM role for service account, not just a service account. Did you try the solution I shared using eksctl? How is your cluster deployed - do you use the managed version from AWS EKS? what is your cluster setup?
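One quick way for the team to confirm whether IAM roles for service accounts are actually in effect on a given pod is to check for the credential environment variables the EKS pod identity webhook injects. A minimal stdlib-only sketch (assuming `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE`, the variables EKS injects when the service account is annotated with a role ARN):

```python
import os


def irsa_configured(env=os.environ):
    """Return True if IAM-roles-for-service-accounts credentials look injected.

    When a pod runs under a service account annotated with an IAM role,
    the EKS pod identity webhook sets both of these variables; if either
    is missing, the pod will fall back to node/instance credentials.
    """
    return bool(env.get("AWS_ROLE_ARN")) and bool(env.get("AWS_WEB_IDENTITY_TOKEN_FILE"))


if __name__ == "__main__":
    print("IRSA configured:", irsa_configured())
```

Running this inside the flow-run pod (not the agent pod) would show whether the job pods are the ones missing the role.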
t
ya there is one (it happens to look very similar to AmazonS3ReadOnlyAccess, except that it also restricts access to just the bucket where the flows are stored)
i’m no devops expert but i know they use Helm, and ya it’s the managed version i believe (is there a non-managed one from AWS?)
a
Yes there is, you could deploy it yourself using EC2 or use customer managed nodegroups. Honestly it's hard to help here given that they use entirely their own setup and claim that the attached permissions should work. Can you talk to your DevOps folks and screenshare your issue showing that in fact you can't access S3 using their solution? Maybe they could then try the eksctl option (which is the recommended option from AWS docs).
t
hmmm, ok i doubt we use any of these, i believe we use the managed one (i’ll confirm with our devops when they become available)
they claim eksctl is simply not how we work with our setup since (from what i understand) it's a manual command - and for them everything is defined using Helm recipes and configuration on top of YAML, etc. (e.g. even if they used it, it would work now but would not persist beyond the current life of the cluster, if it has to be recreated, duplicated, or anything similar). again, this is all just my modest understanding of their world 😆
a
Honestly whether you use eksctl or not, the setup is always the same, eksctl only makes it more convenient
Ask them about IAM roles for service accounts and tell them the current setup is not working :)
t
it’s not me who can’t access S3 - it’s them - and it appears that the created Jobs (i.e. the ones created by the KubernetesRun) are the ones lacking the permissions. it looks like (from what i understand) the agent has it properly
when we use an S3 storage for flows, is it fetched on the agent or on the job pods? (i’m assuming the latter?)
yes, they know - it’s them who are trying to set it up, i’m just a middle-man for the issue here. They are working on making it work, i’m trying to help by understanding from you what we might be missing.
a
it's fetched on the flow-run job pod, that's why the permissions must be set on the run config, e.g. via IAM roles for service accounts
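For context, in Prefect 1.x the service account that needs the S3 permissions can be set on the run config itself. A hedged sketch, assuming an IRSA-annotated service account named `s3-read-only` (the flow, task, and bucket names here are hypothetical):

```python
from prefect import Flow, task
from prefect.run_configs import KubernetesRun
from prefect.storage import S3


@task
def say_hello():
    print("hello from the flow-run pod")


with Flow("s3-stored-flow") as flow:
    say_hello()

# The flow is pulled from S3 inside the job pod that KubernetesRun creates,
# so that pod's service account is the one that needs the S3 permissions -
# setting it on the agent alone is not enough.
flow.storage = S3(bucket="my-flow-bucket")  # hypothetical bucket name
flow.run_config = KubernetesRun(service_account_name="s3-read-only")
```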
t
wdym when you say on the run_config? in python? i thought the service account is defined using the prefect agent cli command?
(btw, i didn’t clarify earlier but we have S3 access available in other services on EKS, that’s why we’re kinda baffled by this)
ok, scratch that - misconfiguration on our end - sorry for the confusion! (it works)
a
thanks for the update! 🙌