# prefect-kubernetes
j
I'm trying to pass k8s secrets into deployment flows. I'd rather not store my secrets in Prefect Cloud even if they do use Vault. The problem I'm encountering is that when I do use k8s secrets, they show up in the agent that is polling the queue, but the actual flow that gets kicked off by the deployment does not have access to those env variables. To reiterate, I don't want to store company secrets in a third-party vault if I can help it, which I believe rules out Secret blocks. Ideally my k8s secret env vars would show up in the flows and not just the agent doing the polling. Is there a way for the polling agent to transfer the env vars that it can see to the deployment flow it kicks off?
r
When you deploy your flow you can patch in the secrets you want k8s to pull into the pod that the job runs on - the flow run executes in its own Job pod, separate from the agent's pod, so the agent's env vars aren't inherited
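(Background: the KubernetesJob block's customizations field is a list of JSON 6902 patch operations that Prefect applies to the base job manifest before submitting it - each entry an op/path/value dict. A generic illustration, patching in a hypothetical resource limit:)
Copy code
customizations = [
    {
        "op": "add",
        "path": "/spec/template/spec/containers/0/resources",
        "value": {"limits": {"memory": "1Gi"}},
    }
]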
j
I'm probably making a mistake - I added the infra_overrides but don't see the env variable pulled into the deployment flow:
Copy code
infra_overrides={
    "customizations": [
        {
            "op": "add",
            "path": "/spec/template/spec/imagePullSecrets",
            "value": [],
        },
        {
            "op": "add",
            "path": "/spec/template/spec/imagePullSecrets/0",
            "value": {"name": "foo_password_env_var"},
        },
    ],
},
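(Side note: imagePullSecrets only supplies registry credentials for pulling the container image - it never injects env vars into the container. That kind of secret would be created with something like, hypothetically:)
Copy code
# hypothetical registry credentials, unrelated to app env vars
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=someuser \
  --docker-password=changeme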
r
how are you passing it into the deployment?
Copy code
infra_overrides={
    "customizations": [
        {
            "op": "add",
            "path": "/spec/template/spec/containers/0/envFrom/-",
            "value": {
                "secretRef": {
                    "name": secret.lower()
                }
            }
        }
    ]
}

from prefect.deployments import Deployment
from prefect.infrastructure import KubernetesJob

# Load an already created K8s Job block
k8sjob = KubernetesJob.load("devk8s")

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="s3-example",
    version=2,
    work_queue_name="aws",
    infrastructure=k8sjob,
    storage=storage,
    infra_overrides=infra_overrides,
)

deployment.apply()
j
hmm no luck still:
Copy code
deployment = Deployment.build_from_flow(
    name=deployment_name,
    flow=foo_flow,
    work_queue_name="kubernetes",
    parameters={
        "foo_param": "foo_value"
    },
    storage=gcs_block,
    infrastructure=KubernetesJob(
        namespace='foo_namespace',
        image='foo_image',
        image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT,
        infra_overrides={
            "customizations": [
                {
                    "op": "add",
                    "path": "/spec/template/spec/containers/0/envFrom/-",
                    "value": {
                        "secretRef": {
                            "name": "foo_password_env_var"
                        }
                    }
                }
            ]
        },
    )
)
deployment.apply()
I bashed into the flow pod and didn't find the env var
r
no - pass infra_overrides into the deployment, not the k8s job
Copy code
deployment = Deployment.build_from_flow(
    name=deployment_name,
    flow=foo_flow,
    work_queue_name="kubernetes",
    parameters={
        "foo_param": "foo_value"
    },
    storage=gcs_block,
    infrastructure=KubernetesJob(
        namespace='foo_namespace',
        image='foo_image',
        image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT
    ),
    infra_overrides={
        "customizations": [
            {
                "op": "add",
                "path": "/spec/template/spec/containers/0/envFrom/-",
                "value": {
                    "secretRef": {
                        "name": "foo_password_env_var"
                    }
                }
            }
        ]
    }
)
deployment.apply()
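(For that secretRef to resolve, a Secret with the same name has to exist in the job's namespace - and k8s Secret names must be lowercase DNS-1123 names, so underscores won't validate. A hypothetical equivalent with a legal name:)
Copy code
# names are placeholders; the namespace must match the KubernetesJob's
kubectl create secret generic foo-password-env-var \
  --from-literal=FOO_PASSWORD=changeme \
  -n foo-namespace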
j
hmmm getting back:
Copy code
jsonpointer.JsonPointerException: member 'envFrom' not found in {'env': [{'name': 'PREFECT_API_URL', 'value': 'https://api.prefect.cloud/api/accounts/foo/workspaces/foo'}, {'name': 'PREFECT_API_KEY', 'value': 'foo\n'}, {'name': 'PREFECT__FLOW_RUN_ID', 'value': 'foo'}], 'name': 'prefect-job', 'image': 'foo-image', 'imagePullPolicy': 'IfNotPresent', 'args': ['python', '-m', 'prefect.engine']}
r
ah, that's because my base k8s job has an empty envFrom
j
apologies, i'm a little out of my element on k8s. Should I be formatting my path differently or is it something else?
r
remove the /- from the end of the path
or use /0 - can't remember, and I'm away from my laptop
j
weird - I tried all the single-line variants, with the same error:
Copy code
"path": "/spec/template/spec/containers/0/envFrom/-",
"path": "/spec/template/spec/containers/0/envFrom/",
"path": "/spec/template/spec/containers/0/envFrom",
"path": "/spec/template/spec/containers/0/envFrom/0",
Copy code
18:17:08.802 | ERROR   | prefect.agent - Failed to submit flow run '0b48b2ff-4b5f-4146-973b-d817ce37d096' to infrastructure.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/jsonpointer.py", line 288, in walk
    return doc[part]
KeyError: 'envFrom'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/prefect/agent.py", line 484, in _submit_run_and_capture_errors
    result = await infrastructure.run(task_status=task_status)
  File "/usr/local/lib/python3.8/site-packages/prefect/infrastructure/kubernetes.py", line 300, in run
    manifest = self.build_job()
  File "/usr/local/lib/python3.8/site-packages/prefect/events/instrument.py", line 73, in inner
    raise exc
  File "/usr/local/lib/python3.8/site-packages/prefect/events/instrument.py", line 70, in inner
    return function(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/prefect/infrastructure/kubernetes.py", line 360, in build_job
    job_manifest = self.customizations.apply(job_manifest)
  File "/usr/local/lib/python3.8/site-packages/jsonpatch.py", line 669, in apply
    obj = operation.apply(obj)
  File "/usr/local/lib/python3.8/site-packages/jsonpatch.py", line 272, in apply
    subobj, part = self.pointer.to_last(obj)
  File "/usr/local/lib/python3.8/site-packages/jsonpointer.py", line 196, in to_last
    doc = self.walk(doc, part)
  File "/usr/local/lib/python3.8/site-packages/jsonpointer.py", line 291, in walk
    raise JsonPointerException("member '%s' not found in %s" % (part, doc))
jsonpointer.JsonPointerException: member 'envFrom' not found in {'env':
r
can you create a k8s block and use that as the base? I've no idea what manifest gets created the way you are doing it, so it's hard to write patch instructions
or remove the infra_overrides, add in output='deployment_build_output.yaml', and show me the output so we know how to patch it
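(i.e., something along these lines - assuming the output parameter of build_from_flow, which writes the rendered deployment YAML locally:)
Copy code
deployment = Deployment.build_from_flow(
    flow=foo_flow,
    name=deployment_name,
    infrastructure=k8sjob,
    output="deployment_build_output.yaml",  # dump the deployment spec for inspection
)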
my base job manifest looks like below - so I know I have those paths to write the patches
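(A base job manifest with an empty envFrom list to patch against - the approach described later in the thread - would look something like this sketch:)
Copy code
apiVersion: batch/v1
kind: Job
metadata:
  labels: {}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: prefect-job
        env: []
        envFrom: []  # present but empty, so add ops on /envFrom/- have an array to append to
(Loaded into the KubernetesJob block's job field, the original /spec/template/spec/containers/0/envFrom/- patch would then append cleanly.)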
j
if i have a gcs block is that where the output goes to?
r
are you running this locally?
j
yup
within a container
r
it will just create the file in your flow folder
just need to see what
Copy code
infrastructure=KubernetesJob(
    namespace='foo_namespace',
    image='foo_image',
    image_pull_policy=KubernetesImagePullPolicy.IF_NOT_PRESENT
),
generates
which allows us to address paths like "/spec/template/spec/containers/0"
j
Copy code
infra_overrides:
  customizations:
  - op: add
    path: /spec/template/spec/containers/0/envFrom
    value:
      secretRef:
        name: foo_pass
infrastructure:
  type: kubernetes-job
  env: {}
  labels: {}
  name: null
  command: null
  image: foo-image
  namespace: foo-namespace
  service_account_name: null
  image_pull_policy: IfNotPresent
  cluster_config: null
  job:
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels: {}
    spec:
      template:
        spec:
          parallelism: 1
          completions: 1
          restartPolicy: Never
          containers:
          - name: prefect-job
            env: []
  customizations: []
  job_watch_timeout_seconds: null
  pod_watch_timeout_seconds: 60
  stream_output: true
  finished_job_ttl: null
  block_type_slug: kubernetes-job
  _block_type_slug: kubernetes-job
so just using the infra:
Copy code
infrastructure=KubernetesJob(
        namespace='foo_namespace',
        image='foo_image',
        image_pull_policy = KubernetesImagePullPolicy.IF_NOT_PRESENT
    ),
gives me back:
Copy code
  job:
    apiVersion: batch/v1
    kind: Job
    metadata:
      labels: {}
    spec:
      template:
        spec:
          parallelism: 1
          completions: 1
          restartPolicy: Never
          containers:
          - name: prefect-job
            env: []
  customizations: []
  job_watch_timeout_seconds: null
  pod_watch_timeout_seconds: 60
  stream_output: true
  finished_job_ttl: null
  block_type_slug: kubernetes-job
  _block_type_slug: kubernetes-job
I think I'm missing the envFrom: [] and I believe that is the only way to pass secrets in, whereas env: [] is for hardcoded vars
not sure why the default KubernetesJob would be missing envFrom or how to insert it back in
r
yeah the patch should add it in
the weird error is
Copy code
'envFrom' not found in {'env': [{'name': 'PREFECT_API_URL', 'value': 'https://api.prefect.cloud/api/accounts/foo/workspaces/foo'}, {'name': 'PREFECT_API_KEY', 'value': 'foo\n'}, {'name': 'PREFECT__FLOW_RUN_ID', 'value': 'foo'}], 'name': 'prefect-job', 'image': 'foo-image', 'imagePullPolicy': 'IfNotPresent', 'args': ['python', '-m', 'prefect.engine']}
as nothing in that references the job manifest
j
that error was from when I was trying to run my deployment, not the base code you pasted
so the patch should add in envFrom: [] if it is missing?
r
yes
op="add"
path to add
value to add
so it should work
maybe try envFrom as an array in the patch
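(That suggestion lines up with JSON Patch (RFC 6902) semantics: an add op whose path ends in /- or /0 has to walk through an existing envFrom array first, which the default base manifest doesn't have - hence the jsonpointer error. Adding at the bare /envFrom path with a list value creates the member in one step; a sketch, reusing the hypothetical secret name from above:)
Copy code
{
    "op": "add",
    "path": "/spec/template/spec/containers/0/envFrom",
    "value": [
        {"secretRef": {"name": "foo-password-env-var"}}
    ]
}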
c
it should be a list, something like:
Copy code
{
  "env": [
    {
      "name": "SECRET_NAME",
      "valueFrom": {
        "secretKeyRef": {
          "name": "key_name",
          "key": "key_key"
        }
      }
    }
  ]
}
r
inside env?
c
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data - since it's at /spec/template/spec/containers/0/env/-, you can do something like:
Copy code
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "MY_SECRET",
      "valueFrom": {
        "secretKeyRef": {
          "name": "my-secret",
          "key": "my-secret-key"
        }
      }
    }
  }
]
envFrom comes from a configmap, doesn't it?
ah nvm - it's for when you want to reference all the k:v pairs in a secret
r
yeah, this is to pull from k8s/rancher secrets
it's easier just having a base manifest
c
Agreed, I just build a dynamic manifest based on CI/CD vars for everything
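(A minimal sketch of that kind of dynamic build, with a hypothetical CI variable:)
Copy code
import os

# hypothetical: render the secret name from a CI/CD variable at deploy time
secret_name = os.environ["CI_SECRET_NAME"].lower()

customizations = [
    {
        "op": "add",
        "path": "/spec/template/spec/containers/0/envFrom",
        "value": [{"secretRef": {"name": secret_name}}],
    }
]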
r
I had this trouble 6 months ago and settled on a base with an empty envFrom list to patch
j
I learned a lot more about k8s manifests from this. Thanks for your help. If I used a base manifest, I can see it would add value in being able to see the manifest w/o having to do a
Copy code
kubectl get pod POD_ID -o yaml
as far as dynamic variables go, I'm not sure if those would be able to pull from the cluster as well. I think for my purpose of grabbing secrets from the cluster one would still have to do the below, just with no json patch needed since it would be in the base manifest. This worked btw:
Copy code
infra_overrides={
    "customizations": [
        {
            "op": "add",
            "path": "/spec/template/spec/containers/0/env/-",
            "value": {
                "name": "foo_password_env_var",
                "valueFrom": {
                    "secretKeyRef": {
                        "name": "prefect-secrets",
                        "key": "foo-password",
                    }
                }
            }
        },
    ]
}
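(For that patch to resolve at runtime, a Secret named prefect-secrets with a foo-password key has to exist in the job's namespace - hypothetically created like:)
Copy code
# secret/key names match the patch above; the namespace is a placeholder
kubectl create secret generic prefect-secrets \
  --from-literal=foo-password=changeme \
  -n foo-namespace
(The /env/- path works here without any pre-existing envFrom because the base manifest always populates the env list.)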
thanks for your patience and help
r
No worries 👍