<@ULVA73B9P> when using `secretKeyRef` for deploying prefect deployment on kubernetes I get `is not ...
k
@Marvin when using `secretKeyRef` for deploying a Prefect deployment on Kubernetes I get `is not of type 'object'`. What is the correct format?
m
When deploying Prefect on Kubernetes using `secretKeyRef`, it seems you are having an issue with the format, which has resulted in a type error. The error message indicates that `secretKeyRef` must be an object. A generic way to define `secretKeyRef` in Kubernetes manifests is as follows:
```yaml
env:
  - name: SOME_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: name-of-the-secret
        key: key-in-the-secret
```
You would need to create a Kubernetes secret first and then reference it in your deployment manifests. Here's an example command to create a secret for a Prefect API key in Kubernetes:
```bash
kubectl create secret generic prefect-api-key \
  --namespace=prefect --from-literal=key=your-prefect-cloud-api-key
```
This `prefect-api-key` secret can then be referenced in the deployment configuration using `secretKeyRef`. Ensure you name the secret accordingly and that the `key` within `secretKeyRef` matches the key set when you created the Kubernetes secret. For more information on managing Prefect Cloud API keys and on deploying Prefect to Kubernetes (though not specific to `secretKeyRef`), you can refer to the following documents:
- Manage Prefect Cloud API Keys
- Deployment to Kubernetes Guide

If you're still having issues, it would be helpful to see the exact manifest or code snippet where you are using `secretKeyRef` to provide more tailored advice.
n
@Krystal FYI, Marvin has memory for the duration of a Slack thread, so if you want Marvin to remember what you said last, you should use the same thread.
k
oh awesome! Thanks!
I can't seem to find a solution to this, could you help please? @Nate
```yaml
- name: "collect-api-results"
  cron: null
  description: "Collect api results workflow"
  job_variables:
    env:
      - name: LD_SDK_KEY
        valueFrom:
          secretKeyRef:
            name: prefect-ld-sdk-key
            key: ld-sdk-key
```
This is what I am passing in to my `.deploy()`:
```python
.deploy(
    work_pool_name=constants.PrefectWorkpool.CPU.value,
    image=constants.PREFECT_WORKFLOW_IMAGE,
    build=False,
    push=False,
    **(what I have in the above),
)
```
But I keep getting a validation error 😞
```
prefect.deployments.runner.DeploymentApplyError: Error creating deployment: <ValidationError: "[{'name': 'LD_SDK_KEY', 'valueFrom': {'secretKeyRef': {'name': 'prefect-ld-sdk-key', 'key': 'ld-sdk-key'}}}] is not of type 'object'">
```
```json
{
  "job_configuration": {
    "env": "{{ env }}",
    "name": "{{ name }}",
    "labels": "{{ labels }}",
    "command": "{{ command }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "tolerations": "{{ tolerations }}",
            "containers": [
              {
                "env": "{{ env }}",
                "args": "{{ command }}",
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "resources": "{{ resources }}"
              }
            ],
            "completions": 1,
            "parallelism": 1,
            "restartPolicy": "Never",
            "serviceAccountName": "prefect-worker"
          }
        },
        "backoffLimit": 0,
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      },
      "metadata": {
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}",
        "generateName": "{{ name }}-"
      },
      "apiVersion": "batch/v1"
    },
    "stream_output": "{{ stream_output }}",
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
  }
}
```
This is my config from when I deployed my Prefect worker via Helm.
Could it be that it's getting confused by the `{{ env }}` appearing both inside and outside of `job_manifest` in the config?
n
yeah, we don't natively support Kubernetes secrets here like this:
```yaml
- name: "collect-api-results"
  cron: null
  description: "Collect api results workflow"
  job_variables:
    env:
      - name: LD_SDK_KEY
        valueFrom:
          secretKeyRef:
            name: prefect-ld-sdk-key
            key: ld-sdk-key
```
I think what you want is something like this:
```json
{
  "job_configuration": {
    "env": "{{ env }}",
    "name": "{{ name }}",
    "labels": "{{ labels }}",
    "command": "{{ command }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "tolerations": "{{ tolerations }}",
            "containers": [
              {
                "env": [
                  {
                    "name": "LD_SDK_KEY",
                    "valueFrom": {
                      "secretKeyRef": {
                        "name": "prefect-ld-sdk-key",
                        "key": "ld-sdk-key"
                      }
                    }
                  }
                ],
                "args": "{{ command }}",
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "resources": "{{ resources }}"
              }
            ],
            "completions": 1,
            "parallelism": 1,
            "restartPolicy": "Never",
            "serviceAccountName": "prefect-worker"
          }
        },
        "backoffLimit": 0,
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      },
      "metadata": {
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}",
        "generateName": "{{ name }}-"
      },
      "apiVersion": "batch/v1"
    },
    "stream_output": "{{ stream_output }}",
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
  }
}
```
k
Can I change my config to
```json
{
  "job_configuration": {
    "env": "{{ env }}",
    "name": "{{ name }}",
    "labels": "{{ labels }}",
    "command": "{{ command }}",
    "namespace": "{{ namespace }}",
    "job_manifest": {
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "tolerations": "{{ tolerations }}",
            "containers": [
              {
                "container_env": "{{ container_env }}",
                "args": "{{ command }}",
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "resources": "{{ resources }}"
              }
            ],
            "completions": 1,
            "parallelism": 1,
            "restartPolicy": "Never",
            "serviceAccountName": "prefect-worker"
          }
        },
        "backoffLimit": 0,
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      },
      "metadata": {
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}",
        "generateName": "{{ name }}-"
      },
      "apiVersion": "batch/v1"
    },
    "stream_output": "{{ stream_output }}",
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}"
  }
}
```
And then pass in the config like below?
```yaml
- name: "collect-api-results"
  cron: null
  description: "Collect api results workflow"
  job_variables:
    container_env:
      - name: LD_SDK_KEY
        valueFrom:
          secretKeyRef:
            name: prefect-ld-sdk-key
            key: ld-sdk-key
```
n
No, I don't believe so. If you wanted to do all the stuff in `prefect.yaml`, you'd have to pass the whole `job_manifest` as a YAML object (which you could create a YAML definition for).
k
Or is it that Prefect currently doesn't support lists at all?
Ah ok
n
We do support lists; it's just that `container_env` wouldn't be at the "right level" to be one of the valid `job_variables`, whereas `job_manifest` is.
k
Ah ok. Is there any way I can somehow hack it and add the secret from my YAML file rather than the config?
I can't seem to find a way to overwrite the config template if the work pool is already created, because it gets ignored. So the deployment becomes really manual, where I need to delete the work pool and do a fresh Helm release 😞
```yaml
baseJobTemplateName: prefect-worker-cpu-base-job-template # -- JSON formatted base job template. If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored.
```
From the Helm values.yaml. Unless you know of a way I can overwrite that with changes?
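(As a side note, an existing work pool's base job template can usually be updated in place, without deleting the pool or doing a fresh Helm release. This is a sketch assuming the Prefect 2.x CLI, with placeholder names for the pool and file: the file would hold the full base job template, e.g. a trimmed fragment like the one below.)

```
{
  "variables": { ... },
  "job_configuration": {
    "job_manifest": { ... }
  }
}
```

It could then be applied with `prefect work-pool update "cpu-pool" --base-job-template base-job-template.json` (flag name assumed from the Prefect 2.x CLI); the Helm chart's `baseJobTemplateName` only takes effect when the pool is first created.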
n
> if you wanted to do all the stuff in `prefect.yaml` you'd have to pass the whole `job_manifest` as a yaml object (which you could create a yaml definition for)

To clarify: your job config should be mostly unrelated to your worker, i.e. you should not need to delete or alter your work pool or worker to change your job config. You can alter your `job_manifest` on a per-deployment basis by specifying it in your deployment definition in `prefect.yaml`, like this:
```yaml
- name: "collect-api-results"
  cron: null
  description: "Collect api results workflow"
  job_variables:
    job_manifest:
      kind: Job
      ...
```
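(The same per-deployment override can be expressed from Python when calling `.deploy()`. This is a hedged sketch, assuming `job_variables` accepts a plain dict and that the full manifest, including the fields shown in the base job template earlier in the thread, needs to be supplied since it replaces the pool's `job_manifest`; the pool and image names are placeholders.)

```python
# Sketch: nest the secretKeyRef env entry inside job_manifest, which is a
# valid top-level job variable, rather than passing a bare env list.
job_variables = {
    "job_manifest": {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "prefect-job",
                            "env": [
                                {
                                    "name": "LD_SDK_KEY",
                                    "valueFrom": {
                                        "secretKeyRef": {
                                            "name": "prefect-ld-sdk-key",
                                            "key": "ld-sdk-key",
                                        }
                                    },
                                }
                            ],
                            # ... remaining container fields (image, args,
                            # resources) as in the base job template
                        }
                    ],
                    "restartPolicy": "Never",
                }
            }
        },
    }
}

# my_flow.deploy(
#     work_pool_name="cpu-pool",  # placeholder
#     image="my-image",           # placeholder
#     build=False,
#     push=False,
#     job_variables=job_variables,
# )
```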
And to reiterate: to avoid a bunch of boilerplate, you can create YAML definitions that you can reuse for these job manifests, or create more granular definitions for your container specs so you can slot them into different manifests as needed.
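(The reusable-definitions idea can be sketched with standard YAML anchors and aliases in `prefect.yaml`; all names below are placeholders, and the manifest body is elided.)

```yaml
definitions:
  job_manifests:
    ld_sdk_manifest: &ld_sdk_manifest
      apiVersion: batch/v1
      kind: Job
      # ... rest of the manifest, including the secretKeyRef env entry

deployments:
  - name: "collect-api-results"
    description: "Collect api results workflow"
    job_variables:
      job_manifest: *ld_sdk_manifest
  - name: "another-workflow"
    job_variables:
      job_manifest: *ld_sdk_manifest
```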
k
Ah ok I definitely misunderstood! Ok will give it a try, thanks a mill Nate!
n
catjam