a
Hello community. For the prefect-worker version "2024.5.29222000", can someone point me to the exact helm chart values?
j
i’m afk at the moment, but you can go to that file in GitHub and choose a specific tag in the upper left-hand side
a
I did, terraform was complaining about the values taken from there
Copy code
│ Error: values don't meet the specifications of the schema(s) in the following chart(s):
│ prefect-worker:
│ - worker: Additional property autoscaling is not allowed
│ - worker.config.baseJobTemplate: Invalid type. Expected: [string,null], given: object
│ 
│ 
│   with module.prefect.helm_release.prefect_worker,
│   on ../../tfmodules/prefect/workers.tf line 1, in resource "helm_release" "prefect_worker":
│    1: resource "helm_release" "prefect_worker"
j
there is no autoscaling parameter in the version you referenced
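(As an aside, a minimal sketch of a values shape that satisfies that schema, with placeholder variable names: drop the worker.autoscaling block entirely and pass the base job template as a JSON-encoded string rather than a nested object.)
Copy code
resource "helm_release" "prefect_worker" {
  name       = "prefect-worker"
  repository = "https://prefecthq.github.io/prefect-helm"
  chart      = "prefect-worker"
  version    = "2024.5.29222000"

  values = [
    yamlencode({
      worker = {
        # no `autoscaling` key here - this chart version's schema rejects it
        config = {
          workPool        = var.work_pool_name                # placeholder variable
          baseJobTemplate = jsonencode(var.base_job_template) # a string, not an object
        }
      }
    })
  ]
}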
a
ok thanks Jamie, I will bookmark this page
j
once you upgrade though, that page (and values file) will no longer be relevant - just fyi!
a
upgrade to prefect 3?
j
upgrade to a new Helm chart version
for example, the latest version - 2024.6.4205506
a
I can use the latest version for the worker on prefect server 2.19.3?
j
yep, the worker is running version 2.19.4
a
ok how do I pass values to the helm chart then?
and the base job template?
I'm currently passing the template like this:
Copy code
resource "kubernetes_config_map" "base_job_template" {

  for_each = { for np in var.karpenter_node_pools : np.node_pool_name => np }
  
  metadata {
    name = "base-job-template-configmap"
    namespace = "${var.environment}-prefect-worker-${each.value.node_pool_name}"
  }

  data = {
    "baseJobTemplate.json" = jsonencode(
      // Your base job template JSON content goes here
      {
j
base job template upgrade notes were mentioned in the previous release - here
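(On the "how do I pass values to the helm chart" question: one common pattern, sketched below with placeholder file and variable names, is to render a templated values file with templatefile and hand the result to helm_release. One thing to watch is that the rendered base job template stays a YAML string - for example by writing baseJobTemplate: ${jsonencode(BASE_JOB_TEMPLATE)} inside the template - otherwise the JSON gets parsed as an object and trips the schema error above.)
Copy code
resource "helm_release" "prefect_worker" {
  name       = "prefect-worker"
  repository = "https://prefecthq.github.io/prefect-helm"
  chart      = "prefect-worker"
  version    = var.chart_version # placeholder

  # The map keys below become ${WORK_POOL_NAME} and ${BASE_JOB_TEMPLATE}
  # inside values.yaml.tpl (placeholder path).
  values = [
    templatefile("${path.module}/values.yaml.tpl", {
      WORK_POOL_NAME    = var.work_pool_name                            # placeholder variable
      BASE_JOB_TEMPLATE = file("${path.module}/base-job-template.json") # raw JSON text, placeholder path
    })
  ]
}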
a
That makes sense
just wondering though for the following:
Copy code
worker:
  # -- unique cluster identifier, if none is provided this value will be inferred at time of helm install
  clusterUid: ""

  image:
    # -- worker image repository
    repository: prefecthq/prefect
    ## prefect tag is pinned to the latest available image tag at packaging time.  Update the value here to
    ## override pinned tag
    # -- prefect image tag (immutable tags are recommended)
    prefectTag: 2-python3.11-kubernetes
    # -- worker image pull policy
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    # -- worker image pull secrets
    pullSecrets: []
    # -- enable worker image debug mode
    debug: false

  ## general configuration of the worker
  config:
    # -- the work pool that your started worker will poll.
    workPool: ${WORK_POOL_NAME}
    # -- one or more work queue names for the worker to pull from. if not provided, the worker will pull from all work queues in the work pool
    workQueues: []
    # -- how often the worker will query for runs
    queryInterval: 5
    # -- when querying for runs, how many seconds in the future can they be scheduled
    prefetchSeconds: 10
    # -- connect using HTTP/2 if the server supports it (experimental)
    http2: true
    ## You can set the worker type here.
    ## The default image includes only the type "kubernetes".
    ## Custom workers must be properly registered with the prefect cli.
    ## See the guide here: https://docs.prefect.io/2.11.3/guides/deployment/developing-a-new-worker-type/
    # -- specify the worker type
    type: kubernetes
    ## one of 'always', 'if-not-present', 'never', 'prompt'
    # --  install policy to use workers from Prefect integration packages.
    installPolicy: prompt
    # -- the name to give to the started worker. If not provided, a unique name will be generated.
    name: null
    # -- maximum number of flow runs to start simultaneously (default: unlimited)
    limit: null
    # -- JSON formatted base job template. If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored.
    baseJobTemplate: ${BASE_JOB_TEMPLATE}
shouldn't it be worker.config.baseJobTemplateName? Then I would have a config map name that it would use if I create the config map as follows?
Copy code
resource "kubernetes_config_map" "base_job_template" {

  for_each = { for np in var.karpenter_node_pools : np.node_pool_name => np }
  
  metadata {
    name = "base-job-template-configmap"
    namespace = "${var.environment}-prefect-worker-${each.value.node_pool_name}"
  }

  data = {
    "baseJobTemplate.json" = jsonencode(
      // Your base job template JSON content goes here
      {
j
the exposed values for the base job template have changed - i linked the BJT section of the latest values file above
a
ok please, because it is blocking me from testing out the worker
I just don't have a clear idea of how it works for the most recent version because there is a lot in the JSON template
j
i’m not sure what you mean
if you have an existing config map, you would just pass the name of that to the worker. if you want the chart to create the config map for you, you would pass the data as a string to the configuration parameter
a
That's what I am talking about, so I can just do the following?
Copy code
baseJobTemplate: ${BASE_JOB_TEMPLATE}
and if that name exists in the namespace, I am good
j
no, please reference the values file i linked - you would need to pass the name of the CM under the existingConfigMapName parameter
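(Putting that together with the earlier Terraform, a rough sketch of the existing-ConfigMap route on the newer chart, with placeholder names; the nested worker.config.baseJobTemplate.existingConfigMapName / configuration parameters are the ones described above, so double-check them against the values file for your chart version.)
Copy code
resource "kubernetes_config_map" "base_job_template" {
  metadata {
    name      = "base-job-template-configmap"
    namespace = var.worker_namespace # placeholder variable
  }

  data = {
    # same key as the earlier example; confirm the key name the chart expects
    "baseJobTemplate.json" = file("${path.module}/base-job-template.json") # placeholder path
  }
}

resource "helm_release" "prefect_worker" {
  name       = "prefect-worker"
  repository = "https://prefecthq.github.io/prefect-helm"
  chart      = "prefect-worker"
  version    = "2024.6.4205506"
  namespace  = var.worker_namespace # placeholder variable

  values = [
    yamlencode({
      worker = {
        config = {
          workPool = var.work_pool_name # placeholder variable
          baseJobTemplate = {
            # point the chart at the ConfigMap managed above...
            existingConfigMapName = kubernetes_config_map.base_job_template.metadata[0].name
            # ...or instead let the chart create one by passing the JSON as a string:
            # configuration = file("${path.module}/base-job-template.json")
          }
        }
      }
    })
  ]
}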
a
aah, got it
Hey Jamie, one last thing... If I am not in need of an API key, how can I remove it? Can I just take this out from the values?
Copy code
apiKeySecret:
  # -- prefect API secret name
  name: prefect-api-key
  # -- prefect API secret key
  key: key
  value: ""
j
yep, any values you aren't using can be removed from your values file