Josh Greenhalgh

03/24/2021, 1:44 PM
Anyone got any ideas on how to force the Postgres chart used in the Helm chart to deploy on a particular node pool? I have tried this:
postgresql:
      postgresqlPassword: ***
      nodeSelector:
        cloud.google.com/gke-nodepool: some-node-pool
but it doesn't seem to work
Mariia Kerimova

03/24/2021, 2:33 PM
Hello Josh! Can you try the following:
postgresql:
      postgresqlPassword: ***
      primary:
        nodeSelector:
          cloud.google.com/gke-nodepool: some-node-pool
And can you confirm that the nodes have the label cloud.google.com/gke-nodepool: some-node-pool?
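For reference, GKE node-pool labels appear in the node metadata, e.g. (excerpt of `kubectl get node <node-name> -o yaml`; `some-node-pool` is a placeholder):

```yaml
metadata:
  labels:
    cloud.google.com/gke-nodepool: some-node-pool
```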
Josh Greenhalgh

03/24/2021, 2:35 PM
Yep will give that a go
Nope 😞 - the node pool defo exists, so that's not the issue
I suppose this is a perfect opportunity to use an external db...
Mariia Kerimova

03/24/2021, 2:44 PM
Did you update the chart with the nodeSelector? According to the Bitnami PostgreSQL chart, nodeSelector should be set not globally but under primary or readReplicas. So if you update the Prefect Helm chart to have postgresql.primary.nodeSelector with labels matching your nodes, it should work 🙂
Also, if you want to use your own external db, you can set useSubChart: false and provide the externalHostname of your database
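A sketch of those external-database values (key names as mentioned above; the hostname is a hypothetical placeholder and the exact nesting may vary by chart version):

```yaml
postgresql:
  useSubChart: false                   # don't deploy the bundled Bitnami Postgres
  externalHostname: my-db.example.com  # placeholder: hostname of your own database
  postgresqlPassword: ***
```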
Josh Greenhalgh

03/24/2021, 2:57 PM
Yeah, I did try that in the past with the primary key and it doesn't seem to be getting through; these are my values:
postgresql:
      postgresqlPassword: ***
      primary:
        nodeSelector:
          cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
    prefectConfig:
      logging:
        level: "DEBUG"
    hasura:
      resources: 
        limits:
          cpu: 0.5
          memory: 512Mi
        requests:
          cpu: 0.25
          memory: 256Mi
      nodeSelector:
        cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
    graphql:
      image:
        tag: core-0.14.12
      resources: 
        limits:
          cpu: 0.2
          memory: 128Mi
        requests:
          cpu: 0.2
          memory: 128Mi
      nodeSelector:
        cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
    apollo:
      apolloApiUrl: http://****:4200/graphql
      resources: 
        limits:
          cpu: 0.15
          memory: 256Mi
        requests:
          cpu: 0.15
          memory: 256Mi
      nodeSelector:
        cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
    ui:
      resources: 
        limits:
          cpu: 0.5
          memory: 20Mi
        requests:
          cpu: 0.25
          memory: 10Mi
      image:
        tag: core-0.14.12
      nodeSelector:
        cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
    towel:
      image:
        tag: core-0.14.12
      resources: 
        limits:
          cpu: 0.5
          memory: 128Mi
        requests:
          cpu: 0.25
          memory: 64Mi
      nodeSelector:
        cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
But if I look at the actual pod, it looks like this, with no ref to the nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-24T12:16:22Z"
  generateName: prefect-server-postgresql-
  labels:
    app.kubernetes.io/instance: prefect-server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgresql
    controller-revision-hash: prefect-server-postgresql-7f5946fd88
    helm.sh/chart: postgresql-9.3.4
    role: master
    statefulset.kubernetes.io/pod-name: prefect-server-postgresql-0
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:controller-revision-hash: {}
          f:helm.sh/chart: {}
          f:role: {}
          f:statefulset.kubernetes.io/pod-name: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"d45cceb2-b76b-4167-8f6e-368273d7c19f"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"prefect-server-postgresql"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"BITNAMI_DEBUG"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"PGDATA"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRES_DB"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRES_PASSWORD"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:secretKeyRef:
                    .: {}
                    f:key: {}
                    f:name: {}
              k:{"name":"POSTGRES_USER"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRESQL_ENABLE_LDAP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRESQL_ENABLE_TLS"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRESQL_INITSCRIPTS_USERNAME"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRESQL_PORT_NUMBER"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POSTGRESQL_VOLUME_DIR"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:livenessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":5432,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:name: {}
                f:protocol: {}
            f:readinessProbe:
              .: {}
              f:exec:
                .: {}
                f:command: {}
              f:failureThreshold: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:securityContext:
              .: {}
              f:runAsUser: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/dev/shm"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/docker-entrypoint-initdb.d/"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:hostname: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
        f:subdomain: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"custom-init-scripts"}:
            .: {}
            f:configMap:
              .: {}
              f:defaultMode: {}
              f:name: {}
            f:name: {}
          k:{"name":"data"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"dshm"}:
            .: {}
            f:emptyDir:
              .: {}
              f:medium: {}
              f:sizeLimit: {}
            f:name: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-24T12:16:22Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.32.1.2"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2021-03-24T12:16:36Z"
  name: prefect-server-postgresql-0
  namespace: prefect
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: prefect-server-postgresql
    uid: d45cceb2-b76b-4167-8f6e-368273d7c19f
  resourceVersion: "11181483"
  selfLink: /api/v1/namespaces/prefect/pods/prefect-server-postgresql-0
  uid: ad82aaf5-9869-42b8-b5a6-8fe4f438045d
spec:
  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: POSTGRESQL_PORT_NUMBER
      value: "5432"
    - name: POSTGRESQL_VOLUME_DIR
      value: /bitnami/postgresql
    - name: POSTGRESQL_INITSCRIPTS_USERNAME
      value: postgres
    - name: PGDATA
      value: /bitnami/postgresql/data
    - name: POSTGRES_USER
      value: prefect
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgresql-password
          name: prefect-server-postgresql
    - name: POSTGRES_DB
      value: prefect
    - name: POSTGRESQL_ENABLE_LDAP
      value: "no"
    - name: POSTGRESQL_ENABLE_TLS
      value: "no"
    image: docker.io/bitnami/postgresql:11.9.0-debian-10-r1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -c
        - exec pg_isready -U "prefect" -d "dbname=prefect" -h 127.0.0.1 -p 5432
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: prefect-server-postgresql
    ports:
    - containerPort: 5432
      name: tcp-postgresql
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /bin/sh
        - -c
        - -e
        - |
          exec pg_isready -U "prefect" -d "dbname=prefect" -h 127.0.0.1 -p 5432
          [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
    securityContext:
      runAsUser: 1001
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d/
      name: custom-init-scripts
    - mountPath: /dev/shm
      name: dshm
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5c9rd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: prefect-server-postgresql-0
  nodeName: gke-ecaa-gke-cluster-prefect--jobs-fi-22bd82a9-2xld
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
  serviceAccount: default
  serviceAccountName: default
  subdomain: prefect-server-postgresql-headless
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: prefect-server-postgresql-init-scripts
    name: custom-init-scripts
  - emptyDir:
      medium: Memory
      sizeLimit: 1Gi
    name: dshm
  - emptyDir: {}
    name: data
  - name: default-token-5c9rd
    secret:
      defaultMode: 420
      secretName: default-token-5c9rd
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-03-24T12:16:22Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-03-24T12:16:36Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-03-24T12:16:36Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-03-24T12:16:22Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://856ea6c281a9b98c73526debd490f0602a0f702fc66b37089c06019c84222ae2
    image: bitnami/postgresql:11.9.0-debian-10-r1
    imageID: docker-pullable://bitnami/postgresql@sha256:b0eae02b16fed8fd6a2f7b13e68152657cd1a06f7b90192cf5244bfe78cb745f
    lastState: {}
    name: prefect-server-postgresql
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-24T12:16:30Z"
  hostIP: 10.154.0.10
  phase: Running
  podIP: 10.32.1.2
  podIPs:
  - ip: 10.32.1.2
  qosClass: Burstable
  startTime: "2021-03-24T12:16:22Z"
The selectors on the other services all work as expected, and it's the same node pool, so 🤷🏿
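For comparison, if the value had reached the subchart, the rendered pod spec above would be expected to contain a nodeSelector entry alongside the scheduler-assigned nodeName (sketch):

```yaml
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: "${google_container_node_pool.prefect_fixed_compute.name}"
```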
Mariia Kerimova

03/24/2021, 3:13 PM
Oh, I see. Your Postgres uses version 9.3.4 of the Bitnami Helm chart; the current version is 11.9.0. In 9.3.4 you can set nodeSelector like this:
postgresql:
      postgresqlPassword: ***
      master:
        nodeSelector:
          cloud.google.com/gke-nodepool: some-node-pool
In version 10, master and slave were renamed to primary and readReplicas
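So, after a bump to chart v10 or later, the same values would use the renamed key (a sketch based on the rename described above):

```yaml
postgresql:
  postgresqlPassword: ***
  primary:            # was "master" before chart v10
    nodeSelector:
      cloud.google.com/gke-nodepool: some-node-pool
```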
Josh Greenhalgh

03/24/2021, 3:27 PM
Ahh ok, thanks! Probably a good idea to bump the Bitnami chart too, yeah?
Mariia Kerimova

03/24/2021, 3:36 PM
Yes, even though I don't see other breaking changes, I can't guarantee that there are no other caveats. I would bump and test it thoroughly.
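If the Prefect chart pulls in PostgreSQL as a Helm dependency, the bump would happen in its Chart.yaml (a sketch; the actual dependency declaration in the Prefect Server chart may differ, and the version shown is illustrative):

```yaml
dependencies:
  - name: postgresql
    version: "10.x.x"   # was 9.3.4; v10 renamed master/slave to primary/readReplicas
    repository: https://charts.bitnami.com/bitnami
```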
Josh Greenhalgh

03/24/2021, 3:37 PM
Cool will do thanks!