# ask-community
a
Hello guys, I need help with the prefect-server ingress. The values.yaml file has no way of adding the service port number for the ingress; even when I uncomment servicePort, helm install complains about it:
ingress:
  # -- enable ingress record generation for server
  enabled: true

  # -- port for the ingress' main path
  #servicePort: 4200

  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
  ## ref: <https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/>
  # -- IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
  className: ""
  host:
    # -- default host for the ingress record
    hostname: ${HOST_NAME}
    # -- default path for the ingress record
    path: /

    # -- ingress path type
    pathType: Prefix
j
hi @alan - it should be the name of the service port that is set on the service
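To illustrate what "the name of the service port" means here, a minimal sketch of how a Kubernetes Ingress backend can reference a Service port by name rather than number (the hostname is a placeholder; the port name `server-svc-port` and port 4200 come from this thread, not from inspecting the chart's actual templates):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prefect-server
spec:
  ports:
    - name: server-svc-port   # the Ingress refers to this name
      port: 4200
      targetPort: 4200
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prefect-server
spec:
  rules:
    - host: prefect.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prefect-server
                port:
                  name: server-svc-port   # resolved to port 4200 via the Service
```

Because the port is resolved by name through the Service, the Ingress spec itself never needs to carry the number.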
a
for chart version "2024.5.23194919"
I get the following error when applying the chart via Terraform
module.prefect.helm_release.prefect_server: Modifying... [id=prefect-server]
╷
│ Error: values don't meet the specifications of the schema(s) in the following chart(s):
│ prefect-server:
│ - ingress: Additional property servicePort is not allowed
│ 
│ 
│   with module.prefect.helm_release.prefect_server,
│   on ../../tfmodules/prefect/main.tf line 1, in resource "helm_release" "prefect_server":
│    1: resource "helm_release" "prefect_server" {
│ 
╵
it's weird, because servicePort was in the chart's values.yaml before I touched it
j
hm we might have missed a json schema update
i’m afk but i’ll take a look when back
a
can i go back a chart version to resolve this?
ingress:
  # -- enable ingress record generation for server
  enabled: true

  # -- port for the ingress' main path
  servicePort: 4200

  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster.
  ## ref: <https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/>
  # -- IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
  className: ""
  host:
    # -- default host for the ingress record
    hostname: ${HOST_NAME}
    # -- default path for the ingress record
    path: /

    # -- ingress path type
    pathType: Prefix
j
yep, any version before today should be good
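Since the deployment is managed with Terraform, pinning the older chart version amounts to setting `version` on the `helm_release`. A sketch (the repository URL and resource layout are illustrative; the version string is the one mentioned above):

```hcl
resource "helm_release" "prefect_server" {
  name       = "prefect-server"
  repository = "https://prefecthq.github.io/prefect-helm"
  chart      = "prefect-server"
  # Pin the chart so an older values schema keeps validating
  version    = "2024.5.23194919"

  values = [file("${path.module}/values.yaml")]
}
```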
a
and just setting servicePort: 4200 should suffice, right?
j
there was no servicePort parameter in the version before today
so the version you mentioned should work, just make sure you’re looking at the corresponding values file
a
can you link that file?
i fetched the values.yaml from your github repo
a
Yeah, but where do I set the servicePort number for the ingress there?
because the ingress shows up as empty and the load balancer controller doesn't give me a URL
spec:
  rules:
  - host: prefect.ili-research.*****.com
    http:
      paths:
      - backend:
          service:
            name: prefect-server
            port:
              name: server-svc-port
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
it sets the port name server-svc-port, but the port number is missing
j
ok i’ve double checked - the latest version (2024.5.29xxx) has the appropriate schema in place
so in your case, you were using an old version of the chart but trying to use the latest values schema
a
ok I will try with the newer version of the chart
same problem with the newer version. The service port number is missing and the ingress never gets a load balancer:
http:
  paths:
  - backend:
      service:
        name: prefect-server
        port:
          name: server-svc-port
    path: /
    pathType: Prefix
status:
  loadBalancer: {}
@Jamie Zieziula can you guide me on why the port number isn't set?
Also, it isn't clear how to turn off the internal postgresql service. Even after setting postgresql.enabled=false, it still shows up as a pod in Kubernetes after applying with Terraform
j
Hi @Alan the ingress is being linked to the prefect server service via the service port name
server-svc-port
are you deploying this into a cloud environment and expecting a cloud load balancer to be created? if so, you'd need to set some cloud-specific annotation that would tell gcp/aws/etc to provision a load balancer
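For the AWS case, a sketch of values that would let the AWS Load Balancer Controller pick up the Ingress and provision an ALB, assuming the chart's ingress section exposes an `annotations` field (the annotation keys are standard AWS Load Balancer Controller annotations; whether the chart passes them through is an assumption):

```yaml
ingress:
  enabled: true
  # Hand the Ingress to the AWS Load Balancer Controller
  className: alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
```

Without an ingress class (or equivalent annotation) that the controller watches, the Ingress stays unclaimed and `status.loadBalancer` remains empty.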
re: postgres -- i'm confused, are you bringing your own postgres server?
a
I got that part. It seems I am having an issue with my AWS load balancer controller, which is why I am not seeing an ALB being created. As for the postgres server, yes, I am connecting to an RDS database instead and the connection is working. I just need to remove the internally provided one.
j
i see i see
were you able to dynamically set your db connection string via the chart or did you have to do something manual to support it?
a
no, it was all through the chart
j
I'm looking at the chart now to see how we would expect
postgresql.enabled=false
to work
can you share what you set?
a
postgresql:
  enabled: false
  auth:
    # -- determines whether an admin user is created within postgres
    enablePostgresUser: false
    # -- name for a custom database
    database: server
    # -- name for a custom user
    username: prefect
    ## This is the password for `username` and will be set within the secret `{fullnameOverride}-postgresql` at the key `password`.
    ## This argument is only relevant when using the Postgres database included in the chart.
    ## For an external postgres connection, you must create and use `existingSecret` instead.
    # -- password for the custom user. Ignored if `auth.existingSecret` with key `password` is provided
    password: prefect-rocks

    ## This secret must contain two key-value pairs where the first key is `connection-string` and the value is the
    ## connection string containing your password (e.g. postgresql+asyncpg://{username}:{password}@{hostname}/{database}).
    ## The second key-value pair has the key `password` and the value is the {password} used in the connection string
    # -- Name of existing secret to use for PostgreSQL credentials.
    existingSecret: ${EXISTING_SECRET}

  # -- PostgreSQL container port
  containerPorts:
    postgresql: 5432

  # externalHostname defines the address to contact an externally
  # managed postgres database instance at. This is not required if
  # `internalPostgres` is `true`
  externalHostname: ""

  # -- enable use of bitnami/postgresql subchart
  useSubChart: true

  ## postgresql configuration below here is only used if using the subchart

  ## Initdb configuration
  ## ref: <https://github.com/bitnami/containers/tree/main/bitnami/postgresql#specifying-initdb-arguments>
  primary:
    initdb:
      # -- specify the PostgreSQL username to execute the initdb scripts
      user: postgres

    ## persistence enables a PVC that stores the database between deployments. If making changes to the database deployment, this
    ## PVC will need to be deleted for database changes to take effect. This is especially notable when the authentication password
    ## changes on redeploys. This is disabled by default because we do not recommend using the subchart deployment for production deployments.
    persistence:
      # -- enable PostgreSQL Primary data persistence using PVC
      enabled: false
      # -- PVC Storage Request for PostgreSQL volume
      size: 8Gi
I set the existing secret
j
i see, i think postgresql is still deploying bc you are setting useSubChart = true
I'm testing locally rn to confirm
a
🙏
j
helm install prefect-server prefect/prefect-server --set postgresql.enabled=false --set postgresql.useSubChart=false
this works -- no postgresql pod is deployed. This is definitely confusing, i'll open an issue and we will figure out a better way. thanks for raising!
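Since the deployment above goes through Terraform rather than the helm CLI, the same two overrides can be expressed as `set` blocks on the `helm_release` (a sketch; repository URL and resource layout are illustrative):

```hcl
resource "helm_release" "prefect_server" {
  name       = "prefect-server"
  repository = "https://prefecthq.github.io/prefect-helm"
  chart      = "prefect-server"

  # Disable the bundled postgres entirely: both flags are needed,
  # since useSubChart controls whether the bitnami subchart deploys
  set {
    name  = "postgresql.enabled"
    value = "false"
  }
  set {
    name  = "postgresql.useSubChart"
    value = "false"
  }
}
```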
a
I just checked, it works!!
j
woot!
a
@Jamie Zieziula can you help me with the technical limits on scalability in the free version of Prefect?
Like, am I going to be limited in how many pods I can have running per work pool?
j
no limits on number of pods/workers/servers that you can run from our end
m
@Alan - I opened https://github.com/PrefectHQ/prefect-helm/pull/341 to address #336. If you have a chance to give it a look that feedback would be helpful 🤝