<@ULVA73B9P> i have a prefect server hosted on eks...
# ask-marvin
r
@Marvin i have a prefect server hosted on eks managed by karpenter, and there are node groups within this karpenter setup. is it possible to deploy flows or tasks to a specified group only?
m
thought for 53.3 seconds
Yes, it is possible to deploy flows to specific node groups in your EKS cluster managed by Karpenter. There are two main approaches:
1. Using Kubernetes infrastructure configuration
You can specify node selection criteria in the Kubernetes infrastructure your flow runs use. With the `KubernetesJob` infrastructure block (a Prefect 2.x pattern; in Prefect 3.x the equivalent settings go into a Kubernetes work pool's base job template, as later answers in this thread show), this looks like:
Copy code
python
from prefect.infrastructure.kubernetes import KubernetesJob

kubernetes_job = KubernetesJob(
    job_configuration={
        "spec": {
            "template": {
                "spec": {
                    # Option 1: Using nodeSelector
                    "nodeSelector": {
                        "node-group": "your-node-group-label"  # Use your actual node group label
                    },
                    
                    # Option 2: Using node affinity (more flexible)
                    "affinity": {
                        "nodeAffinity": {
                            "requiredDuringSchedulingIgnoredDuringExecution": {
                                "nodeSelectorTerms": [{
                                    "matchExpressions": [{
                                        "key": "node-group",
                                        "operator": "In",
                                        "values": ["your-node-group-name"]
                                    }]
                                }]
                            }
                        }
                    }
                }
            }
        }
    }
)
2. Using Karpenter provisioner configuration
You can also use Karpenter's provisioner configuration to ensure pods land on specific node groups, using:
- Node selectors
- Taints and tolerations
- Resource requirements
- Labels and annotations
For example, in your flow deployment (a Karpenter-side sketch follows this example):
Copy code
python
from prefect import flow
from prefect.deployments import Deployment
from prefect.infrastructure.kubernetes import KubernetesJob

@flow
def my_flow():
    # your flow code here
    pass

kubernetes_job = KubernetesJob(
    job_configuration={
        "spec": {
            "template": {
                "metadata": {
                    "labels": {
                        "app": "my-flow",
                        "node-group": "specific-group"
                    }
                },
                "spec": {
                    "tolerations": [{
                        "key": "dedicated",
                        "value": "ml-workload",
                        "effect": "NoSchedule"
                    }],
                    "nodeSelector": {
                        "node-group": "ml-nodes"  # Your specific node group
                    }
                }
            }
        }
    }
)

deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-flow-deployment",
    infrastructure=kubernetes_job
)
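On the Karpenter side, the provisioner that creates those nodes is what carries the matching label and taint. A rough sketch using the older v1alpha5 Provisioner API (the node-group label, dedicated taint, and instance types are assumptions carried over from the example above, not values from your cluster):
Copy code
yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: ml-nodes
spec:
  # Label that the flow pod's nodeSelector matches (assumed name)
  labels:
    node-group: ml-nodes
  # Taint that the flow pod's toleration matches (assumed key/value)
  taints:
    - key: dedicated
      value: ml-workload
      effect: NoSchedule
  requirements:
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["g4dn.xlarge", "g4dn.2xlarge"]
Newer Karpenter releases replace Provisioner with the NodePool API, but the idea is the same: whatever labels and taints the provisioner applies are what your pod's nodeSelector and tolerations must match.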
Important considerations:
1. Make sure your node groups are properly labeled in EKS
2. Verify that Karpenter's provisioner configuration allows scheduling on your target node groups
3. Consider node affinity over nodeSelector if you need more flexible scheduling rules
4. Test with a small flow first to verify the configuration works as expected
To verify which nodes your flows are running on, you can check:
Copy code
bash
kubectl get pods -n your-namespace -o wide
Remember that using node selectors or affinity rules might impact the scheduling flexibility and could potentially cause pods to remain pending if the specified nodes are not available. Make sure to align this with your Karpenter scaling policies. Would you like me to provide more specific details about any of these approaches or help with a particular configuration?
r
@Marvin for a prefect3 server on eks, is it possible to just mention system requirements for your tasks and in such case will prefect manage execution based on those system requirements?
m
thought for 76.9 seconds
Based on my research, here's the answer regarding system requirements for tasks in Prefect 3.x on EKS:
1. Direct Resource Management:
- Prefect itself doesn't automatically manage resource requirements
- Resource requirements need to be specified through Kubernetes configurations
2. How to Specify Requirements: You have two main approaches:
a) Work Pool Configuration:
- Create a Kubernetes work pool with a custom base job template that includes your resource requirements
- Example creating a work pool:
Copy code
bash
   prefect work-pool create "my-k8s-pool" --type kubernetes
- You can then customize the base job template to include resource requirements like:
Copy code
json
   {
     "spec": {
       "template": {
         "spec": {
           "containers": [{
             "resources": {
               "requests": {
                 "memory": "512Mi",
                 "cpu": "500m"
               },
               "limits": {
                 "memory": "1Gi",
                 "cpu": "1"
               }
             }
           }]
         }
       }
     }
   }
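To wire a template like this into the pool, export the default template, edit it, and update the pool; the same commands appear later in this thread ("template.json" is just an assumed filename, and "my-k8s-pool" matches the pool created above):
Copy code
bash
# dump the default Kubernetes base job template to a file
prefect work-pool get-default-base-job-template --type kubernetes -f template.json
# edit template.json to add the resources block shown above, then apply it to the pool
prefect work-pool update "my-k8s-pool" --base-job-template template.json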
b) KubernetesClusterConfig Block:
- You can configure additional Kubernetes-specific settings using a KubernetesClusterConfig block
3. Important Notes:
- Resource allocation is handled by Kubernetes, not Prefect
- You'll need to ensure your EKS cluster has sufficient resources to meet the requirements
- The requirements are specified at the work pool level, not at the individual task level
- The configuration follows the standard Kubernetes resource specification format
4. Best Practices:
- Start with reasonable resource requests based on your workload
- Monitor actual resource usage to optimize the settings
- Consider using different work pools for tasks with significantly different resource needs
For more detailed information, you can refer to the Prefect Kubernetes documentation. Unfortunately, Prefect 3.x doesn't currently support specifying resource requirements at the individual task level; it's managed at the work pool level through Kubernetes configurations. If you need different resource requirements for different tasks, you would need to create separate work pools with different resource configurations. Would you like me to provide more specific examples of how to set up work pools with different resource requirements, or would you like to know more about any particular aspect of this setup?
r
@Marvin i have a code script in which there are two parts both with distinct system requirements, i want to schedule script on a prefect3 server running on eks, so i segregated the two parts into two distinct flows but how do i schedule their consecutive execution?
m
thought for 105.2 seconds
Based on my research, I can provide two approaches to schedule consecutive execution of your flows with different resource requirements:
1. Using Flow Dependencies with Events (Recommended Approach):
Copy code
python
from prefect import flow
from prefect.deployments import run_deployment

@flow(name="high_resource_flow")
def flow1():
    # Your high-resource computation here
    pass

@flow(name="low_resource_flow")
def flow2():
    # Your low-resource computation here
    pass

# Deploy both flows to different work pools based on resource requirements
flow1_deployment = flow1.from_source(
    source="your_repo",
    entrypoint="path/to/file.py:flow1"
).deploy(
    name="high-resource-deployment",
    work_pool_name="high-resource-pool"  # Work pool configured with higher resources
)

flow2_deployment = flow2.from_source(
    source="your_repo",
    entrypoint="path/to/file.py:flow2"
).deploy(
    name="low-resource-deployment",
    work_pool_name="low-resource-pool"  # Work pool configured with lower resources
)

@flow
def orchestrator_flow():
    # Run flow1 and wait for completion
    flow1_run = run_deployment(
        name="high_resource_flow/high-resource-deployment",  # "<flow name>/<deployment name>"
        timeout=3600  # Adjust timeout as needed
    )

    # After flow1 completes, run flow2
    if flow1_run.state.is_completed():
        flow2_run = run_deployment(
            name="low_resource_flow/low-resource-deployment",
            timeout=3600
        )
2. Using Event-Based Triggers:
Copy code
python
from prefect.deployments import DeploymentEventTrigger
from prefect.server.schemas.triggers import DeploymentTriggerTypes

# Create a trigger for flow2 that depends on flow1's completion
trigger = DeploymentEventTrigger(
    name="flow2_trigger",
    deployment_name="low-resource-deployment/flow2",  # Target deployment to run
    event_type="prefect.flow-run.Completed",  # Trigger on completion
    source_deployment_name="high-resource-deployment/flow1"  # Source deployment to watch
)
To set this up:
1. Create Two Work Pools with different resource configurations:
Copy code
bash
# Create work pool for high-resource tasks
prefect work-pool create "high-resource-pool" --type kubernetes

# Create work pool for low-resource tasks
prefect work-pool create "low-resource-pool" --type kubernetes
2. Configure Resource Requirements in each work pool's base job template.
For high-resource-pool:
Copy code
json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [{
          "resources": {
            "requests": {
              "memory": "8Gi",
              "cpu": "2"
            },
            "limits": {
              "memory": "16Gi",
              "cpu": "4"
            }
          }
        }]
      }
    }
  }
}
For low-resource-pool:
Copy code
json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [{
          "resources": {
            "requests": {
              "memory": "512Mi",
              "cpu": "500m"
            },
            "limits": {
              "memory": "1Gi",
              "cpu": "1"
            }
          }
        }]
      }
    }
  }
}
3. Deploy Your Flows to their respective work pools and set up the trigger.
Key Benefits:
- Each flow runs with appropriate resources
- Automatic handling of dependencies
- Robust error handling and monitoring
- Scalable and maintainable approach
Important Notes:
- Make sure to have workers running for both work pools
- Monitor the first few runs to ensure resource allocations are appropriate
- Consider adding error handling and retry logic
- You can add scheduling to the orchestrator flow if you need periodic execution (see the sketch below)
Would you like me to elaborate on any part of this setup or provide more details about specific aspects like error handling or scheduling?
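If the whole pipeline should run on a schedule, one option is to deploy the orchestrator flow itself with a cron schedule. A sketch, assuming the orchestrator lives in the same repo (the source, entrypoint, and schedule are placeholders):
Copy code
python
from prefect import flow

# assuming orchestrator_flow is defined as above in path/to/file.py
flow.from_source(
    source="your_repo",
    entrypoint="path/to/file.py:orchestrator_flow",
).deploy(
    name="nightly-pipeline",
    work_pool_name="low-resource-pool",  # the orchestrator itself needs little compute
    cron="0 2 * * *",                    # run at 02:00 every day
)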
r
@Marvin how do i configure flows to node groups on kubernetes
m
thought for 33.1 seconds
Here's how to configure Prefect flows to run on specific Kubernetes node groups:
1. Create a Kubernetes Work Pool
First, create a Kubernetes work pool with a custom job template that includes node selection:
Copy code
bash
prefect work-pool create "k8s-specific-nodes" --type kubernetes
2. Configure Node Selection
There are two main ways to configure node selection:
a. Using Node Selectors
Copy code
yaml
# job-template.yaml
job_configuration:
  spec:
    template:
      spec:
        nodeSelector:
          node-group: your-node-group-label  # Replace with your actual node group label
b. Using Node Affinity
Copy code
yaml
# job-template.yaml
job_configuration:
  spec:
    template:
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-group
                      operator: In
                      values:
                        - your-node-group-name
3. Apply the Configuration
Update your work pool with the custom job template:
Copy code
bash
prefect work-pool create "k8s-specific-nodes" --base-job-template job-template.yaml --overwrite
4. Deploy Your Flow
When deploying your flow, specify the work pool:
Copy code
python
from prefect import flow

@flow
def my_flow():
    # your flow code here
    pass

if __name__ == "__main__":
    my_flow.deploy(
        name="my-deployment",
        work_pool_name="k8s-specific-nodes"
    )
Important Notes:
- Make sure your node groups are properly labeled in your Kubernetes cluster
- The node selector labels must match exactly what's configured in your Kubernetes cluster
- You can verify the node labels in your cluster using: kubectl get nodes --show-labels
- Consider node affinity rules when you need more complex node selection logic
- Resource requests and limits in the job template can also influence node selection if you have nodes with different resource capabilities
For more detailed information, you can refer to the Prefect Kubernetes documentation. Would you like me to provide more specific examples for your particular use case or explain any part of this in more detail?
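If the target nodes don't carry the label yet, kubectl can add it (the node name and label are placeholders; for managed node groups or Karpenter, prefer setting the label in the node group or provisioner config so new nodes get it automatically):
Copy code
bash
# inspect existing labels first
kubectl get nodes --show-labels
# add a label to one node (repeat per node, or label via the node group / provisioner config)
kubectl label node <node-name> node-group=your-node-group-label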
r
@Marvin does prefect helm have any less functionalities than prefect
m
thought for 46.2 seconds
Based on the research and issue analysis, I can provide a comprehensive comparison between Prefect Helm and regular Prefect installations. The short answer is: No, Prefect Helm does not have less functionality than regular Prefect - it's just packaged differently for Kubernetes environments. Here's a detailed breakdown:
Functionality Parity:
- The Helm chart provides all core Prefect functionalities
- It's specifically optimized for Kubernetes environments
- All Prefect features (flows, tasks, work pools, etc.) are fully supported
Advantages of Helm Installation:
1. Kubernetes Integration:
- Pre-configured Kubernetes resources
- Built-in worker configurations
- Automated service discovery
- Integrated secrets management
2. Deployment Management:
- Version-controlled infrastructure
- Easy upgrades and rollbacks
- Consistent deployment across clusters
- Built-in configuration management
3. Operational Benefits:
- Simplified scaling
- Integrated monitoring setup
- Standardized logging
- Better resource management
Potential Considerations (not limitations):
1. Kubernetes-Specific:
- Only suitable for Kubernetes environments
- Requires Kubernetes knowledge
- May need additional RBAC configuration (as seen in some issues)
2. Configuration Complexity:
- More initial setup required for Kubernetes
- Need to understand Helm values and templates
- May require additional Kubernetes resource configuration
3. Resource Requirements:
- Higher resource overhead due to Kubernetes
- Needs proper cluster sizing
- Multiple components running as containers
Important Notes:
1. The choice between Helm and regular installation should be based on your infrastructure needs, not functionality differences.
2. If you're running in Kubernetes, the Helm chart is generally the recommended approach.
3. Recent issues in the GitHub repository are mostly about specific configurations or permissions, not functional limitations.
Would you like me to elaborate on any specific aspect of the Helm deployment or provide more details about particular features you're interested in?
r
@Marvin can I install regular prefect on kubernetes?
m
thought for 40.4 seconds
thought for 38.2 seconds
Yes, you can run regular Prefect on Kubernetes! There are two main approaches:
1. Direct Installation (Regular Prefect):
- Install Prefect in your Kubernetes pods using pip
- Create your own Kubernetes manifests (Deployments, Services, etc.)
- Configure workers manually
- More control but requires more manual setup
2. Helm Chart Installation (Recommended):
- Uses pre-configured Kubernetes resources
- Easier to manage and upgrade
- Better defaults for Kubernetes environments
- Includes necessary RBAC configurations
Here's how you can use regular Prefect on Kubernetes:
1. Create a Custom Docker Image:
Copy code
dockerfile
FROM python:3.9
RUN pip install prefect
# Add any other dependencies you need
COPY your_flows /flows
2. Create Kubernetes Manifests:
Copy code
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  template:
    spec:
      containers:
      - name: worker
        image: your-prefect-image:tag
        command: ["prefect", "worker", "start", "--pool", "your-pool"]
3. Deploy Required RBAC:
Copy code
yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prefect-worker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-worker
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch", "delete"]
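One gap in the snippet above: the Role still needs to be bound to the service account. A minimal RoleBinding sketch (names assumed from the manifests above):
Copy code
yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prefect-worker
subjects:
  - kind: ServiceAccount
    name: prefect-worker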
However, I recommend using the Helm chart because:
1. It handles all the RBAC setup automatically
2. Includes best practices for Kubernetes deployments
3. Makes upgrades and maintenance easier
4. Provides better default configurations
5. Includes monitoring and logging setup
If you still want to use regular Prefect, you'll need to:
1. Handle your own Kubernetes configurations
2. Set up proper RBAC permissions
3. Configure networking and service discovery
4. Manage secrets and configurations manually
Would you like me to provide more specific details about either approach? Or would you like to know more about a particular aspect of running Prefect on Kubernetes?
Based on the research, I can provide a clear answer about installing regular Prefect on Kubernetes. Here's what you need to know:
Short Answer: Yes, you can run regular Prefect on Kubernetes, but it's not the recommended approach.
Here's Why:
1. Preferred Method:
- The recommended way to run Prefect on Kubernetes is using the Helm chart
- Helm handles many complex configurations automatically
- Provides better maintainability and upgrade path
2. Manual Kubernetes Deployment (without Helm): If you still want to deploy without Helm, you'll need to:
Copy code
yaml
# Example Kubernetes deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  template:
    spec:
      containers:
      - name: prefect-worker
        image: prefecthq/prefect:3-latest
        command: ["prefect", "worker", "start", "--pool", "your-work-pool"]
        env:
        - name: PREFECT_API_KEY
          valueFrom:
            secretKeyRef:
              name: prefect-api-key
              key: key
        - name: PREFECT_API_URL
          value: "<https://api.prefect.cloud/api/accounts/[YOUR-ACCOUNT-ID]/workspaces/[YOUR-WORKSPACE-ID]>"
Challenges with Manual K8s Deployment:
1. Configuration Complexity:
- You need to manually create all Kubernetes resources
- Must handle RBAC permissions yourself
- Need to manage secrets and configurations
- Must set up proper service accounts
2. Missing Automation:
- No automatic handling of work pool configuration
- Manual setup of monitoring and logging
- More complex upgrade process
- Need to handle container lifecycle manually
3. Maintenance Overhead:
- Updates must be managed manually
- More difficult to troubleshoot issues
- No standardized deployment structure
- Higher risk of misconfiguration
Recommendation: While it's technically possible to run Prefect on Kubernetes without Helm, I strongly recommend using the Helm chart because:
1. It's the officially supported method
2. Provides consistent deployment structure
3. Handles complex configurations automatically
4. Makes upgrades and maintenance easier
5. Includes best practices by default
If you have a specific reason for wanting to avoid Helm, I'd be happy to understand your use case better and provide more targeted guidance. Are there particular concerns about using Helm that you'd like to discuss?
r
@Marvin how to define system configuration for flow run in k8s
m
thought for 701.9 seconds
Short answer: in Prefect 3.x you define Kubernetes "system config" in two places:
- Pool-level (recommended for cluster/pod defaults): customize the Kubernetes work pool's base job template (full Job/Pod spec).
- Deployment/run-level (per-flow overrides): pass job_variables (image, env, namespace, service account, etc.) in your deployment, or at run-time.
Links you'll likely want:
- Kubernetes worker/pool how-to: https://docs-3.prefect.io/v3/how-to-guides/deployment_infra/kubernetes
- Managing work pools: https://docs-3.prefect.io/v3/how-to-guides/deployment_infra/manage-work-pools
What goes where
- Put stable, cluster/pod-centric settings in the work pool template: resources, nodeSelector, tolerations, affinity, volumes/volumeMounts, imagePullSecrets, securityContext, serviceAccountName, default namespace, etc.
- Use deployment job_variables for things that change per-flow or per-release: container image, env vars, namespace override, service account override, image pull policy, finished job TTL, timeouts, etc.
- You can still override some variables at run-time with prefect deployment run --job-variable.
1) Create or edit a Kubernetes work pool base job template
Pull the default template, edit, and apply:
Copy code
bash
prefect work-pool get-default-base-job-template --type kubernetes --file base.json
Edit base.json. Example additions (trimmed for readability) to spec.template.spec:
Copy code
json
{
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
            "serviceAccountName": "{{ service_account_name }}",
            "imagePullSecrets": [{ "name": "my-regcred" }],
            "nodeSelector": { "kubernetes.io/os": "linux" },
            "tolerations": [
              { "key": "workload", "operator": "Equal", "value": "data", "effect": "NoSchedule" }
            ],
            "volumes": [
              { "name": "shared", "persistentVolumeClaim": { "claimName": "shared-pvc" } }
            ],
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "args": "{{ command }}",
                "env": [
                  { "name": "PREFECT_LOGGING_LEVEL", "value": "INFO" },
                  {
                    "name": "DB_PASSWORD",
                    "valueFrom": { "secretKeyRef": { "name": "app-secrets", "key": "db_password" } }
                  }
                ],
                "resources": {
                  "requests": { "cpu": "500m", "memory": "1Gi" },
                  "limits": { "cpu": "1", "memory": "2Gi" }
                },
                "volumeMounts": [
                  { "name": "shared", "mountPath": "/mnt/shared" }
                ]
              }
            ],
            "restartPolicy": "Never"
          }
        }
      }
    }
  }
}
Create or update the work pool to use this template:
Copy code
bash
# Create (new):
prefect work-pool create my-k8s-pool --type kubernetes --base-job-template base.json
# Update (existing):
prefect work-pool update my-k8s-pool --base-job-template base.json
Tip: You can inspect the default variables supported by the Kubernetes worker with:
Copy code
bash
prefect work-pool get-default-base-job-template --type kubernetes
Common variables include: image, image_pull_policy, namespace, service_account_name, env (a dict), finished_job_ttl, job_watch_timeout_seconds, pod_watch_timeout_seconds, stream_output, etc.
2) Set per-deployment overrides (Python API)
Use job_variables to override the template's placeholders, plus image/build/push controls on deploy():
Copy code
python
from prefect import flow, deploy
from prefect.deployments.runner import RunnerDeployment

@flow
def my_flow():
    return "hello k8s"

deployment = RunnerDeployment(
    name="k8s-deploy",
    entrypoint="flows/my_flow.py:my_flow",
    work_pool_name="my-k8s-pool",
    parameters={"who": "world"},
    job_variables={
        "image": "ghcr.io/acme/prefect:3.0.0",
        "namespace": "data",
        "service_account_name": "prefect-runner",
        # Simple env vars (for K8s Secret refs, define them in the pool template)
        "env": {"EXTRA_FLAG": "1", "PREFECT_LOGGING_LEVEL": "DEBUG"}
    },
)

# Build/push your image yourself if you prefer; set build/push accordingly
deploy(deployment, build=False, push=False)
3) Set per-deployment overrides (prefect.yaml)
If you prefer the CLI and a prefect.yaml-driven project, include job_variables there:
Copy code
yaml
deployments:
  - name: k8s-deploy
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        image: ghcr.io/acme/prefect:3.0.0
        namespace: data
        service_account_name: prefect-runner
        env:
          EXTRA_FLAG: "1"
          PREFECT_LOGGING_LEVEL: DEBUG
Then run:
Copy code
bash
prefect deploy
4) Run-time overrides
You can override job variables at execution time (values are parsed as JSON):
Copy code
bash
prefect deployment run "my_flow/k8s-deploy" \
  -jv image='"ghcr.io/acme/prefect:3.0.1"' \
  -jv env='{"EXTRA_FLAG":"2"}' \
  --watch
Notes and gotchas
- Secrets: For Kubernetes-native secret injection, use env.valueFrom.secretKeyRef in the work pool's base job template as shown above. Prefect Variables/Blocks are for app-level secrets retrieval in code, not for injecting into the Pod spec.
- Complex pod fields: resources, tolerations, affinity, volumes, volumeMounts, securityContext, imagePullSecrets, etc. should be set in the base job template. If you want them overridable, add your own templated variables to the template and pass matching keys in job_variables.
- Service account and namespace: default them in the base job template; override per-deployment with job_variables as needed.
- Image: set image in job_variables per deployment; your worker will use that image for the flow pod. Use deploy(build=..., push=...) if you want Prefect to help with image workflows; otherwise set both to False and manage images yourself.
If you tell me exactly which bits you need to control (resources, SA, volumes, node scheduling, or all of the above), I'll sketch the exact base job template and the corresponding job_variables for your case. It will be thrilling. For both of us. In different ways.
r
in creating deployments on k8 server, can i give resource requirements to flow deployment @Marvin
m
thought for 138.0 seconds
Short answer: yes. In Prefect 3.x, Kubernetes CPU/memory requests and limits are set on the worker's base job template (the K8s Job/Pod spec that the Kubernetes worker uses), and you can optionally expose them as per-deployment overrides via `job_variables`. Two common ways:
1) Set resources at the work pool level (global default)
- Fetch the default Kubernetes base job template:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes -f k8s_base_job_template.json
- Edit `k8s_base_job_template.json` and add `resources` to the flow container. For example (showing just the relevant part under spec.template.spec.containers[0]):
Copy code
{
  "...": "...",
  "job_configuration": {
    "job": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "flow",
                "image": "{{ image }}",
                "args": [...],
                "resources": {
                  "limits": { "cpu": "1000m", "memory": "1Gi" },
                  "requests": { "cpu": "500m", "memory": "512Mi" }
                }
              }
            ],
            "restartPolicy": "Never"
          }
        }
      }
    }
  }
}
- Apply it when creating or updating the work pool:
Copy code
# create (or update with --overwrite) a pool using this template
prefect work-pool create my-k8s-pool --type kubernetes --base-job-template k8s_base_job_template.json
# or update an existing pool
prefect work-pool update my-k8s-pool --base-job-template k8s_base_job_template.json
2) Allow per-deployment overrides via job_variables
- Expose a variable (e.g., `resources`) in the base job template and template it into the container spec. Simplified example:
Copy code
{
  "variables": {
    "image": {"type": "string"},
    "resources": {"type": "object", "default": {}}
  },
  "job_configuration": {
    "job": {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "flow",
                "image": "{{ image }}",
                "resources": {{ resources | tojson }}
              }
            ]
          }
        }
      }
    }
  }
}
- Then set it at deploy time: Python:
Copy code
from prefect import flow

@flow
def my_flow():
    ...

my_flow.from_source("<https://github.com/your/repo>").deploy(
    name="k8s-deploy",
    work_pool_name="my-k8s-pool",
    job_variables={
        "resources": {
            "limits": {"cpu": "2000m", "memory": "2Gi"},
            "requests": {"cpu": "1000m", "memory": "1Gi"}
        }
    }
)
prefect.yaml (for `prefect deploy`):
Copy code
deployments:
  - name: k8s-deploy
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        resources:
          limits:
            cpu: "2000m"
            memory: "2Gi"
          requests:
            cpu: "1000m"
            memory: "1Gi"
Notes
- The resources must be placed on the container that runs your flow (usually the first container, often named `flow` in the default template).
- Out of the box, the default template may not expose `resources` as a variable. You can still hardcode them in the template or add a variable as shown above to allow per-deployment overrides.
- Prefect 3.x removed `Deployment.build_from_flow()` and `prefect deployment build`. Use `flow.from_source(...).deploy(...)` or `prefect deploy`.
Quick sanity check: are you using a Prefect 3 Kubernetes work pool/worker? And do you want these limits set globally for the pool or vary by deployment?
r
is prefect[kubernetes] installation same as installing prefect helm chart? @Marvin
m
thought for 53.7 seconds
Short answer: no. They’re about as “the same” as a wrench and a garage. -
prefect[kubernetes]
is a Python extra. It installs Prefect plus the Kubernetes client libs so your code and workers can talk to a cluster. It doesn’t create any Kubernetes resources by itself. - The Prefect Helm chart deploys things into your cluster: in 3.x that typically means a Kubernetes Worker (to launch your flow runs as Jobs/Pods) and, if you’re self-hosting, Prefect Server (API/UI/DB). If you use Prefect Cloud, you generally install just the worker via Helm and point it at your Cloud workspace. Common workflows: - Local/dev machine:
pip install "prefect[kubernetes]"
so you can create a K8s work pool, manage base job templates, and develop flows. - Cluster: install the Helm chart to run a long-lived Kubernetes worker (and optionally Server if you’re self-hosting). Example pointers: - Prefect Helm chart: prefect-helm - Prefect docs (Kubernetes): https://docs.prefect.io/latest/ If you tell me whether you’re on Prefect Cloud or self-hosting, I can give you a minimal Helm values snippet for “worker only” vs “full server + worker.”
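Side by side, the two look roughly like this (the chart name and release name are assumptions; check the prefect-helm README for current values):
Copy code
bash
# Python extra: installs the Prefect client plus Kubernetes libraries locally
pip install "prefect[kubernetes]"

# Helm chart: deploys a long-lived Kubernetes worker into the cluster
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
helm install my-worker prefect/prefect-worker --namespace prefect --create-namespace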
r
@Marvin how do i define namespace to prefect job in k8s
m
thought for 84.2 seconds
Yes—you’ve got a few options to control which namespace Prefect’s Kubernetes worker submits Jobs into. Most common options (Prefect 3.x): 1) Do nothing (default) - If you run the Kubernetes worker in namespace N, it will submit flow-run Jobs to N by default. Convenient if all runs live alongside the worker. 2) Set it in the work pool’s base job template (global default) - Grab the default template, add a namespace, and update the pool:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes -f k8s_base.json
# edit k8s_base.json and add metadata.namespace (and optionally a variable)
Example (snippet):
Copy code
{
  "variables": {
    "image": {
      "type": "string"
    },
    "namespace": {
      "type": "string",
      "default": "team-a"
    }
  },
  "job_configuration": {
    "job": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "flow",
                "image": "{{ image }}"
              }
            ],
            "restartPolicy": "Never"
          }
        }
      }
    }
  }
}
Apply it:
Copy code
prefect work-pool update my-k8s-pool --base-job-template k8s_base.json
3) Per-deployment override via job_variables
- If your base job template exposes a `namespace` variable (as above), you can set it per deployment:
Python:
Copy code
from prefect import flow

@flow
def my_flow(): ...

my_flow.from_source("<https://github.com/you/repo>").deploy(
    name="k8s-deploy",
    work_pool_name="my-k8s-pool",
    job_variables={"namespace": "team-b"}
)
prefect.yaml (for `prefect deploy`):
Copy code
deployments:
  - name: k8s-deploy
    entrypoint: flows/my_flow.py:my_flow
    work_pool:
      name: my-k8s-pool
      job_variables:
        namespace: team-b
Helm-installed worker
- If you deployed the worker via Helm, set the namespace the same way: put `metadata.namespace` (and an optional `namespace` variable) in the worker's baseJobTemplate values so all Jobs land there by default. Example values.yaml fragment:
Copy code
worker:
  enabled: true
  workPool:
    name: my-k8s-pool
    type: kubernetes
  baseJobTemplate:
    variables:
      namespace:
        type: string
        default: team-a
    job_configuration:
      job:
        apiVersion: batch/v1
        kind: Job
        metadata:
          namespace: "{{ namespace }}"
        spec:
          template:
            spec:
              serviceAccountName: prefect-worker
              containers:
                - name: flow
                  image: "{{ image }}"
              restartPolicy: Never
Helm chart: prefect-helm
Notes
- Ensure the worker's service account has permissions to create Jobs/Pods in the target namespace.
- If you don't include `metadata.namespace`, the worker typically submits to its own namespace.
Do you want all runs in a single namespace or different ones per deployment/team? I can tailor the template accordingly.
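For the permissions note above, a quick check with kubectl auth can-i works well (the service account and namespaces are placeholders):
Copy code
bash
# can the worker's service account create Jobs in the target namespace?
kubectl auth can-i create jobs \
  --as=system:serviceaccount:<worker-namespace>:prefect-worker \
  -n team-a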
r
@Marvin
Copy code
yaml
- name: "eks-direct-g4dn-xlarge"
  entrypoint: "sample.py:func2"
  description: "Direct g4dn.xlarge node selection"
  work_pool:
    name: "kube-test"
    job_variables:
      namespace: "prefect3"
      job_spec:
        template:
          spec:
            nodeSelector:
              node.kubernetes.io/instance-type: g4dn.xlarge
            containers:
              - name: prefect-job
                env:
                  - name: K8S_NODE_INSTANCE_TYPE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.labels['node.kubernetes.io/instance-type']
                resources:
                  requests:
                    cpu: "2"
                    memory: "8Gi"
                  limits:
                    cpu: "3"
                    memory: "14Gi"
i created a deployment with this yaml but it is not running on this instance type, why?
m
🔧 Using:
explore_module_offerings
📊 Tools used: •
check_cli_command
(6x) •
explore_module_offerings
(4x)
thought for 51.3 seconds
Short answer: your `job_variables` aren't touching the actual pod spec. Why it didn't work:
- The Kubernetes worker's default base job template uses `job_manifest` (and specific Jinja variables like `image`, `env`, `namespace`, etc.). There is no `job_spec` variable, so `job_variables.job_spec` is ignored.
- Slack auto-linking mangled your label key: it must be exactly `node.kubernetes.io/instance-type` (no URLs, no pipes). Also, keys with dots should be quoted in YAML.
How to fix it
Option A — set it at the work pool level (hardcode in the base job template)
1) Export the default template:
Copy code
prefect work-pool get-default-base-job-template --type kubernetes -f base.json
2) Edit `base.json` to include your nodeSelector and resources under `job_configuration.job_manifest.spec.template.spec` and the container:
Copy code
{
  "job_configuration": {
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-",
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "template": {
          "spec": {
            "nodeSelector": {
              "<http://node.kubernetes.io/instance-type|node.kubernetes.io/instance-type>": "g4dn.xlarge"
            },
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "resources": {
                  "requests": { "cpu": "2", "memory": "8Gi" },
                  "limits":   { "cpu": "3", "memory": "14Gi" }
                }
              }
            ],
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}"
          }
        }
      }
    }
  },
  "variables": { ... keep existing ... }
}
3) Apply it:
Copy code
prefect work-pool update kube-test --base-job-template base.json
Option B — expose variables so you can set them per deployment Edit the base job template to add variables and template them in:
Copy code
{
  "variables": {
    "... keep existing ...": "...",
    "node_selector": { "type": "object", "default": {} },
    "container_resources": { "type": "object", "default": {} }
  },
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
            "nodeSelector": {{ node_selector | tojson }},
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "resources": {{ container_resources | tojson }}
              }
            ],
            "restartPolicy": "Never"
          }
        }
      }
    }
  }
}
Then your deployment YAML can actually drive it:
Copy code
work_pool:
  name: "kube-test"
  job_variables:
    namespace: "prefect3"
    node_selector:
      "<http://node.kubernetes.io/instance-type|node.kubernetes.io/instance-type>": "g4dn.xlarge"
    container_resources:
      requests:
        cpu: "2"
        memory: "8Gi"
      limits:
        cpu: "3"
        memory: "14Gi"
Sanity checks: - Inspect an actual Job the worker created to confirm the spec was applied:
Copy code
kubectl get job -n prefect3 -l prefect.io/work-queue-name=<your-queue> -o yaml
- If those nodes are tainted, add matching tolerations in the pod spec. - If you truly need GPU on g4dn, request it explicitly:
Copy code
resources:
  limits:
    nvidia.com/gpu: "1"
and ensure the NVIDIA device plugin is installed. - Make sure there’s capacity in that node group; the scheduler won’t conjure a g4dn out of thin air. Sadly.
r
give me exact steps to do it @Marvin
m
thought for 102.6 seconds
Got it. Here are exact, minimal steps to make your deployment run on g4dn.xlarge nodes. Option A — simplest (set it at the work pool level) 1) Export the default Kubernetes base job template
Copy code
prefect work-pool get-default-base-job-template --type kubernetes -f base.json
2) Edit base.json and add the nodeSelector and container resources under job_configuration.job_manifest.spec.template.spec and the container. Be sure the label key is quoted exactly.
Copy code
{
  "job_configuration": {
    "job_manifest": {
      "apiVersion": "batch/v1",
      "kind": "Job",
      "metadata": {
        "generateName": "{{ name }}-",
        "labels": "{{ labels }}",
        "namespace": "{{ namespace }}"
      },
      "spec": {
        "backoffLimit": 0,
        "template": {
          "spec": {
            "nodeSelector": {
              "<http://node.kubernetes.io/instance-type|node.kubernetes.io/instance-type>": "g4dn.xlarge"
            },
            "containers": [
              {
                "name": "prefect-job",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "image": "{{ image }}",
                "imagePullPolicy": "{{ image_pull_policy }}",
                "resources": {
                  "requests": { "cpu": "2", "memory": "8Gi" },
                  "limits":   { "cpu": "3", "memory": "14Gi" }
                }
              }
            ],
            "parallelism": 1,
            "completions": 1,
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}"
          }
        },
        "ttlSecondsAfterFinished": "{{ finished_job_ttl }}"
      }
    },
    "command": "{{ command }}",
    "env": "{{ env }}",
    "labels": "{{ labels }}",
    "name": "{{ name }}",
    "namespace": "{{ namespace }}",
    "cluster_config": "{{ cluster_config }}",
    "job_watch_timeout_seconds": "{{ job_watch_timeout_seconds }}",
    "pod_watch_timeout_seconds": "{{ pod_watch_timeout_seconds }}",
    "stream_output": "{{ stream_output }}"
  },
  "variables": { ... leave the existing variables as-is ... }
}
3) Apply the updated template to your pool
Copy code
prefect work-pool update kube-test --base-job-template base.json
4) Ensure your deployment sets the namespace (the default template already supports it) In your deployment (prefect.yaml or deploy call), set:
Copy code
work_pool:
  name: "kube-test"
  job_variables:
    namespace: "prefect3"
5) Run a test flow and verify - Trigger a run:
Copy code
prefect deployment run <FLOW_NAME>/<DEPLOYMENT_NAME> --watch
- Inspect the created job/pod:
Copy code
kubectl -n prefect3 get pod -l job-name -o name
kubectl -n prefect3 get pod <pod-name> -o jsonpath='{.spec.nodeName}{"\n"}{.spec.nodeSelector}{"\n"}'
kubectl -n prefect3 get pod <pod-name> -o jsonpath="{.spec.nodeSelector['<http://node.kubernetes.io/instance-type|node.kubernetes.io/instance-type>']}{'\n'}"
You should see the nodeSelector applied and the pod scheduled on a g4dn.xlarge node. If the node group is tainted, add tolerations in the same template block. Option B — flexible per-deployment (expose variables) 1) Export the default template (as above) and modify it to add variables and template them into the manifest:
Copy code
{
  "variables": {
    "... keep existing ...": "...",
    "node_selector": { "type": "object", "default": {} },
    "container_resources": { "type": "object", "default": {} }
  },
  "job_configuration": {
    "... keep existing fields ...": "...",
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
            "nodeSelector": {{ node_selector | tojson }},
            "containers": [
              {
                "name": "prefect-job",
                "image": "{{ image }}",
                "args": "{{ command }}",
                "env": "{{ env }}",
                "resources": {{ container_resources | tojson }}
              }
            ],
            "restartPolicy": "Never",
            "serviceAccountName": "{{ service_account_name }}"
          }
        }
      }
    }
  }
}
2) Update the work pool with this template
Copy code
prefect work-pool update kube-test --base-job-template base.json
3) Set values in your deployment
Copy code
work_pool:
  name: "kube-test"
  job_variables:
    namespace: "prefect3"
    node_selector:
      "<http://node.kubernetes.io/instance-type|node.kubernetes.io/instance-type>": "g4dn.xlarge"
    container_resources:
      requests:
        cpu: "2"
        memory: "8Gi"
      limits:
        cpu: "3"
        memory: "14Gi"
4) Deploy and test
Copy code
prefect deploy
prefect deployment run <FLOW_NAME>/<DEPLOYMENT_NAME> --watch
Then verify with kubectl as shown above.
Notes that save you from hair-pulling:
- The key must be exactly "node.kubernetes.io/instance-type" (quoted). Slack auto-links break it.
- Your earlier `job_variables.job_spec` is ignored; the Kubernetes worker expects `job_manifest` in the base job template. That's why nothing happened.
- If you need GPU on g4dn, also add:
Copy code
resources:
  limits:
    nvidia.com/gpu: "1"
and make sure the NVIDIA device plugin is installed. - If nodes are tainted, add matching tolerations in the pod spec. If you paste your current pool’s base job template, I’ll mark exactly where to insert the lines.
r
@Marvin how do i start a worker in this pool
m
thought for 41.4 seconds
Two ways, depending on where you want it to run.
Option 1 — run a worker locally/on a VM
- Start a worker that polls your pool:
Copy code
prefect worker start --pool kube-test --name kube-test-local
- It will use the pool’s base job template to submit Jobs to your cluster. Make sure this machine has cluster access (kubeconfig or in-cluster creds) and Prefect API auth set (Cloud or Server). Option 2 — run the worker inside the Kubernetes cluster (recommended) - Use the Helm chart to deploy a long‑lived Kubernetes worker Deployment that polls your pool. - Minimal values.yaml example:
Copy code
worker:
  enabled: true
  name: kube-test-worker
  workPool:
    name: kube-test
    type: kubernetes
  # Optionally bake in your base job template or leave it to the pool config
  baseJobTemplate: {}  # or paste your customized template here
  # Point to Prefect Cloud workspace or self-hosted server
  prefectApiUrl: "https://api.prefect.cloud/api/accounts/<acct>/workspaces/<ws>"  # or your server URL
  prefectApiKeySecret:
    create: true
    name: prefect-api-key
    key: key
    value: "<YOUR_API_KEY>"
- Install:
Copy code
helm repo add prefect https://prefecthq.github.io/prefect-helm
helm repo update
helm upgrade --install kube-test-worker prefect/prefect-worker -f values.yaml -n prefect3 --create-namespace
Notes
- The only mandatory flag for the CLI method is `--pool`. Add `--work-queue` if you want to restrict to certain queues.
- If your pool is paused, resume it:
Copy code
prefect work-pool resume kube-test
- Confirm the worker is connected and polling in the UI, then trigger a run:
Copy code
prefect deployment run <FLOW>/<DEPLOYMENT> --watch
Tell me where you’re running this (local vs cluster) and whether you’re on Cloud or self-hosted, and I’ll give you the exact command/values with the right API URL and secrets.
r
@Marvin do i need to allocate resource to workers or work-pools in yaml as well?
m
thought for 71.3 seconds
Short answer: no. Assuming Prefect 3.x:
- Resources like CPU/memory live in the work pool's base job template (set in the UI or CLI) and can be overridden per-deployment via your deployment YAML under `job_variables`.
- Workers don't have a YAML config. They just pick up jobs from a work pool. The only "allocation" at the worker level is how many flow runs it will process concurrently, which you set when starting the worker (not in deployment YAML).
- You don't need a separate YAML for the work pool either.
Typical pattern:
- Work pool: define the base job template for your infrastructure (Docker, Kubernetes, ECS, Process): image, env, cpu/memory, etc.
- Deployment YAML: optionally override parts of that template per deployment using `job_variables`.
Example sketch (keys depend on your infra type; check your pool's "Base job template" to see the exact variable names to use):
Copy code
deployments:
- name: my-flow
  work_pool:
    name: my-k8s-pool
  job_variables:
    image: myrepo/myflow:latest
    cpu: "1"
    memory: "1Gi"
    env:
      PREFECT_LOGGING_LEVEL: INFO
If you tell me which work pool type you’re using (Kubernetes, Docker, ECS, Process), I’ll show the exact fields to set. Docs for reference: - Work pools and base job templates - Deployments and job_variables
r
@Marvin if i update a work-pool do the workers running in it stop?
m
thought for 56.9 seconds
Short answer: no—they don't politely fall over at the first sign of change. In Prefect 3.x:
- Updating a work pool (base job template, env, concurrency, etc.) does not stop workers. Workers keep polling and will apply the new settings to jobs they pick up after the change. In-flight runs are unaffected.
- Pausing a pool: workers keep running but won't pull new work until you resume.
- Deleting a pool: workers will error out and stop fetching.
- Renaming a pool: usually fine, but to be safe, restart workers after a rename so they rebind cleanly.
- Worker-specific settings (e.g., `--limit`/concurrency, log level) only change when you restart the worker with new flags.
If you tell me exactly what you're changing, I'll confirm whether a restart is advisable. Refs:
- Work pools
- Workers
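For reference, the pool operations above map to CLI commands like these (the pool name is a placeholder):
Copy code
bash
# stop handing out new runs without stopping workers
prefect work-pool pause "my-k8s-pool"
# resume scheduling
prefect work-pool resume "my-k8s-pool"
# review the pool's current settings and base job template
prefect work-pool inspect "my-k8s-pool"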
r
@Marvin how do i configure how many flows can a worker run concurrently for a kubernetes worker
m
thought for 240.9 seconds
Use the worker’s concurrency limit. In Prefect 3.x, a Kubernetes worker’s “how many flows at once” is set when you start it with the
--limit
flag. It controls how many flow runs that worker will claim/launch concurrently (each as its own Kubernetes Job). Examples:
Copy code
# Local
prefect worker start -p my-k8s-pool --limit 4

# If you need to specify the type explicitly
prefect worker start -p my-k8s-pool --type kubernetes --limit 4
Running the worker in Kubernetes? Add the flag to the container args:
Copy code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-k8s-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-k8s-worker
  template:
    metadata:
      labels:
        app: prefect-k8s-worker
    spec:
      containers:
      - name: worker
        image: prefecthq/prefect:3-latest
        command: ["prefect", "worker", "start"]
        args: ["-p", "my-k8s-pool", "--limit", "4"]
Notes:
- Work pool and work queue concurrency limits also exist and cap throughput across all workers:
  - Work pool: `prefect work-pool set-concurrency-limit "my-k8s-pool" 20`
  - Work queue: `prefect work-queue set-concurrency-limit "queue-a" 10`
- Changing the worker limit requires restarting that worker.
- This doesn't set CPU/memory; that's in the work pool's base job template (overridable via deployment `job_variables`). Thrilling, I know.
Docs:
- Workers
- Work pools and concurrency
r
@Marvin i am using prefect on k8s, i have different dependancies for different projects, is there some way by which i can store dependancies for each project separately on the cluster so that i dont have to install new dependancies each time
m
thought for 143.3 seconds
Yes—use per-project container images. That way each flow run pod already has the right libs and you avoid "pip install at runtime" purgatory. Recommended patterns for Prefect 3.x on Kubernetes:
1) One Kubernetes work pool, per-deployment images
- Keep a single k8s work pool.
- In each deployment's YAML, set `job_variables.image` to a project-specific image with dependencies baked in.
Copy code
deployments:
- name: project-a
  work_pool:
    name: my-k8s-pool
  job_variables:
    image: ghcr.io/acme/project-a:2025-08-12

- name: project-b
  work_pool:
    name: my-k8s-pool
  job_variables:
    image: ghcr.io/acme/project-b:2025-08-12
Kubernetes nodes will cache the image; with `imagePullPolicy: IfNotPresent`, subsequent runs won't re-pull unless the tag changes.
2) Separate work pools per project (optional)
- Create a k8s work pool per project with its own base job template (image, env, service account, tolerations, etc.).
- Useful if projects need different cluster policies, not required for dependencies alone.
If you truly don't want to build images
- You can mount a project-specific virtualenv from a PersistentVolume via `job_variables.pod_overrides` and point your command/entrypoint at it. It works, but it's more fragile and slower than using images. Containers were invented to avoid exactly this.
Helpful commands
- Inspect the K8s base job template to see variable names you can override (e.g., `image`, `imagePullPolicy`, `env`, `pod_overrides`):
Copy code
prefect work-pool get-default-base-job-template --type kubernetes
- Build images (use your CI or Docker directly; Prefect also offers a helper):
Copy code
prefect dev build-image
Docs
- Work pools and base job templates: Work pools
- Deployments and job_variables: Deployments
If you share how you build/run your flows today, I can sketch the exact `prefect.yaml` and any `pod_overrides` you'd need.
r
@Marvin i want to use method 1, so these images will be just docker images of conda environemtn right?
m
thought for 190.0 seconds
Short answer: yes. If by "method 1" you mean "prebuild a Docker image that already contains my environment," then those images are just Docker images that include your conda/mamba environment (plus `prefect` and your runtime deps). Nothing mystical—just containers with your stack baked in.
How it runs:
- The worker pulls your image and starts a container (on Kubernetes, the flow-run Job's pod).
- Prefect provides the command to run your flow, so you don't need a special ENTRYPOINT/CMD (unless you really want one).
Typical setup
Dockerfile (micromamba – fast and friendly):
Copy code
FROM mambaorg/micromamba:1.5.8

ARG MAMBA_DOCKERFILE_ACTIVATE=1
SHELL ["/bin/bash", "-lc"]

# Add env spec
COPY environment.yml /tmp/environment.yml

# Create env and install Prefect + deps
RUN micromamba create -y -n flow -f /tmp/environment.yml && \
    micromamba run -n flow pip install --no-cache-dir prefect && \
    micromamba clean --all --yes

# Use the env by default
ENV MAMBA_DEFAULT_ENV=flow
ENV PATH=/opt/conda/envs/flow/bin:$PATH

WORKDIR /opt/prefect
# No special CMD needed; Prefect will override with the flow run command
environment.yml:
Copy code
name: flow
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - your-package
      - another-dep
Deploy (Prefect 3.x) pointing at your image:
Copy code
from prefect import flow

# your flow(s) defined in repo...

flow.from_source(
    source="<https://github.com/your-org/your-repo>",
    entrypoint="path/to/flow.py:my_flow",
).deploy(
    name="prod",
    work_pool_name="docker-pool",
    image="<http://ghcr.io/your-org/your-image:latest|ghcr.io/your-org/your-image:latest>",
)
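Before deploying, build and push the image so the worker can pull it; plain Docker works (the registry path and tag are placeholders matching the deploy() call above):
Copy code
bash
# build the project image with the conda env baked in
docker build -t ghcr.io/your-org/your-image:latest .
# push it to a registry your cluster can pull from
docker push ghcr.io/your-org/your-image:latest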
Notes:
- Install `prefect` inside the image.
- Either bake your code into the image or (recommended) use `.from_source(...)` so the worker clones your repo at run time.
- If you need GPUs/system libs, include them in the image too.
References and examples:
- Prefect 3.x deployment config (prefect.yaml)
- Dockerizing Python apps (blog)
- Example: running flows on Docker
- Docker image build step example
If by "method 1" you meant something else, say the word and I'll recalibrate my existential dread accordingly.
r
@Marvin is this depricated in prefect 3.x from prefect.client.schemas.objects import Deployment
m
thought for 160.7 seconds
Short answer: no — that import isn't deprecated in Prefect 3.x. `Deployment` still lives at `prefect.client.schemas.objects`. That said, you rarely need to touch the schema directly in 3.x. Prefer:
- Deploying flows: `Flow.from_source(...).deploy(...)` or the CLI `prefect deploy` (Docs: Deployments in Prefect 3)
- Reading deployments: the client
Copy code
import asyncio

from prefect.client.orchestration import get_client

async def get_dep():
    async with get_client() as client:
        dep = await client.read_deployment_by_name("flow-name/deployment-name")
        return dep

deployment = asyncio.run(get_dep())
And just to save you a future sigh: `Deployment.build_from_flow()` and `prefect deployment build` were removed in 3.x; use `flow.from_source(...).deploy(...)` and `prefect deploy` instead. Release notes: 3.x release notes
r
@Marvin how do i unset these PREFECT_API_AUTH_STRING='********' (from profile) PREFECT_API_DATABASE_CONNECTION_URL='********' (from profile)
m
thought for 79.1 seconds
Since they’re coming “from profile”, use the config CLI to restore defaults:
Copy code
prefect config unset PREFECT_API_AUTH_STRING PREFECT_API_DATABASE_CONNECTION_URL -y
Helpful checks: - See what’s set and where:
Copy code
prefect profile inspect
- If you have multiple profiles, switch first:
Copy code
prefect profile ls
prefect profile use <name>
If you also exported these as environment variables, unset them in your shell (they override profiles): - bash/zsh:
Copy code
unset PREFECT_API_AUTH_STRING PREFECT_API_DATABASE_CONNECTION_URL
- PowerShell:
Copy code
Remove-Item Env:PREFECT_API_AUTH_STRING, Env:PREFECT_API_DATABASE_CONNECTION_URL
Docs: Settings & profiles
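To confirm the values are really gone (and see where any remaining settings come from), something like:
Copy code
bash
prefect config view --show-sources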