# marvin-in-the-wild
@Marvin I keep getting an internal server error when I try to instantiate a k8s work pool using the REST API as follows with my Prefect 2.0 server:

```shell
curl -X POST http://10.30.8.177:4200/api/work_pools/ \
  -H "Content-Type: application/json" \
  -d '{
    "name": "test-k8s-work-pool",
    "type": "kubernetes",
    "base_job_template": {
      "job_configuration": {
        "pod_template": {
          "spec": {
            "containers": [
              {
                "name": "base",
                "image": "docker.io/prefecthq/prefect:2-latest",
                "env": [
                  { "name": "KUBECONFIG", "value": "kube_config.yaml" },
                  { "name": "AWS_ACCESS_KEY_ID", "value": "minio" },
                  { "name": "AWS_SECRET_ACCESS_KEY", "value": "minio123" },
                  { "name": "MLFLOW_S3_ENDPOINT_URL", "value": "http://10.30.8.146:9000" }
                  ....
```

etc. The response is:

```
{"exception_message":"Internal Server Error"}
```
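For context, the request above can be sketched in Python, which makes it easier to validate the JSON body locally before sending it. The endpoint URL and field values come from the message above; the `variables` key and the exact `base_job_template` layout are assumptions, not a verified Prefect 2.x schema (the original message elides part of the body with `....`, which is kept out of this sketch):

```python
import json

# Sketch of the work-pool creation payload from the message above.
# NOTE: the base_job_template contents are illustrative assumptions,
# not a confirmed schema for Prefect 2.x kubernetes work pools.
payload = {
    "name": "test-k8s-work-pool",
    "type": "kubernetes",
    "base_job_template": {
        "job_configuration": {
            "pod_template": {
                "spec": {
                    "containers": [
                        {
                            "name": "base",
                            "image": "docker.io/prefecthq/prefect:2-latest",
                            "env": [
                                {"name": "KUBECONFIG", "value": "kube_config.yaml"},
                                {"name": "AWS_ACCESS_KEY_ID", "value": "minio"},
                            ],
                        }
                    ]
                }
            }
        },
        # Prefect work pools generally pair job_configuration with a
        # "variables" JSON schema; omitting it is one *possible* cause
        # of the 500 (assumption, not confirmed against the server code).
        "variables": {"type": "object", "properties": {}},
    },
}

# Serializing first catches malformed JSON before the request is sent.
body = json.dumps(payload)

# To actually send it (server must be reachable):
# import urllib.request
# req = urllib.request.Request(
#     "http://10.30.8.177:4200/api/work_pools/",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

Building the body in a dict and dumping it also avoids the shell-quoting pitfalls of a long inline `-d '...'` argument.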
Grug understand you hitting wall with 500 error when making k8s work pool on Prefect 2.0 with API call. Let's break down how to fix:

1. Check Prefect 2.0 docs for proper k8s work pool API setup.
2. Check GitHub issues in case anyone else hit same wall.

Grug need few minutes to search. Grug find few useful things:

1. Prefect 2.0 work pool docs mention using UI, CLI, and API, but not give detailed API example. Make sure all required fields are set.
2. GitHub issue #14918 show similar error when using `publish_as_work_pool()`, indicating bug in job template.
3. Issue may be related to how `job_configuration` is set.

Grug recommend:

- Double check `job_configuration` follow Prefect 2.0 standard.
- Report issue to Prefect GitHub if problem persist.

Links:

- Work pools
- GitHub issue #14918