Jake Wilkins
11/24/2025, 12:17 PM
CreateJob requests in a row? Results in a 409 error for us, appears like so in logs:
Idriss Bellil
11/24/2025, 3:25 PM
Failed to generate job configuration: Client error '429 Too Many Requests' for url '<http://orion-internal:4200/api/accounts>...
For more information check: <https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429>
A few more were also stuck in the Pending state.
Is this sort of rate limiting documented somewhere for the Pro Plan? Is it something we can control?
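In the meantime, a client-side guard can smooth over the 429s. A minimal sketch, assuming the Prefect client surfaces the 429 as a PrefectHTTPStatusError (the wrapper name and backoff schedule here are illustrative, not a documented pattern):

import asyncio

from prefect.deployments import run_deployment
from prefect.exceptions import PrefectHTTPStatusError

async def run_deployment_with_backoff(name: str, max_attempts: int = 5, **kwargs):
    """Retry run_deployment with exponential backoff when the API returns 429."""
    for attempt in range(max_attempts):
        try:
            return await run_deployment(name=name, **kwargs)
        except PrefectHTTPStatusError as exc:
            if exc.response.status_code != 429 or attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... before retrying the create call
            await asyncio.sleep(2 ** attempt)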
Nathan Low
11/24/2025, 8:43 PM
Loup
11/25/2025, 10:47 AM
deployments:
  - name: my_deployment
    work_pool:
      name: my_worker_pool
      job_variables:
        pod:
          spec:
            containers:
              - name: flow
                resources:
                  requests:
                    cpu: "2"
                    memory: "8Gi"
                  limits:
                    cpu: "4"
                    memory: "8Gi"
Chu
11/25/2025, 4:25 PM
Juri Ganitkevitch
11/25/2025, 5:05 PM
Gabriel Rufino
11/25/2025, 9:16 PM
keep_jobs is set to false, so even if it's duplicating some jobs, it's probably also deleting them.
We're not tracking the flow state or anything like that from Python; we just "fire and forget" these flows.
This is my code to launch them:
for model in collection_models:
    scenarios = model_to_scenarios.get(model, [])
    if not scenarios:
        logger.warning("No scenarios found for model %s, skipping", model)
        continue
    # Submit task for this model's subflow with model-specific name
    future = launch_model_subflow_task.with_options(
        name=f"launch-{model}",
        retries=2,
        retry_delay_seconds=60,
    ).submit(
        model=model,
        scenarios=scenarios,
        collection_limit_per_brand_model=request.collection_limit_per_brand_model,
    )
    deployment_futures.append(future)
and the definition:
flow_run = await run_deployment(  # type: ignore[misc]
    name=f"process-batch-subflow/{PREFECT_DEPLOYMENT_SUFFIX}-subflow",
    parameters={
        "model": model,
        "collection_scenarios": scenarios,
        "collection_limit_per_brand_model": collection_limit_per_brand_model,
    },
    job_variables={
        "cpu": CPU_CORES,
        "memory": "32G",
    },
    timeout=0,  # Fire-and-forget: don't wait for subflow completion (execution timeout is set on the subflow itself)
    tags=[model],
)
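One thing to watch with retries=2 on the submitting task plus fire-and-forget: if the task retries after the flow run was actually created, a duplicate run results. run_deployment accepts an idempotency_key, which should make a retried submission reuse the existing flow run instead of creating a second one. A sketch (the key scheme and batch_id are assumptions):

flow_run = await run_deployment(
    name=f"process-batch-subflow/{PREFECT_DEPLOYMENT_SUFFIX}-subflow",
    parameters={"model": model, "collection_scenarios": scenarios},
    # Reuse the same flow run if the submitting task retries;
    # batch_id is a hypothetical per-batch identifier.
    idempotency_key=f"{batch_id}-{model}",
    timeout=0,
)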
Emery Conrad
11/26/2025, 9:16 AM
pydocket-feedstock to unblock conda prefect-feedstock? This will help us update our env, which would be nice!
https://github.com/conda-forge/pydocket-feedstock/pulls
Jai P
11/26/2025, 3:09 PM
ProcessPoolTaskExecutor (or really, any of the parallel task runners). Since tasks can be nested inside of tasks, what's the execution model when I task.submit() a task and it calls another_task() inside of it? Where does another_task run, in the process that is handling task? I ask because it used to be that we needed to wrap tasks in flow runs to control which task runner to use.
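Roughly the shape being asked about, as a sketch assuming Prefect 3's ProcessPoolTaskRunner. A direct call like another_task(x) inside a task generally runs inline, in whatever worker process is executing the outer task, rather than being resubmitted to the runner:

from prefect import flow, task
from prefect.task_runners import ProcessPoolTaskRunner

@task
def another_task(x: int) -> int:
    return x + 1

@task
def outer_task(x: int) -> int:
    # Direct call: runs inline, in the same process as outer_task
    return another_task(x)

@flow(task_runner=ProcessPoolTaskRunner())
def my_flow() -> int:
    # .submit() hands outer_task to the process pool
    return outer_task.submit(1).result()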
Jakub Roman
11/26/2025, 8:50 PM
prefect.exceptions.MissingResult: The result was not persisted and is no longer available
We observed this issue three times: first on Saturday, then on Monday, and again today. This is the first time we've encountered it after using Prefect Cloud for over 2 years.
We're using ECS Push work pools.
Is anyone else experiencing the same issue?
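If it helps narrow things down, result persistence can be forced so that .result() can always rehydrate from storage; a sketch, assuming task-level persist_result and the PREFECT_RESULTS_PERSIST_BY_DEFAULT setting:

from prefect import flow, task

@task(persist_result=True)  # write the result to storage, not just memory
def compute() -> int:
    return 42

@flow
def my_flow() -> int:
    return compute.submit().result()

# Alternatively, persist everywhere via the setting:
# PREFECT_RESULTS_PERSIST_BY_DEFAULT=true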
Christian Dalsvaag
11/27/2025, 7:30 AM
Pierre L
11/27/2025, 1:46 PM
>>> from prefect.blocks.notifications import SlackWebhook
>>> slack_webhook_block = SlackWebhook.load("slack-prefect-prod-failures-v2")
>>> slack_webhook_block.notify("Hello from Prefect!")
I create the automation locally with prefect automation create --from-file automations.yaml. Creating the simplest automation in the UI doesn't solve the problem.
What could be the cause? I haven't changed much since yesterday.
Simon
11/28/2025, 1:17 PM
This run didn't generate Logs
Task logs exist in the DB and display correctly in the flow log view. They used to display in the task log view as well, until about 2 days ago. It may have something to do with a single new flow that was added which called get_run_logger, a pattern that did not exist previously. That flow has since been removed, but the problem remains globally for all our flows.
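For reference, the pattern mentioned is just the standard run logger; a minimal sketch of what the new flow presumably did (the flow name is a placeholder):

from prefect import flow, get_run_logger

@flow
def new_flow():
    logger = get_run_logger()
    # These records are attached to the current run and would normally
    # appear in both the flow and task log views
    logger.info("hello from the run logger")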
Sebastian S
11/28/2025, 4:21 PM
Xinglin Qiang
11/30/2025, 1:53 AM
Xinglin Qiang
11/30/2025, 3:16 AM
Xinglin Qiang
11/30/2025, 7:34 AM
abc
12/01/2025, 12:05 PM
task.submit() followed by task.result() to retrieve the output. However, even a simple Python function (e.g., retrieving a value from a dictionary) takes around 30 seconds to complete.
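Roughly the shape being described, as a self-contained reproduction; a sketch assuming the default task runner (the ~30s is the reported observation, not expected behavior):

import time

from prefect import flow, task

@task
def get_value(d: dict, key: str):
    return d[key]

@flow
def repro():
    start = time.monotonic()
    future = get_value.submit({"a": 1}, "a")
    value = future.result()  # reported to take ~30s instead of milliseconds
    print(value, time.monotonic() - start)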
abc
12/01/2025, 12:08 PM
thiago
12/01/2025, 4:03 PM
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true does not work for formatting the log output, and adjusting PREFECT_FORMATTERS_* leads to missing OTel-injected properties.
Traces and metrics set inside the flow code do not propagate to the OTel collector.
Is there a magic trick to get OTel working with Prefect?
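One avenue worth trying: wiring the OTel SDK explicitly inside the flow code instead of relying on auto-instrumentation. A sketch, assuming the standard opentelemetry-sdk and OTLP exporter packages (the collector endpoint is a placeholder):

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from prefect import flow

# Explicit SDK setup: spans are exported regardless of auto-instrumentation
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

@flow
def my_flow():
    with tracer.start_as_current_span("my-flow-work"):
        ...  # spans created here propagate to the collector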
Ben Muller
12/04/2025, 4:11 AM
Revital Eres
12/04/2025, 2:08 PM
Nick Ackerman
12/08/2025, 3:38 PM
Raymond Lin
12/08/2025, 7:53 PM
Alexandre lazerat
12/09/2025, 10:24 AM
João Pedro Boufleur
12/09/2025, 5:24 PM
Karthik R
12/10/2025, 6:58 PM
Marc D.
12/10/2025, 7:46 PM
Yaron Levi
12/11/2025, 8:44 AM
Pierre L
12/11/2025, 3:54 PM
prefect.exceptions.PrefectHTTPStatusError: Server error '500 Internal Server Error' for url '<http://prefect-server.prefectoss.svc.cluster.local:4200/api/flow_runs/dbdf008d-f9d3-42b1-be21-ef2a4b12b567>' (url can change)
I found these logs in the Prefect server pod:
11:08:48.406 | ERROR | prefect.server - Encountered exception in request:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/asyncpg/connection.py", line 2421, in connect
    return await connect_utils._connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/asyncpg/connect_utils.py", line 1049, in _connect
    conn = await _connect_addr(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/asyncpg/connect_utils.py", line 886, in _connect_addr
    return await __connect_addr(params, True, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/asyncpg/connect_utils.py", line 931, in __connect_addr
    tr, pr = await connector
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/asyncpg/connect_utils.py", line 818, in _create_ssl_connection
    new_tr = await loop.start_tls(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1268, in start_tls
    await waiter
asyncio.exceptions.CancelledError
In the Postgres logs at the same time, I have:
{"level":"info","ts":"2025-12-10T11:08:48.310120701Z","logger":"postgres","msg":"record","logging_pod":"cnpg-database-cluster-2","record":{"log_time":"2025-12-10 11:08:48.306 UTC","process_id":"386195","connection_from":"10.2.14.94:42384","session_id":"693954b9.5e493","session_line_num":"1","session_start_time":"2025-12-10 11:08:41 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"08P01","message":"SSL error: unexpected eof while reading","backend_type":"not initialized","query_id":"0"}} {"level":"info","ts":"2025-12-10T11:08:48.31023509Z","logger":"postgres","msg":"record","logging_pod":"cnpg-database-cluster-2","record":{"log_time":"2025-12-10 11:08:48.307 UTC","process_id":"386195","connection_from":"10.2.14.94:42384","session_id":"693954b9.5e493","session_line_num":"2","session_start_time":"2025-12-10 11:08:41 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"08006","message":"could not receive data from client: Connection reset by peer","backend_type":"not initialized","query_id":"0"}} {"level":"info","ts":"2025-12-10T11:08:48.310409409Z","logger":"postgres","msg":"record","logging_pod":"cnpg-database-cluster-2","record":{"log_time":"2025-12-10 11:08:48.309 UTC","process_id":"386196","connection_from":"10.2.14.94:42388","session_id":"693954b9.5e494","session_line_num":"1","session_start_time":"2025-12-10 11:08:41 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"08006","message":"could not accept SSL connection: Connection reset by peer","backend_type":"not initialized","query_id":"0"}}
The prefect-server pods have not restarted and they do not seem limited by resources. We have:
Limits: cpu: 300m, memory: 600Mi
Requests: cpu: 300m, memory: 600Mi
while the max CPU usage I see in Grafana peaks at 82m.
My questions:
Knowing that everything works most of the time, do I need to set PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_ENABLED=true?
If not, should I set these environment variables, and to which values?
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE
PREFECT_SQLALCHEMY_POOL_SIZE
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE
PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT