Daniel
11/19/2025, 1:08 PM

Tom Han
11/19/2025, 5:31 PM
Failed to pull from remote:
Cloning into '/tmp/tmp0h4b39iiprefect'...
fatal: unable to access 'https://github.com/org/repo.git/': Failed to connect to github.com port 443 after 131017 ms: Couldn't connect to server
errors on most of my deployments

Matt Liu
11/21/2025, 12:04 AM
prod tag but child flows not.
anyone with the same issue?
during daytime it's fine. super weird.

Mattijs
11/21/2025, 8:48 AM

Srijith Poduval
11/21/2025, 9:20 PM

Lucas
11/23/2025, 6:20 PM
prefecthq/prefect:3-latest docker image using the latest UX? It looks different from the Prefect Cloud UX, but I could be wrong.

Loup
11/23/2025, 7:37 PM

EPM Admin
11/23/2025, 7:51 PM

Jake Wilkins
11/24/2025, 12:17 PM
CreateJob requests in a row? Results in a 409 error for us, appears like so in logs:

Idriss Bellil
11/24/2025, 3:25 PM
Failed to generate job configuration: Client error '429 Too Many Requests' for url 'http://orion-internal:4200/api/accounts...
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
a few more were also stuck in the pending state.
is this sort of rate limiting documented somewhere for the Pro Plan? is it something we can control?

Nathan Low
11/24/2025, 8:43 PM

Loup
11/25/2025, 10:47 AM
deployments:
  - name: my_deployment
    work_pool:
      name: my_worker_pool
      job_variables:
        pod:
          spec:
            containers:
              - name: flow
                resources:
                  requests:
                    cpu: "2"
                    memory: "8Gi"
                  limits:
                    cpu: "4"
                    memory: "8Gi"

Chu
11/25/2025, 4:25 PM

Juri Ganitkevitch
11/25/2025, 5:05 PM

Gabriel Rufino
11/25/2025, 9:16 PM
keep_jobs set to false so even if it's duplicating some job it's probably also deleting.
We're not tracking the flow state or anything like that from python, we just "fire and forget" these flows.
This is my code to launch them:
for model in collection_models:
    scenarios = model_to_scenarios.get(model, [])
    if not scenarios:
        logger.warning("No scenarios found for model %s, skipping", model)
        continue
    # Submit task for this model's subflow with model-specific name
    future = launch_model_subflow_task.with_options(
        name=f"launch-{model}",
        retries=2,
        retry_delay_seconds=60,
    ).submit(
        model=model,
        scenarios=scenarios,
        collection_limit_per_brand_model=request.collection_limit_per_brand_model,
    )
    deployment_futures.append(future)
and the definition:
flow_run = await run_deployment(  # type: ignore[misc]
    name=f"process-batch-subflow/{PREFECT_DEPLOYMENT_SUFFIX}-subflow",
    parameters={
        "model": model,
        "collection_scenarios": scenarios,
        "collection_limit_per_brand_model": collection_limit_per_brand_model,
    },
    job_variables={
        "cpu": CPU_CORES,
        "memory": "32G",
    },
    timeout=0,  # Fire-and-forget: don't wait for subflow completion (execution timeout is set on the subflow itself)
    tags=[model],
)

Emery Conrad
11/26/2025, 9:16 AM
pydocket-feedstock to unblock conda prefect-feedstock? This will help us update our env, which would be nice!
https://github.com/conda-forge/pydocket-feedstock/pulls

Jai P
11/26/2025, 3:09 PM
ProcessPoolTaskExecutor (or really, any of the parallel task runners). Since tasks can be nested inside of tasks, what's the execution model when I task.submit() a task and it calls another_task() inside of it? Where does another_task run? In the process that is handling task? I ask because it used to be that we needed to wrap tasks in flow runs to control which task runner to use.
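For reference, a minimal sketch of the nesting pattern in question, assuming Prefect 3; ThreadPoolTaskRunner stands in here for any parallel task runner and every name is illustrative:

from prefect import flow, task
from prefect.task_runners import ThreadPoolTaskRunner


@task
def another_task(x: int) -> int:
    # Called directly (not .submit()) from inside `outer`; the question above is
    # whether this runs inline wherever `outer` is executing, or is handed back
    # to the flow's task runner.
    return x * 2


@task
def outer(x: int) -> int:
    return another_task(x) + 1


@flow(task_runner=ThreadPoolTaskRunner(max_workers=4))
def my_flow() -> list[int]:
    # `outer` is submitted to the flow's task runner; `another_task` is a plain call inside it.
    futures = [outer.submit(i) for i in range(4)]
    return [f.result() for f in futures]


if __name__ == "__main__":
    my_flow()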
11/26/2025, 8:50 PMprefect.exceptions.MissingResult: The result was not persisted and is no longer available
We observed this issue three times, first one on Saturday, then Monday, and today. This is the first time we encountered this issue after using Prefect Cloud for over 2 years now.
We're using ECS Push work pools.
Does anyone experience the same issue?Christian Dalsvaag
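Not necessarily the cause here, but for context, MissingResult is raised when something tries to read a result that was never persisted. A minimal sketch of making persistence explicit per task and flow, assuming Prefect 3 and purely illustrative names:

from prefect import flow, task


@task(persist_result=True)  # store the task's return value so it can be read back later
def load_value(key: str) -> str:
    return {"a": "1"}.get(key, "missing")


@flow(persist_result=True)  # persist the flow's own return value as well
def my_flow() -> str:
    return load_value("a")


if __name__ == "__main__":
    my_flow()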
Christian Dalsvaag
11/27/2025, 7:30 AM

Pierre L
11/27/2025, 1:46 PM
>>> from prefect.blocks.notifications import SlackWebhook
>>> slack_webhook_block = SlackWebhook.load("slack-prefect-prod-failures-v2")
>>> slack_webhook_block.notify("Hello from Prefect!")
I create the automation locally with prefect automation create --from-file automations.yaml. Creating the simplest automation in the UI doesn't solve the problem.
What could be the cause? I didn't change much since yesterday.
Simon
11/28/2025, 1:17 PM
This run didn't generate Logs
Task logs exist in the DB and do display correctly in the flow log view. The task logs used to display in the task log view until about 2 days ago. It may have something to do with a single new flow that was added which called get_run_logger, a pattern which did not exist previously; it has since been removed, but the problem remains globally for all our flows.
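For context, a minimal sketch of the get_run_logger pattern mentioned above, assuming Prefect 3 and illustrative names:

from prefect import flow, task
from prefect.logging import get_run_logger


@task
def do_work() -> None:
    logger = get_run_logger()  # task-run-scoped logger; these records appear as task logs
    logger.info("working inside the task")


@flow
def my_flow() -> None:
    logger = get_run_logger()  # flow-run-scoped logger
    logger.info("starting flow")
    do_work()


if __name__ == "__main__":
    my_flow()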
Sebastian S
11/28/2025, 4:21 PM

Xinglin Qiang
11/30/2025, 1:53 AM

Xinglin Qiang
11/30/2025, 3:16 AM

Xinglin Qiang
11/30/2025, 7:34 AM

abc
12/01/2025, 12:05 PM
task.submit() followed by task.result() to retrieve the output. However, even a simple Python function (e.g., retrieving a value from a dictionary) takes around 30 seconds to complete.
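For reference, a minimal sketch of the submit-then-result pattern being timed, assuming Prefect 3 and illustrative names; timing it this way helps separate the function's own runtime from orchestration overhead:

import time

from prefect import flow, task


@task
def lookup(d: dict, key: str):
    return d.get(key)


@flow
def my_flow() -> None:
    start = time.perf_counter()
    future = lookup.submit({"a": 1}, "a")  # schedule the task on the flow's task runner
    value = future.result()                # block until the task run finishes
    print(f"value={value}, elapsed={time.perf_counter() - start:.2f}s")


if __name__ == "__main__":
    my_flow()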
abc
12/01/2025, 12:08 PM

thiago
12/01/2025, 4:03 PM
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true does not work with formatting the log output. Adjusting PREFECT_FORMATTERS_* leads to missing OTel's injected properties.
traces and metrics set inside the flow code do not propagate to the OTel collector
is there a magic trick to have OTel working with Prefect?
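Not Prefect-specific, but for context, a minimal sketch of wiring a tracer provider to an OTLP collector from flow code, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages; the endpoint and service name are illustrative:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from prefect import flow


def configure_tracing() -> None:
    # Point the OTel SDK at the collector explicitly instead of relying on auto-instrumentation.
    provider = TracerProvider(resource=Resource.create({"service.name": "my-prefect-flows"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)


@flow
def my_flow() -> None:
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("do-work"):
        pass  # spans created here are exported to the collector configured above


if __name__ == "__main__":
    configure_tracing()
    my_flow()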
Ben Muller
12/04/2025, 4:11 AM

Revital Eres
12/04/2025, 2:08 PM