Russell Brooks
12/01/2022, 6:09 AM

Arnoldas Bankauskas
12/01/2022, 12:26 PM

Joshua Greenhalgh
12/01/2022, 5:25 PM

Joshua Grant
12/01/2022, 7:04 PM
get_run_context() from within a flow. MRE in 🧵
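
For reference, calling get_run_context() inside a flow usually looks something like this in Prefect 2 (a minimal sketch with a made-up flow name, not Joshua's MRE):

from prefect import flow
from prefect.context import get_run_context

@flow
def context_demo():
    # Inside a flow body, get_run_context() returns a FlowRunContext;
    # inside a task it would return a TaskRunContext instead.
    ctx = get_run_context()
    print(ctx.flow_run.id, ctx.flow_run.name)

if __name__ == "__main__":
    context_demo()
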
Michael K
12/01/2022, 7:52 PM
When running deployment build I get the following errors (in thread). Perhaps they are related, but at this point I'm stumped. Not sure why it runs but can't be deployed.
Any help would be appreciated, and I'm happy to provide more info as needed. Thanks!

Sam Cook
12/01/2022, 8:25 PM
I'm using map to pass items/results on to follow-on tasks. Consistently, if I have a large number of items (50+), the entire process will lock up during the follow-on tasks and fail to terminate cleanly. In the UI the root job is marked as Crashed, but the tasks are always stuck in a Running state. I'm running in a Kubernetes environment as well, so the stuck jobs have to be manually cleaned up whenever this occurs, as they never reach completion.
I think this might possibly be related to @Boggdan Barrientos's question from yesterday.
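
For context, the mapping pattern described above is roughly the following (a minimal sketch with placeholder task names, not Sam's actual flow):

from prefect import flow, task

@task
def fetch_item(i: int) -> int:
    return i * 2

@task
def process_item(value: int) -> int:
    return value + 1

@flow
def mapped_pipeline(n: int = 50):
    # Map one task over many inputs, then feed the mapped futures
    # straight into a follow-on mapped task.
    fetched = fetch_item.map(range(n))
    processed = process_item.map(fetched)
    return [future.result() for future in processed]
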
Ben Muller
12/01/2022, 9:45 PM
State message: Submission failed. IndexError: list index out of range
Unfortunately there are no logs in the agent with any failures either 🤷
This did lead me to see some other errors in the agent 🧵

Scott Walsh
12/01/2022, 11:09 PM
The prefect kubernetes manifest agent command seems to only take one queue. I'd like the agent to read from multiple queues; is there syntax I can change, or is this a bug?
Using prefect==2.6.7:
(scott/CD-562-prefect-poc) prefect kubernetes manifest agent -q test1 -q test2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-agent
  namespace: default
  labels:
    app: prefect-agent
spec:
  selector:
    matchLabels:
      app: prefect-agent
  replicas: 1
  template:
    metadata:
      labels:
        app: prefect-agent
    spec:
      containers:
      - name: agent
        image: prefecthq/prefect:2.6.7-python3.9
        command: ["prefect", "agent", "start", "-q", "test2"]
        imagePullPolicy: "IfNotPresent"
        env:
        - name: PREFECT_API_URL
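
If the generated manifest only keeps the last -q value, one workaround (an assumption, not a confirmed fix) is to hand-edit the command line in the manifest so it lists both queues, e.g. command: ["prefect", "agent", "start", "-q", "test1", "-q", "test2"], since prefect agent start itself accepts repeated -q flags.
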
Faheem Khan
12/02/2022, 1:06 AM

Ben Muller
12/02/2022, 3:35 AM
import time
from prefect import flow, get_run_logger
from prefect.deployments import run_deployment

@flow
def run():
    x = run_deployment(name="dro-gallops-fields/default", idempotency_key=str(int(time.time())), timeout=0)
    while x:
        time.sleep(5)
        get_run_logger().info(x.state)
        get_run_logger().info(x.state_name)
        get_run_logger().info(x.state_type)
        get_run_logger().info(x.state.message)
        get_run_logger().info(x.state.result)

if __name__ == "__main__":
    run()
This does everything I expect in Prefect Cloud, but the logs on the parent flow "run" show something like this (forever; it never exits the Scheduled state or updates in line with its actual state?):
While the subflow run has the flow marked as Complete.
Is there a different way to ping the state in Prefect 2?
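
One way to poll the up-to-date state (a sketch, under the assumption that the FlowRun returned by run_deployment(timeout=0) is a point-in-time snapshot, so the current state has to be re-read from the API):

import asyncio
from prefect import flow, get_run_logger
from prefect.client import get_client
from prefect.deployments import run_deployment

@flow
async def poll_run():
    flow_run = await run_deployment(name="dro-gallops-fields/default", timeout=0)
    logger = get_run_logger()
    async with get_client() as client:
        while True:
            # Re-read the flow run so we see its current state, not the
            # snapshot captured when run_deployment returned.
            latest = await client.read_flow_run(flow_run.id)
            logger.info(latest.state_name)
            if latest.state and latest.state.is_final():
                break
            await asyncio.sleep(5)
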
Andreas Nigg
12/02/2022, 7:29 AM
Crash detected! Execution was interrupted by an unexpected exception: PrefectHTTPStatusError: Server error '500 Internal Server Error' for url 'https://api.prefect.cloud/api/accounts/bd169b15-9cf0-41df-9e46-2233ca3fcfba/workspaces/f507fe51-4c9f-400d-8861-ccfaf33b13e4/task_runs/5d8fc4a8-5ce7-444d-8749-135dce75d9be/set_state'
Response: {'detail': {'exception_message': 'Internal Server Error'}}
For more information check: https://httpstatuses.com/500
EDIT: OK, I found a thread in the best-practices-coordination channel where it was confirmed that there was a small service interruption, so this question is obsolete.

Deepanshu Aggarwal
12/02/2022, 8:30 AM

Sylvain Hazard
12/02/2022, 9:49 AM

Slackbot
12/02/2022, 2:14 PM

Braun Reyes
12/02/2022, 3:45 PM

Xavier Babu
12/02/2022, 3:49 PM

Patrick Alves
12/02/2022, 4:19 PM
Using KubernetesJobs.
I've been experiencing an issue when I run workflows from the UI:
1. I select a Flow -> Run.
2. It schedules a flow run for the next few seconds.
3. But the flow run stays Pending forever.
4. It actually creates another flow run that does run (with no tags, see the images).
On the image, each boxed pair is actually the same run created twice. The original never runs, and a new one is created that does run.

jack
12/02/2022, 4:44 PM

Sean Conroy
12/02/2022, 5:12 PM
ERROR | asyncio - Task exception was never retrieved
future: <Task finished name='Task-36' coro=<<async_generator_athrow without __name__>()> exception=KeyError(139836083032272)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/prefect/utilities/asyncutils.py", line 263, in on_shutdown
    yield
GeneratorExit

During handling of the above exception, another exception occurred:

  File "/usr/local/lib/python3.8/dist-packages/prefect/utilities/asyncutils.py", line 267, in on_shutdown
    EVENT_LOOP_GC_REFS.pop(key)
KeyError: 139836083032272

Sander
12/02/2022, 7:03 PM

Malek
12/02/2022, 7:38 PM
I'm triggering subflow runs with deployments.run_deployment(). However, this makes it not possible to manually retry a failed subflow; is there a better way to do this? My use case is mostly the parent flow fetching inputs, and then each subflow run would handle one of the inputs (I'm looping over the task that triggers the subflow deployment runs).
Thanks!
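
For context, that pattern looks roughly like the following (a sketch with placeholder names, not Malek's actual code):

from prefect import flow, task
from prefect.deployments import run_deployment

@task
def trigger_subflow(item):
    # Each call kicks off one run of an existing deployment;
    # "my-subflow/default" is a placeholder deployment name.
    return run_deployment(name="my-subflow/default", parameters={"item": item})

@flow
def parent_flow():
    inputs = ["a", "b", "c"]  # stand-in for the fetched inputs
    for item in inputs:
        trigger_subflow(item)
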
Chris McClellan
12/03/2022, 2:17 AM

merlin
12/03/2022, 6:51 AM

Tim Galvin
12/03/2022, 8:18 AM
Has PREFECT_LOGGING_EXTRA_LOGGERS changed from 2.6.7 to 2.6.9 at all? I was previously using it successfully, in combination with DaskTaskExecutor and dask_jobqueue.SLURMCluster, to capture logs from another package. It seems though that with an update to 2.6.8 or 2.6.9 I have lost this ability.
I do see the logs being printed to my Slurm output files, and I do see they are formatted in the Prefect style, but I am not seeing these logs being submitted through to my self-hosted Orion server (as presented by the web UI).
Any ideas?
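
For reference, the setup being described is roughly this (a sketch with a hypothetical package name, assuming PREFECT_LOGGING_EXTRA_LOGGERS=my_package is set in the flow run's environment):

import logging

from prefect import flow

# Inside the third-party package ("my_package" is a placeholder), logs are
# emitted through the stdlib logging module as usual.
logger = logging.getLogger("my_package")

@flow
def logging_demo():
    # When "my_package" is listed in PREFECT_LOGGING_EXTRA_LOGGERS, Prefect
    # attaches its handlers to this logger, so the record should also be
    # forwarded to the Orion API, not just printed to stdout/Slurm output.
    logger.info("hello from an extra logger")
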
Andreas Nigg
12/03/2022, 1:33 PM

Tim Galvin
12/04/2022, 8:46 AM
Following up on my question above about PREFECT_LOGGING_EXTRA_LOGGERS no longer reporting logs through to Orion: I went reading through prefect/logging/configuration.py and see in #7569 that there was some change to the logic. Now there is a test made against config['incremental'] before the Orion log handler is attached to the logger of the extra module.
I am not sure exactly what incremental is in this sense, but when I disable this check to force the Orion handler to attach itself to my extra module, things work as expected and logs are streamed to Orion.
It seems that by default this config['incremental'] is set to True (at least I have not knowingly set it), and the test made against its value comes out negative when evaluating whether to attach the Orion logger.
So my question is: what is this incremental configurable, and how does one set it? Is the check made against it actually intended to be negated? Once I remove the check, my logs for extra modules listed in PREFECT_LOGGING_EXTRA_LOGGERS behave the same as pre v2.6.9 / #7569.
@Anna Geller @Zanie - I see both your names on the change -- please don't hate me for tagging you directly 🙂
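
For anyone following along, "incremental" here is the standard key from Python's logging.config.dictConfig schema, which Prefect's logging setup builds on. A sketch of the stdlib behaviour (not Prefect's internals):

import logging.config

logging.config.dictConfig({
    "version": 1,
    # When incremental is True, the config is merged into the existing
    # logging setup: only logger levels/propagate flags are applied and
    # handler definitions are ignored. When False (the stdlib default),
    # the configuration fully replaces the existing one.
    "incremental": False,
    "loggers": {
        "my_package": {"level": "INFO"},  # hypothetical extra module
    },
})
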
Marwan Sarieddine
12/04/2022, 2:26 PM

HAITAM BORQANE
12/04/2022, 4:09 PM

Yaron Levi
12/04/2022, 6:57 PM
prefect deployment apply jobs/selfServiceDaily.yaml
Are there any shortcuts to apply many YAML files at once?
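
One possible shortcut (an assumption about the shell environment, not a documented Prefect feature): loop over the files from the shell, e.g. for f in jobs/*.yaml; do prefect deployment apply "$f"; done.
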
João Coelho
12/04/2022, 8:45 PM