# ask-community
h
Hello Prefect team. I wonder how to properly raise a FAIL signal from within a task without it being retried. Retries are mostly meant to handle intermittent failures: when we set retries=3 for task_A, we want to retry task_A up to 3 times when it fails due to a network connection issue or some other intermittent failure. Meanwhile, however, we have logic in task_A that explicitly and intentionally raises a FAIL_A signal for downstream tasks. That FAIL_A is not meant to trigger a retry of task_A. When task_A encounters this intentional FAIL_A, it should skip the retries and fail directly. How could I do that?
c
Hi @Hui Zheng! I think this can be achieved via a cleverly crafted state handler on the task; I’ll try to cook up an example and get back to you shortly
h
thank you.
c
OK yes, this retry handler works by intercepting the state transition from Failed -> Retrying and, if the Failed state has a FAIL signal attached to it, preventing the task from entering a Retrying state:
Copy code
from prefect.engine.signals import FAIL
from prefect.engine.state import Failed

def retry_handler(t, old, new):
    # Block the retry when the failure came from an explicit FAIL signal
    if new.is_retrying() and isinstance(old.result, FAIL):
        return Failed("No retries allowed")
Here’s a simple example; if you remove the state handler you’ll see the task retry 3 times, but with the handler it fails immediately:
Copy code
from datetime import timedelta

from prefect import Flow, task
from prefect.engine.signals import FAIL

@task(max_retries=3, retry_delay=timedelta(0), state_handlers=[retry_handler])
def fail():
    raise FAIL("failed")

f = Flow("test", tasks=[fail])
f.run()
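If you capture the return value of f.run(), you can also confirm the behavior by inspecting the task's final state; in Prefect 1.x the flow state's .result maps each task to its final state (a small sketch reusing the fail task above):
Copy code
flow_state = f.run()
task_state = flow_state.result[fail]   # final state of the `fail` task above
print(task_state.is_failed())          # True -> the handler blocked the retries
print(task_state.message)              # "No retries allowed"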
h
Thank you, Chris. I will do that.
👍 1
c
@Marvin archive “How to prevent my task from retrying when I manually raise a FAIL signal?”
h
I wonder how often other people might also have the same use case. Usually when I raise a FAIL signal, it’s an intended failure and I don’t want the task to retry.
c
Yea it’s a good question and that does seem reasonable. We try not to assume any semantic meaning with errors or states, and instead try to provide enough hooks for users to layer on their own intent. Although in this case you may be correct that raising a FAIL signal almost universally implies bypassing retries; if someone wants to retry, they can instead raise a retrying signal…
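For reference, that looks like raising prefect.engine.signals.RETRY from the task body; the wait_for_file task and path argument here are just illustrative (a sketch, assuming Prefect 1.x):
Copy code
import os

from prefect import task
from prefect.engine.signals import RETRY

@task
def wait_for_file(path):
    # Explicitly ask the runner for another attempt instead of failing
    if not os.path.exists(path):
        raise RETRY(f"{path} not there yet, retrying")
    return path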
h
Thank you. I have another related question. In the same example, if I set task_A with timeout=20, then in the case of a task timeout it will raise a TimeoutError exception (not a signals.FAIL). In this situation, will task_A attempt retries?
c
Yes it will - timeouts are considered a type of failure
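Concretely, something like the sketch below (reusing the retry_handler from above; slow_task is just an illustrative name) will still retry on a timeout, because the TimeoutError attached to the Failed state is not a FAIL signal:
Copy code
import time
from datetime import timedelta

from prefect import task

@task(timeout=20, max_retries=3, retry_delay=timedelta(0),
      state_handlers=[retry_handler])
def slow_task():
    # Exceeding the 20s timeout fails the task with a TimeoutError result,
    # which does not match isinstance(old.result, FAIL), so retries proceed.
    time.sleep(30)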
h
thank you
c
anytime
h
Hi Chris, I think I haven’t completely understood the relationship between state.Failed and signals.FAIL. I assume signals.FAIL is just a special type of exception. When I raise signals.FAIL in task_A, there is a specific failure message and failure result that I want to pass to downstream tasks. Could I pass the message and result of the signals.FAIL to the downstream tasks like this?
Copy code
from prefect.engine import signals, state

def skip_retries_on_signal_fail(task, old_state, new_state):
    # Skip retries when the failure came from an explicit signals.FAIL,
    # carrying its message and result forward into the terminal Failed state
    if new_state.is_retrying() and isinstance(old_state.result, signals.FAIL):
        return state.Failed(
            message=old_state.message,
            result=old_state.result)
c
Yes that works! Note that the old state result in this example is an exception, so you might want to do something like:
Copy code
raise FAIL(message="Hui's message", result="Hui's special result")
and within your state handler check for Hui’s special result as the .result attribute of the old state, instead of the exception type
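For example, a variant of your handler that keys off that result value rather than the exception type (a rough sketch; the sentinel string and handler name are just for illustration):
Copy code
from prefect.engine import state

def skip_retries_on_special_result(task, old_state, new_state):
    # Block retries when the failure carries the sentinel result set via
    # raise FAIL(message="Hui's message", result="Hui's special result")
    if new_state.is_retrying() and old_state.result == "Hui's special result":
        return state.Failed(message=old_state.message, result=old_state.result)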
Also FYI this thread got me thinking, and I cooked up another way to achieve ordered mapping via state handlers for you here: https://github.com/PrefectHQ/prefect/issues/3951
h
Thank you, Chris. I will check it out.