Jacques06/23/2020, 3:51 PM
parameter. I'm catching the error from boto and then logging the error immediately before doing a `raise` to trigger the retry mechanism. When the boto call fails (it does this once or twice a day, unpredictably) the error is caught, logs show the task is set to `Retrying`, and downstream tasks are set to `Pending`. All looks good until the flow is scheduled to run again, then I get a Python stack overflow as some object is being pickled (I think - seeing looped calls to bits like `File "/var/lang/lib/python3.7/pickle.py", line 662 in save_reduce` in the stack trace) directly after the `Beginning Flow run` message. I'm using … if that matters.
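The catch, log, re-raise pattern described above can be sketched without Prefect; the boto exception and function names here are placeholders, not the ones from the actual flow:

```python
import logging

class FakeBotoError(Exception):
    """Stand-in for the real boto/botocore exception (not named in the thread)."""

def flaky_boto_call():
    # Simulates the once-or-twice-a-day unpredictable failure.
    raise FakeBotoError("rate exceeded")

def task_body():
    try:
        flaky_boto_call()
    except FakeBotoError:
        # Log the error first, then re-raise so the scheduler's
        # retry mechanism sees the failure and schedules a retry.
        logging.exception("boto call failed, re-raising to trigger a retry")
        raise

reraised = False
try:
    task_body()
except FakeBotoError:
    reraised = True
```

The key point is that the original exception object propagates out of the task, which is what later gets pickled.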
# Subclasses of ClientError's are dynamically generated and
# cannot be pickled unless they are attributes of a
# module.  So at the very least return a ClientError back.
Jim Crist-Harif06/23/2020, 3:54 PM
Jacques06/23/2020, 3:54 PM
Jacques06/23/2020, 4:03 PM
Jim Crist-Harif06/23/2020, 4:06 PM
class MyBotoError(Exception):
    pass

def mytask(...):
    try:
        some_boto_thing()
    except SomeBotoError as exc:
        raise MyBotoError(str(exc))
Jacques06/23/2020, 6:13 PM
Jim Crist-Harif06/23/2020, 6:15 PM
`MyBotoError` will then be serializable.
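A quick check of that wrapping pattern, with a dynamically built exception standing in for the boto error (all names here are illustrative):

```python
import pickle

class MyBotoError(Exception):
    """Defined at module level, so pickle can find it by name."""

def do_wrap():
    try:
        # Stand-in for a dynamically generated boto error subclass.
        raise type("NoSuchBucketError", (Exception,), {})("bucket is gone")
    except Exception as exc:
        # Wrap in a module-level class, keeping only the message string.
        raise MyBotoError(str(exc))

try:
    do_wrap()
except MyBotoError as exc:
    restored = pickle.loads(pickle.dumps(exc))
```

Because exceptions pickle down to their class and `args`, the wrapped error round-trips cleanly even though the original class could not.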
Jacques06/24/2020, 1:09 PM
), but it still fails in the same way. Is it possible Prefect is collecting all the exceptions and not just the most recent one? This is fairly problematic, as it causes a Python stack overflow, not just a failed flow run on retry. Is there anything else I could try?