Matt Delacour
04/11/2022, 4:18 PM
Josh
04/11/2022, 4:22 PM
Exception raised while calling state handlers: KeyError('The secret SLACK_PREFECT_NOTIFICATIONS_WEBHOOK_URL was not found. Please ensure that it was set correctly in your tenant: <https://docs.prefect.io/orchestration/concepts/secrets.html>')
Anders Segerberg
04/11/2022, 4:27 PM
create_flow_run(..., idempotency_key="{date}-"+MY_VAR)
function as expected? It's hard to tell, because I don't know yet how Prefect compiles the string templating as part of tasks. Would I need to generate this string as a task result, and then pass the result to the kwarg?
Atsushi Saito
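A sketch of the concern above, assuming Prefect 1.x does not render "{date}" templates inside idempotency_key (the + concatenation happens once, at flow build time). Building the key explicitly, e.g. as a task result passed to the kwarg, sidesteps the templating question; MY_VAR and build_idempotency_key here are illustrative names, not from the thread:

```python
from datetime import date

MY_VAR = "region-a"  # illustrative stand-in for the variable in the question

def build_idempotency_key(suffix: str) -> str:
    # Compute the date at call time; wrapping this in a @task and passing
    # its result to create_flow_run(..., idempotency_key=...) makes the key
    # reflect the run date rather than the flow-build date.
    return f"{date.today().isoformat()}-{suffix}"

print(build_idempotency_key(MY_VAR))  # e.g. "2022-04-11-region-a"
```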
04/11/2022, 5:26 PM
late runs
icon on the cloud UI.
However, the cloud UI shows neither success (green status) nor failure (red status).
Is this an issue with docker login, or are there other possible causes?
Naimesh Chaudhari
04/11/2022, 5:28 PM
Sam Garvis
04/11/2022, 5:45 PM
Anders Segerberg
04/11/2022, 6:03 PM
13:53:26 INFO CloudFlowRunner
Flow run SUCCESS: all reference tasks succeeded
13:55:13 INFO
User marked flow run
as Failed
13:55:21 INFO
null restarted this flow run
13:55:22
INFO agent Submitted for execution: Container ID:
<id>
13:55:22 INFO
S3 Downloading flow from s3://<key>
13:55:23 INFO
S3 Flow successfully downloaded. ETag: <>, LastModified: <> VersionId: <>
13:55:23 INFO CloudFlowRunner
Beginning Flow run for <pipeline>
13:55:23 INFO CloudFlowRunner
Flow run SUCCESS: all reference tasks succeeded
I would expect the flow to re-run entirely. However, it doesn't look like it does (it jumps straight to Success). Is this due to Prefect's default input caching?
To be specific, this flow is a flow-of-flows; I have two subflows.
Naimesh Chaudhari
04/11/2022, 6:12 PM
David Haynes
04/11/2022, 6:20 PM
Karim Zaghw
04/11/2022, 6:27 PM
Naimesh Chaudhari
04/11/2022, 6:33 PM
Vaikath Job
04/11/2022, 7:19 PM
Tony Yun
04/11/2022, 7:48 PM
Anders Segerberg
04/11/2022, 8:57 PM
Vaikath Job
04/11/2022, 11:09 PM
Eddie Atkinson
04/12/2022, 2:35 AM
ECSRun
flow using the LocalDaskExecutor
with 30GB of RAM. For large jobs this flow OOMs and gets killed by the Prefect scheduler. My question is this: if I set up a Dask cluster to run these jobs, would it gracefully handle memory issues?
That is to say, if I had 30GB of RAM in the cluster and a job that required 50GB, would Dask OOM, or would it simply run slower? Do I need to modify my code to use Dask dataframes, or is there some smarts here I’m not quite across?
Atsushi Saito
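For context on the question above (an assumption-laden sketch, not an answer from the thread): a dask.distributed worker manages memory in tiers rather than simply OOMing — it spills least-recently-used data to disk, then pauses new work, and only terminates past a final threshold. So a 50GB job on 30GB of RAM can run (slowly) only if its intermediate data is spillable; a single task that materializes an oversized pandas object in memory will still kill its worker. The thresholds below are what I believe are the defaults; verify against your installed version:

```yaml
# dask.distributed worker memory thresholds, as fractions of the memory limit
# (believed defaults; configurable in distributed.yaml)
distributed:
  worker:
    memory:
      target: 0.60     # start spilling least-recently-used data to disk
      spill: 0.70      # spill more aggressively
      pause: 0.80      # pause accepting new tasks
      terminate: 0.95  # nanny kills and restarts the worker
```

Ordinary pandas objects inside a task still have to fit on one worker; only Dask collections (e.g. dask.dataframe) are partitioned so pieces can spill independently.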
04/12/2022, 5:19 AM
labels
be specified in both running environments (docker runner or local runner) and on the remote server side (i.e. cloud UI or server UI)?
Alexander Belikov
04/12/2022, 8:59 AM
RAISS Zineb
04/12/2022, 12:30 PM
Constantino Schillebeeckx
04/12/2022, 1:55 PM
Domenico Di Gangi
04/12/2022, 2:31 PM
Vasco Leitão
04/12/2022, 2:48 PM
defaults_from_attrs
helper decorator. What am I missing? 😅 (Code in the thread)
Naimesh Chaudhari
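For readers following along: defaults_from_attrs makes run() fall back to the value stored on the Task instance when an argument is omitted or None. A simplified pure-Python reimplementation of the idea — not Prefect's actual code — to show the mechanism:

```python
import functools

def defaults_from_attrs(*attr_names):
    """Simplified sketch of the helper: when a keyword argument is missing
    or None, fall back to the instance attribute of the same name."""
    def decorator(run_method):
        @functools.wraps(run_method)
        def wrapper(self, **kwargs):  # keyword-only in this simplified version
            for name in attr_names:
                if kwargs.get(name) is None:
                    kwargs[name] = getattr(self, name)
            return run_method(self, **kwargs)
        return wrapper
    return decorator

class MyTask:
    def __init__(self, path=None):
        self.path = path  # set once, at task construction time

    @defaults_from_attrs("path")
    def run(self, path=None):
        return path

print(MyTask(path="default.csv").run())                     # default.csv
print(MyTask(path="default.csv").run(path="override.csv"))  # override.csv
```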
04/12/2022, 3:56 PM
Anders Segerberg
04/12/2022, 4:20 PM
create_flow_run
, with the idempotency_key
set to this file path.
Running the flow the first time will write this key. Running it a second time won't -- as expected, because of the idempotency behavior.
What I don't understand is that the second flow run has every task state set to 'success' -- wouldn't I expect them to be 'cached' or something else, indicating that the flow isn't being re-run?
Prasanth Kothuri
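One hedged reading of the behavior above (an assumption, not a confirmed answer from the thread): with a matching idempotency_key, create_flow_run returns the already-existing flow run instead of creating a new one, so the "second run" being inspected is the first run, whose tasks genuinely finished in Success; nothing re-executed, so no Cached state was ever produced. A toy model of that semantics:

```python
# Toy model of idempotency-key semantics (an assumption about Prefect's
# behavior, not its actual implementation): the same key maps back to the
# same flow-run id, so no new run or task states are created.
import uuid

_runs_by_key = {}

def create_flow_run(idempotency_key=None):
    if idempotency_key is not None and idempotency_key in _runs_by_key:
        return _runs_by_key[idempotency_key]  # existing run, states unchanged
    run_id = str(uuid.uuid4())
    if idempotency_key is not None:
        _runs_by_key[idempotency_key] = run_id
    return run_id

first = create_flow_run(idempotency_key="s3://bucket/path.csv")
second = create_flow_run(idempotency_key="s3://bucket/path.csv")
assert first == second  # same run returned; its tasks still show Success
```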
04/12/2022, 4:31 PM
File "/usr/local/lib/python3.8/dist-packages/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/usr/local/lib/python3.8/dist-packages/cloudpickle/cloudpickle_fast.py", line 602, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle '_thread.lock' object
to make it simple it has just one task as below
from prefect import task, Flow
import re

import boto3

# (assumed from context; the original message elides this setup)
s3 = boto3.resource("s3")

@task(log_stdout=True)
def get_file_names():
    # list all objects in the bucket and keep only the .ctl control files
    files = s3.Bucket(s3_bucket).objects.all()
    file_names = []
    for my_bucket_object in files:
        file_name = my_bucket_object.key
        regex = re.search(r"\.ctl", str(file_name))
        if regex is not None:
            file_names.append(file_name)
    return file_names
and flow
# flow to chain the tasks
with Flow("my_flow", storage=storage, schedule=schedule) as f:
    ctl_files = get_file_names()
any ideas why prefect is unable to serialize / pickle?
Josh
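A minimal, Prefect-free reproduction of the error above. The class here is a stand-in: real boto3 clients and resources hold `_thread.lock` objects internally, and with pickle-based storage the flow — including module-level objects it references — gets cloudpickled at registration:

```python
import pickle
import threading

class FakeS3Resource:
    """Stand-in for boto3.resource("s3"); real clients carry locks."""
    def __init__(self):
        self._lock = threading.Lock()  # this is what breaks pickling

s3 = FakeS3Resource()  # module scope, as in the snippet above

try:
    pickle.dumps(s3)
except TypeError as exc:
    print(exc)  # cannot pickle '_thread.lock' object
```

The usual workaround is to create the client inside the task body (e.g. make `s3 = boto3.resource("s3")` the first line of get_file_names) so it never ends up in the serialized flow.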
04/12/2022, 6:00 PM
Flow run is cancelling…
message for 2 weeks. Is there anything I can do to let Prefect Cloud know that the flow is actually dead and never succeeded?
Atsushi Saito
04/12/2022, 6:18 PM
kiran
04/12/2022, 7:33 PM
try/except/else/finally
less often than before (in my main code) because I figure Prefect will catch (and log) things. I still find uses for it inside actual tasks/functions. Is this what other people have done, or am I thinking about it the wrong way? Thanks!
Patrick Tan
04/12/2022, 7:54 PM
David Yang
04/12/2022, 8:12 PM