Ofir

04/18/2023, 9:56 PM
What’s the best practice for a data-retention policy on Prefect deployment runs? For reference, here is how it is implemented for Apache Airflow, as yet another garbage-collector DAG: https://stackoverflow.com/questions/66580751/configure-logging-retention-policy-for-apache-airflow

I’m sure that Prefect has either a built-in mechanism for this, or encourages a common idiom for rotating / archiving / deleting artifacts from old runs.

Context: we have persistent storage on Azure Blob Storage (the S3 equivalent) where we store artifacts (e.g. output files and images) from a Machine Learning (Kedro) run. The space piles up quickly across runs, and if we run out of storage our Prefect deployments become inoperable, so I want the pipelines to stay operational.

What kind of policies are recommended for evicting data from old runs? I know some of you will say “_It depends_”, so for the sake of this example let’s imagine I have a dedicated 256 GB of storage. Should I set a threshold (e.g. 70% full) that acts as the trigger for evicting (removing) artifacts from old runs?

Also, when should this run: as the first (prerequisite) subflow of my bigger flow, or as yet another Prefect deployment on a recurring schedule? Thanks!
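
For concreteness, the “recurring deployment” option I have in mind would look roughly like this. A minimal sketch, assuming Prefect 2.x and the `azure-storage-blob` SDK; the connection string, container name, and 30-day retention window are placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerClient  # pip install azure-storage-blob
from prefect import flow, get_run_logger

# Placeholders -- point these at your own storage account and container.
CONN_STR = "<azure-storage-connection-string>"
CONTAINER = "ml-run-artifacts"

@flow
def evict_old_artifacts(max_age_days: int = 30):
    """Delete artifact blobs last modified before the retention cutoff."""
    logger = get_run_logger()
    container = ContainerClient.from_connection_string(CONN_STR, CONTAINER)
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    deleted = 0
    for blob in container.list_blobs():  # the SDK paginates transparently
        if blob.last_modified < cutoff:
            container.delete_blob(blob.name)
            deleted += 1
    logger.info("Evicted %d blobs older than %s", deleted, cutoff.isoformat())

if __name__ == "__main__":
    evict_old_artifacts()
```

This flow could then be deployed on a cron schedule, decoupled from the ML flow itself.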

Nate

04/19/2023, 12:50 AM
hi @Ofir - what do you think about configuring lifecycle rules on your azure blob storage?
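
e.g. something like this management policy (just a sketch; the `artifacts/` prefix and day thresholds are examples, adjust them to your container layout) tiers older blobs to cool storage and deletes them after 90 days, with no Prefect code involved:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "expire-old-run-artifacts",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["artifacts/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```

you can apply it in the portal or with `az storage account management-policy create`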

Ofir

04/19/2023, 1:42 PM
That’s very interesting, I hadn’t thought about shifting the responsibility to the object storage itself
I will look into it, thanks!