Hi Folks, I would first like to say that I'm very impressed with the project and with the extensive documentation and information on the channel. Good work!
I have a few questions that I hope will clarify a few things for me.
1. Where does Prefect Core keep the state of long-running tasks, and what happens if the instance running Prefect Core fails?
2. Suppose I have a long-running task: for example, I make a boto call to start an EMR cluster, submit a step to it (say, run an ETL job and write the output to S3), and then want to add another step based on the outcome of the first step once the cluster is ready. Does that mean I need to write a downstream task that keeps polling the API to monitor the step result (roughly as in the sketch below)? Or should I run Prefect Core inside the PySpark job itself?
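To make question 2 concrete, here is roughly the kind of polling I'm imagining. The boto3 EMR calls are the real API, but the cluster configuration, script path, and the standalone polling function are just placeholders for illustration, not working Prefect code:

```python
import time
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is a placeholder


def start_cluster_with_etl_step():
    """Start an EMR cluster with an initial ETL step that writes to S3."""
    response = emr.run_job_flow(
        Name="etl-cluster",
        ReleaseLabel="emr-5.29.0",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        Steps=[{
            "Name": "etl-to-s3",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                # placeholder script location
                "Args": ["spark-submit", "s3://my-bucket/etl_job.py"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return response["JobFlowId"]


def wait_for_step(cluster_id, step_id, poll_seconds=60):
    """This is the part I'm asking about: a downstream task that just
    keeps calling the API until the EMR step reaches a terminal state."""
    while True:
        status = emr.describe_step(ClusterId=cluster_id, StepId=step_id)
        state = status["Step"]["Status"]["State"]
        if state in ("COMPLETED", "FAILED", "CANCELLED"):
            return state
        time.sleep(poll_seconds)
```

The second step would then be submitted with `add_job_flow_steps` only if `wait_for_step` returns `COMPLETED`.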
3. Does Prefect have something similar to an Airflow sensor, such as GoogleCloudStoragePrefixSensor?
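For context on question 3, the behavior I'd like to replicate is essentially "block until an object with a given prefix appears in a bucket." A minimal sketch using the google-cloud-storage client (bucket and prefix names are placeholders):

```python
import time
from google.cloud import storage


def wait_for_prefix(bucket_name, prefix, poll_seconds=60):
    """Poll a GCS bucket until at least one object with the prefix exists."""
    client = storage.Client()
    while True:
        blobs = list(client.list_blobs(bucket_name, prefix=prefix, max_results=1))
        if blobs:
            return blobs[0].name
        time.sleep(poll_seconds)
```

I'm wondering whether Prefect has a built-in equivalent, or whether the idiomatic approach is a task like this with retries.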