I have a pipeline that takes a list of inputs and runs a model on each input. A top-level flow handles the inputs: it batches them (for performance reasons) and passes the batches to sub-deployments. Some inputs may already have been processed previously, and I'd like to skip those.

I was thinking of using Prefect's built-in caching for this, but the batching gets in the way: a batch is often a random mix of processed and unprocessed entries, so batch-level caching never gets a clean hit. My next idea was to hook into the caching mechanism directly and handle the caching manually at the input level. Would that make sense? Or is there some clever trick with the built-in caching I could use instead?
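To make the idea concrete, here's a rough sketch of the input-level handling I have in mind, in plain Python rather than actual Prefect code. A dict stands in for whatever cache backend would actually be used, and `run_model` is a placeholder for the sub-deployment call; the point is just that cache lookup happens per input, before batching:

```python
import hashlib
import json

cache: dict[str, str] = {}  # input key -> cached result


def cache_key(item) -> str:
    # Stable key per input, independent of which batch the input lands in.
    return hashlib.sha256(json.dumps(item, sort_keys=True).encode()).hexdigest()


def run_model(batch):
    # Placeholder for the real batched sub-deployment call.
    return [f"processed:{item}" for item in batch]


def process(inputs, batch_size=2):
    # Partition into cache hits and misses *before* batching,
    # so batches contain only unprocessed inputs.
    misses = [i for i in inputs if cache_key(i) not in cache]
    for start in range(0, len(misses), batch_size):
        batch = misses[start:start + batch_size]
        for item, result in zip(batch, run_model(batch)):
            cache[cache_key(item)] = result
    # Assemble results in the original input order, hits and misses alike.
    return [cache[cache_key(i)] for i in inputs]
```

On a second call with overlapping inputs, only the genuinely new ones would reach `run_model`, regardless of how they mix into batches.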