# prefect-community
j
Is there an easy way to disable caching when running tests? Something like the
disable_run_logger
1
a
Perhaps via an environment variable in your cache key function? Btw, you can also run your tests against a throwaway DB or use the test harness. You can get lots of ideas by looking at the test suite in the Prefect repo, incl. how we test caching
z
We reset the database between every test so we get this automatically.
👍 3
You could include a check for an environment variable in your
cache_key_fn
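A minimal sketch of that idea, where a `cache_key_fn` returns `None` (which skips caching) whenever a test-only environment variable is set; the variable name `PREFECT_DISABLE_CACHE` and the key format are made up for illustration:

```python
import os


def env_aware_cache_key(context, parameters):
    # Returning None from a cache_key_fn disables caching for the run.
    # PREFECT_DISABLE_CACHE is a hypothetical env var set by the test suite.
    if os.environ.get("PREFECT_DISABLE_CACHE"):
        return None
    # Otherwise derive a key from the task parameters (a simplified
    # stand-in for prefect.tasks.task_input_hash).
    return "-".join(f"{k}={parameters[k]}" for k in sorted(parameters))
```

You would then pass it to the task decorator, e.g. `@task(cache_key_fn=env_aware_cache_key)`, and export the variable in your test setup.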
j
I think I narrowed down what I’ve been seeing, possibly related to the new
cache_results_in_memory
option. The following test will pass as is, but if you uncomment line 19 and explicitly set the
results_storage
it will fail because the cache block isn’t found. Change the fixture from
function
to
session
and it fails because the second run has a state of
Cached
z
Can you share the error with the missing cache block?
j
Here’s the full
pytest
output
@Zanie @Anna Geller any guidance on this one?
z
Still catching up on everything this morning :)
I’ll look soon…
🙏 1
Ah so when the block is saved, its id is persisted on the
LocalFileSystem
object
Since you’ve declared that object outside your test, it persists between the test runs
Since we find the cached id, we do not save the block again, but the block does not actually exist because the database has been reset
@pytest.mark.parametrize("execution_number", range(2))
def test_flow(execution_number):
    # Clear the cached block document id so the block is
    # re-saved against the freshly reset database
    simple_task.result_storage._block_document_id = None
    result = simple_flow()
    assert result[0].name == "Completed"
Resetting the identifier to
None
fixes the issue
j
Hmm. This is a pretty trivial example, but the real tests invoke flows with several tasks. Is there any way to do it more programmatically?
Or on the flip side, if I were able to use the test harness across the whole session, it sounds like this issue wouldn’t happen, but then it starts to fail because I have results cached between tests.