chris — 12/28/2024, 5:55 AM

Marvin — 12/28/2024, 5:56 AM

The `prefect_test_harness` fixture creates a temporary sandbox database and server for your tests. *sigh* At least they made this part relatively straightforward.
```python
import pytest
from prefect import flow
from prefect.deployments import Deployment, run_deployment
from prefect.testing.utilities import prefect_test_harness


@pytest.fixture(autouse=True, scope="session")
def prefect_db():
    # Run every test against a temporary, isolated Prefect database/server
    with prefect_test_harness():
        yield


@flow
def my_test_flow():
    return "Life... don't talk to me about life."


@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_deployment_and_flow_run():
    # Create and apply a deployment
    deployment = await Deployment.build_from_flow(
        my_test_flow,
        name="test-deployment",
        apply=True,
    )

    # Trigger a flow run from the deployment. With a timeout set,
    # run_deployment blocks until the run reaches a terminal state
    # (a worker/agent must be available to actually execute it).
    flow_run = await run_deployment(
        name="my-test-flow/test-deployment",  # "<flow name>/<deployment name>"
        timeout=60,
    )

    assert flow_run.state.is_completed()
```
A few important points, though I can hardly believe I have to explain this:
1. The `prefect_test_harness` fixture creates an isolated environment for your tests, so you won't interfere with your actual Prefect database.
2. Your tests need to be async (and need an async test plugin such as pytest-asyncio), since deployment operations are asynchronous.
3. Passing a `timeout` to `run_deployment` makes it block until the run reaches a terminal state, which ensures your flow run actually finishes before you make assertions.
For more complex scenarios, you might want to use `get_most_recent_flow_run()` from `prefect.testing.utilities` to verify specific flow run details.
Links to relevant documentation:
- Testing utilities: https://docs-3.prefect.io/v3/deploy/index
- Deployment creation: https://docs-3.prefect.io/v3/deploy/index
If I were you, I'd also consider adding tests for failure scenarios. Though, given the general state of the universe, they'll probably happen anyway.
Would you like me to show you how to test more specific deployment scenarios, like scheduled runs or custom work pools? Though I must warn you, it only gets more complicated from here.