# prefect-community
are there any best practices around testing flows/tasks in prefect 2.0? I see this page, but something we're noticing is that testing can be particularly slow on `flow`s (sometimes taking ~1 second to start up for each test), and it appears we always need to wrap `task`s inside of a flow to test them
We don’t have recommendations yet, we’re hoping to design a nice testing UX for both tasks and flows. In particular, we’re planning to create a way to test tasks outside of flows. cc @alex Can you share an example where your test takes a second to start the flow? We’re running thousands of flows in our internal tests and I haven’t seen that.
here's a trivial example, and the associated output from running
```
==================================================================================== test session starts =====================================================================================
platform darwin -- Python 3.10.2, pytest-7.1.1, pluggy-1.0.0
rootdir: /path/to/dir
plugins: anyio-3.5.0
collected 10 items

tests/ ..........                                                                                                                                                          [100%]

==================================================================================== 10 passed in 11.09s =====================================================================================
```
Hm, interesting. This seems to be related to the test harness utility.
We run our internal tests with a higher-performance, lower-level reset of the database.
If I switch your example to that, it runs in about 3.5 seconds.
The test harness we provide creates a temporary directory and new database for each test. You’ll find it much more performant to use it at the session scope, then use a separate fixture to delete all the data between tests. We can probably expose this in the near future.
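For reference, a session-scoped version of that fixture might look like the sketch below. It assumes Prefect 2's documented `prefect_test_harness` utility from `prefect.testing.utilities`; the fixture name `prefect_db` is arbitrary.

```python
# conftest.py (sketch): one temporary directory and database for the
# whole test session, instead of one per test
import pytest

from prefect.testing.utilities import prefect_test_harness


@pytest.fixture(autouse=True, scope="session")
def prefect_db():
    # every test in the session runs against the same temporary backend
    with prefect_test_harness():
        yield
```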
ah yeah, switching it to session scope cut the time in half! i guess there's a risk of conflicts between tests if i do that? or should i be generally safe because flows shouldn't really interact between tests
and i guess when you say
> expose this in the near future
you're talking about the higher-performance, lower-level reset? the session scope is just a pytest change, right?
If it’s session scoped, yeah, your tests can collide if you’re making assertions about state that requires a clean database. You should be fine, since you’re just testing your flows and not asserting things like “one call of a flow function results in one flow run in the backend” like we are.
And yeah, we can expose a lower-level, faster reset in the future; you can definitely just change the scope of the fixture yourself immediately.
gotcha. i think we may have cases where we want to assert subflows are kicked off, but i think if things are a little slower when it comes to that stuff, it's ok. we can always just use this as a local workaround and let our CI be a little bit slower until the lower-level, faster reset is available. is there anywhere i may be able to track the progress/availability of that? also thanks so much for responding so quickly!
You can still make assertions about the subflows by returning their states and querying for the associated flow run ids. That’s exactly the kind of thing we want to make a great UX for, e.g. calling a flow returns an object that gives you full introspection of all of the task and flow runs that it created, their states, number of retries, return values, etc., so you can make the assertions you want.
@Marvin open “Using `prefect_test_harness` per test is slow”
ohhh that type of UX would be epic, definitely looking forward to that being rolled out! Also thanks for the issue link! i'll be sure to record it on our side so we can keep an eye on it. thanks so much and have a good one!
I ran into this when I developed a new feature for the AWS collection. At the time I did this PR, which I replaced with the test harness afterwards. @Zanie why could using session scope be a problem? Every run of the flow will have a different flow run, which should not be a problem, unless I'm missing something.
We cannot use a session level fixture internally because we’re making assertions about the contents of the database directly 🙂 for most users, a session scoped fixture will definitely be fine!
hi there! (i actually work with jai 🙂) thinking of testing, i'd also like to be able to test tasks without all the overhead of needing a flow. ideally, it'd be nice to unit-test tasks as if they were plain old python functions! (both because it'd be faster to run, and simpler to read) in my code, i'm doing an approach like this...
```python
# my_task.py
from prefect import task

@task
def double_value(value: int) -> int:
    return value * 2
```

```python
# test_my_task.py
from my_task import double_value

def test_double_value():
    # __wrapped__ is the undecorated function, so no flow is needed
    assert double_value.__wrapped__(1) == 2
```
is this an ok approach? would it make sense to make a lil helper function for this? if this is ok, i could add it to your testing docs!
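For what it's worth, the `__wrapped__` trick above is standard-library behavior rather than anything Prefect-specific: `functools.wraps` stashes the undecorated function on the wrapper. A minimal stand-in decorator (deliberately NOT Prefect's real `@task` implementation) shows the mechanism:

```python
import functools


def fake_task(fn):
    """Stand-in for a @task-style decorator (not Prefect's implementation)."""
    # functools.wraps copies fn's metadata and sets wrapper.__wrapped__ = fn
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        raise RuntimeError("tasks must be called from inside a flow")
    return wrapper


@fake_task
def double_value(value: int) -> int:
    return value * 2


# calling the wrapper directly is blocked, but the plain function is reachable
assert double_value.__wrapped__(1) == 2
```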
You can use `.fn`
You can’t run a task without a flow while it is being orchestrated, but yeah you can test the behavior of your underlying function if it does not rely on any Prefect behavior.
`.fn` is even clearer than `__wrapped__`. thanks! yeah, a few of our tasks won't rely on prefect behavior, so this is nice for those. could be a nice addition to these docs, if you'd like me to diff it in?
Yeah go for it!
Thanks 🙂
Ok, that makes sense. I was thinking in the context of flow or task testing only.