
emre

04/14/2020, 3:27 PM
Hello everyone, I am thinking about adding a small feature to ShellTask, but in my case one of the tests (test_shell_logs_error_on_non_zero_exit) fails without me touching anything. It is the only test in test_shell.py that uses caplog, and caplog seems to be capturing 0 log lines. Does anybody have an idea? I have tried caplog.set_level(logging.DEBUG), to no avail.
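For context, caplog is pytest’s built-in log-capture fixture: it hooks into the standard logging module and collects every record emitted during the test. A minimal, self-contained sketch of the pattern the failing test relies on (the test and logger names here are just illustrative):

import logging

def test_error_is_captured(caplog):
    # caplog attaches a handler to stdlib logging and records everything
    # emitted at or above the configured level during the test.
    caplog.set_level(logging.DEBUG)
    logging.getLogger("prefect").error("something went wrong")
    assert any(r.levelname == "ERROR" for r in caplog.records)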

Laura Lorenz (she/her)

04/14/2020, 8:42 PM
Hi! Can you share anything more about your environment or how you are invoking the test? I can’t reproduce over here (off Prefect master, Python 3.8, stepping through the test with the debugger): my caplog object is populated after f.run() on line 102. FWIW, I have heard that other people have experienced caplog being “weird” before.
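To reproduce that kind of debugger inspection locally, one option is to run only the failing test and drop into pdb at the failure point; a sketch using pytest’s standard -k (test selection) and --pdb (post-mortem debugger) flags:

import pytest

# Run only the failing test; --pdb drops into the debugger at the point
# of failure, where caplog.records can be inspected interactively.
pytest.main([
    "tests/tasks/test_shell.py",
    "-k", "test_shell_logs_error_on_non_zero_exit",
    "--pdb",
])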

emre

04/14/2020, 8:58 PM
Sure, also on Prefect master, Python 3.6.8; I have been invoking the tests with pytest tests/tasks/test_shell.py.

Laura Lorenz (she/her)

04/14/2020, 9:35 PM
Ok gotcha, I don’t have a good lead off the top of my head, but maybe we can troubleshoot it a bit together. Do you mind sending the traceback from the test? If nothing else, it should surface a few more environment details. When you run the whole test suite (or just any other test that captures logs with caplog), do you also get failures, i.e. is caplog.records still unexpectedly empty during the test? For environment comparison purposes, here’s my output:
(prefect_dev) ➜  prefect git:(master) ✗ pytest tests/tasks/test_shell.py
============================================================================================= test session starts ==============================================================================================
platform darwin -- Python 3.8.1, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /Users/laura/Development/prefect, inifile: setup.cfg
plugins: env-0.6.2, asyncio-0.10.0, xdist-1.31.0, forked-1.1.3, cov-2.8.1
collected 16 items                                                                                                                                                                                             

tests/tasks/test_shell.py ................                                                                                                                                                               [100%]

============================================================================================== 16 passed in 0.85s =============================================================================================
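A quick way to act on the cross-check suggestion above is a throwaway, plugin-independent sanity test (hypothetical name); if even this captures zero records, the problem is with log capture itself rather than with ShellTask:

import logging

def test_caplog_sanity(caplog):
    # Emit one record through stdlib logging and confirm caplog sees it.
    caplog.set_level(logging.DEBUG)
    logging.getLogger("sanity").warning("hello")
    assert len(caplog.records) == 1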

emre

04/15/2020, 10:44 AM
Found it! Turns out I had pytest-logbook installed, which apparently invalidated caplog. I normally don’t use pytest, so I don’t know how the plugin got there in the first place, though it was used by Prefect’s tests. pip uninstall pytest-logbook did the trick. In case you are wondering, here is the erroneous output:
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.6.8, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /Users/emreakgun/Documents/prefect_server/prefect, inifile: setup.cfg
plugins: logbook-1.2.0, env-0.6.2, xdist-1.31.0, forked-1.1.3, cov-2.8.1
collected 16 items

tests/tasks/test_shell.py ..........F.....                                                                                                                                 [100%]

==================================================================================== FAILURES ====================================================================================
_____________________________________________________________________ test_shell_logs_error_on_non_zero_exit _____________________________________________________________________

caplog = <_pytest.logging.LogCaptureFixture object at 0x1101a0710>

    def test_shell_logs_error_on_non_zero_exit(caplog):
        with Flow(name="test") as f:
            task = ShellTask()(command="ls surely_a_dir_that_doesnt_exist")
        out = f.run()
        assert out.is_failed()

        error_log = [c for c in caplog.records if c.levelname == "ERROR"]
>       assert len(error_log) == 1
E       assert 0 == 1
E        +  where 0 = len([])

tests/tasks/test_shell.py:106: AssertionError
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
[2020-04-15 10:38:27,570] INFO - prefect.FlowRunner | Beginning Flow run for 'test'
[2020-04-15 10:38:27,578] INFO - prefect.FlowRunner | Starting flow run.
[2020-04-15 10:38:27,610] INFO - prefect.TaskRunner | Task 'ShellTask': Starting task run...
[2020-04-15 10:38:27,621] ERROR - prefect.Task: ShellTask | Command failed with exit code 1: ls: surely_a_dir_that_doesnt_exist: No such file or directory
[2020-04-15 10:38:27,636] INFO - prefect.TaskRunner | Task 'ShellTask': finished task run for task with final state: 'Failed'
[2020-04-15 10:38:27,640] INFO - prefect.FlowRunner | Flow run FAILED: some reference tasks failed.
----------------------------------------------------------------------------- Captured logbook call ------------------------------------------------------------------------------
[INFO] prefect.FlowRunner: Beginning Flow run for 'test'
[INFO] prefect.FlowRunner: Starting flow run.
[INFO] prefect.TaskRunner: Task 'ShellTask': Starting task run...
[ERROR] prefect.Task: ShellTask: Command failed with exit code 1: ls: surely_a_dir_that_doesnt_exist: No such file or directory
[INFO] prefect.TaskRunner: Task 'ShellTask': finished task run for task with final state: 'Failed'
[INFO] prefect.FlowRunner: Flow run FAILED: some reference tasks failed.
============================================================================ short test summary info =============================================================================
FAILED tests/tasks/test_shell.py::test_shell_logs_error_on_non_zero_exit - assert 0 == 1
========================================================================== 1 failed, 15 passed in 2.40s ==========================================================================
Thanks for all the help!
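The “Captured logbook call” section in the output above is the telltale sign that the plugin was intercepting the records. For anyone who hits this later and can’t simply uninstall the conflicting plugin, pytest’s standard -p no:<name> opt-out can disable it per run instead; a sketch using pytest.main, assuming the plugin registers under the name “logbook” as the “plugins: logbook-1.2.0” header above suggests:

import pytest

# Disable the logbook plugin for this run only, leaving stdlib logging
# (and therefore caplog) untouched. The plugin name comes from the
# "plugins:" line in the pytest session header.
pytest.main(["-p", "no:logbook", "tests/tasks/test_shell.py"])

Running pytest -p no:logbook tests/tasks/test_shell.py from the shell has the same effect.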