
pytest-check's Introduction

pytest-check

A pytest plugin that allows multiple failures per test.


Normally, a test function will fail and stop running with the first failed assert. That's totally fine for tons of kinds of software tests. However, there are times where you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails.

pytest-check allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.

Installation

From PyPI:

$ pip install pytest-check

From conda (conda-forge):

$ conda install -c conda-forge pytest-check

Example

Quick example of where you might want multiple checks:

import httpx
from pytest_check import check

def test_httpx_get():
    r = httpx.get('https://www.example.org/')
    # bail if bad status code
    assert r.status_code == 200
    # but if we get to here
    # then check everything else without stopping
    with check:
        assert r.is_redirect is False
    with check:
        assert r.encoding == 'utf-8'
    with check:
        assert 'Example Domain' in r.text

Import vs fixture

The example above used import: from pytest_check import check.

You can also grab check as a fixture with no import:

def test_httpx_get(check):
    r = httpx.get('https://www.example.org/')
    ...
    with check:
        assert r.is_redirect is False
    ...

Validation functions

check also has helper functions for common checks. These methods do NOT need to be inside of a with check: block.

Function                                             | Meaning                          | Notes
-----------------------------------------------------|----------------------------------|------
equal(a, b, msg="")                                  | a == b                           |
not_equal(a, b, msg="")                              | a != b                           |
is_(a, b, msg="")                                    | a is b                           |
is_not(a, b, msg="")                                 | a is not b                       |
is_true(x, msg="")                                   | bool(x) is True                  |
is_false(x, msg="")                                  | bool(x) is False                 |
is_none(x, msg="")                                   | x is None                        |
is_not_none(x, msg="")                               | x is not None                    |
is_in(a, b, msg="")                                  | a in b                           |
is_not_in(a, b, msg="")                              | a not in b                       |
is_instance(a, b, msg="")                            | isinstance(a, b)                 |
is_not_instance(a, b, msg="")                        | not isinstance(a, b)             |
almost_equal(a, b, rel=None, abs=None, msg="")       | a == pytest.approx(b, rel, abs)  | pytest.approx
not_almost_equal(a, b, rel=None, abs=None, msg="")   | a != pytest.approx(b, rel, abs)  | pytest.approx
greater(a, b, msg="")                                | a > b                            |
greater_equal(a, b, msg="")                          | a >= b                           |
less(a, b, msg="")                                   | a < b                            |
less_equal(a, b, msg="")                             | a <= b                           |
between(b, a, c, msg="", ge=False, le=False)         | a < b < c                        |
between_equal(b, a, c, msg="")                       | a <= b <= c                      | same as between(b, a, c, msg, ge=True, le=True)
raises(expected_exception, *args, **kwargs)          | Raises given exception           | similar to pytest.raises
fail(msg)                                            | Log a failure                    |

Note: This is a list of relatively common logic operators. I'm reluctant to add to the list too much, as it's easy to add your own.

The httpx example can be rewritten with helper functions:

def test_httpx_get_with_helpers():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    check.is_false(r.is_redirect)
    check.equal(r.encoding, 'utf-8')
    check.is_in('Example Domain', r.text)

Which you use is personal preference.
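
Each helper also takes an optional msg argument (the last parameter in the table above), which is included in the failure report, as the pseudo-traceback example later shows. A minimal, hypothetical sketch building on the httpx example:

import httpx
from pytest_check import check

def test_httpx_get_with_messages():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    # the msg argument is shown on the FAILURE line of the report
    check.equal(r.encoding, 'utf-8', "expected example.org to be served as utf-8")
    check.is_in('Example Domain', r.text, "expected the page title in the body")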

Defining your own check functions

Using @check.check_func

The @check.check_func decorator allows you to wrap any test helper that contains an assert statement, turning it into a non-blocking assert function.

from pytest_check import check

@check.check_func
def is_four(a):
    assert a == 4

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)

Using check.fail()

Using @check.check_func is probably the easiest approach. However, it does have a bit of overhead in the passing cases that can affect large loops of checks.

If you need a bit of a speedup, use the following style with the help of check.fail().

from pytest_check import check

def is_four(a):
    __tracebackhide__ = True
    if a == 4:
        return True
    else: 
        check.fail(f"check {a} == 4")
        return False

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)

Using raises as a context manager

raises can be used as a context manager, much like pytest.raises. The main difference is that a failure to raise the expected exception won't stop the execution of the test method.

from pytest_check import check

def test_raises():
    with check.raises(AssertionError):
        x = 3
        assert 1 < x < 4
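
The table above also lists raises(expected_exception, *args, **kwargs). Assuming it supports a callable form similar to the legacy pytest.raises call style (the exception type, then the callable and its arguments), a hypothetical sketch looks like this; treat the exact call form as an assumption rather than documented API:

from pytest_check import check

def parse_int(s):
    # int() raises ValueError for non-numeric strings
    return int(s)

def test_raises_callable_form():
    # assumed callable form: if ValueError is not raised, the check fails,
    # but the rest of the test keeps running
    check.raises(ValueError, parse_int, "not a number")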

Pseudo-tracebacks

With check, a test can have multiple failures. That could make for extensive output if we included the full traceback for every failure. To keep the output a little more concise, pytest-check implements a shorter version, which we call pseudo-tracebacks.

For example, take this test:

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")

This will result in:

=================================== FAILURES ===================================
_________________________________ test_example _________________________________
FAILURE:
assert 1 > 2
  test_check.py, line 14, in test_example() -> check.greater(a, b)
FAILURE:
assert 2 <= 1
  test_check.py, line 15, in test_example() -> check.less_equal(b, a)
FAILURE: Is 1 in the list
assert 1 in [2, 4, 6]
  test_check.py, line 16, in test_example() -> check.is_in(a, c, "Is 1 in the list")
FAILURE: make sure 2 isn't in list
assert 2 not in [2, 4, 6]
  test_check.py, line 17, in test_example() -> check.is_not_in(b, c, "make sure 2 isn't in list")
------------------------------------------------------------
Failed Checks: 4
=========================== 1 failed in 0.11 seconds ===========================

Red output

The failures will also be red, unless you turn that off with pytest's --color=no.

No output

You can turn off the failure reports with pytest's --tb=no.

Stop on Fail (maxfail behavior)

Setting -x or --maxfail=1 will cause this plugin to abort testing after the first failed check.

Setting --maxfail=2 or greater will turn off any handling of maxfail within this plugin, and the behavior is then controlled by pytest.

In other words, the maxfail count is counting tests, not checks. The exception is the case of 1, where we want to stop on the very first failed check.
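
For illustration, here is a hypothetical test file. With default settings, every failed check in both tests is reported; running pytest -x (or --maxfail=1) stops the session at the very first failed check:

from pytest_check import check

def test_first():
    check.equal(1, 1)
    check.equal(1, 2)  # with -x, the run stops here
    check.equal(1, 3)

def test_second():
    check.equal(2, 2)
    check.equal(2, 3)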

any_failures()

Use any_failures() to see if there are any failures.
One use case is to make a block of checks conditional on not failing in a previous set of checks:

from pytest_check import check

def test_with_groups_of_checks():
    # always check these
    check.equal(1, 1)
    check.equal(2, 3)
    if not check.any_failures():
        # only check these if the above passed
        check.equal(1, 2)
        check.equal(2, 2)

Speedups

If you have lots of check failures, your tests may not run as fast as you want. There are a few ways to speed things up.

  • --check-max-tb=5 - Only the first 5 failures per test will include pseudo-tracebacks (the rest are reported without them).

    • The example shows 5 but any number can be used.
    • pytest-check uses custom traceback code I'm calling a pseudo-traceback.
    • This is visually shorter than normal assert tracebacks.
    • Internally, it uses introspection, which can be slow.
    • Allowing a limited number of pseudo-tracebacks speeds things up quite a bit.
    • Default is 1.
      • Set a large number, e.g. 1000, if you want pseudo-tracebacks for all failures.
  • --check-max-report=10 - Limit reported failures per test.

    • The example shows 10 but any number can be used.
    • The test will still have the total number of failures reported.
    • Default is no maximum.
  • --check-max-fail=20 - Stop the test after this many check failures.

    • This is useful if your code under test is slow-ish and you want to bail early.
    • Default is no maximum.
  • Any of these can be used on their own, or combined; see the example invocation after this list.

  • Recommendation:

    • Leave the default, equivalent to --check-max-tb=1.
    • If excessive output is annoying, set --check-max-report=10 or some tolerable number.
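
For instance, a hypothetical invocation combining the flags above (the specific numbers are arbitrary); the same options can also be placed in addopts in your pytest configuration:

$ pytest --check-max-tb=2 --check-max-report=10 --check-max-fail=20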

Local speedups

The flags above are global settings, and apply to every test in the test run.

Locally, you can set these values per test.

From examples/test_example_speedup_funcs.py:

def test_max_tb():
    check.set_max_tb(2)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_report():
    check.set_max_report(5)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_fail():
    check.set_max_fail(5)
    for i in range(1, 11):
        check.equal(i, 100)

Contributing

Contributions are very welcome. Tests can be run with tox. Test coverage is now 100%. Please make sure to keep it at 100%. If you have an awesome pull request and need help with getting coverage back up, let me know.

License

Distributed under the terms of the MIT license, "pytest-check" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Changelog

See changelog.md

pytest-check's People

Contributors

alblasco, bzah, crockettjf, dillonm197, dkorytkin, eliahkagan, foreverwintr, fperrin, hirotokirimaru, hrichards, ionelmc, janfreyberg, joaonc, juliotux, marksmayo, mgorny, okken, pre-commit-ci[bot], sco1, skhomuti, sunshine-syz


pytest-check's Issues

Using `--showlocals` does not print variables

When pytest is run with the --showlocals switch, we can see the variables printed whenever an assert statement fails.
But when using any of the check methods, they are not printed.

Am I missing out on something?

pytest command: pytest --showlocals -vvv -rA tests/test_me.py

case 1 (with assert):

def test_me():
    a = 0
    b = 1
    assert a

output:

tests/test_me.py::test_me FAILED                                                        [100%]

========================================== FAILURES ===========================================
___________________________________________ test_me ___________________________________________

    @pytest.mark.galaxy("dum")
    @pytest.mark.integration_name("dum")
    def test_me():
        a = 0
>       assert a
E       assert 0

a          = 0
b          = 1

tests/test_me.py:8: AssertionError
=================================== short test summary info ===================================
FAILED tests/test_me.py::test_me - assert 0
====================================== 1 failed in 0.14s ======================================

case 2 (with check):

def test_me():
    a = 0
    b = 1
    check.is_true(a)

output:


========================================== FAILURES ===========================================
___________________________________________ test_me ___________________________________________
FAILURE: 
assert False
 +  where False = bool(0)
tests/test_me.py:8 in test_me() -> check.is_true(a)
------------------------------------------------------------
Failed Checks: 1
=================================== short test summary info ===================================
FAILED tests/test_me.py::test_me
====================================== 1 failed in 0.18s ======================================

Trouble repackaging it as rpm and IPS packages

Just the normal build, install, and test cycle used when building the package from a non-root account:

  • "setup.py build"
  • "setup.py install --root </install/prefix>"
  • "pytest with $PYTHONPATH and $PATH pointing to sitelib and /usr/bin inside </install/prefix>
+ PATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest_check-1.0.1-2.fc35.x86_64/usr/bin:/usr/bin:/usr/sbin:/usr/local/sbin
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest_check-1.0.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest_check-1.0.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ PYTHONDONTWRITEBYTECODE=1
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 40 items

. .                                                                                                                                                                  [  2%]
tests/test_check.py ..................FFF..FF                                                                                                                        [ 66%]
tests/test_check_context_manager.py ..FFFF                                                                                                                           [ 82%]
tests/test_check_errors.py FFF                                                                                                                                       [ 89%]
tests/test_check_fixture.py E                                                                                                                                        [ 92%]
tests/test_check_func_decorator.py ..F                                                                                                                               [100%]

================================================================================== ERRORS ==================================================================================
___________________________________________________________________ ERROR at setup of test_check_fixture ___________________________________________________________________
file /home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_fixture.py, line 1
  def test_check_fixture(check):
E       fixture 'check' not found
>       available fixtures: LineMatcher, _config_for_test, _pytest, _session_faker, _sys_snapshot, aiohttp_client, aiohttp_raw_server, aiohttp_server, aiohttp_unused_port, benchmark, benchmark_weave, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_based_httpbin, class_based_httpbin_secure, class_mocker, cov, current_cases, doctest_namespace, event_loop, faker, fast, freezer, fs, httpbin, httpbin_both, httpbin_ca_bundle, httpbin_secure, linecomp, loop, loop_debug, mocker, module_mocker, monkeypatch, no_cover, package_mocker, patching, proactor_loop, pytestconfig, pytester, raw_test_server, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, session_mocker, smart_caplog, stdouts, test_client, test_server, testdir, testrun_uid, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, unused_port, unused_tcp_port, unused_tcp_port_factory, virtualenv, weave, worker_id, workspace, xprocess
>       use 'pytest --fixtures [testpath]' for help on them.

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_fixture.py:1
================================================================================= FAILURES =================================================================================
_________________________________________________________________________ test_watch_them_all_fail _________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_watch_them_all_fail0')>

    def test_watch_them_all_fail(testdir):
        testdir.makepyfile(
            """
            import pytest_check as check

            def test_equal():
                check.equal(1,2)

            def test_not_equal():
                check.not_equal(1,1)

            def test_is():
                x = ["foo"]
                y = ["foo"]
                check.is_(x, y)

            def test_is_not():
                x = ["foo"]
                y = x
                check.is_not(x, y)

            def test_is_true():
                check.is_true(False)


            def test_is_false():
                check.is_false(True)


            def test_is_none():
                a = 1
                check.is_none(a)


            def test_is_not_none():
                a = None
                check.is_not_none(a)


            def test_is_in():
                check.is_in(4, [1, 2, 3])


            def test_is_not_in():
                check.is_not_in(2, [1, 2, 3])


            def test_is_instance():
                check.is_instance(1, str)


            def test_is_not_instance():
                check.is_not_instance(1, int)

            def test_almost_equal():
                check.almost_equal(1, 2)
                check.almost_equal(1, 2.1, abs=0.1)
                check.almost_equal(1, 3, rel=1)


            def test_not_almost_equal():
                check.not_almost_equal(1, 1)
                check.not_almost_equal(1, 1.1, abs=0.1)
                check.not_almost_equal(1, 2, rel=1)


            def test_greater():
                check.greater(1, 2)
                check.greater(1, 1)


            def test_greater_equal():
                check.greater_equal(1, 2)


            def test_less():
                check.less(2, 1)
                check.less(1, 1)


            def test_less_equal():
                check.less_equal(2, 1)
                #check.equal(2, 1)

        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(failed=18, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 18}
E         {'passed': 18} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check.py:183: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_watch_them_all_fail0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 18 items

test_watch_them_all_fail.py ..................                           [100%]

============================== 18 passed in 0.26s ==============================
_____________________________________________________________________________ test_check_xfail _____________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_check_xfail0')>

    def test_check_xfail(testdir):
        testdir.makepyfile(
            """
            import pytest_check as check
            import pytest

            @pytest.mark.xfail()
            def test_fail():
                check.equal(1, 2)
        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(xfailed=1)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'xfailed': 0} != {'xfailed': 1}
E         {'xpassed': 1} != {'xpassed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check.py:199: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_check_xfail0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_check_xfail.py X                                                    [100%]

============================== 1 xpassed in 0.09s ==============================
_________________________________________________________________________ test_check_xfail_strict __________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_check_xfail_strict0')>

    def test_check_xfail_strict(testdir):
        testdir.makepyfile(
            """
            import pytest_check as check
            import pytest

            @pytest.mark.xfail(strict=True)
            def test_fail():
                check.equal(1, 2)
        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(xfailed=1)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'xfailed': 0} != {'xfailed': 1}
E         {'failed': 1} != {'failed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check.py:215: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_check_xfail_strict0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_check_xfail_strict.py F                                             [100%]

=================================== FAILURES ===================================
__________________________________ test_fail ___________________________________
[XPASS(strict)]
=========================== short test summary info ============================
FAILED test_check_xfail_strict.py::test_fail
============================== 1 failed in 0.09s ===============================
__________________________________________________________________________ test_check_and_assert ___________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_check_and_assert0')>

    def test_check_and_assert(testdir):
        testdir.makepyfile(
            """
            import pytest_check as check
            import pytest

            def test_fail_check():
                check.equal(1, 2)

            def test_fail_assert():
                assert 1 == 2
        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(failed=2)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 1} != {'failed': 2}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check.py:271: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_check_and_assert0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 2 items

test_check_and_assert.py .F                                              [100%]

=================================== FAILURES ===================================
_______________________________ test_fail_assert _______________________________

    def test_fail_assert():
>       assert 1 == 2
E       assert 1 == 2

test_check_and_assert.py:8: AssertionError
=========================== short test summary info ============================
FAILED test_check_and_assert.py::test_fail_assert - assert 1 == 2
========================= 1 failed, 1 passed in 0.09s ==========================
____________________________________________________________________________ test_stop_on_fail _____________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail0')>

    def test_stop_on_fail(testdir):
        testdir.makepyfile(
            """
            import pytest_check as check

            class TestStopOnFail():

                def test_1(self):
                    check.equal(1, 1)
                    check.equal(1, 2)
                    check.equal(1, 3)


                def test_2(self):
                    check.equal(1, 1)
                    check.equal(1, 2)
                    check.equal(1, 3)
        """
        )

        result = testdir.runpytest("-x")
>       result.assert_outcomes(failed=1)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 2} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check.py:296: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 2 items

test_stop_on_fail.py ..                                                  [100%]

============================== 2 passed in 0.10s ===============================
________________________________________________________________________ test_context_manager_fail _________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_context_manager_fail0')>

    def test_context_manager_fail(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check

            def test_failures():
                with check: assert 1 == 0
                with check: assert 1 > 2
                with check: assert 1 < 5 < 4
        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_context_manager.py:34: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_context_manager_fail0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_context_manager_fail.py .                                           [100%]

============================== 1 passed in 0.10s ===============================
____________________________________________________________________ test_context_manager_fail_with_msg ____________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_context_manager_fail_with_msg0')>

    def test_context_manager_fail_with_msg(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check

            def test_failures():
                with check("first fail"): assert 1 == 0
                with check("second fail"): assert 1 > 2
                with check("third fail"): assert 1 < 5 < 4
        """
        )

        result = testdir.runpytest()
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_context_manager.py:57: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_context_manager_fail_with_msg0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_context_manager_fail_with_msg.py .                                  [100%]

============================== 1 passed in 0.09s ===============================
____________________________________________________________________________ test_stop_on_fail _____________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail1')>

    def test_stop_on_fail(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check

            def test_failures():
                with check: assert 1 == 0
                with check: assert 1 > 2
                with check: assert 1 < 5 < 4
        """
        )

        result = testdir.runpytest('-x')
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_context_manager.py:80: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail1
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_stop_on_fail.py .                                                   [100%]

============================== 1 passed in 0.09s ===============================
________________________________________________________________________ test_stop_on_fail_with_msg ________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail_with_msg0')>

    def test_stop_on_fail_with_msg(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check

            def test_failures():
                with check("first fail"): assert 1 == 0
                with check("second fail"): assert 1 > 2
                with check("third fail"): assert 1 < 5 < 4
        """
        )

        result = testdir.runpytest('-x')
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_context_manager.py:99: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_stop_on_fail_with_msg0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_stop_on_fail_with_msg.py .                                          [100%]

============================== 1 passed in 0.09s ===============================
____________________________________________________________________________ test_setup_failure ____________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_setup_failure0')>

    def test_setup_failure(testdir):
        testdir.makepyfile(
            """
            import pytest
            import pytest_check as check

            @pytest.fixture()
            def a_fixture():
                check.equal(1, 2)

            def test_1(a_fixture):
                pass
            """
        )
        result = testdir.runpytest()
>       result.assert_outcomes(errors=1)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 1,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'errors': 0} != {'errors': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_errors.py:16: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_setup_failure0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_setup_failure.py .                                                  [100%]

============================== 1 passed in 0.09s ===============================
__________________________________________________________________________ test_teardown_failure ___________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_teardown_failure0')>

    def test_teardown_failure(testdir):
        testdir.makepyfile(
             """
             import pytest
             import pytest_check as check

             @pytest.fixture()
             def a_fixture():
                 yield
                 check.equal(1, 2)

             def test_1(a_fixture):
                 pass
             """
        )
        result = testdir.runpytest()
>       result.assert_outcomes(passed=1, errors=1)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 1,...pped': 0, ...}
E         Omitting 5 identical items, use -vv to show
E         Differing items:
E         {'errors': 0} != {'errors': 1}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_errors.py:36: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_teardown_failure0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_teardown_failure.py .                                               [100%]

============================== 1 passed in 0.09s ===============================
_________________________________________________________________________________ test_mix _________________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_mix0')>

    def test_mix(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check

            def test_fail_and_error(check):
                check.equal(1, 2)
                assert 2 == 3
            """
        )
        result = testdir.runpytest()
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'errors': 1} != {'errors': 0}
E         {'failed': 0} != {'failed': 1}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_errors.py:51: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_mix0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_mix.py E                                                            [100%]

==================================== ERRORS ====================================
____________________ ERROR at setup of test_fail_and_error _____________________
file /tmp/pytest-of-tkloczko/pytest-117/test_mix0/test_mix.py, line 3
  def test_fail_and_error(check):
E       fixture 'check' not found
>       available fixtures: _session_faker, aiohttp_client, aiohttp_raw_server, aiohttp_server, aiohttp_unused_port, benchmark, benchmark_weave, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, class_based_httpbin, class_based_httpbin_secure, class_mocker, cov, current_cases, doctest_namespace, event_loop, faker, fast, freezer, fs, httpbin, httpbin_both, httpbin_ca_bundle, httpbin_secure, loop, loop_debug, mocker, module_mocker, monkeypatch, no_cover, package_mocker, patching, proactor_loop, pytestconfig, raw_test_server, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, session_mocker, smart_caplog, stdouts, test_client, test_server, testrun_uid, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, unused_port, unused_tcp_port, unused_tcp_port_factory, virtualenv, weave, worker_id, workspace, xprocess
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-tkloczko/pytest-117/test_mix0/test_mix.py:3
=========================== short test summary info ============================
ERROR test_mix.py::test_fail_and_error
=============================== 1 error in 0.08s ===============================
________________________________________________________________________________ test_fail _________________________________________________________________________________

testdir = <Testdir local('/tmp/pytest-of-tkloczko/pytest-117/test_fail0')>

    def test_fail(testdir):
        testdir.makepyfile(
            """
            from pytest_check import check_func

            @check_func
            def is_four(a):
                assert a == 4

            def test_all_four():
                is_four(1)
                is_four(2)
                should_be_False = is_four(3)
                should_be_True = is_four(4)
                print('should_be_True={}'.format(should_be_True))
                print('should_be_False={}'.format(should_be_False))
        """
        )

        result = testdir.runpytest("-s")
>       result.assert_outcomes(failed=1, passed=0)
E       AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E         Omitting 4 identical items, use -vv to show
E         Differing items:
E         {'failed': 0} != {'failed': 1}
E         {'passed': 1} != {'passed': 0}
E         Use -v to get the full diff

/home/tkloczko/rpmbuild/BUILD/pytest_check-1.0.1/tests/test_check_func_decorator.py:41: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /tmp/pytest-of-tkloczko/pytest-117/test_fail0
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 1 item

test_fail.py should_be_True=True
should_be_False=False
.

============================== 1 passed in 0.09s ===============================
========================================================================= short test summary info ==========================================================================
ERROR tests/test_check_fixture.py::test_check_fixture
FAILED tests/test_check.py::test_watch_them_all_fail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check.py::test_check_xfail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check.py::test_check_xfail_strict - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check.py::test_check_and_assert - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check.py::test_stop_on_fail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_context_manager.py::test_context_manager_fail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_context_manager.py::test_context_manager_fail_with_msg - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_context_manager.py::test_stop_on_fail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_context_manager.py::test_stop_on_fail_with_msg - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_errors.py::test_setup_failure - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 1,...pped': 0, ...}
FAILED tests/test_check_errors.py::test_teardown_failure - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 1,...pped': 0, ...}
FAILED tests/test_check_errors.py::test_mix - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_check_func_decorator.py::test_fail - AssertionError: assert {'errors': 0,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
================================================================== 13 failed, 25 passed, 1 error in 9.47s ==================================================================

pytest_check is the only pytest plugin that uses an underscore in its name

Apparently, this is the only pytest plugin with an underscore in its name, rather than a dash. I searched here for pytest_ and found no other modules named this way. I don't know if this has any impact on anything at all (likely not).

When I do a local list of installed modules, again, this one is the only one with an underscore.

pytest                6.2.5
pytest-bdd            5.0.0
pytest_check          1.0.5
pytest-cov            2.7.1
pytest-forked         1.4.0
pytest-html           3.1.1
pytest-metadata       2.0.1
pytest-rerunfailures  10.2
pytest-runner         4.5.1
pytest-timeout        2.1.0
pytest-tui            0.9.4
pytest-xdist          2.5.0

Not worried about it, but thought I would bring it to your attention.

"'NoneType' object is not subscriptable" when you use retry decorator

We're getting the error when validation fails inside a function using retry (https://pypi.org/project/retry/). It seems that the contextlist gets None as it goes through the retry function calls in the stack to collect the failure logs.

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
<decorator-gen-54>:2: in _
    ???
utils/retrier.py:24: in retry_decorator
    rv = retry_call(f, fargs, fkwargs, exceptions, tries, delay, max_delay, backoff, jitter, log)
.eggs/retry-0.9.2-py3.6.egg/retry/api.py:101: in retry_call
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter, logger)
.eggs/retry-0.9.2-py3.6.egg/retry/api.py:33: in __retry_internal
    return f()
internal/fixtures/connectors_verification.py:87: in _
    validate_incidents(incidents_with_entity_name, expected_detectors, creation_time)
internal/fixtures/connectors_verification.py:122: in _
    check_detector_matches_in_incidents(incidents, expected_detectors)
internal/fixtures/anon.py:137: in _
    f'Mismatch between expected detectors {expected_detectors} and actual detectors {actual_detectors}'
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

level = 9

    def get_full_context(level):
        (_, filename, line, funcname, contextlist) = inspect.stack()[level][0:5]
        filename = os.path.relpath(filename)
>       context = contextlist[0].strip()
E       TypeError: 'NoneType' object is not subscriptable

.eggs/pytest_check-0.3.4-py3.6.egg/pytest_check/check_methods.py:170: TypeError

Here is what inspect.stack()[level][0:5] looks like when the error occurs.

Assert like exception messages

The error messages pytest-check produces are slightly different from those that come out of a plain assert. This prevents IDEs from parsing the errors and adding extra functionality.

E.g., when PyCharm parses something like test_MyFile:100, it will recognize that pattern and add a link to the code.

Here's an example:

With assert:

F
tests/common/test_Environment.py:281 (TestCache.test_get_cache_file_doesnt_exist[beta])
self = <tests.common.test_Environment.TestCache object at 0x108dfb748>
cache_file = ('r9UtOGtT.tmp', <qacore.common.Environment.Environment object at 0x108dfb4a8>)

    @pytest.mark.parametrize('cache_file', [EnvironmentType.BETA], indirect=True)
    def test_get_cache_file_doesnt_exist(self, cache_file):
        filename, environment = cache_file
        actual_file, exists = environment._get_cache_file(filename)
        actual_dir, actual_filename = os.path.split(actual_file)
        check.is_false(exists)
        check.equal(actual_dir, environment._get_cache_dir())
        check.equal(filename, actual_filename)
        # check.is_true(False)
>       assert False
E       assert False

test_Environment.py:291: AssertionError

With check:

F
tests/common/test_Environment.py:281 (TestCache.test_get_cache_file_doesnt_exist[beta])
FAILURE: 
assert False
 +  where False = bool(False)
  test_Environment.py, line 291, in test_get_cache_file_doesnt_exist() -> check.is_true(False)
------------------------------------------------------------
Failed Checks: 1

I guess it boils down to check having:
test_Environment.py:291 in test_... (no comma after the line number)
instead of:
test_Environment.py, line 291, in test_

1.0.1: missing git tag

According to PyPI, the latest version is 1.0.1; however, there is no git tag for that version in the git repo.

Detailed logging for failures

When check catches a failure, the logging isn't detailed. For example:

FAILURE: 
  integration/foo.py, line 123, in test_foo() -> check.equal(foo, bar)
AssertionError

Ideally I would also see the specifics of the AssertionError, but it seems that these are dropped in log_failure (I could be mistaken, but there doesn't seem to be a way to display verbose output).

ntpath.py: ValueError: path is on mount 'C:', start on mount 'D:' error shows after calling get_full_context() from a different drive on Windows 10

Summary:

The check_methods.get_full_context(level) function does not work correctly on Windows 10; it fails while determining the filename on line 195: filename = os.path.relpath(filename)

Detailed description:

If the function was called from a file on the D drive and filename is on the C drive, then os.path.relpath will give an error:
ValueError: path is on mount 'C:', start on mount 'D:'

Pytest then fails with the ValueError exception raised from ntpath.py line 703.

This behavior is possible if a pytest_check function is called not from the test file, but from a module file that is located on a different drive than the test.
What I mean is that such calls can come in through handlers and inheritance.

I have prepared a vivid example that overrides the behavior of the logging function.
When logging.info() is called, pytest_check.is_none() will also be called to check an assertion that will always fail.

We also intercept all pytest logging during pytest_runtest_call().
This is where we can trigger pytest_check.is_none() from one drive while the test is on another drive.

Perhaps there are inaccuracies in the terminology of my description; I think the code below will make clear what I mean.

For example:

We have Python 3.9.1 with modules (logging, pytest, pytest_check) installed on disc C:/Python391/
We have tests with conftest installed on disc D:/test/ with this code:

D:/test/conftest.py:

import logging
import pytest_check


class MyHandler(logging.Handler):
    def emit(self, record):
        pytest_check.is_none(record.getMessage())


def pytest_runtest_call():
    logger = logging.getLogger()
    my_handler = MyHandler()

    logger.addHandler(my_handler)

D:/test/test_logging_assert.py:

import logging


def test_logging_assert(request):
    logging.info('not none')

Run the command line from the test folder on the D drive:

D:\test> C:/Python391/python.exe -m pytest --log-cli-level=info test_logging_assert.py

Actual result:

D:\test> C:/Python391/python.exe -m pytest --log-cli-level=info test_logging_assert.py
================================================= test session starts =================================================
platform win32 -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: D:\test
plugins: allure-pytest-2.8.33, pytest_check-1.0.1, rerunfailures-9.1.1
collected 1 item

test_logging_assert.py::test_logging_assert
---------------------------------------------------- live log call ----------------------------------------------------
INFO     root:test_logging_assert.py:5 not none
FAILED                                                                                                           [100%]

====================================================== FAILURES =======================================================
_________________________________________________ test_logging_assert _________________________________________________

args = ('not none',), kwds = {}, __tracebackhide__ = True

    @functools.wraps(func)
    def wrapper(*args, **kwds):
        __tracebackhide__ = True
        try:
>           func(*args, **kwds)

C:\Python391\lib\site-packages\pytest_check\check_methods.py:84:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = 'not none', msg = ''

    @check_func
    def is_none(x, msg=""):
>       assert x is None, msg
E       AssertionError:
E       assert 'not none' is None

C:\Python391\lib\site-packages\pytest_check\check_methods.py:127: AssertionError

During handling of the above exception, another exception occurred:

request = <FixtureRequest for <Function test_logging_assert>>

    def test_logging_assert(request):
>       logging.info('not none')

test_logging_assert.py:5:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Python391\lib\logging\__init__.py:2085: in info
    root.info(msg, *args, **kwargs)
C:\Python391\lib\logging\__init__.py:1434: in info
    self._log(INFO, msg, args, **kwargs)
C:\Python391\lib\logging\__init__.py:1577: in _log
    self.handle(record)
C:\Python391\lib\logging\__init__.py:1587: in handle
    self.callHandlers(record)
C:\Python391\lib\logging\__init__.py:1649: in callHandlers
    hdlr.handle(record)
C:\Python391\lib\logging\__init__.py:948: in handle
    self.emit(record)
conftest.py:7: in emit
    pytest_check.is_none(record.getMessage())
C:\Python391\lib\site-packages\pytest_check\check_methods.py:195: in get_full_context
    filename = os.path.relpath(filename)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

path = 'C:\\Python391\\lib\\logging\\__init__.py', start = '.'

    def relpath(path, start=None):
        """Return a relative version of a path"""
        path = os.fspath(path)
        if isinstance(path, bytes):
            sep = b'\\'
            curdir = b'.'
            pardir = b'..'
        else:
            sep = '\\'
            curdir = '.'
            pardir = '..'

        if start is None:
            start = curdir

        if not path:
            raise ValueError("no path specified")

        start = os.fspath(start)
        try:
            start_abs = abspath(normpath(start))
            path_abs = abspath(normpath(path))
            start_drive, start_rest = splitdrive(start_abs)
            path_drive, path_rest = splitdrive(path_abs)
            if normcase(start_drive) != normcase(path_drive):
>               raise ValueError("path is on mount %r, start on mount %r" % (
                    path_drive, start_drive))
E                   ValueError: path is on mount 'C:', start on mount 'D:'

C:\Python391\lib\ntpath.py:703: ValueError
-------------------------------------------------- Captured log call --------------------------------------------------
INFO     root:test_logging_assert.py:5 not none
=============================================== short test summary info ===============================================
FAILED test_logging_assert.py::test_logging_assert - ValueError: path is on mount 'C:', start on mount 'D:'
================================================== 1 failed in 0.27s ==================================================

D:\test> 

Expected Result:

D:\test> C:/Python391/python.exe -m pytest --log-cli-level=info test_logging_assert.py
================================================= test session starts =================================================
platform win32 -- Python 3.9.1, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: D:\test
plugins: allure-pytest-2.8.33, pytest_check-1.0.1, rerunfailures-9.1.1
collected 1 item

test_logging_assert.py::test_logging_assert
---------------------------------------------------- live log call ----------------------------------------------------
INFO     root:test_logging_assert.py:5 not none
FAILED                                                                                                           [100%]

====================================================== FAILURES =======================================================
_________________________________________________ test_logging_assert _________________________________________________
FAILURE:
assert 'not none' is None
D:\test\test_logging_assert.py:5 in test_logging_assert() -> logging.info('not none')
C:\Python391\lib\logging\__init__.py:2085 in info() -> root.info(msg, *args, **kwargs)
C:\Python391\lib\logging\__init__.py:1434 in info() -> self._log(INFO, msg, args, **kwargs)
C:\Python391\lib\logging\__init__.py:1577 in _log() -> self.handle(record)
C:\Python391\lib\logging\__init__.py:1587 in handle() -> self.callHandlers(record)
C:\Python391\lib\logging\__init__.py:1649 in callHandlers() -> hdlr.handle(record)
C:\Python391\lib\logging\__init__.py:948 in handle() -> self.emit(record)
D:\test\conftest.py:7 in emit() -> pytest_check.is_none(record.getMessage())
------------------------------------------------------------
Failed Checks: 1
-------------------------------------------------- Captured log call --------------------------------------------------
INFO     root:test_logging_assert.py:5 not none
=============================================== short test summary info ===============================================
FAILED test_logging_assert.py::test_logging_assert
================================================== 1 failed in 0.17s ==================================================

D:\test>

Possible solution:

Change os.path.relpath to os.path.abspath:
filename = os.path.abspath(filename)
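A sketch of what that fallback could look like (illustration only; _safe_relpath is a hypothetical helper, not pytest-check code):

import os

def _safe_relpath(filename):
    # Fall back to the absolute path when the file is on a different drive
    # than the current working directory, which makes os.path.relpath()
    # raise ValueError on Windows.
    try:
        return os.path.relpath(filename)
    except ValueError:
        return os.path.abspath(filename)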

Environment:

OS: Windows 10
Python: 3.9.1
Pytest: 6.2.2
Pytest check: 1.0.1

Pull request #54

Exception output is overwritten if there are failed checks in test

In the case of failed checks in the test, the hook pytest_runtest_makereport will overwrite report.longrepr instead of appending to it, but report.longrepr holds the output to be printed for the exception.

(If there are no check failures, the hook naturally does not overwrite this attribute and the output of the exception gets printed.)
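A minimal sketch of the appending behaviour being asked for, written as a standalone conftest.py hook (an illustration under stated assumptions, not pytest-check's actual code; the "Failed Checks" string is a placeholder):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        extra = "Failed Checks: 1"  # placeholder for the collected check failures
        original = str(report.longrepr) if report.longrepr else ""
        # append to the existing exception output instead of overwriting it
        report.longrepr = "\n".join(filter(None, [original, extra]))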

Raises as a context manager: log failure when exception is not raised

Reference:

  • pytest 7.1.1
  • pytest_check 1.0.5

In the following example:
from pytest_check import raises


class CustomException(Exception):
    pass


def test_context_manager_not_raised_1():
    with raises(CustomException):
        print("Custom Exception is not raised")


def test_context_manager_not_raised_2():
    with raises(AssertionError):
        print("Assertion Exception is not raised")

I expected both of these tests to pass, but both fail. E.g. the first one:
(screenshot omitted)

Please let me know if my understanding is right.

testsuite fails on Debian/Testing with python3.9

Hi,
trying to package pytest-check 1.0.4 for Debian, I ran into these errors in the test suite with Python 3.9 on Debian/Testing:

=========================== short test summary info ============================
FAILED tests/test_check.py::test_watch_them_all_fail - AssertionError: assert...
FAILED tests/test_check.py::test_check_xfail - AssertionError: assert {'error...
FAILED tests/test_check.py::test_check_xfail_strict - AssertionError: assert ...
FAILED tests/test_check.py::test_check_and_assert - AssertionError: assert {'...
FAILED tests/test_check.py::test_stop_on_fail - AssertionError: assert {'erro...
FAILED tests/test_check_context_manager.py::test_context_manager_fail - Asser...
FAILED tests/test_check_context_manager.py::test_context_manager_fail_with_msg
FAILED tests/test_check_context_manager.py::test_stop_on_fail - AssertionErro...
FAILED tests/test_check_context_manager.py::test_stop_on_fail_with_msg - Asse...
FAILED tests/test_check_errors.py::test_setup_failure - AssertionError: asser...
FAILED tests/test_check_errors.py::test_teardown_failure - AssertionError: as...
FAILED tests/test_check_errors.py::test_mix - AssertionError: assert {'errors...
FAILED tests/test_check_func_decorator.py::test_fail - AssertionError: assert...
FAILED tests/test_maxfail.py::test_check_maxfail_1 - AssertionError: assert {...
FAILED tests/test_maxfail.py::test_check_maxfail_2 - AssertionError: assert {...
ERROR tests/test_check_fixture.py::test_check_fixture
==================== 15 failed, 24 passed, 1 error in 0.60s ====================
E: pybuild pybuild:367: test: plugin distutils failed with: exit code=1: cd /build/pytest-check-1.0.4/.pybuild/cpython3_3.9_pytest-check/build; python3.9 -m pytest tests
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13

Best

Setup docs under Sphinx

I'll make a PR to help with this; I want to make a checklist of things that need to be done as well as have some discussion.

  • [ ] Initial docs pages using MyST for Markdown
  • [ ] api.md with sphinx.ext.autodoc
  • [ ] GHA for publishing docs

Questions

  • Where do you want these hosted?
  • Is everything "public"?
  • Related: want to add some good docstrings, perhaps with the Google style (for sphinx.ext.napoleon formatting)
  • You don't currently have type hints; if you plan to add them, I can set up the autodoc extension for that

Failure message color in pytest-html ?

Hello. I found that pytest-check output does not get the classic red color after exporting stdout to HTML using pytest-html.
Below are examples of the same run with pytest-check and pytest-assume.

here is the grey color of pytest-check:
(screenshot omitted)

compared with pytest-assume, here is the red color:
(screenshot omitted)

Nothing changed between these screenshots other than swapping with assume: for with check:.
Do you have any option or suggestion for how to get the classic red color? Thx

pytest-check fails with pytest's --maxfail option

pytest-check can not be used in combination with the --maxfail option because @check_func seems to have no effect and AssertionErrors are not caught.

minimal example

Use example from readme as test_check.py:

def test_example(check):
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")

start with pytest.exe --maxfail=10 test_check.py

result

================================================ test session starts =================================================
platform win32 -- Python 3.7.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: XXXX
plugins: pytest_check-1.0.1, cov-2.12.1
collected 1 item

test_check.py F                                                                                                [100%]

====================================================== FAILURES ====================================================== 
____________________________________________________ test_example ____________________________________________________ 

check = <module 'pytest_check.check_methods' from 'XXXX\\lib\\site-packages\\pytest_check\\check_methods.py'>

    def test_example(check):
        a = 1
        b = 2
        c = [2, 4, 6]
>       check.greater(a, b)

test_check.py:5:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 1, b = 2, msg = ''

    @check_func
    def greater(a, b, msg=""):
>       assert a > b, msg
E       AssertionError: 
E       assert 1 > 2

XXXX\lib\site-packages\pytest_check\check_methods.py:175: AssertionError
============================================== short test summary info ============================================== 
FAILED test_check.py::test_example - AssertionError:
================================================= 1 failed in 0.18s ================================================= 

Same results on Linux (docker) with Python 3.9.

return True if assert passed

When a pytest_check check fails it returns False; when it passes it returns None. It would be nice if it returned True when the check passes.
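A small wrapper sketch illustrating the requested behaviour (equal_ok is a hypothetical name, not part of pytest-check):

import pytest_check as check

def equal_ok(a, b, msg=""):
    # Record the check with pytest-check as usual, and also report the
    # result back to the caller as a boolean.
    check.equal(a, b, msg)
    return a == b

def test_example():
    if not equal_ok(1, 2):
        print("skipping dependent checks")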

why am I getting a no attribute 'equal' error?

Traceback (most recent call last):
  File "c:/projects/xyz/1.py", line 63, in <module>
    test()
  File "c:/projects/xyz/1.py", line 62, in test
    print(check.equal(5,6))
AttributeError: 'CheckContextManager' object has no attribute 'equal'

Flaky and pytest-check don't fail when used together

Hi,

I've also filed this under box/flaky#162, but since the issue arises only when using both plugins together, I'm not sure where the responsibilities lie, so flagging it with both projects to be sure.

This simple case fails as expected:

from pytest_check import check
import pytest

@pytest.mark.flaky
def test_foo():
    # with check:
        assert 11 == 10

But uncomment the with check and the test now "passes".

The "Failed Checks: 1" output disappears.

The "Flaky Test Report" in the console prints test_foo passed 1 out of the required 1 times. Success!, whereas the pytest final line says no tests ran instead of either the red 1 failed (expected) or the green 1 passed (which would be more consistent with the flaky report).

The exit code from pytest is 0 (i.e., tests passing) instead of either 1 (expected) or 5 (which would be more consistent with the no tests ran printout).

This was using python 3.7.3, pytest 4.6.5, pytest-check 0.3.5 and flaky 3.6.1.

Regards,
A.

Switch to flit or variant of flit

I'd like to switch the project to use pyproject.toml.
I would like to keep the src dir if possible.

I would prefer to keep tox.ini separate, since I'm not really a fan of the multi-line-string tox support in pyproject.toml.

xfail support failed with pytest 5.4

The test test_check_xfail() shows that with pytest 5.4, marking a test as @pytest.mark.xfail no longer works with check methods.

Workaround: @pytest.mark.xfail(strict=True) still works.
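For illustration, the workaround looks like this (a sketch based on the description above):

import pytest
import pytest_check as check

@pytest.mark.xfail(strict=True)  # strict xfail still behaves as expected with checks
def test_expected_to_fail():
    check.equal(1, 2)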

I haven't looked into why yet.

Define a GH Action for automated release publishing

I noticed a comment in #26 that you'd like to configure this project to automate publishing to PyPI.

If you're interested, there is a GitHub Action under the PyPA umbrella that will publish distribution files to PyPI when the action is triggered.

You can find PyPA's guide here, and here's an example of the workflow I use for flake8-annotations to build & publish a release to PyPI every time a new release is published on GitHub:

name: Publish to PyPI

on:
  release:
    types: [published]

jobs:
  build:
    name: Build dist & publish
    runs-on: ubuntu-18.04
    
    steps:
    - uses: actions/checkout@v1
    - name: Set up Python
      uses: actions/setup-python@v1
      with:
        python-version: '3.x'

    - name: Install build dependencies & build
      run: |
        python -m pip install --upgrade pip
        pip install setuptools wheel
        python setup.py sdist bdist_wheel
      
    - name: Publish package
      uses: pypa/gh-action-pypi-publish@master
      with:
        user: __token__
        password: ${{ secrets.pypi_api_token }}

Create single method to assert value and type (e.g. float != Decimal)

Hi Brian (et al.) 👋

I'm a big fan of this package, particularly when doing a lot of asserts when converting a large dict to a non primitive type/class instance. I thought about creating a PR, but thought this would be worth a discussion. I'm trying to assert that a value has been correctly cast to Decimal in a single check.

All example tests below assume the following imports:

from decimal import Decimal

import pytest_check as check

Some examples that illustrate the paradigm with the currently available options

Test 1 - This test passes, but does not check if the type is correct.

def test__use__check_equal():
    float_input = 1.25

    decimal_value = Decimal(float_input)

    check.equal(float_input, decimal_value) # Passes

Test 2 - This test passes, but requires 2 checks, which is cumbersome when you are converting a lot of values

def test__use__check_equal__and__is_instance():
    float_input = 1.25

    decimal_value = Decimal(float_input)

    check.equal(float_input, decimal_value) # Passes
    check.is_instance(decimal_value, Decimal) # Passes

Test 3 - This (understandably) fails with the message assert Decimal('1.25') is Decimal('1.25')

def test__use__check_is_():

    float_input = 1.25

    decimal_value = Decimal(float_input)

    check.is_(decimal_value, Decimal(1.25)) # Fails

This is my attempt at a proposed addition. Not great, but figured it might get the conversation rolling.

A passing test...

def test__use__check_equal_with_type__show_passing_example():

    float_input = 1.25

    decimal_value = Decimal(float_input)

    check.equal_with_type(decimal_value, Decimal(1.25)) # Passes

A failing test...

def test__use__check_equal_with_type__show_failing_example():

    float_input = 1.25

    decimal_value = Decimal(float_input)

    check.equal_with_type(decimal_value, 1.25) # Fails

Implementation might look something like

@check_func
def equal_with_type(a, b, msg=""):
    assert a == b and type(a) is type(b), msg

Is early error checking possible?

Is there now a surefire way to check for errors even before the test ends?

I can do something like this, but it seems that this is not quite the intended scenario for using pytest-check

import pytest_check as check
from pytest_check.check_methods import get_failures


def test_some():
    check.equal(1, 2, msg="Numbers are not equal")
    check.equal(2, 1, msg="Numbers are not equal")
    assert not get_failures()
    raise NotImplementedError

Assertion failures are not reported if check failures are present

Hi,

Not sure if this is similar to #43, as the description is quite succinct.

When a test contains both regular assertions and checked assertions, and both fail, only the checked assertions are reported. The regular assertion (which may very well be the root issue) is not reported, and may therefore be considered passing by the user.

Example:

from pytest_check import check

def test_one():
    with check:
       assert 1 == 2
    assert 2 == 3 

Results in:

test.py F                                                                                                                                                                           [100%]

======================================================================================== FAILURES =========================================================================================
________________________________________________________________________________________ test_one _________________________________________________________________________________________
FAILURE: assert 1 == 2
test.py:5 in test_one() -> assert 1 == 2
------------------------------------------------------------
Failed Checks: 1
================================================================================ 1 failed in 0.02 seconds =================================================================================

I would have expected this output (or something similar) in addition to the existing output:

    def test_one():
        with check:
           assert 1 == 2
>       assert 2 == 3
E       assert 2 == 3

test.py:6: AssertionError

Versions tested: pytest-4.6.5 + pytest_check-0.3.6

Cheers,
A.

Writing tests against testdir.runpytest() and local plugin (no tox)

Hi, I found this project after reading your great Python Testing with pytest book.

After downloading the project locally, I was able to run tox and have all the tests pass. However, when I try to run the tests using a local python3 instance (I'm using an activated conda env), the "integration" style tests that inject Python scripts using the testdir fixture all fail.

For example, I wrote this test trying to reproduce #43 and #44:

def test_mix_of_checks_and_asserts_are_reported_ok(testdir):
    testdir.makepyfile(
        """
        from pytest_check import check

        def test_mix():
            with check:
                assert 1 == 2
            assert 2 == 3    
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(failed=1, passed=0)
    # code to traverse the test report output and verify that both assertions failed 

When using a local (non-tox) pytest (in my case, in the conda environment ./envs/default/bin), for example:

PYTHONPATH=src envs/default/bin/pytest --trace-config tests/test_check_context_manager.py::test_mix

The trace-config output shows me that the pytest-check plugin is being loaded, but it isn't being loaded in the test run launched via testdir.runpytest. Can I get any of the tests that use testdir to pass without using tox, and instead use a project-local pytest instance run from the command line?

Thank you!

Question. Is it possible to display assertion errors in allure?

Hi @okken First of all, thank you for the plugin! It solves the problem of complex tests that are impossible to split into steps but still need assertions at several stages.

I found that the allure report contains no information about the assertion error, but it does exist in the IDE logs. Is there any way to display it in allure?
(Screenshot omitted) This is how I expect to see errors (here I used a regular assert check).

(Screenshot omitted)
And this is how I actually see them (there is no information about the triggered assert message).

(Screenshot omitted)
IDE screenshot.

Thank you in advance.

Check as a context manager: AttributeError: __enter__

I tried to use check as a context manager and got the following error.
Python version: 3.10.5
Pytest version: 7.1.2
pytest-check version: 1.0.9

def test_multiple_failures():
    with check: assert 1 == 0
    with check: assert 1 > 2
    with check: assert 1 < 5 < 4
tests/hdz_app/test_sample.py:9 (test_multiple_failures)
def test_multiple_failures():
>       with check: assert 1 == 0
E       AttributeError: __enter__

test_sample.py:11: AttributeError

shift use of fixtures to import

The use of fixtures makes it difficult to put check calls in helper functions.

Moving to non-fixture usage will break existing code, but I think the new model is cleaner.

Idea: context manager API

So one thing in pytest-check that is reminiscent of unittest and nose is the assertion API. While it should work fine with pytest's assertion rewriter, you still need to remember what the method name is. Is it check.less or check.less_than? Is it check.equal or check.is_equal? You can never tell.

So an alternative:

with check: assert a < 1
with check: assert 1 < a < 10

Context managers can "silence" exceptions by returning True in __exit__.
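A minimal sketch of the idea (not pytest-check's implementation): a context manager that records AssertionErrors and suppresses them by returning True from __exit__.

class Check:
    def __init__(self):
        self.failures = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        if exc_type is AssertionError:
            # record the failure and silence it so the test keeps running
            self.failures.append(str(exc_value))
            return True
        return False

check = Check()

def test_example():
    with check:
        assert 1 < 0   # recorded, not raised
    with check:
        assert 2 < 10  # passes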

pytest-check does not FAIL in fixtures

Why does pytest_check NOT report failures in fixtures, while assert does?

e.g.
import pytest
import pytest_check as check

@pytest.fixture()
def before():
    check.is_true(False, "I am failed - before")
    print('\nbefore each test')

def test_1(before):
    check.is_true(False, "I am failed - test_1")
    print('test_1()')

============================= test session starts =============================
platform win32 -- Python 2.7.15, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('c:\Temp\.hypothesis\examples')
rootdir: c:\Temp, inifile:
plugins: rerunfailures-6.0, repeater-0.1.1, repeat-0.8.0, random-0.2, json-0.4.0, cov-2.6.1, check-0.3.4, assume-1.2.2, hypothesis-4.14.2
collected 1 item

test3.py F [100%]

================================== FAILURES ===================================
___________________________________ test_1 ____________________________________
FAILURE: I am failed - test_1
assert False
 +  where False = bool(False)
  test3.py, line 10, in test_1() -> check.is_true(False, "I am failed - test_1")

Failed Checks: 1
---------------------------- Captured stdout setup ----------------------------

before each test
---------------------------- Captured stdout call -----------------------------
test_1()
========================== 1 failed in 0.13 seconds ===========================

Add real-time error messages during testing

Sometimes it is useful to have information about your test asserts during execution, particularly in long-running tests. I propose adding real-time error messages using Python's logging module.
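For example, something along these lines could be layered on top today (a sketch; log_check_equal is a hypothetical helper, not part of pytest-check):

import logging

import pytest_check as check

log = logging.getLogger(__name__)

def log_check_equal(a, b, msg=""):
    # Emit the failure through logging right away (visible in real time with
    # --log-cli-level), then record it with pytest-check as usual.
    if a != b:
        log.error("check failed: %r != %r %s", a, b, msg)
    check.equal(a, b, msg)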

Other plugins might not get the failed status from pytest-check

Hi,

The allure-pytest plugin also uses the pytest_runtest_makereport hook for getting test status, and it always seems to get a pass instead of a failure from pytest-check.

Examining the code, I see that pytest-check uses the option tryfirst=True in the hookimpl decorator:

@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()

Ideally I believe that the pytest-check plugin should change the test status ASAP, in order to allow further plugins to have the correct status.

I naively assumed tryfirst=True was therefore correct; however, because the hookwrapper implementation in pytest-check only does work in the "teardown" part (after the yield), it is actually the last plugin to run that part.

Therefore, I believe it should be changed to trylast=True instead, so its teardown becomes the very first thing to be executed, ensuring the correct test status as early as possible. Changing it locally then yields the correct behaviour with allure-pytest (see the sketch below).
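The proposed change would just be the flag (shown against the snippet above; the rest of the hook is unchanged):

import pytest

@pytest.hookimpl(hookwrapper=True, trylast=True)  # trylast instead of tryfirst
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()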

I could create a PR w/ unit tests in case we agree on it. Thanks!

Would it be possible to implement this without changing assert?

Hi @okken -- thanks for this plugin and for all your contributions. I've enjoyed your podcasts for a while!

I was looking for a plugin like this, though hoping it could be activated with a command-line option rather than changing to check from assert. Do you know if that's possible in theory? For example, by adjusting pytest's rewriting of asserts to something like the check implementation here?

TBC, I'm not asking for an implementation; but I figure you may have thought about this.

Thanks!

Support f-strings in py-check custom error messages

I have a test like so (fixture handles sending requests and storing the response data):

def test_something(self):
    url = self.response.data['url']
    check.is_true(url.endswith(self.extension), f'Image quality uri for device {self.device} should end with {self.extension}')

Depending on the params used for the test, the extension and device change. This way I can feed in different parameters and still use the same test. However, the output I get is as follows:

FAILURE: Image quality uri for device android should end with .tflite
assert False
 +  where False = bool(False)
integration/tests/test_initialize.py:64 in test_initialize_returns_correct_image_uri() -> check.is_true(url.endswith(self.extension), f'Image quality uri for device {self.device} should end with {self.extension}')
------------------------------------------------------------
Failed Checks: 1

I would like the f-string to properly interpolate the string with the variables, if possible 😄

Not sure if this is a clone of #52

check_func() should be exported. I'd like to use it on my own assert functions.

The check_func() decorator is very useful on its own (as evidenced by its usage in the library itself!).
I would like to be able to decorate my own helper functions with it.

I don't want to rewrite helper assert functions from using 'asserts' to using 'expects', because I don't want to change their meaning globally. But in certain contexts I would like to make them into 'expects'.
This will greatly improve modularity and composability of my test code.

After that, it would be good to have it (at least optionally) also convert pytest's OutcomeExceptions into non-fatal errors.

when I try to compare numbers I get "IndexError: list index out of range"

when I try to compare:
pytest_check.greater(1, 2)

I get:
Traceback (most recent call last):
  File "D:\Project\autotest_vm\venv\lib\site-packages\pytest_check\check_methods.py", line 84, in wrapper
    func(*args, **kwds)
  File "D:\Project\autotest_vm\venv\lib\site-packages\pytest_check\check_methods.py", line 175, in greater
    assert a > b, msg
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Project\autotest_vm\3.py", line 2, in <module>
    pytest_check.greater(1, 2)
  File "D:\Project\autotest_vm\venv\lib\site-packages\pytest_check\check_methods.py", line 89, in wrapper
    log_failure(e)
  File "D:\Project\autotest_vm\venv\lib\site-packages\pytest_check\check_methods.py", line 304, in log_failure
    (file, line, func, context) = get_full_context(level)
  File "D:\Project\autotest_vm\venv\lib\site-packages\pytest_check\check_methods.py", line 292, in get_full_context
    (_, filename, line, funcname, contextlist) = inspect.stack()[level][0:5]
IndexError: list index out of range

Why am I getting "During handling of the above exception, another exception occurred"?
