asottile / detect-test-pollution
a tool to detect test pollution
License: MIT
In my test suite, I have some tests that hang due to test pollution, i.e., they neither complete nor fail.
I know about pytest-timeout and was wondering if there is a way to combine it with this tool.
In particular, if the failing test is segfaulting, then pytest will not get an opportunity to write out the results JSON.
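Since detect-test-pollution appears to drive pytest under the hood, one untested idea is forwarding a timeout through PYTEST_ADDOPTS (this assumes the pytest-timeout plugin is installed; the `...` placeholders stand for the usual arguments):

```shell
# Assumption: pytest-timeout is installed, so a hanging test fails after
# 60 seconds instead of blocking the bisection forever.
PYTEST_ADDOPTS='--timeout=60' detect-test-pollution --failing-test ... --tests ...
```

Note this would not help with segfaults, where the pytest process dies before it can write its report at all.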
In order to use detect-test-pollution on a project with pytest-randomly just now, I had to patch it to remove the -p no:randomly option it automatically adds. This is because I wanted to use the reseeding capability of pytest-randomly, to ensure that randomly generated data was the same, just in case that was related to the test failures.
Would it be sensible to drop -p no:randomly, and instead recommend putting --randomly-dont-reorganize in pytest.ini for those using pytest-randomly?
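The suggested configuration would be a minimal sketch like:

```ini
# pytest.ini — keep pytest-randomly's reseeding but disable its test
# reordering, so detect-test-pollution controls the order itself.
[pytest]
addopts = --randomly-dont-reorganize
```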
Sometimes a test only passes due to pollution and fails on its own.
It would be nice to have an option for this case (currently I just edit the test to invert the error).
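Until such an option exists, the manual inversion can be done without rewriting the assertion itself, e.g. with pytest.raises. A hedged sketch (the names and the `shared_state` variable here are made up for illustration):

```python
import pytest

shared_state = 'clean'  # imagine a polluting test mutates this to 'polluted'

def original_assertion():
    # the check that only passes after pollution has occurred
    assert shared_state == 'polluted'

def test_inverted():
    # invert the outcome: this test now fails exactly when the original
    # would have passed, so the tool can bisect on the inverted failure
    with pytest.raises(AssertionError):
        original_assertion()
```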
Was browsing the code and came across this comment:
Wanted to point out that https://github.com/pytest-dev/pytest-reportlog is meant to support this use case exactly. If any feature you need is missing, should be easy to enhance that plugin too.
Hi @asottile, thank you for another awesome project!
I am getting this error when running with the --testids-file option:
root@01c459141970:/path/to/project# pipenv run detect-test-pollution --failing-test 'project/tests/tests.py::Test::test' --testids-file './testidsfile.txt'
discovering all tests...
-> pre-discovered 100 tests!
ensuring test passes by itself...
-> OK!
ensuring test fails with test group...
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/project-Qkx4pV5j/bin/detect-test-pollution", line 8, in <module>
sys.exit(main())
File "/root/.local/share/virtualenvs/project-Qkx4pV5j/lib/python3.9/site-packages/detect_test_pollution.py", line 299, in main
return _bisect(testpath, args.failing_test, testids)
File "/root/.local/share/virtualenvs/project-Qkx4pV5j/lib/python3.9/site-packages/detect_test_pollution.py", line 224, in _bisect
if _passed_with_testlist(testpath, failing_test, testids):
File "/root/.local/share/virtualenvs/project-Qkx4pV5j/lib/python3.9/site-packages/detect_test_pollution.py", line 136, in _passed_with_testlist
with open(results_json) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp4c4aahsz/results.json'
I will try to make a reproducible example on the weekend. In the meantime I was hoping maybe you have some clues what could be the problem here.
There can be a setup where the temporary directory shares an ancestor with the current working directory:
pytest -vv tests/detect_test_pollution_test.py::test_discover_tests
================================ test session starts =================================
platform linux -- Python 3.12.2, pytest-8.1.2, pluggy-1.5.0 -- /usr/src/RPM/BUILD/python3-module-detect-test-pollution-1.2.0/.run_venv/bin/python3
cachedir: .pytest_cache
rootdir: /usr/src/RPM/BUILD/python3-module-detect-test-pollution-1.2.0
collected 1 item
tests/detect_test_pollution_test.py::test_discover_tests FAILED [100%]
====================================== FAILURES ======================================
________________________________ test_discover_tests _________________________________
tmp_path = PosixPath('/usr/src/tmp/pytest-of-builder/pytest-12/test_discover_tests0')
def test_discover_tests(tmp_path):
f = tmp_path.joinpath('t.py')
f.write_text('def test_one(): pass\ndef test_two(): pass\n')
> assert _discover_tests(f) == ['t.py::test_one', 't.py::test_two']
E AssertionError: assert ['tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py::test_one', 'tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py::test_two'] == ['t.py::test_one', 't.py::test_two']
E
E At index 0 diff: 'tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py::test_one' != 't.py::test_one'
E
E Full diff:
E [
E - 't.py::test_one',
E - 't.py::test_two',
E + 'tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py::test_one',
E + 'tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py::test_two',
E ]
tests/detect_test_pollution_test.py:140: AssertionError
-------------------------------- Captured stdout call --------------------------------
tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py: 2
============================== short test summary info ===============================
FAILED tests/detect_test_pollution_test.py::test_discover_tests - AssertionError: assert ['tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.p...
In this case pytest's rootdir is /usr/src and TMPDIR is /usr/src/tmp.
Exactly the same happens to tests/detect_test_pollution_test.py::test_passed_with_testlist_failing and tests/detect_test_pollution_test.py::test_passed_with_testlist_passing.
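The mismatch can be seen in isolation: pytest appears to report node ids relative to its rootdir, so when TMPDIR sits under the rootdir the ids carry the whole relative path. The paths below are copied from the failing run above:

```python
import os

# rootdir /usr/src, tmpdir under /usr/src/tmp — as in the failing run
rootdir = '/usr/src'
test_file = '/usr/src/tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py'

# the node id prefix becomes the path relative to rootdir, not plain 't.py'
node_prefix = os.path.relpath(test_file, rootdir)
print(node_prefix)  # tmp/pytest-of-builder/pytest-12/test_discover_tests0/t.py
```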
When trying to debug a flaky test, I got the following output
$ detect-test-pollution --failing-test ... --tests ...
discovering all tests...
-> discovered 83 tests!
ensuring test passes by itself...
-> OK!
ensuring test fails with test group...
-> OK!
running step 1:
- 82 tests remaining (about 7 steps)
running step 2:
- 41 tests remaining (about 6 steps)
running step 3:
- 21 tests remaining (about 5 steps)
running step 4:
- 11 tests remaining (about 4 steps)
running step 5:
- 6 tests remaining (about 3 steps)
running step 6:
- 3 tests remaining (about 2 steps)
running step 7:
- 2 tests remaining (about 1 steps)
double checking we found it...
Traceback (most recent call last):
File "/usr/local/bin/detect-test-pollution", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/detect_test_pollution.py", line 289, in main
return _bisect(testpath, args.failing_test, testids)
File "/usr/local/lib/python3.9/site-packages/detect_test_pollution.py", line 241, in _bisect
raise AssertionError('unreachable? unexpected pass? report a bug?')
AssertionError: unreachable? unexpected pass? report a bug?
At this point, I cannot tell which tests were considered, and I'm no wiser about the test pollution than I was before using the tool.
I've tried passing options to pytest using PYTEST_ADDOPTS (PYTEST_ADDOPTS="-vvv" detect-test-pollution --failing-test ... --tests ...), but that didn't help.
In the past I recall having pollution only when 3 tests were run in a given order, so it is totally fine if the tool cannot find it, but it should still help the user.
Some solution ideas that come to mind would be:
- forwarding -vvv to pytest
- a --verbose flag (or -vvvv style) that, under the "tests remaining (about steps)" message, prints out the testids considered

but of course there might be other alternatives I haven't considered.
Hi there! thanks for the plugin, very helpful!
Just wondering: is there any way to handle the case where you have tests stored in different directories? I'm trying to identify the origins of some test pollution with a layout that looks like:
packagename
├── __init__.py
├── module1
│ ├── __init__.py
│ ├── module1.py
│ ├── tests
│ │ ├── __init__.py
│ │ ├── test_a.py
│ │ ├── test_b.py
├── module2
│ ├── __init__.py
│ ├── module2.py
│ ├── tests
│ │ ├── __init__.py
│ │ ├── test_c.py
│ │ ├── test_d.py
I can reproduce the failure when running plain pytest with just
pytest packagename/module1/tests/ packagename/module2/tests/test_d.py::failing_test_id
but to use detect-test-pollution, the following would result in a "failing test was not part of discovered tests!" error:
detect-test-pollution --failing-test packagename/module2/tests/test_d.py::failing_test_id --tests packagename/module1/tests/
while I could use
detect-test-pollution --failing-test packagename/module2/tests/test_d.py::failing_test_id --tests packagename/
I wanted to take advantage of the manual bisecting that I had already done. I ended up just copying test_d.py over to packagename/module1/tests/, but it'd be nice if I could avoid that (i.e., if --tests could accept multiple paths).
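Supporting this would likely be a small argparse change; a hedged sketch (not the tool's actual CLI code):

```python
import argparse

# Sketch: letting --tests be given multiple times via action='append',
# so each occurrence adds another path to a list.
parser = argparse.ArgumentParser(prog='detect-test-pollution')
parser.add_argument('--failing-test')
parser.add_argument('--tests', action='append')

args = parser.parse_args([
    '--failing-test', 'packagename/module2/tests/test_d.py::failing_test_id',
    '--tests', 'packagename/module1/tests/',
    '--tests', 'packagename/module2/tests/test_d.py',
])
print(args.tests)  # both paths are collected into a list
```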
detect-test-pollution seemed to hang the first time I ran it. I had left in a breakpoint() like a dummy, but with no output I couldn't see this. Perhaps output could be enabled by default, and optionally turned off? Some formatting could make detect-test-pollution's output stand out from pytest's.
The title says it.
I have a test suite that runs into flaky test pollution.
While reworking the test set is one option, I was wondering if you have encountered such a case and if so, do you have a strategy to detect it with this tool?
While debugging a fix for #93, I have to wait for it to verify that, yes, the entire test suite fails with the problem, even though I already know there is a problem. It would be nice to be able to restart bisection at the point I had previously reached.
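One stopgap, assuming the --testids-file option (shown in an earlier issue) accepts an arbitrary list with one id per line: persist the ids still under suspicion and restart from that checkpoint. A sketch with hypothetical file names:

```python
import os
import tempfile

# Hypothetical checkpoint: the testids still under suspicion, written
# one per line (assumed format for --testids-file).
remaining = ['t.py::test_one', 't.py::test_two']

path = os.path.join(tempfile.mkdtemp(), 'testids-checkpoint.txt')
with open(path, 'w') as f:
    f.write('\n'.join(remaining) + '\n')

# later: detect-test-pollution --failing-test ... --testids-file <path>
with open(path) as f:
    print(f.read().splitlines())  # the saved ids round-trip intact
```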