pytest-watch's Introduction

pytest-watch -- Continuous pytest runner

pytest-watch is a zero-config CLI tool that runs pytest and re-runs it whenever a file in your project changes. It beeps on failures and can run arbitrary commands on each passing or failing test run.

Motivation

Whether or not you use the test-driven development method, running tests continuously is far more productive than waiting until you're finished programming to test your code. Additionally, manually running pytest each time you want to see if any tests were broken has more wait-time and cognitive overhead than merely listening for a notification. This could be a crucial difference when debugging a complex problem or on a tight deadline.

Installation

$ pip install pytest-watch

Usage

$ cd myproject
$ ptw
 * Watching /path/to/myproject

Note: It can also be run using its full name pytest-watch.

Now develop normally and check the terminal every now and then to see if any tests are broken. Alternatively, pytest-watch can notify you when tests pass or fail:

  • OSX

    $ ptw --onpass "say passed" --onfail "say failed"

    $ ptw --onpass "growlnotify -m \"All tests passed!\"" \
          --onfail "growlnotify -m \"Tests failed\""

    using GrowlNotify.

  • Windows

    > ptw --onfail flash

    using Console Flash.
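
  • Linux

    A similar effect should be possible with libnotify's notify-send (an untested sketch; assumes notify-send is installed):

    $ ptw --onpass "notify-send 'Tests passed'" --onfail "notify-send 'Tests failed'"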

You can also run a command before the tests run, e.g. seeding your test database:

$ ptw --beforerun init_db.py

Or after they finish, e.g. deleting a sqlite file. Note that this script receives the exit code of pytest as an argument.

$ ptw --afterrun cleanup_db.py
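
For instance, a minimal cleanup_db.py could look like this (a sketch; the script name comes from the example above and the database path is an assumption):

# cleanup_db.py (sketch)

import os
import sys

# ptw passes pytest's exit code as the first argument.
exit_code = int(sys.argv[1]) if len(sys.argv) > 1 else 0
print('pytest exited with code:', exit_code)

# Remove the throwaway sqlite file, if present (the path is illustrative).
if os.path.exists('test.sqlite3'):
    os.remove('test.sqlite3')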

You can also use a custom runner script for full pytest control:

$ ptw --runner "python custom_pytest_runner.py"

Here's a minimal runner script that runs pytest and prints its exit code:

# custom_pytest_runner.py

import sys
import pytest

# Forward any arguments ptw passes to this runner straight to pytest.
print('pytest exited with code:', pytest.main(sys.argv[1:]))

Need to exclude directories from being observed or collected for tests?

$ ptw --ignore ./deep-directory --ignore ./integration_tests

See the full list of options:

$ ptw --help
Usage: ptw [options] [--ignore <dir>...] [<directory>...] [-- <pytest-args>...]

Options:
  --ignore <dir>        Ignore directory from being watched and during
                        collection (multi-allowed).
  --ext <exts>          Comma-separated list of file extensions that can
                        trigger a new test run when changed (default: .py).
                        Use --ext=* to allow any file (including .pyc).
  --config <file>       Load configuration from `file` instead of trying to
                        locate one of the implicit configuration files.
  -c --clear            Clear the screen before each run.
  -n --nobeep           Do not beep on failure.
  -w --wait             Waits for all tests to complete before re-running.
                        Otherwise, tests are interrupted on filesystem events.
  --beforerun <cmd>     Run arbitrary command before tests are run.
  --afterrun <cmd>      Run arbitrary command on completion or interruption.
                        The exit code of "pytest" is passed as an argument.
  --onpass <cmd>        Run arbitrary command on pass.
  --onfail <cmd>        Run arbitrary command on failure.
  --onexit <cmd>        Run arbitrary command when exiting pytest-watch.
  --runner <cmd>        Run a custom command instead of "pytest".
  --pdb                 Start the interactive Python debugger on errors.
                        This also enables --wait to prevent pdb interruption.
  --spool <delay>       Re-run after a delay (in milliseconds), allowing for
                        more file system events to queue up (default: 200 ms).
  -p --poll             Use polling instead of OS events (useful in VMs).
  -v --verbose          Increase verbosity of the output.
  -q --quiet            Decrease verbosity of the output (precedence over -v).
  -V --version          Print version and exit.
  -h --help             Print help and exit.

Configuration

CLI options can be added to a [pytest-watch] section in your pytest.ini file to persist them in your project. For example:

# pytest.ini

[pytest]
addopts = --maxfail=2


[pytest-watch]
ignore = ./integration-tests
nobeep = True

Alternatives

  • xdist offers the --looponfail (-f) option (along with distributed testing options). It re-runs only the tests that have failed until you make them pass. This can be a speed advantage when trying to get all tests passing, but it postpones the discovery of new failures until then. It also drops the colors output by pytest, whereas pytest-watch doesn't.
  • Nosey is the original codebase this was forked from. Nosey runs nose instead of pytest.

Contributing

  1. Check the open issues or open a new issue to start a discussion around your feature idea or the bug you found
  2. Fork the repository, make your changes, and add yourself to Authors.md
  3. Send a pull request

If you want to edit the README, be sure to make your changes to README.md and run the following to regenerate the README.rst file:

$ pandoc -t rst -o README.rst README.md

If your PR has been waiting a while, feel free to ping me on Twitter.

Use this software often? Say Thanks! 😃

pytest-watch's People

Contributors

aldanor, asford, avallbona, bendtherules, blueyed, carsongee, casio, coltonprovias, jacebrowning, joeyespo, lukaszb, rakjin, remcohaszing, suggiro, touilleman, xsteve

pytest-watch's Issues

File interlocking

My emacs creates backup and interlock files. I moved the backups to another directory. Interlock files are harder to manage, but I don't want to disable them. As a result, ptw runs twice from time to time, because, for example, .#zzz.py appears in the project directory. What can I do?

Consider moving under pytest-dev?

Hi,

Would you like to move the plugin under the pytest-dev org? The idea of such a move is just to increase the visibility of the plugin and eventually share maintenance, but as the original author you still retain full control over the implementation and oversight of the plugin. You can read more in Submitting plugins to pytest-dev.

Cheers!

Error with `--collect-only` is not displayed verbatim: RuntimeError: populate() isn't reentrant

The traceback after --collect-only fails is not the same as when calling py.test --collect-only directly:

With ptw project-dir:

Error: Could not run --collect-only to find the pytest config file. Trying again without silencing stdout...
Traceback (most recent call last):
  File "…/pyenv/project/bin/ptw", line 9, in <module>
    load_entry_point('pytest-watch', 'console_scripts', 'ptw')()
  File "…/pytest-watch/pytest_watch/command.py", line 83, in main
    if not merge_config(args, pytest_args, verbose=args['--verbose']):
  File "…/pytest-watch/pytest_watch/config.py", line 86, in merge_config
    config_path = _collect_config(pytest_args, silent)
  File "…/pytest-watch/pytest_watch/config.py", line 78, in _collect_config
    return _run_pytest_collect(pytest_args)
  File "…/pytest-watch/pytest_watch/config.py", line 52, in _run_pytest_collect
    exit_code = pytest.main(argv, plugins=[collect_config_plugin])
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 39, in main
    config = _prepareconfig(args, plugins)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 118, in _prepareconfig
    pluginmanager=pluginmanager, args=args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
    wrap_controller.send(call_outcome)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/helpconfig.py", line 28, in pytest_cmdline_parse
    config = outcome.get_result()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
    raise ex[1].with_traceback(ex[2])
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
    self.result = func()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 861, in pytest_cmdline_parse
    self.parse(args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 966, in parse
    self._preparse(args, addopts=addopts)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 937, in _preparse
    args=args, parser=self._parser)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
    return call_outcome.get_result()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
    raise ex[1].with_traceback(ex[2])
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
    self.result = func()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 238, in pytest_load_initial_conftests
    _setup_django()
  File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 134, in _setup_django
    django.setup()
  File "…/django/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "…/django/django/apps/registry.py", line 78, in populate
    raise RuntimeError("populate() isn't reentrant")
RuntimeError: populate() isn't reentrant

With py.test --collect-only project-dir:

Traceback (most recent call last):
  File "…/pyenv/project/bin/py.test", line 11, in <module>
    sys.exit(main())
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 39, in main
    config = _prepareconfig(args, plugins)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 118, in _prepareconfig
    pluginmanager=pluginmanager, args=args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
    wrap_controller.send(call_outcome)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/helpconfig.py", line 28, in pytest_cmdline_parse
    config = outcome.get_result()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
    raise ex[1].with_traceback(ex[2])
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
    self.result = func()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 861, in pytest_cmdline_parse
    self.parse(args)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 966, in parse
    self._preparse(args, addopts=addopts)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/config.py", line 937, in _preparse
    args=args, parser=self._parser)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
    return call_outcome.get_result()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
    raise ex[1].with_traceback(ex[2])
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
    self.result = func()
  File "…/pyenv/project/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 238, in pytest_load_initial_conftests
    _setup_django()
  File "…/pyenv/project/lib/python3.5/site-packages/pytest_django/plugin.py", line 134, in _setup_django
    django.setup()
  File "…/django/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "…/django/django/apps/registry.py", line 108, in populate
    app_config.import_models(all_models)
  File "…/django/django/apps/config.py", line 202, in import_models
    self.models_module = import_module(models_module_name)
  File "…/pyenv/project/lib/python3.5/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 986, in _gcd_import
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 658, in exec_module
  File "<frozen importlib._bootstrap_external>", line 764, in get_code
  File "<frozen importlib._bootstrap_external>", line 724, in source_to_code
  File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
  File "…/app/models.py", line 2082
    import bpdb
         ^
SyntaxError: invalid syntax

Installation fails when using Windows 10, Python 3.6 and Pipenv 9.0.3

I tried installing the package using Windows 10, Python 3.6, and Pipenv 9.0.3, but it failed with the following error:

Error:  An error occurred while installing pytest-watch!
Exception:
Traceback (most recent call last):
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\basecommand.py", line 215, in main
    status = self.run(options, args)
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\commands\install.py", line 342, in run
    prefix=options.prefix_path,
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_set.py", line 784, in install
    **kwargs
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_install.py", line 851, in install
    self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\req\req_install.py", line 1064, in move_wheel_files
    isolated=self.isolated,
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\wheel.py", line 247, in move_wheel_files
    prefix=prefix,
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\pip\locations.py", line 153, in distutils_scheme
    i.finalize_options()
  File "c:\users\jklmustoju\.virtualenvs\projectx-oczq1phh\lib\site-packages\setuptools\command\install.py", line 38, in finalize_options
    orig.install.finalize_options(self)
  File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\command\install.py", line 346, in finalize_options
    'userbase', 'usersite')
  File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\command\install.py", line 487, in convert_paths
    setattr(self, attr, convert_path(getattr(self, attr)))
  File "c:\users\jklmustoju\appdata\local\programs\python\python36\Lib\distutils\util.py", line 125, in convert_path
    raise ValueError("path '%s' cannot be absolute" % pathname)
ValueError: path '/Lib/site-packages' cannot be absolute

Suggestion: use global (/local) config file for default settings

Just an idea.

I found myself using the same set of options over and over, e.g. like so:

$ ptw --nobeep --poll --ignore=.tox -- \
    --import . tests -sv \
    --cov=some.package --cov-report=term \
    --cov-report=html --cov-config=tox.ini

Maybe we could support loading the defaults from ~/.ptwrc, which could look like this:

[pytest-watch]
nobeep = true
poll = true
ignore = .tox
addopts = --import . --cov-report=term --cov-report=html --cov-config=tox.ini -sv

and then the command above would be shortened to just

$ ptw -- --cov=some.package  # default options, with coverage
$ ptw                        # default options, no coverage

Maybe it could also look for a local ./.ptwrc file first (in the current folder).

Tries collection twice when aborted during it

When ptw is aborted during the collection phase, it does not quit but retries collection:

% ptw ...
^CError: Could not run --collect-only to find the pytest config file. Trying again without silencing stdout..

I think it should exit on SIGINT here.

--last-failed and retest on success

I'd love some guidance on how to get pytest --last-failed to play nicely with ptw. Right now, if the last failed tests succeed, ptw sits patiently until the next file change.

What I'd like is for it to run normally with last-failed, but when all of the last-failed tests succeed, then rerun all tests. Any suggestions?
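
One possible approach (a sketch, not a built-in feature) is a custom --runner script that runs the previous failures first and falls back to the full suite once they pass. This assumes a pytest version that supports --last-failed-no-failures:

# last_failed_runner.py (sketch) -- use with: ptw --runner "python last_failed_runner.py"

import subprocess
import sys

# Run only the previously failing tests; collect nothing if there are none.
code = subprocess.call(['py.test', '--last-failed',
                        '--last-failed-no-failures', 'none'] + sys.argv[1:])

# 0 means the old failures now pass; 5 means nothing was collected.
# Either way, fall back to the full suite to discover new failures.
if code in (0, 5):
    code = subprocess.call(['py.test'] + sys.argv[1:])
sys.exit(code)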

ini path collection doesn't respect pytest options -c and [file_or_dir]

From pytest docs:

usage: py.test [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
file_or_dir

general:
-c file - load configuration from file instead of trying to locate one of the implicit configuration files.

Both the file_or_dir and -c options, when supplied, can effectively change the directory tests get loaded from.
ptw should also try to locate the pytest.ini file from there, which it doesn't, since the args list is not passed while locating the ini path. Maybe we should do that?

Solution 1: Pass the whole arg list, along with --collect-only

One problem is that some plugins (like pytest-cov) are a bit odd (or maybe they have their reasons) and don't respect the --collect-only flag. So if we pass all the args, it might not be as much of a dry run as we would like, and could have side effects (e.g., pytest-cov generates coverage for the tests collected).
On the other hand, it might not be too much of a side effect, since we are going to run the tests anyway in the next step. But we should still mention this in the docs, lest someone wants to disable it.

Solution 2: Parse the arg list and pass in only these two options

This is the most desirable result, but parsing and trying to guess the important parameters makes ptw less robust and prone to breaking with pytest changes, not to mention the code duplication from pytest. A sketch of such parsing follows.
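
For illustration, a naive version of that parsing might look like this (the function name is hypothetical):

def extract_config_args(pytest_args):
    """Keep only -c <file> and positional file_or_dir arguments (sketch)."""
    keep = []
    take_next = False
    for arg in pytest_args:
        if take_next:
            keep.append(arg)
            take_next = False
        elif arg == '-c':
            keep.append(arg)
            take_next = True
        elif not arg.startswith('-'):
            # Naive: an option that takes a separate value (e.g. "-k expr")
            # would wrongly classify its value as a file_or_dir positional.
            keep.append(arg)
    return keep

# extract_config_args(['-c', 'custom.ini', '-sv', 'tests'])
# -> ['-c', 'custom.ini', 'tests']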

Loops endlessly after one change was detected in Boxcryptor environment

I just tried this on my project and it looks good initially. However, once I update any file (i.e. re-save), pytest-watch just performs test after test after test ... until I ctrl-c.

It should definitely not do that.

There is either a loop in the tool itself or it detects changes to files that are generated during the build process and thus always attempts to rebuild.

The latter is unlikely, however, since it does not start looping until the very first change is detected. Furthermore, I checked which files had changed in the directory since the "original" run of pytest-watch, and it was only the file I actually changed plus its corresponding __pycache__/*.pyc file. And that file is very likely not regenerated if the source file didn't change.

It would be nice if the changed file was displayed in the console before the command is rerun.

Environment:
Windows7, Python 3.5.0 (64-bit), pytest 2.8.3, py 1.4.31, pytest-watch 3.8.0, watchdog 0.8.3

Running with django

There is an issue with pytest-watch running with django. I think it is related to the #21 issue.

While developing a Django project, I sometimes get "test database already exists" when using pytest-watch. I think it's because pytest-watch in some circumstances launches a second py.test copy. So we need some sort of locking mechanism, because otherwise it's really hard to use.

Add tests

There's a growing need to be sure the CLI arguments all play nicely together, that the tool is cross-platform, and that it runs on Python 2.7 and 3.4+.

Funny enough, I'm not exactly sure how to approach it here.

Ideas?

Might exit silently with exit code 2 (in merge_config)

merge_config might make ptw exit silently with code 2, when there are missing pytest plugins:

Removing the with silence() wrapper shows this:

usage: ptw [options] [file_or_dir] [file_or_dir] [...]
ptw: error: unrecognized arguments: --reuse-db --dc=Tester
inifile: …/project/setup.cfg
rootdir: …/project/project

Source:

with silence():
    pytest.main(['--collect-only'], plugins=[collect_config])

It would be good to display the errors from py.test in this case.

This issue was triggered when I tried to use pytest-watch installed globally using https://github.com/mitsuhiko/pipsi/.

-v to increase verbosity

I'm trying to use -v to increase the verbosity of pytest's output in order to get a full diff:

 Use -v to get the full diff 

With pytest -v it works fine; I get the full diff. But when I use -v with ptw, I don't get the full diff.

Add --onstart similar to --onexit

ptw currently has --onexit but no --onstart; there should be no reason for this asymmetrical design. There is already --beforerun, but it runs before every pytest run, so it isn't a good candidate for initialising something just once.

It can be useful for starting things like an external notifier that should run for the whole duration of ptw.

Adding it should be trivial.

Regression: passes watch dir to py.test

With ptw dir -- path/to/test_file.py ptw will include the dir in the py.test run:

Running: py.test dir path/to/test_file.py

This, however, causes py.test to run the whole directory, not only the specified test file.

This was not the case with 3.10.0, which does:

Running: py.test path/to/test_file.py

btw: I found the best method is to specify testpaths = path/to/tests with the pytest options in setup.cfg, which gets used when running just py.test.

If this was done intentionally, I think it should only happen when no py.test args are provided, but even that feels a bit too magical.

Tests directory name gets autocompleted

My project has a directory tests containing the unit tests I want to run with pytest. It also has a directory tests_functional, which I don't want to run continuously.

py.test tests/

will run only my unit tests, while

ptw tests/

will also run everything in tests_functional, even with:

ptw --ignore ./tests_functional ./tests

I don't see any way to run only what is in the tests folder. Moreover, I would expect a pytest extension to gather tests the same way pytest (and pytest-xdist) does.

merge_config might fail when using dirs outside of cwd

The following fails for me (silently, with exit code 2), similar to #43.

ptw project ~/.pyenv/versions/3.5.1/lib/python3.5/unittest/ -- --custom-option

Running it directly through py.test:

% py.test project ~/.pyenv/versions/3.5.1/lib/python3.5/unittest --custom-option http://localhost:9000/
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: --custom-option http://localhost:9000/
  inifile: None
  rootdir: /home/user

It looks like py.test does not handle dirs outside of cwd properly, and it should not use plugins from there anyway.

Therefore the watch dirs should not be passed to it.

Can there be `include` as well as ignore?

I have a project where files look kind of like:

app/
  module/
    module.py
tests/
  module/
    module_test.py
  some_test.py

In my ideal usage, I would say ptw --include module*, which would watch module_test.py and module.py, because that's the current test I'm working on, and I don't want all the other tests to keep re-running as I work. Also, I'm finding that writing many --ignore flags is cumbersome, though I'm probably doing that incorrectly.

ptw fails silently on startup if there is a syntax error

With the following unit test file:

# file test_syntax.py
print("hello"

ptw will fail silently.

D:\temp\t>type test_syntax.py
print("hello"

D:\temp\t>ptw

D:\temp\t>

If I fix the syntax error, I can start ptw. If I introduce a syntax error later (after starting ptw, while it is still running), I get the expected SyntaxError exception:

Running: py.test
============================= test session starts =============================
platform win32 -- Python 3.5.3, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
rootdir: D:\temp\t, inifile:
collected 0 items / 1 errors

=================================== ERRORS ====================================
_______________________ ERROR collecting test_syntax.py _______________________
d:\tools\python35\lib\site-packages\_pytest\python.py:408: in _importtestmodule
    mod = self.fspath.pyimport(ensuresyspath=importmode)
d:\tools\python35\lib\site-packages\py\_path\local.py:662: in pyimport
    __import__(modname)
E     File "D:\temp\t\test_syntax.py", line 2
E
E       ^
E   SyntaxError: unexpected EOF while parsing
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.20 seconds ===========================

Feature request: Partial run for modified files

Hi! Checking the docs, I see that you can run part of the tests by specifying the test files or test directories. However, it would be great if something like the following could happen:

Imagine a large project in which the whole unit test suite takes several seconds to run. In this situation, it can be annoying to run the whole suite on each change. Instead, it would be great to run only the tests related to the code being modified. To make this clearer, here is an example project structure:

my-project
 |-my-project
 |   |-source1.py
 |   |-source2.py
 |-tests
     |-ut
         |-test_source1.py
         |-test_source2.py

In this case, when I modify source1 or test_source1 while developing, I would like to run only the tests in test_source1, not the ones in test_source2. I think naming the test files after the source files with a test_ prefix is a common practice and could help with this.

Do you think that this or a similar approach is possible? I'm open to creating a PR myself if you think it is appropriate. This feature would make development and the TDD cycle much faster 🙂
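
For illustration, the naming convention could map a changed file to its test module roughly like this (a sketch; the tests/ut layout comes from the example above and the function is hypothetical):

import os

def test_module_for(changed_path, tests_root=os.path.join('tests', 'ut')):
    """Map a changed source file to its test_-prefixed test module (sketch)."""
    name = os.path.basename(changed_path)
    if name.startswith('test_'):
        return changed_path  # already a test file; run it directly
    return os.path.join(tests_root, 'test_' + name)

# test_module_for('my-project/source1.py') -> 'tests/ut/test_source1.py'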

Cannot specify excluded directory patterns

If possible, I'd like to add a new flag to pytest-watch, --nowatchdirs, which matches the naming of pytest's --norecursedirs.

I've been looking at Hypothesis (http://hypothesis.readthedocs.org/en/latest/), a QuickCheck-style tool that automatically generates random inputs for test functions to ultimately make software more robust. Part of its processing involves creating modules on the fly at the root of the main pytest process, so pytest-watch picks up the new modules as they are created and never stops re-running.

If I could include:
--nowatchdirs ".hypothesis"

This would allow for such an operation. Do you have any thoughts?

Hook to be run before running pytest / configure py.test command

Instead of only calling py.test, it would be nice to be able to configure some hook to be run before it.

The workaround appears to be creating a py.test wrapper script that does this and prepending it via $PATH. But it would be more comfortable to configure the path to py.test itself, which could then be this wrapper script.
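
(The --beforerun and --runner options documented in the README above appear to cover this use case now.) For illustration, a minimal wrapper might look like this (a sketch; init_db.py is the hook name from the README example):

# pytest_wrapper.py (sketch) -- run a hook, then delegate to pytest.

import subprocess
import sys

import pytest

subprocess.call(['python', 'init_db.py'])  # the pre-run hook (illustrative)
sys.exit(pytest.main(sys.argv[1:]))        # forward all remaining arguments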

pytest-watch -- tests/specificfile.py

Things like ptw {some ptw options} work. So does ptw {some ptw options} -- -x, where the latter becomes the same as pytest -x.

But I want to test just a specific file. Without ptw it would be pytest tests/test_specificfile.py, which ignores tests/otherfile.py etc.

But this doesn't work: ptw {some ptw options} -- tests/test_specificfile.py :(

Wait for the previous pytest process to exit before re-running

The proposal is to send SIGTERM to the pytest process when pytest-watch observes a file change. It'll then wait for the process to finish before starting the new cycle.

This will solve #23 as well as any other quirky behavior that comes from having two pytest instances running the same test suite.

This could also help with #21. The underlying question is what to do when a test initiates an interactive session like pdb. If pdb ignores our termination signal, we're done. If not, we may also need a --full CLI argument to tell pytest-watch to wait out the full test session before starting the new one. Similar to how pytest automatically adds -s when you use --pdb, pytest-watch can automatically add --full when --pdb is present in pytest's argument list or in the config file.
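
In outline, the terminate-and-wait cycle might look like this (a sketch of the proposal, not pytest-watch's actual code):

import subprocess

proc = subprocess.Popen(['py.test'])

def on_file_change():
    global proc
    proc.terminate()  # SIGTERM: let pytest clean up (pdb may ignore it)
    proc.wait()       # block until the old session has fully exited
    proc = subprocess.Popen(['py.test'])  # start the new cycle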

multiple directories support

It would be great if:

ptw [options] [<directory> ...]

rather than:

ptw [options] [<directory>]

I managed to patch this for my personal use.
But I hope this could be done in a simpler way with a plugin.
How about changing your main logic:

  • from wrapping around and system-calling py.test ...
  • to being a pytest-plugin and adding entry_points={ 'pytest11': ... } to your setup.py? (like other pytest-something plugins do)

Anyway, this is my wish:

py.test --watch --onfail='...' --onpass='...' tests

for watching and testing tests/, or

py.test --watch-dir=mypackage --onfail='...' --onpass='...' tests examples

for watching mypackage/ but testing tests/ and examples/.
Or some better interface could be possible...

Thanks for reading this and releasing v2!

pytest.ini: can't specify multiple ignore directories

I'd like to ignore multiple directories in my pytest.ini, but it doesn't seem to work. When I touch a file in one of the ignored directories, ptw still starts a run.

I did try to specify multiple ignore keys (like how the CLI works), but ini files cannot have duplicated keys.

[pytest-watch]
ignore = __pycache__,junk,misc,resources,venv,wz_profiler
ext = .py,.html,.txt

pytest-watch with debugger

Quite often I invoke pytest with the --ipdb option so I can see what is wrong with my code on the first failure.

So I invoked pytest-watch with ptw -- --ipdb. The problem is that if, after dropping into ipdb, I modify a test or a file, it starts a new session and the one with the ipdb console goes to the background. In the end, I end up with dozens of ipdb instances running.

Is it possible to block ptw while the current session has dropped into ipdb?

Toggle options at runtime

Even though this project runs pytest automagically, I still need to cancel and re-run pretty often, because I like to run it with different flags.

E.g.: I like to increase / decrease verbosity or toggle pdb.

I think it would be nice to do this while pytest-watch is running using keyboard shortcuts. For example:

Pressing P could toggle pdb.
Pressing V (verbose) or Q (quiet) could switch between -q, -v, and -vv.

Rediscover new tests

If I add a test file while PTW is already running, the test is ignored by PTW unless I restart it.

It seems that you listen for changes to a list of files gathered at PTW start and do not look for new ones. I haven't checked the code yet, but I think that's the case. Maybe we should discover new tests as they appear.

Improve `--ignore`

Let's continue the discussion on how to improve the current single-level --ignore (alternatively, --norecursedirs). #6 was the original issue.

Summary

The core problem is that we need a way to exclude directory trees from the watch list. If we don't, ptw can grind to a halt walking the entire tree or, worse, use up the system's file watch resources.

Not all operating systems handle recursive watching natively, so using solutions like inotify (Linux) and kqueues (Mac) adds a new watch to each subdirectory instead of just one at the root. (Note that some systems do, like the Windows API and Mac's FSEvents.)

So we need to balance simplicity and efficiency. As mentioned above, not all platforms need a custom recursive solution, so using one in those cases hurts instead of helps.

Another idea is to shop around. If watchdog or another 3rd-party library exposes an efficient cross-platform solution (recursive watching/ignoring on Linux, and PatternMatchingEventHandler or a regex solution like #7 for Windows and Mac), we could pass it along instead of re-implementing it.
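
For illustration, watchdog's PatternMatchingEventHandler already supports glob-based ignoring, so the pass-it-along idea might look roughly like this (handler wiring only; a sketch, not ptw's actual code):

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

class RunTestsHandler(PatternMatchingEventHandler):
    def on_any_event(self, event):
        print('Change detected:', event.src_path)  # trigger a test run here

handler = RunTestsHandler(
    patterns=['*.py'],                                # only source files
    ignore_patterns=['*/.tox/*', '*/.hypothesis/*'],  # skip whole subtrees
)
observer = Observer()
observer.schedule(handler, path='.', recursive=True)
observer.start()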

Add ability to use as a pytest plugin

From the start, pytest-watch has lived outside of pytest. It reactively runs py.test based on filesystem events, but also allows you to swap that out with another testing tool with --runner.

It's also had to duplicate some behavior that comes out-of-the-box with pytest, such as inifile discovery, directory inputs, and --ignore to name a few.

Moving the bulk of ptw to an official pytest plugin might alleviate a lot of this. The project can still include the ptw and pytest-watch console scripts (I personally enjoy running ptw), however, they can forward all options to py.test instead of reimplementing them (e.g. --verbose, --quiet, --pdb, --ignore details, inifile discovery, etc). These scripts could even continue exposing the CLI arguments that don't make sense to move to a plugin, like --runner.

A pytest-watch plugin could cleanly focus on its own options (--onpass, --spool, --poll), leaving the rest to pytest.

Potential downsides:

  • A custom --runner won't know how to interpret the CLI arguments, so it'll effectively have to do a lot of what pytest-watch does now, unless it runs py.test right away. (Probably not a huge problem.)

Thoughts?

Don't trigger onfail for KeyboardInterrupt

PTW reruns pytest 2-3 times in a row on every file save for me, in vim. Vim makes three changes on one save: it moves the file to file~, then creates file, then writes the latest text to file.

On its own, that would be fine (thank goodness for ptw interrupt and restart). But I'm using an onfail hook to display a failure message. KeyboardInterrupt seems to trigger the onfail hook, so I get a failure message every time I save a file. :(

My instinct is to change this else to elif exit_code != EXIT_INTERRUPTED:

if exit_code in [EXIT_OK, EXIT_NOTESTSCOLLECTED]:
    run_hook(onpass)
else:
    if beep_on_failure:
        beep()
    run_hook(onfail)
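
Applied, the suggested change would read (a sketch of the proposal):

if exit_code in [EXIT_OK, EXIT_NOTESTSCOLLECTED]:
    run_hook(onpass)
elif exit_code != EXIT_INTERRUPTED:  # skip the failure hook on interruption
    if beep_on_failure:
        beep()
    run_hook(onfail)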

I'm not familiar with this code base though, and I'm pretty new to ptw. Am I using it wrong somehow? Is there a better way to hook a message on real test failures?

If my code suggestion is a good approach, I'm happy to do a quick patch PR.

Provide data about failed tests for onfail callback

It would be useful to have the name of failed tests available with the --onfail callback, e.g. in an environment variable.

This could be captured from the subprocess.call output and then parsed, although that probably means forcing specific options on py.test to get a useful short test summary, i.e. -rfEsxXw.

I could imagine using a py.test hook for this though: when pytest-watch is used as a plugin, it could capture this data itself.

The data would be useful to easily open the failing test in your editor, without having to copy'n'paste it (although this can be quite easy with a plugin like https://github.com/kopischke/vim-fetch already).
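
For illustration, the capture-and-parse idea might look like this (a sketch; PTW_FAILED_TESTS is a hypothetical variable name, not an existing feature):

import os
import subprocess

# Run pytest with a short failure summary (-rf) and capture its output.
result = subprocess.run(['py.test', '-rf'],
                        stdout=subprocess.PIPE, universal_newlines=True)

# Pick out the summary lines (their format may vary by pytest version).
failed = [line for line in result.stdout.splitlines()
          if line.startswith('FAILED')]

# Expose the failed test ids to the onfail hook via the environment.
if failed:
    env = dict(os.environ, PTW_FAILED_TESTS='\n'.join(failed))
    subprocess.call('notify-send "$PTW_FAILED_TESTS"', shell=True, env=env)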

Windows entry points are broken

Python 3.5.1, Windows 7

The Windows executables appear to be broken.
Also, the readme is misleading:

Note: It can also be run using its full name py.test.watch.

py.test.watch does not in fact exist

>pip install pytest-watch
>ptw
    failed to create process
>py.test.watch
    'py.test.watch' is not recognized as an internal or external command, operable program or batch file.
>venv/Scripts/ptw.exe
    failed to create process
>venv/Scripts/pytest-watch.exe
    failed to create process
>python venv/Scripts/pytest-watch-script.py
    this works!

Running py.test from another environment no longer works

I have installed pytest-watch globally. I used to use this to run py.test from a virtual environment.

Since I updated to 4.0.0, this no longer works.

As a workaround I now just install pytest-watch inside of the virtualenv as well, but I prefer not having to do so for each virtualenv.

Add ability to ignore file patterns

Flycheck for Python generates temp files for each change made in a file, even if the file has not been saved. So my_file.py will generate flycheck_my_file.py. As far as I can tell, there is no way to change the path or extension of these files.

This creates a problem with pytest-watch: whenever I change anything, it runs the tests even if I haven't saved the file. I would really like it to run only on file save. What would be nice is if --ignore could also ignore files by pattern, similar to how .gitignore works, so I could run ptw --ignore flycheck* and not trigger a test when the flycheck files are generated.

Regardless, I am loving this tool. Thanks for developing it.

ptw fails to start on ModuleNotFoundError

ptw exits immediately if there is an import error like:

import not_found

def test_100():
    assert True

But if ptw starts with no error, then the same import error is correctly displayed (and ptw continues to run as expected):

Running: py.test
============================================================================================================= test session starts =============================================================================================================
platform linux -- Python 3.6.3, pytest-3.0.7, py-1.5.2, pluggy-0.4.0
rootdir: /tmp/toto, inifile:
plugins: hypothesis-3.38.5
collected 1 items 

test_dummy.py .

========================================================================================================== 1 passed in 0.00 seconds ===========================================================================================================

Change detected: test_dummy.py

Running: py.test
============================================================================================================= test session starts =============================================================================================================
platform linux -- Python 3.6.3, pytest-3.0.7, py-1.5.2, pluggy-0.4.0
rootdir: /tmp/toto, inifile:
plugins: hypothesis-3.38.5
collected 0 items / 1 errors 

=================================================================================================================== ERRORS ====================================================================================================================
_______________________________________________________________________________________________________ ERROR collecting test_dummy.py ________________________________________________________________________________________________________
ImportError while importing test module '/tmp/toto/test_dummy.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_dummy.py:1: in <module>
    import not_found
E   ModuleNotFoundError: No module named 'not_found'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================================================================== 1 error in 0.12 seconds ===========================================================================================================



Evaluate good module for creating test directory structures

I'd really like to add some tests for the ini_config code first.

For that, we need an easy way of creating directory structures with content filled into some files, somewhat like what can be done with cloud-init config files. Cloud-init seems too heavy and Unix-specific. There is also Cloudbase-init for Windows.

We could write separate helpers for this, but it is better to shop for good modules instead. These are the ones I could find easily: testdata and pyboiler. Also, this blog post (http://btmiller.com/2015/03/17/represent-file-structure-as-yaml-with-python.html) could be a starting point.

None of them can set the file contents, though.

We need to choose one that will also be used in other tests.
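
Failing a good existing module, a home-grown helper is small (a sketch; make_tree is hypothetical, not from any of the modules above):

import os

def make_tree(root, tree):
    """Build a directory tree from a nested dict; strings are file contents."""
    os.makedirs(root, exist_ok=True)
    for name, value in tree.items():
        path = os.path.join(root, name)
        if isinstance(value, dict):
            make_tree(path, value)
        else:
            with open(path, 'w') as f:
                f.write(value)

make_tree('sandbox', {
    'pytest.ini': '[pytest-watch]\nnobeep = True\n',
    'tests': {'test_example.py': 'def test_example():\n    assert True\n'},
})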

Child (py.test) not killed when ptw is killed

When ptw is running, killing it (pkill ptw) will not terminate the running py.test process.
I think that pytest-watch should forward the signal to py.test (or always kill it with INT or TERM).
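
For illustration, forwarding might look like this inside the watcher (a Unix-flavored sketch, not pytest-watch's actual code):

import signal
import subprocess

child = subprocess.Popen(['py.test'])

def forward(signum, frame):
    child.send_signal(signum)  # relay SIGTERM/SIGINT to the pytest child

signal.signal(signal.SIGTERM, forward)
signal.signal(signal.SIGINT, forward)
child.wait()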

Added feature to poll files for help with virtual machines/docker

We do a lot of our development in Docker, and the events for changed files are generated in the host OS instead of inside Docker, so ptw doesn't see that files have changed. Do you have any interest in adding a polling option that just scans the folder for modified file times and triggers tests? I may be able to write the code if you are.
