
stestr


Note

The stestr 2.x release series is the last series that supports Python 2. Support for Python 2.7 was dropped in stestr release 3.0.0.

Overview

stestr is a parallel Python test runner designed to execute unittest test suites using multiple processes to split up execution of a test suite. It also stores a history of all test runs to help with debugging failures and optimizing the scheduler to improve speed. To accomplish this it uses the subunit protocol to facilitate streaming and storing results from multiple workers.

stestr originally started as a fork of the testrepository project. But instead of being an interface for any test runner that uses subunit, like testrepository, stestr concentrates on being a dedicated test runner for Python projects. While stestr was originally forked from testrepository, it is not backwards compatible with it. At a high level the basic concepts of operation are shared between the two projects, but the actual usage is not exactly the same.

Installing stestr

stestr is available on PyPI, so all you need to do is run:

pip install -U stestr

to get stestr on your system. If you need to use a development version of stestr, you can clone the repo and install it locally with:

git clone https://github.com/mtreinish/stestr.git && pip install -e stestr

which will install stestr into your Python environment in editable mode for local development.

Using stestr

After you install stestr, using it to run tests is pretty straightforward. The first thing you'll want to do is create a .stestr.conf file for your project. This file tells stestr where to find tests and basic information about how tests are run. A minimal example of its contents is:

[DEFAULT]
test_path=./project_source_dir/tests

which just tells stestr the relative path of the directory to use for test discovery. This is the same as --start-directory in standard unittest discovery.
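For comparison, discovery over the same directory with the standard library runner would look something like this (a sketch using the example path above):

python -m unittest discover --start-directory ./project_source_dir/tests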

Alternatively, if you're using stestr with tox you can put your stestr config in an [stestr] section of the tox.ini file, for example:

[stestr]
test_path=./project_source_dir/tests
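For example, a tox testenv that runs stestr might then look like this (the testenv section is illustrative; stestr itself only reads the [stestr] section):

[testenv]
deps = stestr
commands = stestr run {posargs}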

After stestr is configured you should be all set to start running tests. To run tests just use:

stestr run

This will first create a results repository at .stestr/ in the current working directory and then execute all the tests found by test discovery. If you're just running a single test (or module) and want to avoid the overhead of test discovery, you can use the --no-discover/-n option to specify that test.
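For example, to run a single test module while skipping discovery (the test id here is hypothetical):

stestr run --no-discover project_source_dir.tests.test_foo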

For all the details on these commands and a more thorough explanation of the options, see the stestr manual: https://stestr.readthedocs.io/en/latest/MANUAL.html

Migrating from testrepository

If you have a project that is already using testrepository, stestr's source repo contains a helper script for migrating your repo to use stestr. This script just creates a .stestr.conf file from an existing .testr.conf file (assuming it uses a standard subunit.run test command format). To run it from your project repo just call:

$STESTR_SOURCE_DIR/tools/testr_to_stestr.py

and you'll have a .stestr.conf created.

Building a manpage

The stestr manual has been formatted so that it renders well both as HTML and as a manpage. The HTML output is autogenerated and published to: https://stestr.readthedocs.io/en/latest/MANUAL.html but the manpage has to be generated by hand. To do this you have to run sphinx-build with the manpage builder. This has been automated in a small script that should be run from the root of the stestr repository:

tools/build_manpage.sh

which will generate the troff file in doc/build/man/stestr.1, ready to be packaged and/or put in your system's man pages.
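The script is essentially a wrapper around the sphinx manpage builder; run by hand it would look something like this (the source path is an assumption, the output path matches the location above):

sphinx-build -b man doc/source doc/build/man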

Contributing

To browse the latest code, see: https://github.com/mtreinish/stestr

To clone the latest code, use: git clone https://github.com/mtreinish/stestr.git

Guidelines for contribution are documented at: http://stestr.readthedocs.io/en/latest/developer_guidelines.html

Use GitHub pull requests to submit patches. Before you submit a pull request, ensure all the automated testing will pass by running tox locally. This runs the test suite and the automated style checks just as they run in CI. If CI fails on your change, it cannot be merged.

Community

Besides GitHub interactions there is also a stestr IRC channel:

#stestr on OFTC

feel free to join to ask questions, or just discuss stestr.


stestr's Issues

Add a silent output mode to stestr run and stestr load

There is an occasional use case for having a flag on stestr run and stestr load that doesn't output anything and just returns either 0 or 1. We should add a flag to stestr run and load that does this.

The primary motivator for this is the --analyze-isolation flag, which does multiple test runs during its operation. If there are failures they print to stdout, making the output confusing to follow.
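For illustration, with such a flag the invocation might look like this (--silent is the proposed flag, it does not exist yet):

stestr run --silent || echo "tests failed"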

Use subunit v2 for storage in file repository

Something that was in progress in testr before the fork was the migration to subunit v2 as the format for streams stored to disk in the file repository. We're using subunit v2 everywhere in testr except for storing streams on disk; we convert them to v2 in all of the commands that read from the repository. I'm not sure how much work is actually needed to finish that migration, but it should simplify things if we can use v2 everywhere.

Enable to show the license on GitHub

I've noticed that GitHub has a feature to show the license of a project [1]. The help seems to say that adding a license file is enough. We already have one [2], of course; however, the license doesn't show up on the page.
I'm not sure how to fix it, actually, but it shouldn't be so hard. In the worst case, we could just delete the LICENSE file and then redo what the help [1] says.

This page [3] also explains the feature. I couldn't find the solution for this there, though.

[1] https://help.github.com/articles/adding-a-license-to-a-repository/
[2] https://github.com/mtreinish/stestr/blob/master/LICENSE
[3] https://help.github.com/articles/licensing-a-repository/

gdbm not found running stestr on Mac OSX

Error trying to run "stestr list" on Mac OSX. Already ran "brew install gdbm".

I am getting this error:

(stestr27) ➜ stestr git:(testpath) stestr list
Traceback (most recent call last):
  File "/usr/local/opt/pyenv/versions/stestr27/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/Users/step6927/Projects/stestr/stestr/cli.py", line 102, in main
    sys.exit(args[0].func(args))
  File "/Users/step6927/Projects/stestr/stestr/commands/list.py", line 55, in run
    cmd = conf.get_run_command(_args, ids, filters)
  File "/Users/step6927/Projects/stestr/stestr/config_file.py", line 85, in get_run_command
    repository = util.get_repo_open(options.repo_type, options.repo_url)
  File "/Users/step6927/Projects/stestr/stestr/repository/util.py", line 37, in get_repo_open
    repo_module = importlib.import_module('stestr.repository.' + repo_type)
  File "/usr/local/opt/pyenv/versions/2.7.9/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Users/step6927/Projects/stestr/stestr/repository/file.py", line 22, in <module>
    from six.moves import dbm_gnu as dbm
  File "/usr/local/opt/pyenv/versions/2.7.9/envs/stestr27/lib/python2.7/site-packages/six.py", line 203, in load_module
    mod = mod._resolve()
  File "/usr/local/opt/pyenv/versions/2.7.9/envs/stestr27/lib/python2.7/site-packages/six.py", line 115, in _resolve
    return _import_module(self.mod)
  File "/usr/local/opt/pyenv/versions/2.7.9/envs/stestr27/lib/python2.7/site-packages/six.py", line 82, in _import_module
    __import__(name)
ImportError: No module named gdbm

test_listing_fixture is a bad name

The module name stestr.test_listing_fixture and its class name TestListingFixture are terrible names and misleading about their function: they sound like they run tests on the listing fixture. The problem is I'm not exactly sure how to describe it better.

Currently the class TestListingFixture defines a fixture for the lifecycle of a temp file that contains a list of test_ids. These test_ids are either provided as an input parameter or found by running test discovery (by launching subunit.run discover --list). The fixture also provides functions that require the existence of that temporary test list, including launching the test runner workers. This means that test_listing_fixture is where we actually run the test processes (which is the source of the confusion around the name).

If anyone has an idea for a better name feel free to push a patch renaming things, or just comment on the bug.

Add subunit2sql repository type

Right now there is only one really usable repository type, the file-backed one. This works well but has a number of limitations. A long-term goal of the subunit2sql project was to provide a richer repository that could be used for something like stestr/testrepository. Most of the pieces are there in subunit2sql to enable this; the only missing one is writing attachments out from the database, but that isn't necessarily a blocker for an initial implementation. Once the repo type is there we can work on making it the eventual default.

stestr run --analyze-isolation doesn't work

The --analyze-isolation option, used to bisect parallel test failures, was never completed after the original fork. There are lots of things broken in that call path that prevent it from even running today. This will all need to be cleaned up. Also, because of the complexity of this code path, we should add real unit tests to ensure it keeps working properly going forward.

"WARNING: missing Worker N! Race in testr accounting." when using stestr --failing

While iteratively using the --failing argument to find and fix problems in a large suite of tests (the OpenStack nova unit tests) I received

WARNING: missing Worker 9! Race in testr accounting.

warnings for one or more workers when spreading tests across 16 cores. As the number of tests being run shrank, the chance of seeing the warning seemed to increase.

It seemed to be the case that a worker reported the problem when reconciling the discovered tests against the list of failing tests resulted in none.

Number of tests run and seconds in "Totals" don't match when using `stestr run --until-failure`

I noticed that the number of tests run and the seconds reported in "Totals" don't match each other.

The test count shows the cumulative number of tests executed across runs, but the seconds show only a single execution; for example one iteration reports "Ran: 110 tests in 5.4677 sec." and the next "Ran: 111 tests in 5.2347 sec.". Full output:
$ stestr run --until-failure ......
:
======
Totals
======
Ran: 110 tests in 5.4677 sec.
 - Passed: 110
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 169.2701 sec.

==============
Worker Balance
==============
 - Worker 0 (110 tests) => 0:09:18.657611
{0} cinder.tests.unit.api.contrib.test_volume_type_encryption.VolumeTypeEncryptionTest.test_delete_with_volume_in_use [1.716951s] ... ok

======
Totals
======
Ran: 111 tests in 5.2347 sec.
 - Passed: 111
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 170.9871 sec.
:

Is this related to #119?

CLI arguments should use a standardized data structure and interface

Currently the stestr.cli module passes the raw output from argparse's parse_known_args() method to the run() method in each command module. While this works, it's not a very friendly interface, and it makes using the commands via a Python interface more complicated since you'd have to construct a tuple with the argparse Namespace object and any additional parameters just to pass arguments into the run() methods.

To really make the commands externally consumable via a Python API we need to standardize how we pass arguments into these functions. Whether that means switching from the tuple of Namespace and list to something like a dict, or leaving it as is, doesn't matter too much. We just need a clearly written doc defining how parameters are passed in, with examples of how to use the commands outside of stestr.cli.
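For illustration, a keyword-based entry point might look something like this sketch (the names here are hypothetical, not stestr's actual API):

def run_command(config='.stestr.conf', repo_type='file', repo_url=None,
                test_path=None, regexes=None, **kwargs):
    """Hypothetical keyword-only entry point; returns 0 on success, 1 on failure."""
    regexes = list(regexes or [])
    # ... dispatch to the runner with plain keyword arguments ...
    return 0

A caller could then do run_command(regexes=['ui.tests']) without constructing an argparse Namespace.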

bad character range while listing or running refstack tests

Issue description

Download the refstack tests:

wget "https://refstack.openstack.org/api/v1/guidelines/2018.02/tests?target=platform&type=required&alias=true&flag=false" -O 2018.02-test-list.txt

then list or run refstack tests using stestr:

(.venv) [chkumar246@fedora stestr]$ stestr list --whitelist-file 2018.02-test-list.txt 
bad character range
(.venv) [chkumar246@fedora stestr]$ stestr list --whitelist-file 2018.02-test-list.txt --debug
bad character range
Traceback (most recent call last):
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib/python2.7/site-packages/cliff/app.py", line 399, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib/python2.7/site-packages/cliff/command.py", line 184, in run
    return_code = self.take_action(parsed_args) or 0
  File "/home/chkumar246/arena/stackers/stestr/stestr/commands/list.py", line 72, in take_action
    filters=filters)
  File "/home/chkumar246/arena/stackers/stestr/stestr/commands/list.py", line 121, in list_command
    cmd.setUp()
  File "/home/chkumar246/arena/stackers/stestr/stestr/test_processor.py", line 148, in setUp
    black_regex=self.black_regex)
  File "/home/chkumar246/arena/stackers/stestr/stestr/selection.py", line 113, in construct_list
    list_of_test_cases = filter_tests(regexes, test_ids)
  File "/home/chkumar246/arena/stackers/stestr/stestr/selection.py", line 29, in filter_tests
    _filters = list(map(re.compile, filters))
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib64/python2.7/re.py", line 194, in compile
    return _compile(pattern, flags)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib64/python2.7/re.py", line 251, in _compile
    raise error, v # invalid expression
error: bad character range
Traceback (most recent call last):
  File "/home/chkumar246/arena/stackers/stestr/.venv/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/home/chkumar246/arena/stackers/stestr/stestr/cli.py", line 101, in main
    return cli.run(argv)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib/python2.7/site-packages/cliff/app.py", line 278, in run
    result = self.run_subcommand(remainder)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib/python2.7/site-packages/cliff/app.py", line 399, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib/python2.7/site-packages/cliff/command.py", line 184, in run
    return_code = self.take_action(parsed_args) or 0
  File "/home/chkumar246/arena/stackers/stestr/stestr/commands/list.py", line 72, in take_action
    filters=filters)
  File "/home/chkumar246/arena/stackers/stestr/stestr/commands/list.py", line 121, in list_command
    cmd.setUp()
  File "/home/chkumar246/arena/stackers/stestr/stestr/test_processor.py", line 148, in setUp
    black_regex=self.black_regex)
  File "/home/chkumar246/arena/stackers/stestr/stestr/selection.py", line 113, in construct_list
    list_of_test_cases = filter_tests(regexes, test_ids)
  File "/home/chkumar246/arena/stackers/stestr/stestr/selection.py", line 29, in filter_tests
    _filters = list(map(re.compile, filters))
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib64/python2.7/re.py", line 194, in compile
    return _compile(pattern, flags)
  File "/home/chkumar246/arena/stackers/stestr/.venv/lib64/python2.7/re.py", line 251, in _compile
    raise error, v # invalid expression
sre_constants.error: bad character range

When I tried to run the same with os-testr it works fine:

(py27) [chkumar246@fedora tempest]$ ostestr -l --whitelist-file 2018.02-test-list.txt
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_create_agent[id-1fc6bdc8-0b6d-4cc7-9f30-9b04fabe5b90]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_delete_agent[id-470e0b89-386f-407b-91fd-819737d0b335]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents[id-6a326c69-654b-438a-80a3-34bcc454e138]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents_with_filter[id-eabadde4-3cd7-4ec4-a4b5-5a936d2d4408]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_update_agent[id-dc9ffd51-1c50-4f0e-a820-ae6d2a568a9e]

Expected behavior and actual behavior

stestr should be able to list the tests; instead, listing (and running) fails with "bad character range".

I think there is something fishy going on here: https://github.com/mtreinish/stestr/blob/master/stestr/selection.py#L29
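A minimal sketch of what is likely going wrong in filter_tests(): the whitelist lines are compiled verbatim as regexes, and tempest's bracketed id tags form character classes containing descending ranges like d-1, which re rejects. Escaping the literal ids avoids it:

import re

test_id = ('tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.'
           'test_create_agent[id-1fc6bdc8-0b6d-4cc7-9f30-9b04fabe5b90]')

try:
    re.compile(test_id)  # the bracketed tag is parsed as a character class
except re.error as err:
    print(err)  # bad character range

re.compile(re.escape(test_id))  # compiles fine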

System information

stestr version (stestr --version):
stestr 2.1.1.dev4
Python release (python --version):
Python 2.7.15
pip packages (pip freeze):
alabaster==0.7.10
alembic==0.9.9
Babel==2.6.0
certifi==2018.4.16
chardet==3.0.4
cliff==2.12.0
cmd2==0.8.7
contextlib2==0.5.5
coverage==4.5.1
ddt==1.1.3
debtcollector==1.19.0
decorator==4.3.0
docutils==0.14
enum34==1.1.6
extras==1.0.0
fixtures==3.0.0
flake8==2.5.5
funcsigs==1.0.2
future==0.16.0
hacking==0.11.0
idna==2.7
imagesize==1.0.0
iso8601==0.1.12
Jinja2==2.10
linecache2==1.0.0
Mako==1.0.7
MarkupSafe==1.0
mccabe==0.2.1
mock==2.0.0
monotonic==1.5
netaddr==0.7.19
netifaces==0.10.7
oslo.config==6.2.1
oslo.db==4.38.0
oslo.i18n==3.20.0
oslo.utils==3.36.2
packaging==17.1
pbr==4.0.4
pep8==1.5.7
prettytable==0.7.2
pyflakes==0.8.1
Pygments==2.2.0
pyparsing==2.2.0
pyperclip==1.6.2
python-dateutil==2.7.3
python-editor==1.0.3
python-mimeparse==1.6.0
python-subunit==1.3.0
pytz==2018.4
PyYAML==3.12
requests==2.19.1
rfc3986==1.1.0
six==1.11.0
snowballstemmer==1.2.1
Sphinx==1.7.5
sphinxcontrib-websupport==1.1.0
SQLAlchemy==1.2.8
sqlalchemy-migrate==0.11.0
sqlparse==0.2.4
-e git+https://github.com/mtreinish/stestr.git@113015c2c7b094cdfad72198fc76c9de1b72ff97#egg=stestr
stevedore==1.28.0
subprocess32==3.5.2
subunit2sql==1.9.0
Tempita==0.5.2
testresources==2.0.1
testscenarios==0.5.0
testtools==2.3.0
traceback2==1.4.0
typing==3.6.4
unicodecsv==0.14.1
unittest2==1.1.0
urllib3==1.23
voluptuous==0.11.1
wcwidth==0.1.7
wrapt==1.10.11

Operating System:

Fedora 28

stestr load run time output doesn't reflect reality

When you run stestr load on a stream that is not being processed in real time by stestr run (or via stdin from a subunit-emitting test run), the run time included in the output generated by the subunit_trace module doesn't reflect reality. Instead it is the time it took to process the subunit stream, which isn't accurate. We should update the subunit_trace module to figure out the run time from the timestamps inside the stream instead of taking two timestamps (one before and one after the subunit processing).
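A minimal sketch of the suggested approach, assuming python-subunit and testtools are available: derive the wall-clock run time from the earliest and latest timestamps in the stream rather than timing the processing itself.

import sys

import subunit
import testtools


class TimestampRange(testtools.StreamResult):
    """Record the earliest and latest event timestamps seen in a stream."""

    first = None
    last = None

    def status(self, **kwargs):
        timestamp = kwargs.get('timestamp')
        if timestamp is None:
            return
        if self.first is None or timestamp < self.first:
            self.first = timestamp
        if self.last is None or timestamp > self.last:
            self.last = timestamp


result = TimestampRange()
result.startTestRun()
subunit.ByteStreamToStreamResult(sys.stdin.buffer).run(result)
result.stopTestRun()
if result.first is not None:
    print('run time: %s' % (result.last - result.first))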

implement support for interactive debugger

I think that maybe it is time to implement a workaround for allowing us to start the debugger on a failure. See https://bugs.launchpad.net/testrepository/+bug/902881

Instructions from https://wiki.openstack.org/wiki/Testr#Debugging_.28pdb.29_Tests are outdated as they do not include syntax changes (the list command instead of list-tests).

This would be of great use for developers.

Ideally we should have a command line parameter (and environment variable) which would enable the alternative behaviour of starting the debugger on failed tests. An environment variable is really useful as it avoids having to touch the code base to activate it. For example, Ansible can enable its debugger just by defining ANSIBLE_STRATEGY=debug, which will stop on each failed task; there is no need to add params to calls (which can be hidden inside scripts). Once the developer defines the magic variable, it signals that they want to debug failures.

Maybe we could even trigger this post-execution, when we already know which tests failed, and re-run those that failed in non-parallel mode (or just call python -m testtools.run discover --load-list ... with the list of failed tests).
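A sketch of that post-execution flow, assuming stestr failing supports a --list option to print the failing test ids (the file name is illustrative):

stestr failing --list > failing-tests.txt
python -m testtools.run discover --load-list failing-tests.txt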

An error occurs when `stestr run` is given a regex that matches nothing

An error occurs when I run stestr run with a regex that matches no tests, like stestr run foobar:

$ stestr run foobar
running=${PYTHON:-python} -m subunit.run discover -t ./ ./stestr/tests --list
arguments:None, None
Traceback (most recent call last):
  File "/Users/igawa/openstack/stestr/.tox/py27/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/Users/igawa/openstack/stestr/stestr/cli.py", line 102, in main
    sys.exit(args[0].func(args))
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 185, in run
    subunit_out=args.subunit)
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 337, in _run_tests
    return run_tests()
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 334, in run_tests
    repo_url=cmd.options.repo_url)
  File "/Users/igawa/openstack/stestr/stestr/commands/load.py", line 103, in load
    streams = [sys.stdin.buffer]
AttributeError: 'file' object has no attribute 'buffer'

And when I use Python 3.6, no error occurs but the command never returns.

Add support for switching repository type

Eventually we will have more than one repo type (see #10; the memory repo type works, but is kind of limiting in practice). We should have a combination of CLI and/or config file options to let users choose which repository type they'd like to use and to set up any required configuration for the backend.

stestr shell help doesn't work

When running the stestr shell (by just running stestr with no command), the help command doesn't work for any of the commands. For example:

(stestr) help failing
Traceback (most recent call last):
  File "/home/computertreker/.venvs/venv2/lib/python2.7/site-packages/cmd2.py", line 786, in onecmd_plus_hooks
    stop = self.onecmd(statement)
  File "/home/computertreker/.venvs/venv2/lib/python2.7/site-packages/cmd2.py", line 974, in onecmd
    stop = func(statement)
  File "/home/computertreker/.venvs/venv2/lib/python2.7/site-packages/cliff/interactive.py", line 114, in do_help
    self.default(self.parsed('help ' + arg))
AttributeError: InteractiveApp instance has no attribute 'parsed'
EXCEPTION of type 'AttributeError' occurred with message: 'InteractiveApp instance has no attribute 'parsed''
To enable full traceback, run the following command: 'set debug true'

Add support for setting discovery arguments via python module paths

Issue description

This is more a feature wish than a bug:

Someone might need stestr to run tests from a pip installed package.
However, these pip installed packages can be in /usr/local/lib/pythonX.X/dist-packages/ or in a virtual environment.
And in a virtual environment, the directory in which are the tests can also have been installed by pip install -e somepackage/ when "somepackage" is a git repo.

Therefore, making a .stestr.conf file which will work across machines is currently impossible when the test scripts are in another module/repo than the repo you work on.

I.e.: I work on a repo for my OpenStack platform deployment and need to run tempest tests which are in a Python module (tempest is a set of tests which run against a deployed OpenStack environment; I did not write these tests, they are there to check your OpenStack platform works properly).

Currently, I need to do this kind of thing to make stestr platform agnostic so that anyone can launch our OpenStack tempest tests:

# create .stestr.conf:
TEMPEST_PATH=$(/usr/bin/env python -c "import os, tempest;print(os.path.dirname(os.path.realpath(tempest.__file__)))")
TEMPEST_TEST_DIR="$TEMPEST_PATH/test_discover"
echo "creating $(pwd)/.stestr.conf file"
echo "[DEFAULT]" > .stestr.conf
echo "test_path=$TEMPEST_TEST_DIR" >> .stestr.conf
echo "top_dir=$TEMPEST_PATH" >> .stestr.conf
echo 'group_regex=([^\.]*\.)*' >> .stestr.conf
# Now you can run stestr

We have a CI, so I can hard-code the conf for our CI, but if one wants to launch a new set of tests (a new tempest version) to check that these tests work on the platform before pushing them to the CI, the .stestr.conf for the CI will not work.

It would be a very nice feature to be able to configure a "python module import path" instead of a directory.

Expected behavior and actual behavior

Currently we need this kind of configuration, where someuser can be anyone:

[DEFAULT]
test_path=/some/path/which/may/be/different/to/my_package/test_discover
top_dir=/some/path/which/may/be/different/to/my_package
group_regex=([^\.]*\.)*

A very nice feature would be to be able to provide such configurations:

[DEFAULT]
module_test_path=my_package.test_discover
module_top_dir=my_package
group_regex=([^\.]*\.)*

Of course, it would only work if my_package.test_discover is on the Python path. See in my example above how I find the real path from the Python module; Python is totally capable of doing so (since I find the path with Python commands). So having stestr support this would be more than great.
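A Python equivalent of the shell lookup shown above (a sketch that resolves an importable module to its on-disk directory):

import importlib
import os


def module_dir(name):
    # e.g. module_dir('tempest') -> /path/to/site-packages/tempest
    mod = importlib.import_module(name)
    return os.path.dirname(os.path.realpath(mod.__file__))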

Steps to reproduce the problem

Create a .stestr.conf in your OpenStack platform deployment repo which works for you and allows you to launch tempest on the installed platform (tempest being a Python module).
Let someone clone your repo.
If this someone's $HOME path is not exactly the same (and it generally isn't), stestr will fail unless this user recreates the .stestr.conf file.

Specifications like the version of the project, operating system, or hardware

Running OpenStack tempest tests on OpenStack platforms.

System information

stestr version (stestr --version): stestr 2.0.0

Python release (python --version): Python 2.7.6 and Python 3.4.3

Of course I'm totally aware it's not a bug, but since Python can very easily find a path from an importable Python module, being able to configure a Python module for testing would be a very, very nice feature.

Let me know if anything was unclear.

Regards.

Failed to install stestr on mac 10.13.4

This package cannot be installed on macOS 10.13.4; this is the error message when installing:

╰─$ pip install -U stestr                                                                                                                                                      1 ↵
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Collecting stestr
  Downloading http://mirrors.aliyun.com/pypi/packages/e9/f8/8c2f7b2bcfbfc2c260893594c5af9a9486e11326fdecbc1f15b0c0b61f81/stestr-2.0.0.tar.gz (101kB)
    100% |████████████████████████████████| 102kB 992kB/s
    Complete output from command python setup.py egg_info:
    running egg_info
    creating pip-egg-info/stestr.egg-info
    writing requirements to pip-egg-info/stestr.egg-info/requires.txt
    writing pip-egg-info/stestr.egg-info/PKG-INFO
    writing top-level names to pip-egg-info/stestr.egg-info/top_level.txt
    writing dependency_links to pip-egg-info/stestr.egg-info/dependency_links.txt
    writing entry points to pip-egg-info/stestr.egg-info/entry_points.txt
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/k5/jlzm7kjj0ws9d4p4x222zzf40000gn/T/pip-install-cjnV29/stestr/setup.py", line 29, in <module>
        pbr=True)
      File "/usr/local/lib/python2.7/site-packages/setuptools/__init__.py", line 129, in setup
        return distutils.core.setup(**attrs)
      File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 151, in setup
        dist.run_commands()
      File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands
        self.run_command(cmd)
      File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command
        cmd_obj.run()
      File "/usr/local/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 271, in run
        writer(self, ep.name, os.path.join(self.egg_info, ep.name))
      File "/usr/local/lib/python2.7/site-packages/pbr/pbr_json.py", line 25, in write_pbr_json
        git_dir = git._run_git_functions()
      File "/usr/local/lib/python2.7/site-packages/pbr/git.py", line 131, in _run_git_functions
        if _git_is_installed():
      File "/usr/local/lib/python2.7/site-packages/pbr/git.py", line 83, in _git_is_installed
        _run_shell_command(['git', '--version'])
      File "/usr/local/lib/python2.7/site-packages/pbr/git.py", line 49, in _run_shell_command
        env=newenv)
      File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 390, in __init__
        errread, errwrite)
      File "/usr/local/Cellar/python@2/2.7.14_3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1025, in _execute_child
        raise child_exception
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)

Datastores are duplicated

I find it great that stestr exists to do a great native job for users. I'm very sad that the datastores are duplicated; testrepository is python and I'd be happy for testrepository to grow whatever interfaces you need to store data efficiently while still interoperating.

stestr run doesn't work from the shell

When trying to use the run command from inside the stestr shell you're not able to. This is due to cmd2 (which is what cliff uses for the interactive shell) having a builtin command named run. This conflicts with the run command stestr defines, and the cmd2 builtin takes precedence. We'll have to find some way to override that cmd2 builtin to make this work.

Add an option to stestr run to not store in the repository

There are certain instances where you want to run tests in parallel but not store the results in the repository. We should have a flag on stestr run to run the tests but not track the results.

This is another feature motivated by the stestr run --analyze-isolation flag: we're inadvertently storing a lot of intermediate runs in the repository during its operation, and that's not necessarily something we want to track. For the isolation flag, whether we decide to leverage this feature or the --combine flag to store all the runs as a single entry, this would be a good feature to have.

`stestr run -r` doesn't work

The -r option (randomize the test order) with stestr run doesn't work:

$ stestr run -r
usage: stestr [--version] [-v | -q] [--log-file LOG_FILE] [-h] [--debug]
              [-d HERE] [--config CONFIG] [--repo-type {file,sql}]
              [--repo-url REPO_URL] [--test-path TEST_PATH]
              [--top-dir TOP_DIR] [--group-regex GROUP_REGEX]
stestr: error: argument --repo-type/-r: expected one argument
$ stestr run --help | grep "random," -A1
  --random, -r          Randomize the test order after they are partitioned
                        into separate workers

I think stestr (or cliff) should allow -r to be specified for both the global stestr parser and the run subcommand individually.
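In the meantime the long-form flag should avoid the collision with the global --repo-type/-r option (an assumption inferred from the help output above):

stestr run --random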

test_return_code failure on windows

Issue description

When running the stestr unit tests in the test_return_codes module in a Windows environment, we encountered a failure in cleanup while trying to remove the temporary directory created to set up the functional environment:

https://ci.appveyor.com/project/mtreinish/stestr/build/1.0.478/job/k9bm98380del4g2x

In case that link ever dies, the traceback is:

==============================
Failed 1 tests - output below:
==============================
stestr.tests.test_return_codes.TestReturnCodes.test_until_failure_fails_from_func
---------------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    b'Traceback (most recent call last):'
    b'  File "C:\\projects\\stestr\\.tox\\py36\\lib\\shutil.py", line 494, in rmtree'
    b'    return _rmtree_unsafe(path, onerror)'
    b'  File "C:\\projects\\stestr\\.tox\\py36\\lib\\shutil.py", line 393, in _rmtree_unsafe'
    b'    onerror(os.rmdir, path, sys.exc_info())'
    b'  File "C:\\projects\\stestr\\.tox\\py36\\lib\\shutil.py", line 391, in _rmtree_unsafe'
    b'    os.rmdir(path)'
    b"PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\appveyor\\\\AppData\\\\Local\\\\Temp\\\\1\\\\stestr-unitriyexd7t'"
    b''

It looks like the subprocess stestr launched was still accessing the temporary directory, but it definitely should have exited by the time cleanup runs, so it's not clear why this is failing. It also passed in the other two jobs on the appveyor build, which run the same tests with different Python versions.

Stack trace returned if no parameter is specified

If no parameter is specified at all a stack trace is returned:

(stestr) andreafrittoli@galadriel:/git/github.com/mtreinish/stestr (master)$ stestr
Traceback (most recent call last):
  File "/Users/andreafrittoli/virtualenvs/stestr/bin/stestr", line 11, in <module>
    sys.exit(main())
  File "/Users/andreafrittoli/virtualenvs/stestr/lib/python3.5/site-packages/stestr/cli.py", line 102, in main
    sys.exit(args[0].func(args))
AttributeError: 'Namespace' object has no attribute 'func'

The UI should return a help message instead, like:

(stestr) andreafrittoli@galadriel:/git/github.com/mtreinish/stestr (master)$ stestr meh
usage: stestr [-h] [-d HERE] [-q] [--version] [--config CONFIG]
              [--repo-type {file,sql}] [--repo-url REPO_URL]
              [--test-path TEST_PATH] [--top-dir TOP_DIR]
              [--group_regex GROUP_REGEX]
              {run,list,slowest,failing,stats,last,init,load} ...
stestr: error: invalid choice: 'meh' (choose from 'run', 'list', 'slowest', 'failing', 'stats', 'last', 'init', 'load')

SQL mode: KeyError: 'success' occurs when all tests fail

In SQL mode, KeyError: 'success' occurs when all tests fail.

$ stestr -r sql run test_parallel_passing_bad_regex
running=${PYTHON:-python} -m subunit.run discover -t ./ ./stestr/tests --list
running=${PYTHON:-python} -m subunit.run discover -t ./ ./stestr/tests  --load-list /var/folders/q4/1whx01t50xs0n14m502f_6640000gn/T/tmpZMtEl2
Ran 1 tests in 1.832s
FAILED (id=6cece5da-8e89-4a5a-9468-0f35f0c48848, failures=1)
Traceback (most recent call last):
  File "/Users/igawa/openstack/stestr/.tox/py27/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/Users/igawa/openstack/stestr/stestr/cli.py", line 103, in main
    sys.exit(args[0].func(args))
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 185, in run
    subunit_out=args.subunit)
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 340, in _run_tests
    return run_tests()
  File "/Users/igawa/openstack/stestr/stestr/commands/run.py", line 337, in run_tests
    repo_url=cmd.options.repo_url)
  File "/Users/igawa/openstack/stestr/stestr/commands/load.py", line 141, in load
    case.run(result)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testsuite.py", line 171, in run
    result.status(**event_dict)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 467, in status
    _strict_map(methodcaller('status', *args, **kwargs), self.targets)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 442, in _strict_map
    return list(map(function, *sequences))
  File "/Users/igawa/openstack/stestr/stestr/repository/sql.py", line 289, in status
    self.hook.status(*args, **kwargs)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 467, in status
    _strict_map(methodcaller('status', *args, **kwargs), self.targets)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 442, in _strict_map
    return list(map(function, *sequences))
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 908, in status
    self._hook.status(*args, **kwargs)
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 825, in status
    self.on_test(self._inprogress.pop(key))
  File "/Users/igawa/openstack/stestr/.tox/py27/lib/python2.7/site-packages/testtools/testresult/real.py", line 900, in _handle_test
    self.on_test(test_record.to_dict())
  File "/Users/igawa/openstack/stestr/stestr/repository/sql.py", line 248, in _handle_test
    values['passes'] = self.totals['success']
KeyError: 'success'

Deprecate the --partial options

Tracing through the code, the use of --partial on stestr run and stestr load doesn't seem to have any real effect. This option was directly ported from testr in the initial fork and I didn't really think about it, but the flags serve no real purpose from what I can tell. I think the original idea was to distinguish an entry in a repository for a full run from one that was a subset (but you'd have to check with the testrepository authors on that). Nothing else respects or expects that difference, and from an end user perspective it doesn't really matter: the repository contains the historical record of previous runs, and information for scheduling is aggregated independently from that. So it doesn't really matter whether a record is for a partial execution or not.

Therefore I think it'll be good to just remove those options. Unfortunately the options are in our stable interfaces (both the CLI and the Python commands API), so we'll have to deprecate them for a bit first. We can remove the flags' effect and just print a deprecation warning on usage to warn people things are going away.
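A minimal sketch of that deprecation step (the args.partial attribute name is an assumption about the parsed arguments):

import warnings

if getattr(args, 'partial', False):
    warnings.warn('--partial is deprecated and has no effect; it will be '
                  'removed in a future release', DeprecationWarning)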

The only functional code that seems to use it is in the repository:

if self.partial:
    # Seed with current failing
    inserter = testtools.ExtendedToStreamDecorator(repo.get_inserter())
    inserter.startTestRun()
    failing = self._repository.get_failing()
    failing.get_test().run(inserter)
    inserter.stopTestRun()
inserter = testtools.ExtendedToStreamDecorator(
    repo.get_inserter(partial=True))

and

if not self._partial:
    self._repository._failing = OrderedDict()

These are related; the file repo uses the memory repo call after that highlighted if statement. I'm not exactly sure of the function of that code, so investigation will be needed to understand what's being triggered there and how to implement it without a user-facing flag.

loading subunit stream from stdin with stestr load is broken

Loading a subunit stream from stdin is not working for stestr load. A simple example to reproduce this is to run:

stestr last --subunit | stestr load

Which will yield this traceback:

Traceback (most recent call last):
  File "/home/mtreinish/.venv2/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/home/mtreinish/git/stestr/stestr/cli.py", line 102, in main
    sys.exit(args[0].func(args))
  File "/home/mtreinish/git/stestr/stestr/commands/load.py", line 75, in run
    abbreviate=args.abbreviate)
  File "/home/mtreinish/git/stestr/stestr/commands/load.py", line 128, in load
    streams = [sys.stdin.buffer]
AttributeError: 'file' object has no attribute 'buffer'

We should fix this and add a unit and/or functional test checking that loading from stdin works.
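A minimal sketch of a Python 2/3 compatible fix: on Python 3 the binary stream lives at sys.stdin.buffer, while on Python 2 sys.stdin already yields bytes.

import sys

# Fall back to sys.stdin itself when there is no .buffer attribute (Python 2).
stdin = getattr(sys.stdin, 'buffer', sys.stdin)
streams = [stdin]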

Add config/cli flag to group tests by class

Issue description

Currently we only support dynamic grouping (ignoring manual scheduling) via a grouping regex. This is fine and very flexible, but it's tricky to construct a regex that works exactly as needed. A common use case is that a test suite needs to be parallelized at the test class level and not by individual methods/test_ids. This is currently achievable by setting group_regex to ([^\.]*\.)*, but it would probably be easier/better to add a new option to stestr.conf and the CLI to split at the class level so people don't need to remember that regex.

I'm just not sure of the best name for that --parallel-class or something?

Also, something we'll have to put some thought and discussion into is the priority of arguments: does --parallel-class (or whatever we call it) take priority over group_regex? And what about parallel-class in a config file versus --group-regex on the CLI (and vice versa)?
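For reference, a sketch of why that regex groups by class (the helper is illustrative, not stestr's scheduler code):

import re

group_regex = re.compile(r'([^\.]*\.)*')


def group_id(test_id):
    # Tests whose ids produce the same match run in the same worker; for
    # ([^\.]*\.)* the match is everything up to the last '.', i.e. the class.
    match = group_regex.match(test_id)
    return match.group(0) if match else test_id


print(group_id('pkg.module.TestClass.test_method'))  # pkg.module.TestClass.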

No error handling for unknown arguments

Unknown args aren't being treated as fatal because we call parse_known_args(), which makes them non-fatal (https://docs.python.org/3.6/library/argparse.html#argparse.ArgumentParser.parse_known_args). See: https://github.com/mtreinish/stestr/blob/master/stestr/cli.py#L94

We use this method to get the regexes for test selection, but this is really just a bug. The fix is to define a regexes param with nargs='*' and switch the call to parse_args().

This was originally found by @masayukig in #98
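A minimal sketch of the proposed fix: declare the selection regexes as a positional argument so parse_args() can reject anything unknown.

import argparse

parser = argparse.ArgumentParser(prog='stestr')
parser.add_argument('regexes', nargs='*',
                    help='regexes used to select which tests run')
args = parser.parse_args(['foo', 'bar'])  # OK: args.regexes == ['foo', 'bar']
parser.parse_args(['--bogus'])            # exits: "unrecognized arguments"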

Report to user when subprocess is killed via signal

Issue description

When one of the stestr subprocesses is killed by a signal (such as what happens when the OOM killer is invoked), the result is the word "Killed" in stdout and the appending of _StringException to the test name in the subunit.

An example of html from the subunit of a run where this happened is here:

http://logs.openstack.org/03/592303/23/check/openstacksdk-functional-devstack-tips/64f4c71/testr_results.html.gz

and the stdout output here:

http://logs.openstack.org/03/592303/23/check/openstacksdk-functional-devstack-tips/64f4c71/job-output.txt.gz#_2018-10-02_22_23_03_867658

It's possible in C to detect that a subprocess was killed by a signal, and also which signal it was. If it's possible to do the same in Python, it would be nice to report to the user in some way that the process was killed by an external signal, as otherwise it looks like a test was killed by a hard timeout.
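This is detectable from Python too; a minimal sketch, assuming a POSIX platform where Popen.returncode is negative when the child dies from a signal (the worker command line is illustrative):

import signal
import subprocess

proc = subprocess.Popen(['python', '-m', 'subunit.run', 'some.tests'])
proc.wait()
if proc.returncode < 0:
    # A negative return code means the child was killed by a signal.
    name = signal.Signals(-proc.returncode).name  # Python 3.5+
    print('test worker killed by signal %s' % name)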

Using --serial and --blacklist-file together does not work

Using --serial and --blacklist-file parameters together does not work in that the blacklist file is never used.

I believe this is due to the following check in test_processor [1], where it will only list test IDs for concurrency == 1 if you pass a filter or a worker path. I think this check should be expanded to also cover the whitelist file, blacklist file, and black_regex.

In particular, this broke the OpenStack Trove gate when ostestr was changed to use stestr: OpenStack Trove needs to pass --serial and --blacklist-file together to serialize the tests and blacklist some tests on py3.x for one component that hasn't been upgraded to Python 3 support yet.

The workaround for the issue is to pass a regex like '.*' as a test filter.

[1] https://github.com/mtreinish/stestr/blob/master/stestr/test_processor.py#L132
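A concrete invocation of the workaround described above (the blacklist file name is illustrative):

stestr run --serial --blacklist-file blacklist.txt '.*'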

stestr run is parallel by default

Right now when you do stestr run it executes in parallel. I actually like this more than the testr default of running serially (the only reason anyone uses stestr/testr instead of something like py.test is the parallel execution model and/or the subunit output). However, the CLI args are set up with a --parallel option that makes it opt-in.

The best way to fix this is to change that to a --serial (or --no-parallel) option and update the docs to say parallel is the default. Alternatively, we could fix the default, but I think that's less desirable.

"'Namespace' object has no attribute 'repo_type'" when running with `--slowest` option

When I run the command like below, I got an error 'Namespace' object has no attribute 'repo_type' :

$ stestr --debug run --slowest
<SNIP>
'Namespace' object has no attribute 'repo_type'
Traceback (most recent call last):
  File "/home/masayuki/git/stestr/.tox/py36/lib/python3.6/site-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/masayuki/git/stestr/.tox/py36/lib/python3.6/site-packages/cliff/command.py", line 184, in run
    return_code = self.take_action(parsed_args) or 0
  File "/home/masayuki/git/stestr/stestr/commands/run.py", line 154, in take_action
    slowest.slowest(repo_type=args.repo_type, repo_url=args.repo_url)
AttributeError: 'Namespace' object has no attribute 'repo_type'
Traceback (most recent call last):
  File "/home/masayuki/git/stestr/.tox/py36/bin/stestr", line 10, in <module>
    sys.exit(main())
  File "/home/masayuki/git/stestr/stestr/cli.py", line 96, in main
    return cli.run(argv)
  File "/home/masayuki/git/stestr/.tox/py36/lib/python3.6/site-packages/cliff/app.py", line 279, in run
    result = self.run_subcommand(remainder)
  File "/home/masayuki/git/stestr/.tox/py36/lib/python3.6/site-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/home/masayuki/git/stestr/.tox/py36/lib/python3.6/site-packages/cliff/command.py", line 184, in run
    return_code = self.take_action(parsed_args) or 0
  File "/home/masayuki/git/stestr/stestr/commands/run.py", line 154, in take_action
    slowest.slowest(repo_type=args.repo_type, repo_url=args.repo_url)
AttributeError: 'Namespace' object has no attribute 'repo_type'

https://github.com/mtreinish/stestr/blob/master/stestr/commands/run.py#L154
self.app_args.... should be passed instead of args.... here.

When run under tox, coloring is always disabled regardless of the --color flag

Issue description

I have been trying to make stestr use coloring in its output under tox, but apparently it does not respect the --color flag.

When run outside tox with --color, it works as expected. Even so, coloring is not enabled by default when the user has an interactive console, and it should be, just like other tools (including tox).

Steps to reproduce the problem

Use stestr run --color inside tox.ini.

System information

stestr version (stestr --version):
2.1.0

Python release (python --version):
py27

pip packages (pip freeze):

Additional information

Windows CI failing on pip >=10.0.0

The windows CI is failing on installing tox with pip releases >=10.0.0. The failure is:

For example from: https://ci.appveyor.com/project/mtreinish/stestr/build/1.0.482/job/vcjjyshnpcupdyi2

pip install -U tox
Traceback (most recent call last):
  File "c:\python27-x64\lib\runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "c:\python27-x64\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27-x64\Scripts\pip.exe\__main__.py", line 5, in <module>
ImportError: cannot import name main

This looks like it could be related to the upstream pip issue: pypa/pip#5223 but that is supposed to be fixed in pip 10.0.1. But this failure is still occurring in our CI environment.

Add cli interface for stestr config file options

Right now there are a few common options that can only be set in .stestr.conf, mainly top_dir, test_path, and group_regex. While this makes it easier for projects wanting to integrate stestr as a standard test runner, not having a CLI option makes it a bit more difficult to use as a standalone runner. We should have all of these options exposed as common CLI args.
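Note the usage output quoted in the `stestr run -r` issue above already shows global --test-path, --top-dir, and --group-regex arguments; with those exposed, a standalone invocation would look like:

stestr --test-path ./project_source_dir/tests run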

Add a --pdb option to stestr run

Right now if you want to run with pdb, stestr breaks it. The way to work around this is to call subunit.run or testtools.run directly. We point this out in the stestr development docs:

http://stestr.readthedocs.io/en/latest/developer_guidelines.html#running-the-tests

However, there is no reason we can't just fast-path stestr to call the test runner directly when pdb is requested. We probably don't want to store results in the repository in that case.

For an example of this being done, we added it to os-testr: https://github.com/openstack/os-testr/blob/master/os_testr/ostestr.py#L197-L210; we could probably steal that logic and drop it into stestr.commands.run.

[RFE] pytest-style filesystem paths

The pytest runner allows you to provide filepath-like arguments to select tests instead of Python paths:

pytest test_mod.py::TestClass::test_method

We already have support for the path aspect of this using the --no-discover/-n option, but it does not allow you to specify a class/method:

stestr -n test_mod.py

To do that, you need to use a Python path:

stestr test_mod.TestClass.test_method

It would be helpful to support the pytest-style form, as it is frequently quicker and doesn't require discovery outside of the file provided (or so I assume).
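A sketch of the translation such support would need (a hypothetical helper; it assumes the file path is relative to the discovery top dir):

import os


def path_to_test_id(arg):
    # 'test_mod.py::TestClass::test_method' -> 'test_mod.TestClass.test_method'
    path, _, selector = arg.partition('::')
    module = os.path.splitext(path)[0].replace(os.sep, '.')
    return '.'.join([module] + selector.split('::')) if selector else module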

Unexpected Successes do not result in overall failure

Issue description

It appears that tests that pass unexpectedly, when marked with @unittest.expectedFailure, do not trigger an overall test suite failure.

Based on changes in python3.4 from the fix landing for https://bugs.python.org/issue20165 this is rather unexpected.

Expected behavior and actual behavior

Mark any successful test with the decorator @unittest.expectedFailure
Run the tests using stestr run --slowest
Exit code should be non-zero

Actual behaviour is that the exit code is zero (0).

Steps to reproduce the problem

Mark any successful test with the decorator @unittest.expectedFailure and run tests with stestr.
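A minimal reproduction module (nothing stestr-specific about it):

import unittest


class Demo(unittest.TestCase):

    @unittest.expectedFailure
    def test_passes_unexpectedly(self):
        # This assertion passes, so the run records an "unexpected success".
        self.assertTrue(True)

Running this with stestr run reports the unexpected success but still exits 0.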

Specifications like the version of the project, operating system, or hardware

System information

stestr version (stestr --version): stestr 2.1.0

Python release (python --version): Python 3.5.2

pip packages (pip freeze):
-f /var/cache/pip
cliff==2.13.0
cmd2==0.9.2
colorama==0.3.9
extras==1.0.0
fixtures==3.0.0
flake8==3.5.0
future==0.16.0
linecache2==1.0.0

mccabe==0.6.1
pbr==4.1.0
pkg-resources==0.0.0
prettytable==0.7.2
pycodestyle==2.3.1
pyflakes==1.6.0
pyparsing==2.2.0
pyperclip==1.6.2
python-mimeparse==1.6.0
python-subunit==1.3.0
PyYAML==3.13
six==1.11.0
stestr==2.1.0
stevedore==1.28.0
testtools==2.3.0
traceback2==1.4.0
unittest2==1.1.0
voluptuous==0.11.1
wcwidth==0.1.7

Additional information

It would appear that the testtools TestResult class contains a method with the correct behaviour at https://github.com/testing-cabal/testtools/blob/master/testtools/testresult/real.py#L174, and it would be possible to fix this in stestr by changing the call from summary_result.wasSuccessful() at https://github.com/mtreinish/stestr/blob/master/stestr/commands/load.py#L222 to testtools.TestResult.wasSuccessful(summary_result). There would also be a few other places that need updating.

Based on looking at the tests in testing-cabal/testtools, it may be that testtools is maintaining backwards compatibility in the StreamSummary class with unittest.TestResult from before the fix landed in Python 3.4, and downstream projects are expected to handle this themselves. I will log an issue there as well, referencing this one, to try to determine the correct place to submit a fix.

Windows support

It would be good to support Windows to be happy Windows Python users. :)
