
green's Introduction


Green -- A clean, colorful, fast Python test runner.

Features

  • Clean - Low redundancy in output. Result statistics for each test are vertically aligned.
  • Colorful - Terminal output makes good use of color when the terminal supports it.
  • Fast - Tests run in independent processes. (One per processor by default. Does not play nicely with gevent)
  • Powerful - Multi-target + auto-discovery.
  • Traditional - Use the normal unittest classes and methods for your unit tests.
  • Descriptive - Multiple verbosity levels, from just dots to full docstring output.
  • Convenient - Bash-completion and ZSH-completion of options and test targets.
  • Thorough - Built-in integration with coverage.
  • Embedded - Can be run with a setup command without installing it into site-packages.
  • Modern - Supports Python 3.8+. Additionally, PyPy is supported on a best-effort basis.
  • Portable - macOS, Linux, and BSDs are fully supported. Windows is supported on a best-effort basis.
  • Living - This project grows and changes. See the changelog

Community

  • For questions, comments, or feature requests, please open a discussion
  • For bug reports, please submit an issue to the GitHub issue tracker for Green.
  • Submit a pull request with a bug fix or new feature.
  • 💖 Sponsor the maintainer to support this project

Training Course

There is a training course available if you would like professional training: Python Testing with Green.


Screenshots

Top: With Green! Bottom: Without Green :-(


Quick Start

pip3 install green    # To upgrade: "pip3 install --upgrade green"

Now run green...

# From inside your code directory
green

# From outside your code directory
green code_directory

# A specific file
green test_stuff.py

# A specific test inside a large package.
#
# Assuming you want to run TestClass.test_function inside
# package/test/test_module.py ...
green package.test.test_module.TestClass.test_function

# To see all examples of all the failures, errors, etc. that could occur:
green green.examples


# To run Green's own internal unit tests:
green green

For more help, see the complete command-line options or run green --help.

Config Files

Configuration settings are resolved in this order, with settings found later in the resolution chain overwriting earlier settings (last setting wins).

  1. $HOME/.green
  2. A config file specified by the environment variable $GREEN_CONFIG
  3. setup.cfg in the current working directory of test run
  4. .green in the current working directory of the test run
  5. A config file specified by the command-line argument --config FILE
  6. Command-line arguments

Any arguments specified in more than one place will be overwritten by the value of the LAST place the setting is seen. So, for example, if a setting is turned on in ~/.green and turned off by a command-line argument, then the setting will be turned off.

Config file syntax is option = value, one option per line. option is the long command-line option name without the leading double-dash (--verbose becomes verbose).

Most values should be True or False. Accumulated values (verbose, debug) should be specified as integers (-vv would be verbose = 2).

Example:

verbose       = 2
logging       = True
omit-patterns = myproj*,*prototype*

Troubleshooting

One easy way to avoid common importing problems is to navigate to the parent directory of the directory your Python code is in. Then pass green the directory your code is in and let it autodiscover the tests (see the Tutorial below for tips on making your tests discoverable).

cd /parent/directory
green code_directory

Another way to address importing problems is to carefully set up your PYTHONPATH environment variable to include the parent path of your code directory. Then you should be able to just run green from inside your code directory.

export PYTHONPATH=/parent/directory
cd /parent/directory/code_directory
green

Integration

Bash and Zsh

To enable Bash-completion and Zsh-completion of options and test targets when you press Tab in your terminal, add the following line to the Bash or Zsh config file of your choice (usually ~/.bashrc or ~/.zshrc):

which green >& /dev/null && source "$( green --completion-file )"

Coverage

Green has built-in integration support for the coverage module. Add -r or --run-coverage when you run green.
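
For example, to run the tests of a hypothetical proj package with coverage reporting:

green -r proj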

setup.py command

Green is available as a setup.py runner, invoked as any other setup command:

python setup.py green

This requires green to be present in the setup_requires section of your setup.py file. To run green on a specific target, use the test_suite argument (or leave blank to let green discover tests itself):

# setup.py
from setuptools import setup

setup(
    ...
    setup_requires = ['green'],
    # test_suite = "my_project.tests"
)

You can also add an alias to the setup.cfg file, so that python setup.py test actually runs green:

# setup.cfg

[aliases]
test = green

Django

Django can use green as its test runner.

  • To just try it out, use the --testrunner option of manage.py:
./manage.py test --testrunner=green.djangorunner.DjangoRunner
  • Make it persistent by adding the following line to your settings.py:
TEST_RUNNER="green.djangorunner.DjangoRunner"
  • For verbosity, green adds an extra command-line option to manage.py, to which you pass the number of v's you would have used with green.
./manage.py test --green-verbosity 3

nose-parameterized

Green will run generated tests created by nose-parameterized. They have lots of examples of how to generate tests, so follow the link above if you're interested.
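
For illustration, here is a sketch of a parameterized test, assuming the nose_parameterized package's @parameterized.expand decorator:

import unittest

from nose_parameterized import parameterized

class TestAddition(unittest.TestCase):

    # Each tuple becomes one generated test; the first element is just a label.
    @parameterized.expand([
        ('one_plus_one', 1, 1, 2),
        ('two_plus_two', 2, 2, 4),
    ])
    def test_add(self, name, a, b, expected):
        self.assertEqual(a + b, expected)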

Unit Test Structure Tutorial

This tutorial covers:

  • External structure of your project (directory and file layout)
  • Skeleton of a real test module
  • How to import stuff from your project into your test module
  • Gotchas about naming...everything.
  • Where to run green from and what the output could look like.
  • DocTests

For more in-depth online training please check out Python Testing with Green:

  • Layout your test packages and modules correctly
  • Organize your tests effectively
  • Learn the tools in the unittest and mock modules
  • Write meaningful tests that enable quick refactoring
  • Learn the difference between unit and integration tests
  • Use advanced tips and tricks to get the most out of your tests
  • Improve code quality
  • Refactor code without fear
  • Have a better coding experience
  • Be able to better help others

External Structure

This is what your project layout should look like with just one module in your package:

proj                  # 'proj' is the package
├── __init__.py
├── foo.py            # 'foo' (or proj.foo) is the only "real" module
└── test              # 'test' is a sub-package
    ├── __init__.py
    └── test_foo.py   # 'test_foo' is the only "test" module

Notes:

  1. There is an __init__.py in every directory. Don't forget it. It can be an empty file, but it needs to exist.

  2. proj itself is a directory that you will be storing somewhere. We'll pretend it's in /home/user

  3. The test directory needs to start with test.

  4. The test modules need to start with test.

When your project starts adding code in sub-packages, you will need to make a choice on where you put their tests. I prefer to create a test subdirectory in each sub-package.

proj
├── __init__.py
├── foo.py
├── subpkg
│   ├── __init__.py
│   ├── bar.py
│   └── test              # test subdirectory in every sub-package
│       ├── __init__.py
│       └── test_bar.py
└── test
    ├── __init__.py
    └── test_foo.py

The other option is to start mirroring your subpackage layout from within a single test directory.

proj
├── __init__.py
├── foo.py
├── subpkg
│   ├── __init__.py
│   └── bar.py
└── test
    ├── __init__.py
    ├── subpkg            # mirror sub-package layout inside test dir
    │   ├── __init__.py
    │   └── test_bar.py
    └── test_foo.py

Skeleton of Test Module

Assume foo.py contains the following contents:

def answer():
    return 42

class School():

    def food(self):
        return 'awful'

    def age(self):
        return 300

Here's a possible version of test_foo.py you could have.

# Import stuff you need for the unit tests themselves to work
import unittest

# Import stuff that you want to test.  Don't import extra stuff if you don't
# have to.
from proj.foo import answer, School

# If you need the whole module, you can do this:
#     from proj import foo
#
# Here's another reasonable way to import the whole module:
#     import proj.foo as foo
#
# In either case, you would obviously need to access objects like this:
#     foo.answer()
#     foo.School()

# Then write your tests

class TestAnswer(unittest.TestCase):

    def test_type(self):
        "answer() returns an integer"
        self.assertEqual(type(answer()), int)

    def test_expected(self):
        "answer() returns 42"
        self.assertEqual(answer(), 42)

class TestSchool(unittest.TestCase):

    def test_food(self):
        school = School()
        self.assertEqual(school.food(), 'awful')

    def test_age(self):
        school = School()
        self.assertEqual(school.age(), 300)

Notes:

  1. Your test class must subclass unittest.TestCase. Technically, neither unittest nor Green care what the test class is named, but to be consistent with the naming requirements for directories, modules, and methods we suggest you start your test class with Test.

  2. Start all your test method names with test.

  3. What a test class and/or its methods actually test is entirely up to you. In some sense it is an art form. Just use the test classes to group a bunch of methods that seem logical to go together. We suggest you try to test one thing with each method.

  4. The methods of TestAnswer have docstrings, while the methods on TestSchool do not. For more verbose output modes, green will use the method docstring to describe the test if it is present, and the name of the method if it is not. Notice the difference in the output below.

DocTests

Green can also run tests embedded in documentation via Python's built-in doctest module. Returning to our previous example, we could add docstrings with example code to our foo.py module:

def answer():
    """
    >>> answer()
    42
    """
    return 42

class School():

    def food(self):
        """
        >>> s = School()
        >>> s.food()
        'awful'
        """
        return 'awful'

    def age(self):
        return 300

Then add a doctest_modules = [ ... ] list to the top level of a test module. So let's revisit test_foo.py and add that:

# we could add this to the top or bottom of the existing file...

doctest_modules = ['proj.foo']

Then running green -vv might include this output:

  DocTests via `doctest_modules = [...]`
.   proj.foo.School.food
.   proj.foo.answer

...or with one more level of verbosity (green -vvv)

  DocTests via `doctest_modules = [...]`
.   proj.foo.School.food -> /Users/cleancut/proj/green/example/proj/foo.py:10
.   proj.foo.answer -> /Users/cleancut/proj/green/example/proj/foo.py:1

Notes:

  1. There needs to be at least one unittest.TestCase subclass with a test method present in the test module for doctest_modules to be examined.
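
For example, a minimal test module that satisfies this requirement (reusing proj.foo from above) might look like:

import unittest

from proj.foo import answer

# Examined only because a TestCase with a test method is also present.
doctest_modules = ['proj.foo']

class TestAnswer(unittest.TestCase):

    def test_expected(self):
        self.assertEqual(answer(), 42)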

Running Green

To run the unittests, we would change to the parent directory of the project (/home/user in this example) and then run green proj.

In a real terminal, this output is syntax highlighted

$ green proj
....

Ran 4 tests in 0.125s using 8 processes

OK (passes=4)

Okay, so that's the classic short-form output for unit tests. Green really shines when you start getting more verbose:

In a real terminal, this output is syntax highlighted

$ green -vvv proj
Green 4.1.0, Coverage 7.4.1, Python 3.12.2

test_foo
  TestAnswer
.   answer() returns 42
.   answer() returns an integer
  TestSchool
.   test_age
.   test_food

Ran 4 tests in 0.123s using 8 processes

OK (passes=4)

Notes:

  1. Green outputs clean, hierarchical output.

  2. Test status is aligned on the left (the four periods correspond to four passing tests)

  3. Method names are replaced with docstrings when present. You can see that the first two tests show their docstrings, while the last two fall back to their method names.

  4. Green always outputs a summary of statuses that will add up to the total number of tests that were run. For some reason, many test runners forget about statuses other than Error and Fail, and even the built-in unittest runner forgets about passing ones.

  5. Possible values for test status (these match the unittest short status characters exactly)

  • . Pass
  • F Failure
  • E Error
  • s Skipped
  • x Expected Failure
  • u Unexpected pass

Origin Story

Green grew out of a desire to see pretty colors. Really! A big part of the whole Red/Green/Refactor process in test-driven development is actually getting to see red and green output. Most Python unit testing actually goes Gray/Gray/Refactor (at least on my terminal, which is gray text on black background). That's a shame. Even TV is in color these days. Why not terminal output? Even worse, the default output for most test runners is cluttered, hard to read, redundant, and the dang status indicators are not lined up in a vertical column! Green fixes all that.

But how did Green come to be? Why not just use one of the existing test runners out there? It's an interesting story, actually. And it starts with trial.

trial

I really like Twisted's trial test runner, though I don't really have any need for the rest of the Twisted event-driven networking engine library. I started professionally developing in Python when version 2.3 was the latest, greatest version and none of us in my small shop had ever even heard of unit testing (gasp!). As we grew, we matured and started testing, and we chose trial to do the test running. If most of my projects at my day job hadn't moved to Python 3, I probably would have just stuck with trial, but at the time I wrote green trial didn't run on Python 3 (but since 15.4.0 it does). Trial was and is the foundation for my inspiration for having better-than-unittest output in the first place. It is a great example of reducing redundancy (report module/class once, not on every line), lining up status vertically, and using color. I feel like Green trumped trial in two important ways: 1) It wasn't a part of an immense event-driven networking engine, and 2) it was not stuck in Python 2 as trial was at the time. Green will obviously never replace trial, as trial has features necessary to run asynchronous unit tests on Twisted code. After discovering that I couldn't run trial under Python 3, I next tried...

nose

I had really high hopes for nose. It seemed to be widely accepted. It seemed to be powerful. The output was just horrible (exactly the same as unittest's output). But it had a plugin system! I tried all the plugins I could find that mentioned improving upon the output. When I couldn't find one I liked, I started developing Green (yes, this Green) as a plugin for nose. I chose the name Green for three reasons: 1) It was available on PyPI! 2) I like to focus on the positive aspect of testing (everything passes!), and 3) It made a nice counterpoint to several nose plugins that had "Red" in the name. I made steady progress on my plugin until I hit a serious problem in the nose plugin API. That's when I discovered that nose is in maintenance mode -- abandoned by the original developers, handed off to someone who won't fix anything if it changes the existing behavior. What a downer. Despite the huge user base, I already consider nose dead and gone. A project which will not change (even to fix bugs!) will die. Even the maintainer keeps pointing everyone to...

nose2

So I pivoted to nose2! I started over developing Green (same repo -- it's in the history). I can understand the allure of a fresh rewrite as much as the next guy. Nose had made less-than-ideal design decisions, and this time they would be done right! Hopefully. I had started reading nose code while writing the plugin for it, and so I dived deep into nose2. And ran into a mess. Nose2 is alpha. That by itself is not necessarily a problem, if the devs will release early and often and work to fix things you run into. I submitted a 3-line pull request to fix some problems where the behavior did not conform to the already-written documentation, which broke my plugin. The pull request wasn't initially accepted because I (ironically) didn't write unit tests for it. This got me thinking "I can write a better test runner than this". I got tired of the friction of dealing with nose/nose2 and decided to see what it would take to write my own test runner. That brought me to...

unittest

I finally went and started reading unittest (Python 2.7 and 3.4) source code. unittest is its own special kind of mess, but it's universally built-in, and most importantly, subclassing or replacing unittest objects to customize the output looked a lot easier than writing a plugin for nose and nose2. And it was, for the output portion! Writing the rest of the test runner turned out to be quite a project, though. I started over on Green again, starting down the road to what we have now. A custom runner that subclasses or replaces bits of unittest to provide exactly the output (and other feature creep) that I wanted.

I had three initial goals for Green:

  1. Colorful, clean output (at least as good as trial's)
  2. Run on Python 3
  3. Try to avoid making it a huge bundle of tightly-coupled, hard-to-read code.

I contend that I nailed 1. and 2., and ended up implementing a bunch of other useful features as well (like very high performance via running tests in parallel in multiple processes). Whether I succeeded with 3. is debatable. I continue to try to refactor and simplify, but adding features on top of a complicated bunch of built-in code doesn't lend itself to the flexibility needed for clear refactors.

Wait! What about the other test runners?

  • pytest -- Somehow I never realized pytest existed until a few weeks before I released Green 1.0. Nowadays it seems to be pretty popular. If I had discovered it earlier, maybe I wouldn't have made Green! Hey, don't give me that look! I'm not omniscient!

  • tox -- I think I first ran across tox only a few weeks before I heard of pytest. Its homepage didn't mention anything about color, so I didn't try using it.

  • the ones I missed -- Er, haven't heard of them yet either.

I'd love to hear your feedback regarding Green. Like it? Hate it? Have some awesome suggestions? Whatever the case, go open a discussion.

green's People

Contributors

althonos, anomitra, attomos, bkmd11, charles-l, cleancut, dizballanze, dotlambda, dougthor42, eltoder, gitter-badger, gurnec, jayvdb, mammadori, matslangoh, minchinweb, miohtama, nmustaki, ogaday, reputet, rolobio, skeggse, smspillaz, sodul, svisser, tbarron, thijstriemstra, timgates42, vladv, ybakos


green's Issues

Consider using drone.io

drone.io is a competitor to Travis. We should evaluate whether it would be good to use in addition to or instead of Travis (if at all).

Change blue to cyan for skipped tests

The blue color tends not to show up very well under Windows CMD, while cyan shows up much better:

(screenshots comparing the blue and cyan output under CMD omitted)

Diff for output.py, line 78:

- return termstyle.blue(text)
+ return termstyle.cyan(text)

Options set via config file are disrespected

Options set via a config file are no longer working for me. Here's a simple example:

$ cat ~/.green
verbose = 2

$ green
.....
Ran 5 tests in 0.001s
OK (passes=5)

I observe the same problem with other options in my config file (e.g., file_pattern). They aren't being picked up at all.

When I check out v1.71, the example above works as expected, but not in v1.80 and beyond.

Command-line completion for tests

Command-line completion for test names passed to the test runner, whether module name ("project.test.test_module_a") or test case ("project.test.test_module_a.TestCaseA") or test itself ("project.test.test_module_a.TestCaseA.test_run1").

improve the pytest section

It would be nice to mention that pytest apparently has all the features of green and more, or maybe to comment on what is missing.

[question] nose @raises decorator

Hi there,

I have a lot of code covered by nose tests and I am evaluating migration to another test runner. I agree with your explanations and conclusions.

I have unit tests that import some nose sugar coating:

  from nose.tools import raises

and then use it like this:

  @raises(MyExceptionThatShouldBeRaised)
  def test_number_one():
      foo()

Do we have some similar syntactic sugar in green yet?
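
For reference, since green runs standard unittest tests, the stock unittest equivalent of @raises is the assertRaises context manager. A self-contained sketch, with placeholder definitions standing in for the question's code:

import unittest

# Placeholders for the names used in the question above:
class MyExceptionThatShouldBeRaised(Exception):
    pass

def foo():
    raise MyExceptionThatShouldBeRaised()

class TestNumberOne(unittest.TestCase):

    def test_number_one(self):
        with self.assertRaises(MyExceptionThatShouldBeRaised):
            foo()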

Does green support different file naming patterns?

At various points in loader.py, it looks like green intends to support the idea of a file_pattern. But I don't see any command-line options to control this behavior. Also, I don't think that the value of file_pattern is actually being passed through the call stack down to the underlying discover() function. Perhaps I'm overlooking something.

Anyway, green looks like a great project and I'm eager to try it out (frustrated with nose, pytest, and unittest). But on my team we follow a *_tests.py naming convention, so I've been unable to get out of the starting gate.

Consider creating a decorator that would enable marking bare functions using asserts to be run as a TestCase

Maybe it would be possible to create a decorator which wraps a bare function containing asserts and generates a standard unittest class/method out of it. It would require importing the decorator and decorating the function, obviously. Maybe it would look something like this:

from green import testme

@testme
def myfunc():
    assert foo == bar, "foo = bar"

Maybe that could return a wrapped version of the function that would be semantically equivalent to:

import unittest

class TestMyfunc(unittest.TestCase):

    def test_myfunc(self):
        # XXX Call myfunc() and translate its behavior into unittest XXX
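
For discussion, here is one hypothetical way such a decorator could work (a sketch, not part of green):

import unittest

def testme(func):
    # Hypothetical sketch: turn a bare assert-style function into a TestCase.
    class Wrapped(unittest.TestCase):
        def runTest(self):
            # Any AssertionError raised by func() becomes a normal test failure.
            func()
    Wrapped.__name__ = 'Test' + func.__name__.capitalize()
    return Wrapped

Since unittest's loader falls back to the runTest method when a TestCase defines no test* methods, the returned class would be discoverable like any other test.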

Seems like Green runs the same tests multiple times

I have a project that I've been fiddling with. I was trying to decide between the following two project structures:

Structure A (the original structure):

+ ProjectFolder
  + dist
  + doc
  + main_package
    - __init__.py
    - module1.py
    - module2.py
    + tests
      - __init__.py
      - test_module1.py
      - test_module2.py
  - .travis.yml
  - other stuff like dev_requirements.txt, appveyor.yml, setup.py, etc.

Structure B (what I tried moving to):

+ ProjectFolder
  + dist
  + doc
  + main_package
    - __init__.py
    - module1.py
    - module2.py
  + tests             <-- note that this is *not* under the package directory
    - __init__.py
    - test_module1.py
    - test_module2.py
  - .travis.yml
  - other stuff like dev_requirements.txt, appveyor.yml, setup.py, etc.

Before making the change from A to B, nosetests and green worked just fine. My project currently has 23 tests.

After switching to structure B, nosetests worked just fine, while green ran 46 tests - exactly 2x the number of tests there are. Running green in verbose mode showed that it was still running tests from ProjectFolder\main_package\tests in addition to the tests found in ProjectFolder\tests.

So I decided to switch back to structure A. Again, nosetests ran 23 tests correctly, but this time green ran 69 tests - 3x! Running in verbose mode shows that green runs:

  1. tests from ProjectFolder\main_package\tests
  2. tests from ProjectFolder\tests
  3. tests from ProjectFolder\main_package\tests again

Does green save anything? Is there a file I should delete? I've already tried deleting __pycache__ in all folders.

I'm trying to recreate the issue with a simpler project, but have so far been unsuccessful. I'll continue to try and get exact steps to reproduce the issue.

Edit 1

Oh, I forgot relevant information:

  • Green version 1.9.4 (1.10 and above gives me different errors and doesn't run)
  • Python 3.4.3, 64bit
  • Windows 7 Professional
  • Running python via WinPython 1.1

Make green work with nose_parameterized

Green doesn't work with nose_parameterized, since it executes tests that nose_parameterized marks as disabled using the nose-specific __test__ attribute.

This attribute is easy to detect, so we should prune any tests that have it set.
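
A hypothetical sketch of that pruning check (the __test__ convention itself comes from nose):

def should_run(test_case):
    # Hypothetical filter: skip tests whose method sets __test__ = False.
    # _testMethodName is unittest.TestCase's (private) record of the test method name.
    method = getattr(test_case, test_case._testMethodName, None)
    return getattr(method, '__test__', True)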

Use containerized travis-ci build.

sudo isn't used in the .travis.yml file. Travis CI has a much faster container based infrastructure that we can use if we're just using PyPI to install dependencies.

Pip install does not register nose plugin

Hello,

I wanted to give green a try, did 'sudo pip install green', which was successful, but 'nosetest --plugins' is not showing it installed, and my output looks the same.

possible problem with virtualenv

I'm using py.test for my project. It works nicely from within a virtualenv.

(env) (precise)tad@localhost:/share/husl-numpy$ which python
/home/tad/Downloads/share/husl-numpy/env/bin/python
(env) (precise)tad@localhost:/share/husl-numpy$ which py.test
/home/tad/Downloads/share/husl-numpy/env/bin/py.test
(env) (precise)tad@localhost:/share/husl-numpy$ python -c "import numpy; print(numpy.__version__)"
1.9.2

Note that numpy is importable. Let's run the tests.

(env) (precise)tad@localhost:/share/husl-numpy$ py.test test.py 
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.4.3 -- py-1.4.30 -- pytest-2.7.2
rootdir: /home/tad/Downloads/share/husl-numpy, inifile: 
collected 33 items 

test.py .................................

====================================================================== 33 passed in 0.93 seconds =======================================================================

Yay! It works. Now, here's where I've put green:

(env) (precise)tad@localhost:/share/husl-numpy$ which green
/home/tad/Downloads/share/husl-numpy/env/bin/green

Let's run the tests with green:

(env) (precise)tad@localhost:/share/husl-numpy$ green test.py 
Traceback (most recent call last):
  File "/usr/local/bin/green", line 9, in <module>
    load_entry_point('green==2.0.0', 'console_scripts', 'green')()
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/cmdline.py", line 79, in main
    result = run(test_suite, stream, args) # pragma: no cover
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/runner.py", line 92, in run
    targets = [(target, manager.Queue()) for target in toParallelTargets(suite, args.targets)]
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 57, in toParallelTargets
    proto_test_list = toProtoTestList(suite)
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 42, in toProtoTestList
    toProtoTestList(i, test_list, doing_completions)
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 42, in toProtoTestList
    toProtoTestList(i, test_list, doing_completions)
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 36, in toProtoTestList
    getattr(suite, exception_method)()
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 398, in testFailure
    raise ImportError(message)
ImportError: Failed to import test:
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/green-2.0.0-py3.4.egg/green/loader.py", line 390, in loadTarget
    tests = loader.loadTestsFromName(dotted_path)
  File "/usr/local/lib/python3.4/unittest/loader.py", line 105, in loadTestsFromName
    module = __import__('.'.join(parts_copy))
  File "/home/tad/Downloads/share/husl-numpy/test.py", line 2, in <module>
    import numpy as np
ImportError: No module named 'numpy'

What might I be doing wrong? Thanks.

Support "catch" like unittest

Some features that are present in unittest (when tests are invoked via python -m unittest project.test) that I would consider really nice to have:

  • "-c"/"--catch": Control-C during the test run waits for the current test to end and then reports all the results so far. A second control-C raises the normal KeyboardInterrupt exception.
  • "-f"/"--failfast": Stop the test run on the first error or failure.

Edit: Balanced some parens, etc.

Verbosity levels when running green with django

Is green's fine-grained verbosity an option when running green as the default test runner for Django?
I tried, but I got Django's default v3 level, where showing you how it sets up the test database is as verbose as it gets.

Automate testing installed versions of green

Green should be tested after being installed, to ensure that it actually works after installation; working in a clone of the repo is not really useful if the installation is broken. Perhaps this is a good place to use tox.

Tests get different fully-dotted names depending on the discovery-method used.

Test discovery sometimes gives pkg.module.submodule... when some discovery methods are used, and for others it gives submodule...

This is mildly annoying for regular output, but incredibly frustrating when trying to implement output for bash/zsh completion in #7.

This has got to be fixed. I hope I don't have to rewrite all of test discovery.

Support "failfast" like unittest

(Split into its own issue from #5 so it can be addressed separately.)

A feature that is present in unittest (when tests are invoked via python -m unittest project.test) that would be really nice to have:

-f, --failfast     Stop the test run on the first error or failure.

Handle tracebacks more nicely

  • Highlight the most pertinent part of a traceback
  • Cut out useless parts of the traceback

Depends on #14 and avoiding unittest's mangling of tracebacks.

[question] setup.py integration

I'd like a way to integrate with setup.py more finely than what is currently possible (to my understanding at least).

Here is the long story

I found the way to integrate with setup.py using this code:

test_suite='green.test',
tests_require=['coverage', 'green'],

but as you can see in my example, I want to use coverage to produce reports.

What I did when using nose

in setup.py

test_suite='nose.collector',
tests_require=['nose', 'nosexcover'],

and in setup.cfg

[nosetests]
detailed-errors=1
with-doctest=1
verbose=True
with-xunit=True
with-xcoverage=True

[aliases]
test = nosetests --detailed-errors --with-doctest --cover-package=mypackage

but using green I cannot (did not find a way to) provide an alias, and thus cannot force the production of a correct .coverage directory

Current situation & usage

At the same time, running python setup.py test with test_suite='nose.collector' will generate a directory named like this: .coverage.1_7173, but the appended number will change at every run, and the generated files will reference temporary files that are long gone before you can run coverage html.

My solution for the moment is to not use test_suite='nose.collector' and provide a simple shell script like this:

runtests.sh

#!/bin/bash

# run the green test runner (pip install --upgrade green)
green -r <mypackagenamehere>

# then generate the coverage report (pip install --upgrade coverage)
coverage html

# then open coverage_html_report/index.html to review the results

TypeError: __init__() takes at most 5 arguments (6 given) in subprocess.py

I'm getting the following traceback on pypy3

Traceback (most recent call last):
  File "/home/travis/build/polysquare/polysquare-ci-scripts/setup.py", line 44, in <module>
    include_package_data=True)
  File "/opt/python/pypy3-2.4.0/lib-python/3/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/opt/python/pypy3-2.4.0/lib-python/3/distutils/dist.py", line 917, in run_commands
    self.run_command(cmd)
  File "/opt/python/pypy3-2.4.0/lib-python/3/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "/home/travis/virtualenv/pypy3-2.4.0/site-packages/setuptools_green/__init__.py", line 34, in run
    sys.exit(green.cmdline.main())
  File "/home/travis/virtualenv/pypy3-2.4.0/site-packages/green/cmdline.py", line 87, in main
    result = run(test_suite, stream, args) # pragma: no cover
  File "/home/travis/virtualenv/pypy3-2.4.0/site-packages/green/runner.py", line 52, in run
    pool = LoggingDaemonlessPool(processes=args.subprocesses or None)
  File "/home/travis/virtualenv/pypy3-2.4.0/site-packages/green/subprocess.py", line 98, in __init__
    initargs, maxtasksperchild, context)
TypeError: __init__() takes at most 5 arguments (6 given)

Green 1.9.1 - would appear that --allow-stdout is broken

Hi -

Very much like green - thanks for providing it. It would appear that the -a or --allow-stdout option is not working - stdout is still captured with the argument. Have tested with:

Green 1.9.1, Python 3.4.3
Green 1.9.1, Python 2.7.9

If I change the value of allow_stdout to True in default_args in config.py:47 - then stdout is not captured - so it would appear that there is some issue in parsing the arguments to green. Haven't dug down to that part of the code yet.

Thanks, Jim

Support AppVeyor's PowerShell

Running green in AppVeyor CI does not color results.

However, I think this may be an AppVeyor issue, as I can run green in Windows PowerShell myself and get at least some form of coloring; other issues (such as IT permissions here at work) are preventing me from really testing green + PowerShell thoroughly.

I figured I'd record this issue here even if it's out of your control.

Typo in line 14 of cmdline.py

"covarage_version" should be "coverage_version". If coverage isn't installed, this will cause the command to error out with a NameError:

NameError: global name 'coverage_version' is not defined

uncaught exception from testcase - exit without traceback

Python 2.7.10 & 3.4.3

I have a failing test (in fact it is expected at the moment, lacking some implementation), yet I get no traceback, nor do any remaining tests run - clean exit after printing a red E, which is quite embarrassing.

$ green lib
........................................E$

In fact I get the same clean exit from plain unittest

$ python -m unittest discover
........................................E$

Note: even a final newline was not printed!

unittest2 shows a relevant traceback and stops afterwards (no new tests are run).

Actually I am using testtools which depends on unittest2, so it works with that as well:

$ python -m testtools.run discover
Tests running...
======================================================================
ERROR: lib.tool.test_...
----------------------------------------------------------------------
Traceback (most recent call last):
...
  File "/usr/lib64/python3.4/argparse.py", line 1728, in parse_args
    args, argv = self.parse_known_args(args, namespace)
  File "/usr/lib64/python3.4/argparse.py", line 1767, in parse_known_args
    self.error(str(err))
  File "/usr/lib64/python3.4/argparse.py", line 2386, in error
    self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
  File "/usr/lib64/python3.4/argparse.py", line 2373, in exit
    _sys.exit(status)
SystemExit: 2

nose runs properly, even capturing the uncaught second exception in unittest2 and continuing.

Unfortunately I could not create a minimal reproduction yet (my attempts so far mysteriously worked), but I could pinpoint the place where it could be fixed.

Changing this line:

test(result)

to the following makes things work (though this might not be the proper fix):

try:
    test(result)
except:
    pass

Bad import causes no tests

When testing the following file, green does not find any tests.

#! /usr/bin/env python
import unittest
import xml.etree.ElementTree.Element

class TestElement(unittest.TestCase):
    def test_foo(self):
        Element

Python unittest's results:

$ python -m test_foo.py 
/usr/bin/python: Error while finding spec for 'test_foo.py' (<class 'ImportError'>: No module named 'xml.etree.ElementTree.Element'; 'xml.etree.ElementTree' is not a package)

Green's results:

$ green -d test_foo.py
2014-08-22 16:29:20     DEBUG Attempting to load target 'test_foo.py' with file_pattern 'test*.py'
2014-08-22 16:29:20     DEBUG Found 0 tests for target 'test_foo.py'
2014-08-22 16:29:20     DEBUG No test loading attempts succeeded.  Created an empty test suite.
Ran 0 tests in 0.000s

No Tests Found

Upon removal of the import line, green finds the test and rightly fails it:

#! /usr/bin/env python
import unittest

class TestElement(unittest.TestCase):
    def test_foo(self):
        Element
$ green -d test_foo.py
2014-08-22 16:29:37     DEBUG Attempting to load target 'test_foo.py' with file_pattern 'test*.py'
2014-08-22 16:29:37     DEBUG Load method: FILE - test_foo.py
2014-08-22 16:29:37     DEBUG Found 1 test for target 'test_foo.py'
E

Error in test_foo.TestElement.test_foo
  File "/usr/local/lib/python3.4/unittest/case.py", line 57, in testPartExecutor
    yield
  File "/usr/local/lib/python3.4/unittest/case.py", line 574, in run
    testMethod()
  File "/home/rolobio/tmp/test_foo.py", line 7, in test_foo
    Element
NameError: name 'Element' is not defined

Ran 1 test in 0.001s

FAILED (errors=1)

Redo Output Options

I would rather have fine-grained output options.

We can leave the -v, -vv, -vvv options as shortcuts to a set of the fine-grained options.

Cases we don't currently handle that we want to be able to:

  • show the method name AND the docstring
  • show only the module name and then trailing dots for the module's tests
  • show just the method docstrings (no module or classes)

Improve Green's own Unit Tests

Areas that could be improved:

  • Use only one style of docstring
  • Leverage helper functions and textwrap.dedent() to reduce the amount of boilerplate code in tests where we write external test files to load and run.
  • Use newer testing facilities like addCleanup and context handlers.

Consider adding an output mode that lists dots delineated by modules

Consider adding an output mode (maybe as -v, shifting current -v to -vv and current -vv to -vvvv?) where the body of the output looks like pytest's default output.

green/test/test_cmdline.py ........
green/test/test_loader.py .............
green/test/test_output.py ...........
green/test/test_result.py .........................
green/test/test_runner.py ..........
green/test/test_subprocess.py ......
green/test/test_version.py ...

Since 97569e1, setUpClass is called once per test

Since 97569e1, setUpClass, tearDownClass, setUpModule, and tearDownModule get called once per unit test. This is due to running each suite's component individually, as the subprocess mode does.

According to the unittest documentation, the order and number of times setUpClass and tearDownClass are called isn't, strictly speaking, defined. It just so happens that, by convention, in unittest's own single-threaded mode they only get called once, since the tests run in order. No guarantee is made about what happens when tests are run in a randomised order or in parallel.

Personally, I use those methods to do "expensive" set up and tear down where I know I can maintain the invariant between tests in setUp and tearDown. As such, the current behaviour makes my tests run a lot slower.

I would submit that the correct behaviour is to detect where tests are setting those methods and only divide up suites to the level where any of those methods are defined. So, for instance, if setUpClass was defined on a TestCase, then the entire TestCase should run in serial, but could be parallelized amongst other tests.
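
A hypothetical sketch of that detection:

import unittest

def defines_class_fixture(case_class):
    # Hypothetical check: does this TestCase override setUpClass or
    # tearDownClass, rather than inheriting unittest's default no-ops?
    return any(
        getattr(case_class, name).__func__
        is not getattr(unittest.TestCase, name).__func__
        for name in ('setUpClass', 'tearDownClass')
    )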

Here's some output to show the problem and a testcase:

from testtools import TestCase

import sys


class TestSetUpClass(TestCase):

    """Test set up class."""

    @classmethod
    def setUpClass(cls):
        sys.stderr.write("setUpClass\n")

    def test_one(self):
        """One"""
        sys.stderr.write("method one on two called\n")
        pass

    def test_two(self):
        """Two"""
        sys.stderr.write("method two on two called\n")
        pass

    def test_three(self):
        """Three"""
        sys.stderr.write("method three on two called\n")
        pass


Green 1.11.0, Coverage 3.7.1, Python 2.7.9

test.test_two.test_set_up_class
  TestSetUpClass
    OnesetUpClass
method one on two called
setUpClass
method three on two called
.   One
.   Three
    TwosetUpClass
method two on two called
.   Two

Ran 3 tests in 0.108s

OK (passes=3)

Capture stdout (and stderr?)

Right now, any output to stdout/stderr is mixed into the console output at the current cursor spot, messing with our nice output. Let's handle it in a better way.

hjwp's feedback:

py.test captures all stdout/stderr by default, and then only shows you the output from tests that failed. I think nose does something similar... it's definitely something that's sometimes useful, but sometimes not, so I would pick a sensible default but allow the user to override it and get back to "normal" stdout behaviour...

Support external concurrent requirements

Many test environments require extra tooling for concurrently accessing external requirements. We need support for this in subprocess mode. For example, each subprocess may need to set up its own database for tests to access.
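
A hypothetical shape for this, sketched with the standard library's multiprocessing pool initializer (not an existing green option):

import multiprocessing
import sqlite3
import tempfile

def init_worker():
    # Hypothetical per-process setup: give each worker its own scratch database.
    global TEST_DB
    db_file = tempfile.NamedTemporaryFile(suffix='.sqlite3', delete=False)
    TEST_DB = sqlite3.connect(db_file.name)

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2, initializer=init_worker)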

subunit support

I've just started playing with Green and it works really well so far. However, I'd like to use it in automated tools, and I've had good success with subunit for this (reporting up to buildbot, for example).

Supporting subunit via a plugin or directly would be great!

Alternatively, some sort of structured output option (junit XML? json?) would be nice instead (or as well).

Multiple Targets

It would be very helpful if Green supported multiple test targets.

Some example syntax:

  • green test.tests1 test.tests2
  • green -v --run-coverage test.tests1 test.tests2

Error in green.runner.N/A.poolRunner

Firstly, thanks for creating this project. I quite enjoy being able to add a little bit of colour to the command line!

Something changed in green between version 1.9.4 and 1.10.0. Before, all my tests ran as expected. Output from v1.9.4:

test_colour
  Test_Colour
.   test_3hex_colour
.   test_6hex_colour
.   test_colour_bad_Tuple_value
.   test_colour_bad_hex_chars
.   test_colour_bad_hex_string_length
.   test_colour_bad_list_legnth
.   test_colour_bad_list_value
.   test_colour_bad_tuple_legnth
.   test_colour_blue
.   test_colour_green
.   test_colour_hex
.   test_colour_hex_black
.   test_colour_list
.   test_colour_not_hex_string
.   test_colour_red
.   test_colour_repr
.   test_colour_rgb
.   test_colour_rgb_nomralized_black
.   test_colour_rgb_normalized_white
.   test_colour_str
.   test_colour_tuple
x   test_colour_vs_color
.   test_default_colour
  Test_Contrast
.   test_contrast_black_black
.   test_contrast_from_colour
.   test_contrast_white_black
.   test_contrast_white_white
  Test_Luminance
.   test_luminance_as_colour_property
.   test_luminance_black
.   test_luminance_colour_provided
.   test_luminance_white_hex
.   test_luminance_white_tuple
test_palette
  Test_Palette
.   test_palette_length
.   test_palette_repr
.   test_palette_str
test_setup
  Test_Setup
.   test_version
.   test_we_live

Ran 37 tests in 0.009s

OK (expected_failures=1, passes=36)

However, with v1.10.0, none of the tests pass, and it seems to be an issue with green. Here is the output:

test_colour
  Test_Colour
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
  Test_Contrast
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
  Test_Luminance
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
E   poolRunner
test_palette
  Test_Palette
E   poolRunner
E   poolRunner
E   poolRunner
test_setup
  Test_Setup
E   poolRunner
E   poolRunner

Error in green.runner.N/A.poolRunner
  File "C:\Python34\lib\site-packages\green\subprocess.py", line 236, in poolRunner
    test.run(result)
AttributeError: 'NoneType' object has no attribute 'run'

(The error block above repeats identically for each of the 37 tests.)

Ran 37 tests in 0.420s

FAILED (errors=37)

I'm running Python 3.4.2 on Windows 7. The tests are from colourettu.
