
behave's Introduction


behave is behavior-driven development, Python style.

Behavior-driven development (or BDD) is an agile software development technique that encourages collaboration between developers, QA and non-technical or business participants in a software project.

behave uses tests written in a natural language style, backed up by Python code.

First, install *behave*.
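
For example (using pip):

$ pip install behave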

Now make a directory called "features/". In that directory create a file called "example.feature" containing:

# -- FILE: features/example.feature
Feature: Showing off behave

  Scenario: Run a simple test
    Given we have behave installed
     When we implement 5 tests
     Then behave will test them for us!

Make a new directory called "features/steps/". In that directory create a file called "example_steps.py" containing:

# -- FILE: features/steps/example_steps.py
from behave import given, when, then, step

@given('we have behave installed')
def step_impl(context):
    pass

@when('we implement {number:d} tests')
def step_impl(context, number):  # -- NOTE: number is converted into integer
    assert number > 1 or number == 0
    context.tests_count = number

@then('behave will test them for us!')
def step_impl(context):
    assert context.failed is False
    assert context.tests_count >= 0

Run behave:

$ behave
Feature: Showing off behave # features/example.feature:2

  Scenario: Run a simple test          # features/example.feature:4
    Given we have behave installed     # features/steps/example_steps.py:4
    When we implement 5 tests          # features/steps/example_steps.py:8
    Then behave will test them for us! # features/steps/example_steps.py:13

1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
3 steps passed, 0 failed, 0 skipped, 0 undefined

Now, continue reading to learn how to get the most out of behave. To get started, we recommend the tutorial and then the feature testing language and api references.

behave's People

Contributors

aconti-ns1, berdroid, bittner, caphrim007, charleswhchan, florentx, gitter-badger, jamesroutley, jeamland, jenisys, jgentil, johbo, katherinesun, kingbuzzman, leszekhanusz, lrowe, mixxorz, msabramo, r1chardj0n3s, renovate-bot, rlgomes, rrueth, smadness, spitglued, sutyrin, teapow, tomekwszelaki, unklhe, vrutkovs, xbx

behave's Issues

Include step in JUnit/XML <failure> tag

The XML test results are great for integration with Hudson but it's frustrating that the failure messages don't include the step. The message attribute is blank and the content is just the assertion exception message, so all you have to go on is the feature and scenario names and the captured stdout.

For example:

:
<testcase class="example" name="Alert on boundary case just inside threshold" time="1.028">
<failure message="None" type="NoneType">Assertion Failed: intensity is 50
Captured stdout:
:
</failure>
</testcase>
:

It would be more logical to populate the XML report as follows:

<testsuite ...>
    <testcase ...>
        <failure message="{exception message}" type="{exception type}">
            {scenario steps with passed/failed/skipped result on each one}
            {stack trace for exception related to failed step}
        </failure>
        <system-out>
            {captured sysout if capture option is active}
        </system-out>
        <system-err>
            {captured syserr if capture option is active}
            {captured logging if logcapture option is active}
        </system-err>
    </testcase>
</testsuite>
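
For illustration, a minimal sketch of how a reporter could assemble such a <failure> element with the standard xml.etree.ElementTree module (scenario_steps, exc, and traceback_text are hypothetical inputs here, not behave's model API):

# -- SKETCH: build a <failure> element per the proposed layout.
from xml.etree import ElementTree as ET

def make_failure_element(scenario_steps, exc, traceback_text):
    failure = ET.Element("failure", message=str(exc),
                         type=exc.__class__.__name__)
    # One line per scenario step with its result, then the stack trace.
    lines = ["%s %s ... %s" % (step.keyword, step.name, step.status)
             for step in scenario_steps]
    lines.append(traceback_text)
    failure.text = "\n".join(lines)
    return failure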

Take @step string from implementation docstring

To allow steps to be automatically documented (e.g. with Sphinx) it would be useful to have an option to take the string passed into the @given/when/then/step decorators from the first line of the function's docstring.

For example:

@then(docstring=True)
def generateReport(context, date):
    '''generate report for {date}
    Generates a FinancialFudge report for the given date
    and checks that the totals are correct.
    '''
    # do stuff
    assert True

So here the step would be "generate report for {date}" and that's what will also appear in the documentation generated by epydoc or Sphinx. Otherwise those document generators only show the function name and not the step string.
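
One way such an option could work (a sketch only; "docstring=True" is the proposed, not an existing, behave API) is to fall back to the first line of the docstring when no pattern string is given:

# -- SKETCH: hypothetical wrapper; falls back to the docstring's
# first line as the step pattern. Not part of behave.
from behave import then as behave_then

def then(pattern=None, docstring=False):
    def decorator(func):
        text = pattern
        if docstring and func.__doc__:
            text = func.__doc__.strip().splitlines()[0]
        return behave_then(text)(func)
    return decorator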

Allow import of decorators

I anticipate it might be helpful to allow "from behave import *" in step code to get the decorators, which would make linting tools happier.

The globals-stuffing can still happen for those who don't want to write the import above; the import simply exposes the same objects as the globals.

Of course this means a slightly different approach to creating the decorators...
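
Conceptually, both access paths can coexist: define the decorators as ordinary attributes of the behave module, and keep injecting the same objects into each step module's globals. A sketch of the idea (load_step_module is a hypothetical loader, not behave's real code):

# -- SKETCH: the same decorator objects serve both access paths.
import behave

def load_step_module(module):
    # Globals-stuffing: inject the very objects that
    # "from behave import given, when, then, step" would provide.
    for name in ("given", "when", "then", "step"):
        module.__dict__.setdefault(name, getattr(behave, name))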

Fatal error when using --format=json

Use of "behave --format=json" fails with reference to undefined method:

Traceback (most recent call last):
  File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 116, in <module>
    main()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 90, in main
    failed = runner.run()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 419, in run
    self.run_with_paths()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 444, in run_with_paths
    failed = feature.run(self)
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\model.py", line 238, in run
    failed = scenario.run(runner)
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\model.py", line 431, in run
    runner.formatter.step(step)
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\formatter\json.py", line 50, in step
    element['steps'].append(step.to_dict())
AttributeError: 'dict' object has no attribute 'to_dict'
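
Judging from the traceback, "step" already arrives as a plain dict in the formatter at that point. A minimal defensive sketch (not a verified patch) for behave/formatter/json.py:

# -- SKETCH: guard for behave/formatter/json.py, step() method.
# Assumes "step" may already be a serialized dict, as the traceback suggests.
data = step if isinstance(step, dict) else step.to_dict()
element['steps'].append(data)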

behave 1.1.0: Install fails under Windows

The problem seems to be related to the "behave.egg-info/SOURCES.txt" file that contains a line with a "/" (slash := root directory).

WORKAROUND:
Remove the "behave.egg-info/" directory and run "python setup.py install".
The "*.egg-info" directory is regenerated during this step.

NOTE: In addition:
The "behave" script needs adaptation to work under Windows.
Currently, the user needs to build it by hand (under Windows).

Failed to parse test result file with Bamboo

Hi guys,

We are running behave tests in Bamboo, and when parsing test results Bamboo throws the following error: "Failed to parse test result file TESTS-abc.xml"

I'm using the JUnit Parser Task and behave version 1.1.0.

Has anyone managed to parse test results with Bamboo?

Thanks,

Dan

Broken terminals can't run behave

A failsafe terminal width may be provided:

In pretty_formatter.py

         self.display_width = get_terminal_size()[0]

Could be

         self.display_width = get_terminal_size()[0] or 80
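
On modern Python, shutil.get_terminal_size() provides the same failsafe directly (a sketch; behave 1.1.0 itself targets Python 2.7, where this stdlib helper is not available):

# -- SKETCH: a failsafe terminal width; falls back to 80 columns
# when the terminal cannot report its size.
import shutil

def safe_display_width():
    return shutil.get_terminal_size(fallback=(80, 24)).columns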

cli.py needs a refactor and possibly a rename

Most of the runner logic is in cli.py at the moment and should probably be refactored out so as to be a bit easier to follow.

Once this is done, cli.py could probably be renamed main.py

Conflict with @step decorator

Python doesn't seem to like the following code:

from behave import *

@step("generate report for {date}")
def step(context, date):
    # do stuff
    assert True

@step("enter credentials for login")
def step(context):
    # do stuff
    assert True

I get this error:

...
File "C:\Users\...\steps\blob.py", line 13, in <module>
    @step("enter credentials for login")
TypeError: step() takes exactly 2 arguments (1 given)

Looks like Python is confusing the various functions called "step": "from behave import *" brings in the "step" decorator, but the first "def step(context, date)" rebinds that name to the user function. The second "@step(...)" call therefore invokes the first user-defined step() with a single argument (the pattern string), which raises the TypeError above.
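
A simple workaround is to give the implementation functions names that do not shadow the decorator; behave registers steps by the decorator's pattern string (as the tutorial's repeated "step_impl" functions show), so the function name itself does not matter. A minimal sketch:

# -- SKETCH: avoid shadowing the imported "step" decorator.
from behave import step

@step("generate report for {date}")
def step_generate_report(context, date):
    # do stuff
    assert True

@step("enter credentials for login")
def step_enter_credentials(context):
    # do stuff
    assert True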

Not sure if this is an issue but there doesn't seem to be a forum to ask questions about Behave.

"behave --format help" raises an error

VERSION: 1.1.0
CONTEXT: command-line processing

"behave --format help" raises an error because it tries to access the non-existing help formatter.
Therefore, it does not behave as described in the commands help text.

OOPS: Mmh, untested command-line behaviour ;-)

KeyError sometimes when using `behave --stop`

I'm not exactly sure what about my feature files is doing this. It might be that I have more than one feature with unimplemented steps. All I know is that at some point in writing my feature files, when I run behave --stop, I begin to get the following traceback:

Traceback (most recent call last):
  File "/Users/deyk/code/py/shiny/bin/behave", line 5, in <module>
    main()
  File "[...]/behave/__main__.py", line 78, in main
    failed = runner.run()
  File "[...]/behave/runner.py", line 386, in run
    self.run_with_paths()
  File "[...]/behave/runner.py", line 419, in run_with_paths
    self.calculate_summaries()
  File "[...]/behave/runner.py", line 446, in calculate_summaries
    self.feature_summary[feature.status or 'skipped'] += 1
KeyError: 'untested'

Thanks otherwise for this great little bit of testing software. I've often been envious of my rubyist friends and their cucumbers.
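
The traceback suggests the summary dict simply has no bucket for the "untested" status. A defensive sketch of the idea (not behave's actual code) using collections.defaultdict, which tolerates any status value:

# -- SKETCH: a summary counter that tolerates unexpected status values.
from collections import defaultdict

feature_summary = defaultdict(int)
feature_summary["untested"] += 1  # no KeyError, unlike a fixed-key dict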

Skipped scenarios are counted as passed

When running behave with --tag, the excluded/skipped scenarios are counted as passed in the test summary (and junit report when --junit is enabled).

For example, my feature file has 1 scenario tagged as "done", and 7 scenarios tagged as "unimplemented"

When I run:

behave --junit --no-color --no-capture-stderr --tags @done some.feature

The test result (in text and junit report) shows:

0 features passed, 1 failed, 0 skipped
7 scenarios passed, 1 failed, 0 skipped

I would expect to see:

0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 7 skipped

Strange behaviour when no steps directory is present / path specified

I get the following issue when trying to run behave under Windows. It seems to go up into the parent directory for some reason, rather than searching subdirectories. (Adding a steps directory in the current folder stops it.)

C:\Users\Anthony\Desktop\openfriendly\openfriendly>behave-run.py -v .
['./', 'C:\\Users\\Anthony', 'C:\\Users\\Anthony\\AppData\\Roaming']
Using defaults:
 logging_format %(levelname)s:%(name)s:%(message)s
        dry_run False
          color True
 stdout_capture True
    log_capture True
        summary True
   show_skipped True
  show_snippets True
          junit False
    show_source True
Supplied path: "."
Trying base directory: 'C:\Users\Anthony\Desktop\openfriendly\openfriendly'
Trying base directory: 'C:\Users\Anthony\Desktop\openfriendly'
Trying base directory: 'C:\Users\Anthony\Desktop'
Trying base directory: 'C:\Users\Anthony'
Trying base directory: 'C:\Users'
Trying base directory: 'C:\'
Traceback (most recent call last):
  File "C:\Python27\Scripts\behave-run.py", line 3, in <module>
    main()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 90, in main
    failed = runner.run()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 418, in run
    self.setup_paths()
  File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 346, in setup_paths
    for dirpath, dirnames, filenames in os.walk(base_dir):
  File "C:\Python27\lib\os.py", line 294, in walk
    for x in walk(new_path, topdown, onerror, followlinks):
  File "C:\Python27\lib\os.py", line 294, in walk
    for x in walk(new_path, topdown, onerror, followlinks):
  File "C:\Python27\lib\os.py", line 294, in walk
    for x in walk(new_path, topdown, onerror, followlinks):
  File "C:\Python27\lib\os.py", line 294, in walk
    for x in walk(new_path, topdown, onerror, followlinks):
  File "C:\Python27\lib\os.py", line 294, in walk
    for x in walk(new_path, topdown, onerror, followlinks):
  File "C:\Python27\lib\os.py", line 284, in walk
    if isdir(join(top, name)):
  File "C:\Python27\lib\genericpath.py", line 41, in isdir
    st = os.stat(s)
KeyboardInterrupt

Test summary reports incorrect passed/failed scenarios and steps when Scenario Outline is used

When a feature contains a Scenario Outline with 3 steps and 2 examples, where one example passed and the other failed, the test summary looks like:

0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
1 step passed, 1 failed, 0 skipped, 0 undefined
Took 0m0.0s

I would expect it to be:

0 features passed, 1 failed, 0 skipped
1 scenario passed, 1 failed, 0 skipped
5 steps passed, 1 failed, 0 skipped, 0 undefined

Parser removes empty lines in multiline text argument

Before multiline text is parsed, the empty lines are already removed in the parse() method.
Therefore, the step implementation does not receive the text as specified.

DESIRED CHANGE:
Avoid removing empty lines in multiline text or specify a test that states that this is desired behavior.

CURRENT STATE:

Feature: ...
  Scenario: Parser strips empty lines from multiline text (currently)
    Given I have a multiline text argument with:
        """
        Line 1 (followed by empty line).

        Line 3.
        """
    Then I receive the following "context.text" value with:
        """
        Line 1 (followed by empty line).
        Line 3.
        """

Formatter processing chain is broken

VERSION: 1.1.0 .. repository HEAD

Formatter chaining, as currently implemented, is broken or deeply flawed.
If more than one formatter is present, each formatter is passed as the stream argument to the constructor of the next formatter.
Because the Formatter base class lacks stream-like adapter methods (write(), flush(), ...),
this leads to weird runtime errors.

HOW TO REPEAT:

  1. Create a "behave.ini" configuration file with "format=pretty"
  2. Run behave with another formatter, for example "--format=plain"
$ behave -f plain 
Traceback (most recent call last):
  File "/Users/jens/se/INSPECT/behave_master/bin/behave", line 5, in <module>
    main()
  File "/Library/Python/2.7/site-packages/behave/__main__.py", line 90, in main
    failed = runner.run()
  ...
  File "/Library/Python/2.7/site-packages/behave/formatter/plain.py", line 13, in feature
    self.stream.write(u'%s: %s\n' % (feature.keyword, feature.name))
AttributeError: 'PrettyFormatter' object has no attribute 'write'

Similar problems occur in other cases/with other formatter combinations.
Therefore, it is best to ensure that only one formatter is used.

QUICKFIX:

# file:behave/__main__.py
def main():
    ...
    # -- SANITY: Use at most one formatter, more cause various problems.
    # PROBLEM DESCRIPTION:
    #   1. APPEND MODE: configfile.format + --format
    #   2. Daisy chaining of formatters does not work
    #     => behave.formatter.formatters.get_formatter()
    #     => Stream methods, stream.write(), stream.flush are missing
    #        in Formatter interface
    config.format = config.format[-1:]
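
If daisy chaining were kept, the Formatter base class would at least need the stream adapter methods mentioned above. A sketch of that idea (hypothetical; not behave's implementation):

# -- SKETCH: forward stream-like calls so a formatter can stand in
# for a stream when formatters are chained.
class StreamAdapterMixin(object):
    def write(self, data):
        self.stream.write(data)

    def flush(self):
        self.stream.flush()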

Refactor runner to use models more

The runner should defer the running of things to the models code more (Feature.run(), Background.run(), etc) to make things neater and also more testable.

escape sequences don't work on terminal output

When running Behave in Gnome Terminal (the standard terminal in Ubuntu), or in xterm, the escape sequence for "up" doesn't display right. Instead, it looks like:

Scenario: run a simple test # example.feature:3
Given we have behave installed # steps/example.py:3
�[#1A Given we have behave installed # steps/example.py:3
When we implement a test # steps/example.py:7
�[#1A When we implement a test # steps/example.py:7
Then behave will test it for us! # steps/example.py:11
�[#1A Then behave will test it for us! # steps/example.py:11

(Before the "[" there is one of those numbered/boxy symbols indicating a control character, but that doesn't copy/paste properly, of course.)

In "screen" or rxvt, the terminal output is right (the "grey" line is replaced by the "green" line when the step passes). So obviously this is an issue with differences in escape sequences between terminals.

I've looked around to try to figure out how to redefine this "up" command (it's cuu or cuu1) on my terminal, but I can't figure it out. And of course I don't really want to switch terminals just to use behave. Any ideas how to address this?
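
One way to avoid hard-coding a particular escape sequence is to ask terminfo for the terminal's own cursor-up capability. A minimal sketch with the standard curses module (POSIX only):

# -- SKETCH: query terminfo for the cursor-up capability (cuu1)
# instead of assuming a fixed escape sequence.
import curses

curses.setupterm()                   # reads $TERM / the terminfo database
cursor_up = curses.tigetstr("cuu1")  # bytes, or None if unsupported
if cursor_up:
    print("cursor-up sequence: %r" % cursor_up)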

Nice to have snippets for all unimplemented steps, taking the tags filtering into account

Currently, behave only prints a snippet for the first unimplemented step it encounters in each scenario.
It would be good to print snippets for all unimplemented steps of the scenarios to be executed, taking the features filter, scenarios filter, tags filter, etc. into account.

And the number of undefined steps in the test summary should be updated accordingly.

"behave --format=plain --tags @one" seems to execute right scenario w/ wrong steps

When tags/tag-expressions and "--format=plain" are used together on the command line,
the correct scenario is selected for execution but the wrong steps seem to be used.

FEATURE FILE:

# FILE: feature/tutorial11_tags.feature
@wip
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2)

    In order to increase the ninja survival rate,
    As a ninja commander
    I want my ninjas to decide whether to take on an opponent
    based on their skill levels.

    @ninja.any
    Scenario: Weaker opponent
        Given the ninja has a third level black-belt
        When attacked by a samurai
        Then the ninja should engage the opponent

    @ninja.chuck
    Scenario: Stronger opponent
        Given the ninja has a third level black-belt
        When attacked by Chuck Norris
        Then the ninja should run for his life

When I execute the feature file from above w/ the following command-line everything works fine:

$ behave -c --tags @ninja.chuck ../features/tutorial11_tags.feature 
@wip
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2) # ../features/tutorial11_tags.feature:2
  In order to increase the ninja survival rate,
  As a ninja commander
  I want my ninjas to decide whether to take on an opponent
  based on their skill levels.

  @ninja.any
  Scenario: Weaker opponent                      # ../features/tutorial11_tags.feature:10
    Given the ninja has a third level black-belt
    When attacked by a samurai
    Then the ninja should engage the opponent

  @ninja.chuck
  Scenario: Stronger opponent                    # ../features/tutorial11_tags.feature:16
    Given the ninja has a third level black-belt # ../features/steps/step_tutorial02.py:69
    When attacked by Chuck Norris                # ../features/steps/step_tutorial02.py:77
    Then the ninja should run for his life       # ../features/steps/step_tutorial02.py:81


SUMMARY:
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 1 skipped
3 steps passed, 0 failed, 3 skipped, 0 undefined

But when I execute the feature file from above with --format=plain, the second scenario seems to be executed with the steps from the first one:

$ behave --format=plain --tags @ninja.chuck ../features/tutorial11_tags.feature 
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2)
   Scenario: Weaker opponent
   Scenario: Stronger opponent
       Given the ninja has a third level black-belt ... passed
        When attacked by a samurai ... passed
        Then the ninja should engage the opponent ... passed


SUMMARY:
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 1 skipped
3 steps passed, 0 failed, 3 skipped, 0 undefined
Took 0m0.0s

behave --version runs tests/features

VERSION: 1.1.0
CONTEXT: command-line processing

"behave --version" should show the version information.
Instead it runs the features.
HINT: If argparse was used, you need to provide a specific action (I think).
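
With argparse this is the standard one-line idiom; the built-in "version" action prints the version string and exits before anything else runs (a minimal sketch):

# -- SKETCH: standard argparse version action; exits immediately
# when --version is given, so no features are run.
import argparse

parser = argparse.ArgumentParser(prog="behave")
parser.add_argument("--version", action="version", version="behave 1.1.0")
parser.parse_args()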

Install fails on Windows

Hello,
I tried installing on Windows, both using

$ pip install behave

as well as downloading it and running

$ python setup.py install

I get the following error
"""
writing top-level names to behave.egg-info\top_level.txtt.win32\egg\test
writing dependency_links to behave.egg-info\dependency_links.txt\test
reading manifest file 'behave.egg-info\SOURCES.txt'.win32\egg\test
Traceback (most recent call last):.py -> build\bdist.win32\egg\test
File "setup.py", line 41, in > build\bdist.win32\egg\test
"License :: OSI Approved :: BSD License",-> build\bdist.win32\egg\test
File "X:\sdk\Python27\lib\distutils\core.py", line 152, in setup\egg\test
dist.run_commands()init.py -> build\bdist.win32\egg\test
File "X:\sdk\Python27\lib\distutils\dist.py", line 953, in run_commandstion.py
self.run_command(cmd)
File "X:\sdk\Python27\lib\distutils\dist.py", line 972, in run_command ansi_es
cmd_obj.run()
.
[deleted]
.
File "X:\sdk\Python27\lib\site-packages\setuptools\command\egg_info.py", line 339, in add_defaults
self.read_manifest()
File "X:\sdk\Python27\lib\distutils\command\sdist.py", line 385, in read_manifest
self.filelist.append(line)
File "X:\sdk\Python27\lib\site-packages\setuptools\command\egg_info.py", line 278, in append
path = convert_path(item)
File "X:\sdk\Python27\lib\distutils\util.py", line 199, in convert_path
raise ValueError, "path '%s' cannot be absolute" % pathname
ValueError: path '/' cannot be absolute
"""

Anyways, it seems that the file behave.egg-info/SOURCES.txt has an
entry of "/" in it.

I deleted this line and it seems to install ok now.

and a test of the following provided no complaints

$ python -c "import behave"

Yay. :-)

Not sure what the "/" actually does, but it breaks the install on Windows.
(I used both the DOS cmd prompt and Cygwin bash.)

Just saying...

Support numbering of Features/Scenarios

Wikipedia has an example of a feature file:

Scenario 1: Refunded items should be returned to stock
Given a customer previously bought a black sweater from me
and I currently have three black sweaters left in stock
when he returns the sweater for a refund
then I should have four black sweaters in stock.

Scenario 2: Replaced items should be returned to stock
Given that a customer buys a blue garment
and I have two blue garments in stock
and three black garments in stock.
When he returns the garment for a replacement in black,
then I should have three blue garments in stock
and two black garments in stock.

Note the numbering of Scenarios. That's useful for referencing.

As an aside, also note the capitalisation of the keywords, allowing the scenario text to be a slightly more correct form of English.

Parser removes shell-like comments in multiline text before multiline is parsed

The parser removes "comment-lines" in multiline text arguments before multiline text is parsed.
DESIRED: Multiline text should be just passed through (whitespace stripping is OK)
SEE ALSO: behave.parser.Parser.action() method

EXAMPLE:

# file:features/behave_parser_strips_comments.feature
Feature: One

  Scenario: SAD, Comments are stripped in multiline args
    Given I have the following multiline argument with:
       """
       Hello here
       # -- COMMENT LINE
       And here it ends.
       """
    Then I get the following multiline text within a step definition:
       """
       Hello here
       And here it ends.
       """

NOTES:
behave.model.Step.run() also has a problem in these cases.
If only comment lines exist, the "context.text" attribute is not set.
Better use "if self.text is not None: ..." instead of "if self.text: context.text = self.text".

behave returns 0 (SUCCESS) even in case of test failures

The behave.runner.Runner.run() method implementation has a bug
that prevents behave's main() from returning non-zero result codes in case of test failures.
Currently, 0 (SUCCESS) is always returned because runner.run() has no return statement,
so failed=None, which disables the result logic in main().

It is remarkable that nobody has stumbled over this problem yet,
because failures are not detectable in build scripts, etc.

# file:behave/runner.py:
class Runner(object):
    ...
    def run(self):
        with self.path_manager:
            self.setup_paths()
            self.run_with_paths()
            # SHOULD-BE: return self.run_with_paths()

NOTE:
The behave main() function has another problem related to its "sys.exit(str(exception))" usage.
sys.exit() should be given a numeric exit code here; a string argument is printed to stderr and the process exits with status 1.
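
A sketch of the intended result handling, following the SHOULD-BE note above (simplified; not the actual behave code):

# -- SKETCH: file behave/runner.py, with the missing return added.
class Runner(object):
    ...
    def run(self):
        with self.path_manager:
            self.setup_paths()
            return self.run_with_paths()

# -- SKETCH: main() turns the result into a process exit code.
import sys

def main():
    failed = Runner(config).run()  # Runner/config assumed to be in scope
    sys.exit(1 if failed else 0)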
