behave / behave
BDD, Python style.
Home Page: https://behave.readthedocs.io/en/latest/
License: Other
Hi guys,
We are running behave tests in Bamboo and when parsing test results bamboo throws the following error: "Failed to parse test result file TESTS-abc.xml"
I'm using the Junit Parser Task and behave version 1.1.0.
Did anyone manage to parse test results with Bamboo?
Thanks,
Dan
VERSION: 1.1.0
CONTEXT: command-line processing
"behave --version" should show the version information.
Instead it runs the features.
HINT: If argparse was used, you need to provide a specific action (I think).
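Following up on the argparse hint, a minimal sketch of the builtin "version" action, which prints the version text and exits instead of falling through to running the features (the version string here is just an illustration):

```python
import argparse

# Sketch of the argparse hint: the builtin "version" action prints
# the version and exits, so features are never run for "--version".
parser = argparse.ArgumentParser(prog="behave")
parser.add_argument("--version", action="version", version="behave 1.1.0")
```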
A failsafe terminal width may be provided:
In pretty_formatter.py
self.display_width = get_terminal_size()[0]
Could be
self.display_width = get_terminal_size()[0] or 80
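On Python 3.3+ the same failsafe is available without the "or 80" guard, since shutil.get_terminal_size() accepts an explicit fallback:

```python
import shutil

# shutil.get_terminal_size() (Python 3.3+) uses the given fallback
# when the real terminal size cannot be determined (e.g. no tty):
display_width = shutil.get_terminal_size(fallback=(80, 24)).columns
```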
Need to be able to turn off stdout and logging capture.
If a feature is tagged @slow and there's a before_feature in the environment then that should not be invoked if I skip slow tests using the command line "behave --tags ~slow"
The runner should defer the running of things to the models code more (Feature.run(), Background.run(), etc) to make things neater and also more testable.
When we have a table or text arg set on context, it should warn if those attributes are already set or if the step definition tries to set them itself.
"behave --junit --junit-directory=xxx/test_results" fails if more than one directory level must be created.
NOTE: Probably os.makedirs(), which allows creating multiple levels at once, is not being used.
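For reference, os.makedirs() creates all intermediate directories in one call, which is what a nested --junit-directory needs:

```python
import os
import tempfile

# os.makedirs() creates every missing intermediate directory, unlike
# os.mkdir(), which fails when the parent does not exist yet:
base = tempfile.mkdtemp()
target = os.path.join(base, "xxx", "test_results")
os.makedirs(target)  # creates both "xxx" and "test_results"
```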
I'm not exactly sure what about my feature files is doing this. It might be that I have more than one feature with unimplemented steps. All I know is that at some point in writing my feature files, when I run behave --stop, I begin to get the following traceback:
Traceback (most recent call last):
File "/Users/deyk/code/py/shiny/bin/behave", line 5, in <module>
main()
File "[...]/behave/__main__.py", line 78, in main
failed = runner.run()
File "[...]/behave/runner.py", line 386, in run
self.run_with_paths()
File "[...]/behave/runner.py", line 419, in run_with_paths
self.calculate_summaries()
File "[...]/behave/runner.py", line 446, in calculate_summaries
self.feature_summary[feature.status or 'skipped'] += 1
KeyError: 'untested'
Thanks otherwise for this great little bit of testing software. I've often been envious of my rubyist friends and their cucumbers.
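For reference, a sketch of how the summary counting above could tolerate the unexpected 'untested' status (a defaultdict sketch, not behave's actual fix):

```python
from collections import defaultdict

# Sketch: a defaultdict(int) never raises KeyError for an unseen
# status such as 'untested' (which can appear after "behave --stop"):
feature_summary = defaultdict(int)
for status in ["passed", "untested", None]:
    feature_summary[status or "skipped"] += 1
```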
Use of "behave --format=json" fails with reference to undefined method:
Traceback (most recent call last):
File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "C:\Python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 116, in <module>
main()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 90, in main
failed = runner.run()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 419, in run
self.run_with_paths()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 444, in run_with_paths
failed = feature.run(self)
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\model.py", line 238, in run
failed = scenario.run(runner)
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\model.py", line 431, in run
runner.formatter.step(step)
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\formatter\json.py", line 50, in step
element['steps'].append(step.to_dict())
AttributeError: 'dict' object has no attribute 'to_dict'
Don't even run the after_* stuff, just quit (alternatively, allow all the after_* stuff to run as well, but there needs to be an option not to).
VERSION: 1.1.0 .. repository HEAD
Formatter chaining, as currently implemented, is broken or deeply flawed.
If more than one formatter is present, a formatter is passed as the stream argument to the constructor of the next formatter.
Because the Formatter base class lacks stream-like adapter methods (write(), flush(), ...),
this leads to weird runtime errors.
HOW TO REPEAT:
$ behave -f plain
Traceback (most recent call last):
File "/Users/jens/se/INSPECT/behave_master/bin/behave", line 5, in <module>
main()
File "/Library/Python/2.7/site-packages/behave/__main__.py", line 90, in main
failed = runner.run()
...
File "/Library/Python/2.7/site-packages/behave/formatter/plain.py", line 13, in feature
self.stream.write(u'%s: %s\n' % (feature.keyword, feature.name))
AttributeError: 'PrettyFormatter' object has no attribute 'write'
Similar problems occur in other cases/with other formatter combinations.
Therefore, it is best to ensure that only one formatter is used.
QUICKFIX:
# file:behave/__main__.py
def main():
...
# -- SANITY: Use at most one formatter, more cause various problems.
# PROBLEM DESCRIPTION:
# 1. APPEND MODE: configfile.format + --format
# 2. Daisy chaining of formatters does not work
# => behave.formatter.formatters.get_formatter()
# => Stream methods, stream.write(), stream.flush are missing
# in Formatter interface
config.format = config.format[-1:]
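An alternative to the quickfix would be to add the missing stream adapter methods to the formatter base class, so that daisy-chaining actually works. A minimal self-contained sketch (ChainableFormatter is hypothetical, not behave's actual class):

```python
import io

class ChainableFormatter(object):
    """Sketch: a formatter base class with stream adapter methods
    (write/flush), so an instance can itself be passed as the
    'stream' of the next formatter in the chain."""
    def __init__(self, stream):
        self.stream = stream

    # -- Stream adapter methods missing from behave's Formatter:
    def write(self, data):
        self.stream.write(data)

    def flush(self):
        self.stream.flush()

# Daisy-chaining now works: the second formatter wraps the first.
sink = io.StringIO()
first = ChainableFormatter(sink)
second = ChainableFormatter(first)
second.write(u"Feature: Example\n")
```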
I anticipate it might be helpful to allow "from behave import *" in step code to get the decorators to make linting tools happier.
The globals-stuffing can still happen for those who don't want to write the above import - the import just gets the same objects as the globals.
Of course this means a slightly different approach to creating the decorators...
The XML test results are great for integration with Hudson but it's frustrating that the failure messages don't include the step. The message attribute is blank and the content is just the assertion exception message, so all you have to go on is the feature and scenario names and the captured stdout.
For example:
:
<testcase class="example" name="Alert on boundary case just inside threshold" time="1.028">
<failure message="None" type="NoneType">Assertion Failed: intensity is 50
Captured stdout:
:
</failure>
</testcase>
:
It would be more logical to populate the XML report as follows:
<testsuite ...>
<testcase ...>
<failure message="{exception message}" type="{exception type}">
{scenario steps with passed/failed/skipped result on each one}
{stack trace for exception related to failed step}
</failure>
<system-out>
{captured sysout if capture option is active}
</system-out>
<system-err>
{captured syserr if capture option is active}
{captured logging if logcapture option is active}
</system-err>
</testcase>
</testsuite>
Need to nuke the contents of configuration.py so we know what's actually active there.
For that matter, is there a reason Row shouldn't just inherit from dict?
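A sketch of what a dict-based Row might look like (hypothetical constructor signature, not behave's current class):

```python
class Row(dict):
    """Sketch: a Row that inherits from dict, keeping the column
    order available in 'headings' (hypothetical design)."""
    def __init__(self, headings, cells):
        super(Row, self).__init__(zip(headings, cells))
        self.headings = headings

row = Row(["name", "department"], ["Alice", "QA"])
```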
When tags/tag-expressions and "--format=plain" is used together on the command-line,
the correct scenario is selected to execute but the wrong steps seem to be used.
FEATURE FILE:
# FILE: feature/tutorial11_tags.feature
@wip
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2)
In order to increase the ninja survival rate,
As a ninja commander
I want my ninjas to decide whether to take on an opponent
based on their skill levels.
@ninja.any
Scenario: Weaker opponent
Given the ninja has a third level black-belt
When attacked by a samurai
Then the ninja should engage the opponent
@ninja.chuck
Scenario: Stronger opponent
Given the ninja has a third level black-belt
When attacked by Chuck Norris
Then the ninja should run for his life
When I execute the feature file from above w/ the following command-line everything works fine:
$ behave -c --tags @ninja.chuck ../features/tutorial11_tags.feature
@wip
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2) # ../features/tutorial11_tags.feature:2
In order to increase the ninja survival rate,
As a ninja commander
I want my ninjas to decide whether to take on an opponent
based on their skill levels.
@ninja.any
Scenario: Weaker opponent # ../features/tutorial11_tags.feature:10
Given the ninja has a third level black-belt
When attacked by a samurai
Then the ninja should engage the opponent
@ninja.chuck
Scenario: Stronger opponent # ../features/tutorial11_tags.feature:16
Given the ninja has a third level black-belt # ../features/steps/step_tutorial02.py:69
When attacked by Chuck Norris # ../features/steps/step_tutorial02.py:77
Then the ninja should run for his life # ../features/steps/step_tutorial02.py:81
SUMMARY:
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 1 skipped
3 steps passed, 0 failed, 3 skipped, 0 undefined
But when I execute the feature file from above with --format=plain, the second scenario seems to be executed with the steps from the first one:
$ behave --format=plain --tags @ninja.chuck ../features/tutorial11_tags.feature
Feature: Using Tags with Features and Scenarios (tutorial11 := tutorial2)
Scenario: Weaker opponent
Scenario: Stronger opponent
Given the ninja has a third level black-belt ... passed
When attacked by a samurai ... passed
Then the ninja should engage the opponent ... passed
SUMMARY:
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 1 skipped
3 steps passed, 0 failed, 3 skipped, 0 undefined
Took 0m0.0s
To allow steps to be automatically documented (e.g. with Sphinx) it would be useful to have an option to take the string passed into the @given/when/then/step decorators from the first line of the function's docstring.
For example:
@then(docstring=True)
def generateReport(context, date):
'''generate report for {date}
Generates a FinancialFudge report for the given date
and checks that the totals are correct.
'''
# do stuff
assert True
So here the step would be "generate report for {date}" and that's what will also appear in the documentation generated by epydoc or Sphinx. Otherwise those document generators only show the function name and not the step string.
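A rough sketch of how such a docstring option could work; the registration part is omitted, and "docstring=" is not an existing behave parameter:

```python
def then(step_text=None, docstring=False):
    """Sketch of a step decorator that can take its step text from
    the first line of the function's docstring (hypothetical)."""
    def decorator(func):
        text = step_text
        if docstring and func.__doc__:
            text = func.__doc__.strip().splitlines()[0]
        func.step_text = text  # a real implementation would register it
        return func
    return decorator

@then(docstring=True)
def generate_report(context, date):
    """generate report for {date}

    Generates a report for the given date and checks the totals.
    """
    assert True
```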
The docs seem to suggest that a Background section is run once per Feature when the intent is probably once per Scenario.
Seriously.
Using step as the name of functions decorated with @step will confuse Python. We are using intention-revealing names for our step functions instead. The tutorial could do the same to avoid others wasting time on this minor issue.
Yes, I know Safari is of the Apple is of the Evil but we should still try to make it look nice.
Python doesn't seem to like the following code:
from behave import *
@step("generate report for {date}")
def step(context, date):
# do stuff
assert True
@step("enter credentials for login")
def step(context):
# do stuff
assert True
I get this error:
...
File "C:\Users\...\steps\blob.py", line 13, in <module>
@step("enter credentials for login")
TypeError: step() takes exactly 2 arguments (1 given)
What actually happens: the first "def step(context, date)" rebinds the name step, shadowing the step decorator imported via "from behave import *", so the second @step("...") call invokes that two-argument function with only one argument.
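A minimal reproduction of the shadowing, with a stand-in decorator so it runs without behave installed:

```python
def step(text):
    # Stand-in for behave's @step decorator (no registration here).
    def decorator(func):
        return func
    return decorator

@step("generate report for {date}")
def step(context, date):   # rebinds the name 'step'!
    pass

# 'step' now refers to the two-argument function above, so the next
# decorator line calls it with a single argument and raises TypeError:
try:
    @step("enter credentials for login")
    def step(context):
        pass
    shadowed = False
except TypeError:
    shadowed = True
```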
Not sure if this is an issue but there doesn't seem to be a forum to ask questions about Behave.
The problem seems to be related to the "behave.egg-info/SOURCES.txt" file, which contains a line with a "/" (slash := root directory).
WORKAROUND:
Remove the "behave.egg-info/" directory and perform "python setup.py install".
The "*.egg-info" directory is regenerated during this step.
NOTE: In addition:
The "behave" script needs adaptation to work under Windows.
Currently, the user needs to build it by hand (under Windows).
This will address the confusion leading to issues like #22.
When running behave with --tag, the excluded/skipped scenarios are counted as passed in the test summary (and junit report when --junit is enabled).
For example, my feature file has 1 scenario tagged as "done", and 7 scenarios tagged as "unimplemented"
When I run:
behave --junit --no-color --no-capture-stderr --tags @done some.feature
The test result (in text and junit report) shows:
0 features passed, 1 failed, 0 skipped
7 scenarios passed, 1 failed, 0 skipped
I would expect to see:
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 7 skipped
VERSION: 1.1.0
CONTEXT: command-line processing
"behave --format help" raises an error because it tries to access the non-existing help formatter.
Therefore, it does not behave as described in the commands help text.
OOPS: Mmh, untested command-line behaviour ,-)
We'd like to support IronPython but there seem to be issues with using it with virtualenv and pip, possibly due to this bug:
http://ironpython.codeplex.com/workitem/30348
This issue can be assigned a milestone once the upstream issues with IronPython are a bit clearer.
/tmp% behave -f help
No steps directory in "/tmp/features"
When a step for a Scenario Outline is not implemented, the recommended snippets are duplicated N times for the same step, where N is the number of examples defined for the Scenario Outline.
When running Behave in Gnome Terminal (the standard terminal in Ubuntu), or in xterm, the escape sequence for "up" doesn't display right. Instead, it looks like:
Scenario: run a simple test # example.feature:3
Given we have behave installed # steps/example.py:3
�[#1A Given we have behave installed # steps/example.py:3
When we implement a test # steps/example.py:7
�[#1A When we implement a test # steps/example.py:7
Then behave will test it for us! # steps/example.py:11
�[#1A Then behave will test it for us! # steps/example.py:11
(Before the "[" there is one of those numbered/boxy symbols indicating a control character, but that doesn't copy/paste properly, of course.)
In "screen" or rxvt, the terminal output is right (the "grey" line is replaced by the "green" line when the step passes). So obviously this is an issue with differences in escape sequences between terminals.
I've looked around to try to figure out how to redefine this "up" command (it's cuu or cuu1) on my terminal, but I can't figure it out. And of course I don't really want to switch terminals just to use behave. Any ideas how to address this?
Hello,
I tried installing on windows, both using
$ pip install behave
as well as downloading it and doing a
$ python setup.py install
I get the following error
"""
writing top-level names to behave.egg-info\top_level.txtt.win32\egg\test
writing dependency_links to behave.egg-info\dependency_links.txt\test
reading manifest file 'behave.egg-info\SOURCES.txt'.win32\egg\test
Traceback (most recent call last):.py -> build\bdist.win32\egg\test
File "setup.py", line 41, in > build\bdist.win32\egg\test
"License :: OSI Approved :: BSD License",-> build\bdist.win32\egg\test
File "X:\sdk\Python27\lib\distutils\core.py", line 152, in setup\egg\test
dist.run_commands()init.py -> build\bdist.win32\egg\test
File "X:\sdk\Python27\lib\distutils\dist.py", line 953, in run_commandstion.py
self.run_command(cmd)
File "X:\sdk\Python27\lib\distutils\dist.py", line 972, in run_command ansi_es
cmd_obj.run()
.
[deleted]
.
File "X:\sdk\Python27\lib\site-packages\setuptools\command\egg_info.py", line 339, in add_defaults
self.read_manifest()
File "X:\sdk\Python27\lib\distutils\command\sdist.py", line 385, in read_manifest
self.filelist.append(line)
File "X:\sdk\Python27\lib\site-packages\setuptools\command\egg_info.py", line 278, in append
path = convert_path(item)
File "X:\sdk\Python27\lib\distutils\util.py", line 199, in convert_path
raise ValueError, "path '%s' cannot be absolute" % pathname
ValueError: path '/' cannot be absolute
"""
Anyways, it seems that the file behave.egg-info/SOURCES.txt has an
entry of "/" in it.
I deleted this line and it seems to install ok now.
and a test of the following provided no complaints
$ python -c "import behave"
Yay. :-)
Not sure what the "/" actually does but it breaks the install on windows.
(used both the dos cmd prompt as well as cygwin bash).
Just saying...
Most of the runner logic is in cli.py at the moment and should probably be refactored out so as to be a bit easier to follow.
Once this is done, cli.py could probably be renamed main.py
Wikipedia has an example of a feature file:
Scenario 1: Refunded items should be returned to stock
Given a customer previously bought a black sweater from me
and I currently have three black sweaters left in stock
when he returns the sweater for a refund
then I should have four black sweaters in stock.
Scenario 2: Replaced items should be returned to stock
Given that a customer buys a blue garment
and I have two blue garments in stock
and three black garments in stock.
When he returns the garment for a replacement in black,
then I should have three blue garments in stock
and two black garments in stock.
Note the numbering of Scenarios. That's useful for referencing.
As an aside, also note the capitalisation of the keywords, allowing the scenario text to be a slightly more correct form of English.
Currently "behave" only prints snippets for the first step it sees unimplemented for each scenario.
It would be good to print snippets for all unimplemented steps for the scenarios to be executed, taking into account of the features filter, scenarios filter, tags filter, etc.
And the number of undefined steps in the test summary should be updated accordingly.
The behave.runner.Runner.run() method implementation has a bug
that prevents behave's main() from returning non-zero result codes in case of test failures.
Currently, 0 (SUCCESS) is always returned because runner.run() does not have a return statement,
causing failed=None, which disables the result logic in main().
It is remarkable that nobody stumbled over this problem yet,
because failures are not detectable in build scripts, etc.
# file:behave/runner.py:
class Runner(object):
...
def run(self):
with self.path_manager:
self.setup_paths()
self.run_with_paths()
# SHOULD-BE: return self.run_with_paths()
NOTE:
behave's main() function has another problem related to its "sys.exit(str(exception))" usage:
passing a string to sys.exit() prints the string to stderr and exits with status 1, so a specific numeric exit code cannot be conveyed this way.
Before multiline text is parsed, the empty lines are already removed in the parse() method.
Therefore, the internal text code does not get the specified text.
DESIRED CHANGE:
Avoid removing empty lines in multiline text or specify a test that states that this is desired behavior.
CURRENT STATE:
Feature: ...
Scenario: Parser strips empty lines from multiline text (currently)
Given I have a multiline text argument with:
"""
Line 1 (followed by empty line).
Line 3.
"""
Then I receive the following "context.text" value with:
"""
Line 1 (followed by empty line).
Line 3.
"""
I get the following issue when trying to run behave under Windows. It seems to go up into the parent directory for some reason, rather than searching subdirectories. (adding a steps directory in the current folder stops it)
C:\Users\Anthony\Desktop\openfriendly\openfriendly>behave-run.py -v .
['./', 'C:\\Users\\Anthony', 'C:\\Users\\Anthony\\AppData\\Roaming']
Using defaults:
logging_format %(levelname)s:%(name)s:%(message)s
dry_run False
color True
stdout_capture True
log_capture True
summary True
show_skipped True
show_snippets True
junit False
show_source True
Supplied path: "."
Trying base directory: 'C:\Users\Anthony\Desktop\openfriendly\openfriendly'
Trying base directory: 'C:\Users\Anthony\Desktop\openfriendly'
Trying base directory: 'C:\Users\Anthony\Desktop'
Trying base directory: 'C:\Users\Anthony'
Trying base directory: 'C:\Users'
Trying base directory: 'C:\'
Traceback (most recent call last):
File "C:\Python27\Scripts\behave-run.py", line 3, in <module>
main()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\__main__.py", line 90, in main
failed = runner.run()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 418, in run
self.setup_paths()
File "C:\Python27\lib\site-packages\behave-1.1.0-py2.7.egg\behave\runner.py", line 346, in setup_paths
for dirpath, dirnames, filenames in os.walk(base_dir):
File "C:\Python27\lib\os.py", line 294, in walk
for x in walk(new_path, topdown, onerror, followlinks):
File "C:\Python27\lib\os.py", line 294, in walk
for x in walk(new_path, topdown, onerror, followlinks):
File "C:\Python27\lib\os.py", line 294, in walk
for x in walk(new_path, topdown, onerror, followlinks):
File "C:\Python27\lib\os.py", line 294, in walk
for x in walk(new_path, topdown, onerror, followlinks):
File "C:\Python27\lib\os.py", line 294, in walk
for x in walk(new_path, topdown, onerror, followlinks):
File "C:\Python27\lib\os.py", line 284, in walk
if isdir(join(top, name)):
File "C:\Python27\lib\genericpath.py", line 41, in isdir
st = os.stat(s)
KeyboardInterrupt
This would be useful in CI environments.
When a feature contains a Scenario Outline with 3 steps and 2 examples, where 1 example passed and the other failed, the test summary looks like:
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
1 step passed, 1 failed, 0 skipped, 0 undefined
Took 0m0.0s
I would expect it to be:
0 features passed, 1 failed, 0 skipped
1 scenario passed, 1 failed, 0 skipped
5 steps passed, 1 failed, 0 skipped, 0 undefined
The parser removes "comment-lines" in multiline text arguments before multiline text is parsed.
DESIRED: Multiline text should be just passed through (whitespace stripping is OK)
SEE ALSO: behave.parser.Parser.action() method
EXAMPLE:
# file:features/behave_parser_strips_comments.feature
Feature: One
Scenario: SAD, Comments are stripped in multiline args
Given I have the following multiline argument with:
"""
Hello here
# -- COMMENT LINE
And here it ends.
"""
Then I get the following multiline text within a step definition:
"""
Hello here
And here it ends.
"""
NOTES:
behave.model.Step.run() also has a problem in these cases.
If only comment lines exist, the "context.text" attribute is not set.
Better use "if self.text is not None: ..." instead of "if self.text: context.text = self.text".
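A small self-contained illustration of the suggested None check (apply_text and FakeContext are hypothetical stand-ins, not behave code):

```python
class FakeContext(object):
    pass

def apply_text(step_text, context):
    # Suggested fix: an explicit None check, so that an empty string
    # (e.g. from a comment-only multiline arg) is still assigned.
    if step_text is not None:
        context.text = step_text

context = FakeContext()
apply_text("", context)  # with "if step_text:" nothing would be set
```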
main() contains no rule to show version.
HINT: Maybe you should consider using argparse or optparse with some add-ons for configfile parsing.
This would be also useful for CI environments.