Comments (7)
- I assume you want the tags data as testcase.property entries (by splitting each tag up into name=value). If you need something else, say so. But this essential information, how you want to transport your information, is not described above.
OTHERWISE:
- You can easily post-process the JUnit XML output and rewrite it by using various Python modules that support such functionality (examples: junitparser, junit-xml, …)
- You can add another Formatter that provides JUnit XML output (it may reuse the JUnitReporter class)
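As a sketch of the post-processing route, assuming only the standard library's xml.etree.ElementTree (junitparser offers a higher-level API for the same task), one could inject a property into every testcase like this; the report string and property name/value are made-up illustration data:

```python
# Sketch of the post-processing approach: read a JUnit XML report string
# and inject a <property> into every <testcase>. The inline report and the
# property name/value below are made-up illustration data.
import xml.etree.ElementTree as ET

def add_testcase_property(junit_xml: str, name: str, value: str) -> str:
    root = ET.fromstring(junit_xml)
    for testcase in root.iter("testcase"):
        props = testcase.find("properties")
        if props is None:
            props = ET.SubElement(testcase, "properties")
        ET.SubElement(props, "property", name=name, value=value)
    return ET.tostring(root, encoding="unicode")

report = '<testsuite><testcase classname="test" name="test_1"/></testsuite>'
print(add_testcase_property(report, "test_id", "839"))
```

The same loop works on a report file via ET.parse(); junitparser would replace the manual element handling with typed TestCase/Property objects.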
from behave.
I just reevaluated pytest markers with JUnitXML output (by using: pytest -rA --show-capture=all ...):
- pytest only adds information to the JUnit XML for known markers, like: skip, xfail, ... (in the <skipped> XML element)
- pytest does not add any marker-related information for user-defined markers on passing tests
- pytest shows user-defined markers for failing tests in the <failure> XML element (as marker-decorators)
THEREFORE:
- Specify where you expect that information to be added to the JUnit XML report (and which info)!
NOTES:
- behave: Scenario tags are already shown in the CDATA part of the JUnit XML output for newer behave versions (like: behave v1.2.7.dev5)
from behave.
First of all, thanks for taking the time to check with me.
As per your second reply:
Not sure we're talking about the same thing, but what I meant is shown below.
We register the new marker by adding it to a pytest.ini file in the folder that contains the test file; pytest parses it automatically on startup.
tj@tj-ubuntu:~/test$
tj@tj-ubuntu:~/test$ cat pytest.ini
[pytest]
markers =
    test_id: The id of the test as known in RMS.
tj@tj-ubuntu:~/test$
Then the test code:
tj@tj-ubuntu:~/test$ cat test.py
import pytest

@pytest.mark.test_id("839")
def test_1():
    assert 12 == 12
tj@tj-ubuntu:~/test$
Execution and manual formatted output:
tj@tj-ubuntu:~/test$ pytest -rA --show-capture=all --junitxml=./test-run-results.xml test.py
== test session starts ==
platform linux -- Python 3.11.4, pytest-7.2.1, pluggy-1.0.0+repack
rootdir: /home/tj/test, configfile: pytest.ini
collected 1 item
test.py . [100%]
== PASSES ==
-- generated xml file: /home/tj/test/test-run-results.xml --
== short test summary info ==
PASSED test.py::test_1
== 1 passed in 0.01s ==
tj@tj-ubuntu:~/test$
tj@tj-ubuntu:~/test$
And the final result (with manual formatted xml output):
tj@tj-ubuntu:~/test$
tj@tj-ubuntu:~/test$ cat ./test-run-results.xml
<?xml version="1.0" encoding="utf-8"?>
<testsuites>
  <testsuite name="pytest" errors="0" failures="0" skipped="0" tests="1" time="0.017" timestamp="2023-08-20T23:03:56.208288" hostname="tj-ubuntu">
    <testcase classname="test" name="test_1" time="0.000">
      <properties>
        <property name="test_id" value="839" />
      </properties>
    </testcase>
  </testsuite>
</testsuites>
tj@tj-ubuntu:~/test$
tj@tj-ubuntu:~/test$
As per your first reply:
Somehow I need to know which tests failed and which tests passed, so that I can pass that info along to the requirements management system (RMS). In the example above, I can add the marker (in the code) to identify each test, because the pass/fail results are in the XML together with the marker id. After the run, another Python script (call it my notifier) takes that XML output, goes through the list of results, and reports each result to the RMS together with its id.
If the TAG can be added to the tests in the Behave-style Python code, and that TAG is included in some XML block inside the XML output file, then I'm all set; I just have to add parsing of that block to the notifier.
I will check how the TAG can be used and see if it ends up in the XML.
Thanks for now. Cheers!
from behave.
Thanks, but in pytest:
- the markers list is only used for strict-marker checking and CLI diagnostics (IMHO)
- My POC with a test_example.py containing the marker-decorator @pytest.mark.test_id(123) did not lead to a property in the JUnitXML file (same for a similar marker-decorator with a string arg)
- Only if I use record_property(), which is DEPRECATED for the xunit2 dialect, did I get a property as part of the <testcase> XML element
- USING: pytest v7.4.0 with Python 3.10 (and test_id in the markers list param in the config-file)
RELATED TO: behave
- To get the equivalent of what you want to achieve in behave, you need a Scenario.record_property() method (as simplified functionality)
- The JUnit reporter can access the recorded properties and store them as testcase properties in the JUnitXML file
- Other formatters can access the recorded properties and store/provide them in the output format in a sensible way
- THEN: You can use the before_tag() hook to call ctx.scenario.record_property() to store the related information (for some properties)
- TAG EXAMPLES: @test_id=123 or @test_id:123 or @test_id.123 or …
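To illustrate the tag convention, here is a minimal sketch of splitting such a tag into a name/value pair. parse_property_tag is a made-up helper name, and Scenario.record_property() is the proposed, not-yet-existing API, so the hook usage is shown only as a comment:

```python
# Sketch: split a behave tag like "test_id=123" (behave strips the leading
# "@") into a (name, value) pair. parse_property_tag is a made-up helper;
# Scenario.record_property() is the proposed, not-yet-existing API.
import re

TAG_PATTERN = re.compile(r"^(?P<name>\w+)[=:.](?P<value>.+)$")

def parse_property_tag(tag):
    """Return (name, value) for tags like 'test_id=123', else None."""
    match = TAG_PATTERN.match(tag)
    return (match.group("name"), match.group("value")) if match else None

# A before_tag() hook could then (hypothetically) do:
#
# def before_tag(ctx, tag):
#     parsed = parse_property_tag(tag)
#     if parsed is not None:
#         ctx.scenario.record_property(*parsed)

print(parse_property_tag("test_id=123"))  # ('test_id', '123')
```

Plain tags without a separator (e.g. "wip") fall through to None, so ordinary tags are left alone.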
RELATED TO: allure test reports
- The equivalent of the pytest property is an allure test environment parameter
- SEE: https://docs.qameta.io/allure-report/#_environment
from behave.
HINT:
The core functionality of Scenario.record_property() (or similar) is already covered by the following feature request: #1112
from behave.
Your remark on record_property in pytest reminded me that you indeed need to call it for the specific properties to end up in the XML output; sorry, I forgot that.
tj@tj-ubuntu:~/test$ cat conftest.py
import pytest

def pytest_collection_modifyitems(session, config, items):
    # Copy each test_id marker argument into user_properties, so that it
    # shows up as a <property> of the <testcase> in the JUnit XML report.
    for item in items:
        for marker in item.iter_markers(name="test_id"):
            test_id = marker.args[0]
            item.user_properties.append(("test_id", test_id))
tj@tj-ubuntu:~/test$
tj@tj-ubuntu:~/test$
from behave.
Ok, doing this:
Feature: showing off behave

  @test_id=13
  Scenario: run a simple test on an ENC
    Given we have a new test board as ENC
    When we implement a test
    Then behave will test it for us!
results in
<testsuite name="tutorial.showing off behave" tests="1" errors="0" failures="0" skipped="0" time="0.000384" timestamp="2023-08-22T08:47:30.168057" hostname="tj-ubuntu">
  <testcase classname="tutorial.showing off behave" name="run a simple test on an ENC" status="passed" time="0.000189">
    <system-out>
<![CDATA[
@scenario.begin

**@test_id=13**
Scenario: run a simple test on an ENC
  Given we have a new test board as ENC ... passed in 0.000s
  When we implement a test ... passed in 0.000s
  Then behave will test it for us! ... passed in 0.000s

@scenario.end
--------------------------------------------------------------------------------
Captured stdout:
before step
after step
before step
after step
before step
after step
]]>
    </system-out>
  </testcase>
</testsuite>
Is that what you meant? I can add parsing of this CDATA structure in my notifier.
Or did you suggest something else?
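For the notifier side, a minimal sketch of that parsing step (assuming the report layout shown above, with the tag inside the <system-out> CDATA and the result in the status attribute; the inline report is a trimmed stand-in for the real output) could look like:

```python
# Sketch of the notifier's parsing step: pull the test_id tag out of the
# <system-out> CDATA and pair it with the testcase status attribute. The
# inline report is a trimmed stand-in for the real behave output.
import re
import xml.etree.ElementTree as ET

def extract_results(junit_xml):
    results = []
    for testcase in ET.fromstring(junit_xml).iter("testcase"):
        system_out = testcase.findtext("system-out") or ""
        match = re.search(r"@test_id=(\w+)", system_out)
        if match:
            results.append((match.group(1), testcase.get("status")))
    return results

report = """<testsuite tests="1">
  <testcase name="run a simple test on an ENC" status="passed">
    <system-out><![CDATA[
@scenario.begin
@test_id=13
Scenario: run a simple test on an ENC
@scenario.end
]]></system-out>
  </testcase>
</testsuite>"""
print(extract_results(report))  # [('13', 'passed')]
```

Each (test_id, status) pair could then be forwarded to the RMS by the notifier.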
from behave.