Comments (11)
I wasn't entirely sure if the approach in #45 was the right way to deal with it, and since I was quite busy at the time I neglected this for a while; sorry for that. After giving it some more thought, I do believe this is something we should handle properly.
It looks like the JUnit spec was created purely for testing, so AFAIK it doesn't have a good way of dealing with build errors (or even the case described in #41). We can work around it by declaring a fake test in the report called, for example, `build failure`. Any build output we capture can go in there as well, which solves the other problem I mentioned earlier. We can also increment the `failures` and `tests` numbers in the `testsuite` like you proposed.
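The fake-test workaround could be sketched roughly like this, using Go's `encoding/xml`. This is only an illustration under my own assumptions: the struct and function names here are invented for the example and are not go-junit-report's actual types.

```go
// Sketch: emit a JUnit report containing one synthetic "build failure"
// test case that carries the captured build output, with the tests and
// failures counters incremented to match.
package main

import (
	"encoding/xml"
	"fmt"
)

type Failure struct {
	Message  string `xml:"message,attr"`
	Contents string `xml:",chardata"`
}

type TestCase struct {
	Name    string   `xml:"name,attr"`
	Failure *Failure `xml:"failure,omitempty"`
}

type TestSuite struct {
	XMLName  xml.Name   `xml:"testsuite"`
	Name     string     `xml:"name,attr"`
	Tests    int        `xml:"tests,attr"`
	Failures int        `xml:"failures,attr"`
	Cases    []TestCase `xml:"testcase"`
}

// buildFailureSuite wraps captured build output in a fake failing test.
func buildFailureSuite(pkg, buildOutput string) TestSuite {
	return TestSuite{
		Name:     pkg,
		Tests:    1,
		Failures: 1,
		Cases: []TestCase{{
			Name:    "build failure",
			Failure: &Failure{Message: "build failed", Contents: buildOutput},
		}},
	}
}

func main() {
	out, err := xml.MarshalIndent(buildFailureSuite("example/pkg", "undefined: foo"), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

A CI system reading the report would then count the build error as one failing test, which is the behavior proposed above.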
I'll leave some comments on #45 and we should be able to make this work and get it merged.
from go-junit-report.
I agree with @posener in that you want the usage to remain the same in order to detect build failures. Additionally, when running tests for multiple packages at once, you can't specify the -c flag, so in my case, where I want to use go-junit-report like
go test ./... | go-junit-report
there's no good way outside of this pull request to detect build failures and still have tests reported.
from go-junit-report.
If you only care about the non zero exit code when the test doesn't compile, you could also try to compile the test before running it. For example:
$ go test -c -o test.out && ./test.out --test.v | go-junit-report; echo $?
I'm not entirely convinced yet whether this case should be supported, as it seems like a lot of extra code just to set the exit code. Have you maybe thought about capturing the error message and making it available in the report (and how would that be represented in the junit format)?
from go-junit-report.
Hey, thanks for the feedback.
As for the command line solution:
- It will take twice the time (or maybe more) to build and test the program rather than only test it.
- It also doesn't make sense to need this whole command, a combination of two commands, just to find out whether your program is failing.
- If you already have this info from the test report, why not use it?
By leaving go-junit-report as it is, its return code might be wrong in some cases, and thus its behavior is not as expected.
It might be a lot of extra code (I'm not sure; a lot of it is test code). If you have any advice on how to implement it in a simpler way, I'll be glad to rewrite it.
About the JUnit output: I see that the `testsuite` element can have an `errors` attribute. Do you think it might suit our case?
Cheers
from go-junit-report.
When you run `go test` it still has to compile the code first; that's why you see the build failed error. So it wouldn't take twice the time. I thought the command line was a nice solution: it will skip go-junit-report on build errors and return a non-zero exit code. This is something you'd only have to set up once, and you already have to configure it to run `go test` together with go-junit-report. I'd be curious to know how you're currently using it.
As for the error in the report, I got the impression from the discussion in #41 that the JUnit spec doesn't support testsuite-level failures. The errors attribute that you mention contains the number of failing tests in the testsuite AFAIK.
from go-junit-report.
As for the time: if the build fails, then you are right, it won't take longer. But if it doesn't, then the test time will be the time of `go build` + `go test`, which is longer than the time of `go test` alone.
As for the usage: I think this is less trivial than just running `go test`, and if you already have the information it is a shame not to use it. I'd suggest that accepting this PR will prevent errors from users who did not know they also have to build their code to know that it is in good shape. The report is misleading.
As for the report, it looks like the `testsuite` element does have an `errors` attribute: here.
from go-junit-report.
Actually, when you run `go test` it builds the source and then runs the tests. You can verify this by passing the `-x` flag to `go test`. The `errors` attribute that you referred to is a number indicating how many tests errored; we cannot store the build error in there (this is also being discussed in issue #41).
Does the workaround I proposed not work in your case? I'd expect that most people use this tool in an automated way that only needs to be setup once.
from go-junit-report.
I hadn't considered that case @MicahLC, thanks. Do you have any thoughts how these errors should be included in the XML report that is generated?
from go-junit-report.
Not particularly, I don't know much about the XML format. As long as it shows up as a failure at a high level (e.g. when Jenkins picks it up), I'll be happy :)
from go-junit-report.
Just hit the case described by MicahLC as well. Our branches were happily green in Bamboo while a PR got merged that actually broke the build; the error reporting was masked.
from go-junit-report.
@jstemmer and @posener : I've looked at the XML linked in @posener's last comment on Feb 16th, and I agree that the build failure should be reported in the "errors" attribute on the "testsuite". You might need to also increase the number of "tests" in the "testsuite" to 1 in that case; I'm not sure how JUnit would respond if you tell it there were no tests but 1 test had an error. Thoughts?
It'd be great to get this resolved and pulled in soon. It sounds like more and more people are broken by this issue.
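A report along those lines might look something like the following. This is only a sketch of the idea: the package name and error text are made up, and whether the build error belongs in `errors` (with a matching `<error>` element) rather than `failures` is exactly the open question above.

```xml
<testsuite name="example/pkg" tests="1" errors="1" failures="0">
  <testcase classname="example/pkg" name="build failure">
    <error message="build failed">./file.go:3:2: undefined: foo</error>
  </testcase>
</testsuite>
```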
from go-junit-report.