
gotestsum's Introduction

gotestsum

gotestsum runs tests using go test -json, prints formatted test output, and prints a summary of the test run. It is designed to work well both for local development and for automation like CI. gotestsum is used by some of the most popular Go projects.

Install

Download a binary from releases, or build from source with go install gotest.tools/gotestsum@latest. To run without installing use go run gotest.tools/gotestsum@latest.

Documentation

Core features

CI and Automation

  • --junitfile - write a JUnit XML file for integration with CI systems.
  • --jsonfile - write all the test2json input received by gotestsum to a file. The file can be used as input to gotestsum tool slowest, or as a way to store the full verbose output of tests when less verbose output is printed to stdout using a compact --format.
  • --rerun-fails - run failed (possibly flaky) tests again to avoid re-running the entire suite. Re-running individual tests can save significant time when working with flaky test suites.

Local Development

  • --watch - every time a .go file is saved, run the tests for the package that changed.
  • --post-run-command - run a command after the tests; it can be used for desktop notification of the test run.
  • gotestsum tool slowest - find the slowest tests, or automatically update the source code of the slowest tests to add a conditional t.Skip statement. This statement allows you to skip the slowest tests using gotestsum -- -short ./....

Output Format

The --format flag or GOTESTSUM_FORMAT environment variable sets the format that is used to print the test names, and possibly the test output, as the tests run. Most formats use color to highlight pass, fail, or skip.

The --format-icons flag changes the icons used by the pkgname and testdox formats. You can set the GOTESTSUM_FORMAT_ICONS environment variable instead of the flag. The nerdfonts icons require a font from Nerd Fonts.

Commonly used formats (see --help for a full list):

  • dots - print a character for each test.
  • pkgname (default) - print a line for each package.
  • testname - print a line for each test and package.
  • testdox - print a sentence for each test using gotestdox.
  • standard-quiet - the standard go test format.
  • standard-verbose - the standard go test -v format.
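
For example, the following two invocations are equivalent ways to select the testname format:

gotestsum --format testname
GOTESTSUM_FORMAT=testname gotestsum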

Have an idea for a new format? Please share it on github!

Demo

A demonstration of three --format options.


Summary

Following the formatted output is a summary of the test run. The summary includes:

  • The test output, and elapsed time, for any test that fails or is skipped.

  • The build errors for any package that fails to build.

  • A DONE line with a count of tests run, tests skipped, tests failed, package build errors, and the elapsed time including time to build.

    DONE 101 tests[, 3 skipped][, 2 failures][, 1 error] in 0.103s
    

To hide parts of the summary, use --hide-summary with the names of the sections to hide.

Example: hide skipped tests in the summary

gotestsum --hide-summary=skipped

Example: hide everything except the DONE line

gotestsum --hide-summary=skipped,failed,errors,output
# or
gotestsum --hide-summary=all

Example: hide test output in the summary, only print names of failed and skipped tests and errors

gotestsum --hide-summary=output

JUnit XML output

When the --junitfile flag or GOTESTSUM_JUNITFILE environment variable is set to a file path, gotestsum will write a test report, in JUnit XML format, to the file. This file can be used to integrate with CI systems.

gotestsum --junitfile unit-tests.xml

If the package names in the testsuite.name or testcase.classname fields do not work with your CI system, these values can be customized using the --junitfile-testsuite-name or --junitfile-testcase-classname flags. These flags accept the following values:

  • short - the base name of the package (the single term specified by the package statement).
  • relative - a package path relative to the root of the repository
  • full - the full package path (default)
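
For example, to use a repository-relative path for the suite name and the short package name for the classname:

gotestsum --junitfile unit-tests.xml --junitfile-testsuite-name=relative --junitfile-testcase-classname=short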

Note: If Go is not installed, or the go binary is not in PATH, the GOVERSION environment variable can be set to remove the "failed to lookup go version for junit xml" warning.

JSON file output

When the --jsonfile flag or GOTESTSUM_JSONFILE environment variable is set to a file path, gotestsum will write a line-delimited JSON file with all the test2json output that was written by go test -json. This file can be used to compare test runs, or to find flaky tests.

gotestsum --jsonfile test-output.log

Post Run Command

The --post-run-command flag may be used to execute a command after the test run has completed. The binary will be run with the following environment variables set:

GOTESTSUM_ELAPSED       # test run time in seconds (ex: 2.45s)
GOTESTSUM_FORMAT        # gotestsum format (ex: pkgname)
GOTESTSUM_JSONFILE      # path to the jsonfile, empty if no file path was given
GOTESTSUM_JUNITFILE     # path to the junit.xml file, empty if no file path was given
TESTS_ERRORS            # number of errors
TESTS_FAILED            # number of failed tests
TESTS_SKIPPED           # number of skipped tests
TESTS_TOTAL             # number of tests run

To get more details about the test run, such as failure messages or the full list of failed tests, run gotestsum with either a --jsonfile or --junitfile and parse the file from the post-run-command. The gotestsum/testjson package may be used to parse the JSON file output.
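
As a sketch of that approach, the hypothetical program below (not part of gotestsum) reads the file named by GOTESTSUM_JSONFILE and prints the failed tests. It decodes the test2json lines directly with the standard library instead of importing the testjson package.

// failedtests: a minimal post-run command sketch that lists failed tests
// from the gotestsum --jsonfile output. The program name and behavior are
// illustrative, not part of gotestsum.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the test2json fields used here (Time and Output are omitted).
type event struct {
	Action  string
	Package string
	Test    string
	Elapsed float64
}

func main() {
	// gotestsum sets GOTESTSUM_JSONFILE for the post-run command.
	path := os.Getenv("GOTESTSUM_JSONFILE")
	if path == "" {
		fmt.Fprintln(os.Stderr, "no --jsonfile was configured")
		return
	}
	fh, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer fh.Close()

	scanner := bufio.NewScanner(fh)
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// A "fail" action with a non-empty Test field is a failed test
		// (package-level failures have an empty Test field).
		if ev.Action == "fail" && ev.Test != "" {
			fmt.Printf("FAILED %s %s (%.2fs)\n", ev.Package, ev.Test, ev.Elapsed)
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}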

Example: desktop notifications

First install the example notification command with go get gotest.tools/gotestsum/contrib/notify. The command will be downloaded to $GOPATH/bin as notify. Note that this example notify command only works on Linux with notify-send and on macOS with terminal-notifier installed.

On Linux, you need to have some "test-pass" and "test-fail" icons installed in your icon theme. Some sample icons can be found in contrib/notify, and can be installed with make install.

On Windows, you can install notify-send.exe, but it does not support custom icons, so you will have to use the basic "info" and "error" icons.

gotestsum --post-run-command notify

Example: command with flags

Positional arguments or command-line flags can be passed to the --post-run-command by quoting the whole command.

gotestsum --post-run-command "notify me --date"

Example: printing slowest tests

The post-run command can be combined with other gotestsum commands and tools to provide a more detailed summary. This example uses gotestsum tool slowest to print the slowest 10 tests after the summary.

gotestsum \
  --jsonfile tmp.json.log \
  --post-run-command "bash -c '
    echo; echo Slowest tests;
    gotestsum tool slowest --num 10 --jsonfile tmp.json.log'"

Re-running failed tests

When the --rerun-fails flag is set, gotestsum will re-run any failed tests. The tests will be re-run until each passes once, or the number of attempts exceeds the maximum attempts. Maximum attempts defaults to 2, and can be changed with --rerun-fails=n.

To avoid re-running tests when there are real failures, the re-run will be skipped when there are too many test failures. By default this value is 10, and can be changed with --rerun-fails-max-failures=n.
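
For example, to allow up to 3 attempts per failed test and to skip re-running when more than 20 tests fail:

gotestsum --rerun-fails=3 --rerun-fails-max-failures=20 --packages="./..."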

Note that using --rerun-fails may require the use of other flags, depending on how you specify args to go test:

  • when used with --raw-command, the re-run will pass additional arguments to the command. The first arg is a -test.run flag with a regex that matches the test to re-run, and the second is the name of a Go package. These additional args can be passed to go test, or to a test binary.

  • when used with any go test args (anything after -- on the command line), the list of packages to test must be specified as a space separated list using the --packages arg.

    Example

    gotestsum --rerun-fails --packages="./..." -- -count=2
    
  • if any of the go test args should be passed to the test binary, instead of go test itself, the -args flag must be used to separate the two groups of arguments. -args is a special flag that is understood by go test to indicate that any following args should be passed directly to the test binary.

    Example

    gotestsum --rerun-fails --packages="./..." -- -count=2 -args -update-golden
    

Custom go test command

By default gotestsum runs tests using the command go test -json ./.... You can change the command with positional arguments after a --. You can change just the test directory value (which defaults to ./...) by setting the TEST_DIRECTORY environment variable.

You can use --debug to echo the command before it is run.

Example: set build tags

gotestsum -- -tags=integration ./...

Example: run tests in a single package

gotestsum -- ./io/http

Example: enable coverage

gotestsum -- -coverprofile=cover.out ./...

Example: run a script instead of go test

gotestsum --raw-command -- ./scripts/run_tests.sh

Note: when using --raw-command, the script must follow a few rules about stdout and stderr output:

  • The stdout produced by the script must only contain the test2json output, or gotestsum will fail. If it isn't possible to change the script to avoid non-JSON output, you can use --ignore-non-json-output-lines (added in version 1.7.0) to ignore non-JSON lines and write them to gotestsum's stderr instead.
  • Any stderr produced by the script will be considered an error (this behaviour is necessary because package build errors are only reported by writing to stderr, not to the test2json stdout). Any stderr produced by tests is not considered an error (it will be in the test2json stdout).

Example: accept input from stdin

cat out.json | gotestsum --raw-command -- cat

Example: run tests with profiling enabled

Using a profile.sh script like this:

#!/usr/bin/env bash
set -eu

for pkg in $(go list "$@"); do
    dir="$(go list -f '{{ .Dir }}' $pkg)"
    go test -json -cpuprofile="$dir/cpu.profile" "$pkg"
done

You can run:

gotestsum --raw-command ./profile.sh ./...

Example: using TEST_DIRECTORY

TEST_DIRECTORY=./io/http gotestsum

Executing a compiled test binary

gotestsum supports executing a compiled test binary (created with go test -c) by running it as a custom command.

The -json flag is handled by go test itself; it is not available when using a compiled test binary, so go tool test2json must be used to get the output that gotestsum expects.

Example: running ./binary.test

gotestsum --raw-command -- go tool test2json -t -p pkgname ./binary.test -test.v

pkgname is the name of the package being tested; it will show up in the test output. ./binary.test is the path to the compiled test binary. The -test.v flag must be included so that go tool test2json receives all the output.
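
Putting it together, a hypothetical end-to-end example (the package path ./api and the binary name api.test are placeholders) might look like:

go test -c -o ./api.test ./api
gotestsum --raw-command -- go tool test2json -t -p api ./api.test -test.v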

To execute a test binary without installing Go, see running without go.

Finding and skipping slow tests

gotestsum tool slowest reads test2json output, from a file or stdin, and prints the names and elapsed time of slow tests. The tests are sorted from slowest to fastest.

gotestsum tool slowest can also rewrite the source of tests slower than the threshold, making it possible to optionally skip them.

The test2json output can be created with gotestsum --jsonfile or go test -json.

See gotestsum tool slowest --help.

Example: printing a list of tests slower than 500 milliseconds

$ gotestsum --format dots --jsonfile json.log
[.]····↷··↷·
$ gotestsum tool slowest --jsonfile json.log --threshold 500ms
gotest.tools/example TestSomething 1.34s
gotest.tools/example TestSomethingElse 810ms

Example: skipping slow tests with go test --short

Any test slower than 200 milliseconds will be modified to add:

if testing.Short() {
    t.Skip("too slow for testing.Short")
}

go test -json -short ./... | gotestsum tool slowest --skip-stmt "testing.Short" --threshold 200ms

Use git diff to see the file changes. The next time tests are run using --short all the slow tests will be skipped.
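
After the rewrite, a slow test might look like this (TestSomething is a hypothetical test name):

func TestSomething(t *testing.T) {
	if testing.Short() {
		t.Skip("too slow for testing.Short")
	}
	// ... original test body ...
}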

Run tests when a file is saved

When the --watch flag is set, gotestsum will watch directories using file system notifications. When a Go file in one of those directories is modified, gotestsum will run the tests for the package that contains the changed file. By default all directories under the current directory with at least one .go file will be watched. Use the --packages flag to specify a different list.

If --watch is used with a command line that includes the name of one or more packages as command line arguments (ex: gotestsum --watch -- ./... or gotestsum --watch -- ./extrapkg), the tests in those packages will also be run when any file changes.

With the --watch-chdir flag, gotestsum will change the working directory to the directory with the modified file before running tests. Changing the directory is primarily useful when the project contains multiple Go modules. Without this flag, go test will refuse to run tests for any package outside of the main Go module.

While in watch mode, pressing some keys will perform an action:

  • r will run tests for the previous event. Added in version 1.6.1.
  • u will run tests for the previous event, with the -update flag added. Many golden packages use this flag to automatically update expected values of tests. Added in version 1.8.1.
  • d will run tests for the previous event using dlv test, allowing you to debug a test failure using delve. A breakpoint will automatically be added at the first line of any tests which failed in the previous run. Additional breakpoints can be added with runtime.Breakpoint or by using the delve command prompt. Added in version 1.6.1.
  • a will run tests for all packages, by using ./... as the package selector. Added in version 1.7.0.
  • l will scan the directory list again, and if there are any new directories which contain a file with a .go extension, they will be added to the watch list. Added in version 1.7.0.

Note that delve must be installed in order to use debug (d).

Example: run tests for a package when any file in that package is saved

gotestsum --watch --format testname

Who uses gotestsum?

The projects below use (or have used) gotestsum.

Please open a GitHub issue or pull request to add or remove projects from this list.

Development


Pull requests and bug reports are welcome! Please open an issue first for any big changes.

Thanks

This package is heavily influenced by the pytest test runner for Python.

gotestsum's People

Contributors

afbjorklund, andersjanmyr, brycekahle, cpuguy83, dagood, dnephin, dprotaso, gaby, greut, howardjohn, james-crowley, jonasdebeukelaer, lokst, n-oden, nfi-hashicorp, nikolaigut, nikolaydubina, noblubb, ondrej-fabry, pawka, silverwind, smoynes, suzuki-shunsuke, szaydel, thajeztah, tiny-dancer, uhthomas, v1gnesh, wfscheper, xoxys


gotestsum's Issues

Configure verbosity of failing tests separately from passing tests

When running longer test suites, I don't want to wait for the suite to finish to see my failures. Currently the only way to get this behavior is to use a -verbose format, but then I see all of my passing tests, which makes it hard to see the (comparatively) smaller number of failures.

I would like to be able to specify whether to show passing tests verbosely independently of failing tests. Something like gotestsum --format short --verbose-failures.

Ability to apply a transformation to the "testsuite" name in junit

If I run tests on a package github.com/ijc/foo/... using -junitfile then I get results with:

<testsuite tests="33" failures="0" time="0.000000" name="github.com/ijc/foo/bar">
...
<testsuite tests="33" failures="0" time="0.000000" name="github.com/ijc/foo/baz">

...so far so good.

However if I feed this into Jenkins it creates me a directory named "github" and within that there are two entries "com/ijc/foo/bar" and "com/ijc/foo/baz", this is not really a desirable presentation.

It would be useful if gotestsum could allow fiddling with the name somehow. My preference would be to allow specifying a (common) prefix to be dropped from all the suite names (e.g. I would likely choose to drop github.com/ijc/foo/). A variant of that would be a -p«N» option (cf patch(1)) to strip «n» path elements.

I'm not sure if other transformations might be useful/better e.g. tr . - or some sort of escaping.

Just stripping a prefix has the benefit of being nice and simple I think.

[go1.13.x] Error handling TestMain logic

If TestMain exits 0 (or just returns) and no tests are executed, the package will show as failed. See: DataDog/datadog-agent#5294

func TestMain(m *testing.M) {
	if _, ok := os.LookupEnv("INTEGRATION"); !ok {
		log.Println("--- SKIP: to run tests in this package, set the INTEGRATION environment variable")
		os.Exit(0)
	}
	os.Exit(m.Run())
}
=== Failed
=== FAIL: pkg/trace/test/testsuite  (0.00s)
2020/04/15 15:35:55 --- SKIP: to run tests in this package, set the INTEGRATION environment variable
ok  	github.com/DataDog/datadog-agent/pkg/trace/test/testsuite	1.015s

dots formatting broken in v0.4.1

Upgrading to v0.4.1 has broken dots formatting.
It's now printing out a huge amount of output like this:

    🖴  internal/pkg/jwt ·······
    🖴  internal/pkg/random ·
    🖴  internal/pkg/httputil ···············
    🖴  internal/pkg/retry ···
    🖴  internal/migrator ··
    🖴  internal/api/audit ·········
    🖴  internal/blah··········································

command used:

gotestsum --format dots ./...

platform: OSX 10.15.1

Cheers

Module go download logs appearing as Errors (go v1.11)

Hey, I'm using the following command to run tests:

gotestsum -- ./api/...

inside a golang:1.11.1-stretch docker container, and also locally (mac), and getting the following output

∅ bitbucket.com/satalia/engine/api/healthsvc
∅ bitbucket.com/satalia/engine/api/healthsvc/endpoints
✓ bitbucket.com/satalia/engine/api/healthsvc/service (5ms)
∅ bitbucket.com/satalia/engine/api/healthsvc/transport

=== Errors
go: downloading github.com/go-kit/kit v0.7.0
go: downloading github.com/golang/protobuf v1.2.0
go: downloading google.golang.org/grpc v1.15.0
go: downloading golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1
go: downloading google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8
go: downloading golang.org/x/text v0.3.0
go: downloading github.com/go-logfmt/logfmt v0.3.0
go: downloading github.com/go-stack/stack v1.8.0

DONE 1 tests, 8 errors in 29.465s

whereas if I run using go test ./api/... I get none of the go downloading errors.

We are using Go modules to manage dependencies (a file called go.mod contains all the required packages); could this be an issue?

[go1.14] Panics in a test confuse test2json, test is considered passing

When a panic occurs in a test background goroutine, gotestsum terminates with exit code 1 but without reporting the panic and without reporting the package name, unlike go test.

go test -json unexpectedly emits an action=pass event for the test and doesn't emit a result for the package. Justifiably, gotestsum doesn't report the output of a reportedly successful test.

This seems unambiguously a go test issue, and I aim to follow up with the go test maintainers; though I'd suggest considering a workaround or noting a caveat in the documentation. It has been a bit hard to identify in our case, and some peers have advocated switching back to go test to avoid similar bugs in the future.

Reproduction

With the sample file below:

  1. run go test .; echo "exit code: $?" - note the FAIL message, panic and non-zero exit code
  2. run gotestsum .; echo "exit code: $?"
$ go version
go version go1.14 darwin/amd64
$ go test .; echo "exit code: $?"
panic: panicing on a background goroutine

goroutine 19 [running]:
mypkg.Test_PanicInGoroutine.func1()
	/Users/notnoop/go/src/mypkg/my_test.go:9 +0x39
created by mypkg.Test_PanicInGoroutine
	/Users/notnoop/go/src/mypkg/my_test.go:9 +0x35
FAIL	mypkg	0.024s
FAIL
exit code: 1
$ gotestsum .; echo "exit code: $?"

DONE 1 tests in 0.811s
exit code: 1

sample test

package mypkg

import (
	"testing"
	"time"
)

func Test_PanicInGoroutine(t *testing.T) {
	go func() { panic("panicing on a background goroutine") }()
	time.Sleep(10 * time.Millisecond)
}

func Test_AlwaysFail(t *testing.T)    { t.Fatalf("fail here") }
func Test_AlwaysSucceed(t *testing.T) {}

more details

The json output of go test for Test_PanicInGoroutine and Test_AlwaysFail is included below. I would note two differences:

  • Test_PanicInGoroutine has "Action":"pass" entry for the test despite the previous FAIL output.
  • Test_PanicInGoroutine does not have a "Action":"{pass|fail}" for the package level.

Test_PanicInGoroutine

$ go test -json . -run 'Test_PanicInGoroutine'; echo "exit code: $?"
{"Time":"2020-03-10T16:39:56.06625-04:00","Action":"run","Package":"mypkg","Test":"Test_PanicInGoroutine"}
{"Time":"2020-03-10T16:39:56.066817-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"=== RUN   Test_PanicInGoroutine\n"}
{"Time":"2020-03-10T16:39:56.068326-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"panic: panicing on a background goroutine\n"}
{"Time":"2020-03-10T16:39:56.068345-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"\n"}
{"Time":"2020-03-10T16:39:56.068356-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"goroutine 7 [running]:\n"}
{"Time":"2020-03-10T16:39:56.06836-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"mypkg.Test_PanicInGoroutine.func1()\n"}
{"Time":"2020-03-10T16:39:56.068364-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"\t/Users/notnoop/go/src/mypkg/my_test.go:9 +0x39\n"}
{"Time":"2020-03-10T16:39:56.068367-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"created by mypkg.Test_PanicInGoroutine\n"}
{"Time":"2020-03-10T16:39:56.068379-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"\t/Users/notnoop/go/src/mypkg/my_test.go:9 +0x35\n"}
{"Time":"2020-03-10T16:39:56.068697-04:00","Action":"output","Package":"mypkg","Test":"Test_PanicInGoroutine","Output":"FAIL\tmypkg\t0.022s\n"}
{"Time":"2020-03-10T16:39:56.068726-04:00","Action":"pass","Package":"mypkg","Test":"Test_PanicInGoroutine","Elapsed":0.024}
exit code: 1

AlwaysFails

go test -json -v . -run 'Test_AlwaysFail'; echo "exit code: $?"
{"Time":"2020-03-10T16:40:45.63055-04:00","Action":"run","Package":"mypkg","Test":"Test_AlwaysFail"}
{"Time":"2020-03-10T16:40:45.630831-04:00","Action":"output","Package":"mypkg","Test":"Test_AlwaysFail","Output":"=== RUN   Test_AlwaysFail\n"}
{"Time":"2020-03-10T16:40:45.630847-04:00","Action":"output","Package":"mypkg","Test":"Test_AlwaysFail","Output":"    Test_AlwaysFail: my_test.go:13: fail here\n"}
{"Time":"2020-03-10T16:40:45.631142-04:00","Action":"output","Package":"mypkg","Test":"Test_AlwaysFail","Output":"--- FAIL: Test_AlwaysFail (0.00s)\n"}
{"Time":"2020-03-10T16:40:45.631158-04:00","Action":"fail","Package":"mypkg","Test":"Test_AlwaysFail","Elapsed":0}
{"Time":"2020-03-10T16:40:45.631173-04:00","Action":"output","Package":"mypkg","Output":"FAIL\n"}
{"Time":"2020-03-10T16:40:45.631211-04:00","Action":"output","Package":"mypkg","Output":"FAIL\tmypkg\t0.020s\n"}
{"Time":"2020-03-10T16:40:45.631221-04:00","Action":"fail","Package":"mypkg","Elapsed":0.02}
exit code: 1

junitxml: failure count and include build errors

Currently the failure count is len(failures); however, if there was an error in package main, that failure is not included in the count of failures.

Also build errors are not included in the failures.

Show benchmark output in verbose formats

Right now unless you're using standard-verbose, running benchmarks at the same time as tests causes the benchmark output to be hidden. It'd be nice if it was parsed and shown in some nice colours.

Add a cross-platform tool for desktop notifications using --post-run-command

Building on #74 (#76) it would be nice to have a cross-platform way of using desktop notifications.

I'd like to avoid additional golang dependencies in gotestsum and use the os/exec approach for running the notification.

A couple options that come to mind:

  1. If there are widely used CLI tools (such as terminal-notifier in contrib/notify/notify-macos.go) we could add a new cmd/tool/notify which detects which CLI is available and shells out to the correct CLI
  2. We could build a separate binary using something like https://github.com/martinlindhe/notify and make it available to download from the github releases page.

Execution time is 0.00s when using gotestsum with binary.

I'm trying to use gotestsum in the following way:
gotestsum -f short-verbose --raw-command -- go tool test2json -p test ./test/run-tests -test.v

Tests are running without problems, but the execution time of every test is 0.00s. For example:
PASS test.TestSomeFoo (0.00s)

Overall test execution time is reported properly. Am I missing something?

Add env file support

It would be nice to add an option for importing environment variables from an env file.

In integration tests it's a common pattern to use tokens and access keys from the environment and skip tests when they are missing. In local development environments this is commonly solved by using env files.

Although it's relatively easy to use godotenv for that, it adds an extra dependency, hence more complexity to the environment, so I think it would be a good fit for gotestsum.

Document feature to convert "go test" json output to junit xml without invoking "go test"

gotestsum right now invokes go test under the hood, and allows you to generate an output json file or an output JUnit xml file directly.

This is very handy.

I have some use cases where I cannot install gotestsum in the environment where I am running the tests. However, I can:

  1. Run go test and generate the output json file
  2. Copy this json file somewhere else
  3. Run a postprocessor on the json file, and generate the XML file

If you keep the existing behavior of gotestsum, but could add this additional behavior,
that would be very handy. For example, something like:

gotestsum convert --input testresults.json --junitxml testresults.xml

This behavior I am requesting is similar to https://github.com/jstemmer/go-junit-report, but go-junit-report does not parse JSON as input; it parses the raw go test output. Parsing JSON is better.

Possible Bug due to recent PR

Hello there, I started seeing unexpected issues in one of my CircleCI builds that uses gotestsum, and I'm not sure what's causing it (previously the same commits were passing).

One possible theory is that there is an error with a recent PR: #79. Here's why I suspect that:

  • The error I get is from testjson/dotformat.go:73 (full stack trace and screenshot below).
  • If I run the CircleCI build with ssh and run the test command there, everything passes. This code seems to have something to do with the size of the terminal, so perhaps it is a difference about whether or not the tests are being run from a shell?
  • Specifying a specific version v0.4.0 of gotestsum (which I should have been doing anyway) seems to have fixed it.

Feel free to close this issue if it does not seem relevant/ is not helpful.

Stack trace:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x5898e7]

goroutine 1 [running]:
gotest.tools/gotestsum/testjson.newDotFormatter(0x6873c0, 0xc000010018, 0x1, 0xc000000180)
	/home/circleci/.go_workspace/src/gotest.tools/gotestsum/testjson/dotformat.go:73 +0x67
gotest.tools/gotestsum/testjson.NewEventFormatter(0x6873c0, 0xc000010018, 0x7fff7f97f5c2, 0x4, 0x5ea440, 0xc00000e2c0)
	/home/circleci/.go_workspace/src/gotest.tools/gotestsum/testjson/format.go:202 +0x1a4
main.newEventHandler(0xc000028100, 0x6873c0, 0xc000010018, 0x6873c0, 0xc000010020, 0xc0000ca160, 0x6873a0, 0xc000010060)
	/home/circleci/.go_workspace/src/gotest.tools/gotestsum/handler.go:51 +0x53
main.run(0xc000028100, 0x0, 0x0)
	/home/circleci/.go_workspace/src/gotest.tools/gotestsum/main.go:139 +0x133
main.main()
	/home/circleci/.go_workspace/src/gotest.tools/gotestsum/main.go:39 +0x154


Bitbucket Pipelines Test reporting displaying tests incorrectly

I'm using this with the Bitbucket Pipelines Test reporting and found an issue that test results displayed incorrectly.

Following input:

    <testsuite tests="14" failures="2" time="1.161" name="github.com/my_org/path/to/pkg">
        <properties>
            <property name="go.version" value="go1.13"></property>
        </properties>
        <testcase classname="github.com/my_org/path/to/pkg" name="TestInMyPkg" time="0.000">
            <failure message="Failed" type="">Example Error Message</failure>
        </testcase>
    </testsuite>

results in an incorrectly displayed test representation in Bitbucket Pipelines (screenshot omitted).

While input with the short package name in the classname property:

    <testsuite tests="14" failures="2" time="1.161" name="github.com/my_org/path/to/pkg">
        <properties>
            <property name="go.version" value="go1.13"></property>
        </properties>
        <testcase classname="pkg" name="TestInMyPkg" time="0.000">
            <failure message="Failed" type="">Example Error Message</failure>
        </testcase>
    </testsuite>

Results in a much better representation in the Bitbucket Pipelines UI (screenshot omitted).

Give option to specify command other than "go test"

Right now, in this code:

https://github.com/gotestyourself/gotestsum/blob/master/main.go#L138

go test is invoked under the hood, unless --raw-command is specified.

It would be nice to have another option to specify a command other than go test.

I have a huge number of tests, and in order to improve syntax checking and performance, I
compile the tests to a binary first:

go test -c -o mytests

Then I run:

mytests

It would be nice to pass a flag to gotestsum which could run the mytests binary.

Is it possible to do this now? I couldn't figure out how to do it.

Execution time is slower with go 1.14

Hello !

I don't understand why, but when I run my tests with gotestsum and go 1.14, it's slower than with go 1.13.

Go 1.14: (screenshot omitted)

Go 1.13: (screenshot omitted)

If you need more information i can help 😄

Add post-run hook to support sending desktop notifications

Thanks to @glenjamin for the idea.

Instead of publishing notifications directly I'd like to add support for executing a script that is passed all of the necessary details, and can handle any type of notification.

The script should be called with env vars which contain a summary of the run, and a json blob could be passed to stdin to provide full details about any errors.

Such a feature might look something like this:

export GOTESTSUM_POST_RUN_SCRIPT=./scripts/notify-test-results.sh
gotestsum

Env vars:

TEST_TOTAL=100
TEST_FAILED=2
TEST_SKIPPED=5
TEST_ERRORS=0

I'm not sure how to encode more details. The junitxml format might work, but is probably not the easiest format to integrate. A custom JSON document might work, but will be more involved to maintain.

Maybe env vars will work for the first version.

Add --max-failure=count flag to exit early

When running in CI it is sometimes desirable to exit before all tests are run so that resources aren't wasted running against code that is obviously broken.

--max-failure count   stop execution after x failures

[v0.4.2] incorrect test output when stdout is missing a newline

Beginning with 0.4.2, output to stdout can cause gotestsum to report failure, noting (panic). In earlier versions, gotestsum absorbed the stdout and reported success/failure accurately.

Testcase:

package main

import (
	"fmt"
	"testing"
)

func Test_Digging(t *testing.T) {
	fmt.Print("Hello")
}

Note that if the output is changed to include a new line, the problem does not surface.

Execution:

$ go test --count=1
HelloPASS
ok      example.com/m   0.001s
$ ./gotestsum-0.4.1 --version && ./gotestsum-0.4.1 -- -count=1
gotestsum version 0.4.1
✓  . (1ms)

DONE 1 tests in 0.207s
$ ./gotestsum-0.4.2 --version && ./gotestsum-0.4.2 -- -count=1
gotestsum version 0.4.2
✓  . (1ms)

=== Failed
=== FAIL: . Test_Digging (panic)
Hello--- PASS: Test_Digging (0.00s)


DONE 1 tests, 1 failure in 0.218s
$

When go test -count is greater than 1, only the output for the last test is saved.

If the test is not deterministic, and the last instance passed, then none of the output is saved because the output of passed tests is dropped from the buffer.

Also an issue if ScanTestOutput is passed the output from different runs, and the same test name is repeated in the output.

The problem is that output is saved in a map by test name. The same test can run twice, so this is not the correct index. I think the solution is to assign a unique id to each TestCase. Output can be saved, and looked up by id.

Append xml / json file when running more than 1 test suite in serial/parallel way

Hi,

Creating a new issue with the reference to the original reply post

I have a MAKE file that looks like below:

GOGETTESTSUM=go get gotest.tools/gotestsum

GOTESTSUM_JUNITFILE=gotestsum --junitfile unit-tests.xml

all: product1_region_uat product2_region_uat product3_region_uat

product1_region_uat:
$(GOGETTESTSUM)
parameters product1_test $(GOTESTSUM_JUNITFILE)

product2_region_uat:
$(GOGETTESTSUM)
parameters product2_test $(GOTESTSUM_JUNITFILE)

product3_region_uat:
$(GOGETTESTSUM)
parameters product3_test $(GOTESTSUM_JUNITFILE)

Each time I execute tests using make all to capture the test outcome in the unit-tests.xml file, only the result of the last executed test suite is saved (i.e. product3 in this instance).

How can I save test execution for all 3 test suites?

Reply from @dnephin

Hello @viralkpanchal, thanks for your interest in gotestsum!

I think the problem you are seeing is that gotestsum is being run 3 times by the makefile. Every time gotestsum runs it will override the --jsonfile. I think you could fix the problem by having a target run all the packages at once, instead of one at a time, or you could use a different filename for each package. make has some fancy syntax for getting the name of the currently run target, which you could use as part of the jsonfile name. I don't remember exactly what that looks like.

Generally the idea is that running a single gotestsum invocation would be preferable. It is possible that --jsonfile could be made to append to the file, instead of overriding it. Or a new flag could be added to do append instead of override. I'm not sure if that is really the right solution, and would need to think about it some more. I think we could create a new issue for that change, as it is not exactly related to this problem described in this issue.

I hope that helps!

Originally posted by @dnephin in #119 (comment)

Race Detection not mentioned in summary, giving confusing results

When -race is used and a race is detected, I get

=== FAIL: galley/pkg/config/source/kube/fs TestAddUpdateDelete (0.04s)

(no info other than FAIL)

But when a test fails I get a message like

=== FAIL: galley/pkg/config/source/kube/fs TestAddUpdateDelete (0.00s)
    source_test.go:141: my test failed

It may be useful to mention a data race was detected or something here?

how to use this with e2e test binaries?

I am interested in adding this to the minikube e2e tests, but I am not sure how I would use it; we build e2e binaries for the tests and run them on different platforms. Could I still use this tool?

Add a --version flag

I was checking whether an install script was working correctly, and instinctively ran gotestsum --version, which I discovered didn't exist.

I was going to send in a PR rather than report an issue, but I had a look at the deployment script and it's using some stuff i'm not familiar with - so I wasn't sure where to start.

If this is OKed, I'm happy to do the actual work if someone points me in the right direction.

go1.14: test2json does not always send a 'test failed' event when test panics

Example output is below. In these cases the package is marked as failed, but there is no test failure event, so the test is not seen as failed. I'm not sure yet if this can be worked around in gotestsum.

{"Time":"2020-04-10T14:52:44.192693974-04:00","Action":"run","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare"}
{"Time":"2020-04-10T14:52:44.192822137-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"=== RUN   TestWaitOn_WithCompare\n"}
{"Time":"2020-04-10T14:52:44.1950981-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"panic: runtime error: index out of range [1] with length 1\n"}
{"Time":"2020-04-10T14:52:44.195110282-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\n"}
{"Time":"2020-04-10T14:52:44.195116665-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"goroutine 7 [running]:\n"}
{"Time":"2020-04-10T14:52:44.195120587-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/internal/assert.ArgsFromComparisonCall(0xc0000552a0, 0x1, 0x1, 0x1, 0x0, 0x0)\n"}
{"Time":"2020-04-10T14:52:44.195301254-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/internal/assert/result.go:102 +0x9f\n"}
{"Time":"2020-04-10T14:52:44.195332206-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/internal/assert.runComparison(0x6bcb80, 0xc00000e180, 0x67dee8, 0xc00007a9f0, 0x0, 0x0, 0x0, 0x7f7f4fb6d108)\n"}
{"Time":"2020-04-10T14:52:44.19533807-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/internal/assert/result.go:34 +0x2b1\n"}
{"Time":"2020-04-10T14:52:44.195342341-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/internal/assert.Eval(0x6bcb80, 0xc00000e180, 0x67dee8, 0x627660, 0xc00007a9f0, 0x0, 0x0, 0x0, 0x642c60)\n"}
{"Time":"2020-04-10T14:52:44.195346449-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/internal/assert/assert.go:56 +0x2e4\n"}
{"Time":"2020-04-10T14:52:44.195350377-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/poll.Compare(0xc00007a9f0, 0x6b74a0, 0x618a60)\n"}
{"Time":"2020-04-10T14:52:44.195356946-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/poll/poll.go:151 +0x81\n"}
{"Time":"2020-04-10T14:52:44.195360761-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/poll.TestWaitOn_WithCompare.func1(0x6be4c0, 0xc00016c240, 0xc00016c240, 0x6be4c0)\n"}
{"Time":"2020-04-10T14:52:44.195367482-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/poll/poll_test.go:81 +0x58\n"}
{"Time":"2020-04-10T14:52:44.195371319-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"gotest.tools/v3/poll.WaitOn.func1(0xc00001e3c0, 0x67df50, 0x6c1960, 0xc00016c240)\n"}
{"Time":"2020-04-10T14:52:44.195375766-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/poll/poll.go:125 +0x62\n"}
{"Time":"2020-04-10T14:52:44.195379421-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"created by gotest.tools/v3/poll.WaitOn\n"}
{"Time":"2020-04-10T14:52:44.195384493-04:00","Action":"output","Package":"gotest.tools/v3/poll","Test":"TestWaitOn_WithCompare","Output":"\t/home/daniel/pers/code/gotest.tools/poll/poll.go:124 +0x16f\n"}
{"Time":"2020-04-10T14:52:44.195785223-04:00","Action":"output","Package":"gotest.tools/v3/poll","Output":"FAIL\tgotest.tools/v3/poll\t0.005s\n"}
{"Time":"2020-04-10T14:52:44.195796081-04:00","Action":"fail","Package":"gotest.tools/v3/poll","Elapsed":0.005}

Duplicate test names in JUnit XML

I used this invocation of gotestsum:

gotestsum --format standard-verbose --junitfile report.xml --jsonfile report.json -- -test.run TestMySuite

The output XML looks like:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
        <testsuite tests="11" failures="0" time="5806.690s" name="MySuite">
                <properties>
                        <property name="go.version" value="go1.10.3"></property>
                </properties>
                <testcase classname="job" name="TestOne" time="75.350s"></testcase>
                <testcase classname="job" name="TestOne" time="76.680s"></testcase>
                <testcase classname="job" name="TestTwo" time="86.110s"></testcase>
                <testcase classname="job" name="TestTwo" time="86.990s"></testcase>
                <testcase classname="job" name="TestThree" time="188.780s"></testcase>
                <testcase classname="job" name="TestThree" time="189.740s"></testcase>
                <testcase classname="job" name="TestMySuite" time="11.570s"></testcase>
                <testcase classname="job" name="TestMySuite" time="1937.300s"></testcase>
        </testsuite>
</testsuites>

The JSON is pretty big, but if I cut it down, it looks something like:

{"Time":"2018-08-02T08:18:13.739327862Z","Action":"run","Package":"MySuite","Test":"TestOne"}
{"Time":"2018-08-02T08:19:30.419938008Z","Action":"pass","Package":"MySuite","Test":"TestOne","Elapsed":75.35}
{"Time":"2018-08-02T08:19:30.419959464Z","Action":"output","Package":"MySuite","Test":"TestOne","Output":"--- PASS: TestOne (76.68s)\n"}
{"Time":"2018-08-02T08:19:30.419971053Z","Action":"pass","Package":"MySuite","Test":"TestOne","Elapsed":76.68}
{"Time":"2018-08-02T08:19:30.419976409Z","Action":"run","Package":"MySuite","Test":"TestTwo"}
{"Time":"2018-08-02T08:20:57.405652904Z","Action":"pass","Package":"MySuite","Test":"TestTwo","Elapsed":86.11}
{"Time":"2018-08-02T08:20:57.405663056Z","Action":"output","Package":"MySuite","Test":"TestTwo","Output":"--- PASS: TestTwo (86.99s)\n"}
{"Time":"2018-08-02T08:20:57.405671178Z","Action":"pass","Package":"MySuite","Test":"TestTwo","Elapsed":86.99}
{"Time":"2018-08-02T08:20:57.405676436Z","Action":"run","Package":"MySuite","Test":"TestThree"}
{"Time":"2018-08-02T08:24:07.146323211Z","Action":"pass","Package":"MySuite","Test":"TestThree","Elapsed":188.78}
{"Time":"2018-08-02T08:24:07.146333316Z","Action":"output","Package":"MySuite","Test":"TestThree","Output":"--- PASS: TestThree (189.74s)\n"}
{"Time":"2018-08-02T08:24:07.146340153Z","Action":"pass","Package":"MySuite","Test":"TestThree","Elapsed":189.74}

{"Time":"2018-08-02T08:50:27.611048188Z","Action":"pass","Package":"MySuite","Test":"MySuite","Elapsed":11.57}
{"Time":"2018-08-02T08:50:27.611767989Z","Action":"pass","Package":"MySuite","Test":"MySuite","Elapsed":1937.3}
{"Time":"2018-08-02T08:50:27.611054838Z","Action":"output","Package":"MySuite","Test":"MySuite","Output":"--- PASS: MySuite (1937.30s)\n"}
{"Time":"2018-08-02T08:50:27.61302664Z","Action":"pass","Package":"MySuite","Elapsed":1937.313}

Any idea why lines with "Action":"pass" appear twice for the same test?

I am using go 1.10.3

Cobertura XML test coverage output

We already use gotestsum for JUnit XML output and would also like to output our line-by-line test coverage in Cobertura XML format, because that is what our test analysis system understands.

Currently we have to use 2 additional tools for that (i.e. three tools in total):

Is it possible to integrate the functionality of these tools into gotestsum, so we need less tools and have a one stop shop for test analysis?

Dots format outputs only on one line

While trying the dots format on the docker/cli unit tests, it printed everything on one line

$ GOTESTSUM_FORMAT=dots make test-unit
[some packages listed]
[cli]·····[cli/command/bundlefile]····[cli/command/checkpoint]·······[cli/command/idresolver]·····[cli/command/formatter]··················[cli/command/config]··················[cli/command]···························[cli/command/inspect]···········[cli/command/engine]···············[cli/command/manifest]············[cli/command/context]·······························[cli/command/network]·············[cli/command/container]··········↷·↷··········································································[cli/command/node]························[cli/command/plugin]······························[cli/command/registry]·············[cli/command/service/progress]····[cli/command/secret]···[cli/command/stack/formatter]·················[cli/command/stack/loader]··[cli/command/stack/swarm]·····[cli/command/swarm]·····························[cli/command/service]·····························································[cli/command/task]········[cli/command/stack]······························[cli/compose/interpolation]·····[cli/command/volume]·················[cli/compose/convert]············································[cli/compose/schema]··················[cli/command/trust]····················································································[cli/compose/loader]··········································································································[cli/compose/template]·······················[cli/config]·····················[cli/config/configfile]·············[cli/config/credentials]·············[cli/connhelper/ssh]·[cli/command/image/build]···✖✖·✖[cli/connhelper]·········[cli/command/system]········[cli/debug]···[cli/context/store]············[cli/flags]··[cli/manifest/store]·······[cli/context/kubernetes]···[internal/licenseutils]············[cli/trust]···[cmd/docker]····[internal/pkg/containerized]·[internal/containerizedengine]··········[internal/versions]·········[service/logs]·······[templates]················[cli/command/stack/kubernetes]························[opts]·····························································[cli/command/image]·↷·················✖·····↷·····································

I would have expected a line per package, but maybe I'm missing something with the dots format usage:

[cli]·····
[cli/command/bundlefile]····
[cli/command/checkpoint]·······
[cli/command/idresolver]·····
[cli/command/formatter]··················
[cli/command/config]··················
[cli/command]···························
[cli/command/inspect]···········
[cli/command/engine]···············
[cli/command/manifest]············
[cli/command/context]·······························
[cli/command/network]·············
[cli/command/container]··········↷·↷··········································································
[cli/command/node]························
...

Using gotestsum binary in jenkins pipeline

I am using the gotestsum unix binary in my jenkins pipeline. The issue is that when it starts running tests it looks for some libs related to gotestsum, and since the libs are not available it fails. My concern is that when the binary is used it should not look for required libs; we are not building the code of gotestsum, we are just using the binary.

When the same is run in the local IDE, it does not look for libs; it just runs the tests and shares the results. The same is expected in the pipeline as well, but it fails.

The only difference is that locally we use the exe, and in jenkins it uses the linux binary.


Support go test flags

I'd like to see the ability to use the flags available to go test such as -race and -bench, but really anything that go test supports should be supported by gotestsum.

Tests marked as failed if success indicator isn't the beginning of output

Noticed that some successful tests get marked as failures in our CI. I noticed that if the --- PASS: final line indicator is preceded by other log information, the test is marked as fail with -0.0 run_time.

The issue happens somewhat rarely - 10 builds out of 105 builds in my sample.

A sample case is TestScalingPolicies_GetPolicy in https://circleci.com/gh/hashicorp/nomad/69658 . The raw json events from go test is in https://69658-36653430-gh.circle-artifacts.com/0/tmp/test-reports/testjsonfile.json .

In the above case, note that the --- PASS: indicator isn't at the beginning of the line:

{"Time":"2020-05-22T13:54:21.740428891Z","Action":"output","Package":"github.com/hashicorp/nomad/api","Test":"TestScalingPolicies_GetPolicy","Output":"    2020-05-22T13:54:21.737Z [DEBUG] worker: created evaluation: eval=\"\u003cEval \"1e656465-9c8c-7a35-a2fe-cda7fe77b049\" JobID: \"job1\" Namespace: \"default\"\u003e\"--- PASS: TestScalingPolicies_GetPolicy (1.60s)\n"}

The logs are the output of a process that is spawned from the test and writes directly to stderr. My theory is that the process is killed before the newline is flushed, which results in contaminating the go test output. In all of my observations the go test success indicator ended the line; I haven't noticed any case where the process's final log output surrounded the go test indicator. I've updated our project to ensure the process is terminated gracefully, but we are waiting for results.

Here are some sample builds hitting this issue as well:

Missing logs when test panic in go 1.14

Hello!

I recently updated my project to Go 1.14, and when my tests panic I get no output (just exit 1).

With gotestsum --format short: (screenshot omitted)

With go test ./...: (screenshot omitted)

If you need small project to reproduce, tell me i can create it.

Thanks !

Coverage report duplicated in standard-quiet format

Hi,

First, thanks for this tool, that's pretty cool and the final summary is really nice.

The only thing that I missed was coverage support when coverage is enabled in go test. Switching to the standard-quiet format solves that by printing the normal line, but in this case the coverage report is duplicated:

gotestsum --format standard-quiet ./test-folder1/... ./test-folder2/... ./local-code/... -- -cover
coverage: 50.6% of statements
ok  	path/to/some/tests	0.018s	coverage: 50.6% of statements

I tried to take a really quick look at the code but couldn't immediately find where the issue was.
Also, if it were possible to have coverage support in the short format (just having the percentage at the end of the line if coverage is enabled), that would be magic!

Thanks in advance,
Moe

Format for showing why tests failed

I'm not sure if this is an issue with gotestsum and testify/assert, but the only format that actually shows why a test failed is standard-verbose. The rest of the formats show that a test failed, but not the output, which makes them kind of useless.

example with short-verbose

➜  frosting git:(first) ✗ gotestsum --format short-verbose
PASS Test_defaultDependencyResolver_Dequeue/when_ready_is_empty,_return_nil,_false (0.00s)
PASS Test_defaultDependencyResolver_Dequeue/when_ready_has_1_item,_return_item,_true (0.00s)
=== RUN   Test_defaultDependencyResolver_Dequeue
--- FAIL: Test_defaultDependencyResolver_Dequeue (0.00s)
FAIL Test_defaultDependencyResolver_Dequeue (0.00s)
PASS Test_defaultDependencyResolver_Length (0.00s)
PASS Test_defaultDependencyResolver_NotifyComplete (0.00s)
PASS Test_defaultDependencyResolver_Load (0.00s)
FAIL .
EMPTY sh
EMPTY test
EMPTY util

=== Failed
=== FAIL: . Test_defaultDependencyResolver_Dequeue (0.00s)


DONE 6 tests, 1 failure in 2.749s

example with standard-verbose

➜  frosting git:(first) ✗ gotestsum --format standard-verbose
=== RUN   Test_defaultDependencyResolver_Dequeue
=== RUN   Test_defaultDependencyResolver_Dequeue/when_ready_is_empty,_return_nil,_false
=== RUN   Test_defaultDependencyResolver_Dequeue/when_ready_has_1_item,_return_item,_true
    Test_defaultDependencyResolver_Dequeue: resolver_test.go:35:
                Error Trace:    resolver_test.go:35
                                                        resolver_test.go:52
                Error:          Expected value not to be nil.
                Test:           Test_defaultDependencyResolver_Dequeue
                Messages:       there is one ingredient ready
    Test_defaultDependencyResolver_Dequeue: resolver_test.go:39:
                Error Trace:    resolver_test.go:39
                                                        resolver_test.go:52
                Error:          Should be true
                Test:           Test_defaultDependencyResolver_Dequeue
                Messages:       because an ingredient was returned
--- FAIL: Test_defaultDependencyResolver_Dequeue (0.00s)
    --- PASS: Test_defaultDependencyResolver_Dequeue/when_ready_is_empty,_return_nil,_false (0.00s)
    --- PASS: Test_defaultDependencyResolver_Dequeue/when_ready_has_1_item,_return_item,_true (0.00s)
=== RUN   Test_defaultDependencyResolver_Length
--- PASS: Test_defaultDependencyResolver_Length (0.00s)
=== RUN   Test_defaultDependencyResolver_NotifyComplete
--- PASS: Test_defaultDependencyResolver_NotifyComplete (0.00s)
=== RUN   Test_defaultDependencyResolver_Load
--- PASS: Test_defaultDependencyResolver_Load (0.00s)
FAIL
FAIL    github.com/cakehappens/frosting 0.014s
?       github.com/cakehappens/frosting/sh      [no test files]
?       github.com/cakehappens/frosting/test    [no test files]
?       github.com/cakehappens/frosting/util    [no test files]

=== Failed
=== FAIL: . Test_defaultDependencyResolver_Dequeue (0.00s)


DONE 6 tests, 1 failure in 2.741s

Request for flag to suppress output in summary

I would like a flag which:

  1. Prints out the summary of passed and failed tests
  2. Omits the test output for failed tests

I already run with --format standard-verbose, and the logs from my tests are huge (i.e. over 10 MB). I can already see the failure in the regular logs; displaying the failed output in the summary clutters things up (for my use case).

In e-mail discussion with @dnephin, he brainstormed an idea for a flag: --no-summary=output

junit.xml incorrect PASS testcase

I'm writing a CI project, but when I use gotestsum to count how many test cases passed or failed, I found that the number of PASS test cases is 1 less than the actual number of PASS test cases.
But the failure count is correct, which is weird.

my command is :
gotestsum --format short-verbose --junitfile junit.xml -- -gcflags "all=-N -l" ./tests -coverprofile=coverage.out -coverpkg=./MYPACKAGE ./MYPACKAGETOO -covermode=count

the junit.xml generated automatically:

<testsuites>
	<testsuite tests="31" failures="0" time="48.320000" name="MYCLASSNAME">
		<properties>
			<property name="go.version" value="go1.12.1 linux/amd64"></property>
		</properties>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.150000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="7.960000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.840000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.090000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.040000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.110000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.910000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.000000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.900000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.020000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.930000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.080000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.100000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.280000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.050000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.220000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.060000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.140000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="3.240000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="2.150000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="3.080000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="2.970000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.220000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="3.060000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.180000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.240000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="1.240000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.150000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="2.350000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="4.500000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.000000"></testcase>
		<testcase classname="MYCLASSNAME" name="MYTESTCASENAME" time="0.060000"></testcase>
	</testsuite>
</testsuites>

You will see that the number of tests is not equal to the real number of test cases.

Please consider bugfix release

Hi, I'm using the testjson package and ran into unexpected panics while calling ScanTestOutput with nil ScanConfig.Handler. After some confusion, I realized I'm looking at the master branch code, 50 something commits ahead of latest release and with the underlying bug fixed.

Is something in the way of v0.4.3? I'd be glad to help if I can! Importing from master for now, but that's ugly.
