
diem-devtools

This repository contains the source code for developer tools and libraries built for Diem Core. Currently, this includes:

  • nextest: a new, faster Cargo test runner (documentation: main)
  • quick-junit: a data model and serializer (and, in the future, deserializer) for JUnit/XUnit XML (on crates.io; documentation: latest release, main)
  • datatest-stable: data-driven testing on stable Rust (on crates.io; documentation: latest release, main)

Minimum supported Rust version

These crates target the latest stable version of Rust.

While a crate is in pre-release status (0.x.x), its MSRV may be bumped in a patch release. Once a crate has reached 1.x, any MSRV bump will be accompanied by a new minor version.

Contributing

See the CONTRIBUTING file for how to help out.

License

This project is available under the terms of either the Apache 2.0 license or the MIT license.


diem-devtools's Issues

Add support for doctests

Should be possible if we grab the doctest executables through -Z unstable-options --persist-doctests.
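
A rough sketch of what collecting the persisted doctest executables could look like, assuming the flags are forwarded to rustdoc through RUSTDOCFLAGS and that --persist-doctests takes an output directory (both assumptions to verify):

```rust
use std::process::Command;

/// Sketch: build the doctests while persisting their executables to `out_dir`
/// so the runner can re-invoke them individually later. In this simplified
/// version the doctests also run once as a side effect of `cargo test --doc`.
fn persist_doctests(out_dir: &str) -> std::io::Result<()> {
    let status = Command::new("cargo")
        .args(["test", "--doc"])
        // Assumption: rustdoc picks these flags up via RUSTDOCFLAGS.
        .env(
            "RUSTDOCFLAGS",
            format!("-Z unstable-options --persist-doctests {out_dir}"),
        )
        .status()?;
    if !status.success() {
        eprintln!("building doctests failed");
    }
    Ok(())
}
```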

Add support for retrying failed tests to nextest

This needs to be done with a little care:

  • The initial, simpler use case only requires keeping track of which Cargo arguments the build was run with.
  • An extended use case to solve is "what if the user wants to grow the set of tests or binaries that are run?" This will require keeping track of the exact set of binaries run and the Cargo arguments passed. Probably worth discussing with some folks before doing so.

Overall this is a stateful operation, kind of like a source control bisect.
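
As a starting point, a minimal sketch of the state a retry would have to persist between invocations; the type and field names are hypothetical, not an existing nextest data structure:

```rust
use std::path::PathBuf;

/// Hypothetical record of everything needed to re-run the failed tests of a
/// previous invocation against the same build.
#[derive(Debug)]
struct RerunState {
    /// Cargo arguments the original build was invoked with.
    cargo_args: Vec<String>,
    /// The exact set of test binaries that were executed.
    test_binaries: Vec<PathBuf>,
    /// Fully qualified names of the tests that failed and are retry candidates.
    failed_tests: Vec<String>,
}

impl RerunState {
    /// Only tests recorded here are retried; growing the set of tests or
    /// binaries would invalidate this state and require a fresh run.
    fn should_retry(&self, test_name: &str) -> bool {
        self.failed_tests.iter().any(|t| t.as_str() == test_name)
    }
}
```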

Add classname for each test

"classname" seems to be a unique identifier for the immediate container for each test. We should be able to construct it with package-name.[binary-name.]module.path, using . rather than ::.

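For illustration, one way to build such an identifier; the function signature and names here are hypothetical:

```rust
/// Sketch: derive a JUnit `classname` from the package name, an optional
/// binary name, and the test's full `::`-separated path.
fn classname(package: &str, binary: Option<&str>, test_path: &str) -> String {
    // Everything up to the final segment (the test name itself) identifies the
    // immediate container of the test.
    let segments: Vec<&str> = test_path.split("::").collect();
    let container = &segments[..segments.len().saturating_sub(1)];

    let mut parts = vec![package.to_string()];
    if let Some(bin) = binary {
        parts.push(bin.to_string());
    }
    parts.extend(container.iter().map(|s| s.to_string()));
    // Join with `.` rather than `::`.
    parts.join(".")
}
```

For example, classname("vm-validator", None, "vm_validator::vm_validator_test::test_validate_transaction") would yield vm-validator.vm_validator.vm_validator_test.
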
Add support for stress tests to nextest

Add support to run a single test multiple times in parallel. This is a somewhat different mode of operation from nextest's usual one (the runner would be fed the same test multiple times, and should perhaps not run other tests at the same time), but it is worth doing as a future improvement.
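
A minimal sketch of that mode, assuming the standard libtest interface of passing the test name as a filter plus --exact; the function is illustrative, not nextest's actual runner:

```rust
use std::process::Command;
use std::thread;

/// Sketch: run one test `iterations` times in parallel by spawning the test
/// binary once per iteration. Returns the number of failed iterations.
fn stress_test(test_binary: &str, test_name: &str, iterations: usize) -> usize {
    let handles: Vec<_> = (0..iterations)
        .map(|_| {
            let (bin, name) = (test_binary.to_string(), test_name.to_string());
            thread::spawn(move || {
                // `--exact` restricts a libtest binary to exactly this test.
                Command::new(bin)
                    .args([name.as_str(), "--exact"])
                    .status()
                    .map(|status| status.success())
                    .unwrap_or(false)
            })
        })
        .collect();

    handles
        .into_iter()
        .map(|handle| handle.join().unwrap_or(false))
        .filter(|passed| !*passed)
        .count()
}
```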

Fail tests that leak pipes

It is possible to write a test that ends up leaking a pipe (e.g. creates a process but doesn't terminate it). Currently, the test runner hangs on encountering such a test.

Instead, we should figure out a way to:

  • detect such a situation (a small amount of raciness between waiting on the process to exit and checking that its handles are closed is fine)
  • mark such a test as failed with a LEAK message or similar
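
One possible shape for the detection, sketched under the assumption that the test process's stdout is piped; the Outcome type and grace period are illustrative:

```rust
use std::io::Read;
use std::process::Child;
use std::sync::mpsc;
use std::time::Duration;

/// Illustrative result of running a single test process.
enum Outcome {
    Pass,
    Fail,
    Leak,
}

/// Sketch: after the test process exits, give its stdout pipe a short grace
/// period to reach EOF. If EOF never arrives, something the test spawned still
/// holds the write end of the pipe, so report a leak instead of hanging forever.
fn wait_and_check_leak(mut child: Child, grace: Duration) -> std::io::Result<Outcome> {
    // Assumes the child was spawned with `.stdout(Stdio::piped())`.
    let mut stdout = child.stdout.take().expect("stdout must be piped");

    // Drain the pipe on a helper thread; it only finishes once EOF is seen.
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(move || {
        let mut buf = Vec::new();
        let _ = stdout.read_to_end(&mut buf);
        let _ = tx.send(buf);
    });

    let status = child.wait()?;

    // The small race between process exit and EOF detection is acceptable here.
    match rx.recv_timeout(grace) {
        Ok(_output) => Ok(if status.success() { Outcome::Pass } else { Outcome::Fail }),
        Err(_) => Ok(Outcome::Leak),
    }
}
```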

Publish timestamps for each test

At the moment we aren't publishing any timestamps, just test durations. It would be great to have support for timestamps.

One issue is that the JUnit format doesn't support a timestamp attribute on testcase elements, only on testsuite elements. This means we'll probably have to wrap each testcase in its own testsuite; note that testsuite elements can be nested.
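
For illustration, the wrapped shape could look like the output of this snippet; the string formatting here is purely illustrative, not quick-junit's actual API or output:

```rust
/// Sketch: wrap a single testcase in its own testsuite so that a per-test
/// `timestamp` attribute (which testsuite supports) has somewhere to live.
fn wrapped_testcase(name: &str, classname: &str, timestamp: &str, time_secs: f64) -> String {
    format!(
        r#"<testsuite name="{name}" timestamp="{timestamp}" tests="1">
  <testcase name="{name}" classname="{classname}" time="{time_secs}"/>
</testsuite>"#
    )
}
```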

Test runner should log when a test is running over a preset time (e.g. 60 seconds)

A developer reported a unit test job (https://github.com/diem/diem/actions/runs/836112503) that hung for over 30 minutes without producing output. In cases like this, the test runner should log when a test has been running longer than a preset time limit. A sketch of such a watchdog follows the log excerpt below.

2021-05-12T16:31:34.1985590Z         PASS [   1.897s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_module_publishing
2021-05-12T16:31:34.2755302Z         PASS [   1.876s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_module_publishing_non_association
2021-05-12T16:31:34.3417959Z         PASS [   1.863s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_non_genesis_write_set
2021-05-12T16:31:34.3506958Z         PASS [   1.872s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_sequence_number_too_new
2021-05-12T16:31:34.4389504Z         PASS [   1.918s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_transaction
2021-05-12T16:31:34.6665912Z         PASS [   1.811s]                                             vm-validator  vm_validator::vm_validator_test::test_validate_unknown_script
2021-05-12T16:31:39.9518323Z         PASS [  11.483s]                                  ir-testsuite::testsuite  run_test::move/sorted_linked_list/sorted-linked-list.mvir
2021-05-12T17:01:58.1447546Z ##[error]The operation was canceled.
2021-05-12T17:01:58.1529807Z ##[group]Run actions/upload-artifact@v2
2021-05-12T17:01:58.1530372Z with:
2021-05-12T17:01:58.1530832Z   name: unit-test-results
2021-05-12T17:01:58.1531442Z   path: target/junit-reports/unit-test.xml
2021-05-12T17:01:58.1532057Z   if-no-files-found: warn
2021-05-12T17:01:58.1532502Z env:
2021-05-12T17:01:58.1532873Z   max_threads: 16

Define a protocol to communicate between individual tests and nextest

Right now the only communication channel we have between the test runner and Rust tests during execution is the process exit code. We'd like to have a richer protocol, but the Rust test harness is fairly limited, so we need to be careful about it. Some ideas for things to communicate:

  • test flakiness
  • test retries
  • test timeouts

Some ideas for the protocol itself:

  • Use lines to stdout (or stderr) that begin with nextest: (similar to cargo: lines in build scripts)
  • Pass in a random string through the environment and ensure that lines begin with that to avoid confusing random test output with nextest control lines.
  • Publish a crate which can be used to output these lines, either as function calls or as a proc macro (a rough sketch of such a helper follows this list).
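
A rough sketch of the helper such a crate could expose; the environment variable name and line format are assumptions:

```rust
/// Sketch: emit a nextest control line, prefixed with a per-run token passed in
/// by the runner so ordinary test output can't be mistaken for a control line.
pub fn emit_control_line(key: &str, value: &str) {
    // Hypothetical variable name; the runner would set it to a random string.
    if let Ok(token) = std::env::var("NEXTEST_CONTROL_TOKEN") {
        println!("nextest:{token}:{key}={value}");
    }
    // If the token is absent (not running under nextest), stay silent.
}
```

A test could then call, say, emit_control_line("retry", "allowed") to tell the runner that a retry is acceptable.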

Implement fail-fast support

The default for cargo test is to quit on the first failing test (more precisely, the first failing test binary). The test runner should have a way to do that as well.
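
A minimal sketch of the runner-side mechanism, assuming worker threads share a cancellation flag; the type and names are illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Sketch: shared fail-fast state checked by runner workers.
struct FailFast {
    /// Whether fail-fast was requested on the command line.
    enabled: bool,
    /// Set once any test has failed.
    cancelled: AtomicBool,
}

impl FailFast {
    /// Called by any worker when a test (or test binary) fails.
    fn record_failure(&self) {
        if self.enabled {
            self.cancelled.store(true, Ordering::SeqCst);
        }
    }

    /// Workers check this before starting the next test.
    fn should_stop(&self) -> bool {
        self.cancelled.load(Ordering::SeqCst)
    }
}
```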
