Health - A CI data analysis pipeline

Building and Deploying

The Docker images used by Health can be built locally through make or using skaffold.

# Build locally using make
make

Skaffold also pushes the image to the specified registry and deploys the Helm chart to either minikube or a remote cluster.

# Build locally and push to registry using skaffold
export IMAGE_REG=registry.ng.bluemix.net/ci-pipeline
export PR_NUMBER=<pull request number>
skaffold run

Skaffold can also be used in development mode, in which case it monitors the workspaces of all Docker images and automatically rebuilds and redeploys whenever something changes.

# Using skaffold
export IMAGE_REG=registry.ng.bluemix.net/ci-pipeline
skaffold dev

Note: the skaffold configuration uses a skaffold feature which is not merged yet: GoogleContainerTools/skaffold#602.

Using Health

To use this system you need to populate the database with test results. Ideally this is integrated into the post-processing steps of your CI system so that the data is collected automatically. This section covers examples with different test runners and languages to show how you would do this. There are four different techniques; which one you use depends on how you run your tests and the exact details of your environment.

Subunit-Emitting Test Runner

The most straightforward way to populate the database is to run your tests with a test runner that natively supports generating subunit v2; depending on the language used for testing there are several options. If you have a subunit stream, either as a file or on stdout from your test runner, you can pass it to the subunit2sql command directly to insert the results into the database.
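
In the examples below, subunit2sql ... stands in for a full invocation pointed at your database. Assuming a typical MySQL setup (the URI and credentials here are placeholders for your own), that would look something like:

$ subunit2sql --database-connection mysql://subunit:PASSWORD@127.0.0.1/subunit results.subunit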

Python

In Python you can use stestr, testrepository, or subunit.run for this. These work with any unittest-compliant test suite; in fact subunit.run is a drop-in replacement for the standard unittest runner. So instead of running:

$ python -m unittest test_suite

you just run:

$ python -m subunit.run test_suite

which emits the results as a subunit stream to stdout. You can pipe that straight into subunit2sql's stdin:

$ python -m subunit.run test_suite | subunit2sql ...

However, subunit.run (like the plain unittest runner) is quite limited, so it's probably better to use a more feature-rich tool. stestr is the better choice here: while similar to testrepository, it's actively maintained and provides a better feature set. To make the best use of stestr you need a small configuration file, .stestr.conf, in the repo to define how test discovery is performed. At the minimum that's just:

[DEFAULT]
test_path=<PATH_TO_TEST_DIR>

With that set you can call stestr run to run your test suite. If you don't want to write a configuration file you can specify the test_path on the CLI instead:

$ stestr --test-path <PATH_TO_TEST_DIR> run

stestr runs tests in parallel by default, so you might need the --serial flag if your test suite wasn't written with that in mind. To get subunit output from the tests there are two methods. The first is to add the --subunit flag to the run command, which changes the output from something human-readable to subunit v2; you can pipe this directly into subunit2sql just as with subunit.run above. Alternatively, if you want to retain the human-readable output during the run, you can call stestr last --subunit afterwards to generate a subunit stream for the completed run. For example:

$ stestr run
$ stestr last --subunit | subunit2sql ...
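
Or, using the first method, as a single pipeline:

$ stestr run --subunit | subunit2sql ...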

Alternatively, there is a pytest plugin, pytest-subunit, that enables the pytest runner to emit subunit natively. However, the package hasn't been updated in a long time, and the junitxml section below is probably a better path for using pytest with subunit.

JavaScript

If you're using karma as your test runner for JS tests, you can use the karma-subunit-reporter plugin to enable subunit output. This is straightforward: install the plugin with npm and add the following configuration to your karma.conf.js:

module.exports = function(config) {
  config.set({
    // ...

    reporters: ['subunit'], // <---- This can contain any other reporters
                            //       just ensure subunit is in the list
    // ...
  });
};

Then you can customize the subunit output settings by adding a subunitReporter object to your config. For example:

module.exports = function(config) {
  config.set({
    reporters: ['subunit'],

    // ...

    subunitReporter: {
      outputFile: 'karma.subunit',
      tags: [],      // tag strings to append to all tests
      slug: false,   // convert whitespace to '_'
      separator: '.' // separator for suite components + test name
    }
  });
};

This just explicitly sets the defaults. After you run karma it will write a subunit file to the path specified in the config (karma.subunit by default). You can load that file directly with the subunit2sql CLI:

$ subunit2sql karma.subunit

Converting results to subunit

The second option for populating the database is to still leverage the subunit2sql CLI, but to first convert a different results format into subunit. This gives you more flexibility in the runner and language used for testing, since the conversion step can happen in any language. This section covers some common examples.

junitxml

junitxml is another popular results format, mostly due to its native support in Jenkins. A lot of popular test runners, like pytest, natively support writing junitxml results, which makes converting from junitxml to subunit a popular choice. This repo includes a small utility to convert junitxml (and the similar xunitxml) to subunit v2 output. To run it, either pass the junitxml in via stdin or pass the path to a junitxml file as the sole argument to the script. For example:

$ ./junitxml2subunit.py junitxml.xml

This ties in easily with a test runner like pytest: generate junitxml with the runner, convert it to subunit, and pass the subunit output directly into subunit2sql. For example:

$ pytest PATH_TO_TEST_DIR --junitxml=results.xml
$ ./junitxml2subunit.py results.xml | subunit2sql ...

Additionally, the Rust junitxml2subunit project provides a faster tool that performs the same conversion.

Writing your own conversion

The final option is to write a converter from whatever test results format you're using to subunit. This isn't as difficult as it sounds, and there are several examples out there, mostly in Python (since that is the primary language for the upstream subunit library). Subunit v2 bindings are available for multiple languages including JavaScript, Python, Rust, and Go, and subunit v1 bindings (v1 streams can easily be converted to v2 using the subunit-1to2 utility) are available for even more languages including C, C++, shell, and Perl. If you're using Python you can refer to the junitxml2subunit.py file in this repo for an example. Another example performing the same conversion (JUnit XML to subunit v2) can be found in the Rust junitxml2subunit project, if you'd like to implement a converter in Rust.
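
As a rough sketch, a minimal converter built on python-subunit's v2 StreamResultToBytes API might look like the following. The input format parsed here (one whitespace-separated test id, status, and duration per line) is purely hypothetical; the subunit calls are the part that carries over to a real converter:

#!/usr/bin/env python3
import sys
from datetime import datetime, timedelta, timezone

from subunit.v2 import StreamResultToBytes


def parse_results(path):
    # Hypothetical input: "test_id status duration_seconds" per line.
    with open(path) as f:
        for line in f:
            test_id, status, seconds = line.split()
            yield test_id, status, float(seconds)


def main():
    # StreamResultToBytes serializes events as subunit v2 packets.
    out = StreamResultToBytes(sys.stdout.buffer)
    for test_id, status, seconds in parse_results(sys.argv[1]):
        start = datetime.now(timezone.utc)
        # Emit a start event, then the final status ('success', 'fail',
        # 'skip', ...) offset by the duration so timing is preserved.
        out.status(test_id=test_id, test_status='inprogress', timestamp=start)
        out.status(test_id=test_id, test_status=status,
                   timestamp=start + timedelta(seconds=seconds))


if __name__ == '__main__':
    main()

Piping this script's stdout into subunit2sql then works exactly like the earlier examples.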

Manually Generating Subunit

Another option for populating results is to manually generate your own subunit. There are two tools that are useful for this: the subunit-output command, packaged in the python-subunit library, and the generate-subunit tool, packaged in os-testr. subunit-output provides low-level protocol access for writing very custom subunit results, while generate-subunit provides a simpler, higher-level interface. Either will work, but generate-subunit is probably easier. For example, you can use it to build a results stream by concatenating its output several times:

$ generate-subunit $(date +%s) 42 success test_a > output.subunit
$ generate-subunit $(date +%s) 10 fail test_b >> output.subunit
$ generate-subunit $(date +%s) 20 success test_c >> output.subunit

Custom results processor

The final option is to populate the DB with your test results directly. While subunit is in the name of subunit2sql, that's an artifact of the project's original goal, not a design limitation: the actual data model and consumption side are not specific to the subunit protocol and can be used directly with little effort. The SQL schema is not very complex, and directly inserting new results is not difficult. The schema/data model is documented here: https://docs.openstack.org/subunit2sql/latest/reference/data_model.html

When writing your own results processor you can either use the subunit2sql Python API, which provides convenient methods to add results to the DB directly, or connect to the DB and insert records manually using whatever tools work best for your environment. It's worth noting that the DB schema is not stable between releases, and migrations may change how data is stored; if you insert data manually you might have to update your processor when you upgrade the database. One advantage of the Python API is that it provides a consistent, stable interface across versions.
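
As a rough outline of the Python API route (not a verified program: the helper names here come from subunit2sql.db.api, but check the exact signatures and the connection URI against the subunit2sql release you're running), inserting a single result looks roughly like:

import datetime

from oslo_config import cfg
from oslo_db import options
from subunit2sql.db import api as db_api

# subunit2sql reads its DB connection from oslo.config's [database] section.
options.set_defaults(cfg.CONF,
                     connection='mysql+pymysql://subunit:PASSWORD@db/subunit')

session = db_api.get_session()
# A run groups all the individual test results from one CI job.
run = db_api.create_run(passes=1, session=session)
test = db_api.create_test('test_suite.TestClass.test_a', session=session)
start = datetime.datetime.utcnow()
db_api.create_test_run(test.id, run.id, 'success',
                       start_time=start,
                       end_time=start + datetime.timedelta(seconds=2),
                       session=session)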
