
Carrier - Results Analysis

Set up Grafana Dashboards

PerfGun (Gatling) and PerfMeter (JMeter) have a custom InfluxDB listener that is automatically added to the test for reporting.

Carrier provides Grafana dashboards for both tools to monitor test results.

You can install InfluxDB and Grafana using a Carrier installer. In this case, all the necessary databases and dashboards will be set up automatically.

If you already have Grafana installed, you can import the dashboards and data sources from the Carrier Repository.

It is also necessary to have InfluxDB installed with the required databases created (jmeter, gatling, comparison, telegraf, thresholds).
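
If you are creating these databases yourself, a minimal sketch using the InfluxDB 1.x CLI is shown below; the host and port are assumptions, adjust them to your installation:

# Assumes the InfluxDB 1.x CLI is available and InfluxDB listens on localhost:8086.
for db in jmeter gatling comparison telegraf thresholds; do
  influx -host localhost -port 8086 -execute "CREATE DATABASE $db"
done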

Example of how to import the PerfMeter dashboard using curl:

curl -s https://raw.githubusercontent.com/carrier-io/carrier-io/master/grafana_dashboards/perfmeter_dashboards.json | curl -X POST "http://${FULLHOST}/grafana/api/dashboards/db" -u admin:${GRAFANA_PASSWORD} --header "Content-Type: application/json" -d @-
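
The command assumes that the FULLHOST and GRAFANA_PASSWORD environment variables are already set; the values below are placeholders for illustration:

export FULLHOST=carrier.example.com          # host where Carrier and Grafana are exposed
export GRAFANA_PASSWORD=your_admin_password  # Grafana admin password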

You can also import dashboards manually. In this case, however, you need to remove the "dashboard" key and its closing bracket from the JSON file first.
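
Schematically, the file in the repository wraps the dashboard definition for the API, while manual import expects only the inner object; the field names below are shortened for illustration:

{ "dashboard": { "title": "...", "panels": [ ... ] } }    <- as stored for the API import
{ "title": "...", "panels": [ ... ] }                     <- what manual import expects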

Screenshots of manual import of the dashboard are presented below.

[Screenshots: manual dashboard import]

Example of how to import the PerfMeter data source using curl:

curl -s https://raw.githubusercontent.com/carrier-io/carrier-io/master/influx_datasources/datasource_jmeter | curl -X POST "http://${FULLHOST}/grafana/api/datasources" -u admin:${GRAFANA_PASSWORD} --header "Content-Type: application/json" -d @-
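
The datasource_jmeter file is a Grafana data source definition. A minimal sketch of what such a payload roughly looks like is shown below; the URL and names are assumptions, and the actual file in the repository may differ:

{
  "name": "jmeter",
  "type": "influxdb",
  "access": "proxy",
  "url": "http://influx:8086",
  "database": "jmeter"
}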

You can also create a data source manually. Go to the Configuration menu, open "Data sources", press "Add data source", select "InfluxDB" and fill in all the necessary fields.

Screenshots are presented below.

[Screenshots: adding and configuring an InfluxDB data source]

Dashboard overview

During test execution, all performance metrics are saved to InfluxDB and displayed in Grafana.
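
If a dashboard stays empty, one way to confirm that metrics are actually reaching InfluxDB is to query the database directly; a minimal sketch using the InfluxDB 1.x CLI, with host and database taken as examples from this setup:

influx -host localhost -port 8086 -database jmeter -execute "SHOW MEASUREMENTS"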

Grafana allows you to review performance test results using different filters, which help to separate executed tests by parameters (e.g. simulation name, test type, environment, user count, etc.).

[Screenshot: dashboard filters]

You can pick the time range for which you want to see your results. You can also set the "Refreshing every:" option, which automatically refreshes the dashboard at the specified interval.

[Screenshot: time range and refresh settings]
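
The same time range and refresh settings can also be encoded directly in the dashboard URL; a sketch, where the dashboard path is a placeholder:

http://${FULLHOST}/grafana/d/<dashboard-uid>/<dashboard-name>?from=now-6h&to=now&refresh=30s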

Once all filters are set up properly, you will be able to see the results of the test execution.

The first block consists of 6 panels with overall information.

[Screenshot: overall information panels]

The second block of the dashboard, named "Response Times Over Time", contains a chart showing the response time either for all requests or only for those selected on the right side of the block.

[Screenshot: Response Times Over Time chart]

The third block of the dashboard, called "Throughput", contains a graph showing the change in throughput over time.

[Screenshot: Throughput chart]

The last block of the dashboard, called "Summary table", contains a table with detailed statistics for each request. If the table is empty, refresh the page.

[Screenshot: Summary table]

Capacity test

The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria.

A capacity test is run to determine your server's saturation point, saturation area and failure point.

Capacity testing is run in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or an increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory, disk capacity, or network bandwidth) are necessary to support future usage levels.

Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.

Saturation point: the point at which system throughput stops growing while pressure (concurrent users) continues to grow. The saturation point should be used to determine whether hardware resources were planned carefully; for example, if saturation is reached without all system resources being utilized, this may indicate ineffective resource usage. The saturation point should also be used as an input when planning a scaling strategy for a particular system.

Saturation area: the period from the saturation point to the failure point, while pressure on the system continues to grow. The saturation area is the time the system has to scale up in order to avoid reaching the failure point.

Failure point: the point at which the percentage of failed requests, or the percentage of responses exceeding the maximum expected time, reaches the allowed limit (e.g. 1%). This point is considered the user attrition point: users stop using the system in this state because of an unsatisfactory experience.
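
For example, with an allowed limit of 1% for failed requests: if 100,000 requests were sent and 1,200 of them failed, the error rate is 1,200 / 100,000 = 1.2%, so the failure point has been reached.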

A screenshot of the saturation point, saturation area and failure point is presented below.

[Screenshot: saturation point, saturation area and failure point]

Tests comparison

There are some cases when you need to compare performance results between different test executions for each request.

Carrier provides a Grafana dashboard for this purpose. Setup is performed automatically if you are using the Carrier installer, or you can import it yourself if you already have Grafana installed.

Example of how to import the comparison dashboard using curl:

curl -s https://raw.githubusercontent.com/carrier-io/carrier-io/master/grafana_dashboards/performance_comparison_dashboard.json | curl -X POST "http://${FULLHOST}/grafana/api/dashboards/db" -u admin:${GRAFANA_PASSWORD} --header "Content-Type: application/json" -d @-
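
To confirm that the dashboard was created, you can list dashboards through Grafana's search API; the query string below is only an example:

curl -s "http://${FULLHOST}/grafana/api/search?query=comparison" -u admin:${GRAFANA_PASSWORD}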

Grafana allows you to review performance test results using different filters, which help to separate executed tests by parameters (e.g. simulation name, duration, user count, etc.).

[Screenshot: comparison dashboard filters]

Once all the filters have been configured correctly, you will be able to see a comparison of the results for the selected tests.

There are 3 main sections on the dashboard.

The first one shows a chart of the response time with a comparison of the selected tests for each request. To the right of this chart, you can select one of the tests and mark it as the baseline.

[Screenshot: response time comparison chart]

The second section contains tables with the distribution of all requests by response code.

[Screenshot: distribution of requests by response code]

The last section contains tables comparing the response time for each request.

[Screenshot: response time comparison tables]
