athena's People

Contributors

atrifan, bogdanbranescu, catalin-me, corinapurcarea

athena's Issues

Simplify paths for config and test collection

  • Move config.js to the repo root level.
  • Allow for arbitrary directory structure when collecting tests.
  • Automatically detect fixtures when a specific test is passed through the -t option.

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet.
If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

suite model potentially wrong

Tests should not declare a dependency on a suite.

The suite model is potentially wrong. A suite should look like this:

.....
scenario:
  given: >
    ....
  then: >
    ....
  when: >
    ....
tests:
  - ref: "name"
    version: "x.x"
    # potential overriding fields (must respect the test model)

Please change the entity parsing logic so that suites reference tests, not the other way around.
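The proposed direction can be sketched as a suite resolving its referenced tests by name and version, with any extra fields applied as overrides. This is an illustration only; the registry shape and field names are assumptions, not Athena's actual internals.

```javascript
// Hypothetical registry of parsed test entities, keyed by "name@version".
const registry = new Map([
  ["name@x.x", { name: "name", version: "x.x", timeout: 30, retries: 0 }],
]);

// Resolve a suite's `tests:` entries into full test objects. The suite
// references tests; tests know nothing about the suite.
function resolveSuiteTests(suite) {
  return suite.tests.map((entry) => {
    const base = registry.get(`${entry.ref}@${entry.version}`);
    if (!base) throw new Error(`unknown test: ${entry.ref}@${entry.version}`);
    const { ref, version, ...overrides } = entry;
    // Overriding fields must respect the test model; a shallow merge here.
    return { ...base, ...overrides };
  });
}

const suite = { tests: [{ ref: "name", version: "x.x", retries: 2 }] };
const resolved = resolveSuiteTests(suite);
console.log(resolved[0].retries); // override applied
console.log(resolved[0].timeout); // base field preserved
```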

Switch to more inclusive language

Could you please take some time in the next few days to change some terminology in your repos and content, as much as possible:

  • Whitelist/blacklist to allowed list and blocked list (or approve list/deny list, which some software uses instead), respectively. Google and many developers are formalizing allowlist and blocklist; you might want to lobby for those terms to be used in the UI.
  • Master/slave to master and replica (or subordinate, if that makes more sense), respectively.

If you cannot remove a term because, for example, the writing reflects the UI or the code, please make a note and send an email to [email protected] so we can bring it to that team’s attention. Thanks for your efforts in this matter.

Write unit tests

Create a unit test collection for the framework in order to cover its critical parts.

Add Cluster Support for Performance Testing

Feature

  • Provide support for running Athena as a manager node.
  • Allow Athena agents to join an already set-up cluster using an access token generated by the manager node.
  • Use the athena CLI to manage the cluster and delegate performance tests to all agents inside the cluster.
  • Provide support to the Athena manager to aggregate the results from all working nodes and store them inside a local database (sqlite?).
  • Allow the Athena manager to report the test results inside a CI/CD environment.

Option for verbose report of passing assertions

Currently, passing tests show up only as a flat list. A flag (e.g. --verbose) should be provided so that each expected item within a passing test can be inspected. Examples where this is useful:

  • test passes under multiple conditions that are discernible from the assertions
  • report information can be further used (part of the response, the result of test fixtures, etc)
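The behaviour asked for above can be sketched as a reporter that, under a hypothetical --verbose flag, prints every passing assertion instead of just the test name. The result shape and flag name are illustrative, not Athena's actual reporter API.

```javascript
// Sketch of a reporter honoring a hypothetical verbose option: by default
// only the test name is listed; with verbose on, each passing assertion's
// description and actual value are printed too.
function report(results, { verbose = false } = {}) {
  const lines = [];
  for (const test of results) {
    lines.push(`PASS ${test.name}`);
    if (verbose) {
      for (const a of test.assertions) {
        lines.push(`    ok: ${a.description} (actual: ${a.actual})`);
      }
    }
  }
  return lines;
}

const results = [
  {
    name: "GET /status",
    assertions: [
      { description: "status is 200", actual: 200 },
      { description: "pragma header is no-cache", actual: "no-cache" },
    ],
  },
];

console.log(report(results).length);                    // name only
console.log(report(results, { verbose: true }).length); // name + assertions
```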

Add sidecar support

Expected Behaviour

Athena should be injectable as a sidecar around a Kubernetes (k8s) pod, or on a server around a Docker image, so that it can start tests and intercept outbound traffic.

Actual Behaviour

N/A

Reproduce Scenario (including but not limited to)

N/A

Steps to Reproduce

N/A

Platform and Version

0.X

Sample Code that illustrates the problem

Known issue

Logs taken while reproducing problem

N/A

Register perfPattern

Ensure registration of the pattern and context, with overriding support for referenced perf runs.

Failing tests report only the first failing assertion

In order to understand the state of a failing test, a list of all the failing assertions should be provided. See an example of faulty reporting below:

  • test file:
    expect(response).to.have.header("pragma", "no-cache"),
    expect(response).to.have.header("accept-ranges", "bytes"),
    expect(response).to.have.header("X-First-Bad-Header", "this_header_enjoys_espresso"),
    expect(response).to.have.header("X-Second-Bad-Header", "this_value_drinks_only_decaf")
  • output:
     AssertionError: expected header X-First-Bad-Header with value undefined to match this_header_enjoys_espresso
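The desired behaviour can be sketched as running every assertion and collecting all failures instead of stopping at the first thrown AssertionError. The `checkHeader` helper and the `response` object below are hypothetical stand-ins for the chai-http calls used in the tests.

```javascript
// Run every assertion thunk and collect all failures, rather than
// aborting on the first one.
function runAssertions(assertions) {
  const failures = [];
  for (const assert of assertions) {
    try {
      assert();
    } catch (err) {
      failures.push(err.message); // keep going; report everything at the end
    }
  }
  return failures;
}

// Hypothetical response mirroring the issue example: two good headers,
// two missing ones.
const response = {
  headers: { pragma: "no-cache", "accept-ranges": "bytes" },
};

function checkHeader(name, expected) {
  const actual = response.headers[name];
  if (actual !== expected) {
    throw new Error(
      `expected header ${name} with value ${actual} to match ${expected}`
    );
  }
}

const failures = runAssertions([
  () => checkHeader("pragma", "no-cache"),
  () => checkHeader("accept-ranges", "bytes"),
  () => checkHeader("X-First-Bad-Header", "this_header_enjoys_espresso"),
  () => checkHeader("X-Second-Bad-Header", "this_value_drinks_only_decaf"),
]);

console.log(failures.length); // both bad headers are reported, not just one
```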

Register perfRun

The perfRun entity needs to be registered in perfPattern, and its configs, hooks, and functions mapped.

The model looks insufficiently restricted for API testing

Since a plugin is offered as part of the base platform, please also consider a more keyword-based, YAML-structured alternative to what is currently offered. While clearly flexible, the code-heavy model can be a source of inconsistency (e.g. numerous equivalent ways to express the same thing, arbitrary function calls). I raise this issue because many aspects of API testing are fairly standard (there are only so many types of entities attached to a request or expected from a response).

The coding capabilities need not be removed from this alternative, but certain aspects look repetitive enough to be addressed case by case (for instance, checking multiple headers currently requires repeating the same lengthy call). I understand that this means adding custom logic where out-of-the-box functionality already took care of things, yet I consider this feature valuable as long as it does not eat too much development time.

What do you think is the proper way to include this functionality?
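To make the suggestion concrete, a keyword-based test for the common cases might look like the fragment below. This is purely illustrative: none of these keys (`request`, `expect`, etc.) exist in Athena today, and the final vocabulary would be up for discussion.

```yaml
# Illustrative only: a possible keyword-based alternative for standard
# API checks; keys are proposals, not current Athena syntax.
name: "statusEndpoint"
version: "1.0"
type: test
request:
  method: GET
  path: /status
expect:
  status: 200
  headers:
    pragma: "no-cache"
    accept-ranges: "bytes"
```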

JSON array is transformed into a map when removing empty elements

Expected Behaviour

The JSON array stays an array when empty elements are removed.

Actual Behaviour

The JSON array is transformed into a map when empty elements are removed.

Reproduce Scenario (including but not limited to)

Steps to Reproduce

Set examples/performance/perfRun.yaml to

name: "apiKey"
version: "1.0"
description: ""
engine: autocannon
type: perfRun
config:
  url: https://api-gateway-qe-d-ue1.adobe.io/test
  connections: 2000
  duration: 30
  method: GET
  requests: # Array of objects.
    - headers:
        host: keepalive-with-upstream-test-stage-service-1.adobe.io
      method:
      path:
    - headers:
        host: baseline-performance-testing-stage-service-1.adobe.io
hooks:
  onInit:
  onDestroy:
  onRequest:

and run Athena.
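A likely cause for this class of bug is a recursive "prune empty fields" step that rebuilds every container as a plain object, losing the Array type. The sketch below is an assumption about the mechanism, not Athena's actual code; `pruneEmptyBuggy`/`pruneEmptyFixed` are hypothetical names.

```javascript
// BUG pattern: iterating Object.entries() and rebuilding into {} turns
// arrays (whose keys are "0", "1", ...) into maps.
function pruneEmptyBuggy(value) {
  if (typeof value !== "object" || value === null) return value;
  const out = {}; // arrays are rebuilt as plain objects here
  for (const [key, child] of Object.entries(value)) {
    const cleaned = pruneEmptyBuggy(child);
    if (cleaned !== null && cleaned !== undefined && cleaned !== "") {
      out[key] = cleaned;
    }
  }
  return out;
}

// FIX: preserve the container type by branching on Array.isArray.
function pruneEmptyFixed(value) {
  if (typeof value !== "object" || value === null) return value;
  if (Array.isArray(value)) {
    return value
      .map(pruneEmptyFixed)
      .filter((v) => v !== null && v !== undefined && v !== "");
  }
  const out = {};
  for (const [key, child] of Object.entries(value)) {
    const cleaned = pruneEmptyFixed(child);
    if (cleaned !== null && cleaned !== undefined && cleaned !== "") {
      out[key] = cleaned;
    }
  }
  return out;
}

// Mirrors the `requests:` array from the YAML above: entries with empty
// method/path fields.
const requests = [{ headers: { host: "a" }, method: null, path: null }];
console.log(Array.isArray(pruneEmptyBuggy(requests))); // array type lost
console.log(Array.isArray(pruneEmptyFixed(requests))); // array type kept
```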

The command-line arguments are not propagated to the master when running performance tests in cluster mode

The command-line arguments given when running performance tests in cluster mode are not propagated to the ManagerNode and are discarded.

Expected Behaviour

When issuing a command to master to run performance tests from a given path, the tests located at the given path are executed.

Actual Behaviour

When issuing a command to master to run performance tests from a given path, the tests that are located at the default path are used instead.

Steps to Reproduce

node athena.js cluster --init --addr 0.0.0.0
node athena.js cluster --join --addr 0.0.0.0:5000 --token abcd

node athena.js cluster --run --performance --tests gw-tests/performance/upstream-directive/baseline

pm2 logs athena-agent
pm2 logs athena-manager
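The fix presumably amounts to including the parsed CLI arguments in the message sent to the manager, falling back to the default tests path only when --tests was not given. The message shape and names below are assumptions for illustration.

```javascript
// Hypothetical default used when no --tests path is supplied.
const DEFAULT_TESTS_PATH = "tests";

// Build the job message sent to the ManagerNode, propagating the
// --tests argument instead of silently dropping it.
function buildJobMessage(argv) {
  return {
    type: "RUN_PERFORMANCE_TESTS",
    testsPath: argv.tests || DEFAULT_TESTS_PATH,
  };
}

const msg = buildJobMessage({
  tests: "gw-tests/performance/upstream-directive/baseline",
});
console.log(msg.testsPath); // the path from the CLI, not the default
```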

REST API control plane

Expected Behaviour

Add a REST API for starting functional/performance tests. Since these can be long-running processes, return a correlationId to the client so that it can later query the status of the job.

Actual Behaviour

Reproduce Scenario (including but not limited to)

Steps to Reproduce

Platform and Version

Sample Code that illustrates the problem

Logs taken while reproducing problem
