
edison's People

Contributors

badgerati


edison's Issues

Implement basic Load Testing functionality

Implement the basic features of load testing into Edison. This could be done using different fixture/test attributes, such as [LoadTestFixture] and [LoadTest], where [LoadTest] accepts a seconds-to-run value and a max requests-per-second (RPS).

These tests will be run completely separately to normal tests, for stability and lightweight running. They will also run sequentially.

For example:

[LoadTest(30, 100)]
public void Test()
{
    //call some URL
}

This will call some URL, rapidly, for 30 seconds with a simulated maximum of 100 requests per second. There could even be a configurable delay between each call.

After each load test has finished report:

  • Minimum response time
  • Maximum response time
  • Average response time
  • Total number of requests
  • Total successful requests
  • Total failed requests

The timing will basically be how long each "test" took to complete, and likewise the total number of requests will be the number of tests run.
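
The metrics above could be aggregated from the individual response times; a minimal sketch, using a hypothetical LoadTestReport helper (the class name and shape are assumptions for illustration, not part of Edison today):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical aggregator for the proposed load-test metrics.
public class LoadTestReport
{
    private readonly List<TimeSpan> _responseTimes = new List<TimeSpan>();

    public int SuccessCount { get; private set; }
    public int FailureCount { get; private set; }

    // Record one request's outcome and how long it took.
    public void Record(TimeSpan elapsed, bool success)
    {
        _responseTimes.Add(elapsed);
        if (success) SuccessCount++; else FailureCount++;
    }

    public TimeSpan Min => _responseTimes.Min();
    public TimeSpan Max => _responseTimes.Max();
    public TimeSpan Average =>
        TimeSpan.FromMilliseconds(_responseTimes.Average(t => t.TotalMilliseconds));
    public int TotalRequests => _responseTimes.Count;
}
```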

Introduce the notion of having a Suite of tests

Currently the only way to aggregate tests together is via the Category attribute or by namespace. It would be ideal if it were possible to have a Suite attribute for classes, to define "dev" test suites, "performance" test suites, etc.

This should be possible from the command line, and the GUI.

Only one suite can be selected at a time, so no need for include/exclude suite functionality.
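
A sketch of how the proposed attribute might look on a fixture (the Suite attribute is the proposal of this issue, not existing API):

```csharp
// Proposed usage - the Suite attribute does not exist yet.
[TestFixture]
[Suite("dev")]
public class LoginTests
{
    [Test]
    public void CanLogIn()
    {
        // ...
    }
}
```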

Singular thread after main threads which re-runs failed tests

This feature should be toggleable.

When enabled, after the main parallel and singular threads have run, this thread will re-run all tests that failed, updating their results before completion.

The idea behind this is to limit the number of tests that fail because of "environmental" issues, yet pass when run mere seconds later.

It might also be wise to include a threshold property, so that if the percentage of failed tests exceeds a certain value, the re-run is skipped.

Additional string assertions needed

There are some string assertions which are missing:

  • Contains
  • NotContains
  • IsMatch
  • IsNotMatch
  • StartsWith
  • NotStartsWith
  • EndsWith
  • NotEndsWith
  • AreEqualIgnoreCase
  • AreNotEqualIgnoreCase
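
Sketched usage of a few of the missing assertions (the methods are from the wish-list above and do not exist yet; argument order is assumed):

```csharp
// Proposed string assertions - hypothetical, per the list above.
AssertFactory.Instance.Contains("world", "hello world");
AssertFactory.Instance.StartsWith("hello", "hello world");
AssertFactory.Instance.IsMatch(@"^\d{4}$", "2016");
AssertFactory.Instance.AreEqualIgnoreCase("HELLO", "hello");
```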

Possibility of having Threading at the Test Level

At the moment, threading is done at the TestFixture level. This is fine if you have numerous TestFixtures with few tests; however, if you have one TestFixture with numerous tests then threading doesn't help.

It'd be ideal if you could specify number of threads to run fixtures across, and then a secondary number of threads to run inner tests on. The concurrency attribute would need to be updated to apply to tests, too.

[EPIC] Upgrades to engine for a new Test Results website for Edison

One of the features of Edison is the ability to post test results to a URL.
It would be ideal if Edison was shipped with a website that could be hosted, and supported an endpoint we could post to by default.

I've had a few thoughts about this, and the website itself will be written in node.js - mostly because I hate ASP.NET, and so that the site can be containerised/run on any OS more easily.

For now, whilst I think a little more, this is an epic to cover all of the new features, enhancements and bugs to fix to prepare for the site:

  • #61: New UrlOutputType needed, rather than using file's OutputType
  • #62: When a test run starts, Edison should send a callout to tell the website a run has started
  • #63: When a test run ends, Edison should send a callout to tell the website the run has ended - and why
  • #64: Need a new TestRunName, which can be more informative than TestRunId
  • #65: Need a new TestRunProject identifier for test results
  • #66: Need a new TestRunEnvironment identifier for test results
  • #67: Callouts sent to a TestResultUrl/Slack/etc should be run in a separate thread

Need a new TestRunName, which can be more informative than TestRunId

This is more of an informative feature addition.

The idea of the TestRunId is to be able to group certain runs together: i.e. "all runs for a TestRunId of v1.2".

However, every run having the same "name", such as "v1.2", isn't very informative. This is where the TestRunName comes in handy. You can pass a TestRunId of "v1.2", and the name could be the same but with the short commit hash appended: "v1.2.a1b2c3d". This now points to the specific commit the tests were run against, and is grouped together with "v1.2".

This value can be set on the CLI or in the Edisonfile.

Allow for Test test cases to run in parallel

Test cases for Tests are currently run sequentially, even if they're within a parallel repeat. This issue is for the implementation of a [ParallelTestCase] attribute, which will allow all test cases for a Test to be run in parallel.

This will not apply to TestFixtures (likewise with parallel repeats), as it would force tests intended to run sequentially to run in parallel.
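
The proposed attribute might sit alongside existing test cases; a sketch ([ParallelTestCase] is the proposal here, not existing API):

```csharp
// Proposed: run all of this Test's cases in parallel.
[Test]
[ParallelTestCase]
[TestCase(1)]
[TestCase(2)]
[TestCase(3)]
public void Test(int value)
{
    AssertFactory.Instance.AreEqual(value, value);
}
```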

Need a new TestRunProject identifier for test results

Having a TestRunId of "v1.2" and a TestRunName of "v1.2.a1b2bc3d" is great, but what if you have two different projects with the same ID/Name?

Such as, having tests run for the core project and the website project in the same version?

This is to add a new TestRunProject identifier, which could be set to anything like "core/site/console/gui/etc.".

This value will only be passed on the initial "test run has started" callout.

This value can be set on the CLI or in the Edisonfile.

Add logic for conditional assertions

Say you want to assert that either AssertFactory.Instance.AreEqual(...) or AssertFactory.Instance.IsLessThan(...) passes: even if one fails, it's considered a pass. Whereas if both fail, it's a proper assertion failure.

For now just an Or will do, as an And is technically just consecutive Assert calls.

This new AssertFactory.Instance.Or(...) should take a params of assertion actions.
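
Sketched usage of the proposed Or, taking a params array of assertion actions (hypothetical, per this issue):

```csharp
// Proposed: passes if at least one inner assertion passes.
int value = 7;
AssertFactory.Instance.Or(
    () => AssertFactory.Instance.AreEqual(5, value),
    () => AssertFactory.Instance.IsLessThan(10, value)
);
```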

Allow Edison to be run using TestDriven.NET

The only way to run Edison tests at the moment is by either using the console app, or the GUI app.

This enhancement will allow users to run tests via TestDriven.NET. The TestDriven.Framework.dll can be found in the Program Files directory that it was installed to.

Ability to listen for TestResult events

When the tests are started via EdisonContext -> Run, you have to wait until the tests have all finished running, or have a separate thread polling the EdisonContext's ResultQueue for changes.

It would be nicer if you could attach a listener to the EdisonContext that would be called on TestResult events after each test, for external updating.

Any other kind of events would be useful also.
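
A sketch of how attaching a listener might look, assuming a hypothetical OnTestResult event on EdisonContext (the event and the State property are assumptions; FullName is from the TestResult mentioned elsewhere in these issues):

```csharp
// Hypothetical event on EdisonContext - proposed by this issue.
var context = new EdisonContext();
context.OnTestResult += result =>
{
    // Called after each test completes, for external updating.
    Console.WriteLine($"{result.FullName}: {result.State}");
};
context.Run();
```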

Framework .NET Dependency, and others need changing

Currently the Edison.Framework only works with projects on .NET 4.5 or higher; this should be lowered to .NET 2.0 if possible, or at most .NET 4.0.

The engine, GUI and others have a reliance on .NET 4.5.2 even though none of the new features are used. If possible, drop this to .NET 4.0 also.

Add Recently Opened sub-menu to GUI

On the GUI, you have to select the Open option and search for your file every time. It would be nice to have a "Recently Opened" sub-menu with a list of recently opened files.

Maybe have an option to clear the list, as well.

When a test run starts, Edison should send a callout to tell the website a run has started

The idea here is that when a test run starts, Edison should send a callout to the website.

This callout should go to an endpoint that informs the website that a run has started, so it can create any initial data - like the test run name/ID/project etc. - and return a "SessionId" that should be used when sending test results later on, as well as on the "test run has ended" callout.

This new "SessionId" is because a test run could have the same ID and Name, so determining which to assign results to is difficult.

Framework to support Browser testing

Using the InternetExplorer.Application COM object in SHDocVw.dll, allow the Edison.Framework to support browser testing.

Basically, this will be incorporating Monocle into Edison's main framework.

Allow for a test to build up multiple failed assertions

Currently when a test comes across an assertion that fails, an AssertException is thrown and the test fails/stops.

With this idea, a batch of assertions could be run in a block of some kind, and any failures/errors would be collected and only reported on test end, or when a non-blocked assertion fails.

TestResult will need to be altered to support multiple error/failure messages. We also need to consider ideas on how to do the assertion blocks - maybe delegates?
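
One possible shape for the delegate-based block, sketched with a hypothetical AssertAll method that collects failures instead of throwing on the first one:

```csharp
// Hypothetical: run several assertions, collect all failures, report at block end.
AssertFactory.Instance.AssertAll(
    () => AssertFactory.Instance.AreEqual(1, 1),
    () => AssertFactory.Instance.AreEqual("foo", "bar"), // recorded, doesn't stop the block
    () => AssertFactory.Instance.IsLessThan(10, 5)
);
// On block end, the test fails with every collected message attached to the TestResult.
```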

Ability to send Test result(s) to Slack

Include a new attribute in Edison that will allow a user to specify a Slack channel to send a specific test's results to. For example:

[Slack("Team1", SlackTestResult.Any)]
public void Test()
{
    AssertFactory.Instance.AreEqual(1, 1);
}

The above will send a Slack message to the Team1 channel, and will send the message no matter the test result. Other options for SlackTestResult should be:

  • Any
  • Success
  • Error
  • Failure

Inconclusive and Ignored are skipped, and only reported on Any.

The Slack token to allow Edison to send messages will be a CLI/EdisonFile argument. The GUI will be ignored for now.

If feasible, try and get the message to be colour coded using Slack attachments.

Edisonfile for the console application

The Edisonfile will be of YAML format and will contain all the values that would normally be supplied on the command line. This file is only respected if Edison.Console.exe is run with no arguments, or the --ef argument is supplied with the location of the Edisonfile (or if it has a different name).

An example could be:

assemblies:
  - "./some/path/test.dll"
  - "./some/other/test.dll"

output_type: "xml"

This will allow versioned arguments for Edison in a repository, for when more assemblies are added/removed or categories need to be excluded etc.

Allow for repeated tests to run in parallel

Currently you can run tests repeatedly in serial using the [Repeat] attribute. With this issue I suggest adding the concept of being able to run repeated tests in parallel, maybe with a [ParallelRepeat] attribute, or add a parallel flag to the [Repeat] attribute:

[Repeat(3, Parallel = true)]

// or

[ParallelRepeat(3)]

Personally I prefer the first one.

Need a new TestRunEnvironment identifier for test results

This one is short and simple. A new TestRunEnvironment identifier to help pinpoint which environment a test run occurred in.

Were the tests run locally? Or were they run on one of your 2 or 3 regression or performance environments?

The value could be something like "DEV1/REG1/PERF3" etc. Or even just the name of the server/machine they were run on.

This value will only be sent initially on the "test run has started" callout.

This value can be set on the CLI or in the Edisonfile. If the value is not set, and the run is configured to send results to a URL, it will just default to the machine's name.

Callouts sent to a TestResultUrl/Slack/etc should be run in a separate thread

Currently when a test finishes and we have the result, the result is synchronously sent out to the TestResultUrl or Slack if they're configured.

These callouts should be done asynchronously, so as to not impact main test running performance.

The callouts for the "test run started" and "test run ended" will still be synchronous, as data is required back from them.

Furthermore, Edison should be able to wait for results to finish being sent, should the tests finish running while a couple of results are still being sent out.

Also, if a callout to an endpoint fails 5 times, no more callouts should be sent to that URL - this is to stop pointless callouts and to prevent 60sec timeouts.
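
The per-URL failure cutoff could be tracked with a simple counter; a minimal sketch of the idea (class and method names are illustrative, not Edison's actual implementation):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative: stop calling an endpoint after 5 consecutive failures.
public class CalloutSender
{
    private const int MaxFailures = 5;
    private readonly ConcurrentDictionary<string, int> _failures =
        new ConcurrentDictionary<string, int>();

    public Task SendAsync(string url, string payload, Func<string, string, Task> post)
    {
        if (_failures.GetOrAdd(url, 0) >= MaxFailures)
            return Task.CompletedTask; // endpoint disabled, skip the callout

        // Run the callout on a separate thread so test running isn't impacted.
        return Task.Run(async () =>
        {
            try
            {
                await post(url, payload);
                _failures[url] = 0; // reset on success
            }
            catch
            {
                _failures.AddOrUpdate(url, 1, (_, n) => n + 1);
            }
        });
    }
}
```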

Use proper libraries to serialize output

Currently the JSON, CSV and XML output is all mashed-up and rigged together with string concatenation. This works, but it would be better to use proper libraries such as Json.NET and CsvHelper.

Ability to pass in Tests/Fixtures to run as files

Currently the only way to pass in specific tests/fixtures to run is via the command line. If you have one or two to pass this is OK, but if you need to pass a lot, or the namespaces are long, this can exceed the maximum length for console input.

The idea here, is you can supply a path to a file instead, which contains the tests/fixtures to run, one line per test/fixture.

The same could also be done for assemblies.

Move parameter validation from Console into the Engine Context

Currently the properties for the EdisonContext have their validation split across the Console and GUI. This validation needs to be pulled together and placed within the EdisonContext's Run method.

This allows the console/GUI, or third party program using the engine, to just set the properties to what's passed; allowing the engine to validate everything upfront before running.

Validation should also be possible before calling Run, if validation is required without actually running the tests.

Test assemblies should be run in separate AppDomain with configs loaded

Currently all of the test assemblies are loaded into the current AppDomain. This is fine until a test wants to reference values within the AppSettings of that assembly's app.config; the issue is that the app.config for that AppDomain will actually be the console's or the GUI's.

Some references on how to do this can be found here: http://stackoverflow.com/questions/11993546/app-config-for-dynamically-loaded-assemblies and here: http://stackoverflow.com/questions/658498/how-to-load-an-assembly-to-appdomain-with-all-references-recursively

Repeat value of 1 needlessly appends repeat value to TestResult FullName

When you don't set a repeat value for a test/fixture, a value of -1 is used. If the repeat index for the test is -1, then no index value is appended to the FullName within the TestResult. However, if you set the repeat value to 1, the value is appended when it really isn't needed, because the test only ran once.

The idea is that if the value is 1 then don't append the index, else append it if greater. This means we can change GetRepeatValue in the ReflectionRepository to just return the RepeatAttribute, or return a constant one with a repeat value of 1.

TestCases should work on TestFixtures

Currently the TestCase attribute specifies that it can be applied to both methods and classes; however, assigning a TestCase to a TestFixture does nothing. It only works on Tests.

Ideally when a TestCase is assigned to a TestFixture, the parameters should be passed into the TestFixture's constructor.
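
Sketched usage of the proposal, with fixture-level case parameters feeding the constructor (the fixture-level behaviour is what this issue proposes; the URLs are illustrative placeholders):

```csharp
// Proposed: fixture-level TestCase parameters go to the constructor.
[TestFixture]
[TestCase("https://staging.example.com")]
[TestCase("https://live.example.com")]
public class SmokeTests
{
    private readonly string _baseUrl;

    public SmokeTests(string baseUrl)
    {
        _baseUrl = baseUrl; // fixture runs once per TestCase
    }
}
```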

Update the wiki and docs

Right now the wiki and docs contain some information, but not much. This needs to be updated to include proper references and examples for the console application and for writing tests.
