badgerati / edison
Edison is an open source unit/integration test framework for .NET
License: MIT License
Implement basic features of load testing into Edison. This could be done using different fixture/test attributes, such as [LoadTestFixture] and [LoadTest], where [LoadTest] accepts a seconds-to-run and a max-RPS.
These tests will be run completely separately from normal tests, for stability and lightweight running. They will also run sequentially.
For example:
[LoadTest(30, 100)]
public void Test()
{
//call some URL
}
This will call some URL, rapidly, for 30 seconds with a simulated 100 requests per second maximum. There could even be a configurable delay between each call.
After each load test has finished, a report should be produced:
The timing will basically be how long that "test" took to complete, and likewise the number of requests will be the number of tests run.
Currently the only way to aggregate tests together is via the Category attribute or by namespace. It would be ideal if it were possible to have a Suite attribute for classes, to define "dev" test suites, "performance" test suites, etc.
This should be possible from the command line, and the GUI.
Only one suite can be selected at a time, so no need for include/exclude suite functionality.
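A sketch of how this might look (the [Suite] attribute and the --suite flag are assumptions for illustration, not existing Edison API):

```csharp
// Hypothetical: tag a whole fixture as belonging to the "performance" suite.
[Suite("performance")]
[TestFixture]
public class CheckoutLoadTests
{
    [Test]
    public void HandlesBurstTraffic()
    {
        // ...
    }
}
```

Then something along the lines of Edison.Console.exe --a tests.dll --suite performance would run only that suite.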
This feature should be toggleable.
When enabled, after the main parallel and singular threads have run, this thread will re-run all tests that failed, updating their results before completion.
Idea behind this, is to limit the amount of tests that fail because of "environmental" issues, yet pass when run mere seconds later.
It might also be wise to include a threshold property, so that if the percentage of failed tests exceeds it, the re-run is skipped.
Right now the format in which data is sent to the TestResultUrl is determined by the OutputType that Edison writes to files in. Here, we'll create a new UrlOutputType so that we can write to file/console/URL in different formats.
There are some string assertions which are missing:
When sending to a Test Result URL, the duration should really be sent in milliseconds, and not as a .NET TimeSpan object.
At the moment, threading is done at the TestFixture level. This is OK if you have numerous TestFixtures with few tests; however, if you have one TestFixture with numerous tests then threading doesn't help.
It'd be ideal if you could specify the number of threads to run fixtures across, and then a secondary number of threads to run inner tests on. The concurrency attribute would need to be updated to apply to tests, too.
One of the features of Edison is the ability to post test results to a URL.
It would be ideal if Edison was shipped with a website that could be hosted, and supported an endpoint we could post to by default.
I've had a few thoughts about this, and the website itself will be written in node.js - mostly because I hate ASP.NET, and so that the site can be containerised/run on any OS more easily.
For now, whilst I think a little more, this is an epic to cover all of the new features, enhancements and bugs to fix to prepare for the site:
This is more of an informative feature addition.
The idea of the TestRunId is to be able to group certain runs together: i.e., "all runs for a TestRunId of v1.2".
However, every run having the same "name", such as "v1.2", isn't very informative. This is where the TestRunName comes in handy. You can pass a TestRunId of "v1.2", and the name could just be the same but with the short commit hash: "v1.2.a1b2c3d". This now points to the specific commit the tests were run against, and is grouped together with "v1.2".
This value can be set on CLI or Edisonfile.
Test cases for Tests are currently run sequentially, even if they're within a parallel repeat. This issue is for the implementation of a [ParallelTestCase] attribute, which will allow all test cases for a Test to be run in parallel.
Unlike parallel repeats, this will not apply to TestFixtures, as it would force tests that should only ever run sequentially to run in parallel.
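A minimal sketch of the proposed attribute in use (the attribute name comes from this issue; the exact assertion signature is an assumption):

```csharp
// Hypothetical: the three cases below would run concurrently
// instead of one after another.
[Test, ParallelTestCase]
[TestCase(1)]
[TestCase(2)]
[TestCase(3)]
public void ProcessesInput(int value)
{
    AssertFactory.Instance.IsLessThan(value, 10);
}
```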
Having a TestRunId of "v1.2" and a TestRunName of "v1.2.a1b2c3d" is great, but what if you have two different projects with the same ID/name? Such as having tests run for the core project and the website project in the same version?
This is to add a new TestRunProject identifier, which could be set to anything like "core/site/console/gui/etc.".
This value will only be passed on the initial "test run has started" callout.
This value can be set on CLI or Edisonfile.
Say you want to assert that either AssertFactory.Instance.AreEqual(...) or AssertFactory.Instance.IsLessThan(...) passes: even if one fails it's considered a pass, whereas if both fail it's a proper assertion failure.
For now just an Or will do, as an And is technically just consecutive Assert calls.
This new AssertFactory.Instance.Or(...) should take a params of assertion actions.
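Usage might look like the following sketch (AssertFactory.Instance.Or is the API proposed in this issue; passing the assertions as actions is an assumption):

```csharp
// Hypothetical: the test passes if EITHER assertion passes;
// only if both fail is it a proper assertion failure.
AssertFactory.Instance.Or(
    () => AssertFactory.Instance.AreEqual(200, response.StatusCode),
    () => AssertFactory.Instance.AreEqual(204, response.StatusCode)
);
```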
The only way to run Edison tests at the moment is by either using the console app, or the GUI app.
This enhancement will allow users to run tests via TestDriven.NET. The TestDriven.Framework.dll can be found in the Program Files directory that it was installed to.
On top of the currently supported output types, HTML and Markdown output repositories also need to be added, into Edison.Engine > Repositories.
When the tests are started via EdisonContext -> Run, you have to wait until after the tests have all finished running, or have a separate thread polling the EdisonContext's ResultQueue for changes.
It would be nicer if you could attach a listener to the EdisonContext that would be called on TestResult events after each test, for external updating.
Any other kind of events would be useful also.
Currently the Edison.Framework only works with projects on .NET 4.5 or higher; this should be lowered to .NET 2.0 if possible, or at most .NET 4.0.
The engine, GUI and others have a reliance on .NET 4.5.2 even though none of the new features are used. If possible, drop this to .NET 4.0 also.
The Edison.Console application needs to be on NuGet for easier version controlling. This will allow different projects to support different versions of Edison, that can be restored via NuGet and referenced in the packages directory.
When running tests in more than 1 fixture or test thread, sometimes some tests don't get run.
This is probably due to the way in which test segments are split out; it might be rounding incorrectly.
On the GUI, you have to select the Open option and search for your file every time. It would be nice to have a "Recently Opened" sub-menu with a list of recently opened files.
Maybe have an option to clear the list, as well.
The idea here is that when a test run starts, Edison should send a callout to the website.
This callout should be to an endpoint that informs the endpoint that a run has started, to create any initial data - like the test run name/ID/project etc. - as well as to return a "SessionId" that should be used when sending test results later on. As well as the "test run has ended" callout.
This new "SessionId" is because a test run could have the same ID and Name, so determining which to assign results to is difficult.
Using the InternetExplorer.Application COM object in SHDocVw.dll, allow the Edison.Framework to support browser testing.
Basically, this will be incorporating Monocle into Edison's main framework.
The Edison.Console application always returns a successful exit code of 0, even if some of the tests actually failed.
If tests fail, the application should return an exit code of non-zero, say 1.
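A rough sketch of the fix (the result and state names here are assumptions, not Edison's actual types):

```csharp
// Hypothetical: derive the exit code from the final results
// instead of always returning 0.
var failures = results.Count(r => r.State == TestResultState.Failure);
Environment.Exit(failures > 0 ? 1 : 0);
```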
Currently when a test comes across an assertion that fails, an AssertException is thrown and the test fails/stops.
With this idea, a batch of assertions could be run in a block of some kind, and any failures/errors would be collected and only reported on test end, or when a non-blocked assertion fails.
TestResult will need to be altered to support multiple error/failure messages. We also need to consider ideas on how to do the assertion blocks - maybe delegates?
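One possible delegate-based shape (the Batch method name is purely illustrative, not existing Edison API):

```csharp
// Hypothetical: failures inside the block are collected rather than thrown,
// and all of them are reported together when the test ends.
AssertFactory.Instance.Batch(() =>
{
    AssertFactory.Instance.AreEqual(expected.Name, actual.Name);
    AssertFactory.Instance.AreEqual(expected.Id, actual.Id);
    AssertFactory.Instance.IsTrue(actual.IsActive);
});
```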
Include a new attribute into Edison that will allow a user to specify a Slack channel to send a specific test's results to. Aka:
[Slack("Team1", SlackTestResult.Any)]
public void Test()
{
AssertFactory.Instance.AreEqual(1, 1);
}
The above will send a Slack message to the Team1 channel, and will send the message no matter the test result. Other options for SlackTestResult should be:
Inconclusive and Ignored are skipped, and only reported on Any.
The Slack token to allow Edison to send messages will be a CLI/EdisonFile argument. The GUI will be ignored for now.
If feasible, try and get the message to be colour coded using Slack attachments.
At the moment the GUI can only open and run one assembly at a time. This issue will resolve that, allowing it to open up multiple assemblies, or open a solution file; just like the CLI.
The Edisonfile will be of YAML format and will contain all the values that would normally be supplied on the command line. This file is only respected if Edison.Console.exe is run with no arguments, or if the --ef argument is supplied with the location of the Edisonfile (or if it has a different name).
An example could be:
assemblies:
- "./some/path/test.dll"
- "./some/other/test.dll"
output_type: "xml"
This will allow versioned arguments for Edison in a repository, for when more assemblies are added/removed or categories need to be excluded etc.
Currently you can run tests repeatedly in serial using the [Repeat] attribute. With this issue I suggest adding the concept of being able to run repeated tests in parallel, maybe with a [ParallelRepeat] attribute, or by adding a parallel flag to the [Repeat] attribute:
[Repeat(3, Parallel = true)]
// or
[ParallelRepeat(3)]
Personally I prefer the first one.
This one is short and simple. A new TestRunEnvironment identifier to help pinpoint which environment a test run occurred in.
Were the tests run locally? Or were they run on one of your 2 or 3 regression or performance environments?
The value could be something like "DEV1/REG1/PERF3" etc. Or even just the name of the server/machine they were run on.
The value will only be sent on the initial "test run has started" callout.
This value can be set on CLI or Edisonfile. If the value is not set, and the run is configured to send results to a URL, just default to the machine's name.
Create a new attribute [Retry(x)] which will keep retrying a test x times until it passes. If after x times it is still failing, report the first error (or, if possible, the most recurring error).
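In use it might look like this sketch (the attribute name comes from this issue; combining it with [Test] on one line is an assumption):

```csharp
// Hypothetical: retry up to 3 times; only fail if all 3 attempts fail,
// reporting the first (or most recurring) error.
[Test, Retry(3)]
public void CallsFlakyEndpoint()
{
    // e.g. call a service that occasionally times out
}
```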
Currently when a test finishes and we have the result, the result is synchronously sent out to the TestResultUrl or Slack if they're configured.
These callouts should be done asynchronously, so as to not impact the main test running performance.
The callouts for the "test run started" and "test run ended" will still be synchronous, as data is required back from them.
Furthermore, Edison should be able to wait for all results to be sent, should the tests finish running while a couple of results are still being sent out.
Also, if a callout to an endpoint fails 5 times, no more callouts should be sent to that URL - this is to stop pointless callouts and to prevent 60sec timeouts.
Currently the JSON/CSV and XML output is all mashed-up and rigged together with string concatenation. This functionality works; however, it would be better to use proper libraries such as Json.NET and CsvHelper.
Much like issue #62, this one will be for Edison to send a callout when a test run has ended - using the SessionId from the start callout.
The data sent from Edison should be the:
SessionId
Note that if the SessionId is not available, just send the TestRunId and TestRunName instead.
Currently the only way to pass in specific tests/fixtures to run is via the command line. If you have one or two to pass this is OK, but if you need to pass a lot, or the namespaces are long then this can reach the max length for console input.
The idea here, is you can supply a path to a file instead, which contains the tests/fixtures to run, one line per test/fixture.
The same could also be done for assemblies, as well.
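For example, the supplied file might contain one fully-qualified test or fixture per line (these namespaces are purely illustrative):

```
Some.Namespace.FixtureOne
Some.Namespace.FixtureTwo.SpecificTest
Some.Other.Namespace.FixtureThree
```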
Currently the properties for the EdisonContext have their validation split across the Console and GUI. This validation needs to be pulled together and placed within the EdisonContext's Run method.
This allows the console/GUI, or third party program using the engine, to just set the properties to what's passed; allowing the engine to validate everything upfront before running.
Validation should also be possible before calling Run, if validation is required without actually running the tests.
Currently all of the test assemblies are loaded onto the current AppDomain, this is all right until a test wants to reference values within the AppSettings of that assembly's app.config. The issue is that the app.config for that AppDomain will actually be the console's or the GUI's.
Some references on how to do this can be found here: http://stackoverflow.com/questions/11993546/app-config-for-dynamically-loaded-assemblies and here: http://stackoverflow.com/questions/658498/how-to-load-an-assembly-to-appdomain-with-all-references-recursively
When tests are run via the GUI, the progress bar always remains green. It could be useful if the bar changed its colour to red when failed tests occur.
That, or orange for failed tests and red for outright errors.
When you don't set a repeat value for a test/fixture, a value of -1 is used. If the repeat index for the test is -1, then no index value is appended to the FullName within the TestResult. However, if you set the repeat value to 1, the value is appended when it really isn't needed, because the test only ran once.
The idea is to make it so that if the value is 1 then don't append the index, and only append it if greater. This means we can change the GetRepeatValue in the ReflectionRepository to just return the RepeatAttribute, or return a constant one with a repeat of 1.
Currently the TestCase attribute specifies that it can be applied to both methods and classes; however, assigning a TestCase to a TestFixture does nothing. It only works on Tests.
Ideally, when a TestCase is assigned to a TestFixture, the parameters should be passed into the TestFixture's constructor.
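A sketch of how that could look (the constructor-injection behaviour is the proposal here; the fixture contents are illustrative):

```csharp
// Hypothetical: each TestCase value is passed to the fixture's constructor,
// so every test in the fixture runs once per case.
[TestFixture]
[TestCase("chrome")]
[TestCase("ie")]
public class BrowserFixture
{
    private readonly string _browser;

    public BrowserFixture(string browser)
    {
        _browser = browser;
    }

    [Test]
    public void OpensHomePage()
    {
        // drive _browser ...
    }
}
```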
Right now the wiki and docs contain some information, but not much. This needs to be updated to include proper references and examples for the console application, and for writing tests.
When supplying the "Included Categories", and this matches a TestFixture which has Tests with no categories, these tests are currently not run when they should be. They should only not run if the test has a category that doesn't match the one(s) given.
This feature is to enable the user to pass a .NET solution file on the CLI and in the GUI. The engine will then extract tests from the debug assemblies it can find from the included projects of the solution.
The parameter name for the TestResultUrl is url on both the CLI and in Edisonfiles.
This name needs to change to conform with the other test result/run parameters, to be:
CLI: --turl
Edisonfile: test_result_url
The ChocolateyInstall.ps1 script now needs to include the checksum of the binaries zip file when downloading.
The checksum can be generated using the checksum program (installed via choco install checksum), via:
checksum -t sha256 -f $version-Binaries.zip