petabridge / nbench
Performance benchmarking and testing framework for .NET applications :chart_with_upwards_trend:
Home Page: https://nbench.io/
License: Apache License 2.0
Suggested by @nayato from the DotNetty team - it'd be awesome if we could instrument a set of attributes that allow NBench to take before and after snapshots from various Windows Performance Counters.
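The core mechanic behind such attributes would just be a before/after snapshot of the counter around the measured body, with the delta recorded as the metric. A minimal sketch, where `readCounter` stands in for a `PerformanceCounter.NextValue()` call and all of the NBench attribute plumbing is omitted:

```csharp
using System;

public static class CounterSnapshot
{
    // Read the counter before and after the benchmark body; the delta is the
    // value that would be recorded against the metric.
    public static double MeasureDelta(Func<double> readCounter, Action body)
    {
        var before = readCounter(); // "before" snapshot
        body();                     // measured benchmark body
        var after = readCounter();  // "after" snapshot
        return after - before;
    }
}
```

Whether the real attributes would sample once per run or once per iteration is an open design question.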
The current table markdown syntax seems to be compatible with GitHub tables.
It is not compatible with, for example, Marked.js or Atom's markdown preview.
The raw markdown is also hard to read due to the fixed column widths:
Metric | Units | Max | Average | Min | StdDev |
---------------- |---------------- |---------------- |---------------- |---------------- |---------------- |
[Counter] InboundMessageDispatch | operations | 1,099,208.00 | 1,099,208.00 | 1,099,208.00 | 0.00 |
Metric | Units / s | Max / s | Average / s | Min / s | StdDev / s |
---------------- |---------------- |---------------- |---------------- |---------------- |---------------- |
[Counter] InboundMessageDispatch | operations | 2,785,603.36 | 2,503,310.64 | 1,983,111.20 | 271,163.13 |
It would be nice if the column widths of the output text were calculated from the values contained in the columns.
Currently the output is hard to read in both raw and rendered form.
Related to #8
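The width calculation itself is cheap: scan each column for its widest cell (header included) and pad everything to that width. A sketch of the idea, not NBench's actual formatter:

```csharp
using System;
using System.Linq;

public static class MarkdownTable
{
    // Widest cell per column, header included.
    public static int[] ColumnWidths(string[] header, string[][] rows)
    {
        var widths = header.Select(h => h.Length).ToArray();
        foreach (var row in rows)
            for (var i = 0; i < widths.Length; i++)
                widths[i] = Math.Max(widths[i], row[i].Length);
        return widths;
    }

    // Pad every cell so the raw markdown columns line up.
    public static string FormatRow(string[] cells, int[] widths) =>
        string.Join(" | ", cells.Select((c, i) => c.PadRight(widths[i])));
}
```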
There isn't a short way to summarize this issue in the title, so I'll illustrate by example:
Akka.Benchmarks.dll depends on Akka.dll and some others, all of which are stored in directory /src/Akka.Benchmarks/bin/Debug/
We need to write tests to verify that these behaviors are identical
Current Working Directory | DLL arg passed to NBench.Runner.exe |
---|---|
/src/Akka.Benchmarks/bin/Debug/ | Akka.Benchmarks.dll |
~/ | /src/Akka.Benchmarks/bin/Debug/Akka.Benchmarks.dll |
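One way to make both rows behave identically is for the runner to normalize the DLL argument to an absolute path before doing anything else. A sketch (assumed helper names, not the runner's actual code):

```csharp
using System.IO;

public static class DllArg
{
    // Path.GetFullPath resolves relative paths against the current working
    // directory, so both invocation styles end up at the same absolute path.
    public static string Resolve(string dllArg) => Path.GetFullPath(dllArg);

    // The benchmark's dependencies live next to it, so assembly probing should
    // use the resolved path's directory, not the CWD.
    public static string ProbingDirectory(string dllArg) =>
        Path.GetDirectoryName(Resolve(dllArg));
}
```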
If it blows up because the user added 2+, then log a useful response to the user.
The raw data is useful for troubleshooting potential issues too.
should = NameOfContainingClass + BenchmarkMethodName
To decrease the potential for test pollution as a result of previous tests leaving behind artifacts, changing static members, failing to clean up properly, etc... I would propose that we completely destroy and recreate the test AppDomain between each spec.
1 spec can do all of its runs and warmups within the domain, but as soon as it's done reporting its results we destroy it and move on.
This would require a redesign of how the TestRunner works, but I think there's merit to it. Any thoughts?
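On .NET Framework the isolation itself is a few lines; assume one fresh AppDomain per spec, unloaded as soon as the spec has reported its results. (Names here are illustrative; the real work is rewiring the TestRunner around this lifecycle.)

```csharp
using System;

public static class IsolatedSpecRunner
{
    // Create a throwaway AppDomain, run one spec's warmups + runs inside it,
    // then unload it so static state and leftover artifacts die with it.
    public static void RunIsolated(string specName, Action<AppDomain> runSpec)
    {
        var domain = AppDomain.CreateDomain("NBench-" + specName + "-" + Guid.NewGuid());
        try
        {
            runSpec(domain);
        }
        finally
        {
            AppDomain.Unload(domain); // everything the spec touched is destroyed
        }
    }
}
```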
Any plans to include support for running under TeamCity and integrating into their test reporting framework?
An approach similar to NUnit's ITestEventListener would be a useful addition, making the content and formatting of the output more 'pluggable'.
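A sketch of what such a seam could look like; the interface and the TeamCity message helpers below are illustrative assumptions, not NBench's actual API. TeamCity picks up "service messages" written to stdout, so a listener implementation only has to format and print them:

```csharp
using System;

// Pluggable event-listener seam in the spirit of NUnit's ITestEventListener.
public interface IBenchmarkEventListener
{
    void OnBenchmarkStarting(string benchmarkName);
    void OnBenchmarkCompleted(string benchmarkName, bool passed);
}

public sealed class TeamCityListener : IBenchmarkEventListener
{
    // TeamCity parses ##teamcity[...] lines from stdout automatically.
    public static string Started(string name) => $"##teamcity[testStarted name='{name}']";
    public static string Finished(string name, bool passed) =>
        passed ? $"##teamcity[testFinished name='{name}']"
               : $"##teamcity[testFailed name='{name}']";

    public void OnBenchmarkStarting(string name) => Console.WriteLine(Started(name));
    public void OnBenchmarkCompleted(string name, bool passed) => Console.WriteLine(Finished(name, passed));
}
```

(A production version would also escape TeamCity's reserved characters in `name`.)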
I noticed this when I was originally debugging NBench.PerformanceCounters: it looks like we might be trying to use them "too quickly," which is a known issue (I'll have to find the source for that again).
As a result, it looks like we fail to collect data from them fairly often. We may need to rework how we handle PerformanceCounter failures and retries.
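One plausible shape for the rework is a small retry wrapper around the counter read; this is a sketch, with `nextValue` standing in for `counter.NextValue()`:

```csharp
using System;
using System.Threading;

public static class CounterRead
{
    // Freshly-created PerformanceCounters often throw when read "too quickly";
    // retry a few times with a short delay before treating it as a failure.
    public static float ReadWithRetry(Func<float> nextValue, int maxAttempts = 3, int delayMs = 100)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return nextValue(); // stand-in for counter.NextValue()
            }
            catch (InvalidOperationException) when (attempt < maxAttempts)
            {
                Thread.Sleep(delayMs); // give the counter time to initialize
            }
        }
    }
}
```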
Attempting to run under mono (4.3) I get the following exception:
Unhandled Exception:
System.ComponentModel.Win32Exception: Access denied
at System.Diagnostics.Process.set_PriorityClass (ProcessPriorityClass value) <0x41e83ce0 + 0x0016f> in :0
at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:set_PriorityClass (System.Diagnostics.ProcessPriorityClass)
at NBench.Runner.Program.Main (System.String[] args) <0x41e80d50 + 0x001cb> in :0
[ERROR] FATAL UNHANDLED EXCEPTION: System.ComponentModel.Win32Exception: Access denied
at System.Diagnostics.Process.set_PriorityClass (ProcessPriorityClass value) <0x41e83ce0 + 0x0016f> in :0
at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:set_PriorityClass (System.Diagnostics.ProcessPriorityClass)
at NBench.Runner.Program.Main (System.String[] args) <0x41e80d50 + 0x001cb> in :0
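Since raising process priority is an optimization rather than a correctness requirement, the runner could treat it as best-effort instead of crashing. A sketch of that guard (assumed helper, not the runner's actual code):

```csharp
using System;
using System.Diagnostics;

public static class ProcessPriority
{
    // On Mono / restricted environments, setting PriorityClass can be denied;
    // fall back to normal priority instead of dying with an unhandled exception.
    public static bool TryElevate()
    {
        try
        {
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
            return true;
        }
        catch (Exception ex) when (ex is System.ComponentModel.Win32Exception
                                || ex is PlatformNotSupportedException)
        {
            Console.WriteLine($"WARNING: could not raise process priority: {ex.Message}");
            return false; // benchmark still runs, just at normal priority
        }
    }
}
```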
Hi,
I have tested the generated markdown preview on multiple viewers, but all of them fail to display it properly except the GitHub editor. Is there a compatible viewer for Windows?
Thanks
Instead of just reporting numbers or pass/fail, it would be nice to have some visual indication of the difference between the current and previous runs: a way to see how much better or worse the outcome was.
For example, different shades of gray/green/red, or arrows at different angles.
And possibly also an aggregated view/report where all outputs worth noticing are rendered; just show the significant changes there.
Do you have any plans for this to work against CoreCLR apps especially those that run on OSX/Linux?
Have an Akka.NET throughput spec that fails during setup for reasons unknown at the moment:
[PerfSetup]
public void Setup(BenchmarkContext context)
{
    _remoteMessageThroughput = context.GetCounter(RemoteMessageCounterName);
    System1 = ActorSystem.Create("SystemA" + Counter.Next(), CreateActorSystemConfig("SystemA" + Counter.Current, "127.0.0.1", 0));
    _echo = System1.ActorOf(Props.Create(() => new EchoActor()), "echo");
    System2 = ActorSystem.Create("SystemB" + Counter.Next(), CreateActorSystemConfig("SystemB" + Counter.Next(), "127.0.0.1", 0));
    _receiver =
        System2.ActorOf(
            Props.Create(() => new BenchmarkActor(_remoteMessageThroughput, RemoteMessageCount, _resetEvent)),
            "benchmark");

    var system1Address = RARP.For(System1).Provider.Transport.DefaultAddress;
    var system2Address = RARP.For(System2).Provider.Transport.DefaultAddress;
    var system1EchoActorPath = new RootActorPath(system1Address) / "user" / "echo";
    var system2RemoteActorPath = new RootActorPath(system2Address) / "user" / "benchmark";

    _remoteReceiver =
        System1.ActorSelection(system2RemoteActorPath).Ask<ActorIdentity>(new Identify(null), TimeSpan.FromSeconds(2)).Result.Subject;
    _remoteEcho =
        System2.ActorSelection(system1EchoActorPath)
            .Ask<ActorIdentity>(new Identify(null), TimeSpan.FromSeconds(2))
            .Result.Subject;
}
This code currently throws an exception and causes the NBench.Runner not to proceed to the next test. I think this is probably an issue inside the Benchmark class itself.
For long-running iteration tests. This should be added to the PerfBenchmark attribute.
Hi,
I have created multiple perf methods in a single POCO. I am getting the following output in the console, which contains exceptions:
------------ STARTING TestAndPerf.Perf.PerfLogging+Perf_Logger_Val ----------
ERROR: Error occurred during $TestAndPerf.Perf.PerfLogging+Perf_Logger_Val SETUP.
NBench.NBenchException: error while retrieving counter ---> System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
   at NBench.BenchmarkContext.GetCounter(String name)
   --- End of inner exception stack trace ---
   at NBench.BenchmarkContext.GetCounter(String name)
   at TestAndPerf.Perf.PerfLogging.Setup(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 21
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(BenchmarkContext context)
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(Int64 runCount, BenchmarkContext context)
   at NBench.Sdk.Benchmark.PreRun()
--------------- BEGIN WARMUP ---------------
Elapsed: 00:00:00.6864496
TotalBytesAllocated - bytes: 231,480.00 ,bytes: /s 337,213.35 , ns / bytes: 2,965.48
TotalCollections [Gen2] - collections: 0.00 ,collections: /s 0.00 , ns / collections: 686,449,682.04
[Counter] Counter_Logger_Val - operations: 1.00 ,operations: /s 1.46 , ns / operations: 686,449,682.04
--------------- END WARMUP ---------------
WARNING: Error during previous run of TestAndPerf.Perf.PerfLogging+Perf_Logger_Val. Aborting run...
--------------- RESULTS: TestAndPerf.Perf.PerfLogging+Perf_Logger_Val ---------------
Test to ensure that minimal Througput of __.Val is sufficient
--------------- DATA ---------------
TotalBytesAllocated: Max: 231,480.00 bytes, Average: 231,480.00 bytes, Min: 231,480.00 bytes, StdDev: 0.00 bytes
TotalBytesAllocated: Max / s: 337,213.35 bytes, Average / s: 337,213.35 bytes, Min / s: 337,213.35 bytes, StdDev / s: 0.00 bytes
TotalCollections [Gen2]: Max: 0.00 collections, Average: 0.00 collections, Min: 0.00 collections, StdDev: 0.00 collections
TotalCollections [Gen2]: Max / s: 0.00 collections, Average / s: 0.00 collections, Min / s: 0.00 collections, StdDev / s: 0.00 collections
[Counter] Counter_Logger_Val: Max: 1.00 operations, Average: 1.00 operations, Min: 1.00 operations, StdDev: 0.00 operations
[Counter] Counter_Logger_Val: Max / s: 1.46 operations, Average / s: 1.46 operations, Min / s: 1.46 operations, StdDev / s: 0.00 operations
--------------- ASSERTIONS ---------------
[FAIL] Expected [Counter] Counter_Logger_Val to must be greater than 1,000,000.00 operations; actual value was 1.46 operations.
[PASS] Expected TotalBytesAllocated to must be less than or equal to 16,384,000.00 bytes; actual value was 231,480.00 bytes.
[PASS] Expected TotalCollections [Gen2] to must be exactly 0.00 collections; actual value was 0.00 collections.
--------------- EXCEPTIONS ---------------
NBench.NBenchException: Error occurred during $TestAndPerf.Perf.PerfLogging+Perf_Logger_Val SETUP. ---> NBench.NBenchException: error while retrieving counter ---> System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
   at NBench.BenchmarkContext.GetCounter(String name)
   --- End of inner exception stack trace ---
   at NBench.BenchmarkContext.GetCounter(String name)
   at TestAndPerf.Perf.PerfLogging.Setup(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 21
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(BenchmarkContext context)
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(Int64 runCount, BenchmarkContext context)
   at NBench.Sdk.Benchmark.PreRun()
   --- End of inner exception stack trace ---
## Exceptions
------------ FINISHED TestAndPerf.Perf.PerfLogging+Perf_Logger_Val ----------
------------ STARTING TestAndPerf.Perf.PerfLogging+Test_Logger_DB ----------
ERROR: Error occurred during $TestAndPerf.Perf.PerfLogging+Test_Logger_DB SETUP.
NBench.NBenchException: error while retrieving counter ---> System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
   at NBench.BenchmarkContext.GetCounter(String name)
   --- End of inner exception stack trace ---
   at NBench.BenchmarkContext.GetCounter(String name)
   at TestAndPerf.Perf.PerfLogging.Setup(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 20
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(BenchmarkContext context)
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(Int64 runCount, BenchmarkContext context)
   at NBench.Sdk.Benchmark.PreRun()
ERROR: Error occurred during $TestAndPerf.Perf.PerfLogging+Test_Logger_DB RUN.
System.NullReferenceException: Object reference not set to an instance of an object.
   at TestAndPerf.Perf.PerfLogging.Test_Logger_DB(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 51
   at NBench.Sdk.Benchmark.RunBenchmark()
   at NBench.Sdk.Benchmark.RunSingleBenchmark()
--------------- BEGIN WARMUP ---------------
Elapsed: 00:00:00.0077538
TotalBytesAllocated - bytes: -6,595,792.00 ,bytes: /s -16,066,096,111,520.00 , ns / bytes: 410.54
TotalCollections [Gen2] - collections: 0.00 ,collections: /s 0.00 , ns / collections: 410.54
[Counter] Counter_Logger_DB - operations: 0.00 ,operations: /s 0.00 , ns / operations: 410.54
--------------- END WARMUP ---------------
WARNING: Error during previous run of TestAndPerf.Perf.PerfLogging+Test_Logger_DB. Aborting run...
--------------- RESULTS: TestAndPerf.Perf.PerfLogging+Test_Logger_DB ---------------
Test to ensure that minimal Througput of __.DB is sufficient
--------------- DATA ---------------
TotalBytesAllocated: Max: -6,595,792.00 bytes, Average: -6,595,792.00 bytes, Min: -6,595,792.00 bytes, StdDev: 0.00 bytes
TotalBytesAllocated: Max / s: -16,066,096,111,520.00 bytes, Average / s: -16,066,096,111,520.00 bytes, Min / s: -16,066,096,111,520.00 bytes, StdDev / s: 0.00 bytes
TotalCollections [Gen2]: Max: 0.00 collections, Average: 0.00 collections, Min: 0.00 collections, StdDev: 0.00 collections
TotalCollections [Gen2]: Max / s: 0.00 collections, Average / s: 0.00 collections, Min / s: 0.00 collections, StdDev / s: 0.00 collections
[Counter] Counter_Logger_DB: Max: 0.00 operations, Average: 0.00 operations, Min: 0.00 operations, StdDev: 0.00 operations
[Counter] Counter_Logger_DB: Max / s: 0.00 operations, Average / s: 0.00 operations, Min / s: 0.00 operations, StdDev / s: 0.00 operations
--------------- ASSERTIONS ---------------
[FAIL] Expected [Counter] Counter_Logger_DB to must be greater than 1,000,000.00 operations; actual value was 0.00 operations.
[PASS] Expected TotalBytesAllocated to must be less than or equal to 16,384,000.00 bytes; actual value was -6,595,792.00 bytes.
[PASS] Expected TotalCollections [Gen2] to must be exactly 0.00 collections; actual value was 0.00 collections.
--------------- EXCEPTIONS ---------------
NBench.NBenchException: Error occurred during $TestAndPerf.Perf.PerfLogging+Test_Logger_DB SETUP. ---> NBench.NBenchException: error while retrieving counter ---> System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
   at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
   at NBench.BenchmarkContext.GetCounter(String name)
   --- End of inner exception stack trace ---
   at NBench.BenchmarkContext.GetCounter(String name)
   at TestAndPerf.Perf.PerfLogging.Setup(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 20
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(BenchmarkContext context)
   at NBench.Sdk.ReflectionBenchmarkInvoker.InvokePerfSetup(Int64 runCount, BenchmarkContext context)
   at NBench.Sdk.Benchmark.PreRun()
   --- End of inner exception stack trace ---
NBench.NBenchException: Error occurred during $TestAndPerf.Perf.PerfLogging+Test_Logger_DB RUN. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at TestAndPerf.Perf.PerfLogging.Test_Logger_DB(BenchmarkContext context) in D:\Development\Production-WS\Infrastructure\TestAndPerf\Perf\PerfLogging.cs:line 51
   at NBench.Sdk.Benchmark.RunBenchmark()
   at NBench.Sdk.Benchmark.RunSingleBenchmark()
   --- End of inner exception stack trace ---
## Exceptions
------------ FINISHED TestAndPerf.Perf.PerfLogging+Test_Logger_DB ----------
I have used the following code; please help if there is an issue in my setup or if I am missing something:
public class PerfLogging
{
    private Counter Counter_Logger_Val;
    private Counter Counter_Logger_DB;

    [PerfSetup]
    public void Setup(BenchmarkContext context)
    {
        Counter_Logger_Val = context.GetCounter("Counter_Logger_Val");
        Counter_Logger_DB = context.GetCounter("Counter_Logger_DB");
    }

    [PerfBenchmark(Description = "Test to ensure that minimal Througput of __.Val is sufficient",
        NumberOfIterations = 3, RunMode = RunMode.Throughput,
        RunTimeMilliseconds = 1000, TestMode = TestMode.Test)]
    [CounterThroughputAssertion("Counter_Logger_Val", MustBe.GreaterThan, 1000000d)]
    [MemoryAssertion(MemoryMetric.TotalBytesAllocated, MustBe.LessThanOrEqualTo, ByteConstants.SixteenKb * 1000)]
    [GcTotalAssertion(GcMetric.TotalCollections, GcGeneration.Gen2, MustBe.ExactlyEqualTo, 0.0d)]
    public void Perf_Logger_Val(BenchmarkContext context)
    {
        Assert.Equal(__.Val(Logger.Level.NotSpecified), "NotSpecified");
        string[] values = { "", null, "A" };
        Assert.Equal(__.Val(values), "A");
        Assert.Equal(__.Val(false, values), "");
        Counter_Logger_Val.Increment();
    }

    [PerfBenchmark(Description = "Test to ensure that minimal Througput of __.DB is sufficient",
        NumberOfIterations = 3, RunMode = RunMode.Throughput,
        RunTimeMilliseconds = 1000, TestMode = TestMode.Test)]
    [CounterThroughputAssertion("Counter_Logger_DB", MustBe.GreaterThan, 1000000d)]
    [MemoryAssertion(MemoryMetric.TotalBytesAllocated, MustBe.LessThanOrEqualTo, ByteConstants.SixteenKb * 1000)]
    [GcTotalAssertion(GcMetric.TotalCollections, GcGeneration.Gen2, MustBe.ExactlyEqualTo, 0.0d)]
    public void Test_Logger_DB(BenchmarkContext context)
    {
        Assert.Equal(__.DB("DB", "USER", "PASSWORD"), "USER/********@DB");
        typeof(Options).GetField("_ShowPassword", System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static).SetValue(null, 1);
        Assert.Equal(__.DB("DB", "USER", "PASSWORD"), "USER/PASSWORD@DB");
        typeof(Options).GetField("_ShowPassword", System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static).SetValue(null, 0);
        Counter_Logger_DB.Increment();
    }
}
Any help would be appreciated. Thanks!
Ran into a fun issue on DotNetty Azure/DotNetty#95 where it looked like a change in the underlying hardware we're running NBench on affected the benchmark significantly with no explanation, thanks to our use of auto-scaling build agents on Windows Azure.
We really need to expand the hardware profile captured by SystemInfo and include much more detail about:
That way it'll be easier to tell whether it was a code change or a hardware change that resulted in a divergence in a metric, and not cause developers to panic when it's the latter.
Hi!
I am trying to write perf tests for a new implementation of some code, and I want to compare its performance with the old version.
Ideally I want to mark one benchmark as a baseline and specify for the others that there will be (let's say) no more than 0.9x the time and no more than 1.2x the allocations compared to the baseline benchmark.
Also, if possible, I want to ignore peaks and assert on the 90th-95th percentile of samples.
Is something like this supported/possible with NBench? Thanks!
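As far as I can tell, NBench's built-in assertions compare against fixed numeric thresholds, so the baseline and percentile parts would have to live outside the framework. The percentile piece at least is easy to compute from collected samples; a sketch using the nearest-rank method:

```csharp
using System;
using System.Linq;

public static class Percentiles
{
    // Nearest-rank percentile: the smallest sample such that at least p percent
    // of the samples are <= it. Asserting on the 95th percentile instead of the
    // max means a single peak can't fail the benchmark.
    public static double Percentile(double[] samples, double p)
    {
        var sorted = samples.OrderBy(x => x).ToArray();
        var rank = (int)Math.Ceiling(p / 100.0 * sorted.Length) - 1;
        return sorted[Math.Max(rank, 0)];
    }
}
```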
This will probably be necessary to implement #59, but in general this would make debugging large performance test suites easier.
It would be useful to be able to write data from within the benchmark out to the final output of the benchmark itself via BenchmarkContext, especially for larger "integration" benchmarks.
Looks like we have some sort of N+1 error inside the warmup system for Benchmark that causes the first actual warmup after the pre-warmup to not collect any metrics. Whoops.
Instead of throwing an exception, display a help message and a list of possible commands on NBench.Runner.exe
Reflection, delegate compilation, Dictionaries in builder, etc
Was just having a look around for a better solution for our performance tests - NBench looks like a pretty interesting option!
Spotted a typo in the readme, just thought I'd point it out. I'd correct it, but I'm not sure what it's meant to say! In the Benchmark Modes section:
During a Throughput benchmark the `Benchmark
Thanks - will be back if I hit further problems.
Should have an end-to-end integration test that actually runs as part of a FAKE build.
Hi,
I'm trying to run NBench.Runner against a signed assembly. I generated signatures for NBench.dll and NBench.Runner.exe with the same key as my project, using the ildasm and ilasm tools as described in http://buffered.io/posts/net-fu-signing-an-unsigned-assembly-without-delay-signing/. When I run NBench.Runner.exe from my command prompt I see this error:
System.IO.FileLoadException: Could not load file or assembly 'NBench, Version=0.1.6.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. A strongly-named assembly is required. (Exception from HRESULT: 0x80131044) in NBench.Runner.Program.Main(String[] args).
However, all references to NBench.dll and NBench.Runner.exe are signed, so I do not know why the error occurs. Please help me with this issue.
Thanks.
Right now ReflectionDiscovery will attempt to create Benchmarks for classes marked abstract, which obviously won't work when we try to create an instance of the class via Activator.
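A plausible fix is to filter out types that Activator.CreateInstance can't construct before building Benchmark instances. A sketch of the filter (not the actual ReflectionDiscovery code); `assembly.GetTypes()` would feed the candidate list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Discovery
{
    // Abstract classes and open generic types can't be instantiated via
    // Activator.CreateInstance, so discovery should skip them.
    public static Type[] InstantiableTypes(IEnumerable<Type> candidates) =>
        candidates
            .Where(t => t.IsClass && !t.IsAbstract && !t.ContainsGenericParameters)
            .ToArray();
}
```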
Get the following exception if you try to run NBench.Runner via NuGet otherwise:
Using NBench.Runner: D:\Repositories\olympus\DedicatedThreadPool\src\packages\NBench.Runner\lib\net45\NBench.Runner.exe
D:\Repositories\olympus\DedicatedThreadPool\src\packages\NBench.Runner\lib\net45\NBench.Runner.exe "D:\Repositories\olympus\DedicatedThreadPool\src\tests\Helios.DedicatedThreadPool.Tests.Performance\bin\Release\Helios.DedicatedThreadPool.Tests.Performance.dll" "output-directory="D:\Repositories\olympus\DedicatedThreadPool\PerfResults""
Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'NBench, Version=0.1.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
at NBench.Runner.Program.Main(String[] args)
Running build failed.
Error:
System.Exception: NBench.Runner failed. D:\Repositories\olympus\DedicatedThreadPool\src\packages\NBench.Runner\lib\net45\NBench.Runner.exe "D:\Repositories\olympus\DedicatedThreadPool\src\tests\Helios.DedicatedThreadPool.Tests.Performance\bin\Release\Helios.DedicatedThreadPool.Tests.Performance.dll" "output-directory="D:\Repositories\olympus\DedicatedThreadPool\PerfResults""
at [email protected](String message) in D:\Repositories\olympus\DedicatedThreadPool\build.fsx:line 127
at Microsoft.FSharp.Collections.SeqModule.Iterate[T](FSharpFunc`2 action, IEnumerable`1 source)
at [email protected](Unit _arg7) in D:\Repositories\olympus\DedicatedThreadPool\build.fsx:line 129
at Fake.TargetHelper.runTarget@314(String targetName) in D:\code\fake\src\app\FakeLib\TargetHelper.fs:line 325
Seeing some instances of benchmarks where we have a major StdDev disparity between two runs of the same benchmark:
[03:43:39][Step 1/1] ------------ STARTING NBench.Tests.Performance.ThroughputLoopPerformanceSpec+ForLoop ----------
[03:43:39][Step 1/1] --------------- BEGIN WARMUP ---------------
[03:43:39][Step 1/1] Elapsed: 00:00:00.6828316
[03:43:39][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 54,918,372.26 , ns / operations: 18.21
[03:43:39][Step 1/1] --------------- END WARMUP ---------------
[03:43:39][Step 1/1]
[03:43:41][Step 1/1] --------------- BEGIN RUN ---------------
[03:43:41][Step 1/1] Elapsed: 00:00:00.6704073
[03:43:41][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 55,936,145.09 , ns / operations: 17.88
[03:43:41][Step 1/1] --------------- END RUN ---------------
[03:43:41][Step 1/1]
[03:46:02][Step 1/1] --------------- BEGIN RUN ---------------
[03:46:02][Step 1/1] Elapsed: 00:02:20.9072606
[03:46:02][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 266,132.49 , ns / operations: 3,757.53
TROUBLE
[03:46:02][Step 1/1] --------------- END RUN ---------------
[03:46:02][Step 1/1]
[03:46:02][Step 1/1] --------------- BEGIN RUN ---------------
[03:46:02][Step 1/1] Elapsed: 00:00:00.6566757
[03:46:02][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 57,105,813.42 , ns / operations: 17.51
[03:46:02][Step 1/1] --------------- END RUN ---------------
[03:46:02][Step 1/1]
[03:46:03][Step 1/1] --------------- BEGIN RUN ---------------
[03:46:03][Step 1/1] Elapsed: 00:00:00.6865853
[03:46:03][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 54,618,122.47 , ns / operations: 18.31
[03:46:03][Step 1/1] --------------- END RUN ---------------
[03:46:03][Step 1/1]
[03:48:21][Step 1/1] --------------- BEGIN RUN ---------------
[03:48:21][Step 1/1] Elapsed: 00:02:18.1465176
[03:48:21][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 271,450.93 , ns / operations: 3,683.91
[03:48:21][Step 1/1] --------------- END RUN ---------------
[03:48:21][Step 1/1]
[03:48:22][Step 1/1] --------------- BEGIN RUN ---------------
[03:48:22][Step 1/1] Elapsed: 00:00:00.6592115
[03:48:22][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 56,886,143.52 , ns / operations: 17.58
[03:48:22][Step 1/1] --------------- END RUN ---------------
[03:48:22][Step 1/1]
[03:50:44][Step 1/1] --------------- BEGIN RUN ---------------
[03:50:44][Step 1/1] Elapsed: 00:02:21.7176385
[03:50:44][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 264,610.68 , ns / operations: 3,779.14
TROUBLE
[03:50:44][Step 1/1] --------------- END RUN ---------------
[03:50:44][Step 1/1]
[03:50:44][Step 1/1] --------------- BEGIN RUN ---------------
[03:50:44][Step 1/1] Elapsed: 00:00:00.6883052
[03:50:44][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 54,481,645.64 , ns / operations: 18.35
[03:50:44][Step 1/1] --------------- END RUN ---------------
[03:50:44][Step 1/1]
[03:50:45][Step 1/1] --------------- BEGIN RUN ---------------
[03:50:45][Step 1/1] Elapsed: 00:00:00.6448187
[03:50:45][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 58,155,881.65 , ns / operations: 17.20
[03:50:45][Step 1/1] --------------- END RUN ---------------
[03:50:45][Step 1/1]
[03:50:46][Step 1/1] --------------- BEGIN RUN ---------------
[03:50:46][Step 1/1] Elapsed: 00:00:00.6796817
[03:50:46][Step 1/1] [Counter] TestCounter - operations: 37,500,000.00 ,operations: /s 55,172,884.60 , ns / operations: 18.12
[03:50:46][Step 1/1] --------------- END RUN ---------------
[03:50:46][Step 1/1]
[03:50:46][Step 1/1] --------------- RESULTS: NBench.Tests.Performance.ThroughputLoopPerformanceSpec+ForLoop ---------------
[03:50:46][Step 1/1] --------------- DATA ---------------
[03:50:46][Step 1/1] [Counter] TestCounter: Max: 37,500,000.00 operations, Average: 37,500,000.00 operations, Min: 37,500,000.00 operations, StdDev: 0.00 operations
[03:50:46][Step 1/1] [Counter] TestCounter: Max / s: 58,155,881.65 operations, Average / s: 39,315,883.05 operations, Min / s: 264,610.68 operations, StdDev / s: 26,969,798.52 operations
Could just be background noise, and I may just need to increase the number of iterations of each benchmark; or we could be doing something that isn't a best practice and leaves us susceptible to JIT time or GC overhead mid-benchmark.
commas be good
It would be really nice to be able to debug these NBench tests inside visual studio!
akkadotnet/akka.net#1999 - case in point.
Tests how quickly ICanTell.Ask operations can be performed, and with how much memory
5/30/2016 8:45:00 PM
NBench=NBench, Version=0.3.0.0, Culture=neutral, PublicKeyToken=null
OS=Microsoft Windows NT 6.2.9200.0
ProcessorCount=2
CLR=4.0.30319.42000,IsMono=False,MaxGcGeneration=2
WorkerThreads=32767, IOThreads=2
RunMode=Throughput, TestMode=Measurement
NumberOfIterations=3, MaximumRunTime=00:00:05
Concurrent=True
Tracing=True
Metric | Units | Max | Average | Min | StdDev |
---|---|---|---|---|---|
TotalBytesAllocated | bytes | 24,501,752.00 | 24,482,493.33 | 24,464,360.00 | 18,721.38 |
[Counter] AskReplies | operations | 28,311.00 | 28,311.00 | 28,311.00 | 0.00 |
Metric | Units / s | Max / s | Average / s | Min / s | StdDev / s |
---|---|---|---|---|---|
TotalBytesAllocated | bytes | 61,361,105.95 | 58,372,460.41 | 54,754,466.33 | 3,347,981.70 |
[Counter] AskReplies | operations | 71,009.19 | 67,501.21 | 63,319.73 | 3,888.72 |
The benchmark ran for 5 seconds, so the operations / second figure should be lower than the total operation count: 28,311 operations over roughly 5 seconds is about 5,662 operations / second, not the 67,501 reported above.
Useful for long running benchmarks and stress tests.
I noticed that the current package from NuGet doesn't include the XML comments file, so IntelliSense isn't as rich when using types from NBench.dll.
To run my unit tests/benchmarks, my code depends on configuration made in a config file (app.config). Please support loading those files, either implicitly by assembly name + ".config" or explicitly via a CLI argument.
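On .NET Framework the implicit convention could be wired up when the runner creates the domain that loads the benchmark assembly, by pointing `AppDomainSetup.ConfigurationFile` at `<assembly>.config`. A sketch with assumed names, not the runner's actual code:

```csharp
using System;

public static class ConfigConvention
{
    // Convention: "Akka.Benchmarks.dll" -> "Akka.Benchmarks.dll.config".
    public static string ConfigPathFor(string assemblyPath) => assemblyPath + ".config";

    // Give the test domain the benchmark assembly's own config instead of
    // NBench.Runner.exe.config (requires .NET Framework AppDomains).
    public static AppDomain CreateDomainWithConfig(string assemblyPath)
    {
        var setup = new AppDomainSetup
        {
            ConfigurationFile = ConfigPathFor(assemblyPath)
        };
        return AppDomain.CreateDomain("NBench-with-config", null, setup);
    }
}
```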
Need an IBenchmarkOutput target that writes to a markdown file.
Related to #48: if an abstract base class declares a PerfBenchmark attribute on any of its methods, those don't show up in the discovery list produced by ReflectionDiscovery for child classes that concretely implement the abstract class.
Possible idea to make it easier to understand the output of performance tests: expose the IBenchmarkOutput as part of BenchmarkContext so users can write their own log messages to the output, especially for troubleshooting issues during PerfSetup and PerfCleanup.
Assertion data does not appear to match what's printed in the final results.
This will enable <gcServer enabled="true"/> for all benchmark runs, which is helpful for multi-threaded benchmarks.
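For reference, server GC is switched on in the runner's app.config (presumably NBench.Runner.exe.config) like this:

```xml
<configuration>
  <runtime>
    <!-- Server GC: one GC heap per core, better for multi-threaded benchmarks. -->
    <gcServer enabled="true"/>
    <!-- Often paired with concurrent GC disabled for more deterministic runs. -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```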
Don't need it
Maybe I haven't understood what it's supposed to do, but if you take your sample code, add Skip="whatever", and then run NBench.Runner on it, it still gets run.