jeffrmoore / async-benchmark-runner

A benchmark runner for Node focusing on measuring elapsed time and memory usage for promise-based asynchronous code.

License: MIT License
Need to go through the compiled code to make sure that a babel-processed version would not introduce unexpected effects in the benchmark runner itself.
Currently an error thrown in a benchmark ends the entire run. Handling errors gracefully might involve adding some support library for this case.
Currently opsPerSample and numSamples are hard-coded values. Also, the calibration sections of the documentation are confusing.
A less confusing set of options would be:
- minCyclesPerSample
- maxCyclesPerSample
- minTimePerSample
- maxTimePerSample
This set of options would allow ABR to automatically choose the correct number of ops per sample, so that the benchmark runs in the minimum amount of time required to produce stable results.
Additionally, adding:
- minSamples
- maxSamples
- target-variance
would allow ABR to automatically determine the best numSamples, again using the minimum amount of runtime needed to achieve stable results.
Maybe.
The benefit is that these would be global benchmark options and would not require per-benchmark configuration.
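Taken together, the two groups of proposed options might look something like this. This is a sketch of a possible shape only: the names come from the notes above ("target-variance" camelCased), and the default values are invented for illustration, not ABR's actual API.

```javascript
// Hypothetical global runner options; values are illustrative defaults.
const runnerOptions = {
  // Auto-tuning ops per sample:
  minCyclesPerSample: 1,
  maxCyclesPerSample: 10000,
  minTimePerSample: 50,    // ms; long enough to dwarf timer resolution error
  maxTimePerSample: 1000,  // ms; caps total benchmark runtime

  // Auto-tuning the number of samples:
  minSamples: 25,
  maxSamples: 500,
  targetVariance: 0.01,    // stop sampling once variance drops below this
};
```

Because these are global, a runner could apply them to every benchmark in a suite without any per-benchmark configuration.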
ABR relies on the --expose-gc option in the shebang line of its CLI entry point. This does not work on Linux. A similar issue occurred when the --harmony flag was in use, and it was solved by wrapping the CLI in a shell script; the same solution can probably be used here.
Would be nice. Might have architectural considerations to accomplish.
Looks like we'll have a way of getting detailed performance and gc information. Rewrite.
https://medium.com/the-node-js-collection/timing-is-everything-6d43fc9fd416
I believe this will make the results more stable with fewer samples and reduce the need for confusing calibration and jitter calming.
It's also a complete rewrite of the benchmark scheduling, which is OK because the chain of callbacks is hard to understand anyway and is mostly an artifact of earlier iterations that are no longer relevant.
Easier to see the significance of changes. A filter so that low-magnitude but statistically significant changes can be ignored.
It should be possible to run a benchmark for an infinite amount of time. Look for leaks in the benchmark scheduling and running process that might prevent that.
The current object-definition style does not allow the use of JavaScript's this keyword. Developers would probably feel more comfortable with a declaration style that more closely matches unit-test declaration styles.
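For example, a mocha-style declaration might look like the following. The benchmark function here is a stand-in stub, not ABR's API, included only to make the sketch self-contained:

```javascript
// Hypothetical declaration style modeled on unit-test frameworks, so that
// setup can hang per-benchmark state off `this`.
function benchmark(name, fn) {
  const ctx = {}; // per-benchmark context, bound as `this`
  return Promise.resolve(fn.call(ctx)).then((result) => ({ name, result }));
}

const run = benchmark('array map', function () {
  this.items = new Array(1000).fill(1); // setup state on `this`
  return this.items.map((x) => x + 1);  // the measured operation
});
```

Note the function expression rather than an arrow function, so that `this` is bound to the benchmark's context.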
Mean is the wrong statistic. What does it mean to say something used a fractional number of bytes? Switching to mode allows the Margin of Error to also be removed, cleaning up and simplifying the reports.
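A sketch of the idea: memory samples are whole numbers of bytes, so reporting the most frequent value sidesteps fractional-byte means entirely. This is illustrative, not ABR's reporting code:

```javascript
// Return the most frequent value in a list of discrete samples.
function mode(samples) {
  const counts = new Map();
  let best = samples[0];
  for (const s of samples) {
    const c = (counts.get(s) || 0) + 1;
    counts.set(s, c);
    if (c > (counts.get(best) || 0)) best = s;
  }
  return best;
}

mode([4096, 8192, 4096, 4096, 8192]); // → 4096
```

Since the mode is an actual observed value, there is no estimate to qualify, which is what lets the Margin of Error column be dropped from the reports.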
Maybe switch to commander?
Currently ABR cannot add the --trace_gc option during run-benchmark. To use this feature, you must edit the file directly, adding the option to the shebang line for the script. Options on the shebang line are known to not be supported on Linux. A future ABR version will use a shell script instead of a node script to launch benchmarks, eliminating this issue.
Critical requirement for running in a CI process