Comments (13)
@sinclairzx81 @moltar or anyone else volunteering for this? Otherwise I'm going to try to implement @sinclairzx81's suggestions.
from typescript-runtime-type-benchmarks.
@sinclairzx81 could you please try the changes from my PR?
When you execute `npm start` or `npm run start` or `./start.sh`, benchmarks are now run for each module in their own npm process and the results are aggregated.
You can also do `npm run start run zod myzod` to only benchmark the listed modules at a time.
And most important, does the bias of the ordering go away or does it persist?
Thx for the feedback @sinclairzx81 😊 and I am glad that this removed the result bias. I will fix up the linter issues with the PR tomorrow so that we can merge it.
@hoeck amazing work, as usual! 👍🏼
I'm wondering if we should temporarily add a notice banner at the top explaining this change, maybe with a link to this issue for details?
Hey @sinclairzx81, thank you for your feedback.
This is certainly a serious issue.
I am not an expert on Node internals or how GC works, beyond the CS 201 type of information :)
Do you have any proposals that are specific to this repo?
I had an idea to run every test in a separate GitHub Action job, which would provide the isolation, but then I was worried about the jobs being placed on different machines with different specs.
Running them in a separate process on the same machine sounds like good middle ground.
Wondering what it would take to do so, given our current setup.
Is there anything, perhaps, in the settings of `benny` we could tweak to equalize the testing environment? Maybe @caderek can recommend something?
@hoeck Hi, that's great news :) I wasn't planning on undertaking these updates myself (as that work is best left to those benchmarking), but I can help with the PR process by running some local tests and providing feedback prior to merge, if that's helpful.
Keep me posted!
> Otherwise I'm going to try to implement
That would be great! 👍🏼 😁
@hoeck Hi sure, just ran your branch and it looks like that did the trick. Seems many of the libraries there are giving much better results across the board now, with some crossing the 1 million operations per second mark, which is nice to see! Hopefully the updates weren't too bad to implement. Also +1 for adding the ability to run individual cases, very useful!
Good work mate!
S
@moltar Hi, thanks for the quick reply.
I agree that each test should be run on the same machine (not distributed across different machines on CI). I guess my initial thoughts on this were to just run each test in its own V8 isolate. I guess this could be achieved a couple of ways, but perhaps the easiest is to have a CLI command that is able to target an individual test. This would enable something like the following in `package.json`:
```js
// runs spectypes only
{
  "scripts": {
    "start": "npm run compile:spectypes && ts-node index.ts --test spectypes"
  }
}
```
So, to run the complete test suite, you could just extend the `start` script:
```js
// runs everything
{
  "scripts": {
    "start": "npm run compile:spectypes && ts-node index.ts --test spectypes && ts-node index.ts --test zod && ... (omitted)"
  }
}
```
As the `start` command would likely get prohibitively long, Google's `zx` package could be utilized to run a JavaScript-based shell execution script that runs each test and awaits them in turn.
```js
// runner.mjs

// build spectypes
await $`npm run compile:spectypes`

// load cases somehow, and execute in series
// (with each test run as a separate OS process)
for (const testCase of loadCases()) {
  await $`ts-node index.ts --test ${testCase}`
}
```

```sh
$ npx zx runner.mjs
```
I had a quick look through the code; I guess the only complication that might arise is handling aggregate results (writing benchmark output to `node1x.json`, which gets sourced in the UI later on). Not sure if there is going to be an issue with each test run in isolation.
Open to thoughts.
Cheers
S
> run on the same machine

👍

> a cli command that is able to target an individual test

👍
> I had a quick look through the code, I guess the only complication that might arise is handling aggregate results (writing benchmark output to `node1x.json` which gets sourced in the UI later on). Not sure if there is going to be an issue with each test run in isolation.
The results json format is pretty simple: an object in an array for each benchmark/module combination. That can be easily rewritten such that every isolated benchmark run appends its individual result to an already existing results json file. Benchmarks will obviously not run in parallel so there won't be any file transaction issues.
Two more thoughts/questions that come to my mind are:
- Should we run all benchmarks of a module (assertString, parseSafe etc) in the same process or isolate that too?
- Should only the chosen module be loaded or is it okay to load all modules (zod, io-ts, ...) even though only a single one is benchmarked?
@hoeck Hi!
> Should we run all benchmarks of a module (assertString, parseSafe etc) in the same process or isolate that too?
This is a good question. I'm thinking maybe the best way to orchestrate these tests might be to organize the execution of the tests by package (vs. by test). So in this regard, the process would start for a given validation package (i.e. myzod), then execute all the tests for that validation package, then exit. The next package would be run immediately thereafter (in its own process).
The thinking here is that, if performance degrades during the tests run for a single package, that's generally going to be a consequence of that package throttling V8 in some way. Package authors may be able to identify bottlenecks in throughput if they know the sequence in which the tests were run, and if the results show clear degradation across the tests run in sequence.
> Should only the chosen module be loaded or is it okay to load all modules (zod, io-ts, ...) even though only a single one is benchmarked?
It might be a good idea to omit packages from `import` if they are not used in the test. It's probably going to be fine either way, but I guess a potential exists for packages to cache acceleration data structures on `import`, and such caching could bias results.
I should note, in the testing I've done so far, I have been importing all the tests in the suite and haven't experienced a problem with them all being there. Degradation seems more tied to order of execution; the importing of these packages seems okay.
Hope this helps!
S
> I'm wondering if we shall add temporarily a notice banner at the top explaining this change, with maybe a link to this issue for details?
That makes sense, I'll add a simple info message on top of the readme.