Comments (3)
@samchon Hi, just going to chime in on this one.
While I certainly think this project could benefit from additional benchmarks and tests, I do not think these should be submitted by library authors: the benefit and value of community projects like this comes primarily from external, independent contributors submitting tests without author intervention. This is specifically to try to establish an accurate lens on performance and to mitigate the potential for bias.
On this point, if the typia benchmarks are being put forth (having reviewed them independently, as well as having submitted TypeBox schematics to Typia here, here and here for alignment and comparative measurement), I do not feel they would be a good candidate for cross-library benchmarking or testing, for the following reasons:
- The schematics are highly coupled to the performance and assertion criteria as implemented in typia.
- The schematics are arbitrarily complex and not very helpful when trying to identify where performance disparities exist.
- The schematics would rule out a significant number of libraries contributed to this project.
- This project (afaik) is a benchmarking project, NOT a validation unit test suite.
Also, for the reporting table, I do feel quite strongly about not showing RED marks next to each project listed here, particularly if the testing criteria depend on each library adopting specific assertion criteria as implemented by typia (there is much room for interpreting validation semantics differently across libraries).
Again, while I'm certainly for the idea of seeing additional benchmarks (or tests) added here, I do feel these should ideally be defined independently (and openly), with the validation criteria made clear and set low enough that all libraries currently submitted can participate. In addition, if more sophisticated schematics are deemed warranted (in which I have some interest), my preference would be to omit failing projects from result tables rather than marking them as RED, which may be publicly discouraging to project authors who have contributed their free time and effort to this arena.
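To make the point about interpretation concrete, here is a minimal, self-contained TypeScript sketch (hand-written validators, not any library's actual implementation) showing how two equally defensible readings of "valid" can disagree on the same input:

```typescript
// Hypothetical illustration: two validators for the same schema can
// disagree purely on semantics, not on correctness.
interface User {
  id: number;
  name: string;
}

// "Loose" semantics: only checks that required properties exist and
// are well-typed; unknown extra keys are tolerated.
function looseCheck(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "number" && typeof v.name === "string";
}

// "Strict" semantics: additionally rejects unknown keys -- also a
// legitimate reading of "valid" for the very same schema.
function strictCheck(value: unknown): value is User {
  if (!looseCheck(value)) return false;
  const allowed = new Set(["id", "name"]);
  return Object.keys(value as object).every((key) => allowed.has(key));
}

const input = { id: 1, name: "ada", role: "admin" };
console.log(looseCheck(input));  // true
console.log(strictCheck(input)); // false -- same input, different verdict
```

If a benchmark's pass/fail criteria silently assume one of these semantics, libraries that chose the other are marked wrong despite behaving as designed.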
For establishing a "minimum viable suite" of schematics, I think what will be less divisive is a collaborative effort where interested parties can define clearly what the schematics are, what they measure, and what techniques may be applicable to attain better performance (possibly through GH discussions). This would set fair and reasonable performance criteria and hopefully help other developers attain robust, high-performance assertions in their respective projects, mine included.
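As one hedged illustration of what such a collaboratively defined entry might look like, here is a sketch of a library-neutral schematic plus a minimal timing harness, assuming a plain Node.js runtime. All names here (`Schematic`, `referenceCheck`, `bench`) are hypothetical and not taken from the actual repository:

```typescript
// A deliberately simple schematic: the lowest common denominator that
// every submitted library should be able to validate.
interface Schematic {
  number: number;
  string: string;
  boolean: boolean;
}

// Hand-written reference validator documenting the agreed pass/fail
// criteria in plain code, independent of any library.
function referenceCheck(value: unknown): value is Schematic {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.number === "number" &&
    typeof v.string === "string" &&
    typeof v.boolean === "boolean"
  );
}

// Tiny timing loop reporting operations per second for one validator.
function bench(
  name: string,
  check: (value: unknown) => boolean,
  data: unknown,
  iterations = 1_000_000
): { name: string; opsPerSec: number } {
  for (let i = 0; i < 1_000; i++) check(data); // warm-up
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) check(data);
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return { name, opsPerSec: iterations / (elapsedNs / 1e9) };
}

const sample = { number: 1, string: "hello", boolean: true };
console.log(bench("reference", referenceCheck, sample));
```

The point of the reference validator is that the criteria are written down once, openly, and each library's adapter is then measured against the same explicit bar rather than against another library's internal semantics.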
Just some of my thoughts on this one.
S
from typescript-runtime-type-benchmarks.
@samchon I appreciate your involvement and I know you have put a lot of effort into thinking about this. Thank you for your contribution!
I do largely agree with @sinclairzx81 that the idea is to keep tests as impartial as possible; that's why they have remained so primitive up to this point.
I think the way to move forward is to discuss each independent test addition we'd like to make as a separate issue, and to try to evaluate what value each test adds and how it will affect the rest of the tests.
Again, I am not against change, but we need to think about it more holistically. Tbh, my initial test suite was not thought out much at all; I just cobbled together some rough ideas and off it went to be released.
@marcj do you have any input? I remember you had some strong opinions before, too. Thanks!
Related Issues (20)
- Add `vality`
- feat(package): `parse-dont-validate`
- Failing build
- How do I add a testcase (Request: Documentation on that)
- Add `@Typia`
- Add `caketype`
- Add `@fp-ts/schema`
- Add `@gapstack/light-type`
- Add `arktypeio/arktype`
- Node 20
- Preview SVG image has been broken
- Show operations/s as a number for each benchmark
- Show weekly npm downloads for each package
- Add `valibot`
- Include failing data in the benchmarks
- ParseSafe for Zod calls parse() not safeParse()
- Categories for AOT, JIT and Dynamic Validation
- I don't really quite understand the categories...
- Add Effect-TS schema