
Comments (3)

sinclairzx81 commented on September 25, 2024

@samchon Hi, just going to chime in on this one.

While I certainly think this project could benefit from additional benchmarks and tests, I do not think these should be submitted by library authors: the benefit and value of community projects like this comes primarily from independent external contributors submitting tests outside of author intervention. This is specifically to establish an accurate lens into performance and to mitigate the potential for bias.

On this point, if the typia benchmarks are being put forth (having reviewed them independently, as well as submitted TypeBox schematics to Typia here, here and here for alignment and comparative measurement), I do not feel they would be good candidates for cross-library benchmarking or testing, for the following reasons:

  • The schematics are highly coupled to the performance and assertion criteria as implemented in typia.
  • The schematics are arbitrarily complex and not very helpful when trying to identify where performance disparities exist.
  • The schematics would rule out a significant amount of libraries contributed to this project.
  • This project (afaik) is a benchmarking project, NOT a validation unit test suite.

Also, for the reporting table, I feel quite strongly about not showing RED marks next to each project listed here, particularly if the testing criteria depend on each library adopting specific assertion criteria as implemented by typia (where there is much room for interpreting validation semantics across libraries).
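To illustrate the "room for interpreting validation semantics" point: even for a trivial schema like `{ id: number }`, two libraries can disagree on whether input carrying an unknown extra property should pass. The sketch below is hypothetical (the checker names and semantics are assumptions, not drawn from any library in the benchmark) but shows how a pass/fail mark depends entirely on which reading the assertion criteria bake in.

```typescript
// Two defensible readings of "validate { id: number }" against input
// that carries an unknown extra key. A benchmark that marks libraries
// RED for one reading penalizes the other, equally valid, semantics.
type Check = (value: unknown) => boolean;

// Loose reading: extra properties are ignored (structural-typing style).
const looseCheck: Check = (value) =>
  typeof value === "object" &&
  value !== null &&
  typeof (value as { id?: unknown }).id === "number";

// Strict reading: unknown properties cause rejection (closed-world style).
const strictCheck: Check = (value) =>
  looseCheck(value) &&
  Object.keys(value as object).every((key) => key === "id");

const input = { id: 1, extra: true };
console.log(looseCheck(input));  // true  — accepted under loose semantics
console.log(strictCheck(input)); // false — rejected under strict semantics
```

Both checkers are "correct" for their chosen semantics, which is why a shared suite needs the intended semantics spelled out before any library can be marked as failing.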

Again, while I'm certainly for the idea of seeing additional benchmarks (or tests) added here, I do feel these should ideally be defined independently (and openly), with the validation criteria made clear and set low enough that all currently submitted libraries can participate. In addition, if more sophisticated schematics are deemed warranted (of which I have some interest), my preference would be to omit failing projects from result tables rather than marking them as RED, which may be publicly discouraging to project authors who have contributed their free time and effort to this arena.

For establishing a "minimum viable suite" of schematics, I think a less divisive approach is a collaborative effort where interested parties can define clearly what the schematics are, what they measure, and what techniques may be applicable to attain better performance (possibly through GH discussions). This would set fair and reasonable performance criteria and hopefully help other developers attain robust, high-performance assertions in their respective projects, mine included.
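As a sketch of what such a "minimum viable" entry might look like (purely hypothetical — the schema, `checkUser`, and `bench` are illustrative names, not the repository's actual harness): a shape simple enough for every listed library to express, a stand-in for whatever compiled check each library produces, and one shared timing loop.

```typescript
// Hypothetical minimum-viable schematic: simple enough that every
// benchmarked library can express it without contested semantics.
interface User {
  id: number;
  name: string;
  tags: string[];
}

// Stand-in validator. A real entry would plug in a library-compiled
// checker (e.g. from TypeBox, typia, zod) with this same signature.
function checkUser(value: unknown): value is User {
  const v = value as Record<string, unknown>;
  return (
    typeof value === "object" &&
    value !== null &&
    typeof v.id === "number" &&
    typeof v.name === "string" &&
    Array.isArray(v.tags) &&
    v.tags.every((tag) => typeof tag === "string")
  );
}

// One shared timing loop, so every library is measured identically.
function bench(
  name: string,
  check: (value: unknown) => boolean,
  data: unknown,
  iterations = 1_000_000,
): number {
  let passed = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i++) if (check(data)) passed++;
  const ms = performance.now() - start;
  console.log(`${name}: ${passed}/${iterations} passed in ${ms.toFixed(1)} ms`);
  return passed;
}

bench("hand-rolled", checkUser, { id: 1, name: "ada", tags: ["x"] });
```

The point of the sketch is the shape of the contract, not the numbers: if the schematic and the `(value: unknown) => boolean` interface are agreed in the open first, every library can compete on the same terms.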

Just some of my thoughts on this one.
S

from typescript-runtime-type-benchmarks.

moltar commented on September 25, 2024

@samchon I appreciate your involvement and I know you have put a lot of effort into thinking about this. Thank you for your contribution!

I do largely agree with @sinclairzx81 that the idea is to keep tests as impartial as possible; that's why they have remained so primitive up to this point.

I think the way to move forward is to discuss each test addition we'd like to make as a separate issue, and to try to evaluate what value each test adds and how it will affect the rest of the tests.

Again, I am not against change, but we need to think about it more holistically. Tbh, my initial test suite was not thought out much at all; I just cobbled together some rough ideas and off it went to be released.


moltar commented on September 25, 2024

@marcj do you have any input? I remember you had some strong opinions on this before. Thanks!

