
Comments (11)

hoeck commented on May 23, 2024

In addition to the three modes above I'd like to see a second dimension: strict vs non-strict validation of objects.

Strict:
only allowing known attributes in an object, reporting an error when unknown keys are present.

Non-strict:
similar to how Typescript checks interface type compatibility: if all known attributes are present, everything is good.

Strict vs non-strict checking has performance implications, and I'd like to see how other libraries tackle them.

Unfortunately there is a third mode in this dimension: allowing a non-strict object as input but returning a strict object with all unknown keys stripped.

As a side note, I think that non-strict runtype checking actually leads to a false sense of security, as you essentially allow arbitrary attributes on objects that are passed into your application and maybe even deeper into the database or other services. So, maybe only benchmark the strict and non-strict-input-strict-output cases?
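The three behaviours described above can be sketched in plain TypeScript; these are hypothetical hand-rolled helpers for illustration, not any library's API:

```typescript
// A minimal sketch of the three validation modes (hypothetical helpers).
type User = { id: number; name: string };

const KNOWN_KEYS = new Set(['id', 'name']);

// Non-strict: unknown keys are tolerated, like TypeScript's structural checks.
function validateNonStrict(data: unknown): User {
  const d = data as { id?: unknown; name?: unknown } | null;
  if (typeof d?.id !== 'number' || typeof d?.name !== 'string') {
    throw new Error('invalid User');
  }
  return data as User;
}

// Strict: any unknown key is an error.
function validateStrict(data: unknown): User {
  const user = validateNonStrict(data);
  for (const key of Object.keys(user)) {
    if (!KNOWN_KEYS.has(key)) throw new Error(`unknown key: ${key}`);
  }
  return user;
}

// Non-strict input, strict output: unknown keys are stripped.
function validateStrip(data: unknown): User {
  const { id, name } = validateNonStrict(data);
  return { id, name };
}
```

The strip variant is the one with the interesting performance trade-off: it has to construct a new object per call rather than just inspecting the input.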

I'd love to help once it is clear what we need and how to report it.

from typescript-runtime-type-benchmarks.

moltar commented on May 23, 2024

@ianstormtaylor thank you for your input. I completely agree. This started small by comparing just a few libs and then it grew to this monster and now we are comparing apples to oranges. I like your idea about having extra test cases.


ianstormtaylor commented on May 23, 2024

I think is, assert and parse are good to start.


hoeck commented on May 23, 2024

Found some time over the weekend to solidify my ideas: hoeck@eef6d89

It basically introduces:

  1. Support for more than one benchmark case per library: https://github.com/hoeck/typescript-runtime-type-benchmarks/blob/eef6d89fc5d7d8ef249b47bd8a8f3474c66d5a55/cases/valita.ts
  2. Interactive visualization using gh-pages: https://hoeck.github.io/typescript-runtime-type-benchmarks/

Next steps I can think of are:

  • add a parsing benchmark case
  • update the tests to work with the multiple benchmark cases
  • add node-version and library filter to the visualization
  • decide what to do with the existing graphs in the readme: keep as is / use the new group-graph / keep only a single graph and refer to the gh-pages ui for detailed info about node-versions and outlier-removal?

Any ideas, suggestions, feedback, code-complaints?

Should I move forward with this @moltar ?


moltar commented on May 23, 2024

@hoeck love the progress, especially the interactive visualization using gh-pages! I was dreaming of this, but didn't think I'd ever have time to get to it!

What I might have done differently on the additional benchmark cases is to have a single interface that every benchmark implements, rather than two tests per class.

We can then group and organize different classes of test cases into folders and perhaps just run the benchmarking tool with options to point to different folders, or just loop thru folders programmatically and run all benchmarks.

Another option is to add a concept of tags or features to the interface, which can be just an array of strings, and then use that as part of the UI to select/unselect different options. Then we can organize the test cases however we want, e.g. group by package in a dir, or put them all in one dir, with just different filenames, and use tags to organize.
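The single-interface-plus-tags idea could look something like this; all names here are illustrative, not from the actual repo:

```typescript
// Hypothetical sketch of a shared benchmark-case interface with tags.
interface BenchmarkCase {
  library: string;              // e.g. 'valita'
  name: string;                 // e.g. 'validateStrict'
  tags: string[];               // free-form labels for UI filtering
  run(data: unknown): unknown;  // the code under measurement
}

const cases: BenchmarkCase[] = [];

function registerCase(benchmarkCase: BenchmarkCase): void {
  cases.push(benchmarkCase);
}

// The UI (or the runner) can then select cases by tag regardless of
// how the files are organized on disk.
function casesByTag(tag: string): BenchmarkCase[] {
  return cases.filter(c => c.tags.includes(tag));
}

registerCase({
  library: 'examplelib',
  name: 'validate',
  tags: ['non-strict'],
  run: data => data, // placeholder implementation
});
```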


hoeck commented on May 23, 2024

Thanks for the feedback 😁!

I agree that grouping the cases can be improved. I was just not sure how much of the existing structure you'd like to keep, or why everything was written the way it is in the first place (e.g. using an abstract class over an interface).

Having a single interface and multiple files per package sounds like the good & common way to do it. I would like to keep the setup simple though, so no automatic reading and importing of directories. In my experience this leads to more complex builds than just having a few explicit import statements. Automatic importing could probably be added later once the folder structure is in place anyway.

Tags sound like a nice add-on layer over choosing packages and benchmarks individually and I thought of this too but would first like to try to get the basics off the ground.

I'll update my fork with said way of grouping benchmarks when I have some spare time (probably in about a week) and keep you updated.


moltar commented on May 23, 2024

@hoeck Can you please make a PR and I'll review it! Thanks! 😄


hoeck commented on May 23, 2024

Just FYI, I finally found some time and I am working on this right now. I am also really curious about the results 😁.


ianstormtaylor commented on May 23, 2024

@moltar 😄 glad to hear it! I think the graphs and stuff that you'll be able to show once it's split will be really cool. And I'm looking forward to digging into Superstruct's performance when it's apples-to-apples.


moltar commented on May 23, 2024

Let's think about what kinds of test buckets we want, then. Any ideas?


hoeck commented on May 23, 2024

Hey there,

found some time to work on this further. @moltar could you please have a look at the latest commit at https://github.com/hoeck/typescript-runtime-type-benchmarks/tree/refactor-benchmarks ?

I've simplified adding new benchmarks and libraries. It's now done via a typed register function. Benchmark cases now look like this:

import { register } from '../benchmarks';
import * as v from '@badrap/valita';

const dataType = v.object({
  number: v.number(),
  negNumber: v.number(),
  maxNumber: v.number(),
  string: v.string(),
  longString: v.string(),
  boolean: v.boolean(),
  deeplyNested: v.object({
    foo: v.string(),
    num: v.number(),
    bool: v.boolean(),
  }),
});

register('valita', 'validate', data => {
  return dataType.parse(data);
});

register('valita', 'validateStrict', data => {
  return dataType.parse(data, { mode: 'strict' });
});

The benchmark case implementation is just the single function passed to register. Grouping benchmarks in different files is trivial. You could even add them dynamically but for now I am just using simple import calls.

For the strict & non-strict benchmarks it's easier to keep them in one file, as the runtype definition is the same and only the check arguments differ for most libraries.

Benchmarks also have their own tests which are executed for each registered case.
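A typed register function along these lines might be sketched as follows; this is a hypothetical illustration, not the actual implementation from the branch:

```typescript
// Hypothetical sketch of a typed register function and a uniform runner.
type BenchmarkFn = (data: unknown) => unknown;

const registry = new Map<string, BenchmarkFn>();

// Each (library, benchmark) pair maps to a single case function.
function register(library: string, benchmark: string, fn: BenchmarkFn): void {
  registry.set(`${library}/${benchmark}`, fn);
}

// The benchmark runner (and the per-case tests) can iterate the registry
// uniformly, without knowing anything about individual libraries.
function runCase(library: string, benchmark: string, data: unknown): unknown {
  const fn = registry.get(`${library}/${benchmark}`);
  if (!fn) throw new Error(`unknown case: ${library}/${benchmark}`);
  return fn(data);
}

register('examplelib', 'validate', data => data);
```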

If you give me a thumbs up, I would continue to clean this up, migrate all existing cases, and create a mergeable PR.

