
typescript-runtime-type-benchmarks's Introduction

📊 Benchmark Comparison of Packages with Runtime Validation and TypeScript Support


⚡⚠ Benchmark results have changed after switching to isolated node processes for each benchmarked package, see #864 ⚠⚡


Benchmark Results

Fastest Packages - click to view details

click here for result details

Packages Compared

Criteria

Validation

These packages are capable of validating the data for type correctness.

E.g. if a string was expected but a number was provided, the validator should fail.
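A hand-rolled sketch of this criterion (not taken from any benchmarked package) might look like:

```typescript
// Minimal sketch of the validation criterion: reject values whose
// runtime type differs from the expected one.
function isString(value: unknown): value is string {
  return typeof value === "string";
}

isString("hello"); // true  -- expected a string, got a string
isString(42);      // false -- expected a string, got a number
```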

Interface

It has a validator function or method that returns a value cast to the correct type, or throws.

const data: any = {}

// `res` is now type casted to the right type
const res = isValid(data)

Or it has a type guard function that narrows (type-casts) the value inside a truthy block.

const data: any = {}

function isMyDataValid(data: any) {
  // isValidGuard is the type guard function provided by the package
  if (isValidGuard(data)) {
    // data here is "guarded" and therefore inferred to be of the right type
    return data
  }

  throw new Error('Invalid!')
}

// `res` is now type casted to the right type
const res = isMyDataValid(data)

Local Development

  • npm run start - run benchmarks for all modules
  • npm run start run zod myzod valita - run benchmarks only for a few selected modules
  • npm run docs:serve - result viewer
  • npm run test - run tests on all modules

Adding a new node version

  • update node version matrix in .github/workflows/pr.yml and .github/workflows/release.yml
  • update NODE_VERSIONS in docs/dist/app.tsx and run npm run docs:build
  • optionally set NODE_VERSION_FOR_PREVIEW in benchmarks/helpers/main.ts

typescript-runtime-type-benchmarks's People

Contributors

aslilac, dependabot-preview[bot], dependabot[bot], dsagal, dzakh, edobrb, fabian-hiller, gervinfung, hoeck, imranbarbhuiya, iyegoroff, jayakrishnanamburu, jeengbe, jsoldi, jviide, marcj, micnic, mmamedel, moltar, naruaway, nin-jin, patsissons, renovate-bot, renovate[bot], richardscarrott, ryasmi, samchon, sinclairzx81, skarab42, typeofweb


typescript-runtime-type-benchmarks's Issues

Add constraint checking validation benchmarks

Originally this project was started with libraries that had support for constraint checking (e.g. minimum number or max string length).

But I removed it in 61b353c:

  • to allow for more libraries to be included that only had type checking capability
  • because I did not like the way it was designed and wanted to redesign it later

This issue is an open discussion for finding a common denominator for most packages that we could benchmark. A set of functionality that is widely supported, yet is not too shallow.
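For illustration, the distinction at stake here (a hand-rolled, hypothetical sketch, not from any benchmarked package): a plain type check accepts any string, while a constraint check also enforces limits on the content.

```typescript
// Type check only: is the value a string at all?
const isString = (v: unknown): v is string => typeof v === "string";

// Constraint check: is it a string AND no longer than 10 characters?
const isShortString = (v: unknown): v is string =>
  typeof v === "string" && v.length <= 10;

const long = "a".repeat(20);
isString(long);      // true  -- the type check alone passes
isShortString(long); // false -- the length constraint fails
```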

adapt to real world scenarios

Hi,

Nice project, thanks, I've used it for my evaluations.

I've noticed a huge gap between two libraries and all the other libraries:

  • ts-quartet
  • ts-json-validator

This huge gap is probably due to the way the project runs the tests.
The two libraries above use a different strategy than all the others to create their validators.

While the others mostly use predefined, hard-coded validator functions and build a schema by composing them, the two fastest libraries compile JS code at runtime (via eval() or new Function(...)) to create discrete validation functions that do not call other functions internally (no composition), but instead contain all the required validation code within a single function created specifically for the schema.

For example, Quartet:

For the following schema:

const checkData = v<Data>({
  number: v.safeInteger,
  negNumber: v.negative,
  maxNumber: v.positive,
  string: v.string,
  longString: v.string,
  boolean: v.boolean,
  deeplyNested: {
    foo: v.string,
    num: v.number,
    bool: v.boolean,
  },
});

It will generate the following validator function:

function validator(value) {
  if (value == null) return false
  if (!Number.isSafeInteger(value.number)) return false
  if (value.negNumber >= 0) return false
  if (value.maxNumber <= 0) return false
  if (typeof value.string !== 'string') return false
  if (typeof value.longString !== 'string') return false
  if (typeof value.boolean !== 'boolean') return false
  if (value.deeplyNested == null) return false
  if (typeof value.deeplyNested.foo !== 'string') return false
  if (typeof value.deeplyNested.num !== 'number') return false
  if (typeof value.deeplyNested.bool !== 'boolean') return false
  return true
}
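A hypothetical sketch of this technique (not Quartet's actual implementation): assemble flat validator source from a schema description and compile it once with the Function constructor.

```typescript
// Runtime code generation sketch: every field check is inlined into a
// single flat function body -- no composition, no inner calls.
type FieldType = "string" | "number" | "boolean";

function compileValidator(
  schema: Record<string, FieldType>
): (value: unknown) => boolean {
  const checks = Object.entries(schema)
    .map(
      ([key, type]) =>
        `if (typeof value[${JSON.stringify(key)}] !== "${type}") return false;`
    )
    .join("\n");
  return new Function(
    "value",
    `if (value == null) return false;\n${checks}\nreturn true;`
  ) as (value: unknown) => boolean;
}

const validate = compileValidator({ name: "string", age: "number" });
validate({ name: "Ada", age: 36 });   // true
validate({ name: "Ada", age: "36" }); // false
```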

This has a deep impact on performance depending on how you run your code.

The benchmark code in this project uses one schema and iterates over it for a certain period of time. This is perfect for quartet because of how V8 works:
the function becomes super hot and is quickly inlined, and if any internal function call exists within the validator, it gets inlined as well!

In the other libraries this cannot happen: due to the composition, so many functions are called that most of them stay cold and nothing gets inlined.

In real world scenarios such a perfect order does not exist. For example, when handling an incoming request, so many functions are called that by the time we reach the validator it is no longer hot!
And of course, we also need to factor in the handling of multiple incoming requests.

The major advantage of the two libraries in question does not carry over to real world scenarios, so their results are distorted in this benchmark.

I should also note the security risks of using runtime code evaluation. For a popular and heavily used library like ts-json-validator (which is actually ajv) this is less of a concern. For a barely used, unpopular library like quartet I would be cautious.

In general, such a huge gap does not make sense; otherwise everyone would be using these two libraries exclusively.

Thanks again!

What about joi library?

Hi, we use a lot of validation in browser runtime with "joi" library. Could you please add some benchmarks about this package?

Why were actual data validations removed?

Currently, only the data types themselves are tested. This was introduced in 61b353c. A validation benchmark without basic data content validation is way too shallow to be called a validation benchmark; at best it is a type guard benchmark, but that has nothing to do with data validation. Almost all real-life validation code uses features like enums, negative number checks, string length limits, or email checks. If these are not part of this benchmark, then this benchmark is highly misleading.

If a validator library is not able to validate the actual content, for things like number ranges or string lengths, then it should not be called a data validator and should not be part of this benchmark suite.

If you focus only on type guards, without data content checks, then please state this in the README. Then we can remove the benchmark links from Marshal, since comparing apples with oranges is the last thing people want.

Benchmark is not fair for packages that return a new object

It seems that there is a huge gap between the results of the different packages in this benchmark. By digging into the implementation of the parsing process of the packages that perform better, I saw that, for example, @badrap/valita does not return a new object and does not strip properties that are not part of the schema. This gives an advantage of about 4x in some tests that I made with my own package.

It seems that the more mature packages return a new object with the properties copied from the provided object by default, while newer packages tend to return the same value that was provided, which gives them an advantage.

I propose adding a criterion to the benchmark results, stating whether a package returns the same object or a new one; this way the unfairness of the results will be minimized.
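The two strategies being contrasted can be sketched as follows (a hypothetical illustration; the schemaKeys parameter stands in for whatever schema the package holds):

```typescript
// Pass-through: return the input reference untouched; unknown
// properties survive. Cheap, and what some newer packages do.
function passThrough<T extends object>(value: T): T {
  return value;
}

// Strip-and-clone: copy only known keys into a new object. More work
// per call, and what many mature packages do by default.
function stripClone(
  value: Record<string, unknown>,
  schemaKeys: string[]
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of schemaKeys) {
    if (key in value) out[key] = value[key];
  }
  return out;
}

const input = { id: 1, name: "a", extra: true };
passThrough(input) === input;                 // true  -- same reference
"extra" in stripClone(input, ["id", "name"]); // false -- extra stripped
```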

AJV Test Case

Hi, great project!

Would it be possible to include AJV in this test suite? I think a case exists for testing AJV performance, as it's possible for TypeScript to infer types from JSON schema directly via conditional type mapping. I would certainly be curious to see how the de facto JSON schema validator holds up against these libraries.

Many Thanks!

Separating tests into independent packages

Having all validator packages installed in one Node package (project) may lead to wrong results, and even errors, if dependencies of validators are shared but have loose versions defined.

The potential solution is to convert this to a monorepo, which has a separate package for each validator.

But the ideal solution should not break dependabot / renovatebot.

@hoeck thoughts?

Upgrade to [email protected]

I made isValid much more performant in 6.0.1; it should be more on par with the other faster libraries. checkValid and validate both produce error arrays that cost memory/CPU to construct.

Also, thanks for maintaining this benchmark!

rulr breaks on v12

Run ./start.sh
+ node -v
+ export NODE_VERSION=v12.22.10
+ npm start
> [email protected] start /home/runner/work/typescript-runtime-type-benchmarks/typescript-runtime-type-benchmarks
> ts-node index.ts
/home/runner/work/typescript-runtime-type-benchmarks/typescript-runtime-type-benchmarks/node_modules/rulr/dist/rulr.modern.js:1
import{BaseError as r}from"make-error";import t from"validator";import e from"atob";class n extends r{constructor(){super("expected integer")}}function s(r){return Number.isInteger(r)}function o(r){if(s(r))return r;throw new n}class u extends r{constructor(){super("expected negative integer")}}function c(r){return s(r)&&r<=0}function i(r){if(c(r))return r;throw new u}class f extends r{constructor(){super("expected number")}}function a(r){return"number"==typeof r&&!1===Number.isNaN(r)}function p(r){if(a(r))return r;throw new f}class d extends r{constructor(){super("expected negative number")}}function h(r){return a(r)&&r<=0}function l(r){if(h(r))return r;throw new d}class w extends r{constructor(){super("expected positive integer")}}function x(r){return s(r)&&r>=0}function g(r){if(x(r))return r;throw new w}class m extends r{constructor(){super("expected positive number")}}function v(r){return a(r)&&r>=0}function y(r){if(v(r))return r;throw new m}class b extends r{constructor(){super("expected string")}}functio...
^^^^^^
SyntaxError: Cannot use import statement outside a module
    at wrapSafe (internal/modules/cjs/loader.js:915:16)
    at Module._compile (internal/modules/cjs/loader.js:963:27)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Module.require (internal/modules/cjs/loader.js:887:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/home/runner/work/typescript-runtime-type-benchmarks/typescript-runtime-type-benchmarks/cases/rulr.ts:1:1)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Module.m._compile (/home/runner/work/typescript-runtime-type-benchmarks/typescript-runtime-type-benchmarks/node_modules/ts-node/src/index.ts:1056:23)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `ts-node index.ts`
npm ERR! Exit status 1
npm ERR! 

Parallel benchmarking

Parallelization had to be disabled a while ago, because the final step of each benchmarking run (per Node version) is to commit the results back. This caused a race condition between runs: some would fail to commit because they had a stale git checkout.

I think we can fix this by running the benchmarks in parallel, but instead of committing the results right away, using the GitHub Actions artifacts mechanism to store the results in the cache.

Then, as the last step, fetch all the cached results and commit them at once.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • chore(deps): update dependency @types/node to v18.19.31
  • fix(deps): update dependency @badrap/valita to v0.3.8
  • fix(deps): update dependency @deepkit/core to v1.0.1-alpha.145
  • fix(deps): update dependency @deepkit/type to v1.0.1-alpha.146
  • fix(deps): update dependency @deepkit/type-compiler to v1.0.1-alpha.146
  • fix(deps): update dependency @mondrian-framework/model to v2.0.40
  • fix(deps): update dependency @sapphire/shapeshift to v3.9.7
  • fix(deps): update dependency fp-ts to v2.16.5
  • fix(deps): update dependency purify-ts to v2.0.3
  • fix(deps): update dependency serve to v14.2.3
  • fix(deps): update dependency superstruct to v1.0.4
  • fix(deps): update dependency ts-node to v10.9.2
  • fix(deps): update dependency vality to v6.3.4
  • chore(deps): update babel monorepo (@babel/cli, @babel/core, @babel/preset-env, @babel/preset-typescript)
  • chore(deps): update dependency @types/ts-expose-internals to v5.4.5
  • chore(deps): update dependency @types/yup to v0.32.0
  • chore(deps): update dependency expect-type to v0.19.0
  • chore(deps): update dependency gts to v5.3.0
  • chore(deps): update dependency ts-patch to v3.1.2
  • fix(deps): update dependency @sinclair/typebox to v0.32.28
  • fix(deps): update dependency ajv to v8.13.0
  • fix(deps): update dependency myzod to v1.11.0
  • fix(deps): update dependency preact to v10.21.0
  • fix(deps): update dependency reflect-metadata to v0.2.2
  • fix(deps): update dependency rescript to v11.1.0
  • fix(deps): update dependency rescript-schema to v6.4.0
  • fix(deps): update dependency ts-runtime-checks to v0.5.1
  • fix(deps): update dependency typia to v5.5.10
  • fix(deps): update dependency unknownutil to v3.18.0
  • fix(deps): update dependency valibot to v0.30.0
  • fix(deps): update dependency vega to v5.28.0
  • fix(deps): update dependency zod to v3.23.6
  • chore(deps): update dependency @types/node to v20
  • fix(deps): update dependency rulr to v10
  • fix(deps): update dependency svgo to v3 (svgo, @types/svgo)
  • fix(deps): update dependency typia to v6
  • fix(deps): update dependency yup to v1
  • ๐Ÿ” Create all rate-limited PRs at once ๐Ÿ”

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

github-actions
.github/workflows/pr.yml
  • actions/checkout v4
  • actions/cache v4
  • actions/setup-node v4
.github/workflows/release.yml
  • actions/checkout v4
  • actions/cache v4
  • actions/setup-node v4
  • rlespinasse/git-commit-data-action v1
  • github-actions-x/commit v2.9
npm
package.json
  • @ailabs/ts-utils 1.4.0
  • @badrap/valita 0.3.0
  • @deepkit/core 1.0.1-alpha.123
  • @deepkit/type 1.0.1-alpha.123
  • @deepkit/type-compiler 1.0.1-alpha.123
  • @mojotech/json-type-validation 3.1.0
  • @sapphire/shapeshift 3.9.6
  • @sinclair/typebox 0.31.28
  • @skarab/tson 1.5.1
  • @toi/toi 1.3.0
  • @typeofweb/schema 0.7.3
  • @types/benchmark 2.1.5
  • ajv 8.12.0
  • arktype 1.0.21-alpha
  • benny 3.7.1
  • bueno 0.1.5
  • class-transformer 0.5.1
  • class-transformer-validator 0.9.1
  • class-validator 0.14.1
  • clone 2.1.2
  • computed-types 1.11.2
  • csv-stringify 5.6.5
  • decoders 1.25.5
  • fp-ts 2.16.2
  • io-ts 2.2.21
  • jointz 7.0.4
  • json-decoder 1.4.1
  • mol_data_all 1.1.920
  • @mondrian-framework/model 2.0.35
  • myzod 1.10.2
  • ok-computer 1.0.4
  • parse-dont-validate 4.0.0
  • preact 10.19.2
  • purify-ts 2.0.1
  • r-assign 1.9.0
  • reflect-metadata 0.1.13
  • rescript 11.0.0-rc.6
  • rescript-schema 6.1.0
  • rulr 8.7.6
  • runtypes 6.7.0
  • serve 14.2.1
  • simple-runtypes 7.1.3
  • spectypes 2.1.11
  • succulent 0.18.1
  • superstruct 1.0.3
  • suretype 1.2.0
  • svgo 2.8.0
  • to-typed 0.5.0
  • ts-interface-checker 1.0.2
  • ts-json-validator 0.7.1
  • ts-node 10.9.1
  • ts-runtime-checks 0.4.1
  • typescript 5.1.6
  • typia 5.3.3
  • unknownutil 3.11.0
  • valibot 0.21.0
  • vality 6.3.3
  • vega 5.26.1
  • vega-lite 5.11.0
  • yup 0.32.11
  • zod 3.22.4
  • @babel/cli 7.23.9
  • @babel/core 7.23.9
  • @babel/preset-env 7.23.9
  • @babel/preset-typescript 7.23.3
  • @types/clone 2.1.4
  • @types/jest 29.5.12
  • @types/node ^18.11.18
  • @types/svgo 2.6.0
  • @types/ts-expose-internals 5.3.3
  • @types/yup 0.29.14
  • babel-plugin-spectypes 2.1.11
  • expect-type 0.17.3
  • gts 5.2.0
  • jest 29.7.0
  • rimraf 5.0.5
  • ts-jest 29.1.2
  • ts-patch ^3.0.1
  • tsconfigs 4.0.2

  • Check this box to trigger a request for Renovate to run again on this repository

split test cases into specific levels of functionality?

Hey, thanks for this benchmark! It's a great idea.

I think right now the existing test cases aren't testing for the same behavior in each library though. Depending on the library, it might support "guarding", "validating", "parsing", "transforming", etc. But if you compare guarding of one to parsing of another it's not a direct comparison.

For example, as far as I can tell right now toi is simply guarding a value and returning a boolean in its test case, whereas superstruct is parsing an input, cloning it, and returning the parsed value in its test case (instead of using the simpler is export).

It might make sense to add more specific levels of functionality to each test case. And some libraries simply won't allow certain functionality. For example (using superstruct because I'm the author so I know it best):

import { is, assert, coerce, object, ... } from 'superstruct'

const type = object({ ... })

class SuperstructCase extends Case {
  name = 'superstruct'

  // The `is` test is for simply returning a boolean indicating whether 
  // a value is valid. It doesn't need to create errors, parse values, etc.
  // This is the most basic and the minimum to be included.
  is(value) {
    return is(value, type)
  }

  // The `assert` test is for throwing a proper `Error` object, with a 
  // reason for why validation failed. It shouldn't be re-implemented
  // for libraries that only support `is` type guards because the errors
  // won't actually contain any information.
  assert(value) {
    assert(value, type)
  }

  // The `parse` test is for libraries that allow some parsing step on
  // the value to ensure that it is valid, for things like default values,
  // coercing inputs, etc. It should return the parsed value, and the
  // value *should not* equal the input (i.e. cloned).
  parse(value) {
    return coerce(value, type)
  }
}

You'd be able to show a section of graphs for each level of functionality. Then people can make more informed decisions, e.g. a library might be much faster in is, but not allow for parse.

I think this would make the comparisons much more accurate.

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

SVGs in readme are aggressively cached

The SVGs in the readme do not reflect the current state of the repo due to caching. Consider updating the automation to append a random value to the URLs in the readme after each update.

Packages matrix

Create a table / matrix for each package and features it supports.

Order of benchmarks seems to bias results

Hi, thanks for this project.

I'm currently looking at submitting a new runtime type checker for comparative performance benchmarks, but have noticed that the order in which each validator is run seems to degrade subsequent benchmark results. By moving tests around, I'm able to greatly improve the performance results of tests simply by running them first.

I'm thinking this is likely due to some of the validators using more memory (implicating the GC), or to V8's optimization heuristics breaking down and causing V8 to take the slow path (this is quite likely given how validation routines may require dynamic property access on objects). To rule out the GC, I started the project with npm run compile:spectypes && node --expose-gc -r ts-node/register ./index.ts and called global.gc() to force collection prior to the tests running, but no luck. So the problem is most likely related to V8 optimization heuristics.

To give all tests an even footing, I'm wondering if it would be possible to reshape these tests such that each one runs within its own node process (rather than running all tests in the same process). This would give each a fresh instance in which to do its thing, and should yield more accurate results.
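The per-process isolation proposed here can be sketched with node's child_process module; the inline scripts below are illustrative stand-ins for real benchmark case files.

```typescript
// Spawn a fresh node process per run so V8's optimization state and
// heap cannot leak between benchmarked packages.
import { execFileSync } from "node:child_process";

function runIsolated(script: string): string {
  // Each call gets a brand-new V8 instance; the child's stdout is
  // returned as the result.
  return execFileSync(process.execPath, ["-e", script], {
    encoding: "utf8",
  }).trim();
}

// Two runs share nothing: a global set in the first process is absent
// in the second.
const first = runIsolated(
  "globalThis.marker = 1; console.log(typeof globalThis.marker)"
); // "number"
const second = runIsolated(
  "console.log(typeof globalThis.marker)"
); // "undefined"
```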

Open to thoughts if this is something worth looking into.
