web-tooling-benchmark's Introduction

Web Tooling Benchmark

This is a benchmark suite designed to measure the JavaScript-related workloads commonly used by web developers, such as the core workloads in popular tools like Babel or TypeScript. The goal is to measure only the JavaScript performance aspect (which is affected by the JavaScript engine) and not measure I/O or other unrelated aspects.

See the in-depth analysis for a detailed description of the tests included in this benchmark suite.

The latest browser version of the benchmark is available at https://v8.github.io/web-tooling-benchmark/.

Support

The Web Tooling Benchmark supports the latest active LTS version of Node.js. To see the supported Node.js versions of the current version of the benchmark, see the node_js section of our CI configuration.

Building

To build the benchmark suite, run

$ npm install

assuming that you have a working Node.js installation. Once the command finishes, it produces a bundled version suitable for running in JS shells (e.g. d8, jsc, or jsshell) in dist/cli.js, and another bundle in dist/browser.js that is used by the browser version in dist/index.html.

To build an individual benchmark rather than the entire suite, pass the --env.only CLI flag:

$ npm run build -- --env.only babel

Running

You can either run the benchmark suite directly via Node, like this:

$ node dist/cli.js
Running Web Tooling Benchmark v0.5.2…
-------------------------------------
         acorn:  5.50 runs/s
         babel:  6.10 runs/s
  babel-minify:  9.13 runs/s
       babylon:  8.00 runs/s
         buble:  4.77 runs/s
          chai: 14.47 runs/s
  coffeescript:  5.62 runs/s
        espree:  4.05 runs/s
       esprima:  6.68 runs/s
        jshint:  7.84 runs/s
         lebab:  7.52 runs/s
       postcss:  5.06 runs/s
       prepack:  6.26 runs/s
      prettier:  5.97 runs/s
    source-map:  8.60 runs/s
        terser: 16.40 runs/s
    typescript: 10.04 runs/s
     uglify-js:  3.81 runs/s
-------------------------------------
Geometric mean:  6.98 runs/s

Or you can open a web browser and point it at dist/index.html, or use one of the JS engine shells to run the special bundle in dist/cli.js. The easiest way to install recent versions of the supported JS engine shells is to run jsvu. Afterwards, you can run the benchmark as follows:

$ chakra dist/cli.js
$ javascriptcore dist/cli.js
$ spidermonkey dist/cli.js
$ v8 dist/cli.js

To run an individual benchmark rather than the entire suite via Node, build just that benchmark and then run it:

$ npm run build -- --env.only babel && npm run benchmark

web-tooling-benchmark's Issues

Fairer comparison of Babel and Buble

The current Babel benchmark should include Babylon parsing just like the Buble benchmark includes Acorn parsing. Thoughts?
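One way to do that, sketched here assuming the suite is still on the Babel 6 / Babylon APIs (the payload below is illustrative, not the benchmark's real input):

const babylon = require("babylon");
const babel = require("babel-core");

// Illustrative payload; the real benchmark would reuse its existing input.
const payload = "const double = (x) => x * 2;";

function benchmarkedWork() {
  // Parsing is now part of the measured work, mirroring how the Buble
  // benchmark includes Acorn parsing.
  const ast = babylon.parse(payload, { sourceType: "module" });
  // transformFromAst transforms a pre-parsed AST.
  return babel.transformFromAst(ast, payload, { presets: ["es2015"] });
}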

PS: This benchmark is a great comparison of existing JavaScript parsers, and you can even verify the results from your browser! Perhaps this repository should take a similar approach?

cc: @hzoo @Rich-Harris

Add PostCSS benchmark

@sokra mentioned that postcss is a very popular loader for webpack as well, and we should cover that in this benchmark suite.

There seems to be a synchronous API for postcss that just consumes string inputs. We also need to find representative inputs, ideally such that we get a similar runs/sec as the other tests.
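For reference, a minimal sketch of that synchronous usage (the input string and the empty plugin list are placeholders until representative inputs are found):

const postcss = require("postcss");

// Placeholder input; a representative stylesheet still needs to be chosen.
const css = ".a { color: red }";

// Accessing .css on the returned LazyResult forces synchronous
// processing, which only works when every plugin in the chain is sync.
const output = postcss([]).process(css, { from: undefined }).css;

// Parsing alone is synchronous as well.
const root = postcss.parse(css);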

`--only` CLI flag with bundled code

Hello.

The README says that we can run node dist/cli.js --only babel to run an individual benchmark. But we can't with the current webpack config. I think target: "node" must be set. See https://webpack.js.org/configuration/target/#target
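For illustration, a minimal sketch of that change to the webpack config (everything except target is elided):

// webpack.config.js (sketch): target "node" tells webpack to emit a
// bundle that uses Node-style globals instead of assuming a browser.
module.exports = {
  target: "node",
  // ...the rest of the existing configuration stays as-is
};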

Currently, the only way to run a single benchmark with the bundled code is to build it with npm run build -- --env.only babel, so that npm run benchmark only runs the babel benchmark.

Should we edit the README to replace node dist/cli.js --only babel with node src/cli.js --only babel?

Geomean

It would be great for the geomean to have a confidence interval.

Also, the calculation of the geomean seems off?
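For what it's worth, both a geometric mean and a confidence interval can be computed in log space. A minimal sketch, using a normal approximation on the log-transformed samples (an assumption, not the suite's current method):

function geomeanWithInterval(values, z = 1.96 /* roughly 95% */) {
  // geomean = exp(mean(log x)); the interval is computed on the log
  // values and mapped back through exp().
  const logs = values.map(Math.log);
  const mean = logs.reduce((a, b) => a + b, 0) / logs.length;
  const variance =
    logs.reduce((a, b) => a + (b - mean) ** 2, 0) / (logs.length - 1);
  const halfWidth = z * Math.sqrt(variance / logs.length);
  return {
    geomean: Math.exp(mean),
    lower: Math.exp(mean - halfWidth),
    upper: Math.exp(mean + halfWidth),
  };
}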

Light mode?

Hi all!

I'm trying to test the Web Tooling Benchmark in my development environment, but I find that it takes too long to run and generates a huge file (dist/cli.js).

Is there a lite mode that enables a short running time (such as a few iterations)? Or how can I easily set this up?

I also built a single benchmark by passing the --env.only option, but the generated cli.js seemed to contain all the test code as before. Sadly, the single benchmark took a long time as well.

Thanks in advance.
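There is no documented lite mode, but assuming the suite's Benchmark.js options can be overridden, lowering the sampling settings shortens each test considerably, at the cost of noisier numbers. A sketch (runBabelPayload is a hypothetical stand-in for one of the suite's test bodies):

const Benchmark = require("benchmark");

const bench = new Benchmark("babel", () => runBabelPayload(), {
  maxTime: 0.5,  // seconds to spend sampling (Benchmark.js default: 5)
  minSamples: 2, // samples to collect (Benchmark.js default: 5)
});
bench.run();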

result submission site [feature]

I love this benchmark and use it to compare PC/laptop performance for web development work. Do you think it would be a good idea to have a submission list of machines that have run the benchmark, just like Geekbench 5?

Update package versions

Hello.

This issue is related to #18.

First, I ran npm outdated:

[Screenshot of the npm outdated output, taken 2018-01-14 22:25:53]

@bmeurer, I have some questions about it:

  • Should we upgrade all package versions?
  • The reason to have explicit package versions in package.json is traceability between package versions and the project version, right? I mean, having "prettier": "1.8.2" instead of "prettier": "^1.8.2", for example. (See the note after this list.)
  • This implies a project version bump, right? 0.4.0?
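As a side note, npm can pin exact versions at install time, which keeps that traceability without hand-editing package.json:

$ npm install --save-exact prettier@1.8.2

Setting save-exact=true in .npmrc makes this the default for all installs.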

Predictable mode

We need a predictable mode for the benchmark where each test is run for a fixed number of iterations. Currently benchmark.js doesn't seem to provide this option.
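A minimal sketch of what such a mode could look like outside Benchmark.js (all names here are illustrative): run each test a fixed number of times and report the average throughput.

// Trades Benchmark.js's adaptive sampling for reproducible run counts.
function runFixed(name, fn, iterations = 20) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  const seconds = (Date.now() - start) / 1000;
  console.log(`${name}: ${(iterations / seconds).toFixed(2)} runs/s`);
}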

Update dependencies

We're accepting PRs that update the frameworks, libraries, and other dependencies in the benchmark.

web-tooling-benchmark-generator

Hello!

I just published a benchmark generator for web-tooling-benchmark (https://github.com/alopezsanchez/web-tooling-benchmark-generator).

This package generates and modifies the files that you would otherwise edit manually when creating a benchmark, such as docs/in-depth.md and src/cli-flags-helper.js. It also automatically creates src/<library>-benchmark.js and src/<library>-benchmark.test.js with default content, including the license banner.

Demo: [animated GIF in the original post]

Copied from the project README:

This tool:

  • Checks that the user is in the v8/web-tooling-benchmark repository.
  • Checks if the new library already has a benchmark.
  • Installs the new library with npm i --save-exact.
  • Generates the benchmark and benchmark test files with the naming convention.
  • Creates a new section in the documentation file.
  • Updates the target list (list of runnable benchmarks).

I think it can be useful for starting a new benchmark, saving the user some time.

This is just a proof of concept. Use it if you find it useful, and give me some feedback if you want 😃
I made it because I think this is a very interesting project and I want to help you improve it.

Updating virtualfs

Hi, thanks for using virtualfs.
I noticed you're still using the 1.0.0 version. Please note that several updates have been made and bugs fixed, so I recommend upgrading to the latest version. The behaviour you're currently relying on shouldn't change (I searched your repo for uses of virtualfs).

Weird results on iPhoneX

We've run the benchmark at our company on most of our devices. While it runs predictably on laptops and Android phones, we've discovered that something's wrong on iPhone X: on a good part of the benchmarks, we get results consistently better than or similar to a 7th-gen Intel Core i7.
Please check out the screenshots.

[Screenshot: results on an i7-7820HQ]

[Screenshot: results on a Pixel running Android P]

[Screenshot: results on an iPhone X running Chrome]

Babel: thinking about how to make the benchmark more representative

Ref #24 (comment)

These could all similarly apply for babylon itself
FYI we need the bundled/concat'd uncompiled version

the current benchmark:

This benchmark runs the Babel transformation logic using the es2015 preset on a 196KiB ES2015 module containing the untranspiled Vue bundle. Note that this explicitly excludes the Babylon parser and only measures the throughput of the actual transformations. The parser is tested separately by the babylon benchmark below.

name: "vue.runtime.esm-nobuble-2.4.4.js",

Right now it only tests an ES2015 module (albeit a 194kb one 👍), but that may not be representative of what the future will be like, so we should think about possible changes to this benchmark:

  • Since we have deprecated the yearly presets like preset-es2015, we should run with @babel/preset-env now (see the sketch after this list)
    options: { presets: ["es2015"], sourceType: "module" }
    • I guess we might want to think about different targets but that just runs less of Babel so not sure how useful that is for a benchmark? (targets: default/ie, current node, current chrome, etc)
  • In a similar way I guess we could have a test for an ES3/ES5 file as a good test of the baseline perf of going through the whole program. (The shortcut Babel could do is just to print the file exactly out if it doesn't find any changes, kinda like engines cheat but we won't do that)
    • I just realized we could just run Babel on the output of the original benchmark since that will be ES5 anyway?
  • The payload should test out other kinds of code that people are writing/using with Babel like
    • ES2017+ and Stage x proposals (we could use Babel itself for this if we bundled all of it untranspiled, but there are probably other projects we could use)
    • JSX/Flow/Typescript
  • There are other things like compiling a minified source but people shouldn't be doing that?
    • Babel operates per file so realistically it compiles a lot of smaller files
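Here is a sketch of the @babel/preset-env variant suggested above, assuming Babel 7's scoped packages and reusing the existing Vue payload (the targets value is illustrative):

const fs = require("fs");
const babel = require("@babel/core");

const source = fs.readFileSync("vue.runtime.esm-nobuble-2.4.4.js", "utf8");

// transformSync is the synchronous entry point in @babel/core 7.
const { code } = babel.transformSync(source, {
  presets: [["@babel/preset-env", { targets: "defaults" }]],
  sourceType: "module",
});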

No longer working in node?

After a fresh clone and npm install:

~/src/web-tooling-benchmark master
โฏ node dist/cli.js
/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:52067
/* WEBPACK VAR INJECTION */(function(global) {var scope = (typeof self !== "undefined" && self) || window;
                                                                                                   ^

ReferenceError: window is not defined
    at Object.<anonymous> (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:52067:100)
    at Object.<anonymous> (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:52129:30)
    at __webpack_require__ (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:25:30)
    at Object.module.exports (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:90860:69)
    at __webpack_require__ (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:25:30)
    at Object.toString (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:164851:20)
    at __webpack_require__ (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:25:30)
    at Object.<anonymous> (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:160010:74)
    at __webpack_require__ (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:25:30)
    at Object.<anonymous> (/Users/ofrobots/src/web-tooling-benchmark/dist/cli.js:7374:19)
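One plausible shape of a fix (a sketch, not necessarily the project's actual change) is to fall back through the possible global objects instead of assuming window exists:

// Pick whichever global object is defined in this environment
// (worker self, browser window, Node global) instead of assuming window.
var scope =
  (typeof self !== "undefined" && self) ||
  (typeof window !== "undefined" && window) ||
  (typeof global !== "undefined" && global);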
