
scriptorium's Introduction

Scriptorium 📜

  • Game Scripting Languages benchmarked.
  • Using latest versions at the time of writing (Jul 2015)
  • Total solutions evaluated: 50

Results

| Rank | Language | Flavor | Time | Relative Lua speed | Score |
|-----:|----------|--------|-----:|-------------------:|------:|
| 1 | C | vc | 0.074 s | 100% | 1808 pt |
| 2 | Lua | luajit | 0.111 s | 100% | 1203 pt |
| 3 | Terra | terra | 0.121 s | 100% | 1107 pt |
| 4 | C | c4-jit | 0.136 s | 100% | 986 pt |
| 5 | C | libtcc | 0.151 s | 100% | 886 pt |
| 6 | Pawn | pawn-asm | 0.384 s | 100% | 349 pt |
| 7 | Lua | luajit-nojit | 0.521 s | 100% | 257 pt |
| 8 | Pawn | pawn | 0.719 s | 100% | 186 pt |
| 9 | TinyVM | tinyvm | 0.786 s | 100% | 170 pt |
| 10 | Scheme | chibi | 1.009 s | 100% | 132 pt |
| 11 | Neko | nekovm | 1.104 s | 100% | 121 pt |
| 12 | Lua | lua | 1.341 s | 100% | 100 pt |
| 13 | Ruby | tinyrb(ist) | 1.441 s | 93.0103% | 93 pt |
| 14 | GameMonkey | gamemonkey | 1.691 s | 79.2907% | 79 pt |
| 15 | Angelscript | angelscript-jit | 1.859 s | 72.1132% | 72 pt |
| 16 | Wren | wren | 1.997 s | 67.1361% | 67 pt |
| 17 | Lily | lily | 2.005 s | 66.8582% | 66 pt |
| 18 | Angelscript | angelscript | 2.039 s | 65.7295% | 65 pt |
| 19 | Ruby | mruby | 2.098 s | 63.893% | 63 pt |
| 20 | Squirrel | squirrel | 2.126 s | 63.0622% | 63 pt |
| 21 | Scheme | s7 | 2.136 s | 62.7708% | 62 pt |
| 22 | C | c4 | 2.538 s | 52.8101% | 52 pt |
| 23 | Python | micropython | 2.842 s | 47.1675% | 47 pt |
| 24 | Dao | dao | 2.876 s | 46.6166% | 46 pt |
| 25 | QuakeC | gmqcc | 3.060 s | 43.806% | 43 pt |
| 26 | ObjectScript | objectscript | 3.108 s | 43.1278% | 43 pt |
| 27 | SGScript | sgscript | 4.620 s | 29.0163% | 29 pt |
| 28 | Java | Jog | 4.675 s | 28.672% | 28 pt |
| 29 | JetScript | JetScript | 4.810 s | 27.8671% | 27 pt |
| 30 | Lisp | minilisp | 6.951 s | 19.2855% | 19 pt |
| 31 | Lisp | aria | 8.010 s | 16.7366% | 16 pt |
| 32 | JavaScript | duktape | 9.544 s | 14.0463% | 14 pt |
| 33 | Tcl | jim | 12.280 s | 10.9162% | 10 pt |
| 34 | GML | gml | 16.443 s | 8.15268% | 8 pt |
| 35 | MiniScheme | MiniScheme | 17.345 s | 7.72839% | 7 pt |
| 36 | PSL | psl | 17.645 s | 7.59738% | 7 pt |
| 37 | Python | tinypy(panda) | 21.799 s | 6.14937% | 6 pt |
| 38 | Scheme | s9 | 33.160 s | 4.04257% | 4 pt |
| 39 | C | picoC | 36.625 s | 3.66016% | 3 pt |
| 40 | JX9 | jx9 | 43.598 s | 3.07476% | 3 pt |
| 41 | PHP | ph7 | 46.029 s | 2.91235% | 2 pt |
| 42 | JTC | jtc | 47.021 s | 2.8509% | 2 pt |
| 43 | JavaScript | v7 | 51.940 s | 2.58089% | 2 pt |
| 44 | Scheme | tinyscheme | 65.398 s | 2.04979% | 2 pt |
| 45 | Lisp | paren | 72.901 s | 1.83883% | 1 pt |
| 46 | Lisp | lispy90 | 91.767 s | 1.46079% | 1 pt |
| 47 | Tcl | picol | 151.527 s | 0.884674% | 0 pt |
| 48 | ChaiScript | chaiscript | 175.038 s | 0.765845% | 0 pt |
| 49 | JavaScript | 42tiny-js | 227.170 s | 0.590096% | 0 pt |
| 50 | Tcl | lil | 555.976 s | 0.241111% | 0 pt |
  • AMD A10 @ 3.8 GHz, 8 GiB RAM, Windows 7 64-bit.
  • Compiled with VS2015 RC where possible, VS2013 otherwise.
  • Take it with a grain of salt.

Language requirements

  • must embed from C++.
  • must be self-contained (no Boost, no LLVM backends).
  • must compile on VS2015 (or at least VS2013).
  • must link statically.
  • must not require (heavy) makefiles/cygwin/build systems to build.
  • must use JIT/optimizations if available.

Test requirements

  • must compare fairly across languages. For example:
    • must not use yield/coroutines in the recursive fibonacci test.
    • must disable threading if possible (not all languages are thread-safe).
    • etc

Upcoming (soon)

  • creating a class to handle them all (relevant)
  • @todo {
  • add exe size
  • add iteration benchmarks
  • add string benchmarks
  • add script->native round trip
  • add native->script round trip
  • add memory consumption
  • add memory leaks
  • add time to set up
  • add final thoughts
  • also, create a score chart, based on:
    • small
    • clean
    • type (interpreted/bytecode/jit)
    • fast
    • containers
    • OO
    • closures
    • bindings
    • 32/64bit
    • threading
    • thread-safety
    • coroutines/greenlets
    • debug capabilities
    • zero-configuration based (drag-n-drop files on solution/project)
    • ...
  • }

To evaluate (someday... maybe)

License

  • initial tests by Lewis Van Winkle (2009) (Public Domain)
  • makefiles, bench code, compilation fixes & most tests put into public domain (@r-lyeh)

scriptorium's People

Contributors

cgbystrom, r-lyeh, readmecritic


scriptorium's Issues

GML Bench Info [YYC?]

The analysis is quite interesting to me. I have some doubts, though, about the GML test. I'm wondering whether the GML benchmark here exercises the VM or the YYC [LLVM C++], and also which version of VS was used to compile [the last VS version available for GMS is VS2013]?

If the YYC hasn't been tested, would it be too much to ask for a YYC test? I find it interesting to test GML (performance-wise) as a C++ abstraction.

Add the list of existing benchmarks to the top of README

Currently it's not at all easy to find the benchmarks. Could you please restructure the README so that, at the very beginning (at least before the results table), there is a list of all existing benchmarks available for all the benchmarked languages? A direct URL pointing to the tests/ directory is also a must-have.

Without this, the benchmarks unfortunately have no value for the reader 😢.

Please split benchmark results

As I understand it, your table currently shows the results of all benchmarks combined. Could you split them up?

Different scenarios may have different priorities. For example, you can imagine that most real game scripts are about calling native functions as quickly as possible and about general interpretation overhead, not arithmetic operations.

Add more benchmarks

Right now the benchmarks all appear to be based on a simple fibonacci test, which I guess is a good start considering the number of runtimes being tested.

But relying on this single type of benchmark is a bit one-sided; expanding the set of benchmarks is needed for a more accurate picture.

Perhaps we can list a few good candidates here. Two good sources I found are
https://github.com/kragen/shootout/tree/master/bench and https://github.com/attractivechaos/plb

It would be nice to pick something easy to implement that still exhibits a different execution pattern than the fibonacci test.

Benchmark CL

Just found this, great project, exactly what I did recently and was looking for.

It might also be worth looking at the few Common Lisp implementations:

Please add ZetScript

Hi r-lyeh,

I would like to know whether it is possible to evaluate ZetScript in your performance tests.

Currently ZetScript links dynamically by default, but I have attached a static build below:

zetscript-1.3.0_static.zip

The equivalent Fibonacci script for ZetScript is given below:

```
function fibR(n)
{
    if (n < 2) {
        return n;
    }
    return fibR(n - 2) + fibR(n - 1);
}

print("fib: " + fibR(34));
```

Cheers,

Update Jim Tcl for the next run

Jim Tcl is maintained, but not in @antirez's repository. The official GitHub mirror is at https://github.com/msteveb/jimtcl/.

Also, please consider including Tcl 8.6, the main implementation of Tcl, in the next round of your benchmark. Tcl 8.x is a versatile scripting language with a rich C API and has been used in games before, including commercial titles. If you have any questions about embedding Tcl, please ask.

Some years later - time to update?

I was looking for some kind of comparison of scripting languages in terms of speed. The results here are quite old, so I wondered if you would be willing to re-run the benchmarks and update the table.

Kind regards,
Ingwie
