
radis-benchmark

Performance Benchmarks for RADIS


Automatic benchmarks

Benchmarks can be found in benchmarks/benchmarks.py

Benchmarks are executed with Airspeed Velocity. Results: 🔗 https://radis.github.io/radis-benchmark/

Run the benchmark for the latest commit of the RADIS develop branch (the ^! suffix is Git revision syntax selecting that single commit):

asv run develop^!

Run the benchmarks for all tested versions (updating the file if necessary):

asv run HASHFILE:tested_radis_versions.txt -e --skip-existing-successful
asv publish
asv preview

Benchmarks are executed many times. Some involve calculations of over a million lines on versions predating the LDM method, and therefore take a long time. If developing new benchmarks, first check the ASV documentation, in particular asv dev.
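
For orientation, here is a minimal sketch of what such an ASV benchmark looks like; the class, method names and data are hypothetical, not taken from benchmarks/benchmarks.py. ASV discovers methods by prefix (time_* for wall time, peakmem_* for peak memory) and runs setup() before each benchmark, outside the timing:

    import numpy as np

    class HypotheticalSuite:
        timeout = 120.0  # seconds allowed per benchmark run

        def setup(self):
            # setup() is re-run before each benchmark and is not timed
            self.w = np.random.rand(1_000_000)  # stand-in for a large line database

        def time_sort(self):
            # time_* methods are timed (and repeated) by ASV
            np.sort(self.w)

        def peakmem_copy(self):
            # peakmem_* methods report peak memory usage instead
            self.w.copy()

asv dev (roughly equivalent to asv run --quick --python=same) runs each benchmark once in the current environment, which makes iterating on new benchmarks much faster.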

Benchmarks are run against major tagged versions of RADIS. The list of versions can be found in tested_radis_versions.txt. These tags mostly belong to the master branch. Older versions (< 0.9.21) required manual patches to run the benchmarks, so support branches were added: see support/0.9.18, support/0.9.19, support/0.9.21, support/0.9.22.
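
For reference, the HASHFILE argument is simply a text file with one commit hash or tag per line; assuming tags matching the support branches above, tested_radis_versions.txt would contain something like:

    0.9.18
    0.9.19
    0.9.21
    0.9.22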

Note for developers: once you have run the benchmarks locally, you can upload the results directly to the 🔗 online website by running asv gh-pages.

Manual performance tests

Manual performance tests can be found in the manual_benchmarks/ folder; see for instance the OH benchmark notebook linked in the issues below.

radis-benchmark's People

Contributors: encrypted-soul, erwanp, pkj-m, tranhuunhathuy


radis-benchmark's Issues

Errors while setting up radis-benchmark

While trying to set up the benchmarks, several errors are encountered. The setup is isolated in a separate virtualenv, with the latest conda version.

·· Error running /home/gaganaryan/Desktop/Radis/radis-benchmark/env/f329282d0d237249a049116b7f234e30/bin/python setup.py build (exit status 1)
   STDOUT -------->
   
   STDERR -------->
   Traceback (most recent call last):
     File "setup.py", line 49, in <module>
       import numpy 
   ModuleNotFoundError: No module named 'numpy'
   
   During handling of the above exception, another exception occurred:
   
   Traceback (most recent call last):
     File "setup.py", line 56, in <module>
       'matplotlib pandas')
   ImportError: Please install these librairies first (with Anaconda is strongly recommended) 
    >>> conda install numpy scipy matplotlib pandas

I tried uninstalling and then reinstalling all four packages listed in the last line of the error, but this does not resolve the issue. The packages are installed (verified with conda list).

On a few occasions, just after a fresh reinstall of the entire Anaconda distribution, this error did not occur, but I then faced a similar ModuleNotFoundError for psutil. I still face it now, for the commits that do not crash with the first error.

·· Error running /home/gaganaryan/Desktop/Radis/radis-benchmark/env/f329282d0d237249a049116b7f234e30/bin/python /home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py discover /home/gaganaryan/Desktop/Radis/radis-benchmark/benchmarks /tmp/tmpvtl2_a7z/result.json (exit status 1)
   STDOUT -------->
   
   STDERR -------->
   Traceback (most recent call last):
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 1315, in <module>
       main()
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 1308, in main
       commands[mode](args)
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 1004, in main_discover
       list_benchmarks(benchmark_dir, fp)
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 989, in list_benchmarks
       for benchmark in disc_benchmarks(root):
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 887, in disc_benchmarks
       for module in disc_modules(root_name, ignore_import_errors=ignore_import_errors):
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 869, in disc_modules
       for item in disc_modules(name, ignore_import_errors=ignore_import_errors):
     File "/home/gaganaryan/anaconda3/envs/radis/lib/python3.8/site-packages/asv/benchmark.py", line 857, in disc_modules
       module = import_module(module_name)
     File "/home/gaganaryan/Desktop/Radis/radis-benchmark/env/f329282d0d237249a049116b7f234e30/lib/python3.6/importlib/__init__.py", line 126, in import_module
       return _bootstrap._gcd_import(name[level:], package, level)
     File "<frozen importlib._bootstrap>", line 994, in _gcd_import
     File "<frozen importlib._bootstrap>", line 971, in _find_and_load
     File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
     File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
     File "<frozen importlib._bootstrap_external>", line 678, in exec_module
     File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
     File "/home/gaganaryan/Desktop/Radis/radis-benchmark/benchmarks/benchmarks.py", line 9, in <module>
       from psutil import virtual_memory
   ModuleNotFoundError: No module named 'psutil'

@anandxkumar is there anything you would like to add to this?

OH full range - 730 lines

@dcmvdbekerom I set up a rather extreme example where we see the weaknesses of the LDM / FFT method when there are very few lines and a large spectral range.

  • LDM: 27 s
  • historical: 0.43 s

https://github.com/radis/radis-benchmark/blob/master/manual_benchmarks/OH%20benchmark.ipynb

I do not think the LDM can be improved much under these conditions. Instead, we should take advantage of the fact that RADIS already implements the historical method, and switch to it automatically.

I discussed this previously, but to a first approximation:

  • the LDM scales as (spectral_range / wstep) * log(spectral_range / wstep) (FFT)
  • the historical method (line-centered lineshape with cutoff) scales as broadening_cutoff * spectral_range / wstep^2 * N_lines (convolution)

Therefore the ratio R of these two costs should be a good indicator of when to use the LDM (R >> Rcrit) and when to use the historical method (R << Rcrit):

R = (broadening_cutoff / wstep) * N_lines / log(spectral_range / wstep)

In the benchmark example I calculated R = 50e6, and we are clearly in a regime where the historical method wins, so we can already say that Rcrit >> 50e6.
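
A minimal sketch of what this switch could look like, assuming the scalings above; the function names, the Rcrit placeholder and the example conditions are all illustrative, not RADIS API:

    import numpy as np

    def cost_ratio(broadening_cutoff, wstep, N_lines, spectral_range):
        # Estimated cost(historical) / cost(LDM), up to a constant factor:
        # R = (broadening_cutoff / wstep) * N_lines / log(spectral_range / wstep)
        return broadening_cutoff / wstep * N_lines / np.log(spectral_range / wstep)

    def choose_method(broadening_cutoff, wstep, N_lines, spectral_range,
                      Rcrit=50e6):  # placeholder threshold, still to be calibrated
        R = cost_ratio(broadening_cutoff, wstep, N_lines, spectral_range)
        return "ldm" if R > Rcrit else "historical"

    # Hypothetical conditions loosely inspired by the OH case (few lines, huge range):
    print(choose_method(broadening_cutoff=25, wstep=0.01,
                        N_lines=730, spectral_range=35000))  # -> 'historical'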

This in itself could actually be a GSoC project!

  • Build a big map of benchmark cases
  • Run both methods and optimize manually
  • Derive a better expression for R
  • Implement the automatic switch via a high-level optimization='auto' mode in calc_spectrum

Related

Unit tests to implement

Functions/methods to add to the benchmarks:

  • BaseFactory._add_EvibErot_RADIS_cls1 (groupby().apply() replaced by index.map(dict.get()) in 0.9.29; try with HITEMP CO). See the sketch below.
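
As a toy sketch of the two patterns being compared (hypothetical data, not the actual RADIS code):

    import pandas as pd

    df = pd.DataFrame({"viblvl": ["(0)", "(1)", "(0)", "(2)"]})  # one level per line
    evib = {"(0)": 0.0, "(1)": 2143.3, "(2)": 4260.1}            # energy lookup table

    # pre-0.9.29 pattern: per-group lookup through groupby().apply()
    df["Evib_old"] = df.groupby("viblvl", group_keys=False)["viblvl"].apply(
        lambda s: s.map(evib.get))

    # 0.9.29 pattern: a single vectorized lookup through index.map(dict.get)
    df["Evib_new"] = df.set_index("viblvl").index.map(evib.get)

    assert (df["Evib_old"] == df["Evib_new"]).all()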

HITEMP tests stall on Windows

As of 7b9a7a2, the tests unexpectedly stall on Windows, with no error message; the console had to be killed. Tried in both Git Bash and a normal Cmd.

Running the same tests on Linux showed asv: benchmark timed out (timeout 60.0s)

publish new benchmarks: filename too long on Windows

Currently,

asv gh-pages

fails with:

 error: open("graphs/Cython-null/arch-x86_64/branch-support_0.9.18/cpu-Intel(R) Core(TM) i7-6700HQ CPU @ 2.50GHz (4                                         cores)/machine-ERWAN-XPS/num_cpu-4/os-Windows 10/psutil-null/python-3.6/ram-16GB/benchmarks.CO2_HITEMP.peakmem_eq_spectrum.js                                        on"): Filename too long
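
A commonly suggested workaround for this Git error on Windows (not verified here) is to enable long-path support with git config --global core.longpaths true. Trimming the matrix parameters that end up in the path (the psutil-null and Cython-null fragments; see the next issue) might also shorten the generated filenames.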

add psutil and other requirements

The current requirements of the benchmarks were not being installed.

In #7 they were added to the "Matrix" of tested dependencies, and therefore installed.

However, they then appear everywhere in the benchmarks' tested parameters (see the "psutil-null" fragments in the filenames above), which is unnecessary: we should only add them as dependencies, in a requirements.txt or a conda environment.yml.
