
nas_benchmarks's Introduction

Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search

This repository contains the code for the following tabular benchmarks:

  • HPOBench: joint hyperparameter and architecture optimization of feed-forward neural networks on regression problems (see [1])
  • NASBench101: architecture optimization of a convolutional neural network (see [2])

To download the datasets for the FC-Net benchmark:

wget http://ml4aad.org/wp-content/uploads/2019/01/fcnet_tabular_benchmarks.tar.gz
tar xf fcnet_tabular_benchmarks.tar.gz
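
The archive unpacks into a fcnet_tabular_benchmarks/ directory with one HDF5 file per regression dataset. A minimal sketch for inspecting one of these files with h5py (the file name below is an assumption about the archive contents; adjust it to whatever your extraction produced):

import h5py

# Assumed file name inside the extracted archive; adjust if it differs.
path = "./fcnet_tabular_benchmarks/fcnet_protein_structure_data.hdf5"

with h5py.File(path, "r") as f:
    keys = list(f.keys())  # one key per hyperparameter configuration
    print("number of configurations:", len(keys))
    first = f[keys[0]]
    print("recorded metrics:", list(first.keys()))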

The data for NAS-Bench-101 can be obtained from the official NASBench repository (https://github.com/google-research/nasbench).

To install it, type:

git clone https://github.com/automl/nas_benchmarks.git
cd nas_benchmarks
python setup.py install

The following example shows how to load the benchmark and to evaluate a random hyperparameter configuration:

from tabular_benchmarks import FCNetProteinStructureBenchmark

# Point data_dir at the directory extracted from fcnet_tabular_benchmarks.tar.gz
b = FCNetProteinStructureBenchmark(data_dir="./fcnet_tabular_benchmarks/")
cs = b.get_configuration_space()
config = cs.sample_configuration()

print("Numpy representation: ", config.get_array())
print("Dict representation: ", config.get_dictionary())

# y is the validation error after `budget` epochs, cost the corresponding training runtime
max_epochs = 100
y, cost = b.objective_function(config, budget=max_epochs)
print(y, cost)
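
Building on the same API, the snippet below sketches a plain random search that samples a handful of configurations and keeps the best one; the loop itself is illustrative and not taken from the repository:

from tabular_benchmarks import FCNetProteinStructureBenchmark

b = FCNetProteinStructureBenchmark(data_dir="./fcnet_tabular_benchmarks/")
cs = b.get_configuration_space()

best_config, best_y = None, float("inf")
for _ in range(20):
    config = cs.sample_configuration()
    # y is the validation error of one stored run, cost its training runtime
    y, cost = b.objective_function(config, budget=100)
    if y < best_y:
        best_y, best_config = y, config

print("best validation error:", best_y)
print("best configuration:", best_config)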

To see how you can run different open-source optimizers from the literature, have a look at the Python scripts in the 'experiment_scripts' folder, which were also used to conduct the experiments in the papers.

References

[1] Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization
    A. Klein and F. Hutter
    arXiv:1905.04970 [cs.LG]

[2] NAS-Bench-101: Towards Reproducible Neural Architecture Search
    C. Ying and A. Klein and E. Real and E. Christiansen and K. Murphy and F. Hutter
    arXiv:1902.09635 [cs.LG]

nas_benchmarks's People

Contributors

aaronkl, dekuenstle, keggensperger, neeratyoy


nas_benchmarks's Issues

UCI dataset .npy used for fcnet results

Dear Dr. Klein,

Many thanks for the benchmark datasets on CNNs and fully connected networks. I'm trying to reproduce some of the results for the fully connected networks on the protein structure UCI dataset using your script "train_fcnet.py". However, I got quite different performance (valid_mse=0.5) compared to the results reported in the tabular benchmark dataset (valid_mse=0.3) for the same configuration/hyperparameters.

Would it be possible for you to share the train/valid/test .npy files you used for the fcnet tabular benchmark dataset? That is the only part that differs in my script (by the way, I also removed the redundant feature and normalised the data as instructed in your paper).

I'd be really grateful to hear your reply. Once again, thanks for the datasets.

Best,
Robin
Oxford

Reproducing NAS-Bench-101 benchmarks

Hi there,
thanks for releasing both NAS-Bench-101 and the code you used to benchmark on it; I believe it's a really good step towards fairer comparison of NAS methods.

It would be really useful if you also released the scripts (including the hyperparameters) used to generate the main plot (for example, Fig. 7).

Would that be possible? Thanks

Reproduce the results

Hi, thank you for this work! If I want to reproduce the results in the paper, can I run the code in experiment_scripts directly, and is the resulting JSON file the result described in the paper? Are there any details that I need to pay attention to or modify? Looking forward to your reply.

Handling of objective function evaluation in Regularized Evolution script

Hi,

This is with regard to fcnet_tabular_benchmarks.

The Regularized Evolution script evaluates accuracy by subtracting the objective_function() evaluation from 1.

However, for certain benchmarks the MSE returned is greater than 1, which can make the accuracy negative. Is this evaluation correct?
Accuracies are computed here and here.

One way to verify this:

from tabular_benchmarks import FCNetSliceLocalizationBenchmark

b = FCNetSliceLocalizationBenchmark(data_dir="./fcnet_tabular_benchmarks/")
cs = b.get_configuration_space()

# Search for a configuration whose validation MSE exceeds 1 ...
for i in range(1000):
    config = cs.sample_configuration()
    value, _ = b.objective_function(config)
    if value > 1:
        print(config)
        break

# ... and show that 1 - MSE then becomes negative across repeated evaluations.
for i in range(6):
    value, _ = b.objective_function(config)
    print(1 - value)
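
A possible workaround (my suggestion, not part of the original report): since these benchmarks return a validation MSE rather than an error rate, the evolution's fitness could use the negated objective value directly instead of 1 - value, which preserves the comparison order without ever interpreting an MSE as an accuracy. A minimal sketch under that assumption:

def fitness(benchmark, config, budget=100):
    # Negating the MSE keeps "larger fitness is better" without
    # assuming the objective lies in [0, 1].
    y, _ = benchmark.objective_function(config, budget=budget)
    return -y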

what are cifarA, cifarB, and cifarC

Hello there,

Thank you for making this benchmark public. I have a question regarding the cifarA, cifarB, and cifarC benchmarks, which all inherit from a class that wraps NASBench. What is the difference between cifarA, B, and C, and what is their purpose? Thank you.
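
For reference, the three classes can be instantiated like the FC-Net benchmarks. A minimal sketch, assuming the import names NASCifar10A, NASCifar10B, and NASCifar10C and a local directory containing the NAS-Bench-101 data file:

from tabular_benchmarks import NASCifar10A, NASCifar10B, NASCifar10C

# Assumed location of the NAS-Bench-101 data file on disk.
data_dir = "./nasbench_data/"

for cls in (NASCifar10A, NASCifar10B, NASCifar10C):
    b = cls(data_dir=data_dir)
    cs = b.get_configuration_space()
    print(cls.__name__, "->", len(cs.get_hyperparameters()), "hyperparameters")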
