
Comments (7)

dafnevk commented on May 28, 2024

I also found this blogpost very useful:
http://blog.turi.com/how-to-evaluate-machine-learning-models-part-4-hyperparameter-tuning

from mcfly.

dafnevk commented on May 28, 2024

We could also look into Optunity, which supports TPE and other optimizers.

dafnevk commented on May 28, 2024

Ah sorry, but I see that Optunity uses hyperopt under the hood for TPE, so we might run into the same problems as with hyperas (#35).

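To make the TPE idea concrete, here is a minimal, self-contained 1-D toy version of the Tree-structured Parzen Estimator loop. This is an illustration only, not hyperopt's or Optunity's implementation; the function names, fixed kernel bandwidth, and default settings are my own simplifications:

```python
import math
import random

def tpe_minimize(objective, lo, hi, n_startup=10, n_iters=40,
                 n_candidates=24, gamma=0.25):
    """Toy 1-D TPE: model p(x|good) and p(x|bad) with Parzen (kernel)
    density estimates and pick the candidate maximizing their ratio."""
    history = []  # (x, loss) pairs
    for _ in range(n_startup):           # random warm-up phase
        x = random.uniform(lo, hi)
        history.append((x, objective(x)))
    bw = (hi - lo) / 5.0                 # fixed kernel bandwidth, for simplicity

    def parzen(x, centers):
        # Gaussian kernel density estimate over the given observations
        return sum(math.exp(-0.5 * ((x - c) / bw) ** 2) for c in centers) / len(centers)

    for _ in range(n_iters):
        history.sort(key=lambda t: t[1])
        n_good = max(1, int(gamma * len(history)))
        good = [x for x, _ in history[:n_good]]   # lowest-loss observations
        bad = [x for x, _ in history[n_good:]]
        # sample candidates near good points, keep the best density ratio
        cands = [min(hi, max(lo, random.gauss(random.choice(good), bw)))
                 for _ in range(n_candidates)]
        x = max(cands, key=lambda c: parzen(c, good) / (parzen(c, bad) + 1e-12))
        history.append((x, objective(x)))
    return min(history, key=lambda t: t[1])
```

Note that nothing here requires a differentiable model of the objective, which is why this family of methods extends naturally to discrete and categorical choices (hyperopt does this with tree-structured search spaces).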
vincentvanhees commented on May 28, 2024

Optunity also offers a CMA-ES optimizer. According to 'Algorithms for Hyper-Parameter Optimization' by James Bergstra, "CMA-ES is a state-of-the-art gradient-free evolutionary algorithm for optimization on continuous domains, which has been shown to outperform the Gaussian search EDA. Notice that such a gradient-free approach allows non-differentiable kernels for the GP regression."

I struggle to digest this. Does this mean it can handle non-real-valued hyperparameters, as we want, or is a non-differentiable kernel something different?

vincentvanhees commented on May 28, 2024

Rescale is a commercial tool for training deep networks in the cloud, supporting Keras, Torch, ... Part of the service is Keras hyperparameter optimization: https://blog.rescale.com/deep-neural-network-hyper-parameter-optimization/ It may be good to know that such services exist.

dafnevk commented on May 28, 2024

In that blogpost they use SMAC, which trains random forests on the results and, according to Alice Zheng's blog, handles categorical variables better.
SMAC is available in Python via the pysmac package.

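As a rough illustration of what "trains random forests on the results" means, here is a toy sequential model-based optimization loop in the spirit of SMAC. This is not pysmac's actual API; it uses scikit-learn's random forest as the surrogate (scikit-learn and numpy assumed available), and the search space and objective are made up for the example:

```python
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ACTS = ["relu", "tanh"]

def sample():
    # mixed space: continuous, discrete, categorical
    return {"lr": 10 ** random.uniform(-5, -1),
            "layers": random.randint(1, 6),
            "act": random.choice(ACTS)}

def encode(p):
    # forests cope reasonably well with integer-coded categoricals
    return [p["lr"], p["layers"], ACTS.index(p["act"])]

def toy_objective(p):
    # hypothetical stand-in for "train a model, return validation loss"
    return (p["layers"] - 3) ** 2 + p["lr"]

def smbo(objective, n_startup=8, n_iters=15, n_candidates=50):
    configs = [sample() for _ in range(n_startup)]
    losses = [objective(p) for p in configs]
    for _ in range(n_iters):
        surrogate = RandomForestRegressor(n_estimators=30, random_state=0)
        surrogate.fit(np.array([encode(p) for p in configs]), np.array(losses))
        cands = [sample() for _ in range(n_candidates)]
        preds = surrogate.predict(np.array([encode(p) for p in cands]))
        best_cand = cands[int(np.argmin(preds))]  # greedy; SMAC adds exploration
        configs.append(best_cand)
        losses.append(objective(best_cand))
    i = int(np.argmin(losses))
    return configs[i], losses[i]
```

The real SMAC additionally trades off exploration against exploitation via an acquisition function instead of greedily taking the surrogate's minimum.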
dafnevk commented on May 28, 2024

Another interesting blogpost: http://www.argmin.net/2016/06/20/hypertuning/ (see also the comments below it).
The conclusion is that Bayesian methods such as TPE and SMAC find an optimum only somewhat faster than random search (the speedup is no more than 2x), and random search is easily parallelizable.

It seems that TPE and SMAC are the only algorithms really suitable for the type of problem we have, with mixed categorical, discrete and continuous hyperparameters.
This paper compares the methods. SMAC seems to beat TPE in a majority of the medium/high-dimensional cases.

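Since random search comes out well in that comparison, here is a sketch of what it looks like over this kind of mixed space, with trivially parallel trials. The space and objective below are illustrative, not mcfly's actual ones, and real training jobs would use a process pool or a cluster rather than threads:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sample_params():
    """Draw one configuration from a mixed search space."""
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),  # continuous, log scale
        "num_layers": random.randint(1, 6),             # discrete
        "activation": random.choice(["relu", "tanh"]),  # categorical
    }

def toy_objective(params):
    # hypothetical stand-in for "train a model, return validation loss"
    return (params["num_layers"] - 3) ** 2 + abs(params["learning_rate"] - 1e-3)

def random_search(objective, n_trials=50, n_workers=4):
    configs = [sample_params() for _ in range(n_trials)]
    # trials are fully independent, so they can be evaluated in parallel
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        losses = list(pool.map(objective, configs))
    best_loss, best = min(zip(losses, configs), key=lambda t: t[0])
    return best, best_loss
```

This independence of trials is exactly the parallelizability argument from the blogpost: doubling the workers halves the wall-clock time, with no coordination between trials.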