
Variational Bayesian Monte Carlo (VBMC) - v0.9 (beta)

News:

  • The VBMC paper [1] has been accepted for a poster presentation at NIPS 2018! (20.8% acceptance rate this year, out of 4856 submissions.)

What is it

What if there were a model-fitting method similar to Bayesian optimization (e.g., BADS) which, instead of returning just the optimal parameter vector, also returned its uncertainty (even better, the full posterior distribution of the parameters), and perhaps even a metric that can be used for Bayesian model comparison?

VBMC is a novel approximate inference method designed to fit and evaluate computational models with a limited budget of likelihood evaluations (e.g., for computationally expensive models). Specifically, VBMC simultaneously computes:

  • an approximate posterior distribution of the model parameters;
  • an approximation (technically, an approximate lower bound) of the log model evidence (also known as the log marginal likelihood), a metric used for Bayesian model selection; the difference in log model evidence between two models gives the log Bayes factor.

Our first benchmark using an array of meaningful artificial test problems and a real neuronal model shows that, with the exception of the easiest cases, VBMC vastly outperforms existing inference methods for expensive models [1]. We are currently validating the algorithm on several other real model-fitting problems, as we previously did for our model-fitting algorithm based on Bayesian optimization, BADS.

VBMC runs with virtually no tuning, and it is very easy to set up for your problem (especially if you are already familiar with BADS).

Should I use VBMC?

VBMC is effective when:

  • the model log-likelihood function is a black-box (e.g., the gradient is unavailable);
  • the likelihood is at least moderately expensive to compute (say, half a second or more per evaluation);
  • the model has up to D = 10 parameters (maybe a few more, but no more than D = 20);
  • the target posterior distribution is continuous and reasonably smooth (see here).

Conversely, if your model can be written analytically, you should exploit the powerful machinery of probabilistic programming frameworks such as Stan or PyMC3.

Installation

Download the latest version of VBMC as a ZIP file.

  • To install VBMC, clone or unpack the zipped repository where you want it and run the script install.m.
    • This will add the VBMC base folder to the MATLAB search path.
  • To see if everything works, run vbmc('test').
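
For example, from the MATLAB prompt (the install location below is a hypothetical placeholder):

  cd ~/matlab/vbmc    % hypothetical path to the unpacked repository
  install             % runs install.m, which adds VBMC to the MATLAB search path
  vbmc('test')        % quick check that the installation works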

Quick start

The VBMC interface is similar to that of MATLAB optimizers. The basic usage is:

[VP,ELBO,ELBO_SD] = vbmc(FUN,X0,LB,UB,PLB,PUB);

with input parameters:

  • FUN, a function handle to the log posterior distribution of your model (that is, log prior plus log likelihood of a dataset and model, for a given input parameter vector);
  • X0, the starting point of the inference (a row vector);
  • LB and UB, hard lower and upper bounds for the parameters;
  • PLB and PUB, plausible lower and upper bounds, that is, a box that ideally brackets a region of high posterior density.

The output parameters are:

  • VP, a struct with the variational posterior approximating the true posterior;
  • ELBO, the (estimated) lower bound on the log model evidence;
  • ELBO_SD, the standard deviation of the estimate of the ELBO (not the error between the ELBO and the true log model evidence, which is generally unknown).
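
As a minimal sketch (a toy example, not taken from the toolbox documentation), the snippet below runs VBMC on a two-dimensional standard normal target; in a real application, FUN would wrap your model's log prior plus log likelihood:

  D = 2;                                           % number of parameters
  fun = @(x) -0.5*sum(x.^2,2) - 0.5*D*log(2*pi);   % log density of a standard normal
  x0 = zeros(1,D);                                 % starting point
  lb = -10*ones(1,D); ub = 10*ones(1,D);           % hard bounds
  plb = -3*ones(1,D); pub = 3*ones(1,D);           % plausible bounds
  [vp,elbo,elbo_sd] = vbmc(fun,x0,lb,ub,plb,pub);

Since this toy target is (essentially) normalized within the bounds, its true log model evidence is about zero, so the returned ELBO should come out close to zero, within a few ELBO_SD.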

The variational posterior VP can be manipulated with functions such as vbmc_moments (compute posterior mean and covariance), vbmc_pdf (evaluate the posterior density), vbmc_rnd (draw random samples), and vbmc_kldiv (compute the Kullback-Leibler divergence between two posteriors); see also this question.
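
For instance, continuing the toy example above (the signatures below are as I understand them from the toolbox documentation; double-check with help vbmc_moments and so on):

  [mu,Sigma] = vbmc_moments(vp);   % posterior mean and covariance matrix
  Xs = vbmc_rnd(vp,1e4);           % draw 10,000 samples from the variational posterior
  p = vbmc_pdf(vp,Xs);             % evaluate the posterior density at those samples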

For a tutorial with extensive usage examples, see vbmc_examples.m. You can also type help vbmc to display the documentation.

For practical recommendations, such as how to set LB, UB, and the plausible bounds, and for any other questions, check out the FAQ on the VBMC wiki.

For BADS users

If you already use Bayesian Adaptive Direct Search (BADS) to fit your models, setting up VBMC on your problem should be particularly simple; see here.

How does it work

VBMC combines two machine learning techniques in a novel way:

  • variational inference, a method to perform approximate Bayesian inference;
  • Bayesian quadrature, a technique to estimate the value of expensive integrals.

VBMC iteratively builds an approximation of the true, expensive target posterior via a Gaussian process (GP), and it matches a variational distribution — an expressive mixture of Gaussians — to the GP.

This matching process entails optimizing the evidence lower bound (ELBO), that is, a lower bound on the log marginal likelihood (LML), also known as the log model evidence. Crucially, we estimate the ELBO via Bayesian quadrature, which is fast and does not require further evaluations of the true target posterior.
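
In standard variational-inference notation (a textbook identity, not specific to this toolbox), with variational posterior q(θ) over parameters θ and data D, the quantity being optimized is

  \mathrm{ELBO}[q] = \mathbb{E}_{q(\theta)}\big[\log p(\mathcal{D},\theta)\big] + \mathcal{H}[q] \le \log p(\mathcal{D}),

where \mathcal{H}[q] is the entropy of q. The first term, the expected log joint, is the intractable piece; this is the expectation that VBMC estimates via Bayesian quadrature under the GP surrogate.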

In each iteration, VBMC uses active sampling to select which points to evaluate next in order to explore the posterior landscape and reduce uncertainty in the approximation.

Fig 1: The VBMC procedure.

In Fig 1A above, we show several iterations of VBMC at work (contour plots of the variational posterior). Red crosses are the centers of the mixture of Gaussians used as variational posterior, whereas black dots are sampled points in the current training set. Fig 1B shows a plot of the estimated ELBO vs. the true log marginal likelihood (LML). Fig 1C represents the ground truth for the target posterior density.

See the VBMC paper for more details [1].

Troubleshooting

The VBMC toolbox is under active development and currently in its beta version (close to final).

It is still in beta because we are validating the algorithm on an additional batch of model-fitting problems, and we want to include some semi-automated diagnostic tools for robustness in the toolbox. The toolbox interface (that is, the details of the input and output arguments of some functions) may change slightly between the beta and the final version. As of now, the toolbox is usable, but you should double-check your results (as you would in any case, of course). See the FAQ for more information on diagnostics.

If you have trouble doing something with VBMC, spot bugs or strange behavior, or you simply have some questions, please contact me at [email protected], putting 'VBMC' in the subject of the email.

VBMC for other programming languages

VBMC is currently available only for MATLAB. A Python version is being planned.

If you are interested in porting VBMC to Python or another language (R, Julia), please get in touch at [email protected] (putting 'VBMC' in the subject of the email); I'd be willing to help. However, before contacting me for this reason, please have a good look at the codebase here on GitHub, and at the paper [1]. VBMC is a fairly complex piece of software, so be aware that porting it will require considerable effort and programming/computing skills.

Reference

  1. Acerbi, L. (2018). Variational Bayesian Monte Carlo. To appear in Advances in Neural Information Processing Systems 31. arXiv preprint arXiv:1810.05558

You can cite VBMC in your work with something along the lines of

We estimated approximate posterior distributions and approximate lower bounds to the model evidence of our models using Variational Bayesian Monte Carlo (VBMC; Acerbi, 2018). VBMC combines variational inference and active-sampling Bayesian quadrature to perform approximate Bayesian inference in a sample-efficient manner.

Besides formal citations, you can demonstrate your appreciation for VBMC in the following ways:

  • Star the VBMC repository on GitHub;
  • Follow me on Twitter for updates about VBMC and other projects I am involved with;
  • Tell me about your model-fitting problem and your experience with VBMC (positive or negative) at [email protected] (putting 'VBMC' in the subject of the email).

You may also want to check out Bayesian Adaptive Direct Search (BADS), our method for fast Bayesian optimization.

License

VBMC is released under the terms of the GNU General Public License v3.0.
