
Comments (1)

willtebbutt commented on May 29, 2024

Ah I see. This certainly makes a lot of sense -- (usually-approximate) low-rank structure is utilised regularly throughout the GP world, but this seems like a nice special case.

So the key question here is how to implement this so that you can "know" that the structure is present. E.g. in the same way that ColVecs "knows" that what you really meant by a matrix of data was a vector-of-vectors where each vector is a column of the matrix, you'll want a RepeatedInputs type (name up to you of course, but this seems reasonable) that's parametrised in terms of the unique inputs and N (in your notation above).
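To make the idea concrete, something like the following could work (a sketch only — the name and fields are hypothetical, and I'm representing the repetition structure as an index map rather than your matrix N):

```julia
# Hypothetical sketch of a repeated-inputs wrapper. It stores only the m unique
# inputs plus an index map, while behaving like the full length-n vector of
# inputs, so downstream code can dispatch on the structure.
struct RepeatedInputs{T, Tu<:AbstractVector{T}} <: AbstractVector{T}
    unique_inputs::Tu    # the m unique inputs
    idx::Vector{Int}     # idx[i] says which unique input the i-th datum uses
end

Base.size(x::RepeatedInputs) = (length(x.idx),)
Base.getindex(x::RepeatedInputs, i::Int) = x.unique_inputs[x.idx[i]]
```

e.g. `RepeatedInputs([0.1, 0.5], [1, 1, 2, 2, 2])` then behaves like the vector `[0.1, 0.1, 0.5, 0.5, 0.5]`, but anything that dispatches on the type still sees the unique inputs and the repetition pattern.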

At that point you've got structure that you can dispatch on, so you then have to decide at which level of abstraction you wish to dispatch. You've got a couple of options.

Option 1

Firstly, if you take a look at the logpdf and rand implementations, you'll see that they're implemented in terms of

  1. the logdet of the covariance matrix, and
  2. C.U' \ data where C is the Cholesky of the covariance matrix
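In other words, once a type can supply those two primitives cheaply, the generic logpdf essentially falls out. A sketch of the identity being relied on (not the package's literal source):

```julia
using LinearAlgebra

# Sketch: the multivariate-normal logpdf in terms of exactly the two
# primitives above — logdet(Σ) and the whitened residual C.U' \ (y - m),
# where C = cholesky(Σ).
function gaussian_logpdf(m::AbstractVector, Σ::AbstractMatrix, y::AbstractVector)
    C = cholesky(Symmetric(Σ))
    α = C.U' \ (y - m)       # whitened residual, so sum(abs2, α) == (y-m)' Σ⁻¹ (y-m)
    return -(length(y) * log(2π) + logdet(C) + sum(abs2, α)) / 2
end
```

So any covariance type that provides a fast `logdet` and a fast triangular solve against `C.U'` gets a fast `logpdf` (and similarly `rand`) for free.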

So the first option is to implement a new subtype of AbstractMatrix called LowRankMatrix or something, and dispatch at the level of cov, so that whenever you ask for the covariance of a FiniteGP / between two FiniteGPs whose inputs x are both RepeatedInputs, you spit out a LowRankMatrix. You could then simply define cholesky(::LowRankMatrix) to spit out something that also knows about this structure, and then implement logdet and C.U' \ data on that.
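A minimal sketch of what that type might look like (hypothetical names; I'm including a diagonal noise term, since a strictly rank-deficient matrix has no Cholesky, and showing `logdet` via the matrix determinant lemma as an example of the kind of specialised method you'd write):

```julia
using LinearAlgebra

# Option-1 sketch: a matrix type that remembers low-rank-plus-diagonal
# structure, A = W * W' + σ² I, with W an n × k factor, k ≪ n.
struct LowRankMatrix{T} <: AbstractMatrix{T}
    W::Matrix{T}   # n × k factor
    σ²::T          # diagonal (e.g. observation-noise) term
end

Base.size(A::LowRankMatrix) = (size(A.W, 1), size(A.W, 1))
Base.getindex(A::LowRankMatrix, i::Int, j::Int) =
    dot(view(A.W, i, :), view(A.W, j, :)) + (i == j ? A.σ² : zero(A.σ²))

# Matrix determinant lemma:
#   logdet(W W' + σ² I_n) = (n - k) log σ² + logdet(W'W + σ² I_k),
# i.e. O(n k² + k³) instead of O(n³).
function LinearAlgebra.logdet(A::LowRankMatrix)
    n, k = size(A.W)
    return (n - k) * log(A.σ²) + logdet(Symmetric(A.W' * A.W + A.σ² * I))
end
```

The catch, as noted below, is that you'd have to keep going: a structured `cholesky`, the triangular solve, multiplication, etc., which is where the boilerplate piles up.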

Option 2

Dispatch logpdf and rand on the type of x, so that if x isa RepeatedInputs you exploit the low-rank structure to perform inference.
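For example, the specialised method could route the computation through the unique inputs via the Woodbury identity and the matrix determinant lemma. A hedged sketch of that computation, written as a free-standing zero-mean function rather than the actual dispatch hook (all names hypothetical; here K is the covariance at the m unique inputs and idx the repetition map):

```julia
using LinearAlgebra

# Sketch: Gaussian logpdf for y over repeated inputs, where the full covariance
# is Σ = K[idx, idx] + σ² I. Using Woodbury + the matrix determinant lemma,
# the cost is O(m³ + n) rather than O(n³). Zero mean assumed for brevity.
function logpdf_repeated(K::AbstractMatrix, idx::Vector{Int}, σ²::Real, y::AbstractVector)
    n, m = length(idx), size(K, 1)
    counts = zeros(m)                 # diagonal of Φ'Φ, with Φ the n×m indicator
    s = zeros(m)                      # Φ'y: per-unique-input sums of y
    for (i, j) in enumerate(idx)
        counts[j] += 1
        s[j] += y[i]
    end
    # Woodbury inner matrix: K⁻¹ + Φ'Φ / σ² (m × m, small)
    A = Symmetric(inv(Symmetric(K)) + Diagonal(counts) ./ σ²)
    quad = dot(y, y) / σ² - dot(s ./ σ², A \ (s ./ σ²))    # y' Σ⁻¹ y
    ld = logdet(Symmetric(K)) + logdet(A) + n * log(σ²)     # logdet Σ
    return -(n * log(2π) + ld + quad) / 2
end
```

The real version would of course live behind the `logpdf(f::FiniteGP, y)` method and pull K, idx and σ² out of the FiniteGP whose inputs are a RepeatedInputs.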

Option 3 (related to option 2)

Have you considered that what you're proposing is a special case of Bayesian linear regression? The number of features equals the number of unique data points, the prior over the coefficients is the covariance matrix at the unique points, and the "feature matrix" is N (or maybe transpose(N), depending on your conventions).

So you could just utilise BayesianLinearRegressors.jl and get all of this for free? Indeed, it might make a lot of sense to implement option 2 in terms of this.
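To spell out the correspondence (a sketch of the identity; Φ here is the hypothetical n×m indicator matrix, i.e. N or transpose(N) depending on convention):

```julia
using LinearAlgebra

# With Φ[i, j] = 1 iff observation i uses unique input j, and K the covariance
# at the m unique inputs, the repeated-inputs GP covariance is exactly a
# Bayesian-linear-regression covariance:
#     cov(y) = Φ * K * Φ' (+ σ² I for noise),
# i.e. features = indicator columns, coefficient prior = N(0, K).
idx = [1, 1, 2, 3, 3, 3]                 # which unique input each datum uses
n, m = length(idx), 3
Φ = zeros(n, m)
for (i, j) in enumerate(idx)
    Φ[i, j] = 1
end
K = let X = randn(m, m); Symmetric(X * X' + I) end    # any PD prior covariance
K_full = [K[idx[i], idx[j]] for i in 1:n, j in 1:n]   # covariance at repeated inputs
@assert Φ * K * Φ' ≈ K_full
```

So handing Φ to a Bayesian linear regressor with coefficient prior N(0, K) recovers exactly the repeated-inputs GP, which is why BayesianLinearRegressors.jl should give you this for free.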

This is the kind of thing that would actually be quite nice to have generally, because it would give people an escape hatch for handling repeated inputs when they know they're there -- so it would be cool if you could contribute your code back.

I think I would be most in favour of option 2 implemented in terms of option 3, as it seems to me to be the cleanest approach. While implementing the linear algebra primitives (option 1) necessary to exploit this structure seems like a good idea in principle, I suspect that there will be a lot of boilerplate in practice and it'll just wind up being quite complicated.

edit: maybe RepeatedElementsVector is a better name than RepeatedInputs, since a subtype of AbstractVector doesn't intrinsically have anything to do with being a collection of inputs to something; it should really stand on its own as a vector.

edit2: you'll also need to think about how you'll exploit this structure to make predictions. You might be better off implementing this using the AbstractGPs interface, because it makes it more straightforward to dispatch on the type of the inputs. The conditioning mechanism in Stheno actually makes this quite complicated at the minute, which is one of the reasons why I'm going to transition it over to using that interface "soon". It really depends on whether you need the probabilistic-programming bit of Stheno, or whether you're fine without it.

from Stheno.jl.
