
MXFusion

Build Status | codecov | pypi | Documentation Status | GitHub license

Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

MXFusion uses MXNet as its computational platform, bringing the power of distributed, heterogeneous computation to probabilistic modeling.

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested with Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward CPU-only installation can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.

pip

If you just want to use MXFusion and not modify the source, you can install through pip:

pip install mxfusion

From source

To install MXFusion from source, after cloning the repository run the following from the top-level directory:

pip install .

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Contributors

apaleyes, cliffmcc, jnkm, meissnereric, tdiethe, zhenwendai


Issues

Fix flaky model replication test

The test that fails sometimes is:

testing/core/factor_graph_test.py::FactorGraphTests::test_replicate_simple_model

It only fails on Linux builds, and only occasionally. The first step would be to reproduce the failure, then debug why it's happening.
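
A quick way to hunt for the failure (a sketch, assuming pytest is installed and this is run from the repository root) is to loop the single test until it breaks:

import subprocess

# Sketch: rerun the flaky test until it fails, then dump its output.
TEST = ("testing/core/factor_graph_test.py"
        "::FactorGraphTests::test_replicate_simple_model")

for i in range(200):
    result = subprocess.run(["pytest", "-x", TEST],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    if result.returncode != 0:
        print("Failed on run", i)
        print(result.stdout.decode())
        break
else:
    print("No failure in 200 runs")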

Fail to load the default configuration

[09/14/18 08:41 PM] Diethe, Tom: When running the PPCA demo, I get the following exception:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-13-4cf00e1190ef> in <module>()
      1 cov = mx.nd.broadcast_to(mx.nd.expand_dims(mx.nd.array(np.eye(K,K)), 0),shape=(N,K,K))
----> 2 m.z = mf.distributions.MultivariateNormal.define_variable(mean=mx.nd.zeros(shape=(N,K)), covariance=cov, shape=(N,K))
      3 sigma_2 = Variable(shape=(1,), transformation=PositiveTransformation())
      4 m.x = mf.distributions.Normal.define_variable(mean=m.dot(m.z, m.w), variance=sigma_2, shape=(N,D))

~/Library/Python/3.6/lib/python/site-packages/mxfusion/components/distributions/normal.py in define_variable(shape, mean, covariance, rand_gen, minibatch_ratio, dtype, ctx)
    303         normal = MultivariateNormal(mean=mean, covariance=covariance,
    304                                     rand_gen=rand_gen,
--> 305                                     dtype=dtype, ctx=ctx)
    306         normal._generate_outputs(shape=shape)
    307         return normal.random_variable

~/Library/Python/3.6/lib/python/site-packages/mxfusion/components/distributions/normal.py in __init__(self, mean, covariance, rand_gen, minibatch_ratio, dtype, ctx)
    216                                      input_names=input_names,
    217                                      output_names=output_names,
--> 218                                      rand_gen=rand_gen, dtype=dtype, ctx=ctx)
    219 
    220     def replicate_self(self, attribute_map=None):

~/Library/Python/3.6/lib/python/site-packages/mxfusion/components/distributions/distribution.py in __init__(self, inputs, outputs, input_names, output_names, rand_gen, dtype, ctx)
    109         self._rand_gen = MXNetRandomGenerator if rand_gen is None else \
    110             rand_gen
--> 111         self.dtype = get_default_dtype() if dtype is None else dtype
    112         self.ctx = ctx
    113         self.log_pdf_scaling = 1

~/Library/Python/3.6/lib/python/site-packages/mxfusion/common/config.py in get_default_dtype()
     11 
     12 def get_default_dtype():
---> 13     return config['defaults']['mxnet_dtype']
     14 
     15 

/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/configparser.py in __getitem__(self, key)
    957     def __getitem__(self, key):
    958         if key != self.default_section and not self.has_section(key):
--> 959             raise KeyError(key)
    960         return self._proxies[key]
    961 

KeyError: 'defaults'

It failed to load config.cfg. A possible reason is that the PyPI packaging does not include config.cfg.
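
If packaging is indeed the cause, the usual fix is to declare the file as package data so it ships with the wheel and sdist. A sketch (the setup.py layout and the location of config.cfg are assumptions based on the traceback above):

from setuptools import setup, find_packages

setup(
    name='mxfusion',
    packages=find_packages(),
    # Ship config.cfg with the package; the path is assumed from the
    # mxfusion/common/config.py frame in the traceback above.
    package_data={'mxfusion': ['common/config.cfg']},
    include_package_data=True,
)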

Improve test coverage

Some of the code still doesn't have full test coverage. Important places to improve include the FactorGraph class and the util folder functions.

Implement common MXNet operators (dot product, diag, etc.) in MXFusion

These should be implemented as stateless functions.

Create a class that takes in variables and returns a function evaluation (a sketch follows the list below).

  • Basic - addition, subtraction, multiplication, division of variables
  • Elementwise - square, exponentiation, log
  • Aggregation - sum, mean, prod
  • Matrix ops - dot product, diag
  • Matrix manipulation - reshape, transpose
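
Below is a minimal sketch of the proposed pattern, using hypothetical names (Operator, eval) rather than MXFusion's eventual API: a stateless class that records its input variables and, when evaluated, applies the corresponding MXNet operator.

import mxnet as mx

# Hypothetical sketch: a stateless operator stores no parameters, only
# its input variables; evaluation defers entirely to the MXNet op.
class Operator:
    def __init__(self, inputs, mxnet_op):
        self.inputs = inputs      # MXFusion Variables feeding this op
        self.mxnet_op = mxnet_op  # e.g. mx.nd.dot, mx.nd.diag

    def eval(self, values):
        """values: dict mapping each input Variable to an mx.nd.NDArray."""
        return self.mxnet_op(*(values[v] for v in self.inputs))

# Usage sketch: a dot product of variables x and w would be
# Operator([x, w], mx.nd.dot), evaluated lazily during inference.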

Printing summary of optimized parameters

Hi Team,

I can't figure out how to print a readable summary of the parameters that the optimizer sees.
I can print either the uuid (not useful, since I can't translate it to a variable name) or the optimized values, but what I'm looking for is a way to make the connection parameter_name: optimized_value, e.g. kernel.lengthscale: 0.1.

Right now I'm using this:

m = Model()
kernel = RBF(input_dim=1)
m.y = GPRegression.define_variable(kernel=kernel,...)


infr = GradBasedInference(model=m, ...)
infr.run(...)

my_gp = m.y.factor

param_names = [
    'my_gp.kernel.rbf.lengthscale',
    'my_gp.kernel.rbf.variance'
]

param_keys = [
    my_gp._module_graph.kernel.rbf.lengthscale,
    my_gp._module_graph.kernel.rbf.variance
]

for name, key in zip(param_names, param_keys):
    print(name, infr.params[key].asnumpy())

But obviously I'd love some code that doesn't assume I know the structure of the model and is instead automatically parsing the model and/or infr object to generate the parameter report.

Thanks!
Andreas
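
Until such a feature exists, here is a hedged sketch of the requested report built only from the pieces shown above; the name-to-variable mapping is still written by hand, and the feature request is for MXFusion to derive it automatically:

# Sketch: map display names to MXFusion Variables once, then print
# name: optimized_value pairs from the inference parameters.
def param_report(name_to_var, infr):
    return {name: infr.params[var].asnumpy()
            for name, var in name_to_var.items()}

report = param_report({
    'my_gp.kernel.rbf.lengthscale': my_gp._module_graph.kernel.rbf.lengthscale,
    'my_gp.kernel.rbf.variance': my_gp._module_graph.kernel.rbf.variance,
}, infr)
for name, value in report.items():
    print(name, value)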

Add tests for non-mock sample generation

Currently the tests of sampling from distributions all use a mock sample generator, so they don't actually exercise the random_gen.py functions, with the exception of the Bernoulli test:

# Also make sure the non-mock sampler works
rand_gen = None
var = Bernoulli.define_variable(0, shape=rv_shape, rand_gen=rand_gen, dtype=dtype).factor
prob_true_mx = mx.nd.array(prob_true, dtype=dtype)
if not prob_true_is_samples:
    prob_true_mx = add_sample_dimension(mx.nd, prob_true_mx)
variables = {var.prob_true.uuid: prob_true_mx}
rv_samples_rt = var.draw_samples(
    F=mx.nd, variables=variables, num_samples=num_samples)

assert is_sampled_array(mx.nd, rv_samples_rt)
assert get_num_samples(mx.nd, rv_samples_rt) == num_samples
assert rv_samples_rt.dtype == dtype

Add MXFusion variable combination

Example use case: in ADVI, setting a prior over a combination of variables. This is required for the full covariance matrix in the posterior.

This may require a new Variable class (VariableCombination?).

Add better optimizer (e.g. l-bfgs)

Optimizing GP models with Adam is less than ideal; it would be great if we could bring in a better optimizer. It's not clear exactly how that should happen (or whether it should live in the MXFusion library vs. MXNet itself).

The simplest solution is probably just to wrap the scipy optimizer.
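
A minimal sketch of that route, wrapping scipy.optimize.minimize; it assumes a user-supplied loss_and_grad callable that loads a flat NumPy parameter vector into the model and returns the loss and its gradient (e.g. computed via mx.autograd):

import numpy as np
from scipy.optimize import minimize

def fit_lbfgs(loss_and_grad, theta0, max_iter=100):
    """loss_and_grad(theta) -> (loss, grad), both NumPy float64 (assumed)."""
    result = minimize(loss_and_grad, np.asarray(theta0, dtype=np.float64),
                      jac=True, method='L-BFGS-B',
                      options={'maxiter': max_iter})
    return result.x, result.fun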

Samples not propagated correctly in SVI for GPs

As you can see from the following code and output, SVI in deep GPs produces a dimension mismatch (number of samples vs. 1); presumably some broadcast is failing?

Note: Probably related to issue: #76

Thanks,
Andreas

import numpy as np, mxnet as mx
from mxfusion import Model, Variable
from mxfusion.modules.gp_modules import SparseGPRegression
from mxfusion.components.distributions.gp.kernels import RBF, White
from mxfusion.inference import GradBasedInference, create_Gaussian_meanfield, StochasticVariationalInference
from mxfusion.components.variables.var_trans import PositiveTransformation


X = np.random.uniform(-3.,3.,(20,1))
Y = np.sin(X) + np.random.randn(20,1)*0.05

m = Model()
m.x = Variable(shape=(20, 1))
m.noise_var = Variable(transformation=PositiveTransformation())
kernel = RBF(input_dim=1, dtype='float64') + White(input_dim=1, dtype='float64')
m.y = SparseGPRegression.define_variable(X=m.x, kernel=kernel, shape=(20, 1), 
                                   noise_var=m.noise_var, dtype='float64')

m.noise_var_z = Variable(transformation=PositiveTransformation())
kernel_z = RBF(input_dim=1, variance=1., lengthscale=1., dtype='float64')+ White(input_dim=1, dtype='float64')
m.z = SparseGPRegression.define_variable(X=m.y, kernel=kernel_z, shape=(20, 1), 
                                   noise_var=m.noise_var_z, dtype='float64')

q = create_Gaussian_meanfield(model=m, observed=[m.x, m.z], dtype='float64')
infr = GradBasedInference(inference_algorithm=StochasticVariationalInference(model=m, posterior=q, num_samples=100, observed=[m.z]), dtype='float64')
infr.run(x=mx.nd.array(X, dtype='float64'), z=mx.nd.array(Y, dtype='float64'), learning_rate=0.05, verbose=True, max_iter=100)

Output:

---------------------------------------------------------------------------
MXNetError                                Traceback (most recent call last)
<ipython-input-19-58fc005b39b5> in <module>()
     24 q = create_Gaussian_meanfield(model=m, observed=[m.x, m.z], dtype='float64')
     25 infr = GradBasedInference(inference_algorithm=StochasticVariationalInference(model=m, posterior=q, num_samples=100, observed=[m.z]), dtype='float64')
---> 26 infr.run(x=mx.nd.array(X, dtype='float64'), z=mx.nd.array(Y, dtype='float64'), learning_rate=0.05, verbose=True, max_iter=100)

MXFusion/mxfusion/inference/grad_based_inference.py in run(self, optimizer, learning_rate, max_iter, verbose, **kw)
     75             infr_executor=infr, data=data, param_dict=self.params.param_dict,
     76             ctx=self.mxnet_context, optimizer=optimizer,
---> 77             learning_rate=learning_rate, max_iter=max_iter, verbose=verbose)

MXFusion/mxfusion/inference/batch_loop.py in run(self, infr_executor, data, param_dict, ctx, optimizer, learning_rate, max_iter, n_prints, verbose)
     35         for i in range(max_iter):
     36             with mx.autograd.record():
---> 37                 loss, loss_for_gradient = infr_executor(mx.nd.zeros(1, ctx=ctx), *data)
     38                 loss_for_gradient.backward()
     39             if verbose:

mxnet/gluon/block.py in __call__(self, *args)
    539             hook(self, args)
    540 
--> 541         out = self.forward(*args)
    542 
    543         for hook in self._forward_hooks.values():

mxnet/gluon/block.py in forward(self, x, *args)
    916                     params = {i: j.data(ctx) for i, j in self._reg_params.items()}
    917 
--> 918                 return self.hybrid_forward(ndarray, x, *args, **params)
    919 
    920         assert isinstance(x, Symbol), \

MXFusion/mxfusion/inference/inference_alg.py in hybrid_forward(self, F, x, *args, **kw)
     65         add_sample_dimension_to_arrays(F, kw, out=variables)
     66         add_sample_dimension_to_arrays(F, self._constants, out=variables)
---> 67         obj = self._infr_method.compute(F=F, variables=variables)
     68         with autograd.pause():
     69             # An inference algorithm may directly set the value of a parameter instead of computing its gradient.

MXFusion/mxfusion/inference/variational.py in compute(self, F, variables)
     91             F=F, variables=variables, num_samples=self.num_samples)
     92         variables.update(samples)
---> 93         logL = self.model.log_pdf(F=F, variables=variables)
     94         logL = logL - self.posterior.log_pdf(F=F, variables=variables)
     95         return -logL, -logL

MXFusion/mxfusion/models/factor_graph.py in log_pdf(self, F, variables, targets)
    211                 if len(module_targets) > 0:
    212                     logL = logL + F.sum(expectation(F, f.log_pdf(
--> 213                         F=F, variables=variables, targets=module_targets)))
    214             else:
    215                 raise ModelSpecificationError("There is an object in the factor graph that isn't a factor." + "That shouldn't happen.")

MXFusion/mxfusion/modules/module.py in log_pdf(self, F, variables, targets)
    295         else:
    296             raise ModelSpecificationError("The targets, conditionals pattern for log_pdf computation "+str((target_names, conditionals_names))+" cannot find a matched inference algorithm.")
--> 297         return alg.compute(F, variables)
    298 
    299     def draw_samples(self, F, variables, num_samples=1, targets=None):

MXFusion/mxfusion/modules/gp_modules/sparsegp_regression.py in compute(self, F, variables)
     47             mean = self.model.mean_func(F, X)
     48             Y = Y - mean
---> 49         LAInvLinvKufY = F.linalg.trsm(LA, F.linalg.gemm2(LinvKuf, Y))
     50 
     51         logL = - D*F.linalg.sumlogdiag(LA)

mxnet/ndarray/register.py in gemm2(A, B, transpose_a, transpose_b, alpha, axis, out, name, **kwargs)

mxnet/_ctypes/ndarray.py in _imperative_invoke(handle, ndargs, keys, vals, out)
     90         c_str_array(keys),
     91         c_str_array([str(s) for s in vals]),
---> 92         ctypes.byref(out_stypes)))
     93 
     94     if original_output is not None:

mxnet/base.py in check_call(ret)
    250     """
    251     if ret != 0:
--> 252         raise MXNetError(py_str(_LIB.MXGetLastError()))
    253 
    254 

MXNetError: [11:40:59] src/operator/tensor/./la_op.h:144: Check failed: (*in_attrs)[0][i] == (*in_attrs)[1][i] (1 vs. 100) Shapes of inputs 0, 1 must be the same, except on row/col axis

Stack trace returned 8 entries:
[bt] (0) 0   libmxnet.so                         0x00000001142c4b90 libmxnet.so + 15248
[bt] (1) 1   libmxnet.so                         0x00000001142c493f libmxnet.so + 14655
[bt] (2) 2   libmxnet.so                         0x000000011552970b libmxnet.so + 19302155
[bt] (3) 3   libmxnet.so                         0x00000001157f94ca MXNDListFree + 502922
[bt] (4) 4   libmxnet.so                         0x00000001157f7f94 MXNDListFree + 497492
[bt] (5) 5   libmxnet.so                         0x000000011575530e MXCustomFunctionRecord + 20926
[bt] (6) 6   libmxnet.so                         0x0000000115756030 MXImperativeInvokeEx + 176
[bt] (7) 7   _ctypes.cpython-35m-darwin.so       0x000000010cd69627 ffi_call_unix64 + 79

Bug in passing scalar as value to Variable

When a constant primitive scalar is passed as the value of a Variable, we don't handle it well during inference: there is a bug in get_num_samples when it attempts to get the number of samples from the scalar. The problem with converting it to an mx.nd.array at Variable definition time is that we don't know the dtype.

I think the right way to fix this is in get_num_samples and related code: it should accept scalars and simply move on, since MXNet operators are usually fine with broadcasting a scalar.
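
A hedged sketch of that fix (the real get_num_samples lives in MXFusion's internals and may look different):

# Sketch: accept plain Python scalars in sample-counting code; MXNet
# operators usually broadcast scalars, so a scalar counts as one sample.
def get_num_samples(F, array):
    if isinstance(array, (int, float)):
        return 1
    return array.shape[0]  # assumes the sample dimension is axis 0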

Improve interface for retrieving hidden parameters from Modules

It's a bit difficult (and not obvious) to get the parameters of a Module out of the InferenceParameters. Right now you have to do something like infr.params[m.y.factor.kernel.lengthscale].

Having a method on the Model class that retrieves all components including internal module components might be handy for users.
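
A hypothetical sketch of such a method; modules() and named_variables() are illustrative names, not MXFusion's actual API:

# Hypothetical sketch of the convenience method described above.
def get_module_parameters(model, infr):
    params = {}
    for module in model.modules():                   # illustrative accessor
        for name, var in module.named_variables():   # illustrative accessor
            params[name] = infr.params[var]
    return params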

Improve inference parameters printing

infr.params.param_dict just uses MXNet's basic printing, but that's not very helpful since it's keyed by uuid.

Would be really nice to have something like

print(infr.params)
> Variable(1ab23)(name=y) - (Model/Posterior(123ge2)) - (first mxnet values/shape)
> ....

DeprecationWarning about invalid escape sequence

Fix the DeprecationWarning about invalid escape sequence:

/home/travis/build/amzn/MXFusion/mxfusion/components/distributions/gp/gp.py:153:
DeprecationWarning: invalid escape sequence *
"""
/home/travis/build/amzn/MXFusion/mxfusion/components/distributions/gp/gp.py:184: DeprecationWarning: invalid escape sequence *
"""
/home/travis/build/amzn/MXFusion/mxfusion/components/distributions/gp/cond_gp.py:199: DeprecationWarning: invalid escape sequence *
"""
/home/travis/build/amzn/MXFusion/mxfusion/components/distributions/gp/cond_gp.py:241: DeprecationWarning: invalid escape sequence *
"""
/home/travis/build/amzn/MXFusion/mxfusion/components/functions/gluon_func_eval.py:69: DeprecationWarning: invalid escape sequence *
"""
/home/travis/build/amzn/MXFusion/mxfusion/components/functions/gluon_func_eval.py:85: DeprecationWarning: invalid escape sequence *
"""

Random seed in tests

I see the set_seed decorator used in a few places with a fixed random seed for tests (currently 0). There are cases where instabilities might occur for certain random seeds but not others (e.g. singular matrices). One could make the seed itself random, but then tests may randomly pass or fail, and a failure could be falsely attributed to an unrelated commit. Have you considered running the tests with multiple seeds? Of course this is only partial protection: if the error is rare among random seeds, you would need a large number of seeds.

Looking at what other people do, I note that in the dotnet/infer project they mostly use a single seed, but in some cases they do multiple repeats with different seeds, so possibly identifying special cases where it's needed is enough.
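
One lightweight way to run a test under several fixed seeds, as discussed, is pytest's parametrize. A sketch (the seed values and the test body are arbitrary):

import numpy as np
import pytest

@pytest.mark.parametrize("seed", [0, 1, 42])
def test_solve_is_stable(seed):
    np.random.seed(seed)
    # Well-conditioned by construction, so every seed should pass.
    a = np.random.rand(5, 5) + 5.0 * np.eye(5)
    assert np.allclose(np.dot(a, np.linalg.inv(a)), np.eye(5), atol=1e-8)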

Bug in Variable shape property for constant valued Variables

From @palindromik:
"When an MXFusion Variable is created using an MXNet ndarray or NumPy array, the shape passed in is not validated, resulting in an incorrect shape being assigned to the Variable object. If the shape is None, the Variable's shape defaults to (1,), which could be incorrect given the value passed."

Tests break when default configuration set to float64

Simple to reproduce: set the config.cfg value for mxnet_dtype to float64 and 7 tests will fail.

It looks like an issue in the normal distribution's draw_samples code, which draws samples in a different data type than the one passed in, but that's just a rough guess.

________________________________ InferenceTests.test_meanfield_batch ________________________________

self = <meanfield_test.InferenceTests testMethod=test_meanfield_batch>

    def test_meanfield_batch(self):
        x = np.random.rand(1000, 1)
        y = np.random.rand(1000, 1)
        x_nd, y_nd = mx.nd.array(y), mx.nd.array(x)

        self.net = self.make_net()
        self.net(x_nd)

        m = self.make_model(self.net)

        from mxfusion.inference.meanfield import create_Gaussian_meanfield
        from mxfusion.inference import StochasticVariationalInference
        from mxfusion.inference.grad_based_inference import GradBasedInference
        from mxfusion.inference import BatchInferenceLoop
        observed = [m.y, m.x]
        q = create_Gaussian_meanfield(model=m, observed=observed)
        alg = StochasticVariationalInference(num_samples=3, model=m, observed=observed, posterior=q)
        infr = GradBasedInference(inference_algorithm=alg, grad_loop=BatchInferenceLoop())
        infr.initialize(y=y_nd, x=x_nd)
>       infr.run(max_iter=1, learning_rate=1e-2, y=y_nd, x=x_nd)

testing/inference/meanfield_test.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mxfusion/inference/grad_based_inference.py:77: in run
    learning_rate=learning_rate, max_iter=max_iter, verbose=verbose)
mxfusion/inference/batch_loop.py:36: in run
    loss = infr_executor(mx.nd.zeros(1, ctx=ctx), *data)
../../../.pyenv/versions/3.6.0/lib/python3.6/site-packages/mxnet/gluon/block.py:413: in __call__
    return self.forward(*args)
../../../.pyenv/versions/3.6.0/lib/python3.6/site-packages/mxnet/gluon/block.py:629: in forward
    return self.hybrid_forward(ndarray, x, *args, **params)
mxfusion/inference/inference_alg.py:52: in hybrid_forward
    constants=constants)
mxfusion/inference/variational.py:54: in compute
    constants=constants)
mxfusion/models/factor_graph.py:257: in draw_samples
    always_return_tuple=True)
mxfusion/components/distributions/distribution.py:83: in draw_samples_variables
    always_return_tuple=always_return_tuple, **args)
mxfusion/components/distributions/univariate.py:70: in draw_samples_broadcast
    **variables)
mxfusion/components/distributions/normal.py:83: in draw_samples
    F.sqrt(variance)), mean)
<string>:49: in broadcast_mul
    ???
../../../.pyenv/versions/3.6.0/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py:92: in _imperative_invoke
    ctypes.byref(out_stypes)))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

ret = -1

    def check_call(ret):
        """Check the return value of C API call.

        This function will raise an exception when an error occurs.
        Wrap every API call with this function.

        Parameters
        ----------
        ret : int
            return value from API calls.
        """
        if ret != 0:
>           raise MXNetError(py_str(_LIB.MXGetLastError()))
E           mxnet.base.MXNetError: [15:13:39] src/operator/contrib/../elemwise_op_common.h:123: Check failed: assign(&dattr, (*vec)[i]) Incompatible attr in node  at 1-th input: expected float64, got float32
E
E           Stack trace returned 10 entries:
E           [bt] (0) 0   libmxnet.so                         0x00000001037f13b4 libmxnet.so + 21428
E           [bt] (1) 1   libmxnet.so                         0x00000001037f116f libmxnet.so + 20847
E           [bt] (2) 2   libmxnet.so                         0x00000001037f0de9 libmxnet.so + 19945
E           [bt] (3) 3   libmxnet.so                         0x000000010381c80b libmxnet.so + 198667
E           [bt] (4) 4   libmxnet.so                         0x0000000103813007 libmxnet.so + 159751
E           [bt] (5) 5   libmxnet.so                         0x00000001048edc6b MXNDListFree + 432395
E           [bt] (6) 6   libmxnet.so                         0x00000001048ec519 MXNDListFree + 426425
E           [bt] (7) 7   libmxnet.so                         0x000000010485d98a MXCustomFunctionRecord + 20250
E           [bt] (8) 8   libmxnet.so                         0x000000010485eb90 MXImperativeInvokeEx + 176
E           [bt] (9) 9   _ctypes.cpython-36m-darwin.so       0x0000000101ff802f ffi_call_unix64 + 79

../../../.pyenv/versions/3.6.0/lib/python3.6/site-packages/mxnet/base.py:149: MXNetError
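
If that guess is right, the fix would be to thread the configured dtype through to the raw sampler; mx.nd.random.normal does accept a dtype argument. A sketch:

import mxnet as mx

# Sketch: draw the standard-normal noise in the configured dtype so the
# later broadcast_mul against float64 mean/variance doesn't mix types.
def sample_standard_normal(shape, dtype='float64', ctx=None):
    return mx.nd.random.normal(loc=0.0, scale=1.0, shape=shape,
                               dtype=dtype, ctx=ctx)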

num_samples not always passed correctly in SVI

In the following example you can see that when I have a GP model with latent inputs and want to do Bayesian inference (i.e. a Bayesian GP-LVM with sampling), num_samples is disregarded:

import time, numpy as np, mxnet as mx
from mxfusion import Model, Variable
from mxfusion.modules.gp_modules import SparseGPRegression
from mxfusion.components.distributions.gp.kernels import RBF, White
from mxfusion.inference import GradBasedInference, create_Gaussian_meanfield, StochasticVariationalInference
from mxfusion.components.variables.var_trans import PositiveTransformation

X = np.random.uniform(-3.,3.,(20,1))
Y = np.sin(X) + np.random.randn(20,1)*0.05

m = Model()
m.x = Variable(shape=(20, 1))
m.noise_var = Variable(transformation=PositiveTransformation())
kernel = RBF(input_dim=1, dtype='float64') + White(input_dim=1, dtype='float64')
m.y = SparseGPRegression.define_variable(X=m.x, kernel=kernel, shape=(20, 1), 
                                   noise_var=m.noise_var, dtype='float64')

q = create_Gaussian_meanfield(model=m, observed=[m.x, m.y], dtype='float64')

infr = GradBasedInference(inference_algorithm=StochasticVariationalInference(model=m, posterior=q, num_samples=1, observed=[m.y]), dtype='float64')
start = time.time()
infr.run(x=mx.nd.array(X, dtype='float64'), y=mx.nd.array(Y, dtype='float64'), learning_rate=0.05, verbose=False, max_iter=100)
end = time.time()
print("\nTime with 1 sample: " + str(end - start))

infr = GradBasedInference(inference_algorithm=StochasticVariationalInference(model=m, posterior=q, num_samples=100000, observed=[m.y]), dtype='float64')
start = time.time()
infr.run(x=mx.nd.array(X, dtype='float64'), y=mx.nd.array(Y, dtype='float64'), learning_rate=0.05, verbose=False, max_iter=100)
end = time.time()
print("\nTime with 10000 samples: " + str(end - start))

Output:

Time with 1 sample: 1.2569129467010498
Time with 10000 samples: 1.0852479934692383

(I have run it multiple times)

Thanks,
Andreas
