yaringal / DropoutUncertaintyExps
Experiments used in "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning"
License: Other
Hello Yarin,
I tried running your code, but I found a small discrepancy in the results. For instance, I took the Boston Housing dataset and ran the same code with the same hyperparameters (tau = 0.159708, dropout = 0.05, batch size = 128, length scale = 1e-02). My results are slightly worse than those reported in the paper. Could you please help me resolve this issue? Thanks.
DropoutUncertaintyExps/net/net.py
Line 9 in 6eb4497
This import of logsumexp (from scipy.misc, which newer SciPy releases have removed) should be changed to:
from scipy.special import logsumexp
reference: https://github.com/cvxgrp/cvxpy/issues/640
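If you need to keep compatibility with older SciPy versions as well, a small shim (a sketch, not part of the repo) handles both locations:

# scipy.special.logsumexp is the current location; scipy.misc.logsumexp
# only exists in older SciPy releases.
try:
    from scipy.special import logsumexp
except ImportError:
    from scipy.misc import logsumexp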
Any suggestions on how to implement the stochastic predictor with a dropout rate different from the one used in training? I have tried modifying the layer attribute (.rate), but this does not change the output of the stochastic predictor function (built on the Keras backend function).
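One possible workaround (a sketch under assumptions, not from the repo): since mutating layer.rate after the graph is built has no effect on the compiled backend function, rebuild the network with the new rate and copy the trained weights over. The architecture below is a hypothetical stand-in for the repo's single-hidden-layer net:

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout

def rebuild_with_dropout(trained_model, new_rate, n_hidden, input_dim):
    # Rebuild the same architecture with the new dropout rate.
    new_model = Sequential([
        Dropout(new_rate, input_shape=(input_dim,)),
        Dense(n_hidden, activation='relu'),
        Dropout(new_rate),
        Dense(1),
    ])
    # Copy the trained weights into the fresh graph (Dropout has no weights,
    # so the weight lists line up).
    new_model.set_weights(trained_model.get_weights())
    # learning_phase() = 1 keeps dropout active at prediction time.
    return K.function([new_model.layers[0].input, K.learning_phase()],
                      [new_model.layers[-1].output])

# Usage: f = rebuild_with_dropout(model, 0.1, 50, X_train.shape[1])
#        y_sample = f([X_test, 1])[0]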
I don't quite understand the calculation of the test log-likelihood:
# We compute the test log-likelihood
ll = (logsumexp(-0.5 * self.tau * (y_test[None] - Yt_hat)**2., 0) - np.log(T)
      - 0.5*np.log(2*np.pi) + 0.5*np.log(self.tau))
test_ll = np.mean(ll)
Why is logsumexp used here? And why are the predictive variances not used?
I tried to calculate the test log-likelihood like this:
from scipy.stats import norm
# MC_pred is the MC predictive mean, i.e. np.mean(Yt_hat, axis=0)
pred_var = np.var(Yt_hat, axis=0) + 1 / self.tau
ll = []
for i in range(y_test.shape[0]):
    ll.append(norm.logpdf(y_test[i][0], MC_pred[i][0], np.sqrt(pred_var[i][0])))
new_test_ll = np.mean(ll)
This usually produces a slightly worse log-likelihood. For example, on the concrete dataset with split id set to 19, the log-likelihood from the original code is -3.17, while the log-likelihood from the code above is -3.25.
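For reference, the two computations evaluate different predictive distributions. The MC-dropout predictive density is a uniform mixture of T Gaussians, log p(y) = log[(1/T) * sum_t N(y; y_hat_t, 1/tau)], which expands to exactly the logsumexp expression in the repo. The norm.logpdf version instead moment-matches that mixture with a single Gaussian, so it differs whenever the MC samples are non-Gaussian, and by Gibbs' inequality it can only score lower in expectation when the test points actually follow the mixture. A self-contained sketch with stand-in data (none of these names are the repo's variables):

import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.RandomState(0)
T, N, tau = 1000, 200, 25.0

# Skewed MC samples, so the mixture is visibly non-Gaussian.
Yt_hat = rng.exponential(1.0, size=(T, N, 1))
# Draw each test point from the mixture: pick a component, add tau-noise.
idx = rng.randint(T, size=N)
y_test = Yt_hat[idx, np.arange(N)] + rng.randn(N, 1) / np.sqrt(tau)

# Exact mixture-of-Gaussians log-likelihood (the repo's formula).
ll_mix = (logsumexp(-0.5 * tau * (y_test[None] - Yt_hat)**2., 0)
          - np.log(T) - 0.5*np.log(2*np.pi) + 0.5*np.log(tau))

# Moment-matched single-Gaussian approximation.
pred_var = Yt_hat.var(0) + 1. / tau
ll_gauss = norm.logpdf(y_test, Yt_hat.mean(0), np.sqrt(pred_var))

print(ll_mix.mean(), ll_gauss.mean())  # the mixture scores higher on average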
When normalizing the output, shouldn't the pre-defined model precision also be normalized?
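For reference, my understanding (an assumption, not an official answer): if the targets are standardized as z = (y - mean_y) / std_y and tau is the noise precision on the normalized scale, then the precision on the original scale is tau / std_y**2, and the change-of-variables formula adds a -log(std_y) term to the log-density:

import numpy as np

def unnormalize_tau(tau_z, std_y):
    # The noise std scales by std_y, so the precision scales by 1/std_y**2.
    return tau_z / std_y**2

def unnormalize_ll(ll_z, std_y):
    # log p_Y(y) = log p_Z(z) - log(std_y) for z = (y - mean_y) / std_y.
    return ll_z - np.log(std_y)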
Hello Yarin,
Is there any way to interpret the obtained predictive uncertainty (variance)? After computing the predictive variance, i.e. the sample variance of T stochastic forward passes, is there a way to calculate a threshold or cutoff value, so that if the predictive variance is above that value we can say the model is uncertain, and below it we can say it is certain about its prediction?
Uncertain if (predictive variance >= threshold) || Certain if (predictive variance < threshold)
Something like this!
Thanks!
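One common heuristic (my suggestion, not something from the paper: there is no universal cutoff, so the threshold is usually calibrated on held-out data, for example by flagging the top few percent of validation-set variances):

import numpy as np

rng = np.random.RandomState(0)
# Stand-in for predictive variances (sample variance of T passes + 1/tau)
# computed on a held-out validation set.
val_variances = rng.gamma(2.0, 0.1, size=500)

# Calibrate: flag the top 5% most variable predictions as "uncertain".
threshold = np.quantile(val_variances, 0.95)
is_uncertain = val_variances >= threshold
print(threshold, is_uncertain.mean())  # ~5% flagged by construction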
Hello Yarin,
It looks like the description of the outputs in your predict method of the net class does not match the actual output.
DropoutUncertaintyExps/net/net.py
Lines 95 to 108 in 6eb4497
According to your publication, the predictive variance should be the sample variance of the T stochastic forward passes plus the inverse model precision, 1/tau. (In your case, because the output y is a scalar, the variances are also scalars.) But it looks like you did not add the inverse of tau when calculating the predictive "rmse". In addition, what is the "estimate variance with additive noise"?
Thank you very much.
Best,
Lei
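For reference, the predictive moments described in the paper can be computed from the T stochastic forward passes like this (a sketch with placeholder names, not the repo's predict method):

import numpy as np

def predictive_moments(Yt_hat, tau):
    # Yt_hat: array of shape (T, N, 1), one stochastic forward pass per row.
    mean = Yt_hat.mean(axis=0)              # MC predictive mean
    var = Yt_hat.var(axis=0) + 1.0 / tau    # sample variance + noise 1/tau
    return mean, var

# As I understand it, the 1/tau term belongs in the predictive variance
# (and the log-likelihood), while the rmse only compares the MC mean to
# the targets: rmse = np.sqrt(np.mean((y_test - mean)**2)).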