Comments (5)

DominiqueMakowski commented on July 20, 2024

@mattansb out of curiosity, I was wondering what you think of the p-MAP, which Jeff Mills calls "the Bayesian p-value" in his talk? It seems to offer, in theory, the "best of both worlds": it can give evidence for the null (which I remember you mentioned as the main benefit of BFs), but it also does not suffer from all the limitations of BFs. Moreover, it is straightforward to compute and understand, and (at least that's what Jeff suggests) it seems mathematically grounded...


mattansb commented on July 20, 2024

I have some thoughts:

First, I'm not sure why this is dubbed a "p-value" - it is a ratio of posterior densities (because the denominator is the density at the MAP, it is by definition <= 1, but it is still not a probability), making it more like a BF than a p-value.
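For intuition, here is a minimal sketch of that ratio on a toy posterior (the draws and numbers are made up for illustration; only base R is used):

posterior <- rnorm(4000, mean = 0.4, sd = 0.15)  # toy posterior draws

d <- density(posterior)                 # kernel density estimate of the posterior
d_null <- approx(d$x, d$y, xout = 0)$y  # posterior density at the null (0)
d_map <- max(d$y)                       # posterior density at the MAP (the mode)

d_null / d_map  # the "p-MAP": <= 1 by construction, but not a probability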

Second, I don't see how it lends itself to supporting the null any more than a p-value does - at best, when the MAP is the null, the p-MAP is 1. This is also true for p-values - when the estimate is the null, the p-value is 1. Since the latter cannot be used to support the null, I don't see how the former can. (I guess this is why it is called the Bayesian p-value).
Also, because it answers the question "how much more probable is the MAP than the null", it is by definition looking for evidence for anything (the "best case scenario" via the MAP) over the null, and it cannot provide evidence for the null.

Finally, it does not really deal with the problem of choosing a prior - it only deals with the problem of choosing a weak/non-informative prior. But when you have strong priors, you get a reversed Jeffreys-Lindley-Bartlett paradox:

library(bayestestR)
library(rstanarm)

# wrapper that suppresses rstanarm's verbose sampling output
stan_glm_shhh <- function(...){
  capture.output(fit <- stan_glm(...))
  fit
}


X <- rnorm(100)
Y <- X + rnorm(100, sd = 0.1)

cor(X,Y) # data points to a strong effect
#> [1] 0.9953305

fit <- stan_glm_shhh(Y ~ X, 
                     prior = normal(0,0.001)) # strong priors for null effect

p_map(fit) # points to no effect!
#> # MAP-based p-value
#> 
#>   (Intercept): 0.978
#>   X          : 1.000



X <- rnorm(10000)
Y <- rnorm(10000)

cor(X,Y) # data points to no effect
#> [1] -0.0205174

fit <- stan_glm_shhh(Y ~ X, 
                     prior = normal(1,0.001)) # strong priors against null effect

p_map(fit) # points to a true effect!
#> # MAP-based p-value
#> 
#>   (Intercept): 0.713
#>   X          : 0.000

Created on 2019-06-19 by the reprex package (v0.3.0)


DominiqueMakowski commented on July 20, 2024

First, I'm not sure why this is dubbed a "p-value" - it is a ratio of posterior densities (because the denominator is the density at the MAP, it is by definition <= 1, but it is still not a probability), making it more like a BF than a p-value.

I agree, a "Bayesian p-value" refers IMO more to the pd than to this ratio.

Also, because it answers the question "how much more probable is the MAP than the null", it is by definition looking for evidence for anything (the "best case scenario" via the MAP) over the null, and it cannot provide evidence for the null.

Good point.

But when you have strong priors, you get a reversed Jeffreys-Lindley-Bartlett paradox:

Interesting interesting.

As Justin Bieber recently challenged Tom Cruise to an MMA fight in an octagon, I am thinking about organizing a tournament with Wagenmakers, Mills, Kruschke, the Stan people, you and Daniel. I will be taking the bets 💰 💰

from blog.

mattansb commented on July 20, 2024

Just like in the Bieber vs. Cruise case, I'm sure it's obvious who would be the ultimate MBA (Mixed Bayesian Arts) champion 😜


BTW, the BF performs here as expected:
For the first model, the priors of both the point-null model and the "alternative" are so similar that BF ≈ 1:

#> Computation of Bayes factors: estimating marginal likelihood, please wait...
#> Bayes factor analysis
#> ---------------------             
#> [2] X    1.01
#> 
#> Against denominator:
#> 		 [1] (Intercept only)   
#> ---
#> Bayes factor type:  marginal likelihoods (bridgesampling)

And for the second model, the priors of the point-null model are wayyyy more appropriate than those of the alternative model, so BF <<< 1:

#> Computation of Bayes factors: estimating marginal likelihood, please wait...
#> Bayes factor analysis
#> ---------------------                 
#> [2] X    6.46e-14
#> 
#> Against denominator:
#> 		 [1] (Intercept only)   
#> ---
#> Bayes factor type:  marginal likelihoods (bridgesampling)

(Note that I used the compare-models function and not the Savage-Dickey function, because for the second model the prior and posterior samples were both so close together and so far from 0 that the estimation failed (NaN); for the first model the Savage-Dickey BF was ~1.)
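For completeness, a sketch of how such a comparison could be set up with bayestestR's bayesfactor_models() (this is my reconstruction, not the exact call used above; note that rstanarm models need a diagnostic_file for the bridgesampling-based marginal likelihoods):

# reconstruction, using the second model above; file names are arbitrary
fit1 <- stan_glm_shhh(Y ~ X, prior = normal(1, 0.001),
                      diagnostic_file = file.path(tempdir(), "df1.csv"))
fit0 <- stan_glm_shhh(Y ~ 1,
                      diagnostic_file = file.path(tempdir(), "df0.csv"))

bayesfactor_models(fit1, denominator = fit0)  # BF of fit1 vs. the intercept-only model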


mattansb commented on July 20, 2024

BTW, the reversed Jeffreys-Lindley-Bartlett paradox also holds for the pd, ROPE, CI, median... and any other measure that is based only on the posterior.

To summarize:

  1. If you don't have any priors (weak / non-informative), it is silly to use BFs, as their whole point is to compare two sets of priors. In such cases, the posterior is driven almost 100% by the observed data, and thus it makes sense to explore the posterior (with pd, ROPE, or p-MAP, just to see that what you observe isn't 0).
  2. If you have super strong priors, inferring from the posterior is silly, as the posterior is driven almost 100% by your priors. But it would make sense to see how your strong priors hold up against another set of priors.
  3. If you have some (even minimally) informed priors, you get to do both: the posteriors are (probably) mostly data-driven, so it's a good idea to look at them (for estimation, and maybe also to see if they differ from 0), and you can also compare different models with different priors to see which is better (see the sketch after this list).
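
A toy sketch of that third case (the prior scale here is made up for illustration):

fit <- stan_glm_shhh(Y ~ X, prior = normal(0, 1))  # mildly informative prior

p_direction(fit)  # posterior-based: is the effect consistently non-zero?
rope(fit)         # posterior-based: is it practically equivalent to zero?
# ...and, with a diagnostic_file set, bayesfactor_models() could additionally
# compare this prior against a competing one.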

FIN

