by Benny Wenz
Last Updated February 11, 2019 09:19 AM

I have been doing some linear model analyses involving Bayes factors lately and I have two probably very basic questions:

1) As far as I understand, a Bayes factor is simply a likelihood ratio, i.e.

p(data|M1)/p(data|M2).

But that's not really Bayesian inference, is it? After all, the whole point of Bayesian inference is to convert p(data|model) into p(model|data). Sure, people argue that given equal prior probabilities for both models, the ratio above is equivalent to p(M1|data)/p(M2|data), but it still seems to me that the Bayes factor approach misses the whole point of Bayesian inference. Especially since the really cool thing about Bayesian modeling is that I get both prior and posterior *distributions* for each model *coefficient*, I feel like Bayes factor model comparison falls short of the power of Bayesian models...?
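For what it's worth, here is the prior-odds bookkeeping I have in mind, as a quick sketch (all numbers invented by me for illustration):

```python
# Sketch: posterior odds = Bayes factor * prior odds (hypothetical numbers).
bf_12 = 3.0               # assumed value of p(data|M1) / p(data|M2)
prior_odds = 0.25         # p(M1)/p(M2) = 0.2/0.8, i.e. M2 favored a priori
post_odds = bf_12 * prior_odds        # posterior odds p(M1|data)/p(M2|data)
p_m1 = post_odds / (1 + post_odds)    # posterior probability of M1
print(round(p_m1, 3))                 # prints 0.429
```

So the Bayes factor only does the "Bayesian" conversion once you multiply in prior odds; with equal priors the two ratios coincide.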

2) How is it possible, in the first place, that Bayes factors, being based on (unpenalized) likelihood, can favor the null model? Shouldn't the likelihood always increase with more complex models, i.e. with the non-null model?
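For concreteness, here is a toy numeric version of the comparison I mean (a single-observation example I made up, not from any reference): one data point y ~ N(mu, 1), where M0 fixes mu = 0 and M1 puts a N(0, tau^2) prior on mu. Marginalizing mu out of M1 gives p(y|M1) = N(y; 0, 1 + tau^2), and for data near 0 the Bayes factor can come out above 1 in favor of the null:

```python
import math

def norm_pdf(x, sd):
    """Density of a N(0, sd^2) distribution evaluated at x."""
    return math.exp(-x * x / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

y, tau = 0.2, 2.0                            # invented numbers: one observation, prior sd
p_y_m0 = norm_pdf(y, 1.0)                    # M0: mu = 0 fixed, so y ~ N(0, 1)
p_y_m1 = norm_pdf(y, math.sqrt(1 + tau**2))  # M1: mu ~ N(0, tau^2) integrated out
bf_01 = p_y_m0 / p_y_m1                      # Bayes factor in favor of the null
print(round(bf_01, 2))                       # prints 2.2: the null wins here
```

So the "likelihood" in the Bayes factor is the marginal likelihood, averaged over the prior rather than maximized, which is apparently where the penalty comes from, but I would appreciate an explanation of the intuition.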

Hope some of you can shed a little light on my mind. Cheers, Benny
