I have been doing some linear model analyses involving Bayes factors lately and I have two probably very basic questions:
1) As far as I understand, a Bayes factor is simply a likelihood ratio, i.e.

BF = p(data|M1) / p(data|M2)
But that's not really Bayesian inference, is it? Isn't the whole point of Bayesian inference to convert p(data|model) into p(model|data)? Sure, people argue that given equal prior probabilities for both models, the ratio above equals p(M1|data)/p(M2|data), but it still seems to me like the Bayes factor approach is missing the whole point of Bayesian inference. Especially since the really cool thing about Bayesian modeling is that I can have both prior and posterior distributions for each model coefficient, I feel like Bayes factor model comparison falls short of the power of Bayesian models...?
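To make concrete what I mean by "equivalent given equal priors", here is a toy calculation (all numbers made up by me) of how a Bayes factor would be turned into a posterior model probability once prior model probabilities are supplied:

```python
# Toy numbers, not from any real analysis: turning a Bayes factor
# into a posterior model probability requires prior model probabilities.
bf_12 = 6.0        # hypothetical Bayes factor favoring M1 over M2
prior_m1 = 0.5     # equal prior probabilities, as in the argument above
prior_m2 = 0.5

# posterior odds = Bayes factor * prior odds
posterior_odds = bf_12 * (prior_m1 / prior_m2)

# convert odds to a probability: p(M1|data) = odds / (1 + odds)
p_m1_given_data = posterior_odds / (1 + posterior_odds)
print(p_m1_given_data)  # 6/7, about 0.857
```

So with equal priors the posterior odds are numerically just the Bayes factor, which is exactly why this feels like a likelihood-ratio exercise rather than full Bayesian inference to me.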
2) How is it possible, in the first place, that Bayes factors, being based on (unpenalized) likelihood, can favor the null model? Shouldn't the likelihood always increase with more complex models, i.e. with the non-null model?
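To illustrate question 2, here is a toy example I put together (all numbers and the conjugate-normal setup are my own assumptions, not from my actual analyses): normal data with known variance, M0 fixing mu = 0 versus M1 with a normal prior mu ~ N(0, tau2). Because the marginal likelihood under M1 integrates over the prior, its predictive variance is inflated, and for a sample mean near zero the Bayes factor ends up favoring the null:

```python
import math

def normal_pdf(x, mean, var):
    # density of N(mean, var) at x
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, sigma2 = 50, 1.0   # hypothetical sample size and known error variance
xbar = 0.05           # hypothetical sample mean, close to zero
tau2 = 1.0            # hypothetical prior variance on mu under M1

# With known sigma2, the Bayes factor depends on the data only through xbar:
# under M0, xbar ~ N(0, sigma2/n);
# under M1, integrating mu out gives xbar ~ N(0, tau2 + sigma2/n).
m0 = normal_pdf(xbar, 0.0, sigma2 / n)
m1 = normal_pdf(xbar, 0.0, tau2 + sigma2 / n)

bf01 = m0 / m1
print(bf01)  # comes out around 6.7 here, i.e. the data favor the null
```

So even though the maximized likelihood under M1 would indeed be at least as high as under M0, the marginal likelihood is not maximized but averaged over the prior, and that averaging is what lets the null win. Is that the right way to think about it?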
Hope some of you can shed a little light on my mind. Cheers, Benny