Title: Computing the Marginal Likelihood
1. Computing the Marginal Likelihood
David Madigan
2. Bayesian Criterion
- Typically impossible to compute analytically
- All sorts of Monte Carlo approximations
3. Laplace Method for p(D|M)
Write p(D|M) = ∫ p(D|θ) p(θ) dθ = ∫ exp{n g(θ)} dθ, where g(θ) = (1/n) log[p(D|θ) p(θ)] (i.e., the log of the integrand divided by n).
Laplace's Method: expand g about its mode θ̃ (the posterior mode) to get
p(D|M) ≈ (2π/n)^(d/2) |H|^(-1/2) p(D|θ̃) p(θ̃)
where d = dim(θ) and H is the negative Hessian of g at θ̃.
4. Laplace cont.
- Tierney and Kadane (1986, JASA) show the approximation is O(n^(-1))
- Using the MLE instead of the posterior mode is also O(n^(-1))
- Using the expected information matrix in place of the observed Hessian is O(n^(-1/2)), but convenient since it is often computed by standard software
- Raftery (1993) suggested approximating the posterior mode by a single Newton step starting at the MLE
- Note the prior is explicit in these approximations
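The Laplace approximation can be checked on a model whose marginal likelihood is known in closed form. A minimal sketch (our own example, not from the slides): a Bernoulli model with a flat Beta(1,1) prior, where log p(D|M) = log B(s+1, n-s+1) exactly.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

# Illustrative example (our choice): Laplace approximation to p(D|M)
# for a Bernoulli model with a flat Beta(1,1) prior.

rng = np.random.default_rng(0)
D = rng.binomial(1, 0.3, size=200)           # n coin flips
n, s = len(D), D.sum()

def neg_log_integrand(theta):
    # -log[p(D|theta) p(theta)]; the flat prior contributes 0
    t = theta[0]
    return -(s * np.log(t) + (n - s) * np.log(1 - t))

res = minimize(neg_log_integrand, x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
mode = res.x[0]                              # posterior mode

# numerical negative Hessian of the log integrand at the mode (d = 1);
# note h already absorbs the factor n from the (2*pi/n)^(d/2) form
eps = 1e-5
h = (neg_log_integrand([mode + eps]) - 2 * neg_log_integrand([mode])
     + neg_log_integrand([mode - eps])) / eps ** 2

# Laplace: log p(D|M) ~ (d/2) log(2*pi) - (1/2) log h + log integrand at mode
log_laplace = 0.5 * np.log(2 * np.pi) - 0.5 * np.log(h) - neg_log_integrand([mode])

log_exact = betaln(s + 1, n - s + 1)
print(log_laplace, log_exact)                # should be close for n = 200
```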
5Monte Carlo Estimates of p(DM)
Draw iid ?1,, ?m from p(?)
In practice has large variance
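A minimal sketch of the naive prior-sampling estimator (our own example, not from the slides): a Bernoulli model with a flat Beta(1,1) prior, so the exact answer is available for comparison.

```python
import numpy as np
from scipy.special import betaln, logsumexp

# Illustrative example (our choice): naive Monte Carlo estimate of
# p(D|M), averaging the likelihood over iid draws from the prior.

rng = np.random.default_rng(1)
D = rng.binomial(1, 0.3, size=50)
n, s = len(D), D.sum()

m = 100_000
theta = rng.uniform(size=m)                 # iid draws from the Beta(1,1) prior
log_lik = s * np.log(theta) + (n - s) * np.log(1 - theta)

# p_hat = (1/m) sum_i p(D|theta_i), computed stably in log space
log_phat = logsumexp(log_lik) - np.log(m)

log_exact = betaln(s + 1, n - s + 1)        # closed form for this model
print(log_phat, log_exact)
```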
6. Monte Carlo Estimates of p(D|M) (cont.)
Draw iid θ^(1), …, θ^(m) from the posterior p(θ|D)
Importance Sampling: more generally, draw θ^(i) from an importance density g(θ) and use
p̂(D|M) = Σ_i w_i p(D|θ^(i)) / Σ_i w_i, where w_i = p(θ^(i)) / g(θ^(i))
7. Monte Carlo Estimates of p(D|M) (cont.)
Newton and Raftery's Harmonic Mean Estimator: with θ^(1), …, θ^(m) drawn from the posterior p(θ|D),
p̂(D|M) = [ (1/m) Σ_i 1/p(D|θ^(i)) ]^(-1)
Unstable in practice and needs modification
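A minimal sketch of the harmonic mean estimator (our own example, not from the slides), again on a Bernoulli model with a flat Beta(1,1) prior so the exact answer is known. Even with exact posterior draws and a large m, rare draws with tiny likelihood dominate the average, so the estimate can be off by several log units; this is the instability the slide refers to.

```python
import numpy as np
from scipy import stats
from scipy.special import betaln, logsumexp

# Illustrative example (our choice): the Newton-Raftery harmonic mean
# estimator from exact posterior draws. Note 1/p(D|theta) is heavy-tailed,
# so the average converges very slowly despite being unbiased for 1/p(D).

rng = np.random.default_rng(3)
D = rng.binomial(1, 0.3, size=50)
n, s = len(D), D.sum()

theta = stats.beta(s + 1, n - s + 1).rvs(size=50_000, random_state=rng)
log_lik = s * np.log(theta) + (n - s) * np.log(1 - theta)

# p_hat = [ (1/m) sum_i 1/p(D|theta_i) ]^(-1), computed in log space
log_phat = np.log(len(theta)) - logsumexp(-log_lik)

log_exact = betaln(s + 1, n - s + 1)
print(log_phat, log_exact)   # typically in the right ballpark but unreliable
```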
8. p(D|M) from Gibbs Sampler Output
First note the following identity (for any θ):
p(D) = p(D|θ) p(θ) / p(θ|D)
p(D|θ) and p(θ) are usually easy to evaluate. What about p(θ|D)?
Suppose we decompose θ into (θ1, θ2) such that p(θ1|D, θ2) and p(θ2|D, θ1) are available in closed form.
Chib (1995)
9. p(D|M) from Gibbs Sampler Output
The Gibbs sampler gives (dependent) draws from p(θ1, θ2|D), and hence marginally from p(θ2|D). Estimate the posterior ordinate at a point θ1* by Rao-Blackwellization:
p̂(θ1*|D) = (1/m) Σ_i p(θ1*|D, θ2^(i))
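A sketch of Chib's (1995) method with two Gibbs blocks, on a model of our own choosing (not from the slides): y_i ~ N(mu, sig2) with mu ~ N(0, 100) and sig2 ~ Inv-Gamma(2, 2). Both full conditionals are closed form, so log p(y) = log p(y|mu*, sig2*) + log p(mu*, sig2*) - log p(mu*|sig2*, y) - log p(sig2*|y), with the last term Rao-Blackwellized over Gibbs draws.

```python
import numpy as np
from scipy import stats

# Illustrative example (our choice): Chib (1995) marginal likelihood
# from Gibbs output for a normal model with semi-conjugate priors.

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=40)
n, ybar = len(y), y.mean()
m0, v0 = 0.0, 100.0          # prior mean/variance for mu
a0, b0 = 2.0, 2.0            # Inv-Gamma shape/scale for sig2

def mu_cond(sig2):
    # mu | sig2, y ~ N(mn, vn), in closed form
    vn = 1.0 / (1.0 / v0 + n / sig2)
    mn = vn * (m0 / v0 + n * ybar / sig2)
    return mn, vn

# --- Gibbs sampler: alternate the two full conditionals ---
mus, mu, sig2 = [], ybar, 1.0
an = a0 + n / 2
for t in range(6000):
    mn, vn = mu_cond(sig2)
    mu = rng.normal(mn, np.sqrt(vn))
    bn = b0 + 0.5 * np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(an, 1.0 / bn)   # Inv-Gamma(an, bn) draw
    if t >= 1000:                          # discard burn-in
        mus.append(mu)
mus = np.array(mus)

# --- Chib's estimate at a high-density point (mu*, sig2*) ---
mu_s, sig2_s = ybar, y.var()

log_lik = stats.norm.logpdf(y, mu_s, np.sqrt(sig2_s)).sum()
log_prior = (stats.norm.logpdf(mu_s, m0, np.sqrt(v0))
             + stats.invgamma.logpdf(sig2_s, a0, scale=b0))

# p(sig2*|y) ~ (1/m) sum_i p(sig2*|mu_i, y): Rao-Blackwellization
bns = b0 + 0.5 * np.array([np.sum((y - m) ** 2) for m in mus])
log_post_sig2 = np.log(np.mean(stats.invgamma.pdf(sig2_s, an, scale=bns)))

# p(mu*|sig2*, y) is available in closed form
mn, vn = mu_cond(sig2_s)
log_post_mu = stats.norm.logpdf(mu_s, mn, np.sqrt(vn))

log_ml_chib = log_lik + log_prior - log_post_mu - log_post_sig2
print(log_ml_chib)
```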
10. What about 3 parameter blocks?
Decompose p(θ1*, θ2*, θ3*|D) = p(θ1*|D) p(θ2*|D, θ1*) p(θ3*|D, θ1*, θ2*)
- p(θ1*|D): OK (Rao-Blackwellize over the main Gibbs run)
- p(θ3*|D, θ1*, θ2*): OK (available in closed form)
- p(θ2*|D, θ1*): ? (requires draws from p(θ3|D, θ1*))
To get these draws, continue the Gibbs sampler, sampling in turn from p(θ2|D, θ1*, θ3) and p(θ3|D, θ1*, θ2) — a reduced run with θ1 fixed at θ1*.
11. p(D|M) from Metropolis Output
Chib and Jeliazkov (2001, JASA)
12. p(D|M) from Metropolis Output
Estimate the posterior ordinate at θ* by
p̂(θ*|D) = E1[ α(θ, θ*) q(θ, θ*) ] / E2[ α(θ*, θ) ]
E1 is with respect to the posterior p(θ|D)
E2 is with respect to the proposal density q(θ*, θ)
where α(θ, θ') is the Metropolis-Hastings acceptance probability; then p(D|M) = p(D|θ*) p(θ*) / p̂(θ*|D).
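A minimal sketch of the Chib-Jeliazkov (2001) estimator (our own example, not from the slides): a Bernoulli model with a flat Beta(1,1) prior, sampled by random-walk Metropolis, so the exact marginal likelihood is known.

```python
import numpy as np
from scipy import stats
from scipy.special import betaln

# Illustrative example (our choice): Chib-Jeliazkov marginal likelihood
# from random-walk Metropolis output.
# p(theta*|D) = E1[alpha(theta,theta*) q(theta,theta*)] / E2[alpha(theta*,theta)]

rng = np.random.default_rng(5)
D = rng.binomial(1, 0.3, size=50)
n, s = len(D), D.sum()
tau = 0.1                                # random-walk proposal sd (our choice)

def log_post(t):
    # unnormalized log posterior (flat prior on (0,1))
    return s * np.log(t) + (n - s) * np.log(1 - t) if 0 < t < 1 else -np.inf

# --- random-walk Metropolis ---
chain, t = [], 0.5
for i in range(30_000):
    prop = t + tau * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(t):
        t = prop
    if i >= 5_000:                       # discard burn-in
        chain.append(t)
chain = np.array(chain)

t_star = s / n                           # evaluate at a high-density point
lp_star = log_post(t_star)

# numerator: average alpha(theta, theta*) q(theta, theta*) over posterior draws
lp_chain = s * np.log(chain) + (n - s) * np.log(1 - chain)
a_num = np.minimum(1.0, np.exp(lp_star - lp_chain))
num = np.mean(a_num * stats.norm.pdf(t_star, loc=chain, scale=tau))

# denominator: average alpha(theta*, theta) over draws from q(theta*, .)
props = t_star + tau * rng.normal(size=20_000)
lp_props = np.full(props.shape, -np.inf)
ok = (props > 0) & (props < 1)
lp_props[ok] = s * np.log(props[ok]) + (n - s) * np.log(1 - props[ok])
den = np.mean(np.minimum(1.0, np.exp(lp_props - lp_star)))

# log p(D) = log lik(theta*) + log prior(theta*) - log p_hat(theta*|D)
log_ml = lp_star - np.log(num / den)
log_exact = betaln(s + 1, n - s + 1)
print(log_ml, log_exact)
```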
13. Bayesian Information Criterion
BIC = SL + (d/2) log n (SL is the negative log-likelihood at the MLE; d is the number of parameters)
- BIC is an O(1) approximation to -log p(D|M)
- Circumvents an explicit prior
- The approximation is O(n^(-1/2)) for a class of priors called unit information priors
- No free lunch (Weakliem (1998) example)
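A minimal sketch of BIC as an approximation to -log p(D|M) (our own example, not from the slides): the Bernoulli model with a flat Beta(1,1) prior, d = 1 parameter, where the exact marginal likelihood is known.

```python
import numpy as np
from scipy.special import betaln

# Illustrative example (our choice): BIC vs. the exact -log p(D|M)
# for a Bernoulli model with a flat Beta(1,1) prior (d = 1).

rng = np.random.default_rng(6)
D = rng.binomial(1, 0.3, size=500)
n, s = len(D), D.sum()

theta_hat = s / n                                   # MLE
SL = -(s * np.log(theta_hat) + (n - s) * np.log(1 - theta_hat))
bic = SL + 0.5 * 1 * np.log(n)                      # SL + (d/2) log n

neg_log_ml = -betaln(s + 1, n - s + 1)              # exact -log p(D|M)
print(bic, neg_log_ml)                              # the gap stays O(1) in n
```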
16. Score Functions on Hold-Out Data
- Instead of penalizing complexity, look at performance on hold-out data
- Note that even using hold-out data, performance results can be optimistically biased
- Pseudo-hold-out score: the cross-validated log score Σ_i log p(y_i | y_(-i))
Recall that p(y_i | y_(-i)) = p(y) / p(y_(-i)), so each term is a ratio of marginal likelihoods.
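A minimal sketch of the leave-one-out pseudo-hold-out score (our own example, not from the slides): the Bernoulli model with a flat Beta(1,1) prior, where each term log p(y_i | y_(-i)) is a difference of two closed-form log marginal likelihoods.

```python
import numpy as np
from scipy.special import betaln

# Illustrative example (our choice): pseudo-hold-out log score computed
# as ratios of marginal likelihoods for a Bernoulli/Beta(1,1) model.

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.3, size=50)
n, s = len(y), y.sum()

def log_ml(n_obs, successes):
    # closed-form log p(y) under the flat Beta(1,1) prior
    return betaln(successes + 1, n_obs - successes + 1)

# each term: log p(y_i | y_{-i}) = log p(y) - log p(y_{-i})
score = sum(log_ml(n, s) - log_ml(n - 1, s - yi) for yi in y)
print(score)
```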