Say we have $K$ models $\mathcal{M}_1, \dots, \mathcal{M}_K$, each with parameters $\theta_1, \dots, \theta_K$.

e.g. model 1 is exponential, model 2 is Gaussian, model 3 is a mixture of Gaussians (MOG), etc.

Choosing the maximum-likelihood model is a simple recipe for overfitting: models with more parameters will tend to fit the training data more closely, so the issue of model complexity has to be addressed. One way to do so is to use held-out test data to choose a better model/parameters.
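A minimal sketch of held-out selection, assuming two of the candidate models above (exponential and Gaussian) and synthetic data: fit each model's parameters by maximum likelihood on a training split, then compare average log-likelihood on a test split.

```python
# Held-out model selection: fit by MLE on train, score on test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)  # synthetic data (truly exponential)
train, test = data[:400], data[400:]

# Model 1: exponential. MLE of the scale parameter is the sample mean.
exp_scale = train.mean()
exp_test_ll = stats.expon(scale=exp_scale).logpdf(test).mean()

# Model 2: Gaussian. MLEs are the sample mean and standard deviation.
mu, sigma = train.mean(), train.std()
gauss_test_ll = stats.norm(mu, sigma).logpdf(test).mean()

print(f"exponential test log-lik: {exp_test_ll:.3f}")
print(f"gaussian    test log-lik: {gauss_test_ll:.3f}")
```

Because the data really are exponential, the exponential model should win on held-out log-likelihood even though both models fit the training set.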

Rather than just choose a single most-likely model, we want to compute the posterior over models,

$$p(\mathcal{M}_k \mid \mathcal{D}) \propto p(\mathcal{D} \mid \mathcal{M}_k)\, p(\mathcal{M}_k).$$

In practice, the prior over models $p(\mathcal{M}_k)$ can be very controversial, so it tends to be kept pretty weak (e.g. uniform). As a result, the likelihood portion

$$p(\mathcal{D} \mid \mathcal{M}_k) = \int p(\mathcal{D} \mid \theta_k, \mathcal{M}_k)\, p(\theta_k \mid \mathcal{M}_k)\, d\theta_k$$

tends to dominate.

This is called the **marginal likelihood**, and can be used to select a model by choosing the one with the largest marginal likelihood.

This can be very difficult to compute or estimate when it is not available in closed form, and the computational cost grows as the dimension of $\theta_k$ increases.
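For low-dimensional $\theta_k$, the marginal likelihood can be approximated by brute-force numerical integration. A sketch under an assumed conjugate setup (Gaussian likelihood with known variance, Gaussian prior on the mean), working in log space for stability:

```python
# Marginal likelihood p(D | M) = ∫ p(D | θ) p(θ) dθ by grid integration.
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(1)
data = rng.normal(loc=1.0, scale=1.0, size=50)

# Model: x_i ~ N(theta, 1) with prior theta ~ N(0, 10^2).
theta = np.linspace(-20, 20, 4001)          # integration grid
dtheta = theta[1] - theta[0]
log_lik = stats.norm(theta[:, None], 1.0).logpdf(data).sum(axis=1)
log_prior = stats.norm(0.0, 10.0).logpdf(theta)

# Riemann-sum approximation of the integral, done in log space.
log_marginal = logsumexp(log_lik + log_prior) + np.log(dtheta)
print(f"log marginal likelihood ≈ {log_marginal:.3f}")
```

A 1-D grid is cheap, but the grid size grows exponentially with the dimension of $\theta_k$, which is exactly the computational cost referred to above; in higher dimensions one resorts to approximations (Laplace, variational, sampling).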

The overall predictive density is a sum over models of (predictive density for the given model) × (posterior weight of the model):

$$p(x^* \mid \mathcal{D}) = \sum_{k=1}^{K} p(x^* \mid \mathcal{D}, \mathcal{M}_k)\, p(\mathcal{M}_k \mid \mathcal{D})$$

- $p(\mathcal{D} \mid \mathcal{M}_k)$ is the **marginal likelihood**
- $p(\mathcal{M}_k \mid \mathcal{D})$ are the **Bayesian model weights** (proportional to the marginal likelihood under a uniform model prior)
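The averaging step can be sketched numerically: with a uniform model prior, the Bayesian model weights are a softmax of the log marginal likelihoods, and the overall predictive density is the weight-averaged per-model predictive. All numbers below are hypothetical, for illustration only.

```python
# Bayesian model averaging from (hypothetical) log marginal likelihoods.
import numpy as np
from scipy.special import softmax

log_marginals = np.array([-120.3, -118.7, -125.1])  # hypothetical values
log_prior = np.log(np.ones(3) / 3)                  # uniform model prior
weights = softmax(log_marginals + log_prior)        # p(M_k | D)

# Per-model predictive densities p(x* | D, M_k) at a test point x*.
pred = np.array([0.21, 0.35, 0.08])                 # hypothetical values
bma_pred = float(np.dot(weights, pred))             # p(x* | D)
print(weights, bma_pred)
```

Note how the weights concentrate on the model with the largest marginal likelihood: differences of a few nats in log marginal likelihood translate to heavily skewed weights, so BMA often behaves close to selecting the single best model.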

In practice, Bayesian model averaging doesn't work very well compared to other ensemble methods.