**Trace Trick**

The trace is the sum of the diagonal entries of a square matrix.

The trace operator has the **cyclic permutation property**: $\text{tr}(ABC) = \text{tr}(BCA) = \text{tr}(CAB)$.

Dealing with Mahalanobis terms, e.g. $(\mathbf{x}-\mu)^T \Sigma^{-1} (\mathbf{x}-\mu)$:

Expand out into $\text{tr}\left(\Sigma^{-1}(\mathbf{x}-\mu)(\mathbf{x}-\mu)^T\right)$, since a scalar equals its own trace and the trace is invariant under cyclic permutation.
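The identity above can be checked numerically. This is a small sketch using randomly generated values (the variable names are illustrative, not from the notes):

```python
import numpy as np

# Check: (x - mu)^T Sigma^{-1} (x - mu) == tr(Sigma^{-1} (x - mu)(x - mu)^T)
rng = np.random.default_rng(0)
d = 3
x = rng.normal(size=d)
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)   # a random symmetric positive-definite matrix

diff = x - mu
quad = diff @ np.linalg.solve(Sigma, diff)                   # scalar Mahalanobis term
trace_form = np.trace(np.linalg.solve(Sigma, np.outer(diff, diff)))

assert np.isclose(quad, trace_form)
```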

**sufficient statistics** = a representation that preserves all of the information in the original data that is relevant to the model's parameters. “The minimum data needed to use a model.”

**Interpretation of Probability**

Consider $P(a)$ for some proposition $a$:

* $a$ is a proposition that is true or false

* What does the value $P(a)$ mean? Frequentist or Bayesian interpretation?

** Frequentist: $P(a)$ is the relative frequency with which $a$ occurs in repeated trials
** Bayesian: $P(a)$ is a degree of belief of an “agent” that the proposition is true

**Mixing Discrete and Real-valued Variables**

(See problem 8 of HW1)

$C$ is binary, e.g. (0=Healthy, 1=Flu)

$X$ is real-valued, e.g. the temperature of a patient

Can think of the joint factored either way:

* $P(X|C)P(C)$
** Conditioned on the binary value of $C$, this is a mixture of two continuous distributions
* $P(C|X)P(X)$
** Conditioned on a continuous value of $X$, this is a probability distribution across the categories of $C$
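The $P(C|X)$ view can be sketched with Bayes' rule. The numbers below (priors, means, standard deviations, the observed temperature) are made up for illustration:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p_c = {0: 0.9, 1: 0.1}                       # prior P(C): 0=Healthy, 1=Flu
params = {0: (37.0, 0.4), 1: (38.5, 0.6)}    # class-conditional mean/sd for X

x = 38.0  # an observed temperature

# P(C|X=x) is proportional to P(X=x|C) P(C); normalize over the two classes.
unnorm = {c: gauss_pdf(x, *params[c]) * p_c[c] for c in (0, 1)}
z = sum(unnorm.values())
posterior = {c: unnorm[c] / z for c in (0, 1)}
print(posterior)
```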

**Multivariate Models**

(Text section 2.5 and note set 2)

$\mathbf{x}$ is a $d \times 1$ vector of real values. $\mu = E[\mathbf{x}]$ is the expected-value vector, with one entry for each component $x_i$.

**Covariance**

- $\Sigma_{ij}$ = covariance between $x_i$ and $x_j$
- $\Sigma_{ij} = E\left[(x_i - \mu_i)(x_j - \mu_j)\right]$
- $\Sigma$ is the covariance matrix. It is symmetric: $\Sigma_{ij} = \Sigma_{ji}$

**Linear Correlation**

Limits of correlation measures: these are summaries, and fail to capture important aspects of a relationship in certain circumstances, e.g. a ring-shaped (toroidal) relationship between two variables can have near-zero linear correlation despite strong dependence.

The multivariate Gaussian is characterized by the parameters $\mu$ and $\Sigma$, the mean vector and the covariance matrix.

- For $d$ dimensions, the number of parameters needed grows as $O(d^2)$ due to the covariance matrix.
- The Gaussian assumes only pairwise linear dependencies.

The exponent contains the Mahalanobis term $(\mathbf{x}-\mu)^T \Sigma^{-1} (\mathbf{x}-\mu)$, which is effectively a distance function between the target and the mean. As the distance increases, the exponential component of the equation becomes a smaller and smaller value.
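A minimal sketch of this behavior, transcribing the multivariate normal log-density formula directly (the example mean and covariance values are made up):

```python
import numpy as np

def mvn_logpdf(x, mu, Sigma):
    """Log-density of a multivariate normal, written out term by term."""
    d = len(mu)
    diff = x - mu
    maha = diff @ np.linalg.solve(Sigma, diff)     # squared Mahalanobis distance
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

mu = np.zeros(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# The density shrinks as the Mahalanobis distance from the mean grows.
near, far = np.array([0.1, 0.1]), np.array([3.0, 3.0])
assert mvn_logpdf(near, mu, Sigma) > mvn_logpdf(far, mu, Sigma)
```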

=Note Set 2=

Sections 10.1-10.2 in the text

More general than mutual independence.

$X$ and $Y$ are conditionally independent given $Z$ iff $P(X, Y|Z) = P(X|Z)P(Y|Z)$.

Equivalently: $P(X|Y, Z) = P(X|Z)$.

Conditional independence isn't in general implied by marginal independence (nor vice versa).

Learning one value doesn't change the distribution over the other. This is frequently an assumption used to build models.

First-order: given the previous entry in a sequence, the next entry is independent of all earlier parts of the sequence: $P(x_t | x_{t-1}, \ldots, x_1) = P(x_t | x_{t-1})$.

For higher orders, $x_t$ will depend on more than one previous variable.

$C$ is a “class variable”: a random variable whose values are the class labels.

$\mathbf{x}$ is a vector of real-valued features.

Want to infer $P(C|\mathbf{x})$.

Using Bayes' Rule over all values of $C$: $P(C = k|\mathbf{x}) \propto P(\mathbf{x}|C = k)\,P(C = k)$.

Don't worry about the normalization term (the evidence), because the most likely class can be found from these proportional values alone.

This model tends to be overconfident in its probability estimates, but it tends to do well for ranking or comparing classes.
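The point about the evidence term can be sketched numerically: the unnormalized products $P(\mathbf{x}|C)P(C)$ give the same most likely class as the normalized posterior. The prior and likelihood values below are made up for illustration:

```python
import numpy as np

log_prior = np.log(np.array([0.7, 0.2, 0.1]))     # P(C=k), illustrative
log_like = np.log(np.array([0.05, 0.40, 0.02]))   # P(x|C=k) at an observed x

scores = log_prior + log_like                     # log of P(x|C) P(C), unnormalized
posterior = np.exp(scores - np.logaddexp.reduce(scores))  # normalized P(C|x)

# Normalizing divides every entry by the same constant, so the ranking is unchanged.
assert np.argmax(scores) == np.argmax(posterior)
```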

“Directed graphical models” were invented in genetics in the 1920s, then were revived in the 1970s as “Bayesian Networks”. They are a systematic way to represent sets of random variables and their associated conditional independence assumptions, and to compute with them.

Associate each random variable with a node in a directed graph. A directed edge is drawn from $A$ to $B$ when $B$ depends directly on $A$. No directed cycles are allowed.

Markov model: $P(x_1, \ldots, x_T) = P(x_1) \prod_{t=2}^{T} P(x_t | x_{t-1})$

Naive Bayes model: $P(C, x_1, \ldots, x_d) = P(C) \prod_{j=1}^{d} P(x_j | C)$, when the features are conditionally independent given $C$.

First expand the full joint: $P(x_1, \ldots, x_d)$.

Then factor the joint with the chain rule: $P(x_1, \ldots, x_d) = \prod_{i=1}^{d} P(x_i | x_{i-1}, \ldots, x_1)$.

Then can simplify the conditionals by the C.I. assumptions from the graph: $P(x_i | x_{i-1}, \ldots, x_1) = P(x_i | \text{parents}(x_i))$.

Exploiting the independence assumption allows for more efficient computing over these sums.
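As a sketch of the savings, a first-order Markov chain on binary variables needs only an initial distribution and a transition table, rather than a full joint table over all $2^T$ sequences. The probability values below are illustrative:

```python
import numpy as np
from itertools import product

p_x1 = np.array([0.6, 0.4])        # P(x1), illustrative
T = np.array([[0.9, 0.1],          # T[i, j] = P(x_t = j | x_{t-1} = i)
              [0.3, 0.7]])

def markov_joint(seq):
    """Joint probability of a binary sequence: P(x1) * prod_t P(x_t | x_{t-1})."""
    p = p_x1[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        p *= T[prev, cur]
    return p

# Sanity check: the factored joint sums to 1 over all 2^4 sequences of length 4.
total = sum(markov_joint(s) for s in product([0, 1], repeat=4))
assert abs(total - 1.0) < 1e-12
```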

Check out Murphy text

* page 99 for example of MLE of Multivariate Normal

Example 6 in the notes describes a mixture model where the MLE solution is not obvious.

Treat $\theta$ as a random variable. $p(\theta)$ = the prior distribution/density on $\theta$ before considering any data. $p(D|\theta)$ = the likelihood.

So using a Binomial model with a Beta($\alpha, \beta$) prior, the posterior is Beta($\alpha + r$, $\beta + n - r$), where $r$ = the number of successes and $n$ = the number of trials.

In this case, the posterior mean $= \frac{\alpha + r}{\alpha + \beta + n}$. With a “flat prior” ($\alpha = \beta = 1$), the posterior mode equals the MLE $r/n$, and as $n$ grows the posterior mean also goes to the MLE.
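These formulas transcribe directly into code (the counts $r = 7$, $n = 10$ below are made up for illustration):

```python
def posterior_stats(alpha, beta, r, n):
    """Mean and mode of the Beta(alpha + r, beta + n - r) posterior."""
    a, b = alpha + r, beta + (n - r)
    mean = a / (a + b)
    mode = (a - 1) / (a + b - 2)   # defined for a, b > 1
    return mean, mode

# Flat prior Beta(1,1): the posterior mode equals the MLE r/n,
# while the posterior mean (r+1)/(n+2) is pulled slightly toward 1/2.
mean, mode = posterior_stats(1, 1, r=7, n=10)
assert abs(mode - 7 / 10) < 1e-12
```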

Two data sets $D_1$ and $D_2$ which have an IID likelihood and shared parameters $\theta$.

The posterior is proportional to the likelihood times the prior: $p(\theta|D) \propto p(D|\theta)\,p(\theta)$.

Under the IID data assumption: $p(D_1, D_2|\theta) = p(D_1|\theta)\,p(D_2|\theta)$.

Where naive Bayes assumes the features are conditionally independent given the class, here the data samples are assumed to be conditionally independent given $\theta$. In this way, new data can be evaluated sequentially: the posterior after $D_1$ becomes the prior for $D_2$.

Let $D_1$ be one success out of 4 trials and $D_2$ be 5 successes out of 8 trials.
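A sketch of the sequential update with those two data sets, assuming a Beta(1, 1) prior (the prior choice is an assumption, not from the notes):

```python
alpha, beta = 1, 1          # assumed flat Beta(1,1) prior

# Update on D1: 1 success, 3 failures
alpha, beta = alpha + 1, beta + 3
# Update on D2: 5 successes, 3 failures
alpha, beta = alpha + 5, beta + 3

# A single batch update on the pooled data (6 successes out of 12 trials)
# gives the same posterior, Beta(7, 7) — order of updating does not matter.
assert (alpha, beta) == (1 + 6, 1 + 6)
```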

“Posterior Predictive Densities” in the textbook.

Given some training data $D$ and a likelihood $p(D|\theta)$, assuming the samples in $D$ are conditionally independent given $\theta$.

Prior: $p(\theta)$.

The result is the posterior $p(\theta|D)$.

However, we want to know $p(x_{\text{new}}|D)$. Using the Law of Total Probability: $p(x_{\text{new}}|D) = \int p(x_{\text{new}}|\theta)\,p(\theta|D)\,d\theta$.

$= \int$ (prediction for $x_{\text{new}}$ if $\theta$ were the true value) $\times$ (how likely that $\theta$ value is, given the data) $\,d\theta$

This is in contrast to a plug-in prediction such as $p(x_{\text{new}}|\hat{\theta}_{\text{MAP}})$, because now we are using the entire posterior rather than only its mode.

Might not do this if the integral is too difficult, or if the data has already made the posterior very “peaked” and well concentrated.

Example: a Gaussian model $x \sim N(\mu, \sigma^2)$ with unknown mean $\mu$ and known variance $\sigma^2$, with a Gaussian prior on $\mu$.

To make predictions about new observations:

The variance of this predictive distribution is the sum of the two sources of variance: one is the known observation variance of the data model; the other is the variance of the posterior over the unknown mean.
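A sketch using the standard conjugate-update formulas for this model (the data values, noise variance, and prior parameters `mu0`, `s0_2` below are illustrative assumptions):

```python
import numpy as np

def predictive(data, sigma2, mu0, s0_2):
    """Posterior predictive mean/variance for a Gaussian with known variance sigma2
    and a N(mu0, s0_2) prior on the unknown mean."""
    n = len(data)
    xbar = np.mean(data)
    sn_2 = 1.0 / (1.0 / s0_2 + n / sigma2)           # posterior variance of the mean
    mun = sn_2 * (mu0 / s0_2 + n * xbar / sigma2)    # posterior mean of the mean
    # Predictive variance = known noise variance + remaining uncertainty about the mean.
    return mun, sn_2 + sigma2

mu, var = predictive([1.2, 0.8, 1.1], sigma2=0.25, mu0=0.0, s0_2=1.0)
assert var > 0.25   # always exceeds the known noise variance alone
```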

Bayesian Model Selection

Check out

* Murphy text

** section 5.3
* Barber Text available on class webpage.
== Midterm ==
Was around here.
==Classification==
===Logistic Regression===
Check out
* Murphy text p.245-251 on Newton-Raphson and related methods
* Murphy text p.261-266 on Stochastic gradient descent
===Neural Networks===
Check out
* Murphy text
** chapter 16.5

** chapter 28 on deep learning

==Unsupervised Learning==

* Expectation Maximization

Check out Murphy text

* p337-356 on Finite Mixture Models

* p363-365 on EM growth

==Hidden Markov Models==

* see p603-629 in the text, or chapter 17

* see p661 in the text for CRFs

==Gibbs Sampling==

* Check out the collapsed Gibbs sampling method for a Dirichlet mixture of Gaussians

==Linear Regression==

In the Murphy text:

* See section 7.3

* See section 7.5 about fitting and time complexity

* See somewhere about the Bias-Variance tradeoff