# Michael Data

## Bivariate Transformation
Identify supports $\mathcal{A}$ and $\mathcal{B}$ such that the transformation is a 1-to-1 map from $\mathcal{A}$ onto $\mathcal{B}$. If the mapping is not 1-to-1, then identify a partition $A_0, A_1, \dots, A_k$ of $\mathcal{A}$ (with $P((X,Y) \in A_0) = 0$) such that the transformation is a 1-to-1 map from each $A_i$ onto $\mathcal{B}$. (C&B 162)

If $(U, V) = (g_1(X,Y), g_2(X,Y))$ with inverse $x = h_1(u,v)$, $y = h_2(u,v)$, then

$$f_{U,V}(u,v) = f_{X,Y}\big(h_1(u,v),\, h_2(u,v)\big)\,|J|$$

(C&B 157)

#### Jacobian

$$J = \det\begin{pmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{pmatrix}$$

For a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, its determinant is $ad - bc$.
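As a quick numeric sketch (not from the source), the $ad - bc$ determinant can be checked with central differences, using the hypothetical inverse transformation $x = (u+v)/2$, $y = (u-v)/2$ (the inverse of $u = x+y$, $v = x-y$), whose Jacobian determinant is $-1/2$:

```python
# Numerically approximate the Jacobian determinant of the inverse map
# x = (u+v)/2, y = (u-v)/2 via central differences.

def x_of(u, v):
    return (u + v) / 2

def y_of(u, v):
    return (u - v) / 2

def jacobian_det(u, v, h=1e-6):
    # Partial derivatives by central differences
    dx_du = (x_of(u + h, v) - x_of(u - h, v)) / (2 * h)
    dx_dv = (x_of(u, v + h) - x_of(u, v - h)) / (2 * h)
    dy_du = (y_of(u + h, v) - y_of(u - h, v)) / (2 * h)
    dy_dv = (y_of(u, v + h) - y_of(u, v - h)) / (2 * h)
    # 2x2 determinant: ad - bc
    return dx_du * dy_dv - dx_dv * dy_du

J = jacobian_det(0.3, 1.7)
print(J)  # close to -1/2, so |J| = 1/2
```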

#### Sum of two Poisson

If $X \sim \text{Poisson}(\theta)$ and $Y \sim \text{Poisson}(\lambda)$ and $X$ and $Y$ are independent, then $X + Y \sim \text{Poisson}(\theta + \lambda)$.
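A Monte Carlo sanity check of this fact (a sketch, not from the source; Poisson draws use Knuth's multiplication method since the standard library has no Poisson sampler):

```python
import math
import random

random.seed(0)

def poisson_draw(lam):
    # Knuth's method: count uniform draws until their running
    # product falls below exp(-lam).
    limit = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > limit:
        prod *= random.random()
        k += 1
    return k

theta, lam, n = 2.0, 3.0, 200_000
sums = [poisson_draw(theta) + poisson_draw(lam) for _ in range(n)]

# Empirical P(X + Y = 5) vs. the Poisson(theta + lambda) pmf at 5
emp = sums.count(5) / n
pmf = math.exp(-(theta + lam)) * (theta + lam) ** 5 / math.factorial(5)
print(emp, pmf)  # both near 0.175
```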

#### Distribution Trick

Simplify an integral into the form of a known distribution, so it integrates to 1.
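A worked instance of the trick (choosing the gamma kernel as the example; not from the source): recognize a known density's kernel inside the integral, multiply and divide by its normalizing constant, and the remaining integral is 1.

```latex
\int_0^\infty x^{\alpha-1} e^{-x/\beta}\,dx
  = \Gamma(\alpha)\,\beta^{\alpha}
    \underbrace{\int_0^\infty \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\,
      x^{\alpha-1} e^{-x/\beta}\,dx}_{=\,1}
  = \Gamma(\alpha)\,\beta^{\alpha}
```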

#### Geometric Series

$$\sum_{k=0}^{\infty} ar^k = \frac{a}{1-r} \quad \text{for } |r| < 1$$

See Wikipedia.
Used to derive the moment-generating function of the geometric distribution. (The Poisson MGF instead relies on the exponential series $e^x = \sum_{k=0}^\infty x^k/k!$.)

#### Pearson's Correlation Coefficient

$$\rho_{X,Y} = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \sigma_Y} \in [-1, 1]$$

##### Chebyshev's Inequality

$$P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}$$

It “can be applied to completely arbitrary distributions (unknown except for mean and variance).”
Used to show that certain remainder terms go to zero in order to satisfy the conditions of a Central Limit Theorem.
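A minimal empirical sketch (not from the source): for an exponential distribution, whose mean and variance are known, the observed tail probability stays well below the Chebyshev bound $1/k^2$.

```python
import random

random.seed(0)

# Exponential(rate=1): mean = 1, standard deviation = 1
mu, sigma, n, k = 1.0, 1.0, 100_000, 2.0
draws = [random.expovariate(1.0) for _ in range(n)]

# Empirical P(|X - mu| >= k*sigma) vs. Chebyshev's bound 1/k^2
tail = sum(abs(x - mu) >= k * sigma for x in draws) / n
bound = 1 / k**2
print(tail, bound)  # tail near exp(-3) ~ 0.05, bound = 0.25
```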

##### Central Limit Theorems

Given i.i.d. random variables $X_1, \dots, X_n$ with finite mean $\mu$ and variance $\sigma^2$:

#### for the Mean

The sample mean $\bar{X}_n$ converges in distribution: $\sqrt{n}\,(\bar{X}_n - \mu) \xrightarrow{d} N(0, \sigma^2)$, i.e. $\bar{X}_n \approx N(\mu, \sigma^2/n)$ for $n$ samples.

#### for the Sum

The sum of $n$ samples converges in distribution: $\dfrac{\sum_{i=1}^n X_i - n\mu}{\sigma\sqrt{n}} \xrightarrow{d} N(0, 1)$, i.e. $\sum_{i=1}^n X_i \approx N(n\mu, n\sigma^2)$.
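A simulation sketch of the CLT for the mean (not from the source; exponential draws chosen arbitrarily as a strongly skewed example):

```python
import math
import random

random.seed(0)

# Exponential(1): mu = 1, sigma = 1, right-skewed
n, reps = 400, 10_000
means = [sum(random.expovariate(1.0) for _ in range(n)) / n
         for _ in range(reps)]

# CLT: the sample mean should be approximately N(mu, sigma^2 / n)
avg = sum(means) / reps
sd = math.sqrt(sum((m - avg) ** 2 for m in means) / reps)
print(avg, sd)  # avg near 1.0, sd near 1/sqrt(400) = 0.05
```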

#### Lyapunov

The random variables $X_1, X_2, \dots$ have to be independent, but not necessarily identically distributed. Write $\mu_i = E[X_i]$, $\sigma_i^2 = \operatorname{Var}(X_i)$, and $s_n^2 = \sum_{i=1}^n \sigma_i^2$.

Lyapunov's condition: for some $\delta > 0$,

$$\lim_{n \to \infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^{n} E\!\left[|X_i - \mu_i|^{2+\delta}\right] = 0$$

When Lyapunov's condition is satisfied,

$$\frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \xrightarrow{d} N(0, 1)$$

#### Lindeberg

The random variables do not need to be identically distributed, as long as they satisfy Lindeberg's condition (writing $\mu_i = E[X_i]$, $\sigma_i^2 = \operatorname{Var}(X_i)$, and $s_n^2 = \sum_{i=1}^n \sigma_i^2$).

Lindeberg's condition: for every $\varepsilon > 0$,

$$\lim_{n \to \infty} \frac{1}{s_n^2} \sum_{i=1}^{n} E\!\left[(X_i - \mu_i)^2 \,\mathbf{1}\{|X_i - \mu_i| > \varepsilon s_n\}\right] = 0$$

If Lindeberg's Condition is satisfied, then

$$\frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \xrightarrow{d} N(0, 1)$$

and

$$\max_{1 \le i \le n} \frac{\sigma_i^2}{s_n^2} \to 0 \quad \text{as } n \to \infty$$
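A simulation sketch (not from the source) of a non-identically-distributed case satisfying Lindeberg's condition, using uniform variables with growing variances:

```python
import math
import random

random.seed(0)

# Independent but NOT identically distributed: X_i ~ Uniform(-c_i, c_i)
# with c_i = sqrt(3)*i, so mu_i = 0 and Var(X_i) = c_i^2 / 3 = i^2.
n, reps = 200, 5_000
s_n = math.sqrt(sum(i * i for i in range(1, n + 1)))

draws = []
for _ in range(reps):
    total = sum(random.uniform(-math.sqrt(3) * i, math.sqrt(3) * i)
                for i in range(1, n + 1))
    draws.append(total / s_n)

# The standardized sum should be approximately N(0, 1)
mean = sum(draws) / reps
sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / reps)
print(mean, sd)  # near 0 and 1
```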

##### Multivariate Normal Conditional distributions

If the $N$-dimensional vector $x$ is partitioned as $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, and accordingly $\mu$ and $\Sigma$ are partitioned as follows

$$\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}$$

with sizes $q \times 1$ and $(N-q) \times 1$, and

$$\Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}$$

with sizes $q \times q$, $q \times (N-q)$, $(N-q) \times q$, and $(N-q) \times (N-q)$,

then the distribution of $x_1$ conditional on $x_2 = a$ is multivariate normal, $(x_1 \mid x_2 = a) \sim N(\bar{\mu}, \bar{\Sigma})$,

where

$$\bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2)$$

and covariance matrix

$$\bar{\Sigma} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}.$$

This matrix is the Schur complement of $\Sigma_{22}$ in $\Sigma$. This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix. Here $\Sigma_{22}^{-1}$ is the generalized inverse of $\Sigma_{22}$.

Note that knowing that $x_2 = a$ alters the variance, though the new variance does not depend on the specific value of $a$. Perhaps more surprisingly, the mean is shifted by $\Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2)$.

Compare this with the situation of not knowing the value of $x_2$, in which case $x_1$ would have distribution
$N_q(\mu_1, \Sigma_{11})$.

An interesting fact derived in order to prove this result is that the random vectors $x_2$ and $y_1 = x_1 - \Sigma_{12} \Sigma_{22}^{-1} x_2$ are independent.

The matrix $\Sigma_{12} \Sigma_{22}^{-1}$ is known as the matrix of regression coefficients.

In the bivariate case where $x$ is partitioned into $x_1$ and $x_2$, the conditional distribution of $x_1$ given $x_2 = a$ is

$$x_1 \mid x_2 = a \;\sim\; N\!\left(\mu_1 + \frac{\sigma_1}{\sigma_2}\rho\,(a - \mu_2),\; (1 - \rho^2)\sigma_1^2\right)$$

where $\rho$ is the correlation coefficient between $x_1$ and $x_2$.
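A small numeric sketch (not from the source) checking the Schur-complement formulas against this bivariate special case, with arbitrary illustrative parameter values:

```python
# Arbitrary bivariate normal parameters (assumptions for illustration)
mu1, mu2 = 1.0, -2.0
s1, s2, rho = 2.0, 3.0, 0.6
a = 0.5  # conditioning value: x2 = a

# Partitioned covariance (all blocks are scalars in the bivariate case)
S11, S12, S21, S22 = s1**2, rho * s1 * s2, rho * s1 * s2, s2**2

# Schur-complement formulas for the conditional distribution
cond_mean = mu1 + S12 / S22 * (a - mu2)
cond_var = S11 - S12 / S22 * S21

# Bivariate special case written with sigma and rho directly
cond_mean_biv = mu1 + (s1 / s2) * rho * (a - mu2)
cond_var_biv = (1 - rho**2) * s1**2

print(cond_mean, cond_mean_biv)  # identical up to float rounding
print(cond_var, cond_var_biv)    # identical up to float rounding
```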

#### Bivariate conditional expectation

In the case

$$\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim N\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)$$

the following result holds:

$$E[X_1 \mid X_2 > z] = \rho\,\frac{\varphi(z)}{1 - \Phi(z)}$$

where the final ratio here is called the inverse Mills ratio.
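A Monte Carlo sketch (not from the source) of this identity, computing the standard-normal pdf $\varphi$ directly and the cdf $\Phi$ via `math.erf`:

```python
import math
import random

random.seed(0)

rho, z, n = 0.5, 1.0, 400_000

# Standard normal pdf and cdf at z
phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
theory = rho * phi / (1 - Phi)  # rho * inverse Mills ratio

# Sample (X1, X2) standard bivariate normal with correlation rho,
# then average X1 over draws where X2 > z.
total, count = 0.0, 0
for _ in range(n):
    x2 = random.gauss(0, 1)
    if x2 > z:
        x1 = rho * x2 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
        total += x1
        count += 1
print(total / count, theory)  # both near 0.76
```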

##### Delta Method

For an estimator $\bar{X}_n$ (the mean of a sample vector of $n$ samples) which satisfies $\sqrt{n}\,(\bar{X}_n - \theta) \xrightarrow{d} N(0, \sigma^2)$, and a function $g$ of the random variable:
Use a Taylor-series approximation of $g$ without calculating the distribution of $g(\bar{X}_n)$.

First-order:

$$g(\bar{X}_n) \approx g(\theta) + g'(\theta)(\bar{X}_n - \theta), \qquad \sqrt{n}\,\big(g(\bar{X}_n) - g(\theta)\big) \xrightarrow{d} N\!\big(0,\, \sigma^2 g'(\theta)^2\big)$$
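A simulation sketch of the first-order delta method (not from the source; $g(x) = x^2$ and exponential samples chosen arbitrarily as the example):

```python
import math
import random

random.seed(0)

# Exponential(1): theta = E[X] = 1, sigma = 1; take g(x) = x^2, so g'(theta) = 2
n, reps = 400, 10_000
g_vals = []
for _ in range(reps):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    g_vals.append(xbar ** 2)

# Delta method predicts sd(g(Xbar)) ~ |g'(theta)| * sigma / sqrt(n) = 2/20 = 0.1
avg = sum(g_vals) / reps
sd = math.sqrt(sum((v - avg) ** 2 for v in g_vals) / reps)
print(avg, sd)  # avg near 1.0, sd near 0.1
```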