# Michael Data

Essentially the same idea as principal component analysis: factorize the large N-by-M ratings matrix into "skinny" matrices of size N-by-k and k-by-M. The factorization yields a low-rank approximation, which can be computed via singular value decomposition.

Each of the k dimensions is referred to as a factor. Both users and items can then be represented in this new latent feature space. Finding the factors exactly is equivalent to performing SVD, which has complexity $O(NM \min(N, M))$.
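As a sketch of the rank-k approximation (using NumPy on a small made-up dense matrix; real ratings matrices are sparse and incomplete, so plain SVD is only the idealized version):

```python
import numpy as np

# Rank-k approximation of an N-by-M matrix via truncated SVD.
# R is a dense random stand-in for a ratings matrix.
rng = np.random.default_rng(0)
N, M, k = 6, 5, 2
R = rng.random((N, M))

U, s, Vt = np.linalg.svd(R, full_matrices=False)
P = U[:, :k] * s[:k]   # N-by-k "skinny" factor
Q = Vt[:k, :]          # k-by-M factor
R_hat = P @ Q          # best rank-k approximation in Frobenius norm
```

By the Eckart–Young theorem, the Frobenius error of this truncation equals the norm of the discarded singular values.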

There are systematic effects that have to be taken out:

* The overall mean rating $\mu$
* The mean rating bias $b_u$ for user $u$
* The mean rating bias $b_i$ for movie $i$

These systematic effects can be modeled in a baseline predictor $b_{ui} = \mu + b_u + b_i$.
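A minimal sketch of the baseline predictor, assuming the biases are estimated as simple per-user and per-item deviations from the overall mean (the rating triples and helper names are illustrative, not from the source):

```python
import numpy as np

# (user, item, rating) triples; made-up data for illustration.
ratings = [
    (0, 0, 4.0), (0, 1, 3.0), (1, 0, 5.0), (1, 2, 2.0), (2, 1, 1.0),
]
mu = np.mean([r for _, _, r in ratings])  # overall mean rating

users = {u for u, _, _ in ratings}
items = {i for _, i, _ in ratings}
# Bias = mean rating of that user/item minus the overall mean.
b_u = {u: np.mean([r for uu, _, r in ratings if uu == u]) - mu for u in users}
b_i = {i: np.mean([r for _, ii, r in ratings if ii == i]) - mu for i in items}

def baseline(u, i):
    """Baseline prediction: overall mean plus user and item deviations."""
    return mu + b_u.get(u, 0.0) + b_i.get(i, 0.0)
```

Unseen users or items fall back to a zero bias, i.e. the prediction degrades gracefully to the overall mean.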

The new prediction model becomes $\hat{r}_{ui} = \mu + b_u + b_i + q_i^\top p_u$, where $q_i$ and $p_u$ are the latent factor vectors for item $i$ and user $u$.

Regularization turns out to be very important. It assists in learning, and it also shrinks estimates built from incomplete data toward the center of the latent space.

One method for learning the approximation components is gradient descent. The parameters to learn are the biases $b_u$ and $b_i$.

The matrix factorization can also be learned with gradient descent. Rather than using the linear algebra SVD methods to do it, the factors are treated as an additional set of parameters to learn.
Update equations with regularization, where $e_{ui} = r_{ui} - \hat{r}_{ui}$ is the prediction error, $\gamma$ the learning rate, and $\lambda$ the regularization weight:

* $b_u \leftarrow b_u + \gamma (e_{ui} - \lambda b_u)$
* $b_i \leftarrow b_i + \gamma (e_{ui} - \lambda b_i)$
* $q_i \leftarrow q_i + \gamma (e_{ui} p_u - \lambda q_i)$
* $p_u \leftarrow p_u + \gamma (e_{ui} q_i - \lambda p_u)$
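The full model, biases and factors together, can be sketched with stochastic gradient descent in NumPy. The data, learning rate, and regularization weight below are made-up placeholders; each observed rating contributes one regularized update to its user's and item's parameters:

```python
import numpy as np

# Model: r_hat = mu + b_u + b_i + q_i . p_u, trained by SGD.
rng = np.random.default_rng(0)
ratings = [(0, 0, 4.0), (0, 1, 3.0), (1, 0, 5.0), (1, 2, 2.0), (2, 1, 1.0)]
n_users, n_items, k = 3, 3, 2
gamma, lam, epochs = 0.05, 0.02, 200   # learning rate, regularization, passes

mu = np.mean([r for _, _, r in ratings])
b_u = np.zeros(n_users)
b_i = np.zeros(n_items)
P = 0.1 * rng.standard_normal((n_users, k))   # user factors p_u
Q = 0.1 * rng.standard_normal((n_items, k))   # item factors q_i

for _ in range(epochs):
    for u, i, r in ratings:
        e = r - (mu + b_u[u] + b_i[i] + Q[i] @ P[u])   # prediction error e_ui
        b_u[u] += gamma * (e - lam * b_u[u])
        b_i[i] += gamma * (e - lam * b_i[i])
        # Tuple assignment so both updates use the pre-update values.
        P[u], Q[i] = (P[u] + gamma * (e * Q[i] - lam * P[u]),
                      Q[i] + gamma * (e * P[u] - lam * Q[i]))
```

After training, predictions on the observed triples should sit close to the true ratings, with the regularizer keeping the factors from fitting them exactly.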