# Data in Linear Subspaces

The principal components are the directions of maximum variance. This lecture derives the principal components as the best approximation to the data in a linear subspace, shows that this is equivalent to maximizing the variance of the projections, and avoids explicit optimization by finding the eigenvalues and eigenvectors of the covariance matrix. It then works an example, the principal components of cars, including how to tell sports cars from minivans; gives a standard recipe for interpreting PCA plots; and notes some cautions in interpreting PCA.

## Principal Components: Mathematics, Example, Interpretation

36-350: Data Mining, 18 September 2009

Reading: Section 3.6 in the textbook.

Contents:

1. Mathematics of Principal Components
   1.1 Minimizing Projection Residuals
   1.2 Maximizing Variance
   1.3 More Geometry; Back to the Residuals
2. Example: Cars
   2.1 A Recipe
3. PCA Cautions

At the end of the last lecture, I set as our goal to find ways of reducing the dimensionality of our data by means of linear projections, and of choosing projections which in some sense respect the structure of the data. I further asserted that there was one way of doing this which was far more useful and important than others, called principal components analysis, where "respecting structure" means "preserving variance". This lecture will explain that, explain how to do PCA, show an example, and describe some of the issues that come up in interpreting the results.

PCA has been rediscovered many times in many fields, so it is also known as the Karhunen-Loève transformation, the Hotelling transformation, the method of empirical orthogonal functions, and singular value decomposition (strictly speaking, singular value decomposition is a matrix-algebra trick which is used in the most common algorithm for PCA). We will call it PCA.

## 1 Mathematics of Principal Components

We start with $p$-dimensional feature vectors, and want to summarize them by projecting down into a $q$-dimensional subspace. Our summary will be the projection of the original vectors on to $q$ directions, the principal components, which span the subspace.
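To make that setup concrete, here is a minimal R sketch (toy data; all names are my own illustrations, not from the notes): it draws $q$ orthonormal directions and forms the $q$-dimensional summary of $p$-dimensional vectors.

```r
set.seed(1)
n <- 100; p <- 5; q <- 2
X <- matrix(rnorm(n * p), n, p)             # n observations in p dimensions
W <- qr.Q(qr(matrix(rnorm(p * q), p, q)))   # q orthonormal directions, as a p x q matrix
scores <- X %*% W                           # n x q summary coordinates
images <- scores %*% t(W)                   # the projected vectors, back in p dimensions
dim(scores)                                 # 100 2
```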

There are several equivalent ways of deriving the principal components mathematically. The simplest one is by finding the projections which maximize the variance. The first principal component is the direction in feature space along which projections have the largest variance. The second principal component is the direction which maximizes variance among all directions orthogonal to the first. The $k$-th component is the variance-maximizing direction orthogonal to the previous $k - 1$ components. There are $p$ principal components in all.

Rather than maximizing variance, it might sound more plausible to look for the projection with the smallest average (mean-squared) distance between the original vectors and their projections on to the principal components; this turns out to be equivalent to maximizing the variance.

Throughout, assume that the data have been "centered", so that every feature has mean 0. If we write the centered data in a matrix $\mathbf{X}$, where rows are objects and columns are features, then $\mathbf{X}^T \mathbf{X} = n\mathbf{V}$, where $\mathbf{V}$ is the covariance matrix of the data. (You should check that last statement!)

### 1.1 Minimizing Projection Residuals

We'll start by looking for a one-dimensional projection. That is, we have $p$-dimensional feature vectors, and we want to project them on to a line through the origin. We can specify the line by a unit vector along it, $\mathbf{w}$, and then the projection of a data vector $\mathbf{x}_i$ on to the line is $\mathbf{x}_i \cdot \mathbf{w}$, which is a scalar. (Sanity check: this gives us the right answer when we project on to one of the coordinate axes.) This is the distance of the projection from the origin; the actual coordinate in $p$-dimensional space is $(\mathbf{x}_i \cdot \mathbf{w})\mathbf{w}$. The mean of the projections will be zero, because the mean of the vectors $\mathbf{x}_i$ is zero:

$$\frac{1}{n}\sum_{i=1}^{n}(\mathbf{x}_i \cdot \mathbf{w})\mathbf{w} = \left(\left(\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i\right) \cdot \mathbf{w}\right)\mathbf{w} \tag{1}$$

If we try to use our projected or image vectors instead of our original vectors, there will be some error, because (in general) the images do not coincide with the original vectors. (When do they coincide?) The difference is the error or residual of the projection. How big is it? For any one vector, say $\mathbf{x}_i$, and using the fact that $\mathbf{w} \cdot \mathbf{w} = \|\mathbf{w}\|^2 = 1$, it's

$$\|\mathbf{x}_i - (\mathbf{w} \cdot \mathbf{x}_i)\mathbf{w}\|^2 = \|\mathbf{x}_i\|^2 - 2(\mathbf{w} \cdot \mathbf{x}_i)(\mathbf{w} \cdot \mathbf{x}_i) + (\mathbf{w} \cdot \mathbf{x}_i)^2\|\mathbf{w}\|^2 \tag{2}$$

$$= \|\mathbf{x}_i\|^2 - (\mathbf{w} \cdot \mathbf{x}_i)^2 \tag{3}$$

(This is the same trick used to compute distance matrices in the solution to the first homework; it's really just the Pythagorean theorem.)
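Both of those claims are easy to check numerically. One convention trap: R's cov() divides by $n - 1$ where the notes divide by $n$, so the rescaling below is needed for $\mathbf{X}^T\mathbf{X} = n\mathbf{V}$ to hold exactly. A sketch on toy data (illustrative names):

```r
set.seed(2)
n <- 200; p <- 4
X <- scale(matrix(rnorm(n * p), n, p), center = TRUE, scale = FALSE)  # centered data
V <- cov(X) * (n - 1) / n        # covariance matrix in the notes' 1/n convention
max(abs(t(X) %*% X - n * V))     # ~0, confirming X^T X = nV

w <- rnorm(p); w <- w / sqrt(sum(w^2))         # a random unit vector
r2 <- sum((X[1, ] - sum(w * X[1, ]) * w)^2)    # squared residual of x_1, computed directly
r2 - (sum(X[1, ]^2) - sum(w * X[1, ])^2)       # ~0, matching equation (3)
```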

Add those residuals up across all the vectors:

$$\mathrm{RSS}(\mathbf{w}) = \sum_{i=1}^{n}\left(\|\mathbf{x}_i\|^2 - (\mathbf{w} \cdot \mathbf{x}_i)^2\right) \tag{4}$$

$$= \left(\sum_{i=1}^{n}\|\mathbf{x}_i\|^2\right) - \sum_{i=1}^{n}(\mathbf{w} \cdot \mathbf{x}_i)^2 \tag{5}$$

The term in the big parenthesis doesn't depend on $\mathbf{w}$, so it doesn't matter for trying to minimize the residual sum-of-squares. To make RSS small, what we must do is make the second sum big, i.e., we want to maximize

$$\sum_{i=1}^{n}(\mathbf{w} \cdot \mathbf{x}_i)^2 \tag{6}$$

Equivalently, since $n$ doesn't depend on $\mathbf{w}$, we want to maximize

$$\frac{1}{n}\sum_{i=1}^{n}(\mathbf{w} \cdot \mathbf{x}_i)^2 \tag{7}$$

which we can see is the sample mean of $(\mathbf{w} \cdot \mathbf{x}_i)^2$. The mean of a square is always equal to the square of the mean plus the variance:

$$\frac{1}{n}\sum_{i=1}^{n}(\mathbf{w} \cdot \mathbf{x}_i)^2 = \left(\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i \cdot \mathbf{w}\right)^2 + \mathrm{Var}\left[\mathbf{w} \cdot \mathbf{x}_i\right] \tag{8}$$

Since we've just seen that the mean of the projections is zero, minimizing the residual sum of squares turns out to be equivalent to maximizing the variance of the projections.

(Of course in general we don't want to project on to just one vector, but on to multiple principal components. If those components are orthogonal and have the unit vectors $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_k$, then the image of $\mathbf{x}_i$ is its projection into the space spanned by these vectors,

$$\sum_{j=1}^{k}(\mathbf{x}_i \cdot \mathbf{w}_j)\mathbf{w}_j \tag{9}$$

The mean of the projection on to each component is still zero. If we go through the same algebra for the residual sum of squares, it turns out that the cross-terms between different components all cancel out, and we are left with trying to maximize the sum of the variances of the projections on to the components. Exercise: Do this algebra.)
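That equivalence is also easy to see numerically (toy data, illustrative names): the residual sum of squares plus the sum of squared projections, which is $n$ times the variance of the projections, comes out to the same constant, $\sum_i \|\mathbf{x}_i\|^2$, for every unit vector $\mathbf{w}$.

```r
set.seed(3)
n <- 150; p <- 3
X <- scale(matrix(rnorm(n * p), n, p), center = TRUE, scale = FALSE)  # centered data

check <- function(w) {
  w <- w / sqrt(sum(w^2))                  # force a unit vector
  proj <- drop(X %*% w)                    # projections x_i . w
  resid <- X - outer(proj, w)              # residual vectors x_i - (w . x_i) w
  c(RSS = sum(resid^2), RSS.plus.nVar = sum(resid^2) + sum(proj^2))
}
check(rnorm(p))   # RSS changes with the direction...
check(rnorm(p))   # ...but RSS + n*Var is always the same...
sum(X^2)          # ...namely the total sum of squares
```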

### 1.2 Maximizing Variance

Accordingly, let's maximize the variance! Writing out all the summations grows tedious, so let's do our algebra in matrix form. If we stack our $n$ data vectors into an $n \times p$ matrix, $\mathbf{X}$, then the projections are given by $\mathbf{X}\mathbf{w}$, which is an $n \times 1$ matrix. The variance is

$$\sigma^2_{\mathbf{w}} = \frac{1}{n}\sum_{i}(\mathbf{x}_i \cdot \mathbf{w})^2 \tag{10}$$

$$= \frac{1}{n}(\mathbf{X}\mathbf{w})^T(\mathbf{X}\mathbf{w}) \tag{11}$$

$$= \frac{1}{n}\mathbf{w}^T\mathbf{X}^T\mathbf{X}\mathbf{w} \tag{12}$$

$$= \mathbf{w}^T\frac{\mathbf{X}^T\mathbf{X}}{n}\mathbf{w} \tag{13}$$

$$= \mathbf{w}^T\mathbf{V}\mathbf{w} \tag{14}$$

We want to choose a unit vector $\mathbf{w}$ so as to maximize $\sigma^2_{\mathbf{w}}$. To do this, we need to make sure that we only look at unit vectors; we need to constrain the maximization. The constraint is that $\mathbf{w} \cdot \mathbf{w} = 1$, or $\mathbf{w}^T\mathbf{w} = 1$.

This needs a brief excursion into constrained optimization. We start with a function $f(\mathbf{w})$ that we want to maximize. (Here, that function is $\mathbf{w}^T\mathbf{V}\mathbf{w}$.) We also have an equality constraint, $g(\mathbf{w}) = c$. (Here, $g(\mathbf{w}) = \mathbf{w}^T\mathbf{w}$ and $c = 1$.) We re-arrange the constraint equation so its right-hand side is zero, $g(\mathbf{w}) - c = 0$. We now add an extra variable to the problem, the Lagrange multiplier $\lambda$, and consider $u(\mathbf{w}, \lambda) = f(\mathbf{w}) - \lambda(g(\mathbf{w}) - c)$. This is our new objective function, so we differentiate with respect to both arguments and set the derivatives equal to zero:

$$\frac{\partial u}{\partial \mathbf{w}} = 0 = \frac{\partial f}{\partial \mathbf{w}} - \lambda\frac{\partial g}{\partial \mathbf{w}} \tag{15}$$

$$\frac{\partial u}{\partial \lambda} = 0 = -(g(\mathbf{w}) - c) \tag{16}$$

That is, maximizing with respect to $\lambda$ gives us back our constraint equation, $g(\mathbf{w}) = c$. At the same time, when we have the constraint satisfied, our new objective function is the same as the old one. (If we had more than one constraint, we would just need more Lagrange multipliers. To learn more about Lagrange multipliers, read Boas (1983) or, more compactly, Klein (2001). Thanks to Ramana Vinjamuri for pointing out a sign error in an earlier version of this paragraph.) For our projection problem,

$$u = \mathbf{w}^T\mathbf{V}\mathbf{w} - \lambda(\mathbf{w}^T\mathbf{w} - 1) \tag{17}$$

$$\frac{\partial u}{\partial \mathbf{w}} = 2\mathbf{V}\mathbf{w} - 2\lambda\mathbf{w} = 0 \tag{18}$$

$$\mathbf{V}\mathbf{w} = \lambda\mathbf{w} \tag{19}$$

Thus, the desired vector $\mathbf{w}$ is an eigenvector of the covariance matrix $\mathbf{V}$, and the maximizing vector will be the one associated with the largest eigenvalue $\lambda$. This is good news, because finding eigenvectors is something which can be done comparatively rapidly (see Principles of Data Mining, p. 81), and because eigenvectors have many nice mathematical properties, which we can use as follows.
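The eigenvector characterization can be verified with R's built-in dense eigensolver, eigen(), on toy correlated data (names are illustrative): the top eigenvector of $\mathbf{V}$ attains projection variance $\lambda_1$, and a random unit vector does worse.

```r
set.seed(4)
n <- 300; p <- 4
X <- matrix(rnorm(n * p), n, p) %*% matrix(runif(p * p), p, p)  # correlated toy features
X <- scale(X, center = TRUE, scale = FALSE)                     # center the features
V <- t(X) %*% X / n                 # covariance matrix, 1/n convention
e <- eigen(V, symmetric = TRUE)
w1 <- e$vectors[, 1]                # candidate first principal component
mean(drop(X %*% w1)^2)              # variance of the projections on to w1...
e$values[1]                         # ...equals the top eigenvalue lambda_1
w <- rnorm(p); w <- w / sqrt(sum(w^2))
mean(drop(X %*% w)^2)               # any other direction has less variance
```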

We know that $\mathbf{V}$ is a $p \times p$ matrix, so it will have $p$ different eigenvectors. (Exception: if $n < p$, there are only $n$ distinct eigenvectors and eigenvalues.) We know that $\mathbf{V}$ is a covariance matrix, so it is symmetric, and then linear algebra tells us that the eigenvectors must be orthogonal to one another. Again because $\mathbf{V}$ is a covariance matrix, it is a positive matrix, in the sense that $\mathbf{x} \cdot \mathbf{V}\mathbf{x} \geq 0$ for any $\mathbf{x}$. This tells us that the eigenvalues of $\mathbf{V}$ must all be $\geq 0$.

The eigenvectors of $\mathbf{V}$ are the principal components of the data. We know that they are all orthogonal to each other from the previous paragraph, so together they span the whole $p$-dimensional feature space. The first principal component, i.e. the eigenvector which goes with the largest value of $\lambda$, is the direction along which the data have the most variance. The second principal component, i.e. the second eigenvector, is the direction orthogonal to the first component with the most variance. Because it is orthogonal to the first eigenvector, their projections will be uncorrelated. In fact, projections on to all the principal components are uncorrelated with each other. If we use $q$ principal components, our weight matrix $\mathbf{w}$ will be a $p \times q$ matrix, where each column will be a different eigenvector of the covariance matrix $\mathbf{V}$. The eigenvalues will give the total variance described by each component. The variance of the projections on to the first $q$ principal components is then $\sum_{i=1}^{q}\lambda_i$.

### 1.3 More Geometry; Back to the Residuals

Suppose that the data really are $q$-dimensional. Then $\mathbf{V}$ will have only $q$ positive eigenvalues, and $p - q$ zero eigenvalues. If the data fall near a $q$-dimensional subspace, then $p - q$ of the eigenvalues will be nearly zero.

If we pick the top $q$ components, we can define a projection operator $\mathbf{P}_q$. The images of the data are then $\mathbf{X}\mathbf{P}_q$. The projection residuals are $\mathbf{X} - \mathbf{X}\mathbf{P}_q$, or $\mathbf{X}(\mathbf{I} - \mathbf{P}_q)$. (Notice that the residuals here are vectors, not just magnitudes.) If the data really are $q$-dimensional, then the residuals will be zero. If the data are approximately $q$-dimensional, then the residuals will be small. In any case, we can define the $R^2$ of the projection as the fraction of the original variance kept by the image vectors,

$$R^2 \equiv \frac{\sum_{i=1}^{q}\lambda_i}{\sum_{j=1}^{p}\lambda_j} \tag{20}$$

just as the $R^2$ of a linear regression is the fraction of the original variance of the dependent variable retained by the fitted values.

The $q = 1$ case is especially instructive. We know, from the discussion of projections in the last lecture, that the residual vectors are all orthogonal to the projections. Suppose we ask for the first principal component of the residuals. This will be the direction of largest variance which is perpendicular to the first principal component. In other words, it will be the second principal component of the data. This suggests a recursive algorithm for finding all the principal components: the $k$-th principal component is the leading component of the residuals after subtracting off the first $k - 1$ components. In practice, it is faster to use eigenvector-solvers to get all the components at once from $\mathbf{V}$, but we will see versions of this idea later.

This is a good place to remark that if the data really fall in a $q$-dimensional subspace, then $\mathbf{V}$ will have only $q$ positive eigenvalues, because after subtracting off those components there will be no residuals. The other $p - q$ eigenvectors will all have eigenvalue 0. If the data cluster around a $q$-dimensional subspace, then $p - q$ of the eigenvalues will be very small, though how small they need to be before we can neglect them is a tricky question. (One tricky case where this can occur is if $n < p$. Any two points define a line, and three points define a plane, etc., so if there are fewer data points than features, it is necessarily true that they fall on a low-dimensional subspace. If we look at the bags-of-words for the Times stories, for instance, we have $p \approx 4400$ but $n \approx 102$. Finding that only 102 principal components account for all the variance is not an empirical discovery but a mathematical artifact.)
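Before moving on, the recursive description above can be tested in a few lines: take the leading eigenvector, subtract off the projections, and take the leading eigenvector of the residuals; it should match the second eigenvector of $\mathbf{V}$ up to sign. A sketch on toy data (illustrative names):

```r
set.seed(5)
n <- 300; p <- 5
X <- matrix(rnorm(n * p), n, p) %*% matrix(runif(p * p), p, p)  # correlated toy features
X <- scale(X, center = TRUE, scale = FALSE)
e <- eigen(t(X) %*% X / n, symmetric = TRUE)

w1 <- e$vectors[, 1]
R <- X - outer(drop(X %*% w1), w1)            # residuals after removing component 1
w2 <- eigen(t(R) %*% R / n, symmetric = TRUE)$vectors[, 1]  # leading PC of the residuals
abs(sum(w2 * e$vectors[, 2]))                 # ~1: matches the second PC up to sign
cumsum(e$values) / sum(e$values)              # the R^2 from keeping the first q components
```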

## 2 Example: Cars

Today's dataset is 388 cars from the 2004 model year, with 18 features (from http://www.amstat.org/publications/jse/datasets/04cars.txt, with incomplete records removed). Seven features are binary indicators; the other eleven features are numerical (Table 1). Table 2 shows the first few lines from the data set. PCA only works with numerical features, so we have eleven of them to play with.

| Variable   | Meaning                                 |
|------------|-----------------------------------------|
| Sports     | Binary indicator for being a sports car |
| SUV        | Indicator for sports utility vehicle    |
| Wagon      | Indicator                               |
| Minivan    | Indicator                               |
| Pickup     | Indicator                               |
| AWD        | Indicator for all-wheel drive           |
| RWD        | Indicator for rear-wheel drive          |
| Retail     | Suggested retail price (US$)            |
| Dealer     | Price to dealer (US$)                   |
| Engine     | Engine size (liters)                    |
| Cylinders  | Number of engine cylinders              |
| Horsepower | Engine horsepower                       |
| CityMPG    | City gas mileage                        |
| HighwayMPG | Highway gas mileage                     |
| Weight     | Weight (pounds)                         |
| Wheelbase  | Wheelbase (inches)                      |
| Length     | Length (inches)                         |
| Width      | Width (inches)                          |

Table 1: Features for the 2004 cars data.

```
Sports, SUV, Wagon, Minivan, Pickup, AWD, RWD, Retail,Dealer,Engine,Cylinders,Horsepower,Cit
Acura 3.5 RL,0,0,0,0,0,0,0,43755,39014,3.5,6,225,18,24,3880,115,197,72
Acura MDX,0,1,0,0,0,1,0,36945,33337,3.5,6,265,17,23,4451,106,189,77
Acura NSX S,1,0,0,0,0,0,1,89765,79978,3.2,6,290,17,24,3153,100,174,71
```

Table 2: The first few lines of the 2004 cars data set.

There are two R functions for doing PCA, princomp and prcomp, which differ in how they do the actual calculation. (princomp actually calculates the covariance matrix and takes its eigenvalues; prcomp uses a different technique called "singular value decomposition".) The latter is generally more robust, so we'll just use it.
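Before doing so, here is a hedged sanity check that the two routes agree: the SVD-based prcomp and an explicit eigendecomposition of the covariance matrix (princomp's approach) should give the same component variances, and the same loadings up to column signs. A sketch on toy data, since both use the $n - 1$ convention for variances (illustrative names):

```r
set.seed(6)
X <- matrix(rnorm(100 * 4), 100, 4)
pc <- prcomp(X, scale. = TRUE)                 # SVD route, on standardized features
e <- eigen(cov(scale(X)), symmetric = TRUE)    # covariance-eigenvalue route
max(abs(pc$sdev^2 - e$values))                 # same component variances, ~0
max(abs(abs(pc$rotation) - abs(e$vectors)))    # same loadings up to sign, ~0
```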

Applying it to the cars data:

```r
cars04 = read.csv("cars-fixed04.dat")
cars04.pca = prcomp(cars04[,8:18], scale.=TRUE)
```

The second argument to prcomp tells it to first scale all the variables to have variance 1, i.e., to standardize them. You should experiment with what happens with this data when we don't standardize.

We can now extract the loadings or weight matrix from the cars04.pca object. For comprehensibility I'll just show the first two components.

```
> round(cars04.pca$rotation[,1:2],2)
             PC1   PC2
Retail     -0.26 -0.47
Dealer     -0.26 -0.47
Engine     -0.35  0.02
Cylinders  -0.33 -0.08
Horsepower -0.32 -0.29
CityMPG     0.31  0.00
HighwayMPG  0.31  0.01
Weight     -0.34  0.17
Wheelbase  -0.27  0.42
Length     -0.26  0.41
Width      -0.30  0.31
```

This says that all the variables except the gas-mileages have a negative projection on to the first component. This means that there is a negative correlation between mileage and everything else. The first principal component tells us about whether we are getting a big, expensive gas-guzzling car with a powerful engine, or whether we are getting a small, cheap, fuel-efficient car with a wimpy engine.

The second component is a little more interesting. Engine size and gas mileage hardly project on to it at all. Instead we have a contrast between the physical size of the car (positive projection) and the price and horsepower. Basically, this axis separates mini-vans, trucks and SUVs (big, not so expensive, not so much horsepower) from sports-cars (small, expensive, lots of horsepower).

To check this interpretation, we can use a useful tool called a biplot, which plots the data, along with the projections of the original features, on to the first two components (Figure 1). Notice that the car with the lowest value of the second component is a Porsche 911, with pick-up trucks and mini-vans at the other end of the scale. Similarly, the highest values of the first component all belong to hybrids.
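The biplot is built in: biplot() accepts a prcomp object directly, and the projected coordinates (scores) live in cars04.pca$x. A sketch of the checks just described, assuming the car names are the data frame's row names (an assumption about the file, not something the notes state):

```r
biplot(cars04.pca, cex = 0.4)    # observations plus feature arrows on PC1 and PC2 (as in Figure 1)
# extreme cars on each component (assumes car names are row names):
rownames(cars04)[which.min(cars04.pca$x[, 2])]   # lowest second component: the Porsche 911
rownames(cars04)[which.max(cars04.pca$x[, 1])]   # highest first component: a hybrid
```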

### 2.1 A Recipe

There is a more-or-less standard recipe for interpreting PCA plots, which goes as follows. To begin with, find the first two principal components of your data. (I say "two" only because that's what you can plot; see below.) It's generally a good idea to standardize all the features first, but not strictly necessary.

**Coordinates** Using the arrows, summarize what each component means. For the cars, the first component is something like size vs. fuel economy, and the second is something like sporty vs. boxy.

**Correlations** For many datasets, the arrows cluster into groups of highly correlated attributes. Describe these attributes. Also determine the overall level of correlation (given by the $R^2$ value). Here we get groups of arrows like the two MPGs (unsurprising), retail and dealer price (ditto) and the physical dimensions of the car (maybe a bit more interesting).

**Clusters** Clusters indicate a preference for particular combinations of attribute values. Summarize each cluster by its prototypical member. For the cars data, we see a cluster of very similar values for sports-cars, for instance, slightly below the main blob of data.

**Funnels** Funnels are wide at one end and narrow at the other. They happen when one dimension affects the variance of another, orthogonal dimension. Thus, even though the components are uncorrelated (because they are perpendicular), they still affect each other. (They are uncorrelated but not independent.) The cars data has a funnel, showing that small cars are similar in sportiness, while large cars are more varied.

**Voids** Voids are areas inside the range of the data which are unusually unpopulated. A permutation plot is a good way to spot voids. (Randomly permute the data in each column, and see if any new areas become occupied; a sketch of such a plot appears below.) For the cars data, there is a void of sporty cars which are very small or very large. This suggests that such cars are undesirable or difficult to make.

Projections on to the first two or three principal components can be visualized; however, they may not be enough to really give a good summary of the data. Usually, to get an $R^2$ of 1, you need to use all $p$ principal components. (The exceptions are when some of your features are linear combinations of the others, so that you don't really have $p$ different features, or when $n < p$.)
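Finally, here is a hedged sketch of the permutation plot mentioned under "Voids", reusing the cars04 objects from above: shuffling each column independently preserves the marginal distributions but destroys the dependence between features, so regions that the permuted points fill in but the real points avoid are voids. This is my reading of the recipe, not code from the notes.

```r
# assumes cars04 and cars04.pca exist as defined earlier
Z <- scale(cars04[, 8:18])                     # standardized numerical features
Zperm <- apply(Z, 2, sample)                   # permute each column independently
scores      <- Z     %*% cars04.pca$rotation[, 1:2]
scores.perm <- Zperm %*% cars04.pca$rotation[, 1:2]
plot(scores.perm, pch = 20, col = "grey",      # permuted data: no dependence structure
     xlab = "PC1", ylab = "PC2")
points(scores, pch = 20)                       # real data; grey-only regions are voids
```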