Inverting Matrices, Determinants, and Matrix Multiplication

Determinants
• Square matrices have determinants, which are useful in other matrix operations, especially inversion.
• For a second-order square matrix

      A = [a11  a12]
          [a21  a22]

  the determinant is |A| = a11a22 - a12a21.

Consider the following bivariate raw data matrix

      Subject #    1    2    3    4    5
      X           12   18   32   44   49
      Y            1    3    2    4    5

from which the following XY variance-covariance matrix is obtained:

             X       Y
      X    256      21.5
      Y     21.5     2.5

      r = COVXY / (SX SY) = 21.5 / sqrt(256 × 2.5) ≈ .85

      |A| = 256(2.5) - 21.5(21.5) = 177.75

Think of the variance-covariance matrix as containing information about the two variables: the more variable X and Y are, the more information you have. Any redundancy between X and Y reduces the total amount of information you have -- to the extent that X and Y covary, you have less total information.

Generalized Variance
• The determinant tells you how much information the matrix has about the variance in the variables -- the generalized variance --
• after removing redundancy among the variables.
• We took the product of the variances and then subtracted the product of the covariances (the redundancy).

Imagine a Rectangle
• Its width represents information on X.
• Its height represents information on Y.
• X is perpendicular (orthogonal) to Y, so rXY = 0.
• The area of the rectangle represents the total information on X and Y.
• With covariance = 0, the determinant = the product of the two variances minus 0.

Imagine a Parallelogram
• Allowing X and Y to be correlated with one another moves the angle between height and width away from 90 degrees.
• As the angle moves further and further from 90 degrees, the area of the parallelogram shrinks,
• eventually to zero (when X and Y are perfectly correlated).
• See the Generalized Variance video clip in Blackboard.

Consider This Data Matrix

      Subject #    1    2    3    4    5
      X           10   20   30   40   50
      Y            1    2    3    4    5

Variance-Covariance Matrix

             X      Y
      X    250     25
      Y     25      2.5

      r = COVXY / (SX SY) = 25 / sqrt(250 × 2.5) = 1

      |A| = 250(2.5) - 25(25) = 0

Since X and Y are perfectly correlated, the generalized variance is nil.

Identity Matrix
• An identity matrix has 1's on its main diagonal and 0's elsewhere:

      I = [1  0  0]
          [0  1  0]
          [0  0  1]

Inversion
• The inverse of A is the matrix which, when multiplied by A, yields the identity matrix. That is, AA⁻¹ = A⁻¹A = I.
• With scalars, multiplication by the inverse yields the scalar identity: a(1/a) = 1.
• Multiplication by an inverse is like division with scalars: a(1/b) = a/b.

Inverting a 2 x 2 Matrix
• In general,

      A⁻¹ = (1 / |A|) [ a22  -a12]
                      [-a21   a11]

• For our original variance-covariance matrix:

      A⁻¹ = (1 / 177.75) [  2.5   -21.5]
                         [-21.5   256  ]

Multiplying a Scalar by a Matrix
• Simply multiply each matrix element by the scalar (1/177.75 in this case).
• The resulting inverse matrix is:

      A⁻¹ = [ .014064698   -.120956399]
            [-.120956399   1.440225035]
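The hand calculations above can be reproduced in SAS/IML. The sketch below is not part of the original slides: it assumes SAS/IML is available, and the variable names (raw, S, genvar, Sinv, check) are illustrative, not the course's. It builds the variance-covariance matrix from the raw X and Y scores in the first example, then finds its determinant (the generalized variance) and its inverse.

      proc iml;
      /* raw bivariate data from the first example: column 1 = X, column 2 = Y */
      raw = {12 1,
             18 3,
             32 2,
             44 4,
             49 5};
      n    = nrow(raw);                   /* number of subjects                              */
      xbar = raw[:,];                     /* 1 x 2 row vector of column means                */
      dev  = raw - repeat(xbar, n, 1);    /* deviation scores                                */
      S    = t(dev)*dev / (n - 1);        /* variance-covariance matrix: 256 21.5 / 21.5 2.5 */
      genvar = det(S);                    /* generalized variance: 177.75                    */
      Sinv   = inv(S);                    /* inverse of the variance-covariance matrix       */
      check  = S*Sinv;                    /* should be (essentially) the identity matrix     */
      print S genvar Sinv check;
      quit;

Substituting the second (perfectly correlated) data set gives a determinant of zero, and inv(S) should then fail, because a matrix whose determinant is zero has no inverse.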
AA⁻¹ = A⁻¹A = I
• In general, each element of a matrix product is a row of the first matrix times a column of the second:

      [a  b][w  x]   [aw + by   ax + bz]      row 1 * col 1    row 1 * col 2
      [c  d][y  z] = [cw + dy   cx + dz]      row 2 * col 1    row 2 * col 2

• For our variance-covariance matrix and its inverse:

      [256   21.5][ .014064698   -.120956399]   [1  0]
      [ 21.5   2.5][-.120956399   1.440225035] = [0  1]

The Determinant of a Third-Order Square Matrix

      A = [a11  a12  a13]
          [a21  a22  a23]
          [a31  a32  a33]

      |A| = a11a22a33 + a12a23a31 + a13a32a21
          - a31a22a13 - a11a32a23 - a12a21a33

Matrix Multiplication for a 3 x 3

      [a  b  c][r  s  t]   [ar + bu + cx   as + bv + cy   at + bw + cz]
      [d  e  f][u  v  w] = [dr + eu + fx   ds + ev + fy   dt + ew + fz]
      [g  h  i][x  y  z]   [gr + hu + ix   gs + hv + iy   gt + hw + iz]

      row 1 * col 1    row 1 * col 2    row 1 * col 3
      row 2 * col 1    row 2 * col 2    row 2 * col 3
      row 3 * col 1    row 3 * col 2    row 3 * col 3

SAS Will Do It For You

      proc iml;
      reset print;                /* display each matrix when it is created      */
      XY = {256 21.5,             /* enter the matrix XY; a comma ends each row  */
            21.5 2.5};            /* the whole matrix goes within { }            */
      determinant = det(XY);      /* find the determinant                        */
      inverse = inv(XY);          /* find the inverse                            */
      identity = XY*inverse;      /* multiply XY by its inverse                  */
      quit;

Output:

      XY             2 rows       2 cols
                     256             21.5
                      21.5            2.5

      DETERMINANT    1 row        1 col
                     177.75

      INVERSE        2 rows       2 cols
                     0.0140647      -0.120956
                    -0.120956        1.440225

      IDENTITY       2 rows       2 cols
                     1              -2.08E-17
                    -2.22E-16        1
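The same calls extend directly to larger matrices. The sketch below is not from the original slides: the 3 x 3 matrix S3 and the variable names are made up for illustration. It checks the six-term expansion of the third-order determinant given above against det(), and verifies that a 3 x 3 matrix times its inverse reproduces the 3 x 3 identity matrix, which IML supplies as I(3).

      proc iml;
      /* a made-up 3 x 3 variance-covariance matrix (hypothetical values) */
      S3 = {9  3  2,
            3 16  6,
            2  6 25};

      /* third-order determinant expanded by hand, following the formula above:
         a11a22a33 + a12a23a31 + a13a32a21 - a31a22a13 - a11a32a23 - a12a21a33 */
      d_byhand = 9*16*25 + 3*6*2 + 2*6*3 - 2*16*2 - 9*6*6 - 3*3*25;

      d_bysas = det(S3);          /* should equal d_byhand            */
      S3inv   = inv(S3);          /* 3 x 3 inverse                    */
      check   = S3*S3inv;         /* should match the identity matrix */
      ident   = I(3);             /* 3 x 3 identity matrix            */
      print d_byhand d_bysas S3inv check ident;
      quit;

Both determinant values should print as 3059, and check should agree with ident except for rounding error, just as the 2 x 2 IDENTITY output above differs from 1's and 0's only by quantities on the order of 1E-16.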