Matrix

A matrix is an ordered set of numbers listed in rectangular form.

Example. Let A denote the matrix

    [2 5 7 8]
    [5 6 8 9]
    [3 9 0 1]

This matrix A has three rows and four columns. We say it is a 3 x 4 matrix. We denote the element in the second row and fourth column by a_{2,4}; here a_{2,4} = 9.

Square matrix

If a matrix A has n rows and n columns, we say it is a square matrix. In a square matrix the elements a_{i,i}, with i = 1, 2, 3, ..., are called the diagonal elements.

Remark. There is no difference between a 1 x 1 matrix and an ordinary number.

Diagonal matrix

A diagonal matrix is a square matrix in which all the non-diagonal elements are 0. A diagonal matrix is completely determined by its diagonal elements.

Example.

    [7 0 0]
    [0 5 0]
    [0 0 6]

This matrix is denoted by diag(7, 5, 6).

Row matrix

A matrix with one row is called a row matrix.

Column matrix

A matrix with one column is called a column matrix.

Matrices of the same kind

Matrices A and B are of the same kind if and only if A has as many rows as B and A has as many columns as B.

The transpose of a matrix

The n x m matrix A' is the transpose of the m x n matrix A if and only if the ith row of A is the ith column of A' for every i. So a_{i,j} = a'_{j,i}. The transpose of A is denoted T(A) or A^T.

0-matrix

When all the elements of a matrix A are 0, we call A a 0-matrix. We write 0 for short.

An identity matrix I

An identity matrix I is a diagonal matrix with all diagonal elements equal to 1.

A scalar matrix S

A scalar matrix S is a diagonal matrix with all diagonal elements alike: a_{1,1} = a_{i,i} for i = 1, 2, 3, ..., n.

The opposite matrix of a matrix

If we change the sign of all the elements of a matrix A, we have the opposite matrix -A. If A' is the opposite of A, then a'_{i,j} = -a_{i,j} for all i and j.

A symmetric matrix

A square matrix is called symmetric if it is equal to its transpose. Then a_{i,j} = a_{j,i} for all i and j.

A skew-symmetric matrix

A square matrix is called skew-symmetric if it is equal to the opposite of its transpose. Then a_{i,j} = -a_{j,i} for all i and j.

The sum of matrices of the same kind

Sum of matrices

To add two matrices of the same kind, we simply add the corresponding elements.

Sum properties

Consider the set S of all n x m matrices (n and m fixed), and let A and B be in S. From the properties of real numbers it is immediate that

    A + B is in S
    the addition of matrices is associative in S
    A + 0 = A = 0 + A
    with each A corresponds an opposite matrix -A
    A + B = B + A

Scalar multiplication

Definition

To multiply a matrix by a real number, we multiply each element by this number.

Properties

Consider the set S of all n x m matrices (n and m fixed). A and B are in S; r and s are real numbers. It is not difficult to see that:

    r(A + B) = rA + rB
    (r + s)A = rA + sA
    (rs)A = r(sA)
    (A + B)^T = A^T + B^T
    (rA)^T = r.A^T

Sums in math

Because the following makes intensive use of the properties of sums, a reader who is not familiar with these properties should first read Sums in math.

Remark. In this document, for convenience, we write the word sum instead of the sigma sign.

Multiplication of a row matrix by a column matrix

This multiplication is only possible if the row matrix and the column matrix have the same number of elements. The result is an ordinary number (a 1 x 1 matrix). To multiply the row by the column, one multiplies corresponding elements, then adds the results.

Example.

              [1]
    [2 1 3] . [2] = [19]
              [5]

since 2.1 + 1.2 + 3.5 = 19.
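These definitions translate directly into code. The following is a minimal Python sketch, assuming matrices are stored as lists of rows; the function names (transpose, add, scale, row_times_column) are illustrative choices, not a standard API. It reproduces the row-by-column example above.

```python
# A matrix as a list of rows.
A = [[2, 5, 7, 8],
     [5, 6, 8, 9],
     [3, 9, 0, 1]]

def transpose(A):
    # Row i of A becomes column i of the transpose: a'_{j,i} = a_{i,j}.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def add(A, B):
    # Matrices of the same kind: add corresponding elements.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scale(r, A):
    # Scalar multiplication: multiply each element by r.
    return [[r * x for x in row] for row in A]

def row_times_column(row, col):
    # Multiply corresponding elements, then add the results.
    return sum(r * c for r, c in zip(row, col))

print(transpose(A))                            # a 4 x 3 matrix
print(add(A, scale(-1, A)))                    # A + (-A) is a 0-matrix
print(row_times_column([2, 1, 3], [1, 2, 5]))  # 2.1 + 1.2 + 3.5 = 19
```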
Multiplication of two matrices A.B

This product is defined only if A is an (l x m) matrix and B is an (m x n) matrix. So the number of columns of A has to be equal to the number of rows of B. The product C = A.B then is an (l x n) matrix. The element in the ith row and jth column of the product is found by multiplying the ith row of A by the jth column of B.

    c_{i,j} = sum_k (a_{i,k}.b_{k,j})

Example.

    [1 2] [1 3]   [5 7]
    [2 1].[2 2] = [4 8]

    [1 3] [1 2]   [7 5]
    [2 2].[2 1] = [6 6]

    [1 1] [ 2  2]   [0 0]
    [1 1].[-2 -2] = [0 0]

From these examples we see that the product is not commutative and that there are zero divisors.

Properties of multiplication of matrices

Associativity

If the multiplication is defined, then A(B.C) = (A.B)C holds for all matrices A, B and C.

Proof: We show that an element of A(B.C) is equal to the corresponding element of (A.B)C.

First we calculate the element in the ith row and jth column of A(B.C). Let D denote B.C; then

    d_{k,j} = sum_p b_{k,p}.c_{p,j}   (1)

Let E denote A.D; then

    e_{i,j} = sum_k a_{i,k}.d_{k,j}   (2)

Substituting (1) in (2) gives

    e_{i,j} = sum_k a_{i,k}.(sum_p b_{k,p}.c_{p,j})
          <=> e_{i,j} = sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}

So the element in the ith row and jth column of A(B.C) is

    sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}   (3)

Now we calculate the element in the ith row and jth column of (A.B)C. Let D' denote A.B; then

    d'_{i,p} = sum_k a_{i,k}.b_{k,p}   (4)

Let E' denote D'.C; then

    e'_{i,j} = sum_p d'_{i,p}.c_{p,j}   (5)

Substituting (4) in (5) gives

    e'_{i,j} = sum_p (sum_k a_{i,k}.b_{k,p}).c_{p,j}
          <=> e'_{i,j} = sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}

So the element in the ith row and jth column of (A.B)C is

    sum_{k,p} a_{i,k}.b_{k,p}.c_{p,j}   (6)

From (3) and (6), A(B.C) = (A.B)C.

Distributivity

If the multiplication is defined, then A(B + C) = A.B + A.C and (A + B).C = A.C + B.C hold for all matrices A, B and C. This theorem can be proved in the same way as above.

Theorem 1

For each A there is always an identity matrix E and an identity matrix E' such that A.E = A and E'.A = A. If A is a square matrix, then E = E'.

Theorem 2

(A.B)^T = B^T.A^T. This theorem can be proved in the same way as above.

Theorem 3

If the multiplication is defined, then for each A

    A.0 = 0 = 0.A

Theorem 4

Let r and s be real numbers and A, B matrices. If the multiplication is defined, then

    (rA)(sB) = (rs)(A.B)

This theorem can be proved in the same way as above.

Theorem 5

If D = diag(a, b, c), then

    D.D = diag(a^2, b^2, c^2)
    D.D.D = diag(a^3, b^3, c^3)
    ...

This property can be generalised to D = diag(a, b, c, d, e, ..., l).
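The product rule c_{i,j} = sum_k (a_{i,k}.b_{k,j}) and the properties above lend themselves to a numerical spot-check. The following Python sketch (the function name matmul is an illustrative choice) reproduces the worked examples, including the non-commutativity and zero-divisor cases, and checks associativity on those matrices.

```python
def matmul(A, B):
    # c_{i,j} = sum_k a_{i,k}.b_{k,j}; defined only when the number
    # of columns of A equals the number of rows of B.
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [2, 1]]
B = [[1, 3], [2, 2]]
print(matmul(A, B))   # [[5, 7], [4, 8]]
print(matmul(B, A))   # [[7, 5], [6, 6]] -- so A.B != B.A

C = [[1, 1], [1, 1]]
D = [[2, 2], [-2, -2]]
print(matmul(C, D))   # [[0, 0], [0, 0]] -- zero divisors

# Associativity on these matrices: A(B.C) = (A.B)C.
print(matmul(A, matmul(B, C)) == matmul(matmul(A, B), C))  # True
```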
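Theorems 2 and 5 can likewise be spot-checked on small examples, assuming the numpy library is available (the matrices chosen here are our own illustrations):

```python
import numpy as np

A = np.array([[1, 2], [2, 1]])
B = np.array([[1, 3], [2, 2]])
# Theorem 2: (A.B)^T = B^T.A^T.
print(np.array_equal((A @ B).T, B.T @ A.T))  # True

# Theorem 5: multiplying a diagonal matrix by itself
# raises the diagonal entries to a power.
D = np.diag([7, 5, 6])
print(D @ D)  # diag(49, 25, 36) = diag(7^2, 5^2, 6^2)
```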