Introduction to Matrix Algebra

DEFINITION OF A MATRIX

A matrix is a rectangular array of quantities arranged in rows and columns. A matrix containing m rows and n columns can be expressed as

            [ A11  A12  ...  A1n ]
  A = [A] = [ A21  A22  ...  A2n ]
            [ ...  ...  ...  ... ]
            [ Am1  Am2  ...  Amn ]   (m x n)

The quantities that form a matrix are referred to as the elements of the matrix. Each element of the matrix is identified with two subscripts, i and j, which designate the row and column locations, respectively. Thus, the i,j element (or coefficient) of [A] is expressed as Aij, with i = 1, 2, ..., m and j = 1, 2, ..., n for m rows and n columns. This also defines the size of [A], referred to as its order, to be m x n.

TYPES OF MATRICES

Row Matrix – If all the elements of a matrix are arranged in a single row (i.e., m = 1), the matrix is referred to as a row matrix or row vector. A row vector, denoted with angle brackets, with n columns can be expressed as

  <r> = < r1  r2  ...  rn >   (1 x n)

Notice that only one subscript is used with the row vector, to identify the column location, since there is only one row.

Column Matrix – If all the elements of a matrix are arranged in a single column (i.e., n = 1), the matrix is referred to as a column matrix or column vector. A column vector, denoted with braces, with m rows can be expressed as

        { c1  }
  {c} = { c2  }
        { ... }
        { cm  }   (m x 1)

Notice that only one subscript is used with the column vector, to identify the row location, since there is only one column.

Square Matrix – A matrix with the same number of rows and columns, i.e., m = n. An n x n square matrix is

        [ S11  S12  ...  S1n ]
  [S] = [ S21  S22  ...  S2n ]
        [ ...  ...  ...  ... ]
        [ Sn1  Sn2  ...  Snn ]   (n x n)

Elements with the same subscripts, i.e., Sii for i = 1, 2, ..., n, are referred to as the diagonal elements or coefficients. All other coefficients, Sij for i ≠ j, are termed the off-diagonal elements or coefficients.
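As a concrete aside (a sketch, not part of the original notes), an m x n matrix can be represented in plain Python as a list of m rows, each a list of n elements; note that Python uses 0-based indices, unlike the 1-based subscripts above.

```python
# An m x n matrix stored as a list of rows (illustrative example values).
A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix

m = len(A)               # number of rows
n = len(A[0])            # number of columns
print((m, n))            # the order of A: (2, 3)

# Python indices start at 0, so the element written A23 in the notes
# (row 2, column 3) is A[1][2] here.
print(A[1][2])           # 6
```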
Diagonal Matrix – If all the off-diagonal elements of a square matrix are zero, i.e., Dij = 0 for i ≠ j, the matrix is referred to as a diagonal matrix:

        [ D11           ]
  [D] = [      D22      ]
        [           ... ]
        [           Dnn ]   (n x n)

where all elements not shown are zero.

Unit or Identity Matrix – A diagonal matrix with all diagonal elements equal to 1 (i.e., Iii = 1 and Iij = 0 for i ≠ j) is called a unit or identity matrix:

        [ 1         ]
  [I] = [    1      ]
        [       ... ]
        [         1 ]   (n x n)

Null Matrix – When all the elements of a matrix are zero (i.e., Oij = 0 for i = 1, 2, ..., m; j = 1, 2, ..., n), the matrix is called a null matrix, [O].

Symmetric Matrix – If the elements of a square matrix are symmetric (Sij = Sji) about the main diagonal (Sii, i = 1, 2, ..., n), the matrix is referred to as a symmetric matrix.

MATRIX OPERATIONS

Equality – Two matrices [A] and [B] are equal if they are of the same order and if their corresponding elements are identical, i.e., Aij = Bij for i = 1, 2, ..., m and j = 1, 2, ..., n.

Addition and Subtraction – Addition or subtraction of two matrices [A] and [B] can be carried out only for two matrices of equal order, by adding or subtracting the corresponding elements of the two matrices, i.e.,

  Cij = Aij ± Bij

Multiplication by a Scalar – The product of a scalar c and a matrix is obtained simply by multiplying each matrix element by the scalar, i.e.,

  Cij = c Aij

Multiplication of Matrices – The multiplication of two matrices can be carried out only if the number of columns of the first (pre-multiplier) matrix equals the number of rows of the second (post-multiplier) matrix. Such matrices are referred to as conformable for multiplication.

Even when two matrices [A] and [B] are of such orders that both matrix products [A][B] and [B][A] can be determined, the two products are generally not equal:

  [A][B] ≠ [B][A]

i.e., matrix multiplication does not satisfy the commutative law. Thus, it is necessary to maintain the proper sequential order of matrices when computing matrix products.
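The operations above can be sketched in plain Python (the helper names add, scale, and matmul are illustrative, not from the notes). The last two lines show that even for conformable square matrices, AB and BA generally differ.

```python
def add(A, B):
    """Elementwise sum of two equal-order matrices: Cij = Aij + Bij."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scale(c, A):
    """Scalar multiple of a matrix: Cij = c * Aij."""
    return [[c * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def matmul(A, B):
    """Product of conformable matrices: Cij = sum over k of Aik * Bkj."""
    n = len(B)                       # columns of A must equal rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(add(A, B))            # [[1, 3], [4, 4]]
print(scale(2, A))          # [[2, 4], [6, 8]]
print(matmul(A, B))         # [[2, 1], [4, 3]]
print(matmul(B, A))         # [[3, 4], [1, 2]] -- AB != BA in general
```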
For example, [C]mxp = [A]mxn [B]nxp, or

  Cij = Σ (k = 1 to n) Aik Bkj = Ai1 B1j + Ai2 B2j + ... + Ain Bnj

A common application of matrix multiplication involves simultaneous equations, [A]{x} = {b}, which for a system of order three is explicitly expressed as

  [ A11  A12  A13 ] { x1 }   { b1 }
  [ A21  A22  A23 ] { x2 } = { b2 }
  [ A31  A32  A33 ] { x3 }   { b3 }

Matrix multiplication does satisfy the associative and distributive laws, provided the matrices are conformable, i.e.,

  [A][B][C] = ([A][B])[C] = [A]([B][C])
  [A]([B] + [C]) = [A][B] + [A][C]

Multiplication of [A] by a conformable null matrix [O] results in a null matrix, and multiplication of [A] by a conformable identity matrix [I] results in the matrix [A], i.e.,

  [A]mxn [O]nxp = [O]mxp        [O]pxm [A]mxn = [O]pxn
  [A]mxn [I]nxn = [A]mxn        [I]mxm [A]mxn = [A]mxn

Inverse of a Square Matrix – The inverse of a square matrix [S] is represented as [S]^-1, with elements of such magnitude that the multiplication of the original matrix [S] by [S]^-1 yields the identity matrix, i.e.,

  [S]^-1 [S] = [S] [S]^-1 = [I]

Matrix inversion is only defined for square matrices, and the order of the inverse matrix is the same as that of the original matrix. Matrix inversion will be used in this class for the solution of simultaneous equations:

  [S]{x} = {b}    so that    {x} = [S]^-1 {b}

An important property of matrix inversion is that if [S] is symmetric, then [S]^-1 is also symmetric. The simultaneous equations to be solved in connection with the displacement method of analysis in this course will always be symmetric. Your class notes include the closed-form inverse for a 2x2 matrix and a 3x3 matrix.

Transpose of a Matrix – The transpose of a matrix is obtained by interchanging its corresponding rows and columns. The transpose of a matrix is usually identified by the superscript T on the matrix.
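As a quick numerical check of the laws above, the following Python sketch (illustrative helpers, not from the notes) verifies the associative law, the distributive law, and the identity-matrix property on small integer matrices, where the arithmetic is exact.

```python
def matmul(A, B):
    # Cij = sum over k of Aik * Bkj (A and B must be conformable).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [2, 1]]
I = [[1, 0], [0, 1]]

lhs_assoc = matmul(matmul(A, B), C)         # ([A][B])[C]
rhs_assoc = matmul(A, matmul(B, C))         # [A]([B][C])
lhs_dist = matmul(A, add(B, C))             # [A]([B] + [C])
rhs_dist = add(matmul(A, B), matmul(A, C))  # [A][B] + [A][C]

print(lhs_assoc == rhs_assoc)   # True: associative law
print(lhs_dist == rhs_dist)     # True: distributive law
print(matmul(A, I) == A)        # True: [A][I] = [A]
```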
For example, consider [A]3x2; the transpose of [A] is expressed as [A]^T, and the elements of [A]^T are related to the elements of [A] as

        [ A11  A12 ]
  [A] = [ A21  A22 ]            [A]^T = [ A11  A21  A31 ]
        [ A31  A32 ]  (3 x 2)           [ A12  A22  A32 ]  (2 x 3)

For a symmetric matrix [S]:

  [S] = [S]^T;    Sij = Sji

The transpose of a matrix product is defined as

  ([A][B])^T = [B]^T [A]^T
  ([A][B][C])^T = [C]^T [B]^T [A]^T

Partitioning of Matrices – Partitioning is a process by which a matrix is subdivided into a number of smaller matrices called submatrices. For example,

        [ A11  A12 | A13  A14 ]
  [A] = [ A21  A22 | A23  A24 ]  =  [ [A]11  [A]12 ]
        [ ---------+--------- ]     [ [A]21  [A]22 ]
        [ A31  A32 | A33  A34 ]

where

  [A]11 = [ A11  A12 ]      [A]12 = [ A13  A14 ]
          [ A21  A22 ]              [ A23  A24 ]

  [A]21 = [ A31  A32 ]      [A]22 = [ A33  A34 ]

Matrix operations such as addition, subtraction, and multiplication can be performed on partitioned matrices in the same manner as described previously, by treating the submatrices as elements, provided the matrices are partitioned such that their submatrices are conformable.

GAUSS-JORDAN ELIMINATION

Solution of Simultaneous Equations – The Gauss-Jordan elimination method is one of the numerous techniques available for solving simultaneous equations, and it is particularly convenient for hand solution. Consider the following three symmetric equations:

  4 x1 + 2 x2 + 0 x3 =  4
  2 x1 + 8 x2 + 2 x3 = -4
  0 x1 + 2 x2 + 4 x3 =  0

When applying the Gauss-Jordan method, it is usually convenient to write the coefficient matrix [A] and the right-hand-side vector {b} as submatrices of a partitioned augmented matrix:

  [ 4  2  0 |  4 ]
  [ 2  8  2 | -4 ]
  [ 0  2  4 |  0 ]

Elementary row operations are then applied until the coefficient submatrix is reduced to the identity matrix; the right-hand column then contains the solution {x}.

Matrix Inversion – Gauss-Jordan elimination can also be used to determine the inverse of a matrix. You simply follow the Gauss-Jordan process described above with the augmented submatrix equal to the identity matrix.
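The Gauss-Jordan procedure on the partitioned augmented matrix can be sketched in Python as follows. This is a minimal illustration, not from the notes; it assumes nonzero pivots (no row interchanges are needed), which holds for the example system above.

```python
def gauss_jordan_solve(A, b):
    """Solve [A]{x} = {b} by Gauss-Jordan elimination (nonzero pivots assumed)."""
    n = len(A)
    # Build the partitioned augmented matrix [A | b].
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = aug[i][i]
        aug[i] = [v / piv for v in aug[i]]          # normalize the pivot row
        for r in range(n):
            if r != i:                              # eliminate column i elsewhere
                f = aug[r][i]
                aug[r] = [v - f * p for v, p in zip(aug[r], aug[i])]
    return [aug[i][n] for i in range(n)]            # rightmost column is {x}

A = [[4, 2, 0], [2, 8, 2], [0, 2, 4]]
b = [4, -4, 0]
x = gauss_jordan_solve(A, b)
print(x)    # approximately [1.5, -1.0, 0.5]
```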
For example, calculate the inverse of the matrix used in the Gauss-Jordan elimination example above. The augmented matrix is

  [ 4  2  0 | 1  0  0 ]
  [ 2  8  2 | 0  1  0 ]
  [ 0  2  4 | 0  0  1 ]

Carrying out the Gauss-Jordan process reduces the left submatrix to the identity matrix and leaves the inverse in the right submatrix:

                   [  7  -2   1 ]
  [A]^-1 = (1/24)  [ -2   4  -2 ]
                   [  1  -2   7 ]

The unknown vector {x} can then be calculated using [A]^-1 as

                              [  7  -2   1 ] {  4 }          {  36 }   {  3/2 }
  {x} = [A]^-1 {b} = (1/24)   [ -2   4  -2 ] { -4 } = (1/24) { -24 } = { -1   }
                              [  1  -2   7 ] {  0 }          {  12 }   {  1/2 }
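The same elimination applied to the augmented matrix [A | I] yields the inverse, as the following Python sketch shows (illustrative, not from the notes; nonzero pivots assumed). It reproduces the closed-form (1/24) result above and then recovers {x}.

```python
def gauss_jordan_inverse(A):
    """Invert a square matrix via Gauss-Jordan on [A | I] (nonzero pivots assumed)."""
    n = len(A)
    # Augment with the identity matrix: [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for i in range(n):
        piv = aug[i][i]
        aug[i] = [v / piv for v in aug[i]]          # normalize the pivot row
        for r in range(n):
            if r != i:                              # eliminate column i elsewhere
                f = aug[r][i]
                aug[r] = [v - f * p for v, p in zip(aug[r], aug[i])]
    return [row[n:] for row in aug]                 # right submatrix is A^-1

A = [[4, 2, 0], [2, 8, 2], [0, 2, 4]]
Ainv = gauss_jordan_inverse(A)
# Closed-form result from the notes: (1/24)[[7,-2,1],[-2,4,-2],[1,-2,7]].
expected = [[7/24, -2/24, 1/24], [-2/24, 4/24, -2/24], [1/24, -2/24, 7/24]]

# {x} = [A]^-1 {b}
b = [4, -4, 0]
x = [sum(Ainv[i][k] * b[k] for k in range(3)) for i in range(3)]
print(x)    # approximately [1.5, -1.0, 0.5]
```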