Linear Algebraic Equations: Gauss Elimination (Chapter 9)
Credit: Prof. Lale Yurttas, Chemical Eng., Texas A&M University
Copyright © 2006 The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Solving Systems of Equations
• A linear equation in n variables: a1x1 + a2x2 + … + anxn = b
• For small systems (n ≤ 3), linear algebra provides several tools to solve such systems of linear equations:
  – Graphical method
  – Cramer's rule
  – Method of elimination
• Nowadays, easy access to computers makes the solution of very large sets of linear algebraic equations possible.

Determinants and Cramer's Rule
[A]{x} = {b}, where [A] is the coefficient matrix:

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} =
\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}

Cramer's rule gives each unknown as a ratio of determinants, where D is the determinant of the A matrix:

x_1 = \frac{1}{D}\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}, \qquad
x_2 = \frac{1}{D}\begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}, \qquad
x_3 = \frac{1}{D}\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}

Computing the Determinant
Expanding along the first row with the 2×2 minors D_{11}, D_{12}, D_{13}:

D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
  = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
  - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
  + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
  = a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22})

Gauss Elimination
• Solve Ax = b.
• Consists of two phases:
  – Forward elimination
  – Back substitution
• Forward elimination reduces Ax = b to an upper triangular system Tx = b'.
• Back substitution then solves Tx = b' for x.

\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right]
\;\xrightarrow{\text{forward elimination}}\;
\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a'_{22} & a'_{23} & b'_2 \\ 0 & 0 & a''_{33} & b''_3 \end{array}\right]

Back substitution then works from the bottom row up:

x_3 = \frac{b''_3}{a''_{33}}, \qquad
x_2 = \frac{b'_2 - a'_{23}\,x_3}{a'_{22}}, \qquad
x_1 = \frac{b_1 - a_{13}x_3 - a_{12}x_2}{a_{11}}

Forward Elimination (Gaussian Elimination)
Starting system (multipliers −(3/1) and −(2/1) for rows 2 and 3):
   x1 −  x2 +  x3 = 6
  3x1 + 4x2 + 2x3 = 9
  2x1 +  x2 +  x3 = 7

After eliminating x1 (multiplier −(3/7) for the last row):
   x1 −  x2 +  x3 = 6
        7x2 −  x3 = −9
        3x2 −  x3 = −5

After eliminating x2:
   x1 −  x2 +  x3 = 6
        7x2 −  x3 = −9
             −(4/7)x3 = −(8/7)

Solve using back substitution: x3 = 2, x2 = −1, x1 = 3.

Back Substitution
 1x0 + 1x1 − 1x2 + 4x3 = 8
      −2x1 − 3x2 + 1x3 = 5
             2x2 − 3x3 = 0
                   2x3 = 4   →  x3 = 2

Substituting x3 = 2:
 1x0 + 1x1 − 1x2 = 0
      −2x1 − 3x2 = 3
             2x2 = 6   →  x2 = 3

Substituting x2 = 3:
 1x0 + 1x1 = 3
      −2x1 = 12   →  x1 = −6

Substituting x1 = −6:
 1x0 = 9   →  x0 = 9

Back Substitution (Pseudocode)
for i = n down to 1 do
    /* calculate x[i] */
    x[i] ← b[i] / a[i, i]
    /* substitute x[i] into the equations above it */
    for j = 1 to i − 1 do
        b[j] ← b[j] − x[i] × a[j, i]
    endfor
endfor
Time complexity? O(n²)

Forward Elimination (Gaussian Elimination)
To drive the coefficient a_ji below the pivot a_ii to zero, row i is multiplied by the factor a_ji / a_ii and subtracted from row j:

R_j ← R_j − (a_ji / a_ii) · R_i,   so that   a_ji → 0

The factors a_ji / a_ii are the multipliers written next to the rows in the example that follows.
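As a concrete reference for the two phases just described, here is a minimal Python sketch of naive Gauss elimination (no pivoting yet). The function name gauss_eliminate and its structure are illustrative assumptions rather than code from the slides; it reproduces the worked 3×3 example above.

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination followed by back substitution."""
    n = len(b)
    A = [row[:] for row in A]      # work on copies so the inputs stay untouched
    b = b[:]

    # Forward elimination: zero out the entries below each pivot a[i][i]
    for i in range(n - 1):
        for j in range(i + 1, n):
            factor = A[j][i] / A[i][i]        # the multiplier a_ji / a_ii
            for k in range(i, n):
                A[j][k] -= factor * A[i][k]
            b[j] -= factor * b[i]

    # Back substitution: solve the upper-triangular system from the bottom up
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The example above: x1 - x2 + x3 = 6, 3x1 + 4x2 + 2x3 = 9, 2x1 + x2 + x3 = 7
print(gauss_eliminate([[1, -1, 1], [3, 4, 2], [2, 1, 1]], [6, 9, 7]))
# prints approximately [3.0, -1.0, 2.0]
```

The triple loop in the elimination phase is where the O(n³) operation count discussed below comes from; the back-substitution loop is only O(n²).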
Forward Elimination (example with multipliers)
  4x0 +  6x1 + 2x2 − 2x3 =  8
  2x0        + 5x2 − 2x3 =  4      multiplier: −(2/4)
 −4x0 −  3x1 − 5x2 + 4x3 =  1      multiplier: −(−4/4)
  8x0 + 18x1 − 2x2 + 3x3 = 40      multiplier: −(8/4)

After eliminating x0:
  4x0 + 6x1 + 2x2 − 2x3 =  8
       −3x1 + 4x2 − 1x3 =  0
        3x1 − 3x2 + 2x3 =  9      multiplier: −(3/−3)
        6x1 − 6x2 + 7x3 = 24      multiplier: −(6/−3)

After eliminating x1. What is the multiplier for the last row? It is −(2/1):
  4x0 + 6x1 + 2x2 − 2x3 =  8
       −3x1 + 4x2 − 1x3 =  0
              1x2 + 1x3 =  9
              2x2 + 5x3 = 24

After eliminating x2, the system is upper triangular and ready for back substitution:
  4x0 + 6x1 + 2x2 − 2x3 =  8
       −3x1 + 4x2 − 1x3 =  0
              1x2 + 1x3 =  9
                    3x3 =  6

Operation Count in Forward Elimination (Gaussian Elimination)
Eliminating the first column takes roughly 2n operations per row (a multiplication and a subtraction for each entry and for the right-hand side) for each of the n − 1 rows below the pivot, i.e. about 2n(n − 1) ≈ 2n² operations. Eliminating the second column costs about 2(n − 1)², the third about 2(n − 2)², and so on. Total number of operations for forward elimination:

2n^2 + 2(n-1)^2 + \cdots + 2(2)^2 + 2(1)^2 = 2\sum_{i=1}^{n} i^2 = 2\,\frac{n(n+1)(2n+1)}{6} = O(n^3)

Pitfalls of Elimination Methods
Division by zero
A division by zero can occur during both the elimination and the back-substitution phases. For example:
        2x2 + 3x3 =  8
  4x1 + 6x2 + 7x3 = −3
  2x1 +  x2 + 6x3 =  5

A = \begin{bmatrix} 0 & 2 & 3 \\ 4 & 6 & 7 \\ 2 & 1 & 6 \end{bmatrix}

Since a11 = 0, the very first normalization step divides by zero. Solution: pivoting (to be discussed later).

Pitfalls (cont.)
Round-off errors
• Because computers carry only a limited number of significant figures, round-off errors occur and propagate from one step to the next.
• This problem is especially important when large numbers of equations (100 or more) are to be solved.
• Always use double-precision arithmetic. It is slower, but it is needed for accuracy.
• It is also a good idea to substitute the results back into the original equations and check whether a substantial error has occurred.

Pitfalls (cont.)
Ill-conditioned systems: small changes in the coefficients result in large changes in the solution. Alternatively, a wide range of answers can approximately satisfy the equations. (Well-conditioned systems: small changes in the coefficients result in small changes in the solution.)
Problem: since round-off errors can induce small changes in the coefficients, those changes can lead to large solution errors in ill-conditioned systems.

Example:
  x1    + 2x2 = 10
  1.1x1 + 2x2 = 10.4

By Cramer's rule:

x_1 = \frac{\begin{vmatrix} 10 & 2 \\ 10.4 & 2 \end{vmatrix}}{D} = \frac{2(10) - 2(10.4)}{1(2) - 2(1.1)} = \frac{-0.8}{-0.2} = 4, \qquad x_2 = 3

Changing the coefficient 1.1 to 1.05:
  x1     + 2x2 = 10
  1.05x1 + 2x2 = 10.4

x_1 = \frac{\begin{vmatrix} 10 & 2 \\ 10.4 & 2 \end{vmatrix}}{D} = \frac{2(10) - 2(10.4)}{1(2) - 2(1.05)} = \frac{-0.8}{-0.1} = 8, \qquad x_2 = 1

Ill-conditioned systems (cont.)
• Surprisingly, substituting the erroneous values x1 = 8 and x2 = 1 into the original equations does not clearly reveal that they are wrong:
  x1 + 2x2 = 10       →  8 + 2(1) = 10       (the same!)
  1.1x1 + 2x2 = 10.4  →  1.1(8) + 2(1) = 10.8  (close!)
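A short script makes this sensitivity easy to see. This is a minimal sketch (not code from the slides) that solves both 2×2 systems with Cramer's rule; the helper name cramer2 is an assumption for illustration.

```python
def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system with Cramer's rule; also return the determinant D."""
    D = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / D
    x2 = (a11 * b2 - b1 * a21) / D
    return x1, x2, D

# Original system:    x1 + 2*x2 = 10,  1.1*x1 + 2*x2 = 10.4
print(cramer2(1.0, 2.0, 1.10, 2.0, 10.0, 10.4))  # ~ (4.0, 3.0) with D = -0.2
# Perturbed system:   x1 + 2*x2 = 10,  1.05*x1 + 2*x2 = 10.4
print(cramer2(1.0, 2.0, 1.05, 2.0, 10.0, 10.4))  # ~ (8.0, 1.0) with D = -0.1

# Substituting the second answer (8, 1) back into the ORIGINAL equations:
print(1.0 * 8 + 2.0 * 1)   # 10.0 (matches the right-hand side exactly)
print(1.1 * 8 + 2.0 * 1)   # about 10.8 (close to 10.4, so the error is easy to miss)
```

The determinant D shrinks toward zero as the two lines become nearly parallel, which is exactly the "determinant close to zero" observation made next.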
IMPORTANT OBSERVATION: an ill-conditioned system is one whose determinant is close to zero.
• If the determinant D = 0, the system is singular: there is no unique solution (the equations are either inconsistent or have infinitely many solutions).
• Scaling (multiplying the coefficients by the same value) does not change the equations, but it changes the value of the determinant significantly. However, it does not change the ill-conditioned state of the equations! DANGER: it may hide the fact that the system is ill-conditioned!
• How can we find out whether a system is ill-conditioned or not? Not easy! Luckily, most engineering systems yield well-conditioned results.
• One way to find out: change the coefficients slightly, recompute, and compare the solutions.

Computing the Determinant of a Matrix Using Gaussian Elimination
The determinant of a matrix can be found using Gaussian elimination. Here are the rules that apply (they are also used in the code sketch after the Gauss-Jordan example below):
• Interchanging two rows changes the sign of the determinant.
• Multiplying a row by a scalar multiplies the determinant by that scalar.
• Replacing any row by the sum of that row and a multiple of any other row does NOT change the determinant.
• The determinant of a triangular matrix (upper or lower triangular) is the product of its diagonal elements, i.e. D = t11·t22·t33:

D = \begin{vmatrix} t_{11} & t_{12} & t_{13} \\ 0 & t_{22} & t_{23} \\ 0 & 0 & t_{33} \end{vmatrix} = t_{11}\, t_{22}\, t_{33}

Techniques for Improving Solutions
• Use of more significant figures: double-precision arithmetic.
• Pivoting: if a pivot element is zero, the normalization step leads to division by zero. The same problem may arise when the pivot element is close to zero. The problem can be avoided by:
  – Partial pivoting: switching rows (among the pivot row and the rows below it) so that the largest element in the pivot column becomes the pivot element. Go over the solution in: CHAP9e-Problem-11.doc
  – Complete pivoting: searching for the largest element in all rows and columns and then switching. This is rarely used, because switching columns changes the order of the x's and adds significant complexity and overhead.
• Scaling: used to reduce round-off errors and improve accuracy.

Gauss-Jordan Elimination
• A variation of Gauss elimination: each unknown is eliminated both above and below its pivot, so the coefficient matrix is reduced to a diagonal matrix (the identity matrix once every row is normalized by its pivot).
• The right-hand side column then contains the solution directly; no back substitution is needed.

Gauss-Jordan Elimination: Example

\begin{bmatrix} 1 & 1 & 2 \\ -1 & -2 & 3 \\ 3 & 7 & 4 \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} =
\begin{Bmatrix} 8 \\ 1 \\ 10 \end{Bmatrix}
\qquad \text{Augmented matrix: }
\left[\begin{array}{ccc|c} 1 & 1 & 2 & 8 \\ -1 & -2 & 3 & 1 \\ 3 & 7 & 4 & 10 \end{array}\right]

R2 ← R2 − (−1)R1, R3 ← R3 − (3)R1:
\left[\begin{array}{ccc|c} 1 & 1 & 2 & 8 \\ 0 & -1 & 5 & 9 \\ 0 & 4 & -2 & -14 \end{array}\right]

Scaling R2: R2 ← R2/(−1):
\left[\begin{array}{ccc|c} 1 & 1 & 2 & 8 \\ 0 & 1 & -5 & -9 \\ 0 & 4 & -2 & -14 \end{array}\right]

R1 ← R1 − (1)R2, R3 ← R3 − (4)R2:
\left[\begin{array}{ccc|c} 1 & 0 & 7 & 17 \\ 0 & 1 & -5 & -9 \\ 0 & 0 & 18 & 22 \end{array}\right]

Scaling R3: R3 ← R3/(18):
\left[\begin{array}{ccc|c} 1 & 0 & 7 & 17 \\ 0 & 1 & -5 & -9 \\ 0 & 0 & 1 & 11/9 \end{array}\right]

R1 ← R1 − (7)R3, R2 ← R2 − (−5)R3:
\left[\begin{array}{ccc|c} 1 & 0 & 0 & 8.444 \\ 0 & 1 & 0 & -2.889 \\ 0 & 0 & 1 & 1.222 \end{array}\right]

RESULT: x1 = 8.444, x2 = −2.889, x3 = 1.222
Time complexity? O(n³)
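Tying together the determinant rules and the pivoting idea above, here is a minimal Python sketch (an illustration, not code from the slides; the helper name det_by_elimination and the use of partial pivoting inside it are assumptions): reduce the matrix to upper-triangular form, flip the sign once per row interchange, and multiply the diagonal.

```python
def det_by_elimination(A):
    """Determinant via forward elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]                 # work on a copy
    sign = 1.0
    for i in range(n - 1):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))   # partial pivoting
        if A[p][i] == 0.0:
            return 0.0                        # whole column is zero: singular matrix
        if p != i:
            A[i], A[p] = A[p], A[i]           # row interchange...
            sign = -sign                      # ...flips the sign of the determinant
        for j in range(i + 1, n):
            factor = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= factor * A[i][k]   # row replacement: determinant unchanged
    det = sign
    for i in range(n):
        det *= A[i][i]                        # product of the triangular diagonal
    return det

# 2x2 check against the ill-conditioned example: 1(2) - 2(1.1) = -0.2
print(det_by_elimination([[1.0, 2.0], [1.1, 2.0]]))            # approximately -0.2
# The division-by-zero example from the Pitfalls slide works once rows are swapped:
print(det_by_elimination([[0, 2, 3], [4, 6, 7], [2, 1, 6]]))   # -44.0
```

Only the row interchanges change the determinant here; the row-replacement steps that create the zeros leave it untouched, which is why the product of the final diagonal (with the accumulated sign) equals det(A).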
Systems of Nonlinear Equations
• Locate the roots of a set of simultaneous nonlinear equations:

f_1(x_1, x_2, x_3, \ldots, x_n) = 0
f_2(x_1, x_2, x_3, \ldots, x_n) = 0
\;\;\vdots
f_n(x_1, x_2, x_3, \ldots, x_n) = 0

Example:
x_1^2 + x_1 x_2 = 10   →   f_1(x_1, x_2) = x_1^2 + x_1 x_2 - 10 = 0
x_2 + 3 x_1 x_2^2 = 57   →   f_2(x_1, x_2) = x_2 + 3 x_1 x_2^2 - 57 = 0

• First-order Taylor series expansion of a function of more than one variable:

f_{1,i+1} = f_{1,i} + \frac{\partial f_{1,i}}{\partial x_1}(x_{1,i+1} - x_{1,i}) + \frac{\partial f_{1,i}}{\partial x_2}(x_{2,i+1} - x_{2,i})

f_{2,i+1} = f_{2,i} + \frac{\partial f_{2,i}}{\partial x_1}(x_{1,i+1} - x_{1,i}) + \frac{\partial f_{2,i}}{\partial x_2}(x_{2,i+1} - x_{2,i})

• The root estimate corresponds to the values of x1 and x2 where f_{1,i+1} = 0 and f_{2,i+1} = 0. Setting the expansions to zero and rearranging to solve for x_{1,i+1} and x_{2,i+1}:

\frac{\partial f_{1,i}}{\partial x_1} x_{1,i+1} + \frac{\partial f_{1,i}}{\partial x_2} x_{2,i+1} = -f_{1,i} + x_{1,i}\frac{\partial f_{1,i}}{\partial x_1} + x_{2,i}\frac{\partial f_{1,i}}{\partial x_2}

\frac{\partial f_{2,i}}{\partial x_1} x_{1,i+1} + \frac{\partial f_{2,i}}{\partial x_2} x_{2,i+1} = -f_{2,i} + x_{1,i}\frac{\partial f_{2,i}}{\partial x_1} + x_{2,i}\frac{\partial f_{2,i}}{\partial x_2}

or, in matrix form,

\begin{bmatrix} \dfrac{\partial f_{1,i}}{\partial x_1} & \dfrac{\partial f_{1,i}}{\partial x_2} \\[2mm] \dfrac{\partial f_{2,i}}{\partial x_1} & \dfrac{\partial f_{2,i}}{\partial x_2} \end{bmatrix}
\begin{Bmatrix} x_{1,i+1} \\ x_{2,i+1} \end{Bmatrix}
= -\begin{Bmatrix} f_{1,i} \\ f_{2,i} \end{Bmatrix}
+ \begin{bmatrix} \dfrac{\partial f_{1,i}}{\partial x_1} & \dfrac{\partial f_{1,i}}{\partial x_2} \\[2mm] \dfrac{\partial f_{2,i}}{\partial x_1} & \dfrac{\partial f_{2,i}}{\partial x_2} \end{bmatrix}
\begin{Bmatrix} x_{1,i} \\ x_{2,i} \end{Bmatrix}

• Since x_{1,i}, x_{2,i}, f_{1,i}, and f_{2,i} are all known at the i-th iteration, this is a set of two linear equations in the two unknowns x_{1,i+1} and x_{2,i+1}.
• Any of the techniques covered in this chapter can be used to solve these equations.

General Representation for the Solution of Nonlinear Systems of Equations
For n equations f_1 = 0, …, f_n = 0 in n unknowns, the partial derivatives are collected in the Jacobian matrix:

J = \begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\[2mm]
\dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\[1mm]
\vdots & \vdots & & \vdots \\[1mm]
\dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n}
\end{bmatrix}

With \{F_i\} = \{f_1, f_2, \ldots, f_n\}^T and \{X_i\} = \{x_1, x_2, \ldots, x_n\}^T evaluated at iteration i, the linearized system to solve at each iteration is

[J_i]\{X_{i+1}\} = -\{F_i\} + [J_i]\{X_i\}

Solution of Nonlinear Systems of Equations
Solve this set of linear equations at each iteration:

[J_i]\{X_{i+1}\} = -\{F_i\} + [J_i]\{X_i\}
\quad\Longleftrightarrow\quad
[J_i]\{X_{i+1} - X_i\} = -\{F_i\}

Rearranged in terms of the update \{\Delta X_{i+1}\} = \{X_{i+1}\} - \{X_i\}:

[J_i]\{\Delta X_{i+1}\} = -\{F_i\}, \qquad \{X_{i+1}\} = \{X_i\} + \{\Delta X_{i+1}\}

Notes on the Solution of Nonlinear Systems of Equations
n nonlinear equations f_1(x_1, …, x_n) = 0, …, f_n(x_1, …, x_n) = 0 in n unknowns, with the Jacobian matrix J as defined above.
• The set of linear equations [J_i]{ΔX_{i+1}} = −{F_i} must be solved in each and every iteration of the original nonlinear system of equations.
• For n unknowns, the Jacobian is an n × n matrix (n² entries), and the time it takes to solve the linear system (one iteration of the nonlinear solver) is proportional to O(n³).
• There is no guarantee that the nonlinear iteration converges, and convergence may also be slow.
• In summary, finding the solution set of a nonlinear system of equations is an extremely compute-intensive task.
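To make the iteration concrete, here is a minimal Python sketch of the procedure applied to the two-equation example above (f1 = x1² + x1·x2 − 10, f2 = x2 + 3·x1·x2² − 57). The Jacobian is entered analytically and the 2×2 linear system for each step is solved with Cramer's rule; the starting guess (1.5, 3.5), the fixed iteration count, and the helper name newton_system are assumptions for illustration.

```python
def newton_system(x1, x2, iters=10):
    """Newton-Raphson for the two-equation example, updating (x1, x2) each pass."""
    for _ in range(iters):
        # Function values at the current iterate
        f1 = x1**2 + x1 * x2 - 10.0
        f2 = x2 + 3.0 * x1 * x2**2 - 57.0
        # Analytical Jacobian entries df_i/dx_j
        j11 = 2.0 * x1 + x2
        j12 = x1
        j21 = 3.0 * x2**2
        j22 = 1.0 + 6.0 * x1 * x2
        # Solve [J]{dx} = -{F} for the update; Cramer's rule is enough for a 2x2 system
        D = j11 * j22 - j12 * j21
        dx1 = (-f1 * j22 + f2 * j12) / D
        dx2 = (-f2 * j11 + f1 * j21) / D
        x1, x2 = x1 + dx1, x2 + dx2
    return x1, x2

print(newton_system(1.5, 3.5))   # converges toward (2.0, 3.0)
```

The iterates approach (x1, x2) = (2, 3), which satisfies both original equations exactly; for larger systems the Cramer step would be replaced by Gauss elimination on [J_i]{ΔX_{i+1}} = −{F_i}.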