
MATRIX METHODS

SYSTEMS OF LINEAR EQUATIONS

ENGR 351

Numerical Methods for Engineers

Southern Illinois University Carbondale

College of Engineering

Dr. L.R. Chevalier

Dr. B.A. DeVantier

Copyright © 2000 by L.R. Chevalier and B.A. DeVantier

Permission is granted to students at Southern Illinois University at Carbondale to make one copy of this material for use in the class ENGR 351, Numerical Methods for Engineers. No other permission is granted.

All other rights are reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the copyright owner.


System of Linear Equations

• We have focused our last lectures on finding a value of x that satisfied a single equation f(x) = 0

• Now we will deal with the case of determining the values of $x_1, x_2, \ldots, x_n$ that simultaneously satisfy a set of equations

System of Linear Equations

• Simultaneous equations

$$f_1(x_1, x_2, \ldots, x_n) = 0$$
$$f_2(x_1, x_2, \ldots, x_n) = 0$$
$$\vdots$$
$$f_n(x_1, x_2, \ldots, x_n) = 0$$

• Methods will be for linear equations

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = c_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = c_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = c_n$$

Mathematical Background

Matrix Notation

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

• a horizontal set of elements is called a row

• a vertical set is called a column

• the first subscript refers to the row number

• the second subscript refers to the column number

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

This matrix has m rows and n columns. It has the dimensions m by n (m x n). Note the subscripts.

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}$$

Note the consistent scheme, with subscripts denoting row, column: for example, $a_{23}$ lies in row 2, column 3.

Row vector: m = 1

$$[B] = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}$$

Column vector: n = 1

$$\{C\} = \begin{Bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{Bmatrix}$$

Square matrix: m = n

$$[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

The diagonal consists of the elements $a_{11}, a_{22}, a_{33}$:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Special types of square matrices:

• Symmetric matrix

• Diagonal matrix

• Identity matrix

• Upper triangular matrix

• Lower triangular matrix

• Banded matrix

Symmetric Matrix

$a_{ij} = a_{ji}$ for all i's and j's

$$\begin{bmatrix} 5 & 1 & 2 \\ 1 & 3 & 7 \\ 2 & 7 & 8 \end{bmatrix}$$

Does $a_{23} = a_{32}$? Yes. Check the other elements on your own.

Diagonal Matrix

A square matrix where all elements off the main diagonal are zero

$$\begin{bmatrix} a_{11} & 0 & 0 & 0 \\ 0 & a_{22} & 0 & 0 \\ 0 & 0 & a_{33} & 0 \\ 0 & 0 & 0 & a_{44} \end{bmatrix}$$

Identity Matrix

A diagonal matrix where all elements on the main diagonal are equal to 1

$$[I] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The symbol [I] is used to denote the identity matrix.

Upper Triangular Matrix

All elements below the main diagonal are zero

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$$

Lower Triangular Matrix

All elements above the main diagonal are zero

$$\begin{bmatrix} 5 & 0 & 0 \\ 1 & 3 & 0 \\ 2 & 7 & 8 \end{bmatrix}$$

Banded Matrix

All elements are zero, with the exception of a band centered on the main diagonal

$$\begin{bmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{bmatrix}$$

Matrix Operating Rules

• Addition/subtraction: add/subtract corresponding terms

$$a_{ij} + b_{ij} = c_{ij}$$

• Addition/subtraction are commutative

[A] + [B] = [B] + [A]

• Addition/subtraction are associative

[A] + ([B] + [C]) = ([A] + [B]) + [C]

Matrix Operating Rules

• Multiplication of a matrix [A] by a scalar g is obtained by multiplying every element of [A] by g

$$g[A] = \begin{bmatrix} ga_{11} & ga_{12} & \cdots & ga_{1n} \\ ga_{21} & ga_{22} & \cdots & ga_{2n} \\ \vdots & & & \vdots \\ ga_{m1} & ga_{m2} & \cdots & ga_{mn} \end{bmatrix}$$

Matrix Operating Rules

• The product of two matrices is represented as [C] = [A][B], where

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$

n = column dimension of [A] = row dimension of [B]

Simple way to check whether matrix multiplication is possible:

$$[A]_{m \times n}\,[B]_{n \times k} = [C]_{m \times k}$$

The interior dimensions must be equal; the exterior dimensions conform to the dimensions of the resulting matrix.
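A minimal sketch of this rule in FORTRAN, the language used for the programming examples later in these notes. The program, array values, and names here are illustrative, not from the original slides:

      PROGRAM MATML
C     Sketch: C(2x2) = A(2x3)*B(3x2) using the rule
C     c(i,j) = sum over kk of a(i,kk)*b(kk,j)
      INTEGER I, J, KK
      REAL A(2,3), B(3,2), C(2,2)
C     DATA fills arrays in column-major order:
C     A = [1 2 3; 4 5 6], B = [7 8; 9 10; 11 12]
      DATA A /1.,4., 2.,5., 3.,6./
      DATA B /7.,9.,11., 8.,10.,12./
      DO 10 I = 1, 2
        DO 20 J = 1, 2
          C(I,J) = 0.0
          DO 30 KK = 1, 3
            C(I,J) = C(I,J) + A(I,KK)*B(KK,J)
   30     CONTINUE
   20   CONTINUE
   10 CONTINUE
      PRINT *, C
      END

The interior dimension (3) is what the innermost loop runs over; the exterior dimensions (2 x 2) set the size of the result.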

Matrix multiplication

• If the dimensions are suitable, matrix multiplication is associative

([A][B])[C] = [A]([B][C])

• If the dimensions are suitable, matrix multiplication is distributive

([A] + [B])[C] = [A][C] + [B][C]

• Multiplication is generally not commutative: [A][B] is not equal to [B][A]

Inverse of [A]

$$[A][A]^{-1} = [A]^{-1}[A] = [I]$$

Inverse of [A]

$$[A][A]^{-1} = [A]^{-1}[A] = [I]$$

Transpose of [A]

$$[A]^{t} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}$$

Determinants

Denoted as det A or |A|. For a 2 x 2 matrix:

$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$

Determinants cont.

There are different schemes used to compute the determinant. Consider cofactor expansion, which uses the minors and cofactors of the matrix.

Minor: the minor of an entry $a_{ij}$ is the determinant of the submatrix obtained by deleting the ith row and the jth column.

Cofactor: the cofactor of an entry $a_{ij}$ of an n x n matrix A is the product of $(-1)^{i+j}$ and the minor of $a_{ij}$.

Minor: the minor of an entry $a_{ij}$ is the determinant of the submatrix obtained by deleting the ith row and the jth column.

Example: consider the minor of $a_{32}$ for a 3 x 3 matrix:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

For element $a_{32}$, the ith row is row 3.

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

For element $a_{32}$, the jth column is column 2.

Deleting row 3 and column 2 leaves the 2 x 2 determinant:

$$\text{minor of } a_{32} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \;\rightarrow\; \begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} = a_{11}a_{23} - a_{13}a_{21}$$

Cofactor: $A_{ij}$, the cofactor of an entry $a_{ij}$ of an n x n matrix A, is the product of $(-1)^{i+j}$ and the minor of $a_{ij}$.

i.e. Calculate $A_{31}$ for a 3 x 3 matrix. First calculate the minor of $a_{31}$:

$$\text{minor of } a_{31} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \;\rightarrow\; \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} = a_{12}a_{23} - a_{22}a_{13}$$


Then apply the sign factor $(-1)^{i+j}$:

$$A_{31} = (-1)^{3+1}\left(a_{12}a_{23} - a_{22}a_{13}\right) = a_{12}a_{23} - a_{22}a_{13}$$

Minors and cofactors are used to calculate the determinant of a matrix. Consider an n x n matrix expanded around the ith row:

$$|A| = a_{i1}A_{i1} + a_{i2}A_{i2} + \cdots + a_{in}A_{in} \qquad \text{(for any one value of } i\text{)}$$

Consider expanding around the jth column:

$$|A| = a_{1j}A_{1j} + a_{2j}A_{2j} + \cdots + a_{nj}A_{nj} \qquad \text{(for any one value of } j\text{)}$$

For a 3 x 3 matrix, expanding around the first row:

$$D = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$

Equivalently, in cofactor form:

$$\det A = (-1)^{1+1}a_{11}\left(a_{22}a_{33} - a_{23}a_{32}\right) + (-1)^{1+2}a_{12}\left(a_{21}a_{33} - a_{23}a_{31}\right) + (-1)^{1+3}a_{13}\left(a_{21}a_{32} - a_{22}a_{31}\right)$$

EXAMPLE

Calculate the determinant of the following 3 x 3 matrix. First, calculate it using the 1st row (the way you probably have done it all along). Then try it using the 2nd row.

$$\begin{bmatrix} 1 & 4 & 6 \\ 7 & 3 & 1 \\ 9 & 2 & 5 \end{bmatrix}$$
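For reference, a worked check (assuming the entries as printed above). Expanding around the 1st row:

$$\det A = 1\begin{vmatrix} 3 & 1 \\ 2 & 5 \end{vmatrix} - 4\begin{vmatrix} 7 & 1 \\ 9 & 5 \end{vmatrix} + 6\begin{vmatrix} 7 & 3 \\ 9 & 2 \end{vmatrix} = 1(13) - 4(26) + 6(-13) = -169$$

Expanding around the 2nd row gives the same value:

$$\det A = -7\begin{vmatrix} 4 & 6 \\ 2 & 5 \end{vmatrix} + 3\begin{vmatrix} 1 & 6 \\ 9 & 5 \end{vmatrix} - 1\begin{vmatrix} 1 & 4 \\ 9 & 2 \end{vmatrix} = -7(8) + 3(-49) - (-34) = -169$$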

Properties of Determinants

• det A = det Aᵀ

• If all entries of any row or column are zero, then det A = 0

• If two rows or two columns are identical, then det A = 0

How to represent a system of linear equations as a matrix:

[A]{X} = {C}

where {X} and {C} are both column vectors

$$\begin{aligned} 0.3x_1 + 0.52x_2 + x_3 &= -0.01 \\ 0.5x_1 + x_2 + 1.9x_3 &= 0.67 \\ 0.1x_1 + 0.3x_2 + 0.5x_3 &= -0.44 \end{aligned}$$

In the form [A]{X} = {C}:

$$\begin{bmatrix} 0.3 & 0.52 & 1 \\ 0.5 & 1 & 1.9 \\ 0.1 & 0.3 & 0.5 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} -0.01 \\ 0.67 \\ -0.44 \end{Bmatrix}$$

Practical application

• Consider a problem in structural engineering

• Find the forces and reactions associated with a statically determinate truss

The members form a 30°-60°-90° triangle. One support is a hinge (transmits both vertical and horizontal forces at the surface); the other is a roller (transmits vertical forces only).

First label the nodes: node 1 at the 90° corner, node 2 at the 30° corner, node 3 at the 60° corner.

[Figure: free body diagram of the truss with nodes 1, 2, 3 labeled]

Determine where you are evaluating tension/compression: label the member forces F₁ (between nodes 1 and 2), F₂ (between nodes 2 and 3), and F₃ (between nodes 1 and 3).

[Figure: free body diagram with member forces F₁, F₂, F₃]

Label the forces at the hinge and roller: H₂ and V₂ at the hinge (node 2), V₃ at the roller (node 3). A 1000 kg load acts at node 1.

[Figure: free body diagram with reactions H₂, V₂, V₃ and the 1000 kg load]

At each node, equilibrium requires

$$\sum F_H = 0 \qquad \sum F_V = 0$$

[Figure: free body diagram repeated with the equilibrium conditions]

Node 1

$$\sum F_H = 0 = -F_1\cos 30^\circ + F_3\cos 60^\circ + F_{1,H}$$
$$\sum F_V = 0 = -F_1\sin 30^\circ - F_3\sin 60^\circ + F_{1,V}$$

With no applied horizontal force and the 1000 kg load at node 1:

$$-F_1\cos 30^\circ + F_3\cos 60^\circ = 0$$
$$-F_1\sin 30^\circ - F_3\sin 60^\circ = -1000$$

Node 2

$$\sum F_H = 0 = F_1\cos 30^\circ + F_2 + H_2$$
$$\sum F_V = 0 = F_1\sin 30^\circ + V_2$$

Node 3

$$\sum F_H = 0 = -F_2 - F_3\cos 60^\circ$$
$$\sum F_V = 0 = F_3\sin 60^\circ + V_3$$

Collecting the equilibrium equations:

$$\begin{aligned} -F_1\cos 30^\circ + F_3\cos 60^\circ &= 0 \\ -F_1\sin 30^\circ - F_3\sin 60^\circ &= -1000 \\ F_1\cos 30^\circ + F_2 + H_2 &= 0 \\ F_1\sin 30^\circ + V_2 &= 0 \\ -F_2 - F_3\cos 60^\circ &= 0 \\ F_3\sin 60^\circ + V_3 &= 0 \end{aligned}$$

SIX EQUATIONS

SIX UNKNOWNS

Do some bookkeeping: tabulate the coefficients, one column per unknown.

$$\begin{array}{c|cccccc|c} & F_1 & F_2 & F_3 & H_2 & V_2 & V_3 & \\ \hline 1 & -\cos 30^\circ & 0 & \cos 60^\circ & 0 & 0 & 0 & 0 \\ 2 & -\sin 30^\circ & 0 & -\sin 60^\circ & 0 & 0 & 0 & -1000 \\ 3 & \cos 30^\circ & 1 & 0 & 1 & 0 & 0 & 0 \\ 4 & \sin 30^\circ & 0 & 0 & 0 & 1 & 0 & 0 \\ 5 & 0 & -1 & -\cos 60^\circ & 0 & 0 & 0 & 0 \\ 6 & 0 & 0 & \sin 60^\circ & 0 & 0 & 1 & 0 \end{array}$$

This is the basis for your matrices and the equation [A]{x} = {c}:

$$\begin{bmatrix} -\cos 30^\circ & 0 & \cos 60^\circ & 0 & 0 & 0 \\ -\sin 30^\circ & 0 & -\sin 60^\circ & 0 & 0 & 0 \\ \cos 30^\circ & 1 & 0 & 1 & 0 & 0 \\ \sin 30^\circ & 0 & 0 & 0 & 1 & 0 \\ 0 & -1 & -\cos 60^\circ & 0 & 0 & 0 \\ 0 & 0 & \sin 60^\circ & 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} F_1 \\ F_2 \\ F_3 \\ H_2 \\ V_2 \\ V_3 \end{Bmatrix} = \begin{Bmatrix} 0 \\ -1000 \\ 0 \\ 0 \\ 0 \\ 0 \end{Bmatrix}$$

Matrix Methods

• Gauss elimination

• Matrix inversion

• Gauss Seidel

• LU decomposition

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Systems of Linear Algebraic Equations

Specific Study Objectives

• Understand the graphic interpretation of ill-conditioned systems and how it relates to the determinant

• Be familiar with terminology: forward elimination, back substitution, pivot equations and pivot coefficient

Specific Study Objectives

• Know the fundamental difference between Gauss elimination and the Gauss-Jordan method and which is more efficient

• Apply matrix inversion to evaluate stimulus-response computations in engineering

Specific Study Objectives

• Understand why the Gauss-Seidel method is particularly well-suited for large sparse systems of equations

• Know how to assess the diagonal dominance of a system of equations and how it relates to whether the system can be solved with the Gauss-Seidel method

Specific Study Objectives

• Understand the rationale behind relaxation and how to apply this technique

• Understand that banded and symmetric systems can be decomposed and solved efficiently

Graphical Method

2 equations, 2 unknowns:

$$a_{11}x_1 + a_{12}x_2 = c_1$$
$$a_{21}x_1 + a_{22}x_2 = c_2$$

Solve each equation for $x_2$ and plot both lines; the intersection $(x_1, x_2)$ is the solution:

$$x_2 = -\left(\frac{a_{11}}{a_{12}}\right)x_1 + \frac{c_1}{a_{12}}$$
$$x_2 = -\left(\frac{a_{21}}{a_{22}}\right)x_1 + \frac{c_2}{a_{22}}$$

Example:

$$3x_1 + 2x_2 = 18$$
$$-x_1 + 2x_2 = 2$$

Solving each for $x_2$:

$$x_2 = -\frac{3}{2}x_1 + 9 \qquad x_2 = \frac{1}{2}x_1 + 1$$

[Figure: the two lines plotted in the $(x_1, x_2)$ plane, intersecting at (4, 3)]

Check: 3(4) + 2(3) = 12 + 6 = 18 ✓

Special Cases

• No solution

• Infinite solutions

• Ill-conditioned

[Figure: two lines in the $(x_1, x_2)$ plane and their intersection point $(x_1, x_2)$]

a) No solution: the lines have the same slope and never intersect.

b) Infinite solutions: the lines coincide, e.g.

$$-\tfrac{1}{2}x_1 + x_2 = 1 \qquad -x_1 + 2x_2 = 2$$

c) Ill-conditioned: the slopes are so close that the point of intersection is difficult to detect visually.

Let's consider how we know if the system is ill-conditioned. Start by considering systems where the slopes are identical.

• If the determinant is zero, the slopes are identical:

$$a_{11}x_1 + a_{12}x_2 = c_1$$
$$a_{21}x_1 + a_{22}x_2 = c_2$$

Rearrange these equations so that we have an alternative version in the form of a straight line, i.e. $x_2 = (\text{slope})\,x_1 + \text{intercept}$.

$$x_2 = -\frac{a_{11}}{a_{12}}x_1 + \frac{c_1}{a_{12}} \qquad x_2 = -\frac{a_{21}}{a_{22}}x_1 + \frac{c_2}{a_{22}}$$

If the slopes are nearly equal (ill-conditioned):

$$\frac{a_{11}}{a_{12}} \approx \frac{a_{21}}{a_{22}} \quad\Rightarrow\quad a_{11}a_{22} \approx a_{21}a_{12} \quad\Rightarrow\quad a_{11}a_{22} - a_{21}a_{12} \approx 0$$

Isn't this the determinant?

$$\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \det A$$

If the determinant is zero, the slopes are equal. This can mean:

- no solution

- an infinite number of solutions

If the determinant is close to zero, the system is ill-conditioned.

So it seems that we should check the determinant of a system before any further calculations are done. Let's try an example.

Example

Determine whether the following system is ill-conditioned.

$$\begin{bmatrix} 37.2 & 4.7 \\ 19.2 & 2.5 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 22 \\ 12 \end{Bmatrix}$$
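A worked check, assuming the coefficients as reconstructed above. The raw determinant is small relative to the size of the elements:

$$\det A = 37.2(2.5) - 4.7(19.2) = 93.0 - 90.24 = 2.76$$

Scaling each row so its largest element is 1 makes this clearer:

$$\begin{vmatrix} 1 & 0.1263 \\ 1 & 0.1302 \end{vmatrix} = 0.1302 - 0.1263 \approx 0.0039$$

The scaled determinant is nearly zero, so the slopes are nearly identical and the system is ill-conditioned.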

Cramer's Rule

• Not efficient for solving large numbers of linear equations

• Useful for explaining some inherent problems associated with solving linear equations

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}$$

Cramer's Rule

To solve for $x_i$, place {b} in the ith column of the determinant in the numerator:

$$x_1 = \frac{1}{|A|}\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix} \qquad x_2 = \frac{1}{|A|}\begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}$$

Cramer's Rule

Likewise for the third unknown:

$$x_3 = \frac{1}{|A|}\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}$$

Cramer's Rule

$$x_1 = \frac{1}{|A|}\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix} \qquad x_2 = \frac{1}{|A|}\begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix} \qquad x_3 = \frac{1}{|A|}\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}$$

To solve for $x_i$, place {b} in the ith column.

EXAMPLE

Use of Cramer's Rule

$$2x_1 + 3x_2 = 5$$
$$-x_1 + x_2 = -5$$

$$\begin{bmatrix} 2 & 3 \\ -1 & 1 \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 5 \\ -5 \end{Bmatrix}$$
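Worked solution, assuming the signs as reconstructed above:

$$|A| = 2(1) - 3(-1) = 5$$

$$x_1 = \frac{1}{5}\begin{vmatrix} 5 & 3 \\ -5 & 1 \end{vmatrix} = \frac{5 + 15}{5} = 4 \qquad x_2 = \frac{1}{5}\begin{vmatrix} 2 & 5 \\ -1 & -5 \end{vmatrix} = \frac{-10 + 5}{5} = -1$$

Check: 2(4) + 3(-1) = 5 ✓ and -(4) + (-1) = -5 ✓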

Elimination of Unknowns (algebraic approach)

$$a_{11}x_1 + a_{12}x_2 = c_1 \qquad (\times\, a_{21})$$
$$a_{21}x_1 + a_{22}x_2 = c_2 \qquad (\times\, a_{11})$$

$$a_{21}a_{11}x_1 + a_{21}a_{12}x_2 = a_{21}c_1$$
$$a_{11}a_{21}x_1 + a_{11}a_{22}x_2 = a_{11}c_2$$

SUBTRACT

Elimination of Unknowns (algebraic approach)

$$a_{21}a_{11}x_1 + a_{21}a_{12}x_2 = a_{21}c_1$$
$$a_{11}a_{21}x_1 + a_{11}a_{22}x_2 = a_{11}c_2$$

SUBTRACT:

$$\left(a_{21}a_{12} - a_{11}a_{22}\right)x_2 = a_{21}c_1 - a_{11}c_2$$

$$x_2 = \frac{a_{21}c_1 - a_{11}c_2}{a_{21}a_{12} - a_{11}a_{22}} \qquad x_1 = \frac{a_{22}c_1 - a_{12}c_2}{a_{11}a_{22} - a_{12}a_{21}}$$

NOTE: same result as Cramer's Rule

Gauss Elimination

• One of the earliest methods developed for solving simultaneous equations

• Important algorithm in use today

• Involves combining equations in order to eliminate unknowns

Blind (Naive) Gauss Elimination

• Technique for larger matrices

• Same principles of elimination: manipulate equations to eliminate an unknown from an equation

• Solve directly, then back-substitute into one of the original equations

Two Phases of Gauss Elimination

Forward elimination reduces the augmented system to upper triangular form:

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ a_{21} & a_{22} & a_{23} & c_2 \\ a_{31} & a_{32} & a_{33} & c_3 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ 0 & a'_{22} & a'_{23} & c'_2 \\ 0 & 0 & a''_{33} & c''_3 \end{array}\right]$$

Note: the prime indicates the number of times the element has changed from its original value.

Two Phases of Gauss Elimination

Back substitution then solves for the unknowns from the bottom up:

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ 0 & a'_{22} & a'_{23} & c'_2 \\ 0 & 0 & a''_{33} & c''_3 \end{array}\right] \qquad \begin{aligned} x_3 &= \frac{c''_3}{a''_{33}} \\ x_2 &= \frac{c'_2 - a'_{23}x_3}{a'_{22}} \\ x_1 &= \frac{c_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \end{aligned}$$

EXAMPLE

$$2x_1 + x_2 + 3x_3 = 1$$
$$4x_1 + 4x_2 + 7x_3 = 1$$
$$2x_1 + 5x_2 + 9x_3 = 3$$

Evaluation of Pseudocode for Naïve Elimination

DOFOR k = 1 to n-1
    DOFOR i = k+1 to n
        factor = a(i,k) / a(k,k)
        DOFOR j = k+1 to n
            a(i,j) = a(i,j) - factor x a(k,j)
        ENDDO
        c(i) = c(i) - factor x c(k)
    ENDDO
ENDDO

Let's consider the translation of this in the next few overheads, using a 3 x 3 matrix as an example.

Consider again the example system in augmented form:

$$\left[\begin{array}{ccc|c} 2 & 1 & 3 & 1 \\ 4 & 4 & 7 & 1 \\ 2 & 5 & 9 & 3 \end{array}\right]$$

First, let's develop the code to read the elements of the A and C matrices into a FORTRAN program. Use arrays.

Let's also keep our convention of $a_{ij}$: make the array A a double (two-dimensional) array, and dimension C as a single array:

      DIMENSION A(50,50), C(50)

Before programming any further, practice reading the array from an ASCII file and printing the resulting array on the screen.

Now translate the forward-elimination pseudocode into FORTRAN, starting from the skeleton:

      DO 10 K=1,N-1
      ...
   10 CONTINUE

TRY TO DO THIS IN CLASS FIRST

For an n x n matrix:

      DO 10 K=1,N-1
        DO 20 I=K+1,N
          FACTOR = A(I,K)/A(K,K)
          DO 30 J=K+1,N
            A(I,J) = A(I,J) - FACTOR*A(K,J)
   30     CONTINUE
          C(I) = C(I) - FACTOR*C(K)
   20   CONTINUE
   10 CONTINUE
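Forward elimination leaves the system upper triangular; a back-substitution pass then recovers {x}. A minimal sketch in the same style (the array X and the variable SUM are illustrative names, not from the original notes):

      X(N) = C(N)/A(N,N)
      DO 40 I = N-1, 1, -1
        SUM = C(I)
        DO 50 J = I+1, N
          SUM = SUM - A(I,J)*X(J)
   50   CONTINUE
        X(I) = SUM/A(I,I)
   40 CONTINUE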

Pitfalls of the Elimination Method

• Division by zero

• Round-off errors: the magnitude of the pivot element is small compared to the other elements

• Ill-conditioned systems

Division by Zero

• When we normalize, i.e. $a_{12}/a_{11}$, we need to make sure we are not dividing by zero

• This may also happen if the coefficient is very close to zero

$$\begin{aligned} 2x_2 + 3x_3 &= 8 \\ 4x_1 + 6x_2 + 7x_3 &= -3 \\ 2x_1 + x_2 + 6x_3 &= 5 \end{aligned}$$

Techniques for Improving the Solution

• Use of more significant figures

• Pivoting

• Scaling

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}$$

Use of more significant figures

• Simplest remedy for ill-conditioning

• Extending the precision carries computational overhead and memory overhead

Pivoting

• Problems occur when the pivot element is zero: division by zero

• Problems also occur when the pivot element is smaller in magnitude compared to the other elements (i.e. round-off errors)

• Prior to normalizing, determine the largest available coefficient

Pivoting

• Partial pivoting: rows are switched so that the largest element is the pivot element

• Complete pivoting: columns as well as rows are searched for the largest element and switched; rarely used, because switching columns changes the order of the x's, adding unjustified complexity to the computer program

Division by Zero - Solution

Pivoting has been developed to partially avoid these problems. Switch the rows so that the first pivot is nonzero:

$$\begin{aligned} 2x_2 + 3x_3 &= 8 \\ 4x_1 + 6x_2 + 7x_3 &= -3 \\ 2x_1 + x_2 + 6x_3 &= 5 \end{aligned} \qquad\Rightarrow\qquad \begin{aligned} 4x_1 + 6x_2 + 7x_3 &= -3 \\ 2x_2 + 3x_3 &= 8 \\ 2x_1 + x_2 + 6x_3 &= 5 \end{aligned}$$

Scaling

• Minimizes round-off errors for cases where some of the equations in a system have much larger coefficients than others

• In engineering practice, this is often due to the widely different units used in the development of the simultaneous equations

• As long as each equation is consistent, the system will be technically correct and solvable

Scaling

Divide each equation through by its largest coefficient, so that the largest coefficient in each row becomes 1. For a 2 x 2 system:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 &= c_1 \\ a_{21}x_1 + a_{22}x_2 &= c_2 \end{aligned} \qquad\Rightarrow\qquad \text{divide row } i \text{ through by } \max_j |a_{ij}|$$

EXAMPLE

(solution in notes)

Use Gauss elimination to solve the following set of linear equations:

$$\begin{aligned} 3x_2 + 13x_3 &= -50 \\ 2x_1 + 6x_2 + x_3 &= 45 \\ 4x_1 + 8x_3 &= 4 \end{aligned}$$

SOLUTION

$$\begin{aligned} 3x_2 + 13x_3 &= -50 \\ 2x_1 + 6x_2 + x_3 &= 45 \\ 4x_1 + 8x_3 &= 4 \end{aligned}$$

First write in matrix form, employing the shorthand presented in class:

$$\left[\begin{array}{ccc|c} 0 & 3 & 13 & -50 \\ 2 & 6 & 1 & 45 \\ 4 & 0 & 8 & 4 \end{array}\right]$$

We will clearly run into problems of division by zero. Use partial pivoting.

Pivot with the equation that has the largest coefficient in the first column ($a_{n1}$):

$$\left[\begin{array}{ccc|c} 0 & 3 & 13 & -50 \\ 2 & 6 & 1 & 45 \\ 4 & 0 & 8 & 4 \end{array}\right]$$

Switching rows 1 and 3:

$$\left[\begin{array}{ccc|c} 0 & 3 & 13 & -50 \\ 2 & 6 & 1 & 45 \\ 4 & 0 & 8 & 4 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|c} 4 & 0 & 8 & 4 \\ 2 & 6 & 1 & 45 \\ 0 & 3 & 13 & -50 \end{array}\right]$$

Begin developing the upper triangular matrix: subtract (2/4) x row 1 from row 2.

$$\left[\begin{array}{ccc|c} 4 & 0 & 8 & 4 \\ 2 & 6 & 1 & 45 \\ 0 & 3 & 13 & -50 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|c} 4 & 0 & 8 & 4 \\ 0 & 6 & -3 & 43 \\ 0 & 3 & 13 & -50 \end{array}\right]$$

Next subtract (3/6) x row 2 from row 3, then back-substitute:

$$\left[\begin{array}{ccc|c} 4 & 0 & 8 & 4 \\ 0 & 6 & -3 & 43 \\ 0 & 3 & 13 & -50 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|c} 4 & 0 & 8 & 4 \\ 0 & 6 & -3 & 43 \\ 0 & 0 & 14.5 & -71.5 \end{array}\right]$$

$$x_3 = \frac{-71.5}{14.5} \approx -4.931 \qquad x_2 = \frac{43 + 3x_3}{6} \approx 4.701 \qquad x_1 = \frac{4 - 8x_3}{4} \approx 10.862$$

CHECK: $3(4.701) + 13(-4.931) \approx -50$ ✓ okay

...end of problem

GAUSS-JORDAN

• Variation of Gauss elimination

• The primary motive for introducing this method is that it provides a simple and convenient method for computing the matrix inverse

• When an unknown is eliminated, it is eliminated from all other equations, rather than just the subsequent ones

GAUSS-JORDAN

• All rows are normalized by dividing them by their pivot elements

• The elimination step results in an identity matrix rather than an upper triangular (UT) matrix

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} \quad\text{vs.}\quad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Graphical depiction of Gauss-Jordan

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & c_1 \\ a_{21} & a_{22} & a_{23} & c_2 \\ a_{31} & a_{32} & a_{33} & c_3 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|c} 1 & 0 & 0 & c'_1 \\ 0 & 1 & 0 & c'_2 \\ 0 & 0 & 1 & c'_3 \end{array}\right]$$

$$x_1 = c'_1 \qquad x_2 = c'_2 \qquad x_3 = c'_3$$

Matrix Inversion

• [A][A]⁻¹ = [A]⁻¹[A] = [I]

• One application of the inverse is to solve several systems differing only by {c}:

[A]{x} = {c}

[A]⁻¹[A]{x} = [A]⁻¹{c}

[I]{x} = {x} = [A]⁻¹{c}

• One quick method to compute the inverse is to augment [A] with [I] instead of {c}
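A minimal FORTRAN sketch of that augmentation idea: [A] is augmented with [I] and reduced by Gauss-Jordan, leaving [A]⁻¹ in the right half. This is an illustrative implementation without pivoting, so it assumes the pivots are nonzero; the names (AUG, AINV) are ours:

      SUBROUTINE INVERT(A, AINV, N)
C     Gauss-Jordan inversion of A (N x N), no pivoting
      INTEGER N, I, J, K
      REAL A(N,N), AINV(N,N), AUG(50,100), PIV, FACTOR
C     Build the augmented matrix [A | I]
      DO 10 I = 1, N
        DO 11 J = 1, N
          AUG(I,J)   = A(I,J)
          AUG(I,J+N) = 0.0
   11   CONTINUE
        AUG(I,I+N) = 1.0
   10 CONTINUE
C     Normalize each pivot row, then eliminate the pivot
C     column from ALL other rows (the Gauss-Jordan step)
      DO 20 K = 1, N
        PIV = AUG(K,K)
        DO 21 J = 1, 2*N
          AUG(K,J) = AUG(K,J)/PIV
   21   CONTINUE
        DO 22 I = 1, N
          IF (I .NE. K) THEN
            FACTOR = AUG(I,K)
            DO 23 J = 1, 2*N
              AUG(I,J) = AUG(I,J) - FACTOR*AUG(K,J)
   23       CONTINUE
          END IF
   22   CONTINUE
   20 CONTINUE
C     The right half now holds the inverse
      DO 30 I = 1, N
        DO 31 J = 1, N
          AINV(I,J) = AUG(I,J+N)
   31   CONTINUE
   30 CONTINUE
      RETURN
      END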

Graphical Depiction of the Gauss-Jordan Method with Matrix Inversion

$$\left[\begin{array}{ccc|ccc} a_{11} & a_{12} & a_{13} & 1 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 1 & 0 \\ a_{31} & a_{32} & a_{33} & 0 & 0 & 1 \end{array}\right] \;\Rightarrow\; \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & a^{-1}_{11} & a^{-1}_{12} & a^{-1}_{13} \\ 0 & 1 & 0 & a^{-1}_{21} & a^{-1}_{22} & a^{-1}_{23} \\ 0 & 0 & 1 & a^{-1}_{31} & a^{-1}_{32} & a^{-1}_{33} \end{array}\right]$$

Note: the superscript "-1" denotes that the original values have been converted to the matrix inverse, not $1/a_{ij}$.

Stimulus-Response Computations

• Conservation laws: mass, force, heat, momentum

• We considered the conservation of force in the earlier example of a truss

Stimulus-Response Computations

• [A]{x} = {c}

• [interactions]{response} = {stimuli}

• Superposition: if a system is subject to several different stimuli, the response to each can be computed individually and the results summed to obtain the total response

• Proportionality: multiplying the stimuli by a quantity results in the response to those stimuli being multiplied by the same quantity

• These concepts are inherent in the scaling of terms during the inversion of the matrix

Error Analysis and System Condition

• Scale the matrix of coefficients [A] so that the largest element in each row is 1. If there are elements of [A]⁻¹ that are several orders of magnitude greater than one, it is likely that the system is ill-conditioned.

• Multiply the inverse by the original coefficient matrix. If the result is not close to the identity matrix, the system is ill-conditioned.

• Invert the inverted matrix. If the result is not close to the original coefficient matrix, the system is ill-conditioned.

To further study the concepts of ill-conditioning, consider the norm and the matrix condition number:

• norm: provides a measure of the size or length of vectors and matrices

• Cond [A] >> 1 suggests that the system is ill-conditioned

LU Decomposition Methods

Chapter 10

• Elimination methods:

- Gauss elimination

- Gauss-Jordan

- LU decomposition methods

Naive LU Decomposition

• [A]{x} = {c}

• Suppose this can be rearranged as an upper triangular matrix with 1's on the diagonal: [U]{x} = {d}

• [A]{x} - {c} = 0 and [U]{x} - {d} = 0

• Assume that a lower triangular matrix exists that has the property

[L]{[U]{x} - {d}} = [A]{x} - {c}

Naive LU Decomposition

• [L]{[U]{x} - {d}} = [A]{x} - {c}

• Then, from the rules of matrix multiplication:

• [L][U] = [A]

• [L]{d} = {c}

• [L][U] = [A] is referred to as the LU decomposition of [A]. After it is accomplished, solutions can be obtained very efficiently by a two-step substitution procedure

Consider how Gauss elimination can be formulated as an LU decomposition.

[U] is a direct product of the forward elimination step, if each row is scaled by its diagonal element:

$$[U] = \begin{bmatrix} 1 & a'_{12} & a'_{13} \\ 0 & 1 & a'_{23} \\ 0 & 0 & 1 \end{bmatrix}$$

Although not as apparent, the matrix [L] is also produced during this step. This can be readily illustrated for a three-equation system:

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \end{Bmatrix}$$

The first step is to multiply row 1 by the factor

$$f_{21} = \frac{a_{21}}{a_{11}}$$

Subtracting the result from the second row eliminates $a_{21}$.

Similarly, row 1 is multiplied by

$$f_{31} = \frac{a_{31}}{a_{11}}$$

and the result is subtracted from the third row to eliminate $a_{31}$. The final step for a 3 x 3 system is to multiply the modified second row by

$$f_{32} = \frac{a'_{32}}{a'_{22}}$$

and subtract the result from the third row to eliminate $a'_{32}$.

The values $f_{21}$, $f_{31}$, $f_{32}$ are in fact the elements of an [L] matrix:

$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ f_{21} & 1 & 0 \\ f_{31} & f_{32} & 1 \end{bmatrix}$$

CONSIDER HOW THIS RELATES TO THE LU DECOMPOSITION METHOD TO SOLVE FOR {X}

$$[A]\{x\} = \{c\} \;\xrightarrow{\text{decompose}}\; [L][U] \qquad [L]\{d\} = \{c\} \;\rightarrow\; \{d\} \qquad [U]\{x\} = \{d\} \;\rightarrow\; \{x\}$$

Crout Decomposition

• The Gauss elimination method involves two major steps: forward elimination and back substitution

• Efforts at improvement focused on the development of improved elimination methods

• One such method is Crout decomposition

Crout Decomposition

Represents an efficient algorithm for decomposing [A] into [L] and [U]:

$$\begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Recall the rules of matrix multiplication. The first step is to multiply the rows of [L] by the first column of [U]:

$$a_{11} = l_{11}(1) + 0(0) + 0(0) = l_{11} \qquad a_{21} = l_{21} \qquad a_{31} = l_{31}$$

Thus the first column of [A] is the first column of [L].

Next we multiply the first row of [L] by the columns of [U] to get:

$$l_{11} = a_{11} \qquad l_{11}u_{12} = a_{12} \qquad l_{11}u_{13} = a_{13}$$

$$u_{12} = \frac{a_{12}}{l_{11}} \qquad u_{13} = \frac{a_{13}}{l_{11}}$$

Once the first row of [U] is established, the operation can be represented concisely:

$$u_{1j} = \frac{a_{1j}}{l_{11}} \qquad \text{for } j = 2, 3, \ldots, n$$

[Figure: schematic depicting Crout decomposition]

The complete Crout recursion:

$$l_{i1} = a_{i1} \qquad \text{for } i = 1, 2, \ldots, n$$

$$u_{1j} = \frac{a_{1j}}{l_{11}} \qquad \text{for } j = 2, 3, \ldots, n$$

For $j = 2, 3, \ldots, n-1$:

$$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj} \qquad \text{for } i = j, j+1, \ldots, n$$

$$u_{jk} = \frac{a_{jk} - \sum_{i=1}^{j-1} l_{ji}u_{ik}}{l_{jj}} \qquad \text{for } k = j+1, j+2, \ldots, n$$

$$l_{nn} = a_{nn} - \sum_{k=1}^{n-1} l_{nk}u_{kn}$$

The Substitution Step

• [L]{[U]{x} - {d}} = [A]{x} - {c}

• [L][U] = [A]

• [L]{d} = {c}

• [U]{x} = {d}

• Recall our earlier graphical depiction of the LU decomposition method

$$[A]\{x\} = \{c\} \;\xrightarrow{\text{decompose}}\; [L][U] \qquad [L]\{d\} = \{c\} \;\rightarrow\; \{d\} \qquad [U]\{x\} = \{d\} \;\rightarrow\; \{x\}$$

Forward substitution:

$$d_1 = \frac{c_1}{l_{11}} \qquad d_i = \frac{c_i - \sum_{j=1}^{i-1} l_{ij}d_j}{l_{ii}} \qquad \text{for } i = 2, 3, \ldots, n$$

Back substitution:

$$x_n = d_n \qquad x_i = d_i - \sum_{j=i+1}^{n} u_{ij}x_j \qquad \text{for } i = n-1, n-2, \ldots, 1$$
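A minimal FORTRAN sketch of the Crout recursion and the two substitution passes above. It uses no pivoting, so nonzero $l_{jj}$ is assumed, and the array names are ours:

      SUBROUTINE CROUT(A, X, C, N)
C     Decompose A into [L][U] in place (Crout: U has a
C     unit diagonal), then solve [L]{d}={c}, [U]{x}={d}
      INTEGER N, I, J, K
      REAL A(N,N), X(N), C(N), D(50), SUM
C     L overwrites the lower part of A; U (without its
C     unit diagonal) overwrites the upper part
      DO 10 J = 1, N
        DO 11 I = J, N
          SUM = A(I,J)
          DO 12 K = 1, J-1
            SUM = SUM - A(I,K)*A(K,J)
   12     CONTINUE
          A(I,J) = SUM
   11   CONTINUE
        DO 13 K = J+1, N
          SUM = A(J,K)
          DO 14 I = 1, J-1
            SUM = SUM - A(J,I)*A(I,K)
   14     CONTINUE
          A(J,K) = SUM/A(J,J)
   13   CONTINUE
   10 CONTINUE
C     Forward substitution: [L]{d} = {c}
      DO 20 I = 1, N
        SUM = C(I)
        DO 21 J = 1, I-1
          SUM = SUM - A(I,J)*D(J)
   21   CONTINUE
        D(I) = SUM/A(I,I)
   20 CONTINUE
C     Back substitution: [U]{x} = {d}, unit diagonal
      DO 30 I = N, 1, -1
        SUM = D(I)
        DO 31 J = I+1, N
          SUM = SUM - A(I,J)*X(J)
   31   CONTINUE
        X(I) = SUM
   30 CONTINUE
      RETURN
      END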

Parameters used to quantify the dimensions of a banded system:

BW = band width

HBW = half band width

[Figure: banded matrix showing the band of width BW centered on the diagonal]

Thomas Algorithm

• As with conventional LU decomposition methods, the algorithm consists of three steps: decomposition, forward substitution, and back substitution

• We want a scheme that reduces the large, inefficient use of storage involved with banded matrices

• Consider the following tridiagonal system

$$\begin{bmatrix} a_{11} & a_{12} & & & \\ a_{21} & a_{22} & a_{23} & & \\ & a_{32} & a_{33} & a_{34} & \\ & & \ddots & \ddots & \ddots \\ & & & a_{n,n-1} & a_{nn} \end{bmatrix} \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{Bmatrix} = \begin{Bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_n \end{Bmatrix}$$

Note how this tridiagonal system requires the storage of a large number of zero values.

Rename the coefficients: the a's become e (subdiagonal), f (diagonal), and g (superdiagonal); in addition, the c's become r:

$$\begin{bmatrix} f_1 & g_1 & & & \\ e_2 & f_2 & g_2 & & \\ & e_3 & f_3 & g_3 & \\ & & \ddots & \ddots & \ddots \\ & & & e_n & f_n \end{bmatrix}$$

The nonzero coefficients can then be stored as three vectors:

$$\{e\} = \begin{Bmatrix} 0 \\ e_2 \\ e_3 \\ \vdots \\ e_n \end{Bmatrix} \qquad \{f\} = \begin{Bmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_n \end{Bmatrix} \qquad \{g\} = \begin{Bmatrix} g_1 \\ g_2 \\ g_3 \\ \vdots \\ 0 \end{Bmatrix}$$

Storage can be accomplished either as the three vectors (e, f, g) or as a compact n x 3 matrix [B]:

$$[B] = \begin{bmatrix} 0 & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \\ \vdots & \vdots & \vdots \\ b_{n1} & b_{n2} & 0 \end{bmatrix}$$

Storage is even further reduced if the matrix is banded and symmetric: only the elements on the diagonal and in the upper half need be stored.
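A minimal FORTRAN sketch of the Thomas algorithm operating directly on the (e, f, g) vectors. It is a sketch under the usual assumption that no pivoting is needed; R holds the right-hand side and is overwritten during the forward pass:

      SUBROUTINE THOMAS(E, F, G, R, X, N)
C     Solve a tridiagonal system stored as three vectors:
C     E = subdiagonal, F = diagonal, G = superdiagonal
      INTEGER N, K
      REAL E(N), F(N), G(N), R(N), X(N), FACTOR
C     Decomposition and forward substitution
      DO 10 K = 2, N
        FACTOR = E(K)/F(K-1)
        F(K) = F(K) - FACTOR*G(K-1)
        R(K) = R(K) - FACTOR*R(K-1)
   10 CONTINUE
C     Back substitution
      X(N) = R(N)/F(N)
      DO 20 K = N-1, 1, -1
        X(K) = (R(K) - G(K)*X(K+1))/F(K)
   20 CONTINUE
      RETURN
      END

Only O(n) storage and O(n) operations are needed, instead of the O(n²) storage of the full matrix.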

Gauss-Seidel Method

• An iterative approach

• Continue until we converge within some prespecified tolerance of error

• Round-off is no longer an issue, since you control the level of error that is acceptable

• Fundamentally different from Gauss elimination: this is an approximate, iterative method, particularly good for large numbers of equations

Gauss-Seidel Method

• If the diagonal elements are all nonzero, the first equation can be solved for $x_1$:

$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n}{a_{11}}$$

• Solve the second equation for $x_2$, etc.

To assure that you understand this, write the equation for $x_2$.

$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \cdots - a_{1n}x_n}{a_{11}}$$

$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \cdots - a_{2n}x_n}{a_{22}}$$

$$x_3 = \frac{c_3 - a_{31}x_1 - a_{32}x_2 - \cdots - a_{3n}x_n}{a_{33}}$$

$$\vdots$$

$$x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \cdots - a_{n,n-1}x_{n-1}}{a_{nn}}$$

Gauss-Seidel Method

• Start the solution process by guessing values of x

• A simple way to obtain initial guesses is to assume that they are all zero

• Calculate new values of $x_i$, starting with $x_1 = c_1/a_{11}$

• Progressively substitute through the equations

• Repeat until the tolerance is reached

In general:

$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \qquad x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3}{a_{22}} \qquad x_3 = \frac{c_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}$$

For the first iteration, starting from zeros:

$$x'_1 = \frac{c_1 - a_{12}(0) - a_{13}(0)}{a_{11}} = \frac{c_1}{a_{11}}$$

$$x'_2 = \frac{c_2 - a_{21}x'_1 - a_{23}(0)}{a_{22}}$$

$$x'_3 = \frac{c_3 - a_{31}x'_1 - a_{32}x'_2}{a_{33}}$$
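A minimal FORTRAN sketch of these sweeps, iterating until the largest percent change falls below a tolerance. EPS (a percent tolerance), ITMAX, and the names are illustrative; a relaxation weight could be folded in where the new value is stored:

      SUBROUTINE GSEID(A, C, X, N, EPS, ITMAX)
C     Gauss-Seidel iteration; X holds the initial guess
C     on entry and the approximate solution on exit
      INTEGER N, ITMAX, ITER, I, J
      REAL A(N,N), C(N), X(N), EPS, XOLD, ERR, SUM
      DO 10 ITER = 1, ITMAX
        ERR = 0.0
        DO 20 I = 1, N
          XOLD = X(I)
          SUM = C(I)
          DO 30 J = 1, N
            IF (J .NE. I) SUM = SUM - A(I,J)*X(J)
   30     CONTINUE
C         New value is used immediately by later rows
          X(I) = SUM/A(I,I)
          IF (X(I) .NE. 0.0) THEN
            ERR = MAX(ERR, ABS((X(I) - XOLD)/X(I)))
          END IF
   20   CONTINUE
        IF (ERR*100.0 .LT. EPS) RETURN
   10 CONTINUE
      RETURN
      END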

EXAMPLE

Given the following augmented matrix, complete one iteration of the Gauss-Seidel method.

$$\left[\begin{array}{ccc|c} 2 & 3 & 1 & 2 \\ 4 & 1 & 2 & 2 \\ 3 & 2 & 1 & 1 \end{array}\right]$$
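For reference, one sweep starting from x = (0, 0, 0), assuming the entries as reconstructed above:

$$x_1 = \frac{2 - 3(0) - 1(0)}{2} = 1 \qquad x_2 = \frac{2 - 4(1) - 2(0)}{1} = -2 \qquad x_3 = \frac{1 - 3(1) - 2(-2)}{1} = 2$$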

Gauss-Seidel Method convergence criterion:

$$\varepsilon_{a,i} = \left|\frac{x_i^{j} - x_i^{j-1}}{x_i^{j}}\right| \times 100\% < \varepsilon_s$$

As in previous iterative procedures for finding roots, we consider the present and previous estimates. As with the open methods we studied previously with one-point iterations:

1. The method can diverge

2. It may converge very slowly

Convergence criteria for two linear equations

$$u(x_1, x_2) = \frac{c_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2 \qquad v(x_1, x_2) = \frac{c_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1$$

Consider the partial derivatives of u and v:

$$\frac{\partial u}{\partial x_1} = 0 \qquad \frac{\partial u}{\partial x_2} = -\frac{a_{12}}{a_{11}} \qquad \frac{\partial v}{\partial x_1} = -\frac{a_{21}}{a_{22}} \qquad \frac{\partial v}{\partial x_2} = 0$$

Convergence criteria for two linear equations

$$u(x_1, x_2) = \frac{c_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2 \qquad v(x_1, x_2) = \frac{c_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1$$

Class question: where do these formulas come from?

Consider the partial derivatives of u and v:

$$\frac{\partial u}{\partial x_1} = 0 \qquad \frac{\partial u}{\partial x_2} = -\frac{a_{12}}{a_{11}} \qquad \frac{\partial v}{\partial x_1} = -\frac{a_{21}}{a_{22}} \qquad \frac{\partial v}{\partial x_2} = 0$$

Convergence criteria for two linear equations cont.

Criteria for convergence were presented earlier in the class material for nonlinear equations:

$$\left|\frac{\partial u}{\partial x}\right| + \left|\frac{\partial u}{\partial y}\right| < 1 \qquad \left|\frac{\partial v}{\partial x}\right| + \left|\frac{\partial v}{\partial y}\right| < 1$$

Noting that $x = x_1$ and $y = x_2$, and substituting the previous derivatives:

Convergence criteria for two linear equations cont.

$$\left|\frac{a_{21}}{a_{22}}\right| < 1 \qquad \left|\frac{a_{12}}{a_{11}}\right| < 1$$

This states that the absolute values of the slopes must be less than unity to ensure convergence. Extended to n equations:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$$

Convergence criteria for two linear equations cont.

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$$

This condition is sufficient but not necessary for convergence. When it is met, the matrix is said to be diagonally dominant.

Review the concepts of divergence and convergence by graphically illustrating Gauss-Seidel for two linear equations:

$$u:\; 11x_1 + 13x_2 = 286 \qquad v:\; 11x_1 - 9x_2 = 99$$

[Figure: the two lines plotted in the $(x_1, x_2)$ plane]

[Figure: Gauss-Seidel iterates stepping between the lines u and v]

Note: we are converging on the solution.

Change the order of the equations, i.e. change the direction of the initial estimates:

[Figure: Gauss-Seidel iterates with the equation order reversed]

This solution is diverging!

Improvement of Convergence Using Relaxation

This is a modification that will enhance slow convergence. After each new value of x is computed, calculate a new value based on a weighted average of the present and previous iterations:

$$x_i^{\text{new}} = \lambda x_i^{\text{new}} + (1 - \lambda)\,x_i^{\text{old}}$$

Improvement of Convergence Using Relaxation

$$x_i^{\text{new}} = \lambda x_i^{\text{new}} + (1 - \lambda)\,x_i^{\text{old}}$$

• if λ = 1: unmodified

• if 0 < λ < 1: underrelaxation; nonconvergent systems may converge, and convergence can be hastened by dampening out oscillations

• if 1 < λ < 2: overrelaxation; extra weight is placed on the present value, on the assumption that the new value is moving toward the correct solution but too slowly

Jacobi Iteration

• Iterative, like Gauss-Seidel

• Gauss-Seidel immediately uses each new value of $x_i$ in the equations that follow it within the same sweep

• Jacobi calculates all new values of the $x_i$'s from the complete set of previous values before any are updated

Graphical depiction of the difference between Gauss-Seidel and Jacobi

Gauss-Seidel (each new value is used immediately):

$$x'_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \qquad x'_2 = \frac{c_2 - a_{21}x'_1 - a_{23}x_3}{a_{22}} \qquad x'_3 = \frac{c_3 - a_{31}x'_1 - a_{32}x'_2}{a_{33}}$$

Jacobi (all new values are computed from the previous iteration):

$$x'_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3}{a_{11}} \qquad x'_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3}{a_{22}} \qquad x'_3 = \frac{c_3 - a_{31}x_1 - a_{32}x_2}{a_{33}}$$

The same pattern repeats on the second and subsequent iterations.

EXAMPLE

Given the following augmented matrix, complete one iteration of the Gauss-Seidel method and the Jacobi method.

$$\left[\begin{array}{ccc|c} 2 & 3 & 1 & 2 \\ 4 & 1 & 2 & 2 \\ 3 & 2 & 1 & 1 \end{array}\right]$$

We worked the Gauss-Seidel method earlier.
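For comparison, one Jacobi sweep from x = (0, 0, 0), again assuming the entries as reconstructed above, uses only the old values throughout:

$$x_1 = \frac{2}{2} = 1 \qquad x_2 = \frac{2}{1} = 2 \qquad x_3 = \frac{1}{1} = 1$$

Note how $x_2$ and $x_3$ differ from the Gauss-Seidel sweep, which substituted the new $x_1$ and $x_2$ immediately.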