Chapter 04.08 Gauss-Seidel Method

After reading this chapter, you should be able to:
1. solve a set of equations using the Gauss-Seidel method,
2. recognize the advantages and pitfalls of the Gauss-Seidel method, and
3. determine under what conditions the Gauss-Seidel method always converges.
Why do we need another method to solve a set of simultaneous linear equations?
In certain cases, such as when a system of equations is large, iterative methods of solving
equations are more advantageous. Elimination methods, such as Gaussian elimination, are
prone to large round-off errors for a large set of equations. Iterative methods, such as the
Gauss-Seidel method, give the user control of the round-off error. Also, if the physics of the
problem are well known, initial guesses needed in iterative methods can be made more
judiciously leading to faster convergence.
What is the algorithm for the Gauss-Seidel method? Given a general set of n equations and
n unknowns, we have
$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \ldots + a_{1n}x_n = c_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \ldots + a_{2n}x_n = c_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \ldots + a_{nn}x_n = c_n$$
If the diagonal elements are non-zero, each equation is rewritten for the corresponding
unknown, that is, the first equation is rewritten with x1 on the left hand side, the second
equation is rewritten with x 2 on the left hand side and so on as follows
$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \ldots - a_{1n}x_n}{a_{11}}$$
$$x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \ldots - a_{2n}x_n}{a_{22}}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \ldots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}}$$
$$x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \ldots - a_{n,n-1}x_{n-1}}{a_{nn}}$$
These equations can be rewritten in a summation form as
$$x_1 = \frac{c_1 - \sum_{j=1,\, j\neq 1}^{n} a_{1j}x_j}{a_{11}}$$
$$x_2 = \frac{c_2 - \sum_{j=1,\, j\neq 2}^{n} a_{2j}x_j}{a_{22}}$$
$$\vdots$$
$$x_{n-1} = \frac{c_{n-1} - \sum_{j=1,\, j\neq n-1}^{n} a_{n-1,j}x_j}{a_{n-1,n-1}}$$
$$x_n = \frac{c_n - \sum_{j=1,\, j\neq n}^{n} a_{nj}x_j}{a_{nn}}$$
Hence for any row $i$,
$$x_i = \frac{c_i - \sum_{j=1,\, j\neq i}^{n} a_{ij}x_j}{a_{ii}}, \quad i = 1, 2, \ldots, n.$$
Now to find the $x_i$'s, one assumes an initial guess for the $x_i$'s and then uses the rewritten equations to calculate the new estimates. Remember, one always uses the most recent estimates to calculate the next estimates, $x_i$. At the end of each iteration, one calculates the absolute relative approximate error for each $x_i$ as
$$\left|\epsilon_a\right|_i = \left|\frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}}\right| \times 100$$
where $x_i^{\text{new}}$ is the recently obtained value of $x_i$, and $x_i^{\text{old}}$ is the previous value of $x_i$.
When the absolute relative approximate error for each $x_i$ is less than the pre-specified tolerance, the iterations are stopped.
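The algorithm above can be sketched in a few lines of Python. This is an illustrative implementation, not code from the chapter; the function name and the parameters `tol_percent` and `max_iters` are our own choices.

```python
def gauss_seidel(A, C, x0, tol_percent=1e-6, max_iters=500):
    """Solve [A][X] = [C] by Gauss-Seidel iteration."""
    n = len(C)
    x = list(x0)
    for _ in range(max_iters):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            # use the most recent estimates of every x[j], j != i
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (C[i] - s) / A[i][i]   # diagonal element must be non-zero
            if x[i] != 0.0:
                err = abs((x[i] - old) / x[i]) * 100
                max_err = max(max_err, err)
        if max_err < tol_percent:         # all errors below the tolerance
            break
    return x

# Example 2 of this chapter (diagonally dominant, so convergence is assured):
A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
C = [1, 28, 76]
print(gauss_seidel(A, C, [1, 0, 1]))      # approaches [1.0, 3.0, 4.0]
```

The inner loop is the whole method: each unknown is overwritten as soon as its new estimate is available, which is exactly the "most recent estimates" rule stated above.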
Example 1
Three-phase loads are common in AC systems. When the system is balanced, the analysis can be simplified to a single equivalent circuit model. However, when it is unbalanced, the only practical approach requires solving simultaneous linear equations. In one model, the following equations need to be solved.
0.7460  0.4516 0.0100  0.0080 0.0100  0.0080  I ar   120 
0.4516 0.7460 0.0080 0.0100 0.0080 0.0100   I   0.000 

  ai  

0.0100  0.0080 0.7787  0.5205 0.0100  0.0080  I br   60.00

  

0.0080 0.0100 0.5205 0.7787 0.0080 0.0100   I bi    103.9 
0.0100  0.0080 0.0100  0.0080 0.8080  0.6040  I cr   60.00

  

0.0080 0.0100 0.0080 0.0100 0.6040 0.8080   I ci   103.9 
Find the values of $I_{ar}$, $I_{ai}$, $I_{br}$, $I_{bi}$, $I_{cr}$, and $I_{ci}$ using the Gauss-Seidel method. Use
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 20 \\ 20 \\ 20 \\ 20 \\ 20 \\ 20 \end{bmatrix}$$
as the initial guess and conduct two iterations.
Solution
Rewriting the equations gives
$$I_{ar} = \frac{120 + 0.4516 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7460}$$
$$I_{ai} = \frac{0.000 - 0.4516 I_{ar} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7460}$$
$$I_{br} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} + 0.5205 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7787}$$
$$I_{bi} = \frac{-103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.5205 I_{br} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7787}$$
$$I_{cr} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} + 0.6040 I_{ci}}{0.8080}$$
$$I_{ci} = \frac{103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.6040 I_{cr}}{0.8080}$$
Iteration #1
Given the initial guess of the solution vector as
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 20 \\ 20 \\ 20 \\ 20 \\ 20 \\ 20 \end{bmatrix}$$
Substituting the guess values into the first equation gives
$$I_{ar} = \frac{120 + 0.4516 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7460} = 172.86$$
Substituting the new value of $I_{ar}$ and the remaining guess values into the second equation gives
$$I_{ai} = \frac{0.000 - 0.4516 I_{ar} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7460} = -105.61$$
Substituting the new values of $I_{ar}$, $I_{ai}$, and the remaining guess values into the third equation gives
$$I_{br} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} + 0.5205 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7787} = -67.039$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, and the remaining guess values into the fourth equation gives
$$I_{bi} = \frac{-103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.5205 I_{br} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7787} = -89.499$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, $I_{bi}$, and the remaining guess values into the fifth equation gives
$$I_{cr} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} + 0.6040 I_{ci}}{0.8080} = -62.548$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, $I_{bi}$, $I_{cr}$, and the remaining guess value into the sixth equation gives
$$I_{ci} = \frac{103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.6040 I_{cr}}{0.8080} = 176.71$$
The absolute relative approximate error for each I then is
$$\left|\epsilon_a\right|_1 = \left|\frac{172.86 - 20}{172.86}\right| \times 100 = 88.430\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-105.61 - 20}{-105.61}\right| \times 100 = 118.94\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-67.039 - 20}{-67.039}\right| \times 100 = 129.83\%$$
$$\left|\epsilon_a\right|_4 = \left|\frac{-89.499 - 20}{-89.499}\right| \times 100 = 122.35\%$$
$$\left|\epsilon_a\right|_5 = \left|\frac{-62.548 - 20}{-62.548}\right| \times 100 = 131.98\%$$
$$\left|\epsilon_a\right|_6 = \left|\frac{176.71 - 20}{176.71}\right| \times 100 = 88.682\%$$
At the end of the first iteration, the estimate of the solution vector is
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 172.86 \\ -105.61 \\ -67.039 \\ -89.499 \\ -62.548 \\ 176.71 \end{bmatrix}$$
and the maximum absolute relative approximate error is 131.98%.
Iteration #2
The estimate of the solution vector at the end of Iteration #1 is
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 172.86 \\ -105.61 \\ -67.039 \\ -89.499 \\ -62.548 \\ 176.71 \end{bmatrix}$$
Substituting the values from Iteration #1 into the first equation gives
$$I_{ar} = \frac{120 + 0.4516 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7460} = 99.600$$
Substituting the new value of $I_{ar}$ and the remaining values from Iteration #1 into the second equation gives
$$I_{ai} = \frac{0.000 - 0.4516 I_{ar} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7460} = -60.073$$
Substituting the new values of $I_{ar}$, $I_{ai}$, and the remaining values from Iteration #1 into the third equation gives
$$I_{br} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} + 0.5205 I_{bi} - 0.0100 I_{cr} + 0.0080 I_{ci}}{0.7787} = -136.15$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, and the remaining values from Iteration #1 into the fourth equation gives
$$I_{bi} = \frac{-103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.5205 I_{br} - 0.0080 I_{cr} - 0.0100 I_{ci}}{0.7787} = -44.299$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, $I_{bi}$, and the remaining values from Iteration #1 into the fifth equation gives
$$I_{cr} = \frac{-60.00 - 0.0100 I_{ar} + 0.0080 I_{ai} - 0.0100 I_{br} + 0.0080 I_{bi} + 0.6040 I_{ci}}{0.8080} = 57.259$$
Substituting the new values of $I_{ar}$, $I_{ai}$, $I_{br}$, $I_{bi}$, $I_{cr}$, and the remaining value from Iteration #1 into the sixth equation gives
$$I_{ci} = \frac{103.9 - 0.0080 I_{ar} - 0.0100 I_{ai} - 0.0080 I_{br} - 0.0100 I_{bi} - 0.6040 I_{cr}}{0.8080} = 87.441$$
The absolute relative approximate error for each I then is
$$\left|\epsilon_a\right|_1 = \left|\frac{99.600 - 172.86}{99.600}\right| \times 100 = 73.552\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-60.073 - (-105.61)}{-60.073}\right| \times 100 = 75.796\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-136.15 - (-67.039)}{-136.15}\right| \times 100 = 50.762\%$$
$$\left|\epsilon_a\right|_4 = \left|\frac{-44.299 - (-89.499)}{-44.299}\right| \times 100 = 102.03\%$$
$$\left|\epsilon_a\right|_5 = \left|\frac{57.259 - (-62.548)}{57.259}\right| \times 100 = 209.24\%$$
$$\left|\epsilon_a\right|_6 = \left|\frac{87.441 - 176.71}{87.441}\right| \times 100 = 102.09\%$$
At the end of the second iteration, the estimate of the solution vector is
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 99.600 \\ -60.073 \\ -136.15 \\ -44.299 \\ 57.259 \\ 87.441 \end{bmatrix}$$
and the maximum absolute relative approximate error is 209.24%.
Conducting more iterations gives the following values for the solution vector and the
corresponding absolute relative approximate errors.
Iteration    I_ar       I_ai        I_br        I_bi        I_cr        I_ci
    1       172.86    -105.61     -67.039     -89.499     -62.548     176.71
    2        99.600    -60.073    -136.15     -44.299      57.259      87.441
    3       126.01     -76.015    -108.90     -62.667     -10.478     137.97
    4       117.25     -70.707    -119.62     -55.432      27.658     109.45
    5       119.87     -72.301    -115.62     -58.141      6.2513     125.49
    6       119.28     -71.936    -116.98     -57.216      18.241     116.53

Iteration   |ε_a|_1 %   |ε_a|_2 %   |ε_a|_3 %   |ε_a|_4 %   |ε_a|_5 %   |ε_a|_6 %
    1        88.430      118.94      129.83      122.35      131.98      88.682
    2        73.552      75.796      50.762      102.03      209.24      102.09
    3        20.960      20.972      25.027      29.311      646.45      36.623
    4        7.4738      7.5067      8.9631      13.053      137.89      26.001
    5        2.1840      2.2048      3.4633      4.6595      342.43      12.742
    6        0.49408     0.50789     1.1629      1.6170      65.729      7.6884
After six iterations, the absolute relative approximate errors are decreasing, but are still high. Allowing more iterations, the relative approximate errors decrease significantly.
Iteration    I_ar      I_ai       I_br       I_bi      I_cr      I_ci
   32       119.33   -71.973   -116.66    -57.432    13.940    119.74
   33       119.33   -71.973   -116.66    -57.432    13.940    119.74

Iteration     |ε_a|_1 %       |ε_a|_2 %       |ε_a|_3 %       |ε_a|_4 %       |ε_a|_5 %       |ε_a|_6 %
   32       3.0666 × 10⁻⁷   3.0047 × 10⁻⁷   4.2389 × 10⁻⁷   5.7116 × 10⁻⁷   2.0941 × 10⁻⁵   1.8238 × 10⁻⁶
   33       1.7062 × 10⁻⁷   1.6718 × 10⁻⁷   2.3601 × 10⁻⁷   3.1801 × 10⁻⁷   1.1647 × 10⁻⁵   1.0144 × 10⁻⁶
After 33 iterations, the solution vector is
$$\begin{bmatrix} I_{ar} \\ I_{ai} \\ I_{br} \\ I_{bi} \\ I_{cr} \\ I_{ci} \end{bmatrix} = \begin{bmatrix} 119.33 \\ -71.973 \\ -116.66 \\ -57.432 \\ 13.940 \\ 119.74 \end{bmatrix}$$
The maximum absolute relative approximate error is $1.1647 \times 10^{-5}\%$.
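Carrying the same sweeps to, say, 100 iterations in plain Python reproduces the converged vector quoted above (a sketch; variable names are ours):

```python
A = [
    [0.7460, -0.4516, 0.0100, -0.0080, 0.0100, -0.0080],
    [0.4516,  0.7460, 0.0080,  0.0100, 0.0080,  0.0100],
    [0.0100, -0.0080, 0.7787, -0.5205, 0.0100, -0.0080],
    [0.0080,  0.0100, 0.5205,  0.7787, 0.0080,  0.0100],
    [0.0100, -0.0080, 0.0100, -0.0080, 0.8080, -0.6040],
    [0.0080,  0.0100, 0.0080,  0.0100, 0.6040,  0.8080],
]
C = [120.0, 0.0, -60.00, -103.9, -60.00, 103.9]
x = [20.0] * 6

# 100 Gauss-Seidel sweeps -- far past the 33 iterations tabulated above
for _ in range(100):
    for i in range(6):
        s = sum(A[i][j] * x[j] for j in range(6) if j != i)
        x[i] = (C[i] - s) / A[i][i]

print([round(v, 3) for v in x])
# settles at about [119.33, -71.973, -116.66, -57.432, 13.940, 119.74]
```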
The above system of equations converged, although slowly. Why?
Well, a pitfall of most iterative methods is that they may or may not converge. However, the solution to certain classes of systems of simultaneous equations does always converge using the Gauss-Seidel method. This class of systems is one where the coefficient matrix $[A]$ in $[A][X] = [C]$ is diagonally dominant, that is,
$$\left|a_{ii}\right| \geq \sum_{j=1,\, j\neq i}^{n} \left|a_{ij}\right| \text{ for all } i$$
$$\left|a_{ii}\right| > \sum_{j=1,\, j\neq i}^{n} \left|a_{ij}\right| \text{ for at least one } i$$
If a system of equations has a coefficient matrix that is not diagonally dominant, it may or
may not converge. Fortunately, many physical systems that result in simultaneous linear
equations have a diagonally dominant coefficient matrix, which then assures convergence for
iterative methods such as the Gauss-Seidel method of solving simultaneous linear equations.
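The two conditions above translate directly into a few lines of code. This helper is our own sketch (the function name is not from the chapter):

```python
def is_diagonally_dominant(A):
    """True if |a_ii| >= sum of the other |a_ij| in every row,
    with strict inequality in at least one row."""
    at_least_one_strict = False
    for i, row in enumerate(A):
        off = sum(abs(a) for j, a in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            at_least_one_strict = True
    return at_least_one_strict

# Example 2's ordering is dominant; Example 3's reordering of the same rows is not.
print(is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]]))  # True
print(is_diagonally_dominant([[3, 7, 13], [1, 5, 3], [12, 3, -5]]))  # False
```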
Example 2
Find the solution to the following system of equations using the Gauss-Seidel method.
$$12x_1 + 3x_2 - 5x_3 = 1$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$3x_1 + 7x_2 + 13x_3 = 76$$
Use
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
as the initial guess and conduct two iterations.
Solution
The coefficient matrix
12 3  5
A   1 5 3 
 3 7 13 
is diagonally dominant as
a11  12  12  a12  a13  3   5  8
a22  5  5  a21  a23  1  3  4
a33  13  13  a31  a32  3  7  10
and the inequality is strictly greater than for at least one row. Hence, the solution should
converge using the Gauss-Seidel method.
Rewriting the equations, we get
$$x_1 = \frac{1 - 3x_2 + 5x_3}{12}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{76 - 3x_1 - 7x_2}{13}$$
Assuming an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
Iteration #1
$$x_1 = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - 0.50000 - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$
The absolute relative approximate errors at the end of the first iteration are
$$\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1}{0.50000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1}{3.0923}\right| \times 100 = 67.662\%$$
The maximum absolute relative approximate error is 100.00%.
Iteration #2
$$x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679$$
$$x_2 = \frac{28 - 0.14679 - 3(3.0923)}{5} = 3.7153$$
$$x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118$$
At the end of the second iteration, the absolute relative approximate errors are
$$\left|\epsilon_a\right|_1 = \left|\frac{0.14679 - 0.50000}{0.14679}\right| \times 100 = 240.61\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{3.7153 - 4.9000}{3.7153}\right| \times 100 = 31.889\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.8118 - 3.0923}{3.8118}\right| \times 100 = 18.874\%$$
The maximum absolute relative approximate error is 240.61%. This is greater than the value
of 100.00% we obtained in the first iteration. Is the solution diverging? No, as you conduct
more iterations, the solution converges as follows.
Iteration     x1        |ε_a|_1 %     x2        |ε_a|_2 %     x3        |ε_a|_3 %
    1       0.50000     100.00      4.9000      100.00      3.0923      67.662
    2       0.14679     240.61      3.7153      31.889      3.8118      18.874
    3       0.74275     80.236      3.1644      17.408      3.9708      4.0064
    4       0.94675     21.546      3.0281      4.4996      3.9971      0.65772
    5       0.99177     4.5391      3.0034      0.82499     4.0001      0.074383
    6       0.99919     0.74307     3.0001      0.10856     4.0001      0.00101

This is close to the exact solution vector of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$$
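The table can be reproduced with a short loop (a plain Python sketch; the names are ours):

```python
A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
C = [1, 28, 76]
x = [1.0, 0.0, 1.0]

def sweep(A, C, x):
    # one Gauss-Seidel pass, updating x in place
    n = len(C)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (C[i] - s) / A[i][i]

sweep(A, C, x)
sweep(A, C, x)
print([round(v, 4) for v in x])   # close to Iteration #2: 0.14679, 3.7153, 3.8118

for _ in range(50):               # keep going; the iterates approach [1, 3, 4]
    sweep(A, C, x)
print([round(v, 4) for v in x])
```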
Example 3
Given the system of equations
$$3x_1 + 7x_2 + 13x_3 = 76$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$12x_1 + 3x_2 - 5x_3 = 1$$
find the solution using the Gauss-Seidel method. Use
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
as the initial guess.
Solution
Rewriting the equations, we get
$$x_1 = \frac{76 - 7x_2 - 13x_3}{3}$$
$$x_2 = \frac{28 - x_1 - 3x_3}{5}$$
$$x_3 = \frac{1 - 12x_1 - 3x_2}{-5}$$
Assuming an initial guess of
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$
the next six iterative values are given in the table below.
Iteration      x1           |ε_a|_1 %      x2            |ε_a|_2 %      x3           |ε_a|_3 %
    1        21.000          95.238      0.80000          100.00      50.680          98.027
    2       -196.15          110.71      14.421           94.453     -462.30          110.96
    3        1995.0          109.83     -116.02           112.43      4718.1          109.80
    4       -20149           109.90      1204.6           109.63     -47636           109.90
    5        2.0364 × 10⁵    109.89     -12140            109.92      4.8144 × 10⁵    109.89
    6       -2.0579 × 10⁶    109.89      1.2272 × 10⁵     109.89     -4.8653 × 10⁶    109.89
You can see that this solution is not converging and the coefficient matrix is not diagonally
dominant. The coefficient matrix
 3 7 13 
A   1 5 3 
12 3  5
is not diagonally dominant as
a11  3  3  a12  a13  7  13  20
Hence, the Gauss-Seidel method may or may not converge.
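A quick experiment (plain Python sketch; the variable names are ours) shows the iterates for this ordering growing without bound rather than settling:

```python
A = [[3, 7, 13], [1, 5, 3], [12, 3, -5]]
C = [76, 28, 1]
x = [1.0, 0.0, 1.0]

history = []
for _ in range(6):
    for i in range(3):
        s = sum(A[i][j] * x[j] for j in range(3) if j != i)
        x[i] = (C[i] - s) / A[i][i]
    history.append(x[0])          # track x1 after each sweep

print(history)                    # |x1| grows every sweep (compare the table above)
```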
However, this is the same set of equations as in the previous example, and there it converged. The only difference is that the first and third equations were exchanged with each other, which made the coefficient matrix no longer diagonally dominant.
Therefore, it is sometimes possible to make a system of equations diagonally dominant by exchanging the equations with each other. However, it is not possible in all cases. For example, the following set of equations
$$x_1 + x_2 + x_3 = 3$$
$$2x_1 + 3x_2 + 4x_3 = 9$$
$$x_1 + 7x_2 + x_3 = 9$$
cannot be rewritten to make the coefficient matrix diagonally dominant.
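This claim is easy to verify by brute force over all six row orderings (our own sketch; the function name is not from the chapter):

```python
from itertools import permutations

# Rows of the system x1 + x2 + x3 = 3, 2x1 + 3x2 + 4x3 = 9, x1 + 7x2 + x3 = 9
rows = [[1, 1, 1], [2, 3, 4], [1, 7, 1]]

def dominant(A):
    # |a_ii| >= sum of the other |a_ij| in every row, strict in at least one
    strict = False
    for i, r in enumerate(A):
        off = sum(abs(a) for j, a in enumerate(r) if j != i)
        if abs(r[i]) < off:
            return False
        strict = strict or abs(r[i]) > off
    return strict

print(any(dominant(list(p)) for p in permutations(rows)))  # False
```

The row [1, 1, 1] can never satisfy the dominance condition in any position, so every ordering fails.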
Key Terms:
Gauss-Seidel method
Convergence of Gauss-Seidel method
Diagonally dominant matrix
SIMULTANEOUS LINEAR EQUATIONS
Topic: Gauss-Seidel Method – More Examples
Summary: Examples of the Gauss-Seidel method
Major: Electrical Engineering
Authors: Autar Kaw
Date: July 12, 2016
Web Site: http://numericalmethods.eng.usf.edu