Chapter 2
Iterative Methods for Solving Sets of Equations
2.2 The Successive Over-Relaxation (SOR) Method
The SOR method is very similar to the Gauss-Seidel method except that it scales the correction at each step by a relaxation factor to speed the reduction of the approximation error. Consider the following set of equations
$$\sum_{j=1}^{n} a_{ij}\, x_j = b_i, \qquad i = 1, 2, \ldots, n$$
For the Gauss-Seidel method, the values at the k-th iteration are given by
$$x_i^{(k)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k-1)}\right)$$
It should be noted that in the calculation of xi, the variables with index less than i are already at the (k) iteration, while the variables with index greater than i are still at the previous (k-1) iteration.
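As a concrete illustration of this index splitting, one Gauss-Seidel sweep over a general system can be written as the following minimal sketch (the matrix A and the column vectors b and x are assumed to be already defined):

% One Gauss-Seidel sweep over all n unknowns. Because x is updated
% in place, terms with j < i already hold iteration-(k) values while
% terms with j > i still hold iteration-(k-1) values.
n=length(b);
for i=1:n
    x(i)=(b(i)-A(i,1:i-1)*x(1:i-1)-A(i,i+1:n)*x(i+1:n))/A(i,i);
end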
The equation for the SOR method is given as
 1
xi( k ) = xi( k 1) +  
 aii
i 1
n



(k )
b

a
x

aij x (jk 1)   xi( k 1) 

 i  ij j

j 1
j i 1


i 1
n
 1 


The term in the bracket  bi   aij x (jk )   aij x (jk 1)   xi( k 1)  is just difference between the
 aii 

j 1
j i 1

variables of the previous and present iterations for the Gauss-Seidel method
 1

 aii
i 1
n



(k )
b

a
x

aij x (jk 1)   xi( k 1)  = [ xi( k )  xi( k 1) ]Gauss-Seidel

 i  ij j

j 1
j i 1


This difference is essentially the error for the iteration, since at convergence this difference must approach zero. The SOR method obtains the new estimated value by multiplying this difference by a scaling factor ω and adding it to the previous value. The SOR equation can also be written in the following form
$$x_i^{(k)} = (1-\omega)\, x_i^{(k-1)} + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}\, x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij}\, x_j^{(k-1)}\right)$$
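In code the update is most conveniently programmed in the difference form: compute the Gauss-Seidel value, subtract the current value, scale by ω, and add. A minimal generic sketch follows (A, b, a starting column vector x, and values for omega, tol, and maxit are assumed to be defined); this is the same errorT/update pattern used in the program of Table 2.2-1:

% SOR iteration in the "scaled correction" form
n=length(b);
for k=1:maxit
    err=zeros(n,1);
    for i=1:n
        xGS=(b(i)-A(i,1:i-1)*x(1:i-1)-A(i,i+1:n)*x(i+1:n))/A(i,i);
        err(i)=xGS-x(i);           % Gauss-Seidel correction
        x(i)=x(i)+omega*err(i);    % relaxed update
    end
    if max(abs(err))<tol, break, end
end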
When  = 1 the above equation is the formula for Gauss-Seidel method, when  < 1 it is the
under-relaxation method, and when  < 1 it is the over-relaxation method. We use the SOR
method to solve the set of equations presented in Example 2.1-1.
$$\begin{aligned}
T_1 &= \tfrac{1}{4}(T_2 + T_3 + 500 + 500) \\
T_2 &= \tfrac{1}{4}(T_1 + T_4 + T_1 + 500) \\
T_3 &= \tfrac{1}{4}(T_1 + T_4 + T_5 + 500) \\
T_4 &= \tfrac{1}{4}(T_2 + T_3 + T_6 + T_3) \\
T_5 &= \tfrac{1}{4}(T_3 + T_6 + T_7 + 500) \\
T_6 &= \tfrac{1}{4}(T_4 + T_5 + T_8 + T_5) \\
T_7 &= \tfrac{1}{9}(2T_5 + T_8 + 2000) \\
T_8 &= \tfrac{1}{9}(2T_6 + 2T_7 + 1500)
\end{aligned}$$
Table 2.2-1 lists the Matlab program and some results using SOR iteration with various values of the scaling factor ω.
Table 2.2-1 Matlab program for SOR iteration ________________________
% SOR iteration for the temperatures of Example 2.1-1, with a range of scaling factors
scale=0.2:.1:1.8;
n=length(scale);iter=scale;errorT=ones(8,1);
for k=1:n
    omega=scale(k);
    % Initial guesses for the temperatures
    T=400*ones(8,1);
    for i=1:100
        % Each line: Gauss-Seidel correction errorT(i), then the relaxed update
        errorT(1)=.25*(T(2)+T(3)+1000)-T(1);     T(1)=T(1)+omega*errorT(1);
        errorT(2)=.25*(2*T(1)+T(4)+500)-T(2);    T(2)=T(2)+omega*errorT(2);
        errorT(3)=.25*(T(1)+T(4)+T(5)+500)-T(3); T(3)=T(3)+omega*errorT(3);
        errorT(4)=.25*(T(2)+2*T(3)+T(6))-T(4);   T(4)=T(4)+omega*errorT(4);
        errorT(5)=.25*(T(3)+T(6)+T(7)+500)-T(5); T(5)=T(5)+omega*errorT(5);
        errorT(6)=.25*(T(4)+2*T(5)+T(8))-T(6);   T(6)=T(6)+omega*errorT(6);
        errorT(7)=(2*T(5)+T(8)+2000)/9-T(7);     T(7)=T(7)+omega*errorT(7);
        errorT(8)=(2*T(6)+2*T(7)+1500)/9-T(8);   T(8)=T(8)+omega*errorT(8);
        eT=abs(errorT);
        if max(eT)<.01, break, end
    end
    iter(k)=i;
    fprintf('scale factor = %g, # of iteration = %g\n',omega,i)
    fprintf('T = %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f %6.2f\n',T)
end
plot(scale,iter)
xlabel('Scale factor');ylabel('Number of iterations');grid on
>> sor
scale factor = 0.9, # of iteration = 17
T = 489.30 485.15 472.06 462.00 436.94 418.73 356.99 339.05
scale factor = 1, # of iteration = 13
T = 489.30 485.15 472.06 462.00 436.95 418.73 356.99 339.05
scale factor = 1.1, # of iteration = 7
T = 489.30 485.15 472.06 462.01 436.95 418.74 356.99 339.05
scale factor = 1.2, # of iteration = 9
Figure 2.2-1 shows the number of iterations required for convergence as a function of the scaling factor ω. There is a minimum in the number of iterations at ω of about 1.2. Normally the value of the scaling factor that minimizes the number of iterations lies between 1 and 2, and this value cannot be determined beforehand except for some special cases. The under-relaxation method (ω < 1) always requires more iterations than the Gauss-Seidel method. However, under-relaxation is sometimes used to slow the convergence when a scaling factor ω ≥ 1 leads to divergence.
[Figure: plot of the number of iterations (0 to 100) versus the scale factor (0.2 to 1.8)]
Figure 2.2-1 The variation of the number of iterations with the scaling factor.
2.3 Newton’s Method for Systems of Nonlinear Algebraic Equations
Consider two equations $f_1(x_1, x_2) = 0$ and $f_2(x_1, x_2) = 0$ for which the roots are desired. Let $p_1^0$, $p_2^0$ be the guessed values for the roots. $f_1(x_1, x_2)$ and $f_2(x_1, x_2)$ can be expanded about the point $(p_1^0, p_2^0)$ to obtain
$$f_1(x_1, x_2) = f_1(p_1^0, p_2^0) + \frac{\partial f_1}{\partial x_1}(x_1 - p_1^0) + \frac{\partial f_1}{\partial x_2}(x_2 - p_2^0) = 0$$

$$f_2(x_1, x_2) = f_2(p_1^0, p_2^0) + \frac{\partial f_2}{\partial x_1}(x_1 - p_1^0) + \frac{\partial f_2}{\partial x_2}(x_2 - p_2^0) = 0$$
Let $y_1^0 = (x_1 - p_1^0)$ and $y_2^0 = (x_2 - p_2^0)$; the above set can be written in the matrix form
$$\begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\[2mm] \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix} \begin{bmatrix} y_1^0 \\ y_2^0 \end{bmatrix} = - \begin{bmatrix} f_1(p_1^0, p_2^0) \\ f_2(p_1^0, p_2^0) \end{bmatrix}$$
or
$$\mathbf{J}(\mathbf{p}^{(0)})\,\mathbf{y}^{(0)} = -\mathbf{F}(\mathbf{p}^{(0)})$$
In general, the superscript (0) can be replaced by (k−1):

$$\mathbf{J}(\mathbf{p}^{(k-1)})\,\mathbf{y}^{(k-1)} = -\mathbf{F}(\mathbf{p}^{(k-1)})$$
$\mathbf{J}(\mathbf{p}^{(k-1)})$ is the Jacobian matrix of the system. The new guessed values x at iteration k are given by

$$\mathbf{x} = \mathbf{p}^{(k)} = \mathbf{p}^{(k-1)} + \mathbf{y}^{(k-1)}$$
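The pair of formulas above translates directly into a short iteration loop. The following is a minimal sketch, not the text's program; the function handles Ffun and Jfun and the names p, tol, and maxit are illustrative assumptions:

% Newton's method for a system: solve J(p)*y = -F(p), then p = p + y.
% Ffun returns the column vector F at p; Jfun returns the Jacobian at p.
for k=1:maxit
    y=-Jfun(p)\Ffun(p);   % y^(k-1) from J(p^(k-1)) y^(k-1) = -F(p^(k-1))
    p=p+y;                % p^(k) = p^(k-1) + y^(k-1)
    if max(abs(y))<tol, break, end
end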
Example 2.3-1
Use Newton's method with the initial guess x = [0.1 0.1 −0.1] to obtain the solutions to the following equations²

$$f_1(x_1, x_2, x_3) = 3x_1 - \cos(x_2 x_3) - \frac{1}{2} = 0$$

$$f_2(x_1, x_2, x_3) = x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06 = 0$$

$$f_3(x_1, x_2, x_3) = e^{-x_1 x_2} + 20x_3 + \frac{10\pi - 3}{3} = 0$$
Solution
______________
² Numerical Analysis by Burden and Faires
The following two formulas can be applied to obtain the roots:

$$\mathbf{J}(\mathbf{p}^{(k-1)})\,\mathbf{y}^{(k-1)} = -\mathbf{F}(\mathbf{p}^{(k-1)})$$

$\mathbf{J}(\mathbf{p}^{(k-1)})$ is the Jacobian matrix of the system
$$\mathbf{J}(\mathbf{p}^{(k-1)}) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \dfrac{\partial f_1}{\partial x_3} \\[2mm] \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \dfrac{\partial f_2}{\partial x_3} \\[2mm] \dfrac{\partial f_3}{\partial x_1} & \dfrac{\partial f_3}{\partial x_2} & \dfrac{\partial f_3}{\partial x_3} \end{bmatrix}$$
$\mathbf{F}(\mathbf{p}^{(k-1)})$ is the column vector of the given functions

$$\mathbf{F}(\mathbf{p}^{(k-1)}) = \begin{bmatrix} f_1(x_1, x_2, x_3) \\ f_2(x_1, x_2, x_3) \\ f_3(x_1, x_2, x_3) \end{bmatrix}$$
The new guessed values x at iteration k are given by

$$\mathbf{x} = \mathbf{p}^{(k)} = \mathbf{p}^{(k-1)} + \mathbf{y}^{(k-1)}$$
Table 2.3-1 lists the Matlab program to evaluate the roots from the given initial guesses.
Table 2.3-1 Matlab program for Example 2.3-1 (Ex2d3d1) ------------
% Newton Method
%
f1='3*x(1)-cos(x(2)*x(3))-.5';
f2='x(1)*x(1)-81*(x(2)+.1)^2+sin(x(3))+1.06';
f3='exp(-x(1)*x(2))+20*x(3)+10*pi/3-1';
% Initial guess
%
x=[0.1 0.1 -0.1];
for i=1:5
    f=[eval(f1) eval(f2) eval(f3)];
    % The Jacobian is entered transposed (row i holds the partial
    % derivatives with respect to x(i)); the trailing ' restores J.
    Jt=[3                    2*x(1)          -x(2)*exp(-x(1)*x(2))
        x(3)*sin(x(2)*x(3)) -162*(x(2)+.1)   -x(1)*exp(-x(1)*x(2))
        x(2)*sin(x(2)*x(3))  cos(x(3))        20]';
    %
    dx=Jt\f';     % solve J*dx = F
    x=x-dx';      % x = x - J\F, i.e. p(k) = p(k-1) + y(k-1)
    fprintf('x = ');disp(x)
end
Matlab can also evaluate the Jacobian matrix of the system analytically, as shown in Table 2.3-2.
Table 2.3-2 Matlab program for Example 2.3-1 ------------
% Newton Method with Jacobian matrix evaluated analytically by Matlab
%
syms x1 x2 x3
F=[3*x1-cos(x2*x3)-.5
   x1^2-81*(x2+.1)^2+sin(x3)+1.06
   exp(-x1*x2)+20*x3+(10*pi-3)/3];
% Differentiate F symbolically to obtain the Jacobian matrix
Jac=[diff(F,x1) diff(F,x2) diff(F,x3)];
x1=.1;x2=.1;x3=-.1;
k=0;
disp('  k     x1         x2         x3')
fprintf('%3.0f %10.7f %10.7f %10.7f\n',k,x1,x2,x3)
for k=1:10
    Am=eval(Jac);Bc=eval(F);   % numerical Jacobian and function values
    yk=Am\Bc;                  % solve J*yk = F
    x1=x1-yk(1);
    x2=x2-yk(2);
    x3=x3-yk(3);
    fprintf('%3.0f %10.7f %10.7f %10.7f\n',k,x1,x2,x3)
    if max(abs(yk))<.00001, break, end
end
The printouts from Matlab are
Jac =
[                3,   sin(x2*x3)*x3, sin(x2*x3)*x2]
[             2*x1,    -162*x2-81/5,       cos(x3)]
[ -x2*exp(-x1*x2), -x1*exp(-x1*x2),             20]

Note that −162 x2 − 81/5 is just −162(x2 + 0.1), the same entry coded by hand in Table 2.3-1.
  k     x1         x2         x3
  0  0.1000000  0.1000000 -0.1000000
  1  0.4998697  0.0194668 -0.5215205
  2  0.5000142  0.0015886 -0.5235570
  3  0.5000001  0.0000124 -0.5235985
  4  0.5000000  0.0000000 -0.5235988
  5  0.5000000  0.0000000 -0.5235988
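The iterates converge to x1 = 0.5, x2 = 0, and x3 ≈ −0.5235988 = −π/6. As a quick check (a snippet added here for illustration, not part of the original program), substituting this point into the three functions gives zero residuals:

% Verify the computed root: at x = (1/2, 0, -pi/6) all three residuals vanish
x=[0.5 0 -pi/6];
f=[3*x(1)-cos(x(2)*x(3))-.5
   x(1)^2-81*(x(2)+.1)^2+sin(x(3))+1.06
   exp(-x(1)*x(2))+20*x(3)+(10*pi-3)/3]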