5.6 Multistep Methods
The methods discussed to this point in the chapter are called one-step methods because the approximation for the mesh point \(t_{i+1}\) involves information from only one of the previous mesh points, \(t_i\). Although these methods might use function-evaluation information at points between \(t_i\) and \(t_{i+1}\), they do not retain that information for direct use in future approximations. All the information used by these methods is therefore obtained within the subinterval over which the solution is being approximated.
Since the approximate solution is available at each of the mesh points \(t_0, t_1, \ldots, t_i\) before the approximation at \(t_{i+1}\) is obtained, and because the error \(|w_j - y(t_j)|\) tends to increase with \(j\), it seems reasonable to develop methods that use these more accurate previous data when approximating the solution at \(t_{i+1}\).
Methods using the approximation at more than one previous mesh point to determine the approximation at the next point are called multistep methods. The precise definition of these methods follows, together with the definition of the two types of multistep methods.
Definition 5.14
An m-step multistep method for solving the initial-value problem
\[
y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha, \tag{5.22}
\]
has a difference equation for finding the approximation \(w_{i+1}\) at the mesh point \(t_{i+1}\) represented by the following equation, where \(m\) is an integer greater than 1:
\[
w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m}
+ h \bigl[ b_m f(t_{i+1}, w_{i+1}) + b_{m-1} f(t_i, w_i) + \cdots + b_0 f(t_{i+1-m}, w_{i+1-m}) \bigr], \tag{5.23}
\]
for \(i = m-1, m, \ldots, N-1\), where \(h = (b-a)/N\), the \(a_0, a_1, \ldots, a_{m-1}\) and \(b_0, b_1, \ldots, b_m\) are constants, and the starting values
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad \ldots, \quad w_{m-1} = \alpha_{m-1}
\]
are specified.
■
When \(b_m = 0\) the method is called explicit, or open, since Eq. (5.23) then gives \(w_{i+1}\) explicitly in terms of previously determined values. When \(b_m \neq 0\) the method is called implicit, or closed, since \(w_{i+1}\) occurs on both sides of Eq. (5.23) and is specified only implicitly.
EXAMPLE 1
The equations
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3,
\]
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3}) \bigr], \tag{5.24}
\]
for each \(i = 3, 4, \ldots, N-1\), define an explicit four-step method known as the fourth-order Adams-Bashforth technique. The equations
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2,
\]
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2}) \bigr], \tag{5.25}
\]
for each \(i = 2, 3, \ldots, N-1\), define an implicit three-step method known as the fourth-order Adams-Moulton technique.
The starting values in either (5.24) or (5.25) must be specified, generally by assuming \(w_0 = \alpha\) and generating the remaining values by either a Runge-Kutta method or some other one-step technique.
To apply an implicit method such as (5.25) directly, we must solve the implicit equation for \(w_{i+1}\). It is not clear that this can be done in general or that a unique solution for \(w_{i+1}\) will always be obtained.
To begin the derivation of a multistep method, note that the solution to the initial-value problem (5.22), if integrated over the interval \([t_i, t_{i+1}]\), has the property that
\[
y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} y'(t)\,dt = \int_{t_i}^{t_{i+1}} f(t, y(t))\,dt.
\]
Consequently,
\[
y(t_{i+1}) = y(t_i) + \int_{t_i}^{t_{i+1}} f(t, y(t))\,dt. \tag{5.26}
\]
Since we cannot integrate \(f(t, y(t))\) without knowing \(y(t)\), the solution to the problem, we instead integrate an interpolating polynomial \(P(t)\) to \(f(t, y(t))\) that is determined by some of the previously obtained data points \((t_0, w_0), (t_1, w_1), \ldots, (t_i, w_i)\). When we assume, in addition, that \(y(t_i) \approx w_i\), Eq. (5.26) becomes
\[
y(t_{i+1}) \approx w_i + \int_{t_i}^{t_{i+1}} P(t)\,dt. \tag{5.27}
\]
Although any form of the interpolating polynomial can be used for the derivation, it is most
convenient to use the Newton backward-difference formula.
To derive an Adams-Bashforth explicit m-step technique, we form the backward-difference polynomial \(P_{m-1}(t)\) through
\[
(t_i, f(t_i, y(t_i))),\ (t_{i-1}, f(t_{i-1}, y(t_{i-1}))),\ \ldots,\ (t_{i+1-m}, f(t_{i+1-m}, y(t_{i+1-m}))).
\]
Since \(P_{m-1}(t)\) is an interpolatory polynomial of degree \(m-1\), some number \(\xi_i\) in \((t_{i+1-m}, t_i)\) exists with
\[
f(t, y(t)) = P_{m-1}(t) + \frac{f^{(m)}(\xi_i, y(\xi_i))}{m!}\,(t - t_i)(t - t_{i-1}) \cdots (t - t_{i+1-m}).
\]
Introducing the variable substitution \(t = t_i + sh\), with \(dt = h\,ds\), into \(P_{m-1}(t)\) and the error term implies that
\[
\int_{t_i}^{t_{i+1}} f(t, y(t))\,dt
= \sum_{k=0}^{m-1} \nabla^k f(t_i, y(t_i))\, h\,(-1)^k \int_0^1 \binom{-s}{k}\,ds
+ \frac{h^{m+1}}{m!} \int_0^1 s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi_i, y(\xi_i))\,ds.
\]
The integrals \((-1)^k \int_0^1 \binom{-s}{k}\,ds\) for various values of \(k\) are easily evaluated and are listed in Table 5.10. For example, when \(k = 3\),
\[
(-1)^3 \int_0^1 \binom{-s}{3}\,ds
= -\int_0^1 \frac{(-s)(-s-1)(-s-2)}{1 \cdot 2 \cdot 3}\,ds
= \frac{1}{6} \int_0^1 (s^3 + 3s^2 + 2s)\,ds
= \frac{1}{6} \left[ \frac{s^4}{4} + s^3 + s^2 \right]_0^1
= \frac{1}{6} \cdot \frac{9}{4} = \frac{3}{8}.
\]
Table 5.10
\[
\begin{array}{c|cccccc}
k & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
(-1)^k \displaystyle\int_0^1 \binom{-s}{k}\,ds & 1 & \dfrac{1}{2} & \dfrac{5}{12} & \dfrac{3}{8} & \dfrac{251}{720} & \dfrac{95}{288}
\end{array}
\]
As a consequence,
\[
\int_{t_i}^{t_{i+1}} f(t, y(t))\,dt
= h \Bigl[ f(t_i, y(t_i)) + \frac{1}{2} \nabla f(t_i, y(t_i)) + \frac{5}{12} \nabla^2 f(t_i, y(t_i)) + \cdots \Bigr]
+ \frac{h^{m+1}}{m!} \int_0^1 s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi_i, y(\xi_i))\,ds. \tag{5.28}
\]
Since \(s(s+1)\cdots(s+m-1)\) does not change sign on \([0,1]\), the Weighted Mean Value Theorem for Integrals can be used to deduce that for some number \(\mu_i\), where \(t_{i+1-m} < \mu_i < t_{i+1}\), the error term in Eq. (5.28) becomes
\[
\frac{h^{m+1}}{m!} \int_0^1 s(s+1)\cdots(s+m-1)\, f^{(m)}(\xi_i, y(\xi_i))\,ds
= \frac{h^{m+1} f^{(m)}(\mu_i, y(\mu_i))}{m!} \int_0^1 s(s+1)\cdots(s+m-1)\,ds,
\]
or
\[
h^{m+1} f^{(m)}(\mu_i, y(\mu_i))\,(-1)^m \int_0^1 \binom{-s}{m}\,ds. \tag{5.29}
\]
Since \(y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(t, y(t))\,dt\), Eq. (5.26) can be written as
\[
y(t_{i+1}) = y(t_i) + h \Bigl[ f(t_i, y(t_i)) + \frac{1}{2} \nabla f(t_i, y(t_i)) + \frac{5}{12} \nabla^2 f(t_i, y(t_i)) + \cdots \Bigr]
+ h^{m+1} f^{(m)}(\mu_i, y(\mu_i))\,(-1)^m \int_0^1 \binom{-s}{m}\,ds. \tag{5.30}
\]
EXAMPLE 2
To derive the three-step Adams-Bashforth technique, consider Eq. (5.30) with \(m = 3\):
\[
y(t_{i+1}) \approx y(t_i) + h \Bigl[ f(t_i, y(t_i)) + \frac{1}{2} \nabla f(t_i, y(t_i)) + \frac{5}{12} \nabla^2 f(t_i, y(t_i)) \Bigr]
\]
\[
= y(t_i) + h \Bigl\{ f(t_i, y(t_i)) + \frac{1}{2} \bigl[ f(t_i, y(t_i)) - f(t_{i-1}, y(t_{i-1})) \bigr]
+ \frac{5}{12} \bigl[ f(t_i, y(t_i)) - 2 f(t_{i-1}, y(t_{i-1})) + f(t_{i-2}, y(t_{i-2})) \bigr] \Bigr\}
\]
\[
= y(t_i) + \frac{h}{12} \bigl[ 23 f(t_i, y(t_i)) - 16 f(t_{i-1}, y(t_{i-1})) + 5 f(t_{i-2}, y(t_{i-2})) \bigr].
\]
The three-step Adams-Bashforth method is, consequently,
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2,
\]
\[
w_{i+1} = w_i + \frac{h}{12} \bigl[ 23 f(t_i, w_i) - 16 f(t_{i-1}, w_{i-1}) + 5 f(t_{i-2}, w_{i-2}) \bigr],
\]
for \(i = 2, 3, \ldots, N-1\).
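As a quick check on this arithmetic (a remark added here, not part of the original derivation), any method of this form must reproduce the exact increment when \(f\) is a constant \(c\), since then \(y' = c\) increases \(y\) by exactly \(hc\) over one step. The bracketed coefficients must therefore sum to the denominator:
\[
23 - 16 + 5 = 12, \qquad \text{so} \qquad \frac{h}{12} \bigl[ 23c - 16c + 5c \bigr] = hc.
\]
The same check applies to every Adams formula in this section; for (5.24), for instance, \(55 - 59 + 37 - 9 = 24\).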
Multistep methods can also be derived by using Taylor series. An example of the procedure involved is considered in Exercise 10. A derivation using a Lagrange interpolating polynomial is discussed in Exercise 9.
The local truncation error for multistep methods is defined analogously to that of one-step methods. As in the case of one-step methods, the local truncation error provides a measure of how the solution to the differential equation fails to satisfy the difference equation.
Definition 5.15
If \(y(t)\) is the solution to the initial-value problem
\[
y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha,
\]
and
\[
w_{i+1} = a_{m-1} w_i + a_{m-2} w_{i-1} + \cdots + a_0 w_{i+1-m}
+ h \bigl[ b_m f(t_{i+1}, w_{i+1}) + b_{m-1} f(t_i, w_i) + \cdots + b_0 f(t_{i+1-m}, w_{i+1-m}) \bigr]
\]
is the \((i+1)\)st step in a multistep method, the local truncation error at this step is
\[
\tau_{i+1}(h) = \frac{y(t_{i+1}) - a_{m-1} y(t_i) - \cdots - a_0 y(t_{i+1-m})}{h}
- \bigl[ b_m f(t_{i+1}, y(t_{i+1})) + \cdots + b_0 f(t_{i+1-m}, y(t_{i+1-m})) \bigr], \tag{5.31}
\]
for each \(i = m-1, m, \ldots, N-1\).
■
EXAMPLE 3
To determine the local truncation error for the three-step Adams-Bashforth method derived in Example 2, consider the form of the error given in Eq. (5.29) and the appropriate entry in Table 5.10:
\[
h^4 f^{(3)}(\mu_i, y(\mu_i))\,(-1)^3 \int_0^1 \binom{-s}{3}\,ds = \frac{3h^4}{8} f^{(3)}(\mu_i, y(\mu_i)).
\]
Using the fact that \(f^{(3)}(\mu_i, y(\mu_i)) = y^{(4)}(\mu_i)\) and the difference equation derived in Example 2, we have
\[
\tau_{i+1}(h) = \frac{y(t_{i+1}) - y(t_i)}{h} - \frac{1}{12} \bigl[ 23 f(t_i, y(t_i)) - 16 f(t_{i-1}, y(t_{i-1})) + 5 f(t_{i-2}, y(t_{i-2})) \bigr]
= \frac{1}{h} \left[ \frac{3h^4}{8} f^{(3)}(\mu_i, y(\mu_i)) \right] = \frac{3h^3}{8} y^{(4)}(\mu_i),
\]
for some \(\mu_i \in (t_{i-2}, t_{i+1})\).
Some of the explicit multistep methods, together with their required starting values and local truncation errors, are as follows. The derivations of these techniques are similar to the procedure in Examples 2 and 3.
Adams-Bashforth Two-Step Explicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1,
\]
\[
w_{i+1} = w_i + \frac{h}{2} \bigl[ 3 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) \bigr], \tag{5.32}
\]
where \(i = 1, 2, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = \frac{5}{12} y'''(\mu_i)\, h^2\), for some \(\mu_i \in (t_{i-1}, t_{i+1})\).
Adams-Bashforth Three-Step Explicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2,
\]
\[
w_{i+1} = w_i + \frac{h}{12} \bigl[ 23 f(t_i, w_i) - 16 f(t_{i-1}, w_{i-1}) + 5 f(t_{i-2}, w_{i-2}) \bigr], \tag{5.33}
\]
where \(i = 2, 3, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = \frac{3}{8} y^{(4)}(\mu_i)\, h^3\), for some \(\mu_i \in (t_{i-2}, t_{i+1})\).
Adams-Bashforth Four-Step Explicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3,
\]
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3}) \bigr], \tag{5.34}
\]
where \(i = 3, 4, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = \frac{251}{720} y^{(5)}(\mu_i)\, h^4\), for some \(\mu_i \in (t_{i-3}, t_{i+1})\).
Adams-Bashforth Five-Step Explicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3, \quad w_4 = \alpha_4,
\]
\[
w_{i+1} = w_i + \frac{h}{720} \bigl[ 1901 f(t_i, w_i) - 2774 f(t_{i-1}, w_{i-1}) + 2616 f(t_{i-2}, w_{i-2}) - 1274 f(t_{i-3}, w_{i-3}) + 251 f(t_{i-4}, w_{i-4}) \bigr], \tag{5.35}
\]
where \(i = 4, 5, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = \frac{95}{288} y^{(6)}(\mu_i)\, h^5\), for some \(\mu_i \in (t_{i-4}, t_{i+1})\).
Implicit methods are derived by using \((t_{i+1}, f(t_{i+1}, y(t_{i+1})))\) as an additional interpolation node in the approximation of the integral
\[
\int_{t_i}^{t_{i+1}} f(t, y(t))\,dt.
\]
Some of the more common implicit methods are as follows.
Adams-Moulton Two-Step Implicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1,
\]
\[
w_{i+1} = w_i + \frac{h}{12} \bigl[ 5 f(t_{i+1}, w_{i+1}) + 8 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) \bigr], \tag{5.36}
\]
where \(i = 1, 2, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = -\frac{1}{24} y^{(4)}(\mu_i)\, h^3\), for some \(\mu_i \in (t_{i-1}, t_{i+1})\).
Adams-Moulton Three-Step Implicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2,
\]
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2}) \bigr], \tag{5.37}
\]
where \(i = 2, 3, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = -\frac{19}{720} y^{(5)}(\mu_i)\, h^4\), for some \(\mu_i \in (t_{i-2}, t_{i+1})\).
Adams-Moulton Four-Step Implicit Method:
\[
w_0 = \alpha, \quad w_1 = \alpha_1, \quad w_2 = \alpha_2, \quad w_3 = \alpha_3,
\]
\[
w_{i+1} = w_i + \frac{h}{720} \bigl[ 251 f(t_{i+1}, w_{i+1}) + 646 f(t_i, w_i) - 264 f(t_{i-1}, w_{i-1}) + 106 f(t_{i-2}, w_{i-2}) - 19 f(t_{i-3}, w_{i-3}) \bigr], \tag{5.38}
\]
where \(i = 3, 4, \ldots, N-1\). The local truncation error is \(\tau_{i+1}(h) = -\frac{3}{160} y^{(6)}(\mu_i)\, h^5\), for some \(\mu_i \in (t_{i-3}, t_{i+1})\).
It is interesting to compare an m-step Adams-Bashforth explicit method to an (m−1)-step Adams-Moulton implicit method. Both involve m evaluations of \(f\) per step, and both have the terms \(y^{(m+1)}(\mu_i)\, h^m\) in their local truncation errors. In general, the coefficients of the terms involving \(f\) in the local truncation error are smaller for the implicit methods than for the explicit methods. This leads to greater stability and smaller round-off errors for the implicit methods.
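For example, comparing the error constants of the fourth-order pair listed above, the four-step Adams-Bashforth method (5.34) and the three-step Adams-Moulton method (5.37),
\[
\frac{251}{720} \approx 0.349 \qquad \text{versus} \qquad \frac{19}{720} \approx 0.026,
\]
so the implicit coefficient is smaller by a factor of roughly 13, even though both local truncation errors are \(O(h^4)\).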
EXAMPLE 4
Consider the initial-value problem
\[
y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5,
\]
and the approximations given by the explicit Adams-Bashforth four-step method and the implicit Adams-Moulton three-step method, both using \(h = 0.2\).
The Adams-Bashforth method has the difference equation
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3}) \bigr],
\]
for \(i = 3, 4, \ldots, 9\). When simplified using \(f(t, y) = y - t^2 + 1\), \(h = 0.2\), and \(t_i = 0.2i\), it becomes
\[
w_{i+1} = \frac{1}{24} \bigl[ 35 w_i - 11.8 w_{i-1} + 7.4 w_{i-2} - 1.8 w_{i-3} - 0.192 i^2 - 0.192 i + 4.736 \bigr].
\]
The Adams-Moulton method has the difference equation
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2}) \bigr],
\]
for \(i = 2, 3, \ldots, 9\). This reduces to
\[
w_{i+1} = \frac{1}{24} \bigl[ 1.8 w_{i+1} + 27.8 w_i - w_{i-1} + 0.2 w_{i-2} - 0.192 i^2 - 0.192 i + 4.736 \bigr],
\]
for \(i = 2, 3, \ldots, 9\).
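Because \(f\) is linear in \(y\) in this example, the implicit equation can in fact be solved for \(w_{i+1}\); making this algebra explicit, collect the \(w_{i+1}\) terms on the left to obtain
\[
(24 - 1.8)\, w_{i+1} = 27.8 w_i - w_{i-1} + 0.2 w_{i-2} - 0.192 i^2 - 0.192 i + 4.736,
\]
so that
\[
w_{i+1} = \frac{1}{22.2} \bigl[ 27.8 w_i - w_{i-1} + 0.2 w_{i-2} - 0.192 i^2 - 0.192 i + 4.736 \bigr].
\]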
The results in Table 5.11 were obtained using the exact values from \(y(t) = (t+1)^2 - 0.5 e^t\) for \(\alpha\), \(\alpha_1\), \(\alpha_2\), and \(\alpha_3\) in the explicit Adams-Bashforth case and for \(\alpha\), \(\alpha_1\), and \(\alpha_2\) in the implicit Adams-Moulton case.
In Example 4, the implicit Adams-Moulton method gave better results than the explicit Adams-Bashforth method of the same order. Although this is generally the case, the implicit methods have the inherent weakness of first having to convert the method algebraically to an explicit representation for \(w_{i+1}\). This procedure is not always possible, as can be seen by considering the elementary initial-value problem
\[
y' = e^y, \quad 0 \le t \le 0.25, \quad y(0) = 1.
\]
Since \(f(t, y) = e^y\), the three-step Adams-Moulton method has
\[
w_{i+1} = w_i + \frac{h}{24} \bigl[ 9 e^{w_{i+1}} + 19 e^{w_i} - 5 e^{w_{i-1}} + e^{w_{i-2}} \bigr]
\]
as its difference equation, and this equation cannot be solved explicitly for \(w_{i+1}\).
We could use Newton's method or the secant method to approximate \(w_{i+1}\), but this complicates the procedure considerably (a sketch of the Newton approach is given below). In practice, implicit multistep methods are not used as described above. Rather, they are used to improve approximations obtained by explicit methods. The combination of an explicit and an implicit technique is called a predictor-corrector method. The explicit method predicts an approximation, and the implicit method corrects this prediction.
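To make the remark about Newton's method concrete, here is a minimal, hypothetical sketch (not part of Algorithm 5.4; the subroutine name, tolerance, and iteration limit are illustrative) of one corrector solve for \(y' = e^y\). It applies Newton's method to \(g(w) = w - w_i - \frac{h}{24}(9e^w + C) = 0\), where \(C = 19 e^{w_i} - 5 e^{w_{i-1}} + e^{w_{i-2}}\) collects the known terms.

      SUBROUTINE AMNEWT(H,WI,WIM1,WIM2,W)
C     HYPOTHETICAL SKETCH: SOLVE THE IMPLICIT ADAMS-MOULTON
C     EQUATION FOR F(T,Y)=EXP(Y) BY NEWTON'S METHOD.
C     WI,WIM1,WIM2 HOLD W(I),W(I-1),W(I-2); W RETURNS W(I+1).
      REAL H,WI,WIM1,WIM2,W,C,G,GP
      INTEGER K
      C = 19.0*EXP(WI)-5.0*EXP(WIM1)+EXP(WIM2)
C     USE W(I) AS THE INITIAL GUESS
      W = WI
      DO 10 K=1,25
C        G(W) AND ITS DERIVATIVE G'(W)
         G  = W-WI-H*(9.0*EXP(W)+C)/24.0
         GP = 1.0-9.0*H*EXP(W)/24.0
         W  = W-G/GP
         IF (ABS(G/GP) .LT. 1.0E-6) RETURN
   10 CONTINUE
      RETURN
      END

Each Newton iteration costs an extra evaluation of \(e^w\), which is one reason a single predictor-corrector pass is preferred in practice.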
Consider the following fourth-order method for solving an initial-value problem. The first step is to calculate the starting values \(w_0, w_1, w_2\), and \(w_3\) for the four-step explicit Adams-Bashforth method. To do this, we use a fourth-order one-step method, the Runge-Kutta method of order four.
 0
The next step is to calculate an approximation, w4
, to
y  t4  using the explicit
Adams-Bashforth method as predictor:
w4 0  w3 
h
55 f  t3 , w3   59 f  t2 , w2   37 f  t1 , w1   9 f  t0 , w0   .
24 
 0
This approximation is improved by inserting w4 in the right side of the three-step implicit
Adams-Moulton method and using that method as a corrector. This gives
w41  w3 


h 
9 f t4 , w4 0  19 f  t3 , w3   5 f  t2 , w2   f  t1 , w1   .


24

 0
The only new function evaluation required in this procedure is f t4 , w4

in the corrector
equation; all the other values of f have been calculated for earlier approximations.
The value w4 is then used as the approximation to y  t4  , and the technique of using the
1
Adams-Bashforth method as a predictor and the Adams-Moulton method as a corrector is repeated
to find w5 and w5 , the initial and final approximations to y  t5  , etc.
 0
1
Improved approximations to \(y(t_{i+1})\) could be obtained by iterating the Adams-Moulton formula,
\[
w_{i+1}^{(k+1)} = w_i + \frac{h}{24} \bigl[ 9 f(t_{i+1}, w_{i+1}^{(k)}) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2}) \bigr].
\]
However, \(\{w_{i+1}^{(k+1)}\}\) converges to the approximation given by the implicit formula rather than to the solution \(y(t_{i+1})\), and it is usually more efficient to use a reduction in the step size if improved accuracy is needed.
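A brief justification of this convergence claim, under the standard assumption that \(f\) satisfies a Lipschitz condition with constant \(L\) in its second variable: subtracting successive iterates of the formula above gives
\[
\bigl| w_{i+1}^{(k+1)} - w_{i+1}^{(k)} \bigr| \le \frac{9h}{24} L\, \bigl| w_{i+1}^{(k)} - w_{i+1}^{(k-1)} \bigr|,
\]
so the iteration is a contraction, converging to the fixed point of the implicit formula, whenever \(\frac{9h}{24} L < 1\). For small \(h\) this contraction is rapid, which is why a single correction is usually judged sufficient.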
Algorithm 5.4 is based on the fourth-order Adams-Bashforth method as predictor and one iteration
of the Adams-Moulton method as corrector, with the starting values obtained from the
fourth-order Runge-Kutta method.
ALGORITHM 5.4 Adams Fourth-Order Predictor-Corrector
Purpose: To approximate the solution of the initial-value problem
\[
y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha,
\]
at \((N+1)\) equally spaced numbers in the interval \([a, b]\):
INPUT endpoints \(a, b\); integer \(N\); initial condition \(\alpha\).
OUTPUT approximation \(w\) to \(y\) at the \((N+1)\) values of \(t\).
Step 1 Set \(h = (b - a)/N\);
\(t_0 = a\);
\(w_0 = \alpha\);
OUTPUT \((t_0, w_0)\).
Step 2 For \(i = 0, 1, 2\), do Steps 3-4.
(Compute starting values using the Runge-Kutta method.)
Step 3 Set \(K_1 = h f(t_i, w_i)\);
\(K_2 = h f(t_i + h/2,\ w_i + K_1/2)\);
\(K_3 = h f(t_i + h/2,\ w_i + K_2/2)\);
\(K_4 = h f(t_i + h,\ w_i + K_3)\).
Step 4 Set \(w_{i+1} = w_i + (K_1 + 2K_2 + 2K_3 + K_4)/6\);
\(t_{i+1} = a + (i + 1)h\).
Step 5 For \(i = 3, \ldots, N-1\) do Steps 6-8.
Step 6 (\(t0\) and \(w0\) are temporaries, corresponding to TO and WO in the subroutine below.)
Set \(t0 = a + (i + 1)h\);
\(w0 = w_i + h \bigl[ 55 f(t_i, w_i) - 59 f(t_{i-1}, w_{i-1}) + 37 f(t_{i-2}, w_{i-2}) - 9 f(t_{i-3}, w_{i-3}) \bigr]/24\); (Predict \(w_{i+1}\).)
\(w0 = w_i + h \bigl[ 9 f(t0, w0) + 19 f(t_i, w_i) - 5 f(t_{i-1}, w_{i-1}) + f(t_{i-2}, w_{i-2}) \bigr]/24\). (Correct \(w_{i+1}\).)
Step 7 (Prepare for the next iteration.)
Set \(t_{i+1} = t0\);
\(w_{i+1} = w0\).
Step 8 (End of the loop begun in Step 5.)
Step 9 RETURN.
This produces the ADAMS FOURTH-ORDER PREDICTOR-CORRECTOR method described in the following subroutine:
SUBROUTINE ADAMS4(N,A,B,ALPHA,T,W)
C**********************************************************************
C     ADAMS FOURTH-ORDER PREDICTOR-CORRECTOR ALGORITHM 5.4            *
C                                                                     *
C**********************************************************************
C     TO APPROXIMATE THE SOLUTION OF THE INITIAL VALUE PROBLEM:
C        Y'=F(T,Y), A<=T<=B, Y(A)=ALPHA,
C     AT (N+1) EQUALLY SPACED NUMBERS IN THE INTERVAL [A,B].
C
C     INPUT:  ENDPOINTS A,B; INITIAL CONDITION ALPHA; INTEGER N.
C
C     OUTPUT: APPROXIMATION W TO Y AT THE (N+1) VALUES OF T.
C**********************************************************************
INTEGER N
REAL A,B,ALPHA
REAL T(0:10),W(0:10)
EXTERNAL F
C     STEP 1
H = (B-A)/N
T(0) = A
W(0) = ALPHA
C     STEP 2
      DO 110 I=0,2
C        STEPS 3-4
C        COMPUTE STARTING VALUES USING THE RUNGE-KUTTA METHOD GIVEN
C        IN A SUBROUTINE -- NOTE: FUNCTION F IS NEEDED IN THE
C        SUBROUTINE
         CALL NRK4(H,T(I),W(I),T(I+1),W(I+1))
C        CALL RK4(1,T(I),T(I)+H,W(I),T(I+1),W(I+1))
  110 CONTINUE
C     STEP 5
      DO 20 I=3,N-1
C        STEPS 6-8
C        STEP 6
C        TO, WO WILL BE USED IN PLACE OF T, W RESP.
         TO = A+(I+1)*H
C        PREDICT W(I+1)
         WO = W(I)+H*(55*F(T(I),W(I))-59*F(T(I-1),W(I-1))
     *        +37*F(T(I-2),W(I-2))-9*F(T(I-3),W(I-3)))/24
C        CORRECT W(I+1)
         WO = W(I)+H*(9*F(TO,WO)+19*F(T(I),W(I))
     *        -5*F(T(I-1),W(I-1))+F(T(I-2),W(I-2)))/24
C        STEP 7
C        PREPARE FOR NEXT ITERATION
         T(I+1) = TO
         W(I+1) = WO
C        STEP 8
   20 CONTINUE
C     STEP 9
      RETURN
      END
      SUBROUTINE NRK4(H,TO,WO,TI,WI)
C     ONE STEP OF THE CLASSICAL FOURTH-ORDER RUNGE-KUTTA METHOD:
C     ADVANCES THE POINT (TO,WO) TO (TI,WI) WITH STEP SIZE H.
C     THE REAL FUNCTION F(T,Y) MUST BE SUPPLIED.
      TI = TO+H
      XK1 = H*F(TO,WO)
      XK2 = H*F(TO+H/2,WO+XK1/2)
      XK3 = H*F(TO+H/2,WO+XK2/2)
      XK4 = H*F(TI,WO+XK3)
      WI = WO+(XK1+2*(XK2+XK3)+XK4)/6
      RETURN
      END
■
EXAMPLE 5 Use the Adams fourth-order predictor-corrector method to approximate the solution of the initial-value problem
\[
y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5,
\]
with \(N = 10\). The driver program AL54.f is shown as follows:
C******************************************************************
C     Example 5 (Section 5.6) Using the ADAMS FOURTH-ORDER
C     PREDICTOR-CORRECTOR ALGORITHM (ORDER 4) to
C     approximate the solution of the initial-value problem
C        dy/dt = y - t^2 + 1,  y(0) = 0.5
C******************************************************************
C
INTEGER N
REAL A,B,ALPHA
REAL T(0:10),W(0:10),Y(0:10)
INTRINSIC EXP,ABS
OPEN(UNIT=10,FILE='AL54.doc',STATUS='UNKNOWN')
C     *** Input initial values
      A = 0.0
      B = 2.0
      ALPHA = 0.5
      N = 10
      WRITE(10,*) '  T(i)       W(i)         Y(i)    |W(i)-Y(i)|'
      WRITE(10,*) '---------------------------------------------'
      WRITE(*,*)  '  T(i)       W(i)         Y(i)    |W(i)-Y(i)|'
      WRITE(*,*)  '---------------------------------------------'
CALL ADAMS4(N,A,B,ALPHA,T,W)
      DO 10 I = 0,10
C        *** Exact solution
         Y(I) = (T(I)+1.0)*(T(I)+1.0)-0.5*EXP(T(I))
         WRITE(10,99) T(I),W(I),Y(I),ABS(W(I)-Y(I))
         WRITE(*,99)  T(I),W(I),Y(I),ABS(W(I)-Y(I))
   10 CONTINUE
      STOP
   99 FORMAT(1X,F5.1,3F12.7)
      END
      REAL FUNCTION F(T,Y)
C==================================================================
C     PURPOSE
C     Find the value of the function f(t,y) = y - t^2 + 1
C------------------------------------------------------------------
      REAL T,Y
      F = Y-T*T+1.0
      RETURN
      END
Table 5.12 lists the results obtained by using Algorithm 5.4 for the initial-value problem
\[
y' = y - t^2 + 1, \quad 0 \le t \le 2, \quad y(0) = 0.5,
\]
with \(N = 10\). The results here are more accurate than those in Example 4, which used only the corrector (that is, the implicit Adams-Moulton method), but this is not always the case.
Table 5.12

  t_i     y_i = y(t_i)     w_i           Error |y_i - w_i|
  0.0     0.5000000        0.5000000     0
  0.2     0.8292986        0.8292933     0.0000053
  0.4     1.2140877        1.2140762     0.0000114
  0.6     1.6489406        1.6489220     0.0000186
  0.8     2.1272295        2.1272056     0.0000239
  1.0     2.6408591        2.6408286     0.0000305
  1.2     3.1799415        3.1799026     0.0000389
  1.4     3.7324000        3.7323505     0.0000495
  1.6     4.2834838        4.2834208     0.0000630
  1.8     4.8151763        4.8150964     0.0000799
  2.0     5.3054720        5.3053707     0.0001013
Other multistep methods can be derived by integrating interpolating polynomials over intervals of the form \([t_j, t_{i+1}]\), for \(j \le i - 1\), to obtain an approximation to \(y(t_{i+1})\). When an interpolating polynomial is integrated over \([t_{i-3}, t_{i+1}]\), the result is the explicit Milne's method:
\[
w_{i+1} = w_{i-3} + \frac{4h}{3} \bigl[ 2 f(t_i, w_i) - f(t_{i-1}, w_{i-1}) + 2 f(t_{i-2}, w_{i-2}) \bigr],
\]
which has local truncation error \(\frac{14}{45} h^4 y^{(5)}(\xi_i)\), for some \(\xi_i \in (t_{i-3}, t_{i+1})\).
This method is occasionally used as a predictor for the implicit Simpson's method,
\[
w_{i+1} = w_{i-1} + \frac{h}{3} \bigl[ f(t_{i+1}, w_{i+1}) + 4 f(t_i, w_i) + f(t_{i-1}, w_{i-1}) \bigr],
\]
which has local truncation error \(-\frac{h^4}{90} y^{(5)}(\xi_i)\), for some \(\xi_i \in (t_{i-1}, t_{i+1})\), and is obtained by integrating an interpolating polynomial over \([t_{i-1}, t_{i+1}]\).
The local truncation error involved with a predictor-corrector method of the Milne-Simpson type is generally smaller than that of the Adams-Bashforth-Moulton method. But the technique has limited use because of round-off error problems, which do not occur with the Adams procedure. Elaboration on this difficulty is given in Section 5.10.
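As an illustrative sketch of how such a Milne-Simpson pairing might look in the style of the ADAMS4 subroutine above (the name MILSIM and the loop structure are assumptions, not from the text), assuming the starting values T(0),...,T(3) and W(0),...,W(3) have already been computed, for example by Runge-Kutta:

      SUBROUTINE MILSIM(N,A,B,T,W)
C     ILLUSTRATIVE SKETCH (NOT FROM THE TEXT): MILNE PREDICTOR
C     WITH SIMPSON CORRECTOR, ANALOGOUS TO STEPS 6-7 OF
C     ALGORITHM 5.4.  ASSUMES T(0..3),W(0..3) ARE ALREADY SET
C     AND THAT A REAL FUNCTION F(T,Y) IS SUPPLIED.
      INTEGER N
      REAL A,B,T(0:10),W(0:10)
      EXTERNAL F
      H = (B-A)/N
      DO 10 I=3,N-1
         TO = A+(I+1)*H
C        PREDICT W(I+1) WITH MILNE'S METHOD
         WO = W(I-3)+4*H*(2*F(T(I),W(I))-F(T(I-1),W(I-1))
     *        +2*F(T(I-2),W(I-2)))/3
C        CORRECT W(I+1) WITH SIMPSON'S METHOD
         WO = W(I-1)+H*(F(TO,WO)+4*F(T(I),W(I))
     *        +F(T(I-1),W(I-1)))/3
         T(I+1) = TO
         W(I+1) = WO
   10 CONTINUE
      RETURN
      END

The structure mirrors Steps 6-7 of Algorithm 5.4; only the predictor and corrector formulas change.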
EXERCISE SET 5.6
Q4. Use the Adams fourth-order predictor-corrector method to approximate the solution of the initial-value problem
(a) \( y' = t e^{3t} - 2y, \quad 0 \le t \le 1, \quad y(0) = 0, \)
with \(N = 10\). The exact solution is
\[
y(t) = \frac{1}{5} t e^{3t} - \frac{1}{25} e^{3t} + \frac{1}{25} e^{-2t};
\]
compare the results to the actual values.