Colorado School of Mines CHEN403

Numerical Methods for the Solution of ODEs
References
Introduction
Accuracy, Stability, and Step Size
Forward Euler Method
Backward Euler Method
Comparison of the Stability Characteristics of the Euler Methods
Taylor Series Expansions & Order of Accuracy
Runge-Kutta Methods
Stiff Equations
Summary of Formulas
References

William Press, Saul Teukolsky, William Vetterling, & Brian Flannery, Numerical Recipes in Fortran 77, Cambridge University Press, 1992.
Bruce A. Finlayson, Nonlinear Analysis in Chemical Engineering, McGraw-Hill, 1980.
Germund Dahlquist, Åke Björck, & Ned Anderson, Numerical Methods, Prentice-Hall, 1974.
Owen Hanna & Orville Sandall, Computational Methods in Chemical Engineering, Prentice-Hall, 1995.
Michael B. Cutlip & Mordechai Shacham, Problem Solving in Chemical Engineering with Numerical Methods, Prentice-Hall, 1999.
George E. Forsythe, Michael A. Malcolm, & Cleve B. Moler, Computer Methods for Mathematical Computations, Prentice-Hall, 1977.
Brice Carnahan, H. A. Luther, & James O. Wilkes, Applied Numerical Methods, John Wiley & Sons, 1969.
C. William Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, 1971.
Introduction
We would like to find a numerical solution to a single non-linear 1st order ODE (ordinary
differential equation) of the form:
Numerical Methods
December 21, 2008
F(dy/dt, y, t) = 0

or:

dy/dt = f(y, t).
The basic philosophy is to start with a known value of y ( t ) and then approximate the next
value y ( t + h) using a finite difference approximation to the derivative dy / dt . For a 1st
order finite difference:
dy/dt ≈ [y^(n+1) − y^(n)] / h = f(y*, t*)

⇒ y^(n+1) = y^(n) + h·f(y*, t*)
where y^(n) refers to the value of y from the numerical scheme at the n-th time step. The trick is to find some appropriate values for h and f(y*, t*) that lead to high accuracy & stability without requiring an unmanageable number of function evaluations and time steps.
The techniques that we will look at will also be applicable to systems of non-linear 1st order
ODEs:
dyi/dt = fi(y1, y2, …, yN) for i = 1, 2, …, N

or in vector notation:

dy/dt = f(y)

where y refers to the set of variables [y1, y2, …, yN]^T and f refers to the set of defining functions [f1, f2, …, fN]^T. Note that a dependency on time can always be included by defining a new variable:

dyN+1/dt = 1
Also, initial value problems of order higher than 1st order can always be recast as a system
of 1st order equations by defining new variables to account for the higher order derivatives.
For example, the single 3rd order ODE:
d³y/dt³ + 4·d²y/dt² + 5·dy/dt + 2y = 2 sin(t) with y(0) = y'(0) = y''(0) = 0

can be recast with the following definitions:
dy1/dt = y2
dy2/dt = y3
dy4/dt = 1

so the original ODE becomes:

dy3/dt = 2 sin(y4) − 4y3 − 5y2 − 2y1
with y1 ( 0 ) = y2 ( 0) = y3 ( 0 ) = y4 ( 0 ) = 0 . Now the single ODE has become the following
system of equations:
d/dt [y1, y2, y3, y4]^T = [y2, y3, 2 sin(y4) − 4y3 − 5y2 − 2y1, 1]^T
As with a single equation, we approximate the dy / dt terms using 1st order finite
differences:
y^(n+1) = y^(n) + h·f(y*).
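As a concrete sketch of this recasting, here is a minimal Python version (the names f and euler_step are our own illustrative choices, not code from the references):

```python
import math

# Right-hand side of the first-order system equivalent to
# y''' + 4 y'' + 5 y' + 2 y = 2 sin(t), with y4 standing in for t.
def f(y):
    y1, y2, y3, y4 = y
    return [y2,
            y3,
            2.0 * math.sin(y4) - 4.0 * y3 - 5.0 * y2 - 2.0 * y1,
            1.0]

# One vector forward Euler step: y_next = y + h * f(y).
def euler_step(y, h):
    dydt = f(y)
    return [yi + h * di for yi, di in zip(y, dydt)]

y = [0.0, 0.0, 0.0, 0.0]   # y(0) = y'(0) = y''(0) = 0 and y4(0) = 0
h = 0.01
for _ in range(100):       # integrate out to t = 1
    y = euler_step(y, h)
```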
Accuracy, Stability, and Step Size
Accuracy is a measure of how closely our numerical solution y^(n) matches the "true" solution y(t^(n)). We will refer to two types of accuracy. The first, local accuracy, is the accuracy of the numerical method over a single time step. When these numerical methods are used, we will take multiple time steps, and the errors from each time step will tend to accumulate. The accuracy of a numerical method over many time steps will be referred to as the global accuracy. We will sometimes denote this global error as ε^(n) where:

y^(n) ≡ y(t^(n)) + ε^(n) ⇒ ε^(n) = y^(n) − y(t^(n))
where y(t^(n)) is the true solution at time t^(n).
Stability is a somewhat ambiguous term and may appear with such qualifiers as inherent, partial, relative, weak, strong, absolute, etc. In general, a solution is said to be unstable if errors introduced at some stage of the calculations are propagated and magnified without bound throughout the subsequent calculations. An unstable result will not only be inaccurate but also patently absurd. We can show that a criterion for stability is:

|ε^(n+1) / ε^(n)| ≤ 1.
The issues of accuracy and stability are important when trying to choose one or more
values for a step size for integration. We will generally need a step size much smaller than
the range of time in which we are interested, so many time steps must be performed to get
our numerical answer. The step size chosen should be small enough to achieve our
desired accuracy, but should be as large as possible to avoid an excessive number of
derivative evaluations.
In general, accuracy relates to the characteristics of the method when making the step size
smaller and smaller, whereas stability relates to the characteristics of the method when
making the step size larger and larger.
Forward Euler Method
The simplest method is to evaluate the derivative using the values at the beginning of the
time step. The recursion formula is then:
y^(n+1) = y^(n) + h·f(y^(n), t^(n)) = y^(n) + h·f^(n)
where f (n ) refers to the derivative function evaluated using the n-th time step values. This
method is often referred to as the forward Euler method. The method is simple and
explicit (i.e., requires no iteration to get y( n+1) ). Its disadvantages are that it is not very
accurate ( O(h) global accuracy) & can have stability problems with time steps that are too
large.
Let’s look at numerically solving the ODE:
dy/dt = 1 − 2y with y(0) = 0.
Note that there is an analytical solution:
y(t) = (1/2)·(1 − e^(−2t)).
The forward Euler recursion formula will be:
y^(n+1) = y^(n) + h(1 − 2y^(n)) = h + (1 − 2h)·y^(n).
The following chart shows the numerically approximated solution at various step sizes as
compared to the analytical solution. Notice that at a reasonably small step size of h = 0.1
the numerical solution is pretty good — the “knee” of the curve is pretty close & we get to
the correct ultimate values. However, the larger step sizes give unreasonable values.
h = 0.5 gets to the ultimate value in only a single step & h = 0.9 oscillates. In fact, h = 1
oscillates between 1 and 0 while h > 1 oscillates in a diverging manner.
[Figure: Forward Euler solution to y' = 1 − 2y, y(0) = 0, comparing the analytical solution with numerical solutions for h = 0.1, 0.5, and 0.9 over 0 ≤ t ≤ 7.]
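The behavior described above is easy to reproduce numerically. Here is a sketch (the function name forward_euler is our own) that exercises the recursion y^(n+1) = h + (1 − 2h)·y^(n) at the step sizes discussed:

```python
# Forward Euler for dy/dt = 1 - 2y, y(0) = 0, i.e. y_next = h + (1 - 2h) y.
def forward_euler(h, n_steps):
    y = 0.0
    ys = [y]
    for _ in range(n_steps):
        y = h + (1.0 - 2.0 * h) * y
        ys.append(y)
    return ys

small = forward_euler(0.1, 70)      # tracks the analytical solution well
one_step = forward_euler(0.5, 5)    # reaches the ultimate value 0.5 in one step
critical = forward_euler(1.0, 6)    # oscillates between 1 and 0
diverging = forward_euler(1.1, 20)  # oscillates with growing amplitude
```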
The problems with h > 1 show the stability problems of the forward Euler method. Errors
introduced at early time steps, instead of dying out, start to grow and dominate the
numerical solution. This shows that the forward Euler method is conditionally stable for
small enough step sizes. For this problem, the stability limit is h = 1. However, what if we wait until late in the problem and then change to a large step size? The following chart shows what happens if we use a step size of h = 0.1 until t = 2 & then use a step size of h = 1.1. Notice that even though we are almost at the ultimate value, the large step size is still unstable.
[Figure: Forward Euler solution to y' = 1 − 2y, y(0) = 0, with h = 0.1 for t < 2 and h = 1.1 for t > 2; the numerical solution diverges from the analytical solution after the step size is increased, over 0 ≤ t ≤ 30.]
Backward Euler Method
For the forward Euler method, we evaluated the derivative function at the beginning of the
time step, f ( y(n ) , t (n ) ) . But we can evaluate f at any appropriate position. If we use the
value at the end of the time step we get:
y^(n+1) = y^(n) + h·f(y^(n+1), t^(n+1)) = y^(n) + h·f^(n+1).
This method is often referred to as the backward Euler method. The method is still
simple, but it is implicit (i.e., it may require iteration to get y( n+1) ). It only has O(h) global
accuracy just like the forward Euler method, but it has much better stability characteristics.
For our example, we now have:
y^(n+1) = y^(n) + h(1 − 2y^(n+1)) ⇒ y^(n+1) = (y^(n) + h) / (1 + 2h).
Note that in this particular case we can solve directly for y( n+1) , but this is not always the
case. The following chart shows the numerical solution at various step sizes as compared to the analytical solution. Notice that at a reasonably small step size of h = 0.1 the numerical solution is pretty good, just like for the forward Euler method: the "knee" of the curve is pretty close & we get to the correct ultimate values. The biggest difference is that even
though larger step sizes are not very accurate, the results do not oscillate or diverge. It can
be shown that the backward Euler method is unconditionally stable for any step size.
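Because the sample ODE is linear, the implicit equation can be solved in closed form, so a backward Euler sketch is just as short (again, the names are our own). Even a step size far beyond the forward Euler limit stays stable:

```python
# Backward Euler for dy/dt = 1 - 2y, y(0) = 0.  The implicit equation
# y_next = y + h (1 - 2 y_next) is linear, so it can be solved directly:
# y_next = (y + h) / (1 + 2h).
def backward_euler(h, n_steps):
    y = 0.0
    ys = [y]
    for _ in range(n_steps):
        y = (y + h) / (1.0 + 2.0 * h)
        ys.append(y)
    return ys

big = backward_euler(5.0, 10)   # h = 5, five times the forward Euler limit
```

Every iterate stays between 0 and the ultimate value 0.5 and marches monotonically toward it, with no oscillation or divergence.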
[Figure: Backward Euler solution to y' = 1 − 2y, y(0) = 0, comparing the analytical solution with numerical solutions for h = 0.1, 0.5, and 0.9 over 0 ≤ t ≤ 7.]
Comparison of the Stability Characteristics of the Euler Methods
A common method used to examine the stability characteristics of a numerical method is to
determine what happens when the method is applied to the test equation:
dy/dt = −λy with y(0) = 1

where λ is real & positive. This test equation has the analytical solution

y(t) = e^(−λt).
If we express the numerical solution as the sum of the exact solution and the error ε, then the error must also satisfy:

dε/dt = −λε.
We can now see what happens to the error along a time step. For the forward Euler
method:
ε^(n+1) = ε^(n) + h(−λε^(n))
ε^(n+1) = (1 − λh)·ε^(n)
So, for stability:

|ε^(n+1) / ε^(n)| = |1 − λh| ≤ 1 ⇒ 0 ≤ h ≤ 2/λ

Only values of h ≤ 2/λ will give stable results. This is why the forward Euler method is conditionally stable.
Remember our example problem? We found that the numerical solution oscillated up to
h = 1 and then was unstable for h > 1 . From this relationship above, the example problem
must have had λ = 2 (remember the term e −2t which corresponds to the term e −λt in the
test problem?). Also note that for 1/λ < h ≤ 2/λ the factor (1 − λh) is negative, so:

ε^(n+1) = −(λh − 1)·ε^(n) when 1/λ < h ≤ 2/λ.
The error will not necessarily grow, but it will change sign, leading to an oscillatory
behavior that is not present in the exact solution. For some problems these oscillations
may be noticeable and unacceptable.
Now let's do a stability analysis for the backward Euler method:

ε^(n+1) = ε^(n) + h(−λε^(n+1)) ⇒ ε^(n+1) = ε^(n) / (1 + λh)
So, for stability:

|ε^(n+1) / ε^(n)| = 1 / (1 + λh) ≤ 1 ⇒ h ≥ 0
Now, all values of h will give stability. This is why the backward Euler method is
unconditionally stable. This was shown in our example problem.
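These two stability results can be checked directly from the per-step amplification factors, (1 − λh) for forward Euler and 1/(1 + λh) for backward Euler (a small sketch with our own function names):

```python
# Per-step error amplification for the test equation d(eps)/dt = -lambda*eps.
def fe_factor(lam, h):
    return 1.0 - lam * h          # forward Euler

def be_factor(lam, h):
    return 1.0 / (1.0 + lam * h)  # backward Euler

lam = 2.0   # our example problem dy/dt = 1 - 2y has lambda = 2
assert abs(fe_factor(lam, 0.1)) < 1.0   # stable, no sign flip
assert fe_factor(lam, 0.9) < 0.0        # stable but oscillatory (h > 1/lambda)
assert abs(fe_factor(lam, 1.1)) > 1.0   # unstable: h > 2/lambda
assert all(abs(be_factor(lam, h)) < 1.0 for h in (0.1, 1.0, 10.0, 100.0))
```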
Taylor Series Expansions & Order of Accuracy
We can use Taylor series expansions to create new integration schemes and determine
their order of accuracy. Remember that the Taylor series expansion of a function around
y ( t ) is:
y(t + h) = y(t) + h·y'(t) + (h²/2)·y''(t) + (h³/6)·y'''(t) + …
We could also have written the expansion around y ( t + h) :
y(t) = y(t + h) − h·y'(t + h) + (h²/2)·y''(t + h) − (h³/6)·y'''(t + h) + …
Not all of the derivatives of y ( t ) are explicitly known. We may need to compute the
derivatives needed in the Taylor series expansion from the defining equation itself. Since:
y'(t) = dy/dt = f(y, t)

and:

df = (∂f/∂y)·dy + (∂f/∂t)·dt ⇒ df/dt = (∂f/∂y)·(dy/dt) + ∂f/∂t = (∂f/∂y)·f + ∂f/∂t

then:

y''(t) = d²y/dt² = df/dt = (∂f/∂y)·f + ∂f/∂t

y'''(t) = d³y/dt³ = (∂²f/∂y²)·f² + 2·(∂²f/∂y∂t)·f + (∂f/∂y)·[(∂f/∂y)·f + ∂f/∂t] + ∂²f/∂t²

and so on.
Let us take the expansion around y ( t ) . We can truncate the expression after the linear
term and get the forward Euler method:
y(t + h) ≈ y(t) + h·y'(t).
Since we have neglected the 2nd order and higher terms, it is said that the approximate function has order of accuracy O(h²). This represents the local accuracy (i.e., the accuracy along a single step). Generally, the error starts to grow when more than one step is made; the global accuracy will generally be one order less than the local accuracy. So, for the forward Euler method, the global accuracy will be O(h).

If we take the expansion around y(t + h) and truncate the expression after the linear term, we get the backward Euler method. Since we have neglected the 2nd order and higher terms, it has local accuracy of order O(h²), just like the forward Euler method. Also like the forward Euler method, the global accuracy will be O(h).

Just what does the order of accuracy mean to us? If a method has order of accuracy O(h^m), then reducing the step size by the fraction F will reduce the error in the numerical solution from ε to F^m·ε. For example, for order of accuracy O(h), cutting the step size in ½ will also cut the error in ½. However, for order of accuracy O(h²), cutting the step size in ½ will cut the error to ¼. This shows a real advantage of methods that have higher orders of accuracy.
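The O(h) behavior can be verified numerically on the sample problem: halving the step size should roughly halve the global error at a fixed time. A sketch (fe_error is our own helper name):

```python
import math

# Global error at t = 1 of forward Euler applied to dy/dt = 1 - 2y, y(0) = 0.
def fe_error(h):
    n = round(1.0 / h)
    y = 0.0
    for _ in range(n):
        y = h + (1.0 - 2.0 * h) * y
    exact = 0.5 * (1.0 - math.exp(-2.0))
    return abs(y - exact)

# For an O(h) method the error ratio between h and h/2 should be close to 2:
ratio = fe_error(0.02) / fe_error(0.01)
```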
How can we get higher orders of accuracy? We can include the effects of the higher order
derivatives directly. Or, we can use additional function evaluations to simulate the effects
of the higher order derivatives.
Runge-Kutta Methods
It is possible to develop procedures that involve on 1st order derivative evaluations but
which produce results equivalent in accuracy to the higher-order Taylor expansion
formulas. These algorithms are the Runge-Kutta methods. Approximations of the 2nd, 3rd,
and 4th orders are equivalent to keeping the h2 , h3 , and h4 terms, respectively. These
require function evaluations at 2, 3, and 4 points, respectively, along the interval
t ( n ) ≤ t ≤ t ( n+1) . (Method of order ν > 4 require more then ν function evaluations.)
The Runge-Kutta formula of order ν can be written as:

y^(n+1) = y^(n) + Σ_{i=1}^{ν} wi·ki

where the wi are weighting factors whose sum is 1 (i.e., w1 + w2 + ⋯ + wν = 1) and:

ki = h·f(t^(n) + ci·h, y^(n) + Σ_{j=1}^{i−1} ai,j·kj)
and where the ci and ai , j are constants. The values for the wi , ci , and ai , j constants are
chosen to simulate keeping higher order derivative terms in the Taylor series expansion &
hence give high orders of accuracy.
The Runge-Kutta methods belong to a class of predictor-corrector methods. The first step
is to use an explicit formula to predict the value at the end of the time step. Then, implicit
formulas are used to correct this initial estimate. However, instead of iterating to
convergence using the implicit formula, the initial estimate is used for only a couple
correction steps.
A 2nd order Runge-Kutta method is known as Heun’s method or the improved Euler
method. It uses a forward Euler prediction step with a backward Euler correction step. The
formula is:
y^(n+1) = y^(n) + (1/2)·k1 + (1/2)·k2

k1 = h·f(t^(n), y^(n))
k2 = h·f(t^(n) + h, y^(n) + k1).
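A sketch of a single improved Euler (Heun) step and its use on the sample problem (the function name heun is our own):

```python
# One improved Euler (Heun) step: forward Euler prediction, then a
# correction using the average of the two slopes.
def heun(f, t, y, h):
    k1 = h * f(t, y)
    k2 = h * f(t + h, y + k1)
    return y + 0.5 * k1 + 0.5 * k2

f = lambda t, y: 1.0 - 2.0 * y   # sample ODE dy/dt = 1 - 2y, y(0) = 0
t, y, h = 0.0, 0.0, 0.1
for _ in range(50):              # integrate out to t = 5
    y = heun(f, t, y, h)
    t += h
```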
The stability equation for this method is:

|ε^(n+1) / ε^(n)| = |1 − (1/2)λh| / |1 + (1/2)λh| ≤ 1 ⇒ h > 0

but it will lead to oscillations for h > 2/λ.
Perhaps the most widely used Runge-Kutta method is the 4th order method with constants
developed by Gill. The method is:
y^(n+1) = y^(n) + (1/6)·k1 + (b/3)·k2 + (d/3)·k3 + (1/6)·k4

k1 = h·f(t^(n), y^(n))
k2 = h·f(t^(n) + h/2, y^(n) + (1/2)·k1)
k3 = h·f(t^(n) + h/2, y^(n) + a·k1 + b·k2)
k4 = h·f(t^(n) + h, y^(n) + c·k2 + d·k3)

where the constants are:

a = (√2 − 1)/2, b = (2 − √2)/2, c = −√2/2, d = (2 + √2)/2.
It is reported that the Runge-Kutta-Gill method is stable for h ≤ 2.8/ λ .
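The full Runge-Kutta-Gill step is short to code. A sketch (names are our own) applied to the sample problem with the fairly large step h = 0.5:

```python
import math

SQRT2 = math.sqrt(2.0)
A = (SQRT2 - 1.0) / 2.0     # the constant a
B = (2.0 - SQRT2) / 2.0     # b
C = -SQRT2 / 2.0            # c
D = (2.0 + SQRT2) / 2.0     # d

# One step of the 4th order Runge-Kutta-Gill method.
def rkg_step(f, t, y, h):
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2.0, y + k1 / 2.0)
    k3 = h * f(t + h / 2.0, y + A * k1 + B * k2)
    k4 = h * f(t + h, y + C * k2 + D * k3)
    return y + k1 / 6.0 + B * k2 / 3.0 + D * k3 / 3.0 + k4 / 6.0

f = lambda t, y: 1.0 - 2.0 * y   # sample ODE, y(0) = 0
t, y, h = 0.0, 0.0, 0.5
while t < 5.0 - 1e-12:
    y = rkg_step(f, t, y, h)
    t += h

exact = 0.5 * (1.0 - math.exp(-2.0 * t))   # analytical solution at t = 5
```

Even with only ten steps to reach t = 5, the result agrees with the analytical solution to several decimal places.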
Let’s again look at numerically solving our sample ODE:
dy/dt = 1 − 2y with y(0) = 0.
The following chart shows the numerically approximated solution using the 2nd order
improved Euler method. We are using the same step sizes as before. Remember that for
the larger step sizes in the forward Euler method, the numerical solution was unstable.
Here, the larger step sizes are not very accurate, but they are not unstable. Plus, the
smaller step size of h = 0.1 gives a very good numerical solution — this solution is visually
better than either of the forward or backward Euler solutions.
Finally, the following chart shows the numerically approximated solution using the 4th
order Runge-Kutta-Gill method. Notice that the numerical solution is excellent, even for the
fairly large step size of h = 0.5 .
[Figure: Runge-Kutta-Gill solution to y' = 1 − 2y, y(0) = 0, comparing the analytical solution with numerical solutions for h = 0.1, 0.5, and 0.9 over 0 ≤ t ≤ 7.]
Stiff Equations
An important characteristic of a system of ordinary differential equations is whether they
are stiff or not. A system is stiff if the characteristic time constants for the problem vary by
many orders of magnitude. Mathematically, if
dy/dt = f(y) ≈ A·y
by linearization, then there are a set of eigenvalues, λ i , which form the solution to:
det(A − λi·I) = 0
The solution to the linearized ODE will be of the form:

yi = Σ_{j=1}^{N} Cj·e^(−λj·t) = Σ_{j=1}^{N} Cj·e^(−t/τj) where λi = 1/τi.
A stiffness ratio S can be defined as:

S = max(Re(λi)) / min(Re(λi)).
Typically, S > 1000 is considered a stiff system & S > 10⁶ is a very stiff system.
What does this mean to us? If a solution is made up of a sum of exponential decay terms all
with different time constants, then we must integrate over a time length that will
encompass the term that takes the longest to decay. This will be controlled by the largest
time constant (i.e., the smallest eigenvalue). However, for stability considerations, the
largest step size that can be used is controlled by the smallest time constant (i.e., the largest
eigenvalue). When these are different by several orders of magnitude, we may have a
problem in trying to perform the numerical solution in a reasonable number of time steps.
Let's look at a stiff example. The following ODE system:

d/dt [y1, y2]^T = [ −500.5  499.5 ; 499.5  −500.5 ]·[y1, y2]^T where [y1(0), y2(0)]^T = [2, 1]^T
has the solution
y1(t) = 1.5e^(−t) + 0.5e^(−1000t)
y2(t) = 1.5e^(−t) − 0.5e^(−1000t).
Here, the eigenvalues are λ = [1, 1000]^T and the equivalent time constants are τ = [1, 0.001]^T. The stiffness ratio is S = 1000, signifying a stiff problem. The exponential
term associated with λ = 1000 will decay 99% of the way to its ultimate value (of zero) by
t = 0.0046 ; however, the exponential term associated with λ = 1 will decay 99% of the way
to its ultimate value (of zero) by t = 4.6 . If we use the forward Euler method, we must keep
the step-size as h ≤ 2/ λ max to keep the numerical solution stable. For this problem, that
means h ≤ 0.002 , meaning that we would need at least 2,300 time steps to reach our final
time of t = 4.6 .
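The eigenvalues, stiffness ratio, and step count above follow from a few lines of arithmetic. A sketch using the 2×2 characteristic polynomial λ² − tr(A)·λ + det(A) = 0 (variable names are our own):

```python
import math

# The stiff example matrix A.
a11, a12, a21, a22 = -500.5, 499.5, 499.5, -500.5

# Eigenvalues of a 2x2 matrix from the characteristic polynomial.
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr - 4.0 * det)
eig_slow = (tr + disc) / 2.0          # -1    (slow mode, tau = 1)
eig_fast = (tr - disc) / 2.0          # -1000 (fast mode, tau = 0.001)

lam = sorted([-eig_slow, -eig_fast])  # decay rates: [1.0, 1000.0]
S = lam[1] / lam[0]                   # stiffness ratio = 1000
h_max = 2.0 / lam[1]                  # forward Euler stability limit: 0.002
n_steps = 4.6 / h_max                 # at least 2300 steps to reach t = 4.6
```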
The following figure shows the numerical solution at early times. In this chart, the step size
used is h = 0.0001 , much smaller than the stability limit of h ≤ 0.002 . However, using this
extremely small step size will require 46,000 steps to reach the ultimate limit of t = 4.6 .
What happens if we raise the step size after t > 0.01, where the effects of the larger eigenvalue have already died out?
[Figure: Forward Euler solution to the sample stiff ODE system with h = 0.0001, showing y1(t) decaying from 2 and y2(t) rising from 1 as the fast mode dies out over 0 ≤ t ≤ 0.01.]
The next figure shows the numerical solution after t > 0.01 when the step size is increased.
We will go just above the stability limit of h ≤ 0.002 to h = 0.00201 . Notice that just this
little bit above the stability limit starts to show instability — it takes a while, but the
oscillations do occur after t > 1.0 .
[Figure: Forward Euler solution to the sample stiff ODE system with h = 0.0001 for t < 0.01 and h = 0.00201 for t > 0.01; oscillations appear after t > 1.0, shown over 0 ≤ t ≤ 2.]
So how do we numerically solve stiff problems? We need to use methods that are of high
order but also unconditionally stable. This means that, in general, some type of implicit
method will have to be used. Methods have been published by C. W. Gear and are available
as the GEARB package of routines. These routines have proven to be very popular for
solving stiff problems.
Some of the newer methods that are successful in solving stiff systems are based upon linearizing the set of derivatives and effectively doing a single Newton iteration at the end of the step. These are referred to as semi-implicit methods. The following system of equations is linearized:
y^(n+1) = y^(n) + h·f(y^(n+1)) ⇒ y^(n+1) ≈ y^(n) + h·[f(y^(n)) + (∂f/∂y)|_(y^(n))·(y^(n+1) − y^(n))]

and then can be rearranged:

[I − h·(∂f/∂y)|_(y^(n))]·(y^(n+1) − y^(n)) = h·f(y^(n))

and solved as:

y^(n+1) = y^(n) + h·[I − h·(∂f/∂y)|_(y^(n))]^(−1)·f(y^(n))
Here, the derivative term is the Jacobian matrix, J , with elements J ij = ∂fi / ∂y j . The final
solution for y( n+1) requires some type of matrix inversion (usually an effective matrix
inversion via an LU decomposition).
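For the stiff linear example above the Jacobian is simply the constant matrix A, so the single linearized Newton step reproduces backward Euler exactly. A sketch with the 2×2 linear solve written out by Cramer's rule (names are our own):

```python
# Semi-implicit Euler for dy/dt = A y: solve (I - h J) dy = h f(y) with J = A.
def semi_implicit_step(y, h):
    a11, a12, a21, a22 = -500.5, 499.5, 499.5, -500.5
    f1 = a11 * y[0] + a12 * y[1]
    f2 = a21 * y[0] + a22 * y[1]
    m11, m12 = 1.0 - h * a11, -h * a12         # M = I - h*J
    m21, m22 = -h * a21, 1.0 - h * a22
    detm = m11 * m22 - m12 * m21
    d1 = (h * f1 * m22 - h * f2 * m12) / detm  # Cramer's rule for M dy = h f
    d2 = (h * f2 * m11 - h * f1 * m21) / detm
    return [y[0] + d1, y[1] + d2]

y = [2.0, 1.0]
h = 0.05          # 25x the forward Euler stability limit of 0.002
t = 0.0
while t < 4.6 - 1e-9:
    y = semi_implicit_step(y, h)
    t += h
```

Despite the large step, the solution decays smoothly toward zero with both components tracking the slow 1.5·e^(−t) mode.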
Rosenbrock methods are semi-implicit Runge-Kutta methods. One particular set of methods has been proposed by Kaps & Rentrop. Their recursion equations have the following form:
1
(n) 
(n)
 γh I − J  g 1 = f y


1
c21
(n) 
(n)
 γh I − J  g 2 = f y + a21 g 1 + h g 1


(
(
)
)
1
1
(n) 
(n)
 γh I − J  g 3 = f y + a31 g1 + a32 g 2 + h [c31g 1 + c32g 2 ]


1
1
(n) 
(n )
 γh I − J  g 4 = f y + a31 g 1 + a32 g 2 + h [c41 g 1 + c42g 2 + c43g3 ]


(
)
(
)
y ( n+1) = y ( n ) + b1 g 1 + b2 g 2 + b3 g 3 + b4 g 4
where J ( n ) is the Jacobian matrix evaluated at the beginning of the step.
The following Table 1 shows the parameters for this method. Note that there are two sets
of bi parameters, one set for a 4th order method and another set for a 3rd order method.
These two sets of methods are very useful since they can be used to estimate errors on a
particular step as:
ε = y^(n+1)_4th − y^(n+1)_3rd = (b1,4th − b1,3rd)·g1 + (b2,4th − b2,3rd)·g2 + (b3,4th − b3,3rd)·g3 + (b4,4th − b4,3rd)·g4 = O(h⁴)
and, hence, lead to an algorithm for increasing or decreasing the step size to achieve a
particular tolerance.
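A sketch of how such an error estimate typically drives step-size selection (the safety factor of 0.9 and the exact update rule are common conventions, not taken from these notes):

```python
# If the local error estimate behaves as err = C h^order, the step that would
# just meet a tolerance tol satisfies h_new ~ h * (tol / err)^(1/order).
def new_step_size(h, err, tol, order=4, safety=0.9):
    if err == 0.0:
        return 2.0 * h   # error negligible; grow the step
    return safety * h * (tol / err) ** (1.0 / order)

h_next = new_step_size(h=0.1, err=1e-3, tol=1e-6, order=4)  # shrinks the step
```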
Table 1. Kaps-Rentrop Semi-Implicit Runge-Kutta Parameters

Parameter   Value          Parameter   4th Order Value   3rd Order Value
γ           0.5            b1          19/9              97/54
a21         2              b2          1/2               11/36
a31         1.92           b3          25/108            25/108
a32         0.24           b4          125/108           0
c21         -8
c31         14.88
c32         2.4
c41         -0.896
c42         -0.432
c43         -0.4
Summary of Formulas
Forward Euler Method:

y^(n+1) = y^(n) + h·f(y^(n), t^(n)) = y^(n) + h·f^(n)

Backward Euler Method:

y^(n+1) = y^(n) + h·f(y^(n+1), t^(n+1)) = y^(n) + h·f^(n+1)

Improved Euler Method:

y^(n+1) = y^(n) + (1/2)·k1 + (1/2)·k2

where:

k1 = h·f(t^(n), y^(n))
k2 = h·f(t^(n) + h, y^(n) + k1)

Runge-Kutta-Gill Method:

y^(n+1) = y^(n) + (1/6)·k1 + (b/3)·k2 + (d/3)·k3 + (1/6)·k4

where:

k1 = h·f(t^(n), y^(n))
k2 = h·f(t^(n) + h/2, y^(n) + (1/2)·k1)
k3 = h·f(t^(n) + h/2, y^(n) + a·k1 + b·k2)
k4 = h·f(t^(n) + h, y^(n) + c·k2 + d·k3)

a = (√2 − 1)/2, b = (2 − √2)/2, c = −√2/2, d = (2 + √2)/2.