Lecture 10, 6 October 2015

Linear Multistep Methods (LMMs)

Review of Methods to Solve ODE IVPs
For the ODE initial value problem
    dy/dx = f(x, y),   y(x_0) = y_0:
(1) Euler's forward method
    y_{i+1} = y_i + h f(x_i, y_i)
(2) Euler's backward method
    y_{i+1} = y_i + h f(x_{i+1}, y_{i+1})
(3) Heun's method
    Predictor:  y_{i+1}^0 = y_i + h f(x_i, y_i)
    Corrector:  y_{i+1} = y_i + (h/2) [f(x_i, y_i) + f(x_{i+1}, y_{i+1}^0)]
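As an illustration (not part of the lecture), here is a minimal Python sketch of the forward-Euler and Heun steppers above; the function names and the test problem y' = -2y are assumptions made only for this example.

def euler_forward(f, x0, y0, h, n):
    """Advance y' = f(x, y) by n forward-Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)          # y_{i+1} = y_i + h f(x_i, y_i)
        x = x + h
    return y

def heun(f, x0, y0, h, n):
    """Advance y' = f(x, y) by n Heun predictor-corrector steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + h * f(x, y)                        # predictor (forward Euler)
        y = y + 0.5 * h * (f(x, y) + f(x + h, y_pred))  # corrector (trapezoidal average)
        x = x + h
    return y

f = lambda x, y: -2.0 * y                    # test problem: y' = -2y, y(0) = 1
print(euler_forward(f, 0.0, 1.0, 0.1, 10))   # both approximate y(1) = e^(-2) ~ 0.1353
print(heun(f, 0.0, 1.0, 0.1, 10))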
(4)-(6) Second-order Runge-Kutta methods. The general form is
    y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h
    k_1 = f(x_i, y_i),   k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)
with the constraints a_1 + a_2 = 1, a_2 p_1 = 1/2, and a_2 q_{11} = 1/2. Common choices of the free parameter are a_2 = 1/2 (Heun's method with a single corrector), a_2 = 1 (the midpoint method), and a_2 = 2/3 (Ralston's method).
(7) Third-order Runge-Kutta method
    y_{i+1} = y_i + (h/6) (k_1 + 4 k_2 + k_3)
    k_1 = f(x_i, y_i)
    k_2 = f(x_i + h/2, y_i + h k_1 / 2)
    k_3 = f(x_i + h, y_i - h k_1 + 2 h k_2)
(8) (Classical) Fourth-order Runge-Kutta method
    y_{i+1} = y_i + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4)
    k_1 = f(x_i, y_i)
    k_2 = f(x_i + h/2, y_i + h k_1 / 2)
    k_3 = f(x_i + h/2, y_i + h k_2 / 2)
    k_4 = f(x_i + h, y_i + h k_3)
Notice that for ODEs that are a function of x alone, the classical fourth-order RK method is similar to Simpson’s 1/3.
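A minimal Python sketch of one classical fourth-order Runge-Kutta step, assuming dy/dx = f(x, y); the helper name rk4_step is invented for this example.

def rk4_step(f, x, y, h):
    """One classical RK4 step from (x, y) to x + h."""
    k1 = f(x, y)
    k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

For f depending on x alone, k2 = k3 and the step is exactly Simpson's 1/3 rule on [x, x + h], as noted above.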
(9) Runge-Kutta-Fehlberg (RKF45) method
The RKF45 method computes two estimates of y_{i+1} from the same six function evaluations k_1, ..., k_6: a fourth-order Runge-Kutta estimate and a fifth-order estimate, each a different weighted combination of the k's. The difference between the two estimates provides a per-step estimate of the local truncation error, which can be used to control the step size.
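The payoff of the embedded pair is step-size control. The sketch below shows only that control logic, assuming the fourth- and fifth-order estimates y4 and y5 of y(x + h) have already been computed from the same k's; the function name, safety factor, and step-change limits are illustrative choices, not part of the lecture.

def adapt_step(y4, y5, h, tol, order=4, safety=0.9):
    """Decide whether to accept a step and propose the next step size."""
    err = abs(y5 - y4)                            # local error estimate from the pair
    if err == 0.0:
        return True, 5.0 * h                      # error negligible: grow the step
    h_new = h * safety * (tol / err) ** (1.0 / (order + 1))
    h_new = min(max(h_new, 0.2 * h), 5.0 * h)     # limit how fast h may change
    return err <= tol, h_new                      # accept only if the error meets tol

print(adapt_step(1.0001, 1.0, 0.1, 1e-3))         # small error: accept, enlarge h

If a step is rejected, it is simply retried from the same point with the smaller h.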
(10) Butcher's fifth-order Runge-Kutta method
    y_{i+1} = y_i + (h/90) (7 k_1 + 32 k_3 + 12 k_4 + 32 k_5 + 7 k_6)
Multistep Methods
The methods of Euler, Heun, and Runge-Kutta presented so far are called single-step methods, because they use only the information from one previous point to compute the successive point; that is, only the initial point (x_0, y_0) is used to compute (x_1, y_1) and, in general, y_i is needed to compute y_{i+1}.
After several points have been found, it is feasible to use several prior points in the calculation. This is the basis of multistep methods.
One example of these methods is the Adams-Bashforth four-step method, in which y_{i-3}, y_{i-2}, y_{i-1}, and y_i are required in the calculation of y_{i+1}.
This method is not self-starting; four initial points (x_0, y_0), (x_1, y_1), (x_2, y_2), and (x_3, y_3) must be given in advance in order to generate the points {(x_i, y_i): i ≥ 4}.
A desirable feature of multistep methods is that the local truncation error (LTE) can be determined and a correction term can be included, which improves the accuracy of the answer at each step.
Also, it is possible to determine whether the step size is small enough to obtain an accurate value for y_{i+1}, yet large enough so that unnecessary and time-consuming calculations are eliminated.
Using the combination of a predictor and a corrector requires only two function evaluations of f(x, y) per step.
Derivation of a Multistep Method
Integrate the differential equation
    dy/dx = f(x, y)                                                    (10.1)
from x_{i-1} to x_{i+1} to get
    y(x_{i+1}) - y(x_{i-1}) = ∫_{x_{i-1}}^{x_{i+1}} f(x, y) dx
or
    y_{i+1} = y_{i-1} + ∫_{x_{i-1}}^{x_{i+1}} f(x, y) dx               (10.2)
Now, the step size is h = x_{i+1} - x_i = x_i - x_{i-1}, so by the integral limits the integration interval has width x_{i+1} - x_{i-1} = 2h.
Back to equation (10.2): if we approximate the integral by Simpson's 1/3 rule,
    ∫_{x_{i-1}}^{x_{i+1}} f(x, y) dx ≈ (h/3) [f(x_{i-1}, y_{i-1}) + 4 f(x_i, y_i) + f(x_{i+1}, y_{i+1})]
Putting things together, we get
    y_{i+1} = y_{i-1} + (h/3) [f(x_{i-1}, y_{i-1}) + 4 f(x_i, y_i) + f(x_{i+1}, y_{i+1})]    (10.3)
In equation (10.3) above, we require both y_{i-1} and y_i to advance to y_{i+1}, so this is a two-step method rather than a one-step method.
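A Python sketch of the two-step scheme (10.3), assuming two starting values y0 and y1 are supplied (e.g. from a one-step method); because f(x_{i+1}, y_{i+1}) appears on the right-hand side, each step is closed here with a few fixed-point (corrector) iterations. The function name is invented for this example.

def two_step_simpson(f, x0, y0, y1, h, n, iters=3):
    """Integrate y' = f(x, y) over n steps with eq. (10.3), given y0 and y1."""
    xs = [x0 + k * h for k in range(n + 1)]
    ys = [y0, y1]
    for i in range(1, n):
        y_new = ys[i] + h * f(xs[i], ys[i])       # crude predictor for y_{i+1}
        for _ in range(iters):                    # fixed-point iteration on (10.3)
            y_new = ys[i - 1] + (h / 3.0) * (
                f(xs[i - 1], ys[i - 1]) + 4.0 * f(xs[i], ys[i]) + f(xs[i + 1], y_new))
        ys.append(y_new)
    return xs, ys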
General Form of Linear Multistep Methods (LMMs)
These schemes are called “linear” because they involve linear combinations of y's and f's, and “multistep” because (usually) more than one step is involved.
Given a sequence of equally spaced step levels x_n, x_{n+1}, ..., x_{n+k} with step size h, the general k-step LMM can be written as
    Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j}                (10.4)
where f_{n+j} = f(x_{n+j}, y_{n+j}) and α_k ≠ 0.
The method is defined through the parameters α_0, ..., α_k and β_0, ..., β_k.
Given the approximate solution up to step level x_{n+k-1}, we obtain the approximate solution y_{n+k} at the new step level x_{n+k} from equation (10.4) as
    α_k y_{n+k} - h β_k f(x_{n+k}, y_{n+k}) = - Σ_{j=0}^{k-1} [α_j y_{n+j} - h β_j f_{n+j}]    (10.5)
If β_k = 0, then the scheme is explicit, since y_{n+k} can be evaluated directly without the need to solve an equation.
If β_k ≠ 0, the scheme is implicit, since we need to solve for y_{n+k} at each step.
Note that to get started, the k-step LMM needs the first k step levels of the approximate solution, y_0, y_1, ..., y_{k-1}, to be specified. The ODE IVP only gives y_0, so something extra has to be done.
Standard approaches include using a one-step method to get y_1, ..., y_{k-1}, or using a one-step method to get y_1, then a two-step method to get y_2, ..., then a (k-1)-step method to get y_{k-1}, and then continuing with the k-step method.
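A sketch of a generic explicit k-step LMM driver in the notation of eq. (10.4), assuming the scheme has been normalized so that α_k = 1 and β_k = 0; the coefficient lists and the k starting values are supplied by the caller, and the function name is invented for this example.

def explicit_lmm(alpha, beta, f, x0, y_start, h, n_steps):
    """y_{n+k} = -sum_{j<k} alpha_j y_{n+j} + h sum_{j<k} beta_j f_{n+j}."""
    k = len(alpha) - 1                  # number of steps; alpha = [a_0, ..., a_k], a_k = 1
    assert len(y_start) == k            # k starting values must be provided
    ys = list(y_start)
    for n in range(n_steps):
        xs = [x0 + (n + j) * h for j in range(k)]
        y_new = -sum(alpha[j] * ys[n + j] for j in range(k)) \
                + h * sum(beta[j] * f(xs[j], ys[n + j]) for j in range(k))
        ys.append(y_new)
    return ys

# Example: the 2-step Adams-Bashforth method has alpha = [0, -1, 1], beta = [-1/2, 3/2, 0].
f = lambda x, y: -y
y1 = 0.9048374180359595                 # second starting value, taken here from e^(-0.1)
print(explicit_lmm([0.0, -1.0, 1.0], [-0.5, 1.5, 0.0], f, 0.0, [1.0, y1], 0.1, 9)[-1])
# approximates y(1) = e^(-1) ~ 0.3679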
Newton-Cotes Open Formulas
The open formulas can be expressed in the form of a solution of an ODE for n equally spaced data points:
    y_{i+1} = y_{i-n} + ∫_{x_{i-n}}^{x_{i+1}} f_n(x) dx                (10.6)
where f_n(x) is an nth-order interpolating polynomial.
If n = 1:  y_{i+1} = y_{i-1} + 2 h f_i
If n = 2:  y_{i+1} = y_{i-2} + (3h/2) (f_i + f_{i-1})
If n = 3:  y_{i+1} = y_{i-3} + (4h/3) (2 f_i - f_{i-1} + 2 f_{i-2})
If n = 4:  y_{i+1} = y_{i-4} + (5h/24) (11 f_i + f_{i-1} + f_{i-2} + 11 f_{i-3})
If n = 5:  y_{i+1} = y_{i-5} + (3h/10) (11 f_i - 14 f_{i-1} + 26 f_{i-2} - 14 f_{i-3} + 11 f_{i-4})
where f_i = f(x_i, y_i), f_{i-1} = f(x_{i-1}, y_{i-1}), etc.
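For instance, the n = 1 open formula used as a stepper is the explicit midpoint ("leapfrog") scheme; a minimal sketch, assuming a second starting value y1 obtained from any one-step method (the function name is invented for this example):

def leapfrog(f, x0, y0, y1, h, n):
    """y_{i+1} = y_{i-1} + 2 h f(x_i, y_i), given the two starting values y0 and y1."""
    ys = [y0, y1]
    for i in range(1, n):
        ys.append(ys[i - 1] + 2.0 * h * f(x0 + i * h, ys[i]))
    return ys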
Newton-Cotes Closed Formulas
The general expression of the closed form is
    y_{i+1} = y_{i-n+1} + ∫_{x_{i-n+1}}^{x_{i+1}} f_n(x) dx            (10.7)
where the integral is approximated by an nth-order Newton-Cotes closed integration formula.
If n = 1:  y_{i+1} = y_i + (h/2) (f_i + f_{i+1})    (trapezoidal rule)
If n = 2:  y_{i+1} = y_{i-1} + (h/3) (f_{i-1} + 4 f_i + f_{i+1})    (Simpson's 1/3 rule)
If n = 3:  y_{i+1} = y_{i-2} + (3h/8) (f_{i-2} + 3 f_{i-1} + 3 f_i + f_{i+1})    (Simpson's 3/8 rule)
If n = 4:  y_{i+1} = y_{i-3} + (2h/45) (7 f_{i-3} + 32 f_{i-2} + 12 f_{i-1} + 32 f_i + 7 f_{i+1})    (Boole's rule)
Adams-Bashforth Formulas
Rewrite a forward Taylor series expansion
    y_{i+1} = y_i + f_i h + (f_i'/2) h^2 + (f_i''/6) h^3 + ...         (10.8)
and note that a 2nd-order backward difference can be used to approximate the derivative:
    f_i' = (f_i - f_{i-1})/h + (f_i''/2) h + O(h^2)                    (10.9)
Substituting eqn. (10.9) into eqn. (10.8), we get the 2nd-order Adams-Bashforth formula:
    y_{i+1} = y_i + h [(3/2) f_i - (1/2) f_{i-1}] + (5/12) h^3 f''(ξ)  (10.10)
Higher-order Adams-Bashforth formulas can be developed by substituting higher-order difference approximations into eqn. (10.8); they are generally represented as
    y_{i+1} = y_i + h Σ_{k=0}^{n-1} β_k f_{i-k} + O(h^{n+1})           (10.11)
Coefficients for Adams-Bashforth predictors:

Order   β0          β1          β2          β3          β4          β5
1       1
2       3/2         -1/2
3       23/12       -16/12      5/12
4       55/24       -59/24      37/24       -9/24
5       1901/720    -2774/720   2616/720    -1274/720   251/720
6       4277/1440   -7923/1440  9982/1440   -7298/1440  2877/1440   -475/1440

Adams-Moulton Formulas
Rewrite a backward Taylor series expansion around x_{i+1}:
    y_i = y_{i+1} - f_{i+1} h + (f_{i+1}'/2) h^2 - (f_{i+1}''/6) h^3 + ...    (10.12)
Solving for y_{i+1} gives
    y_{i+1} = y_i + f_{i+1} h - (f_{i+1}'/2) h^2 + (f_{i+1}''/6) h^3 - ...    (10.13)
Using the same technique as for Adams-Bashforth yields the 2nd-order Adams-Moulton formula:
    y_{i+1} = y_i + (h/2) (f_i + f_{i+1}) - (1/12) h^3 f''(ξ)                 (10.14)
The nth-order Adams-Moulton formula can be generally written as
    y_{i+1} = y_i + h Σ_{k=0}^{n-1} β_k f_{i+1-k} + O(h^{n+1})                (10.15)
Coefficients for Adams-Moulton correctors:

Order   β0          β1          β2          β3          β4          β5
2       1/2         1/2
3       5/12        8/12        -1/12
4       9/24        19/24       -5/24       1/24
5       251/720     646/720     -264/720    106/720     -19/720
6       475/1440    1427/1440   -798/1440   482/1440    -173/1440   27/1440

Milne's Method
Milne's method is based on Newton-Cotes integration formulas. It uses the three-point Newton-Cotes open formula as a predictor,
    y_{i+1}^0 = y_{i-3} + (4h/3) (2 f_i - f_{i-1} + 2 f_{i-2})                (10.16)
and the three-point Newton-Cotes closed formula (Simpson's 1/3 rule) as a corrector,
    y_{i+1}^j = y_{i-1} + (h/3) (f_{i-1} + 4 f_i + f_{i+1}^{j-1})             (10.17)
where f_{i+1}^{j-1} = f(x_{i+1}, y_{i+1}^{j-1}) and j is an index representing the number of iterations of the modifier.
The predictor error is given by
    E_p ≈ (28/29) (y_i^m - y_i^0)                                             (10.18)
and the corrector error is given by
    E_c ≈ -(1/29) (y_{i+1}^m - y_{i+1}^0)                                     (10.19)
where y^0 denotes the predicted value and y^m the final corrected value at a given step.
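A Python sketch of one Milne step combining (10.16)-(10.19), assuming the step levels computed so far are held in the lists x and y and that the previous step's predictor/corrector pair is available for the modifier; the function name and argument layout are illustrative only.

def milne_step(f, x, y, h, y_prev_pred=None, y_prev_corr=None, iters=2):
    """Return (predicted, corrected) values at x[-1] + h; needs at least 4 entries in y."""
    i = len(y) - 1
    fi, fim1, fim2 = f(x[i], y[i]), f(x[i-1], y[i-1]), f(x[i-2], y[i-2])
    y_pred = y[i-3] + (4.0 * h / 3.0) * (2.0 * fi - fim1 + 2.0 * fim2)   # predictor (10.16)
    if y_prev_pred is not None:
        y_pred += (28.0 / 29.0) * (y_prev_corr - y_prev_pred)            # modifier via (10.18)
    x_new = x[i] + h
    y_corr = y_pred
    for _ in range(iters):                                               # corrector (10.17)
        y_corr = y[i-1] + (h / 3.0) * (fim1 + 4.0 * fi + f(x_new, y_corr))
    return y_pred, y_corr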
Adams-Bashforth-Moulton Method
This method is a popular multistep method that uses the 4th-order Adams-Bashforth formula as the predictor,
    y_{i+1}^0 = y_i + (h/24) (55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3})   (10.20)
and the 4th-order Adams-Moulton formula as the corrector,
    y_{i+1}^j = y_i + (h/24) (9 f_{i+1}^{j-1} + 19 f_i - 5 f_{i-1} + f_{i-2}) (10.21)
The error estimates are given by
    E_p ≈ (251/270) (y_i^m - y_i^0)                                           (10.22)
    E_c ≈ -(19/270) (y_{i+1}^m - y_{i+1}^0)                                   (10.23)
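A Python sketch of the resulting predict-evaluate-correct loop, assuming the first four step levels (e.g. from RK4) are already stored in the lists xs and ys; the names and the fixed number of corrector iterations are illustrative only.

def abm4(f, xs, ys, h, n_steps, corrector_iters=2):
    """Extend xs, ys (holding at least 4 step levels) by n_steps ABM4 steps."""
    for i in range(3, 3 + n_steps):
        fi   = f(xs[i],     ys[i])
        fim1 = f(xs[i - 1], ys[i - 1])
        fim2 = f(xs[i - 2], ys[i - 2])
        fim3 = f(xs[i - 3], ys[i - 3])
        # Adams-Bashforth predictor (10.20)
        y_pred = ys[i] + h * (55.0 * fi - 59.0 * fim1 + 37.0 * fim2 - 9.0 * fim3) / 24.0
        x_new = xs[i] + h
        # Adams-Moulton corrector (10.21), applied a fixed number of times
        y_corr = y_pred
        for _ in range(corrector_iters):
            y_corr = ys[i] + h * (9.0 * f(x_new, y_corr) + 19.0 * fi - 5.0 * fim1 + fim2) / 24.0
        e_corr = -19.0 / 270.0 * (y_corr - y_pred)   # corrector error estimate (10.23)
        xs.append(x_new)
        ys.append(y_corr + e_corr)                   # apply the estimate as a final modifier
    return xs, ys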
Why Bother with All These Schemes?
Example 10.1: Approximate the solution of the given IVP with y(0) = 1 over the interval [0, 10].
Example 10.2: Approximate the solution of the given IVP with y(0) = 1 and step size h = 1/8 over the interval [0, 3].
Local Truncation Error of LMMs
For the general LMM (10.4), the local truncation error (LTE) is defined as
    LTE = Σ_{j=0}^{k} α_j y(x_{n+j}) - h Σ_{j=0}^{k} β_j y'(x_{n+j})          (10.24)
where y(x) is an exact solution of the ODE y' = f(x, y).
Zero-Stability
A starting point for establishing whether a numerical method for approximating ODEs is any good or not is seeing if it can solve the trivial problem
    y' = 0,   y(x_0) = y_0.
The solution of this ODE is
    y(x) = y_0                                                                (10.25)
Applying either Euler's forward or backward method to y' = 0 yields
    y_{i+1} = y_i = y_0                                                       (10.26)
which is the exact answer. This is the case for all Runge-Kutta methods. The property related to solving y' = 0 that is required for k-step LMMs is actually less demanding than getting the right answer. It is called zero-stability.
Zero-Stability and Root Condition
For the general LMM in (10.4), we define the first and second characteristic polynomials
    ρ(z) = Σ_{j=0}^{k} α_j z^j,     σ(z) = Σ_{j=0}^{k} β_j z^j.
A linear multistep method is zero-stable if and only if all the roots of the first characteristic polynomial ρ(z) satisfy |z| ≤ 1, and any root with |z| = 1 is simple.
The characteristic polynomial ρ(z) is obtained by applying the general LMM equation (10.4) to y' = 0 to get
    Σ_{j=0}^{k} α_j y_{n+j} = 0                                               (10.27)
whose solutions are built from powers of the roots of ρ(z).
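A numerical check of the root condition is straightforward; the sketch below builds ρ(z) from the α coefficients with NumPy and tests the two conditions (the tolerances and the function name are arbitrary choices for this example).

import numpy as np

def is_zero_stable(alpha, tol=1e-10):
    """alpha = [a_0, ..., a_k]; check |root| <= 1 and simplicity of roots on the unit circle."""
    roots = np.roots(alpha[::-1])                 # np.roots expects highest degree first
    for r in roots:
        if abs(r) > 1.0 + tol:
            return False                          # a root outside the unit circle
        if abs(abs(r) - 1.0) <= tol:
            if np.sum(np.isclose(roots, r, atol=1e-7)) > 1:
                return False                      # repeated root on the unit circle
    return True

print(is_zero_stable([0.0, -1.0, 1.0]))   # 2-step Adams-Bashforth: rho(z) = z^2 - z -> True
print(is_zero_stable([2.0, -3.0, 1.0]))   # rho(z) = (z - 1)(z - 2): root at 2 -> False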
Consistency and Convergence
Consistency: the LMM approximation scheme is consistent with the ODE if
    LTE → 0 as h → 0.
Convergence: for every fixed point in the solution interval,
    |exact - approx| → 0 as h → 0.
For most well-behaved ODE systems, LMMs with sensible initial data satisfy
    zero-stability + consistency ⟹ convergence
    LTE = O(h^p) ⟹ |exact - approx| = O(h^p)
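The last statement can be checked empirically. A minimal sketch, assuming the 2-step Adams-Bashforth method (p = 2) applied to the test problem y' = -y, y(0) = 1, where halving h should roughly quarter the error at x = 1:

import math

def ab2(f, x0, y0, h, n):
    """2-step Adams-Bashforth over n steps; the second starting value comes from one Heun step."""
    ys = [y0]
    y_pred = y0 + h * f(x0, y0)
    ys.append(y0 + 0.5 * h * (f(x0, y0) + f(x0 + h, y_pred)))
    for i in range(1, n):
        xi, xim1 = x0 + i * h, x0 + (i - 1) * h
        ys.append(ys[i] + h * (1.5 * f(xi, ys[i]) - 0.5 * f(xim1, ys[i - 1])))
    return ys[-1]

f = lambda x, y: -y
exact = math.exp(-1.0)
for h in (0.1, 0.05, 0.025):
    err = abs(ab2(f, 0.0, 1.0, h, round(1.0 / h)) - exact)
    print(h, err)                        # errors should fall roughly as O(h^2)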