ENGRD 241 Lecture Notes
Section 7: Ordinary Differential Equations
ORDINARY DIFFERENTIAL EQUATIONS (C&C 4th ed., PART 7)
Most fundamental laws of science are based on models that explain variations in the
physical properties and states of systems; such models are described by differential equations.
[Note to student: in C&C don't get lost in the details of variations of the
key methods or in multi-step methods. See PT 7.4, p. 808-809.]
Basics:
ODE's involve only one independent variable, i.e., y(x)
d²y/dx² + y = g(x)        [1-dimensional problem in space x]
dy/dt = f(y,t)            [time – dynamics problem]
PDE's more than one independent variable, i.e., T(x,y)
2 T 2 T
+
=0
[Laplace Equation]
x 2 y 2
2
2 u
2 u
[Wave Equation]

c
t 2
x 2
Auxiliary Conditions
Because we are effectively integrating an indefinite integral, we need
additional information to obtain a unique solution. Note that an nth
order equation generally requires n auxiliary conditions.
• Initial Value problem (IV): information at a single value of the
independent variable, typically at the beginning. [y(0) = y0]
• Boundary Value Problem (BV): information at more than one value of the
independent variable, typically at both ends. [y(0) = y0 and y(1) = yf]
Outline of Study of ODE's
I. Single-Step Methods for I.V. Problems (C&C 4th ed., Ch. 25)
a. Euler
b. Heun and Midpoint/Improved Polygon
c. General Runge-Kutta
d. Adaptive step-size control
II. Stiff ODEs (C&C 26.1)
III. Multi-Step Methods for I.V. Problems (C&C 4th ed., 26.2)
a. Non-Self-Starting Heun
b. Newton-Cotes
c. Adams
IV. Boundary Value Problems (C&C 4th ed., Ch. 27.1 & 27.2)
a. Solution Methods
1. Shooting Method
2. Finite Difference Method
b. Eigenvalue Problems { [A] - λ[I] } {x} = {0}
One-Step Methods (C&C 4th ed., Chap. 25)
• Consider the generic 1st-order I.V. ODE problem:
dy/dx = ƒ(x,y),  where (xo, yo) are given and we wish to find y = y(x).
The independent variable x may be space x, or time t.
We can solve higher-order I.V. ODE's by transforming them to 1st-order ODE's.
Example:
d²y/dx² + dy/dx + 5y = 0;   let z = dy/dx
Substituting, we have:   dz/dx + z + 5y = 0
Solve a SYSTEM of two linear, first-order ODE's:
dy/dx = z    and    dz/dx + z + 5y = 0
Basic approach is stepwise:
Start with (x0,y0) ==> (x1,y1) --> (x2,y2) --> etc.
Next Val. = Pres. Val. + slope * step size
yi+1 = yi + φi h,   where h = xi+1 – xi = step size
• Key to the various one-step methods is how the slope is obtained.
• The slope φ represents a weighted average of the slope over the entire interval
and may not be the tangent at (xi, yi)
Euler Method (C&C 4th ed., 25.1, p. 682)
Given (xi,yi), need to determine (xi+1,yi+1):
xi+1 = xi + h
yi+1 = yi + φi h
Estimate the slope as φi = f(xi,yi), so:
yi+1 = yi + f(xi,yi) h
Slope at the beginning of the step is applied across the entire interval.
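As a concrete sketch (the function and variable names below are my own, not from C&C), Euler's method is only a few lines of Python:

```python
def euler(f, x0, y0, h, n):
    """Integrate dy/dx = f(x, y) from (x0, y0) with n Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y = y + f(x, y) * h   # slope at the start of the step, applied across it
        x = x + h
    return y

# dy/dx = y with y(0) = 1; the true solution is y = e^x
y_end = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)   # approximates e at x = 1
```

Halving h roughly halves the error, consistent with the O(h) global behavior of the method.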
Analysis Local Error for the Euler Method
Taylor series of true solution:   yi+1 = yi + yi' h + yi'' h²/2 + …
where yi' = f(xi,yi) = fi
Euler rule:   ŷi+1 = yi + fi h
Truncation error:   Ea = yi+1 – ŷi+1 = yi'' h²/2 = O(h²)
Global Error (derivation not given in text, just stated
and shown by numerical examples):
Ea ≈ G(n, h) · O(h²) = O(h)
Error Analysis
1. Truncation Error (Truncating Taylor Series)
[Large step size  large errors]
2. Rounding Error (Machine precision)
[very small step size  roundoff errors]
Two kinds of Truncation of Error:
Local – error within one step due to application of method
Propagation – error due to previous local errors.
Global Truncation Error = Local + Propagation
Generally if local truncation error is O (hn+1) then, as with numerical
quadrature formulas, Global truncation error is O (hn).
Proof is more difficult.
General Error Notes:
1. For stable systems, error is reduced by decreasing h
2. If a method is O(hⁿ) globally, then it is exact when the derivative f is an
(n–1)th-degree polynomial in x
Improved One-Step Methods
Heun's Method (C&C 4th ed., 25.2.1, p. 694)
Predict:   y⁰i+1 = yi + f(xi, yi) h
Estimate average slope as:
f̄ = [ f(xi, yi) + f(xi+1, y⁰i+1) ] / 2
Correct:
xi+1 = xi + h
yi+1 = yi + f̄ h
Notes:
1. Local Error O(h³) and Global Error O(h²)
2. Corrector step can be iterated using εa as a stopping criterion
3. If the derivative is only f(x), i.e., not f(x,y), then there is no
predictor, and
yi+1 = yi + [ f(xi) + f(xi+1) ] / 2 · h
so the method reduces to the Trapezoidal Rule.
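A minimal sketch of the predict-correct cycle (names are my own; `n_corr` controls how many times the corrector is re-applied):

```python
def heun(f, x0, y0, h, n, n_corr=1):
    """Heun's method: Euler predictor, then trapezoid-style corrector."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        y_pred = y + k1 * h                       # predictor (Euler step)
        for _ in range(n_corr):                   # corrector, optionally iterated
            slope = 0.5 * (k1 + f(x + h, y_pred)) # average of end slopes
            y_pred = y + slope * h
        x, y = x + h, y_pred
    return y

y_end = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)   # y' = y; compare with e
```

Halving h cuts the error by roughly a factor of four, as the O(h²) global error predicts.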
Midpoint Method (C&C 4th ed., 25.2.2, p. 698)
Predict yi+1/2:   yi+1/2 = yi + f(xi, yi) h/2
Estimate slope as f(xi+1/2, yi+1/2), with xi+1/2 = xi + h/2
Correct yi+1:   yi+1 = yi + f(xi+1/2, yi+1/2) h
Note: Local Error O(h³) and Global Error O(h²) – same order as Heun.
General Runge-Kutta Methods (C&C 4th ed., 25.3, p. 701)
Employing a single step:
yi+1 = yi + φ(xi, yi, h) h
The increment function or slope, φ(xi, yi, h), is a weighted average:
φ = a1k1i + a2k2i + … + ajkji + … + ankni
for an nth-order method, where:
aj = weighting factors that generally sum to unity
kji = slope at a point xji such that xi ≤ xji ≤ xi+1
k1i = f(xi, yi)
k2i = f(xi + p1h, yi + q11k1ih)
k3i = f(xi + p2h, yi + q21k1ih + q22k2ih)
.....
kni = f(xi + pn-1h, yi + qn-1,1k1,ih + qn-1,2k2,ih + … + qn-1,n-1kn-1,ih)
First-order Runge-Kutta Method – Euler's Method – Global truncation error O(h)
yi+1 = yi + k1 h
where k1 = f(xi ,yi).
Second-order Runge-Kutta Methods (C&C 4th ed., 25.3.1, p. 702) – Global truncation error O(h²)
• Assume a2 = 1/2 — Heun w/ single corrector (no iteration):
yi+1 = yi + (½ k1 + ½ k2) h;   k1 = f(xi, yi);   k2 = f(xi + h, yi + h k1)
• Assume a2 = 1 — Midpoint (Improved Polygon) Method:
yi+1 = yi + k2 h;   k1 = f(xi, yi);   k2 = f(xi + ½h, yi + ½h k1)
• Assume a2 = 2/3 — Ralston's Method (min. bound on truncation error for 2nd-order RK):
yi+1 = yi + (⅓ k1 + ⅔ k2) h;   k1 = f(xi, yi);   k2 = f(xi + ¾h, yi + ¾h k1)
Classical Fourth-order Runge-Kutta Method (C&C 4th ed., 25.3.3, p. 707)
– Global truncation error O(h4).
yi+1 = yi + (1/6) [ k1 + 2 k2 + 2 k3 + k4 ] h
where
k1 = f(xi, yi)                 Euler (start-point) slope
k2 = f(xi + ½h, yi + ½k1h)     Midpoint slope
k3 = f(xi + ½h, yi + ½k2h)     Better midpoint slope
k4 = f(xi + h, yi + k3h)       Full-step (endpoint) slope
1. Global truncation error, O(h⁴)
2. If the derivative, f, is a function of only x, this reduces to Simpson's 1/3 Rule:
yi+1 = yi + (h/6)[ k1 + 4 k2 + k4 ],   where k2 = k3
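One classical RK4 step, as a sketch (names are mine):

```python
def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dx = f(x, y)."""
    k1 = f(x, y)                       # start-point slope
    k2 = f(x + h/2, y + k1 * h/2)      # midpoint slope
    k3 = f(x + h/2, y + k2 * h/2)      # better midpoint slope
    k4 = f(x + h, y + k3 * h)          # endpoint slope
    return y + (k1 + 2*k2 + 2*k3 + k4) * h / 6

# When f depends only on x, one step is exactly Simpson's 1/3 rule:
area = rk4_step(lambda x, y: x * x, 0.0, 0.0, 1.0)   # integral of x^2 on [0, 1]
```

For f = x², `area` equals 1/3 exactly, confirming the Simpson's-rule reduction in note 2.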
Examples of Frequently Encountered, Simple ODE's
1. Exponential growth
unconstrained growth of biological organisms,
positive feedback electrical systems, and
chemical reactions generating their own catalyst
dy/dt = λy     with solution     y(t) = y0 e^(λt)
2. Exponential decay
discharge of a capacitor,
decomposition of material in a river,
wash-out of chemicals in a reactor, and
radioactive decay
dy/dt = –λy    with solution     y(t) = y0 e^(–λt)
Numerical Solution of a simple growth ODE
y' = +2.77259 y with y(0) = 1.00; the solution is y = exp(2.77259 x) = 16^x.
Step sizes vary so that all methods use the same number of
function evaluations to progress from x = 0 to x = 1.
[Tables, flattened in extraction: numerical solutions of y' = 2.77259y at x = 0 to 1.
First comparison: Euler (h = 0.125), Heun w/o iteration (h = 0.25), and 4th-order
Runge-Kutta (h = 0.5) against the exact solution, with a column of h·ki values for R-K.
Second comparison: Euler (h = 0.0625), Heun w/o iteration (h = 0.125), and 4th-order
Runge-Kutta (h = 0.25) against the exact solution. Final values at x = 1:
exact 16.000; Euler 10.81 (h = 0.125) and 12.896 (h = 0.0625);
Heun 13.97 (h = 0.25) and 15.326 (h = 0.125); RK4 15.56 (h = 0.5) and 15.952 (h = 0.25).]
Which was able to provide the more accurate estimate of y(1)?
Higher-Order ODEs and Systems of Equations (C&C 4th ed., 25.4, p.
711)
An n-th order ODE can be converted into a system of n coupled 1st-order ODEs.
Systems of first order ODEs are solved just as one solves a single ODE.
Consider the 4th-order ODE:
y’’’’ + a(x) y’’’ + b(x) y’’ + c(x) y’ + d(x) y = f(x)
By letting
y’’’ = v3; y’’ = v2; and y’ = v1
this 4th-order ODE can be written as a system of four coupled 1st-order ODEs:
      | y  |   | v1                                           |
 d    | v1 |   | v2                                           |
 ---  | v2 | = | v3                                           |
 dx   | v3 |   | f(x) – a(x)v3 – b(x)v2 – c(x)v1 – d(x)y      |
Given the initial conditions @ x = 0: y(0), v1(0) = y'(0), v2(0) = y''(0), v3(0) = y'''(0), a
numerical scheme can be used to integrate this system forward in x.
Example: Single-degree-of-freedom, undamped, harmonically driven oscillator.
(see also C&C 4th ed., 28.4, p. 797)
ODE is 2nd order:
m d²x/dt² + k x = P(t)     or     d²x/dt² + ω²x = P(t)/m
m = mass
k = spring constant
ω = sqrt(k/m) = circular frequency
T = 2π/ω = natural period
f = ω/2π = natural frequency
Ω = driving frequency
P(t) = forcing function = P sin(Ωt)
Initial conditions: x(0) = 0 and dx/dt(0) = 0
Analytical solution:
x(t) = – (PΩ/ω) / [m(ω² – Ω²)] sin(ωt) + P / [m(ω² – Ω²)] sin(Ωt)
ENGRD 241 Lecture Notes
Section 7: Ordinary Differential Equations
Page 7-8 of 7-28
= (homogeneous soln.) + (particular soln.)
Recast 2nd-order ODE as two 1st-order ODE's:
dx/dt = v = f(t,x,v)                      with x(0) = 0
dv/dt = P sin(Ωt)/m – ω²x = g(t,x,v)      with v(0) = 0
We can solve these two 1st-order ODE's sequentially.
Let m = 0.25, k = 4π², P = 20, Ω = 8, and thus g = 80 sin(8t) – 16π²x.
Solve from t = 0 until t = 1.
For example, by the Euler method with h = 0.01:
t                      x                                       v
t0 = 0                 x0 = 0                                  v0 = 0
t1 = t0 + h = 0.01     x1 = x0 + f(t0,x0,v0)h = 0              v1 = v0 + g(t0,x0,v0)h = 0
t2 = t1 + h = 0.02     x2 = x1 + f(t1,x1,v1)h = 0              v2 = v1 + g(t1,x1,v1)h = 0.0639
t3 = t2 + h = 0.03     x3 = x2 + f(t2,x2,v2)h = 0.000639       v3 = v2 + g(t2,x2,v2)h = 0.1914
...                    ...                                     ...
tn = tn-1 + h = 1.00   xn = xn-1 + f(tn-1,xn-1,vn-1)h          vn = vn-1 + g(tn-1,xn-1,vn-1)h
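A sketch of this Euler march in Python. The constants are my reconstruction of the garbled originals (m = 0.25, k = 4π², P = 20, Ω = 8, chosen so that g = 80 sin(8t) − 16π²x):

```python
import math

m, k, P, Omega = 0.25, 4 * math.pi**2, 20.0, 8.0     # reconstructed constants
w2 = k / m                                           # omega^2 = 16*pi^2

f = lambda t, x, v: v                                # dx/dt
g = lambda t, x, v: P * math.sin(Omega * t) / m - w2 * x   # dv/dt

h, t, x, v = 0.01, 0.0, 0.0, 0.0
history = [(t, x, v)]
for _ in range(100):                                 # march from t = 0 to t = 1
    x, v = x + f(t, x, v) * h, v + g(t, x, v) * h    # simultaneous update
    t += h
    history.append((t, x, v))
```

The first few steps reproduce the table values: x3 ≈ 0.000639 and v3 ≈ 0.1914.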
Adaptive Step-size Control (C&C 4th ed., 25.5, p. 716)
Goal: with little additional effort estimate (bound) magnitude of
local truncation error at each step so that step size can be
reduced/increased if local error increases/decreases.
1. Repeat analysis at each time step with step length h and h/2.
Compare results to estimate local error. (C&C 4th ed., 25.5.1)
Use Richardson extrapolation to obtain higher order result.
2. Use a matched pair of Runge-Kutta formulas of order r and r+1 that use
common values of ki and yield an estimate or bound of the local truncation error.
C&C 4th ed., 25.5.2 discusses the 4th-5th-order Runge-Kutta-Fehlberg pair.
A 2nd–3rd order RK pair yielding a local truncation error estimate:
2nd-order Midpoint Method O(h²):
k1 = f(xi, yi);   k2 = f(xi + ½h, yi + ½h k1)
yi+1 = yi + k2 h
Third-order RK due to Kutta O(h³) (C&C 25.3.2):
k3 = f(xi + h, yi + h(2k2 – k1))
yi+1* = yi + (h/6)(k1 + 4k2 + k3)
The estimate of the truncation error for the midpoint formula is
Ei = yi+1 – yi+1* = – (h/6)(k1 – 2k2 + k3)
This is a central difference estimate of (const.)·h³ y'''
which describes the local truncation error for the midpoint method.
Use the more accurate values yi+1*, but Ei provides an estimate of
the local error that can be used for step-size control.
C&C Section 25.5.2 discusses Runge-Kutta-Fehlberg, a 4th-5th order
RK pair that provides an estimator of the local truncation error.
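Such a pair can be sketched in a few lines (midpoint RK2 plus Kutta's RK3, sharing k1 and k2; names are mine):

```python
import math

def rk23_step(f, x, y, h):
    """Advance one step; return the 3rd-order result and an error
    estimate for the embedded 2nd-order midpoint result."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h * k1 / 2)
    k3 = f(x + h, y + h * (2*k2 - k1))
    y2 = y + k2 * h                        # 2nd-order midpoint result
    y3 = y + h * (k1 + 4*k2 + k3) / 6      # 3rd-order Kutta result
    return y3, y2 - y3                     # E = -(h/6)(k1 - 2*k2 + k3)

y3, err = rk23_step(lambda x, y: y, 0.0, 1.0, 0.1)
true_err = math.exp(0.1) - 1.105           # actual error of the RK2 result
```

For y' = y with h = 0.1, `err` ≈ −1.67e-4 while the true RK2 error magnitude is ≈ 1.71e-4 — close enough to drive step-size control.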
Stiff Differential Equations (C&C 26.1, p. 726)
A stiff system of ODE’s is one involving rapidly changing components together with slowly changing
ones. In many cases, the rapidly varying components die away quickly, after which the solution is
dominated by the slow ones.
Even simple first-order ODE's can be stiff. C&C gives the example
dy/dt = –1000y + 3000 – 2000e^(–t),    y(0) = 0
with solution y(t) = 3 – 0.998e^(–1000t) – 2.002e^(–t). The middle term damps out very quickly, and after it
does, the slow solution closely follows the path y(t) ≈ 3 – 2e^(–t). We would need a very small time
step h = Δt to capture the behavior of the rapid transient and to preserve a stable and accurate solution,
and this would then make it very laborious to compute the slowly evolving solution.
A stability analysis of this equation provides important insights. For a solution to be stable means that
errors at any stage of computation are not amplified but are attenuated as computations proceed. To
analyze for stability, we consider the homogeneous part of the ODE
dy/dt = –ay    with solution    y(t) = y0 e^(–at)
Euler's method yields yi+1 = yi + (dyi/dt) h = yi – a yi h = yi (1 – ah). Assume some small error exists in the
initial condition y0 or in an early stage of the solution. Then we see that after n steps,
yn = y_early (1 – ah)^n. The magnitude of yn will remain bounded if and only if |1 – ah| ≤ 1. Thus, if h >
2/a, yn → ∞ as n → ∞. For this problem, a = 1000 and h must be less than 2/1000 = 0.002 to
obtain a stable solution; indeed, h must be considerably smaller than this to obtain an accurate
solution.
Euler’s method is known as an explicit method because the derivative is taken at the known point i.
An alternative is to use an implicit approach, in which the derivative is evaluated at a future step i+1.
The simplest method of this type is the backward Euler method, which yields
yi+1 = yi + (dyi+1/dt) h = yi – a yi+1 h    →    yi+1 = yi / (1 + ah)
Because yn will remain bounded for any value of h, this method is said to be unconditionally stable.
Implicit methods always entail more effort than explicit methods, especially for nonlinear equations or
sets of ODE’s, so the stability is gained at a price. Moreover, accuracy still governs the step size.
One needs to be aware of the occurrence of stiff equations. If the interest is in SLOWLY varying
dynamics one may:
1. Reformulate the model to omit FAST variables, or
2. Assume some interactions are at equilibrium, i.e., dy/dt = 0.
More generally, one must use special codes for solving stiff differential equations that are generally
implicit multistep methods (C&C Section 26.1, p. 723). For example, MATLAB includes such
algorithms (C&C Section 27.3.2, p. 772). We re-visit stability concepts after we consider multi-step
methods.
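The contrast is easy to see on C&C's example (a sketch: forward Euler at h = 0.01, five times the stability limit, versus backward Euler at the same h; for this linear ODE the implicit update can be solved in closed form):

```python
import math

def rhs(t, y):
    return -1000.0 * y + 3000.0 - 2000.0 * math.exp(-t)

def exact(t):
    return 3.0 - 0.998 * math.exp(-1000.0 * t) - 2.002 * math.exp(-t)

h, n = 0.01, 100              # h > 2/1000, so forward Euler must blow up

y_fwd = 0.0
for i in range(n):            # forward (explicit) Euler
    y_fwd += rhs(i * h, y_fwd) * h

y_bwd = 0.0
for i in range(n):            # backward (implicit) Euler, solved for y_{i+1}
    t1 = (i + 1) * h
    y_bwd = (y_bwd + h * (3000.0 - 2000.0 * math.exp(-t1))) / (1.0 + 1000.0 * h)
```

The explicit result diverges wildly, while the implicit one tracks exact(1) ≈ 2.26 despite the "too large" step — stability bought at the price of an implicit solve.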
Predictor-Corrector Multi-Step Methods for ODE's (C&C 26.2, p. 730)
Utilize valuable information at previous points; however, need to employ a single-step method
(typically 4th –order RK) to solve for the first few steps. For increased accuracy, use a predictor that
has truncation of same order as corrector so one can estimate errors and apply “modifiers.” In
addition, iterate the corrector to minimize truncation error and improve stability.
Non-Self-Starting (NSS) Heun Method
O(h³) Predictor:   y⁰i+1 = yᵐi-1 + f(xi, yᵐi) 2h
Higher-order predictor than the self-starting Heun.
This is an open integration formula (midpoint).
O(h³) Corrector:   yʲi+1 = yᵐi + [ f(xi, yᵐi) + f(xi+1, yʲ⁻¹i+1) ] / 2 · h
Notes: 1. Corrector is applied iteratively for j = 1, ..., m
2. yᵐi is fixed (from previous step iterations)
3. This is a closed integration formula (Trapezoid)
Multi-step methods. Two general schemes for solving ODE's:
1. Newton-Cotes Formulas, with ck from Table 21.4 and c̄k from Table 21.2 of C&C:
yi+1 = yi-n + h Σ(k=0 to n-1) ck fi-k + O(h^(n+1))         open predictor
yi+1 = yi-n+1 + h Σ(k=0 to n-1) c̄k fi-k+1 + O(h^(n+1))     closed corrector
2. Adams Formulas (generally more stable), with βk and β̄k determined from Tables 26.1 and 26.2,
respectively:
yi+1 = yi + h Σ(k=0 to n-1) βk fi-k + O(h^(n+1))           Adams-Bashforth open predictor
yi+1 = yi + h Σ(k=0 to n-1) β̄k fi-k+1 + O(h^(n+1))         Adams-Moulton closed corrector
Popular Methods
1. Heun non-self starting –– Newton-Cotes with n=1
2. Milne's –– Newton-Cotes with n=3 (but use Hamming’s corrector for better stability)
3. 4th-Order Adams –– • Predictor 4th-Order Adams-Bashforth
• Corrector 4th-Order Adams-Moulton
If predictor and corrector are of same order, we can obtain estimates of the truncation
error during computation.
Improving Accuracy and Efficiency (use of modifiers)
1. provide criterion for step size adjustment (Adaptive Methods)
2. employ modifiers determined from error analysis
For the Non-Self-Starting (NSS) Heun Method:
• Final Corrector Modifier:
Ec ≈ – (yᵐi+1 – y⁰i+1) / 5
yᵐi+1 ← yᵐi+1 – (yᵐi+1 – y⁰i+1) / 5
• Predictor Modifier (excluding 1st step):
Ep ≈ (4/5)(yᵐi – y⁰i)
y⁰i+1 ← y⁰i+1 + (4/5)(yᵐi – y⁰i)
w/ y⁰i = unmodified y⁰i+1 from the previous step
yᵐi = unmodified yᵐi+1 from the previous step
General Predictor-Corrector Schemes
Errors cited are local errors; global errors are one order of h lower.
Predictors:
Euler:
yi+1 = yi + h ƒ(xi,yi) + O(h²)
Midpoint (same as Non-Self-Starting Heun):
yi+1 = yi-1 + 2h ƒ(xi,yi) + O(h³)
Adams-Bashforth 2nd-Order:
yi+1 = yi + (h/2)[ 3ƒ(xi,yi) – ƒ(xi-1,yi-1) ] + O(h³)
Adams-Bashforth 4th-Order:
yi+1 = yi + (h/24)[ 55ƒ(xi,yi) – 59ƒ(xi-1,yi-1) + 37ƒ(xi-2,yi-2) – 9ƒ(xi-3,yi-3) ] + O(h⁵)
Hamming (and Milne's Method):
yi+1 = yi-3 + (4h/3)[ 2ƒ(xi,yi) – ƒ(xi-1,yi-1) + 2ƒ(xi-2,yi-2) ] + O(h⁵)
Correctors:
• refine the value of yi+1 given a ŷi+1
• more stable numerically – no spurious solutions which go out of control
Adams-Moulton 2nd-Order closed (Non-self-starting Heun):
yi+1 = yi + (h/2)[ ƒ(xi+1,yi+1) + ƒ(xi,yi) ] + O(h³)
Adams-Moulton 4th-Order closed (relatively stable):
yi+1 = yi + (h/24)[ 9ƒ(xi+1,yi+1) + 19ƒ(xi,yi) – 5ƒ(xi-1,yi-1) + ƒ(xi-2,yi-2) ] + O(h⁵)
Milne's (Newton-Cotes 3-point, Simpson's 1/3 Rule, not always stable!!!):
yi+1 = yi-1 + (h/3)[ ƒ(xi+1,yi+1) + 4ƒ(xi,yi) + ƒ(xi-1,yi-1) ] + O(h⁵)
Hamming (modified Milne, stable):
yi+1 = (9/8) yi – (1/8) yi-2 + (3h/8)[ ƒ(xi+1,yi+1) + 2ƒ(xi,yi) – ƒ(xi-1,yi-1) ] + O(h⁵)
Example: Single-degree-of-freedom, undamped, harmonically driven oscillator
ODE is 2nd order:
m d²x/dt² + k x = P(t)     or     d²x/dt² + ω²x = P(t)/m
m = mass;  k = spring constant;  ω = sqrt(k/m) = circular frequency;
T = 2π/ω = natural period;  f = ω/2π = natural frequency;
Ω = driving frequency;  P(t) = forcing function = P sin(Ωt)
Initial conditions: x(0) = 0 and dx/dt(0) = 0
Analytical solution:
x(t) = – (PΩ/ω) / [m(ω² – Ω²)] sin(ωt) + P / [m(ω² – Ω²)] sin(Ωt)
Recast 2nd-order ODE as two 1st-order ODE's:
dx/dt = v = f(t,x,v)                      with x(0) = 0
dv/dt = P sin(Ωt)/m – ω²x = g(t,x,v)      with v(0) = 0
Let m = 0.25, k = 4π², P = 20, Ω = 8, and thus g = 80 sin(8t) – 16π²x.
Use the 4th-ord. Adams Method to solve from t = 0 until t = 1 w/ h = 0.1.
Page 7-13 of 7-28
ENGRD 241 Lecture Notes
Section 7: Ordinary Differential Equations
Page 7-14 of 7-28
First, we need start-up values. Use 4th-order RK to get 4 points:
t      x(t)     v(t)
0.0    0.0000   0.0000
0.1    0.1038   2.6234
0.2    0.5324   5.1401
0.3    0.8692   0.4949
Second, the Adams-Bashforth 4th-order predictor is:
yi+1 = yi + (h/24)[ 55ƒ(xi,yi) – 59ƒ(xi-1,yi-1) + 37ƒ(xi-2,yi-2) – 9ƒ(xi-3,yi-3) ] + O(h⁵)
We apply this predictor to both x and v:
x(0.4) = x(0.3) + (0.1/24)[55(0.4949) – 59(5.1401) + 37(2.6234) – 9(0.0)] = 0.1235
v(0.4) = v(0.3) + (0.1/24)[55 g(0.3, 0.8692, 0.4949) – 59 g(0.2, 0.5324, 5.1401)
         + 37 g(0.1, 0.1038, 2.6234) – 9 g(0, 0, 0)] = –11.2492
Third, the Adams-Moulton 4th-order corrector is:
yi+1 = yi + (h/24)[ 9ƒ(xi+1,yi+1) + 19ƒ(xi,yi) – 5ƒ(xi-1,yi-1) + ƒ(xi-2,yi-2) ] + O(h⁵)
We apply the corrector to both x and v to obtain the 1st iteration:
x(0.4) = x(0.3) + (0.1/24)[9(–11.2492) + 19(0.4949) – 5(5.1401) + (2.6234)] = 0.8169
v(0.4) = v(0.3) + (0.1/24)[9 g(0.4, 0.1235, –11.2492) + 19 g(0.3, ...) ...] = –6.7435
We then can iterate until convergence:
x(0.4) = 0.8169, 0.842862, 0.843847, 0.843834, ...
v(0.4) = –6.7435, –10.84973, –11.00371, –11.00949, ...
Do the Second and Third tasks for each successive step.
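The full march can be sketched as follows (RK4 start-up, AB4 predictor, AM4 corrector iterated a fixed number of times). The constants are my reconstruction of the garbled originals, so the printed iteration values in the notes may not be reproduced digit-for-digit:

```python
import math

m, k, P, Omega = 0.25, 4 * math.pi**2, 20.0, 8.0   # reconstructed constants
w2 = k / m
h = 0.1
f = lambda t, x, v: v
g = lambda t, x, v: P * math.sin(Omega * t) / m - w2 * x

def rk4_sys(t, x, v):
    """One RK4 step on the coupled (x, v) system."""
    k1x, k1v = f(t, x, v), g(t, x, v)
    k2x, k2v = (f(t + h/2, x + k1x*h/2, v + k1v*h/2),
                g(t + h/2, x + k1x*h/2, v + k1v*h/2))
    k3x, k3v = (f(t + h/2, x + k2x*h/2, v + k2v*h/2),
                g(t + h/2, x + k2x*h/2, v + k2v*h/2))
    k4x, k4v = f(t + h, x + k3x*h, v + k3v*h), g(t + h, x + k3x*h, v + k3v*h)
    return (x + (k1x + 2*k2x + 2*k3x + k4x) * h / 6,
            v + (k1v + 2*k2v + 2*k3v + k4v) * h / 6)

ts, xs, vs = [0.0], [0.0], [0.0]
for _ in range(3):                      # start-up: 4 points from RK4
    x, v = rk4_sys(ts[-1], xs[-1], vs[-1])
    ts.append(ts[-1] + h); xs.append(x); vs.append(v)

for i in range(3, 10):                  # AB4 predict, then iterate AM4 corrector
    fx = [f(ts[j], xs[j], vs[j]) for j in range(i - 3, i + 1)]
    fv = [g(ts[j], xs[j], vs[j]) for j in range(i - 3, i + 1)]
    xp = xs[i] + h/24 * (55*fx[3] - 59*fx[2] + 37*fx[1] - 9*fx[0])
    vp = vs[i] + h/24 * (55*fv[3] - 59*fv[2] + 37*fv[1] - 9*fv[0])
    t1 = ts[i] + h
    for _ in range(10):
        xp, vp = (xs[i] + h/24 * (9*f(t1, xp, vp) + 19*fx[3] - 5*fx[2] + fx[1]),
                  vs[i] + h/24 * (9*g(t1, xp, vp) + 19*fv[3] - 5*fv[2] + fv[1]))
    ts.append(t1); xs.append(xp); vs.append(vp)
```

Each Adams step costs two fresh evaluations (one predictor pair, one per corrector pass), versus four for RK4 — the efficiency advantage cited below.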
Comparison of the most frequently used One-Step and Multi-Step methods
Advantages of Runge-Kutta such as 4th-Order Formula
• simple to apply
• self-starting (single-step)
• easy to change step size
• no spurious solutions (a one-step method introduces no extraneous roots)
• can use matched pairs of different order to estimate local truncation error
Advantages of Predictor-Corrector Formulas such as Adams-Moulton O(h4)
• without iteration twice as efficient as Runge-Kutta 4th-Order
• local truncation error is easily estimated from difference so we can adjust step sizes and apply
modifiers. But decreasing step size requires new back points.
For same step sizes, h, error terms of methods are reasonably similar.
Stability Analysis Motivation (really fun stuff)
• stable means errors at any stage of computation are attenuated (not amplified) as computations
proceed.
• Runge-Kutta introduces no spurious solutions
• Predictor-Corrector Schemes
– more accurate with corrector
– predictors may be unstable (extrapolation)
– if corrector is unstable, big problems!!! (example: Milne's method)
Problems arise because we are replacing a 1st order ODE
with a high order difference equation.
Consider:   dy/dt = ƒ(t,y)
A 1st-order Taylor series gives:
dy/dt ≈ ƒ(to,yo) + ∂ƒ(to,yo)/∂t (t – to) + ∂ƒ(to,yo)/∂y (y – yo)
independent of y:
    ƒ(to,yo) + ∂ƒ(to,yo)/∂t (t – to)    →  solution to the ODE is equivalent to
                                            numerical integration
dependent on y:
    + ∂ƒ(to,yo)/∂y (y – yo)             →  feedback may amplify,
                                            causing instability
Stability and Error Analysis
As before, experiment with the initial value problem where f is a function of y:
y' = K y
y(0) = yo
for different positive and negative values of K. See if artificial
solutions become larger than true solution:
y(t) = y0 exp(Kt).
Let t = ih, with i = step number and h = stepsize.
Analytical Solution:
y(t) = y(ih) = y0 exp(Kt) = y0 e^(K i h) = y0 [exp(hK)]^i
     = y0 [ 1 + hK + (hK)²/2 + (hK)³/3! + … ]^i
Local truncation error of the Heun method:
yi+1 = yi + (h/2)( ŷ'i+1 + y'i )
where   ŷi+1 = yi + h y'i
Combining these relationships with y'i = K yi yields
yi+1 = yi + (h/2)[ K(yi + h K yi) + K yi ] = yi [ 1 + hK + (hK)²/2 ]
     = yi [ exp(hK) + O(h³) ]
so that the local truncation error is O(h³).
Global truncation error for Heun follows from the above for fixed t = nh:
yn = y0 [ 1 + hK + (hK)²/2 ]^n
   = y0 [ exp(hK) + O(h³) ]^n
   = y0 exp(nhK) [ 1 + exp(–hK) O(h³) ]^n
   = y0 exp(Kt) [ 1 + n exp(–hK) O(h³) + n(n–1)[exp(–hK) O(h³)]²/2 + ... ]
   = y0 exp(Kt) [ 1 + O(h²) ]    where n exp(–hK) O(h³) = O(h²)
This shows that the global truncation error for the Heun method is indeed O(h²).
The numerical solution always follows the true solution + O(h²).
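The conclusion is easy to check numerically (a sketch with K = −2; halving h should cut the error at t = 1 by about four):

```python
import math

def heun_at_1(K, h):
    """March y' = K*y from y(0) = 1 to t = 1 with Heun (no corrector iteration)."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        k1 = K * y
        y += 0.5 * (k1 + K * (y + k1 * h)) * h   # factor 1 + hK + (hK)^2/2 per step
    return y

K = -2.0
e1 = abs(heun_at_1(K, 0.1) - math.exp(K))
e2 = abs(heun_at_1(K, 0.05) - math.exp(K))
ratio = e1 / e2        # tends toward 4 as h -> 0 for a globally O(h^2) method
```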
Stability Analysis for the Newton-Cotes midpoint predictor used alone:
yi+1 = yi-1 + 2h y'i
Repeated use of the predictor yields the linear difference equation:
yi+1 = yi-1 + 2h[ K yi ] = 2hK yi + yi-1
The linear differential equation a y'' + b y' + c y = 0 has two linearly independent solutions, so that in
general:
y(t) = A1 exp(λ1 t) + A2 exp(λ2 t)
where λj are the roots of a λ² + b λ + c = 0.
Similarly the linear difference equation
a yi+1 + b yi + c yi-1 = 0
has two linearly independent solutions A1 (r1)^i and A2 (r2)^i,
where rj are the roots of a r² + b r + c = 0.
Thus yi+1 = 2hK yi + yi-1 has two linearly independent solutions, so the general solution is:
yi = A1 (r1)^i + A2 (r2)^i
where rj are the roots of r² = 2hK r + 1. These roots are
r = hK ± sqrt[ (hK)² + 1 ]
so
r1 ≈ 1 + hK + (hK)²/2 + ...    and    r2 ≈ –1 + hK – (hK)²/2 + ...
The first root r1 equals { exp(hK) + O(h³) },
so A1 (r1)^i will approximate the true solution of the ODE.
However, the second root r2, equal to { –exp(–hK) + O(h³) },
is a spurious solution to the difference equation.
When K < 0, |r2| > 1 > |r1|.
With increasing i, A2 (r2)^i increases whereas A1 (r1)^i decreases.
So the bad solution grows while the good solution dies away and is lost. => BAD!
Predictors often have stability problems because they predict/extrapolate ahead.
This is why stable correctors are used. They are more accurate and reliable.
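The spurious root is easy to exhibit (a sketch: y' = K y with K = −2, seeded with exact values so any A2 component comes only from the O(h³) mismatch):

```python
import math

K, h = -2.0, 0.1
y_prev, y_curr = 1.0, math.exp(K * h)     # exact values at t = 0 and t = h
for _ in range(200):
    # midpoint predictor used alone: y_{i+1} = y_{i-1} + 2h*K*y_i
    y_prev, y_curr = y_curr, y_prev + 2 * h * K * y_curr
```

By t ≈ 20 the true solution is of order 1e-18, yet the computed value has grown enormous (and oscillates in sign, the signature of the r2 ≈ −(1 + |K|h) root).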
Boundary Value Problems (C&C 27.1, p. 753)
Recall: Unique solution to an nth -order ODE requires n given conditions
Conditions for 2nd Order ODE:
Initial value problem
y(xo) = yo
y'(xo) = y'o
Boundary value problem
y(xo) = yo
y(xf) = yf
Shooting Method (C&C 27.1.1, p. 754)
Solve d²y/dx² = ƒ(x,y,y'), given y(xo) = yo and y(xf) = yf
1. Convert 2nd-Order ODE to two 1st-Order ODE's
dy
=z
dx
dz
= ƒ(x,y,z)
dx
2. Given the initial condition, y(xo) = yo, estimate the initial condition z(xo) = zo(1)
3. Solve the I.V. ODE from x = xo to x = xf to find yf(1)
4. Re-estimate the initial condition z(xo) = zo(2), and again solve from x = xo to x = xf to find yf(2)
5. If the ODE is linear, based on the above we can interpolate or extrapolate to find the "correct" zo.
Given yf, (yf(1), zo(1)), and (yf(2), zo(2)):
zo = zo(1) + [ zo(2) – zo(1) ] / [ yf(2) – yf(1) ] · ( yf – yf(1) )
If the ODE is nonlinear, iterate until zo yields the correct yf.
Example: Single-degree-of-freedom, undamped, harmonically driven oscillator (same as previous
example). ODE is 2nd order:
m d²x/dt² + k x = P(t)     or     d²x/dt² + ω²x = P(t)/m
m = mass;  k = spring constant;  ω = sqrt(k/m) = circular frequency;
T = 2π/ω = natural period;  f = ω/2π = natural frequency;
Ω = driving frequency;  P(t) = forcing function = P sin(Ωt)
Now, we are given boundary conditions:
x(0) = 0 and x(1) = 0.5
(i.e., dx/dt(0) is no longer necessarily zero, but is unknown)
Let us use the Shooting Method by employing a 4th-order RK.
Recast 2nd-order ODE as two 1st-order ODE's:
dx/dt = v = f(t,x,v)                      with x(0) = 0
dv/dt = P sin(Ωt)/m – ω²x = g(t,x,v)      with v(0) = ?
Let m = 0.25, k = 4π², P = 20, Ω = 8, and thus g = 80 sin(8t) – 16π²x.
Use the 4th-ord. RK Method to solve from t = 0 until t = 1 w/ h = 0.1.
1. Guess v(0) = 1.0
2. Solve for x(1) = 0.9169 (Note: not equal to 0.5)
3. Guess v(0) = 100
4. Solve for x(1) = 0.0933
5. Because our ODE is linear, we can interpolate:
v(0) = 1.0 + (100 – 1.0) / (0.0933 – 0.9169) × (0.5 – 0.9169) = 51.113
6. With v(0) = 51.113, check for x(1) = 0.5000    OK
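The whole procedure can be sketched as follows (RK4 with h = 0.1 as in the notes; constants are my reconstruction, so the trial values of x(1) are illustrative, but the interpolation logic is exact for this linear ODE):

```python
import math

m, k, P, Omega = 0.25, 4 * math.pi**2, 20.0, 8.0   # reconstructed constants
w2 = k / m
h = 0.1
g = lambda t, x, v: P * math.sin(Omega * t) / m - w2 * x

def x_at_1(v0):
    """RK4 on dx/dt = v, dv/dt = g from t = 0 to t = 1; return x(1)."""
    t, x, v = 0.0, 0.0, v0
    for _ in range(10):
        k1x, k1v = v, g(t, x, v)
        k2x, k2v = v + k1v*h/2, g(t + h/2, x + k1x*h/2, v + k1v*h/2)
        k3x, k3v = v + k2v*h/2, g(t + h/2, x + k2x*h/2, v + k2v*h/2)
        k4x, k4v = v + k3v*h,   g(t + h,   x + k3x*h,   v + k3v*h)
        x += (k1x + 2*k2x + 2*k3x + k4x) * h / 6
        v += (k1v + 2*k2v + 2*k3v + k4v) * h / 6
        t += h
    return x

target = 0.5
x1, x2 = x_at_1(1.0), x_at_1(100.0)                     # two trial "shots"
v0 = 1.0 + (100.0 - 1.0) / (x2 - x1) * (target - x1)    # linear interpolation
```

Because x(1) depends linearly (affinely) on v(0), the interpolated shot lands on the target boundary value to round-off.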
Finite Difference Method (C&C 27.1.2, p. 757)
• The shooting method is inefficient for higher-order equations with several boundary conditions.
• Finite Difference method has advantage of being direct (not iterative) for linear problems, but
requires the solution of simultaneous algebraic equations.
1. Divide the domain to obtain n interior discrete points (usually evenly spaced @ h):
xo = x0 • x1 • x2 • x3 • … • xi-1 • xi • xi+1 • … • xn • xn+1 = xf,    Δx = h
2. Write a finite divided difference expression for the ODE at each interior point.
3. Use known values of y at x = xo and x = xf
4. Set-up n-linear equations with n-unknowns. Realizing that the system is banded and often
symmetric, solve with most efficient method.
Note: If higher-order FDD equations and/or centered differences are used, may need to employ
imaginary or "phantom" points outside of domain to express B.C.'s (together with appropriate
FD versions of B.C.'s).
Sailboat mast deflection problem
d⁴v/dz⁴ = f(z)/EI;    base: v(0) = 0, v'(0) = 0;    top: v''(L) = 0, v'''(L) = 0
Centered Finite-Divided-Difference expressions of O(h²):
v'''' = d⁴v/dz⁴ ≈ ( vi-2 – 4vi-1 + 6vi – 4vi+1 + vi+2 ) / h⁴       [load dist'n f(z)]/EI
v'''  = d³v/dz³ ≈ ( –vi-2 + 2vi-1 – 2vi+1 + vi+2 ) / (2h³)         shear/EI
v''   = d²v/dz² ≈ ( vi-1 – 2vi + vi+1 ) / h²                       moment/EI
v'    = dv/dz  ≈ ( vi+1 – vi-1 ) / (2h)                            slope
Mast with 11 nodes (0 through 10), plus an imaginary node at –1 below the base and
imaginary nodes at 11 and 12 above the tip. Use v(0) given and v'(0) = 0 to express
v-1 in terms of v1; use v''(L) = 0 and v'''(L) = 0 to express v11 and v12 in terms of
interior v's.
Use h = L/10. Write the FD equations of d⁴v/dz⁴ = f(z)/EI at points 1 through 10:
@ 1:  v-1 – 4v0 + 6v1 – 4v2 + v3 = h⁴ f(z1)/EI
@ 2:  v0 – 4v1 + 6v2 – 4v3 + v4 = h⁴ f(z2)/EI
...
@ 9:  v7 – 4v8 + 6v9 – 4v10 + v11 = h⁴ f(z9)/EI
@ 10: v8 – 4v9 + 6v10 – 4v11 + v12 = h⁴ f(z10)/EI
We now need to eliminate v-1, v0, v11, and v12 with the four boundary conditions:
v(0) = 0:     v0 = 0
v'(0) = 0:    (v1 – v-1)/(2h) = 0     →   v-1 = v1
v''(L) = 0:   (v9 – 2v10 + v11)/h² = 0    →   v11 = 2v10 – v9
v'''(L) = 0:  (–v8 + 2v9 – 2v11 + v12)/(2h³) = 0   →   v12 = v8 – 4v9 + 4v10
ENGRD 241 Lecture Notes
Section 7: Ordinary Differential Equations
@ i = 1:    7v1 – 4v2 + v3 = h⁴ f(x1)/EI
@ i = 2:    –4v1 + 6v2 – 4v3 + v4 = h⁴ f(x2)/EI
@ i = 9:    v7 – 4v8 + 5v9 – 2v10 = h⁴ f(x9)/EI
@ i = 10:   2v8 – 4v9 + 2v10 = h⁴ f(x10)/EI

| 7 -4  1                       | | v1 |          | f(x1)  |
|-4  6 -4  1                    | | v2 |          | f(x2)  |
| 1 -4  6 -4  1                 | | v3 |          | f(x3)  |
|    1 -4  6 -4  1              | | v4 |          | f(x4)  |
|       1 -4  6 -4  1           | | v5 |  = h⁴/EI | f(x5)  |
|          1 -4  6 -4  1        | | v6 |          | f(x6)  |
|             1 -4  6 -4  1     | | v7 |          | f(x7)  |
|                1 -4  6 -4  1  | | v8 |          | f(x8)  |
|                   1 -4  5 -2  | | v9 |          | f(x9)  |
|                      2 -4  2  | | v10|          | f(x10) |
Once we solve for the vi's, we can obtain secondary results such as bending moments and shear forces
by substituting the finite-divided-difference operators and the values of the vi's into such equations as:
M = EI v''
V = EI v'''
For more refined results, we can use a smaller h and more segments.
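As a sketch, the banded system can be assembled and solved directly. L, EI, and a uniform load f(z) = q = 1 are hypothetical placeholder values, chosen so the tip deflection can be checked against the classical cantilever result qL⁴/(8EI) = 0.125:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on a small dense system."""
    n = len(b)
    for p in range(n):
        piv = max(range(p, n), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, n):
            mfac = A[r][p] / A[p][p]
            for c in range(p, n):
                A[r][c] -= mfac * A[p][c]
            b[r] -= mfac * b[p]
    x = [0.0] * n
    for p in range(n - 1, -1, -1):
        x[p] = (b[p] - sum(A[p][c] * x[c] for c in range(p + 1, n))) / A[p][p]
    return x

n, L, EI, q = 10, 1.0, 1.0, 1.0          # placeholder values for illustration
h = L / n
A = [[0.0] * n for _ in range(n)]
for i in range(n):                        # 5-point v'''' stencil at nodes 1..10
    for j, c in zip(range(i - 2, i + 3), (1.0, -4.0, 6.0, -4.0, 1.0)):
        if 0 <= j < n:
            A[i][j] = c
A[0][0] = 7.0                             # v(0) = 0 and v'(0) = 0 folded in
A[n-2][n-2], A[n-2][n-1] = 5.0, -2.0      # v''(L) = 0 folded into row 9
A[n-1][n-3], A[n-1][n-2], A[n-1][n-1] = 2.0, -4.0, 2.0   # v'''(L) = 0, row 10

v = solve(A, [h**4 * q / EI] * n)         # deflections v1..v10
```

With these placeholder values, the computed tip deflection v[-1] lands near 0.125 and improves as h shrinks.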
Merits of Different Numerical Methods for ODE Boundary Value Problems
• Shooting method
 Conceptually simple and easy.
 Inefficient for higher-order systems w/ many boundary conditions.
 May not converge for nonlinear problems.
 Can blow up for bad guess of initial conditions.
• Finite Difference method
 Stable
 Direct (not iterative) for linear problems.
 Requires solution of simultaneous algebraic eqns.
 More complex.
FD better suited for eigenvalue problems.
Engineering Applications of Eigenvalue Problems (C&C 27.2, p. 759)
1. Natural periods & modes of vibration
of dynamic systems and structures. Boundary-value problem from separation of variables:
m·a = force:   m(x) ∂²y(x,t)/∂t² = k(x) ∂²y/∂x²
If one assumes that y has the form:
y(x,t) = exp(iωt) u(x)
then the displacement function u(x) satisfies the ODE:   –ω² u(x) = [k(x)/m(x)] d²u/dx²
which may be written:   a(x) d²u/dx² + ω² u(x) = 0
With an FD approximation this becomes:   [A]{ui} = –ω²{ui}
2. Buckling loads and modes of structures (C&C 27.2.3, p. 762)
Deflection of vertically loaded beam with horizontally constrained end:
EI y'' + Py = 0;  y(0) = 0;  y(L) = 0.
Is there a nonzero deflection?
3. Directions and values of principal stresses.
4. Directions and values of principal moments of inertia.
5. Condition numbers of linear systems of equations.
6. Stability criteria for numerical solution of PDE's.
7. Other problems in which "principal values" are sought.
Matrix Form of Eigenvalue Problem
[A]{X} = λ{X}    →    ( [A] – λ[I] ){X} = 0,   but {X} ≠ 0
det[ A – λI ] = 0  ==> find λ
• If [A] is n x n, there are n eigenvalues λi and n eigenvectors {X}i
• If [A] is symmetric, the eigenvalues and eigenvectors are real
and the eigenvectors are orthogonal:
{Xi}ᵀ{Xj} = 0 if j ≠ i;    = ci² if j = i
• Because only the relative values of the components of {X}i are
significant, each eigenvector can be scaled by 1/ci to yield
"orthonormality":
{Xi}ᵀ{Xj} = 0 if j ≠ i;    = 1 if j = i
d 2u
dx 2
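The orthonormality property is easy to see numerically. A quick sketch (an arbitrary symmetric example matrix, not from the notes): NumPy's `eigh` returns orthonormal eigenvectors for a symmetric matrix, so stacking them as columns of V gives VᵀV = I.

```python
import numpy as np

# An arbitrary symmetric example matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh: real eigenvalues and orthonormal eigenvectors (columns of V)
lam, V = np.linalg.eigh(A)

# {X}i^T {X}j = 0 for i != j and 1 for i == j, i.e. V^T V = I
G = V.T @ V
print(np.allclose(G, np.eye(3)))
```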
Computation of Eigenvalues and Eigenvectors
1. Power and Inverse Power methods for the largest and smallest eigenvalues (C&C 4th ed., Sec. 27.2.5, p. 767)
2. Direct numerical algorithms: Jacobi, Givens, Householder, QR factorization (C&C 4th ed., Sec. 27.2.6, p. 770)
3. Use of the characteristic polynomial for "toy" problems, i.e., with n no more than 2 or 3 (C&C 4th ed., Sec. 27.2.4, p. 765)
Power Method for the largest eigenvalue
Compute xt+1 = A xt / || xt ||
 for larger and larger t, to estimate the largest λ(A).
 Always works for symmetric A.
Inverse Power Method for the smallest eigenvalue
Compute xt+1 = B xt / || xt ||
 for larger and larger t, to estimate the largest λ(B).
 Here B = A⁻¹. The eigenvalues of B are the inverses of those of A.
 BUT WE DO NOT COMPUTE A⁻¹!
(Don't do it. Instead, use the LU decomposition of A.)
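A minimal power-method sketch (the function name and the choice of the max-magnitude norm are ours, not from the notes). Each step multiplies by A and renormalizes; the normalizing factor converges to the largest eigenvalue and the iterate to its eigenvector.

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration: repeat x <- A x, then normalize.
    The normalizing factor converges to |lambda_max| and the
    normalized iterate to the corresponding eigenvector."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        x = A @ x
        lam = np.linalg.norm(x, np.inf)   # normalizing factor ~ |lambda_max|
        x = x / lam
    return lam, x

# Small symmetric example: eigenvalues of [[4,1],[1,3]] are (7 ± sqrt(5))/2
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, x = power_method(A)
```

Normalizing each step (rather than computing [A]^m {y} directly) avoids overflow while leaving the direction of the iterate unchanged.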
Shifting method for any eigenvalue:
Suppose we want to compute the eigenvector whose eigenvalue is about σ.
Let B = (A – σI)⁻¹ and compute xt+1 = B xt / || xt ||
for larger and larger t, to estimate the largest λ[(A – σI)⁻¹].
This will compute the eigenvalue nearest to σ!
But we DO NOT COMPUTE THE INVERSE.
Use the LU decomposition. If A has eigenvector–eigenvalue pairs (ei, λi),
(A – σI) ei = A ei – σ I ei = λi ei – σ ei = (λi – σ) ei
This shifted matrix has the same eigenvectors, ei, as A, with shifted eigenvalues (λi – σ).
Hence (A – σI)⁻¹ has eigenvalues (λi – σ)⁻¹.
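A sketch of the shifting method, assuming SciPy is available for the LU factorization (the function name and the Rayleigh-quotient step at the end are illustrative additions). Note that (A – σI) is factored once, and each iteration is just a back-substitution, never an explicit inverse:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shifted_inverse_power(A, sigma, iters=100):
    """Iterate x <- (A - sigma I)^{-1} x / ||x|| without forming the
    inverse: LU-factor (A - sigma I) once, back-substitute each step.
    Converges to the eigenpair of A whose eigenvalue is nearest sigma."""
    n = A.shape[0]
    lu, piv = lu_factor(A - sigma * np.eye(n))   # one-time factorization
    x = np.ones(n)
    for _ in range(iters):
        x = lu_solve((lu, piv), x)               # solves (A - sigma I) y = x
        x = x / np.linalg.norm(x, np.inf)
    # Rayleigh quotient recovers the eigenvalue of A itself
    lam = (x @ A @ x) / (x @ x)
    return lam, x

# Example: the three-mass matrix used later in these notes;
# its eigenvalue nearest sigma = 2.0 is about 2.29
A = np.array([[5.0, -2.0, 0.0],
              [-2.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])
lam, x = shifted_inverse_power(A, sigma=2.0)
```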
Some Engineering Applications of Eigenvalue Problems:
1. Natural periods and modes of vibration of structures and dynamical systems. (C&C Ex. 27.4, p.
761)
Application: determination of the natural frequencies of a system
Example: A mass-spring system with three identical (frictionless) masses connected by three
springs with different spring constants.
The displacement of each spring is measured relative to its own local coordinate system with an origin
at the spring's equilibrium position.
m d²x1/dt² = –3k x1 – 2k (x1 – x2)
m d²x2/dt² = –2k (x2 – x1) – k (x2 – x3)
m d²x3/dt² = –k (x3 – x2)
If one assumes that x has the form xi = ai cos ωt, then d²xi/dt² = –ω² ai cos ωt.
Then, with λ = mω²/k, the governing equations become:
–(mω²/k) a1 = –5a1 + 2a2        ==>  5a1 – 2a2 = λ a1
–(mω²/k) a2 = 2a1 – 3a2 + a3    ==>  –2a1 + 3a2 – a3 = λ a2
–(mω²/k) a3 = a2 – a3           ==>  –a2 + a3 = λ a3
In matrix form:
[ 5   –2    0 ] { a1 }     { a1 }
[ –2   3   –1 ] { a2 } = λ { a2 }        [A] {X} = λ {X}
[ 0   –1    1 ] { a3 }     { a3 }
or
[ 5–λ   –2     0   ] { a1 }
[ –2    3–λ   –1   ] { a2 } = 0          [A – λI] {X} = 0   (homogeneous equation)
[ 0     –1    1–λ  ] { a3 }
For a non-trivial solution:
                    [ 5–λ   –2     0   ]
det [A – λI] = det  [ –2    3–λ   –1   ] = 0   ==>  find λ
                    [ 0     –1    1–λ  ]
The determinant yields a cubic equation for λ:
(5 – λ)[(3 – λ)(1 – λ) – (–1)(–1)] + 2[(–2)(1 – λ) – (–1)(0)] = 0
(5 – λ)[λ² – 4λ + 2] – 2[2 – 2λ] = 0
6 – 18λ + 9λ² – λ³ = 0
The three solutions of the cubic equation are the three eigenvalues:
λ1 = 6.29    λ2 = 2.29    λ3 = 0.42
Thus, there are 3 possible natural frequencies. Recall that λ = mω²/k, so ω = √(λk/m), and we can use this to find the 3 ω's in radians/second.
The corresponding eigenvectors {a}i are found by solving:
[ 5–λi    –2      0     ] { a1i }
[ –2      3–λi   –1     ] { a2i } = 0
[ 0       –1     1–λi   ] { a3i }
Because this is a homogeneous equation, we can only find the relative values of the ai's. For λ1 = 6.29:
[ 5–6.29    –2        0       ] { a1i }
[ –2        3–6.29   –1       ] { a2i } = 0
[ 0         –1       1–6.29   ] { a3i }
which gives a2/a1 = –0.645 and a3/a1 = 0.122.
Only the relative values of the ai's are significant. Thus, setting a1 = 1.00:
for λ1 = 6.29,   {a}1 = { 1.000,  –0.645,  0.122 }T
Similarly,
for λ2 = 2.29,   {a}2 = { 0.74,  1.000,  –0.77 }T
for λ3 = 0.42,   {a}3 = { 0.25,  0.55,  1.00 }T
These three eigenvectors are the natural modes of vibration corresponding to the respective natural
frequencies of vibration.
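The worked example above can be double-checked with a library eigensolver. A brief sketch (the variable names are ours): `eigh` on the symmetric system matrix should reproduce the three eigenvalues and, after scaling so a1 = 1, the first mode shape.

```python
import numpy as np

# System matrix from the three-mass example
A = np.array([[5.0, -2.0, 0.0],
              [-2.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

# eigh returns eigenvalues in ascending order for a symmetric matrix
lam, V = np.linalg.eigh(A)
# lam should be close to [0.42, 2.29, 6.29]

# Mode shape for the largest eigenvalue, scaled so a1 = 1.00
mode1 = V[:, 2] / V[0, 2]
# mode1 should be close to [1.000, -0.645, 0.122]
```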
POWER METHOD for Eigenvector Analysis (C&C 27.2.5, p. 767)
(An iterative approach for solving for eigenvalues & eigenvectors)
Assume that a vector {y} can be expressed as a linear combination of the eigenvectors:
{y} = b1{x1} + b2{x2} + . . . + bn{xn} = Σ(i=1..n) bi {xi}
Multiplying the above equation by [A] yields:
[A] {y} = Σ(i=1..n) bi [A]{xi} = Σ(i=1..n) bi λi {xi}
Order the λi so that
|λ1| > |λ2| > |λ3| > . . . > |λn|
Then [A] {y} = λ1 [ b1{x1} + Σ(i=2..n) bi (λi/λ1) {xi} ],  where |λi/λ1| < 1.
Multiplying the equation by [A] again, we have:
[A]² {y} = λ1² [ b1{x1} + Σ(i=2..n) bi (λi/λ1)² {xi} ]
Repeating this process m times:
[A]^m {y} = λ1^m [ b1{x1} + Σ(i=2..n) bi (λi/λ1)^m {xi} ]
Since (λi/λ1)^m → 0 for all i ≠ 1 as m → ∞,
[A]^m {y} → λ1^m b1 {x1}
Since we can only know the relative values of the elements of {x}, we normalize {x} at each iteration. If {x} is normalized so that its largest element equals 1, the normalizing factor converges to the first (largest) eigenvalue λ1, and {x} converges to {x}1.
The remaining eigenvectors, corresponding to increasingly smaller eigenvalues, can then be found sequentially.
Recall that
[A] {y} = λ1 [ b1{x1} + Σ(i=2..n) bi (λi/λ1) {xi} ]
Subtracting λ1 b1 {x}1 from both sides:
[A] {y} – λ1 b1 {x}1 = Σ(i=2..n) bi λi {xi}
Regrouping in the same fashion as before:
[A] {y} – λ1 b1 {x}1 = λ2 [ b2{x2} + Σ(i=3..n) bi (λi/λ2) {xi} ]
and we may solve for {x}2 in the same fashion.
POWER Method: Numerical Example
[ 3  7  9 ] { x1 }     { x1 }
[ 9  4  3 ] { x2 } = λ { x2 }
[ 9  3  8 ] { x3 }     { x3 }
Find the maximum eigenvalue and its eigenvector.
Initial guess: {y}T = {1  1  1}
[ 3  7  9 ] { 1 }   { 19 }          { 0.95 }
[ 9  4  3 ] { 1 } = { 16 } = 20 ×   { 0.80 }
[ 9  3  8 ] { 1 }   { 20 }          { 1.00 }
Approximate eigenvalue: 20.   Approximate eigenvector: { 0.95  0.80  1.00 }T.
ENGRD 241 Lecture Notes
Section 7: Ordinary Differential Equations
2nd iteration:
[ 3  7  9 ] { 0.95 }   { 17.45 }             { 0.92084 }
[ 9  4  3 ] { 0.80 } = { 14.75 } = 18.95 ×   { 0.77836 }
[ 9  3  8 ] { 1.00 }   { 18.95 }             { 1.00 }
Approximate error:
εa = max over j of |(ei,j – ei–1,j) / ei,j| × 100% = |0.92084 – 0.95| / 0.92084 × 100% ≈ 3.2%
3rd iteration:
[ 3  7  9 ] { 0.92084 }   { 17.211 }              { 0.92420 }
[ 9  4  3 ] { 0.77836 } = { 14.401 } = 18.623 ×   { 0.77331 }
[ 9  3  8 ] { 1.00    }   { 18.623 }              { 1.00 }
Approximate error:
εa = |0.77836 – 0.77331| / 0.77331 × 100% ≈ 0.65%
After 10 iterations:
λ = 18.622
{x} = { 0.92251  0.77301  1.00 }T
εa = 0.00033%
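The converged result can be compared against a direct eigensolver. A brief check (variable names are ours): since this example matrix is nonsymmetric, we use `eig`, pick the eigenvalue of largest magnitude, and rescale its eigenvector so the third component is 1, matching the normalization used in the iterations above.

```python
import numpy as np

# Example matrix from the power-method iterations above
A = np.array([[3.0, 7.0, 9.0],
              [9.0, 4.0, 3.0],
              [9.0, 3.0, 8.0]])

lam, V = np.linalg.eig(A)
k = np.argmax(np.abs(lam))
lam_max = lam[k].real                 # should be close to 18.622
x = (V[:, k] / V[2, k]).real          # scale so the third component is 1
# x should be close to [0.92251, 0.77301, 1.00]
```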