Visually Illustrating Stability in Numerical Integration Techniques
Rudy Scott
Instructor: Dr. Kenneth Wiggins
Walla Walla College
June 2002
MAA Pacific Northwest Section Meeting
Portland, Oregon
Abstract
We present a method of visually illustrating the stability of numerical integration using a computer particle dynamics simulator developed by Jeff Lander, and we use this method to illustrate several integration techniques, including Euler, velocity Verlet, Milne, and Adams-Bashforth-Moulton.
Visually Illustrating Stability in Numerical Integration Techniques
Rudy Scott—Walla Walla College
1 Introduction
Many numerical techniques are available for the solution of initial-value problems. The choice in
a particular application depends on many factors including speed of convergence, computational expense,
data-storage requirements, accuracy, and stability. [1]
One application is in the simulation of particle dynamic systems. These are systems designed to
model physical interactions based on Newton’s laws of motion. Each particle in the system has an
associated mass, but no spatial extent. The forces between particles are calculated using physical laws.
Then a numeric integrator is used to calculate the new velocities and positions from these forces. Particle
dynamic systems are frequently used in molecular modeling applications. [2]
In this paper, we explore a visual method of illustrating some of the above-mentioned numeric
integration characteristics. Stability will be of particular interest to us. By stability, we mean that the
qualitative behavior in our approximation matches that of the actual solution and that small changes in
initial conditions result in reasonably small changes in the state of the system after many iterations. We
visually illustrate several integration techniques using this method.
We begin by presenting the mathematical background behind three of these integration techniques,
then proceed to discuss our project and results, and conclude with suggestions for areas of future inquiry.
2 Mathematical Background
A simple numerical technique for solving initial value problems is Euler’s method [3]. Consider
the IVP
 x(t )  f (t , x(t ))

 x(a)  xa
t  a, b
Suppose that x has continuous derivatives with respect to t on [a,b]. If we know the value of x at some
point t, we may construct a Taylor expansion to determine the value x(t+h):
$$x(t+h) = x(t) + h x'(t) + \frac{h^2}{2!} x''(t) + \frac{h^3}{3!} x'''(t) + \cdots + \frac{h^m}{m!} x^{(m)}(t) + E$$
where E is the truncation error. Taking the first-order Taylor approximation we have $x(t+h) \approx x(t) + h x'(t)$, which leads to the formula $x(t+h) = x(t) + h f(t, x(t))$, where $h$ is called the step size. This first-order formula is called Euler's method and it gives us approximations to the solution at the points $t + ih$, where $i \in \mathbb{N}$.
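A single Euler step is simple enough to state directly in code. The following minimal C++ sketch (our own illustration, not part of any cited source; the function names and the test problem are arbitrary choices) advances a scalar IVP one step at a time:

```cpp
#include <cstdio>
#include <functional>

// One Euler step: x(t+h) ~ x(t) + h*f(t, x(t)).
double eulerStep(double t, double x, double h,
                 const std::function<double(double, double)>& f)
{
    return x + h * f(t, x);
}

int main()
{
    const double a = 1.0;                              // decay constant in x' = -a*x
    auto f = [a](double /*t*/, double x) { return -a * x; };

    double t = 0.0, x = 1.0, h = 0.1;                  // x(0) = 1, step size h
    for (int i = 0; i < 10; ++i) {                     // integrate to t = 1
        x = eulerStep(t, x, h, f);
        t += h;
    }
    std::printf("Euler estimate of x(1) = %f (exact e^-1 = 0.367879)\n", x);
    return 0;
}
```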
Euler’s method is (1) easy to implement and (2) computationally inexpensive. It may be shown
that the error in each iteration of the Euler formula converges to zero as h converges to zero and with
essentially the same rapidity [5]. We denote this by saying the error is O(h) [5]. This error may result in
divergence from the actual solution when large step sizes are used. To see this, we follow MacDonald’s
explanation [3]. Consider the IVP:
$$x'(t) = -a x(t), \qquad x(0) = 1, \qquad \text{where } a > 0.$$
The exact solution is $x = e^{-at}$, which is monotonically decreasing. Suppose the Euler approximation yields a value of $x_n$ after $n$ iterations. Then $x_{n+1} = x_n - a x_n h = x_n(1 - ah)$. But since $x_0 = 1$, by solving explicitly we have $x_n = (1 - ah)^n$. If $ah < 1$, then this approximate explicit solution is
monotonically decreasing [3] like the exact solution. But when 1 < ah < 2, the approximation is oscillatory
although decaying towards the exact solution. Worse, when ah > 2, the approximate solution diverges from
the actual solution—the technique “blows up.” Euler’s method is convergent [6] in the sense that as h tends
to zero, the global truncation error also tends to zero. But the global truncation error can be large when the
step size is not “small.”
In our example, this “smallness” is measured relative to the magnitude of a. In general, the
“stiffer” the ODE or system of ODEs, the smaller the step size required. The term stiffness comes from the
application of ODEs to spring systems, such as the one described below. A method is said to be stable for a stiff equation or system when $ah$ lies within the method's region of absolute stability. For our example, this is true when $ah < 2$. [6]
[Figure: Euler approximations of the IVP for the three cases ah < 1 (monotone decay), 1 < ah < 2 (oscillatory decay), and ah > 2 (divergence).]
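The three regimes are easy to reproduce numerically. The short C++ sketch below (our own illustration) simply iterates the explicit form $x_{n+1} = (1-ah)\,x_n$ for one value of $ah$ in each regime:

```cpp
#include <cstdio>

// Iterate x_{n+1} = (1 - ah) * x_n, the Euler approximation to x' = -a x, x(0) = 1.
// ah = 0.5 decays monotonically, ah = 1.5 oscillates while decaying, ah = 2.5 blows up.
int main()
{
    const double ahValues[] = {0.5, 1.5, 2.5};
    for (double ah : ahValues) {
        double x = 1.0;
        std::printf("ah = %.1f:", ah);
        for (int n = 0; n < 8; ++n) {
            x *= (1.0 - ah);
            std::printf(" % .3f", x);
        }
        std::printf("\n");
    }
    return 0;
}
```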
Another technique for integrating particle dynamic systems is the velocity Verlet algorithm [4]. Consider
the system of second order differential equations resulting from Newton’s laws of motion acting on a
system of particles:
$$x_i''(t) = \frac{\text{force}}{\text{mass}} = f_i(t, x_i(t)), \qquad x_i'(t_0) = v_{\text{init}}, \qquad x_i(t_0) = x_{\text{init}},$$
where $t$ is time and $x_i(t)$ is the position vector of the $i$th particle.
We consider the second order ODE resulting from a single component of an individual particle.
The same formulas may be applied component-wise to each particle vector in the system. Through
substitution this second order ODE may be written as a system of first order ODEs. Letting v = x’ gives:
v(t )  f (t , x(t ))

v  t0   vinit
 x(t )  v  t 

 x  t0   xinit
where t represents time,
x(t) represents position,
v(t) represents velocity,
and f(t,x(t)) represents acceleration.
The velocity Verlet algorithm may be derived from this system. We follow Andersen's [4] derivation here and begin with Taylor expansions for $x(t+h)$ and $v(t+h)$:
$$x(t+h) = x(t) + h v(t) + \frac{h^2}{2} f(x(t)) + O(h^3)$$
$$v(t+h) = v(t) + h f(x(t)) + \frac{h^2}{2} v''(t) + O(h^3),$$
provided that $x'''$ exists and is bounded in the first expansion and $x^{(4)}$ exists and is bounded in the second. But
$$x''' = \frac{d}{dt} f(t, x) = \frac{\partial}{\partial t} f(t, x) + \frac{\partial}{\partial x} f(t, x)\,\frac{dx}{dt}$$
by the chain rule, so these are not unreasonable assumptions.
Now, we introduce a reverse Taylor expansion about time t+h,
$$v(t+h-h) = v(t+h) - h f(x(t+h)) + \frac{(-h)^2}{2} v''(t+h) + O(h^3).$$
Simplifying,
$$v(t) = v(t+h) - h f(x(t+h)) + \frac{h^2}{2} v''(t+h) + O(h^3).$$
Now, we solve for v(t + h):
$$v(t+h) = v(t) + h f(x(t+h)) - \frac{h^2}{2} v''(t+h) + O(h^3).$$
We now have two expressions for $v(t+h)$. Averaging these gives
$$v(t+h) = v(t) + \frac{h}{2}\big(f(x(t)) + f(x(t+h))\big) + \frac{h^2}{4}\big(v''(t) - v''(t+h)\big) + O(h^3).$$
We would like to eliminate the $h^2$ term. Continuing our assumption that $f$ has two bounded derivatives with respect to $x$, the chain rule implies
$$v''(t) = \frac{df(x(t))}{dx}\,\frac{dx}{dt} = \frac{df(x(t))}{dx}\, v(t).$$
Then applying the mean value theorem,
$$f'(t) - f'(t+h) = -h f''(c) \qquad \text{for some } c \text{ in } (t, t+h).$$
This implies $v''(t) - v''(t+h) = O(h)$ since $f''$ is bounded. So,
$$v(t+h) = v(t) + \frac{h}{2}\big(f(x(t)) + f(x(t+h))\big) + O(h^3).$$
These results are summarized in the formulas for the velocity Verlet method:
$$x(t+h) = x(t) + h v(t) + \frac{h^2}{2} f(x(t)) + O(h^3)$$
$$v(t+h) = v(t) + \frac{h}{2}\big(f(x(t)) + f(x(t+h))\big) + O(h^3).$$
To use velocity Verlet, first calculate $x(t+h)$. Next, recompute the forces $f(x(t+h))$, then calculate the new velocities. One advantage of this algorithm is that it provides both position and velocity information. The local error is $O(h^3)$ and the global error is $O(h^2)$, assuming the existence and continuity of two derivatives of $f$. [4]
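A minimal C++ sketch of one velocity Verlet step is shown below for a single particle component. This is our own illustration under the stated assumptions; the Particle struct and computeForce function are hypothetical stand-ins for the corresponding pieces of a simulator, here using a simple Hooke's-law force.

```cpp
#include <cstdio>

struct Particle {
    double x;     // position (one component)
    double v;     // velocity
    double mass;
};

// Placeholder force law; in a particle simulator this would be the sum of spring forces.
double computeForce(const Particle& p)
{
    const double k = 10.0;                     // assumed spring constant toward the origin
    return -k * p.x;
}

// One velocity Verlet step of size h.
void velocityVerletStep(Particle& p, double h)
{
    double a0 = computeForce(p) / p.mass;      // f(x(t))
    p.x += h * p.v + 0.5 * h * h * a0;         // x(t+h)
    double a1 = computeForce(p) / p.mass;      // recomputed forces f(x(t+h))
    p.v += 0.5 * h * (a0 + a1);                // v(t+h)
}

int main()
{
    Particle p{1.0, 0.0, 1.0};
    for (int i = 0; i < 100; ++i)
        velocityVerletStep(p, 0.01);
    std::printf("x(1) = %f, v(1) = %f\n", p.x, p.v);
    return 0;
}
```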
The final technique we show is the Adams-Bashforth-Moulton [5] predictor-corrector technique of order 4. In this quadrature method, we estimate the value of an integral by a finite sum. We write
$$x(t+h) = x(t) + \int_t^{t+h} x'(s)\, ds = x(t) + \int_t^{t+h} f(s, x(s))\, ds.$$
Let $t_n = t$, $t_i = t_n - (n-i)h$, and $x_i = x(t_i)$ for each $0 \le i \le n$. To approximate the integral we examine an interpolating polynomial [6] $P_3(u)$ to $f(u, x(u))$ obtained by using the previous data points $(t_n, x_n)$, $(t_{n-1}, x_{n-1})$, $(t_{n-2}, x_{n-2})$, and $(t_{n-3}, x_{n-3})$. So we have,
x t  h   x t   
t h
t
P3  u  du .
Using Newton’s interpolatory divided difference formula, we write
$$x(t+h) = x(t) + \int_t^{t+h} \left[\sum_{k=0}^{3} \frac{\nabla^k f(t_n, x_n)}{h^k\, k!} \prod_{j=0}^{k-1} (u - t_{n-j})\right] du = x(t) + \sum_{k=0}^{3} \left[\frac{\nabla^k f(t_n, x_n)}{h^k\, k!} \int_t^{t+h} \prod_{j=0}^{k-1} (u - t_{n-j})\, du\right],$$
where $\prod_{j=0}^{-1} (u - t_{n-j})$ is interpreted to be 1.
Let u = t + sh with du = h ds. Introducing this change of variable gives us
$$x(t+h) = x(t) + \sum_{k=0}^{3} \left[\frac{\nabla^k f(t_n, x_n)}{h^k\, k!}\, h \int_0^1 \prod_{j=0}^{k-1} (sh + jh)\, ds\right] = x(t) + \sum_{k=0}^{3} \left[\frac{\nabla^k f(t_n, x_n)}{h^k\, k!}\, h\, h^k \int_0^1 \prod_{j=0}^{k-1} (s + j)\, ds\right].$$
But we may evaluate $\int_0^1 \prod_{j=0}^{k-1} (s+j)\, ds$ easily for each value of $k$:
$$k=0: \quad \int_0^1 \prod_{j=0}^{-1} (s+j)\, ds = \int_0^1 1\, ds = \big[s\big]_0^1 = 1$$
$$k=1: \quad \int_0^1 \prod_{j=0}^{0} (s+j)\, ds = \int_0^1 s\, ds = \Big[\tfrac{s^2}{2}\Big]_0^1 = \tfrac{1}{2}$$
$$k=2: \quad \int_0^1 \prod_{j=0}^{1} (s+j)\, ds = \int_0^1 (s^2 + s)\, ds = \Big[\tfrac{s^3}{3} + \tfrac{s^2}{2}\Big]_0^1 = \tfrac{5}{6}$$
$$k=3: \quad \int_0^1 \prod_{j=0}^{2} (s+j)\, ds = \int_0^1 (s^3 + 3s^2 + 2s)\, ds = \Big[\tfrac{s^4}{4} + s^3 + s^2\Big]_0^1 = \tfrac{9}{4}.$$
So we have,
$$\begin{aligned}
x(t+h) &= x(t) + h\left[f(t_n, x_n) + \tfrac{1}{2}\nabla f(t_n, x_n) + \tfrac{5}{12}\nabla^2 f(t_n, x_n) + \tfrac{3}{8}\nabla^3 f(t_n, x_n)\right] \\
&= x(t) + h\Big[f(t_n, x_n) + \tfrac{1}{2}\big(f(t_n, x_n) - f(t_{n-1}, x_{n-1})\big) + \tfrac{5}{12}\big(f(t_n, x_n) - 2 f(t_{n-1}, x_{n-1}) + f(t_{n-2}, x_{n-2})\big) \\
&\qquad\quad + \tfrac{3}{8}\big(f(t_n, x_n) - 3 f(t_{n-1}, x_{n-1}) + 3 f(t_{n-2}, x_{n-2}) - f(t_{n-3}, x_{n-3})\big)\Big] \\
&= x(t) + \frac{h}{24}\big[55 f(t_n, x_n) - 59 f(t_{n-1}, x_{n-1}) + 37 f(t_{n-2}, x_{n-2}) - 9 f(t_{n-3}, x_{n-3})\big].
\end{aligned}$$
This is known as the Adams-Bashforth formula and will serve as our predictor. Our implicit corrector formula, Adams-Moulton, is derived [5] similarly except the earliest node is replaced by the predicted node, $(t_{n+1}, x_{n+1})$, in approximating the integral. The Adams-Moulton formula is
$$x(t+h) = x(t) + \frac{h}{24}\big[9 f(t_{n+1}, x_{n+1}) + 19 f(t_n, x_n) - 5 f(t_{n-1}, x_{n-1}) + f(t_{n-2}, x_{n-2})\big].$$
Combining the explicit Adams-Bashforth formula with the implicit Adams-Moulton formula gives
us the Adams-Bashforth-Moulton (ABM) predictor-corrector method. The Adams-Bashforth formula provides an approximation for the point $(t_{n+1}, x_{n+1})$. This approximation can then be used by the Adams-Moulton formula to correct the prediction [5]. So we have:
$$x(t+h) = x(t) + \frac{h}{24}\big[55 f(t, x(t)) - 59 f(t-h, x(t-h)) + 37 f(t-2h, x(t-2h)) - 9 f(t-3h, x(t-3h))\big]$$
$$x(t+h) = x(t) + \frac{h}{24}\big[9 f(t+h, x(t+h)) + 19 f(t, x(t)) - 5 f(t-h, x(t-h)) + f(t-2h, x(t-2h))\big].$$
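Putting the predictor and corrector together, a minimal C++ sketch of ABM-4 for a scalar ODE might look as follows. This is our own illustration; it assumes the four most recent values of f are already available (in a real simulator they would be generated by a self-starting method such as RK-4), and here they are simply seeded from the exact solution of a test problem.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// Test problem x' = -x, whose exact solution is e^{-t}.
static double f(double /*t*/, double x) { return -x; }

// One fourth-order Adams-Bashforth-Moulton step.
// fHist holds f at t, t-h, t-2h, t-3h (most recent first).
double abm4Step(double t, double x, double h, std::array<double, 4>& fHist)
{
    // Predictor (Adams-Bashforth):
    double xPred = x + h / 24.0 *
        (55.0 * fHist[0] - 59.0 * fHist[1] + 37.0 * fHist[2] - 9.0 * fHist[3]);

    // Corrector (Adams-Moulton), using f at the predicted point:
    double xNew = x + h / 24.0 *
        (9.0 * f(t + h, xPred) + 19.0 * fHist[0] - 5.0 * fHist[1] + fHist[2]);

    // Shift the stored force history forward one step.
    fHist = {f(t + h, xNew), fHist[0], fHist[1], fHist[2]};
    return xNew;
}

int main()
{
    const double h = 0.1;
    // Start-up values taken from the exact solution here; a simulator would
    // generate them with a self-starting method instead.
    double t = 0.3, x = std::exp(-0.3);
    std::array<double, 4> fHist = {-std::exp(-0.3), -std::exp(-0.2),
                                   -std::exp(-0.1), -std::exp(0.0)};
    while (t < 1.0 - 1e-9) {
        x = abm4Step(t, x, h, fHist);
        t += h;
    }
    std::printf("ABM-4 estimate of x(1) = %f (exact %f)\n", x, std::exp(-1.0));
    return 0;
}
```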
3 Project
In [7], Jeff Lander introduced a simple 3D-particle dynamics simulator for the PC with source
code. Using this simulator we can see a striking visual illustration of the effects of numeric integrator
instability. The simulator allows for simple particles connected by springs (governed by Hooke’s law) and
optionally gravity. The original simulator used only Euler integration.
Figure 1 illustrates a simple spring box in Lander's program. The box is falling due to the effects of gravity. The new positions of each particle are calculated by Euler integration. When the time step size or the magnitude of the spring constants is small enough, the Euler method converges, and the box is well behaved.

Figure 1: Euler integration with small spring coefficients.
Figure 2 shows the same box, with larger spring coefficient values. As we increase the spring constants, we increase the forces undergone by the particles and stiffen the differential equations governing particle motion. Euler's method no longer converges and the box begins convulsing and leaping around the constraining box in an unpredictable manner.

Figure 2: Euler integration with large spring coefficients causes instability.
A subsequent article [8] by Lander added two integrators: midpoint and Runge-Kutta 4th order.
Our goal was to find and implement other methods of integration that would meet the following criteria:
1) Computationally quick
2) Large region of absolute stability (able to better handle stiff equations)
3) Minimal additional data-storage requirements.
We first implemented the velocity Verlet algorithm shown previously. This algorithm proved to be significantly more stable than either the Euler or midpoint techniques. In fact, its stability was comparable to 4th order Runge-Kutta. Yet it executed in less than half the time of the computationally expensive RK-4. Figure 3 shows a "stiff-springed" box that is unstable under Euler, but well-behaved under velocity Verlet.

Figure 3: Larger spring coefficients maintain stability under velocity Verlet.
After we communicated our results to Lander, he sent code demonstrating an integration technique
he called Crenshaw's algorithm. This appears to be a modified Adams-Bashforth-Moulton 2nd order technique. This method only requires storing force data from one previous iteration, so data overhead is
small.
We proceeded to implement the Beeman algorithm [1] (similar to velocity Verlet) and Adams-Bashforth-Moulton methods of orders 2 and 4. Since the Adams-Bashforth-Moulton methods are predictor-corrector algorithms, they require data for several previous integration steps to be stored. This increased the data-storage requirements of our simulator.

Figure 4: Menu showing various integrators.
To facilitate the examination and storage of simulation data, we also added a data logging system
to Lander’s simulator. This system allows position, acceleration, and force data to be stored for each
particle and each time step of a simulation. With this information the simulation may be replayed or frozen
at a particular step while the numerical data is examined. Additionally, we implemented a graphing system
developed by Paul Barvinko that allows us to view this data visually. The data may also be exported to text
files for further manipulations in Excel or other packages.
4 Results and future exploration
In order to quantify the success of these algorithms in accomplishing our goals, we examined two characteristics: (1) computational time and (2) region of stability. Computational time was measured by inserting a microsecond timer into the code just before and after the call to the integrator function. This tool allows the comparison of algorithm speed on the same system, but cannot be used to compare algorithms run on different computers since the timing is highly processor dependent.
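The timing code itself is not reproduced here; a portable sketch of the same idea using std::chrono (our own illustration, not the code actually used) is simply to bracket the integrator call:

```cpp
#include <chrono>
#include <cstdio>

// Stand-in for the simulator's integrator call.
void integrate(double /*h*/) { /* ... advance the particle system one step ... */ }

int main()
{
    using clock = std::chrono::steady_clock;

    auto start = clock::now();
    integrate(0.01);                      // the call being timed
    auto stop = clock::now();

    auto micros =
        std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    std::printf("integrator call took %lld microseconds\n",
                static_cast<long long>(micros.count()));
    return 0;
}
```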
Stability was measured by setting the maximum time step to a fixed 0.01 and varying the spring
constant, i.e. “stiffness” of the system. An interval was recorded for each algorithm. The lower number
indicates the values at which instability was first detected when energy is added to the system. The upper
number indicates the value at which the system “blows up” without any energy being added. As each
simulation is unique, these values were determined over multiple trial simulations. They should be
considered estimates that are somewhat subjective in nature. It is particularly difficult to compare Verlet,
Beeman, and Adams-2 since they are closely spaced and their stability results are not very consistent.
Additionally, other characteristics of the system including the size and shape of the box influence
these stability figures. For example, when spring coefficients exceed 1000, the kinetic energy of the box is
very high. At these levels, the rebound from a collision may cause a new collision with the opposite wall.
Depending on the restitution constant, a rapid sequence of collisions may result, causing increasing kinetic
energy that quickly leads to numeric instability.
Since our goal was to achieve a large region of stability with small computation times, the ratio of the stability estimate to the computation time was used as a performance index. Our results are summarized in the table
below for each algorithm. They are also shown graphically below.
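For example, velocity Verlet's index of roughly 164 corresponds to the midpoint of its stability range (about 3700) divided by the midpoint of its computation-time range (about 22.5).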
Table 1: Summary of results

Integrator                    Global Error    Computation Time    Spring Coefficient Stability Estimate    Average Index
Milne                         ??              24-26               14-15                                    0.58
Euler                         O(h)            5-7                 12-14                                    2.17
Midpoint                      O(h^2)          21-23               330-350                                  15.45
Adams-Bashforth-Moulton-4     O(h^4)          26-28               450-600                                  19.44
Crenshaw                      ??              6-8                 377-378                                  53.9
Adams-Bashforth-Moulton-2     O(h^2)          20-23               1700-2500                                97.7
Beeman                        O(h^2)          22-25               2450-2500                                105
Velocity Verlet               O(h^2)          21-24               3200-4200                                164
Runge-Kutta 4                 O(h^4)          53-58               8700-9300                                162
In our analysis of error in velocity Verlet, it was assumed that force, f(t, x), was both continuous
and smooth. Moreover, we assumed it possessed continuous, bounded derivatives with respect to t. In
practice, this is often not the case. Our simulator illustrates this in handling collisions.
When a collision is encountered, the simulator responds by applying an instantaneous impulse to
the colliding particle’s velocity vector. This impulse reverses the component of the particle’s velocity
normal to the collision and scales it by a dampening factor. Because this change in velocity occurs in a
single time step, a discontinuity occurs in velocity, which propagates through the system.
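A sketch of this impulse in our own notation (not Lander's actual collision code) reflects the normal component of the velocity and scales it by the restitution factor:

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Apply an instantaneous collision impulse: reverse the component of v along
// the (unit) collision normal n and scale it by the restitution factor e.
Vec3 collisionResponse(const Vec3& v, const Vec3& n, double e)
{
    double vn = dot(v, n);                 // normal component of velocity
    return v - (1.0 + e) * vn * n;         // tangential part kept, normal part reversed and damped
}

int main()
{
    Vec3 v{1.0, -3.0, 0.0};                // incoming velocity
    Vec3 n{0.0, 1.0, 0.0};                 // floor normal
    Vec3 r = collisionResponse(v, n, 0.8); // restitution (dampening) factor 0.8
    std::printf("reflected velocity = (%f, %f, %f)\n", r.x, r.y, r.z);
    return 0;
}
```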
In order to maintain stability our numeric integration algorithm must be able to handle this
discontinuity gracefully. For predictor-corrector algorithms, this is a particular challenge since the
algorithm may use data points that span the discontinuity. To avoid this, we reset our predictor-correctors
when a collision is encountered. Implementing this change resulted in improved stability, particularly for
second order Adams-Bashforth-Moulton (ABM-2).
To date, velocity Verlet has the best index of spring-coefficient stability to computation time. However, the error margin in this stability analysis is very high, leaving our results inconclusive. ABM-2 is
the strongest contender from the predictor-corrector family. This may be because the corrector used in this
method is the Implicit Trapezoidal method, which is A-Stable, meaning its region of absolute stability
contains the entire left half-plane [6]. According to [6], this is the only A-stable multi-step method. This
may explain why ABM-2 seems to have a considerably larger region of absolute stability than ABM-4.
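For reference, the standard second-order Adams pair (not written out in the derivation above) consists of the two-step Adams-Bashforth predictor and the trapezoidal Adams-Moulton corrector:
$$x(t+h) = x(t) + \frac{h}{2}\big[3 f(t, x(t)) - f(t-h, x(t-h))\big]$$
$$x(t+h) = x(t) + \frac{h}{2}\big[f(t+h, x(t+h)) + f(t, x(t))\big],$$
where the value $x(t+h)$ used on the right of the corrector is the predicted one.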
While a quantitative analysis is difficult, one advantage of illustrating the region of stability visually is in assessing qualitative characteristics. For instance, while velocity Verlet has a high performance index, it is rather "jittery" for the upper half of its stability range. It often bounces more than expected and, even when no energy is added, it tends to vibrate rather than staying in one place. Runge-Kutta does not exhibit this "jitteriness."
The next technique we hope to implement is Gear’s algorithm [8], which is considered quite
accurate for stiff equations. Another area to be explored is code optimization. It would be interesting to
see what computational gains could be realized by a careful optimization of the integration code. It would
also be useful to analyze the stability when varying time step sizes are used.
[Figure: Spring Coefficient vs. Computation Time, a scatter plot of each integrator's stability estimate against its computation time (Euler, Milne, Crenshaw, Midpoint, Adams-2, Adams-4, Beeman, Verlet, RK-4).]

[Figure: Performance Index, a bar chart of the average index for each integrator: Milne 0.58, Euler 2.17, Midpoint 15.45, ABM-4 19.44, Crenshaw 53.93, ABM-2 97.67, Beeman 105.32, Verlet 164.44, RK-4 162.16.]
5 Acknowledgements
This paper builds on and explores concepts and programming introduced by Jeff Lander of
Darwin3D in the March and April 1999 issues of Game Developer Magazine [7,8]. Original source code
and programming is by Jeff Lander. Our modifications also incorporated the graphing class code
developed by Paul Barvinko.
Special thanks to Dr. Kenneth Wiggins of Walla Walla College for his time, comments and
suggestions on this paper and to Dr. Thomas Thompson also of Walla Walla College for his encouragement
and help making its presentation possible.
References
[1] CHARMM “Molecular Dynamics Tutorial” ONLINE: http://www.ch.embnet.org/MD_tutorial/
[2] Yip, Sidney and Ju Li. (Spring 2002) “Elements of Molecular Dynamics” Statistical Processes and
Atomistic Simulations. Massachusetts Institute of Technology. ONLINE: http://longmarch.mit.edu/22.53/c2/main.pdf
[3] MacDonald, James. (Spring 2001) “Course Notes for PHYS306: Computational Methods of Physics”
University of Delaware. ONLINE: http://www.physics.udel.edu/faculty/macdonald/
Ordinary%20Differential%20Equations/Euler's%20Method.htm
[4] Andersen, Hans C. (November 2001) “Accuracy of Integrators for Equations of Motion in Molecular
Dynamics” Lecture Notes, Chemistry 276. Stanford University. ONLINE:
http://chemweb.stanford.edu/fall2001/chem276/c276_01_lecture11.pdf
[5] Cheney, Ward and David Kincaid. “Multistep Methods” Numerical Mathematics and Computing.
Fourth Edition. Brooks/Cole Publishing Company (1999): Pacific Grove, CA. 294-304.
[6] Burden, Richard L. and Douglas J. Faires. “Polynomial Interpolation” Numerical Analysis. Sixth
Edition. Pacific Grove, CA: Brooks/Cole Publishing Company, 1997. 136-155.
[7] Lander, Jeff. “Collision Response: Bouncy, Trouncy, Fun” Game Developer Magazine. March, 1999.
ONLINE: http://www.gamasutra.com/features/20000208/lander_pfv.htm
SOURCE: http://www.darwin3d.com/gdm1999.htm#gdm0399
[8] Lander, Jeff “Lone Game Developer Battles Physics Simulator” Game Developer Magazine. April,
1999. ONLINE: http://www.gamasutra.com/features/20000215/lander_pfv.htm
SOURCE: http://www.darwin3d.com/gdm1999.htm#gdm0399
Other Recommended Resources
Baraff, David and Witkin, Andrew. “Particle System Dynamics” SigGraph 1997 Course Notes.
ONLINE: http://www-2.cs.cmu.edu/~baraff/sigcourse/
Franzen, Stefan. (Spring 2000) “CH 795N/ CHE 597B” Statistical Mechanics and Simulations of
Fluids and Soft Matter. North Carolina State University. ONLINE:
http://chsfpc5.chem.ncsu.edu/CH795N/lecture/V/V.html
Shampine, Lawrence F. and Gordon, M.K. Computer Solution of Ordinary Differential Equations:
The Initial Value Problem. W.H. Freeman and Company (1975): San Francisco, CA.