Computational Methods
in Physics
PHYS 3437
Dr Rob Thacker
Dept of Astronomy & Physics (MM-301C)
thacker@ap.smu.ca
Today’s Lecture

Numerical integration
  Simple rules
  Romberg Integration
  Gaussian quadrature

References: Numerical Recipes has a pretty good discussion
Quadrature & interpolation

Quadrature has become a synonym for numerical integration
  Primarily in 1d; higher dimensional evaluation is sometimes dubbed cubature
All quadrature methods rely upon interpolation of the integrand f using a class of functions, e.g. polynomials
  Utilizing the known closed form integral of the interpolated function allows simple calculation of the weights
Different functions will require different quadrature algorithms
  Singular functions may require specialized Gaussian quadrature, or changes of variables
  Oscillating functions require simpler methods, e.g. the trapezoidal rule
Consider definite integration using trapezoids

Approximate the integral using the areas of the trapezoids:

\int_a^b f(x)\,dx \approx \text{sum of trapezoid areas}
  = h\,\frac{f(x_0)+f(x_1)}{2} + h\,\frac{f(x_1)+f(x_2)}{2} + h\,\frac{f(x_2)+f(x_3)}{2} + h\,\frac{f(x_3)+f(x_4)}{2}

[Figure: f(x) sampled at x_0 = a, x_1, x_2, x_3, x_4 = b with uniform spacing Δx = h]
For n intervals this generalizes; for the four intervals above

\int_a^b f(x)\,dx \approx \frac{h}{2}\left[ f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(x_4) \right]
  = \frac{h}{2}\left[ f(x_0) + f(x_4) \right] + h\left[ f(x_1) + f(x_2) + f(x_3) \right]

and in general

\int_a^b f(x)\,dx \approx \frac{h}{2}\left[ f(a) + f(b) \right] + h\sum_{i=1}^{n-1} f(x_i)    (1)

The composite trapezoidal rule.
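As a quick illustration, a minimal Python sketch of the composite trapezoidal rule (1); the function name and the test integrand are illustrative choices, not something from the slides.

# Composite trapezoidal rule, eq. (1)
import math

def trapezoid(f, a, b, n):
    """Approximate int_a^b f(x) dx with n equal sub-intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))      # (h/2)[f(a) + f(b)] term
    for i in range(1, n):            # h * sum over interior points
        total += f(a + i * h)
    return h * total

# Example: int_0^pi sin(x) dx = 2
print(trapezoid(math.sin, 0.0, math.pi, 64))   # ~1.9996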
Fitting multiple parabolas

Fit successive triplets of data points

Using triplets of points we can fit a parabola using the Lagrange interpolation polynomial

p_1(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} f(x_0) + \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)} f(x_1) + \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)} f(x_2)

[Figure: f(x) sampled at x_0 = a, x_1, x_2, x_3, x_4 = b with uniform spacing Δx = h]

If the x_i, x_{i+1} are separated by h then

p_1(x) = \frac{(x-x_1)(x-x_2)}{2h^2} f(x_0) - \frac{(x-x_0)(x-x_2)}{h^2} f(x_1) + \frac{(x-x_0)(x-x_1)}{2h^2} f(x_2)

Need to integrate this expression to get the area under the parabola
Compound Simpson's Rule

After slightly lengthy but trivial algebra we get

\int_{x_0}^{x_2} p_1(x)\,dx = \frac{h}{3}\left[ f(x_0) + 4f(x_1) + f(x_2) \right]

This is "Simpson's 3-point Rule". We can do the same on x_2, x_3, x_4 to get

\int_{x_2}^{x_4} p_1(x)\,dx = \frac{h}{3}\left[ f(x_2) + 4f(x_3) + f(x_4) \right]

Hence on the entire region

\int_{x_0}^{x_4} f(x)\,dx \approx \frac{h}{3}\left[ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + f(x_4) \right]

In general for an even number of intervals n

\int_a^b f(x)\,dx \approx \frac{h}{3}\left[ f(a) + f(b) \right] + \frac{4h}{3}\sum_{i=1}^{n/2} f(x_{2i-1}) + \frac{2h}{3}\sum_{i=1}^{n/2-1} f(x_{2i})

This is "Simpson's Composite Rule"
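A hedged Python sketch of Simpson's composite rule as written above, assuming an even number of intervals; the names are illustrative.

# Simpson's composite rule (n must be even)
import math

def simpson(f, a, b, n):
    """Approximate int_a^b f(x) dx with Simpson's composite rule."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's composite rule")
    h = (b - a) / n
    odd  = sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2*i * h)       for i in range(1, n // 2))
    return (h / 3) * (f(a) + f(b)) + (4*h / 3) * odd + (2*h / 3) * even

print(simpson(math.sin, 0.0, math.pi, 8))   # ~2.00027, already close to 2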
Higher order fits

Can increase the order of the fit to cubic, quartic etc.
For a cubic fit over x_0, x_1, x_2, x_3 we find

\int_{x_0}^{x_3} f(x)\,dx \approx \frac{3h}{8}\left[ f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3) \right]

Simpson's 3/8th Rule

For a quartic fit over x_0, x_1, x_2, x_3, x_4

\int_{x_0}^{x_4} f(x)\,dx \approx \frac{2h}{45}\left[ 7f(x_0) + 32f(x_1) + 12f(x_2) + 32f(x_3) + 7f(x_4) \right]

Boole's Rule

In practice these higher order formulas are not that useful; we can devise better methods if we first consider the errors involved
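For completeness, a small sketch of single applications of Simpson's 3/8 and Boole's rules, coded directly from the two formulas above; the helper names are invented for illustration.

# Single-panel Simpson's 3/8 (3 intervals) and Boole's rule (4 intervals)
import math

def simpson_38(f, x0, h):
    x = [x0 + i*h for i in range(4)]
    return (3*h/8) * (f(x[0]) + 3*f(x[1]) + 3*f(x[2]) + f(x[3]))

def boole(f, x0, h):
    x = [x0 + i*h for i in range(5)]
    return (2*h/45) * (7*f(x[0]) + 32*f(x[1]) + 12*f(x[2]) + 32*f(x[3]) + 7*f(x[4]))

# int_0^pi sin(x) dx = 2
print(simpson_38(math.sin, 0.0, math.pi/3))   # ~2.041
print(boole(math.sin, 0.0, math.pi/4))        # ~1.99857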
Newton-Cotes Formulae

The trapezoid & Simpson's rules are examples of Newton-Cotes formulae ("interpolatory quadrature rules")
  Assume a fixed spacing Δx = (b-a)/m
  Lagrange interpolating polynomials are used to approximate the function from its values f(x_n)
  Higher order formulae are given on mathworld.wolfram.com
    There is a limit to the value of higher order formulae; compound formulae with adaptive step sizes are usually better
The "degree" of the rule is defined as the order p of the polynomial that the quadrature rule integrates exactly
  Trapezoid rule – p=1
  Simpson's rule – p=3 (its symmetric point placement means cubics are also integrated exactly)
Error in the Trapezoid Rule

Consider a Taylor expansion of f(x) about a

f(x) = f(a) + (x-a) f'(a) + \frac{(x-a)^2}{2!} f''(a) + \frac{(x-a)^3}{3!} f'''(a) + ...

The integral of f(x) written in this form is then

\int_a^b f(x)\,dx = \left[ x f(a) + \frac{(x-a)^2}{2!} f'(a) + \frac{(x-a)^3}{3!} f''(a) + \frac{(x-a)^4}{4!} f'''(a) + \frac{(x-a)^5}{5!} f''''(a) + ... \right]_a^b

  = h f(a) + \frac{h^2}{2} f'(a) + \frac{h^3}{6} f''(a) + \frac{h^4}{24} f'''(a) + ...    (1)

where h = b - a
Error in the Trapezoid Rule II

Perform the same expansion about b

\int_a^b f(x)\,dx = h f(b) - \frac{h^2}{2} f'(b) + \frac{h^3}{6} f''(b) - \frac{h^4}{24} f'''(b) + ...    (2)

If we take an average of (1) and (2) then

\int_a^b f(x)\,dx = \frac{h}{2}\left[ f(a) + f(b) \right] + \frac{h^2}{4}\left[ f'(a) - f'(b) \right] + \frac{h^3}{12}\left[ f''(a) + f''(b) \right]
                    + \frac{h^4}{48}\left[ f'''(a) - f'''(b) \right] + \frac{h^5}{240}\left[ f^{iv}(a) + f^{iv}(b) \right] + ...    (3)

Notice that odd derivatives are differenced while even derivatives are added
Error in the Trapezoid Rule III

We also make Taylor expansions of f' and f''' around both a & b, which allow us to substitute for the terms in f'' and f^{iv} and to derive

\int_a^b f(x)\,dx = \frac{h}{2}\left[ f(a) + f(b) \right] + \frac{h^2}{12}\left[ f'(a) - f'(b) \right] - \frac{h^4}{720}\left[ f'''(a) - f'''(b) \right] + ...    (10)

It takes quite a bit of work to get to this point, but the key issue is that we have now created correction terms which are all differences
If we now use this formula in the composite trapezoid rule there will be a large number of cancellations
Error in the Composite Trapezoid Rule

We now sum over a series of trapezoids to get

\int_a^b f(x)\,dx = \frac{h}{2}\left[ (f(a)+f(x_1)) + (f(x_1)+f(x_2)) + ... + (f(x_{n-2})+f(x_{n-1})) + (f(x_{n-1})+f(b)) \right]
  + \frac{h^2}{12}\left[ (f'(a)-f'(x_1)) + (f'(x_1)-f'(x_2)) + ... + (f'(x_{n-2})-f'(x_{n-1})) + (f'(x_{n-1})-f'(b)) \right]
  - \frac{h^4}{720}\left[ (f'''(a)-f'''(x_1)) + (f'''(x_1)-f'''(x_2)) + ... + (f'''(x_{n-2})-f'''(x_{n-1})) + (f'''(x_{n-1})-f'''(b)) \right]
  + ...

  = \frac{h}{2}\left[ f(a) + f(b) \right] + h\sum_{i=1}^{n-1} f(a+ih) + \frac{h^2}{12}\left[ f'(a) - f'(b) \right] - \frac{h^4}{720}\left[ f'''(a) - f'''(b) \right] + ...    (11)

Note now h = (b-a)/n
The expansion is in powers of h^{2i}
Euler-Maclaurin Integration rule

The formula we just derived is called the Euler-Maclaurin integration rule. It has a number of uses:



If the integrand is easily differentiable the correction terms
can be calculated precisely
We can use Richardson Extrapolation to remove the first
error term and progressively produce a more accurate result
If the derivatives at the end points are zero then the formula
immediately tells you that the simple Trapezoid Rule gives
extremely accurate results!
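A short sketch (with assumed helper names) of how the first Euler-Maclaurin correction term can sharpen a composite trapezoid estimate when f' is known at the endpoints.

# Composite trapezoid plus the h^2/12 [f'(a) - f'(b)] correction from eq. (11)
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

def trapezoid_em(f, fprime, a, b, n):
    """Trapezoid estimate with the leading Euler-Maclaurin correction."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) + h**2 / 12.0 * (fprime(a) - fprime(b))

# int_0^pi sin(x) dx = 2, with f'(x) = cos(x)
print(trapezoid(math.sin, 0.0, math.pi, 16))               # ~1.99357
print(trapezoid_em(math.sin, math.cos, 0.0, math.pi, 16))  # ~1.999996, error ~4e-6 instead of ~6e-3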
Richardson Extrapolation: Review


Idea is to improve the result from a numerical method from order O(h^k) to O(h^{k+1})
Suppose we have a quantity A that is represented by the expansion

A = A(h) + K h^k + K' h^{k+1} + K'' h^{k+2} + ...

  K, K', K'' are unknown and represent error terms; k is a known constant

Write this expansion as

A = A(h) + K h^k + O(h^{k+1})    (1)

  Note: if we drop the O(h^{k+1}) terms we have a linear equation in A and K

By reducing the step size to h/2 we get another equation for A

A = A(h/2) + K (h/2)^k + O(h^{k+1})    (2)

  Note the O(h^{k+1}) terms are different in each expansion
Eliminate the leading error

So we have two equations in A again:

A = A(h) + K h^k + O(h^{k+1})    (1)

A = A(h/2) + K (h/2)^k + O(h^{k+1}) = A(h/2) + \frac{K h^k}{2^k} + O(h^{k+1})    (2)

We can eliminate K, the leading order error, by forming 2^k (2) - (1):

(2^k - 1) A = 2^k A(h/2) + K h^k + O(h^{k+1}) - A(h) - K h^k - O(h^{k+1})
            = 2^k A(h/2) - A(h) + O(h^{k+1})

=>  A = \frac{2^k A(h/2) - A(h)}{2^k - 1} + O(h^{k+1})
Define the higher order accurate
estimate

We have just eliminated K and found that

A = \frac{2^k A(h/2) - A(h)}{2^k - 1} + O(h^{k+1})

Define B(h) as follows:

B(h) = \frac{2^k A(h/2) - A(h)}{2^k - 1}

Then we have a new equation for A:

A = B(h) + O(h^{k+1})

and B(h) is accurate to higher order than our previous estimate A(h), for which A = A(h) + K h^k + O(h^{k+1})

See Numerical Recipes; partially borrowed from A. Ferri
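A minimal sketch of one Richardson extrapolation step B(h), here applied to the trapezoid rule, whose leading error is O(h^2) so k = 2; the function names are illustrative.

# One Richardson extrapolation step: B(h) = [2^k A(h/2) - A(h)] / (2^k - 1)
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

def richardson(A_h, A_half_h, k):
    """Combine A(h) and A(h/2) to cancel the K h^k error term."""
    return (2**k * A_half_h - A_h) / (2**k - 1)

A_h  = trapezoid(math.sin, 0.0, math.pi, 8)    # step h
A_h2 = trapezoid(math.sin, 0.0, math.pi, 16)   # step h/2
print(richardson(A_h, A_h2, 2))                # ~2.0000166, vs ~1.99357 for A(h/2) alone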
Romberg Integration: preliminaries

The starting point for Romberg integration is the composite trapezoid rule, which we may write in the following form

I = \int_a^b f(x)\,dx = \frac{h}{2}\left[ f(a) + f(b) + 2\sum_{i=1}^{n-1} f(x_i) \right] + \frac{h^2}{12}\left[ f'(a) - f'(b) \right] + ...

Can define a series of trapezoid integrations, each with a successively larger k and thus more sub-divisions
  Let n = 2^{k-1} : k = 1 => 1 interval
  The widths of the intervals are given by h_k = (b-a)/2^{k-1}

I = \int_a^b f(x)\,dx = \frac{h_k}{2}\left[ f(a) + f(b) + 2\sum_{i=1}^{2^{k-1}-1} f(a + i h_k) \right] + \frac{h_k^2}{12}\left[ f'(a) - f'(b) \right] + ...

Define this as R_{k,1} (it's just the comp. trap. rule)
Romberg Integration: preliminaries


The R_{k,1} describe a family of progressively more accurate estimates
Can show (see next slide) that there is a relationship between successive R_{k,1}:

R_{k,1} = \frac{1}{2} R_{k-1,1} + h_k \sum_{i=1}^{2^{k-2}} f(a + (2i-1) h_k)

Each new R_{k,1} adds 2^{k-2} new interior points to the evaluation
  Re-use the old values in the new calculation!
The series converges comparatively slowly

[Figure: successive subdivisions of [a,b], width h_2 for R_{2,1} and width h_3 for R_{3,1}]
Recurrence relationship
2 1

hk 
We defined Rk ,1   f (a )  f (b)  2  f (a  ihk )
2 
i 1

Expand sum into odd & even parts and note hk-1  2hk
k 1
k 1
k 1
2 1
2 1
hk
Rk ,1   f (a )  f (b)  hk  f (a  ihk )  hk  f (a  ihk )
2
i 1, 3, 5,..
i  2 , 4 , 6 ,..
k 1
k 2
2 1
2 1
hk
  f (a )  f (b)  hk  f (a  ihk )  hk  f (a  2 jhk )
2
i 1, 3, 5,..
j 1
k 2
Even terms written
as 2j~i, sub for hk
k 1
2 1
2 1
hk
  f (a )  f (b)  hk  f (a  jhk 1 )  hk  f (a  ihk )
2
j 1
i 1, 3, 5,..
2 1
2 1

1  hk 1
 f (a)  f (b)  hk 1  f (a  jhk 1 )  hk  f (a  ihk )
 
2 2
j 1
i 1, 3, 5,..

k 2
2

1
  Rk 1,1  hk 1  f (a  (2 j  1)hk )
2
j 1

k 2
k 1
Odd terms written
with 2j-1~i
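A sketch of this recurrence in Python: each R_{k,1} re-uses R_{k-1,1} and only evaluates f at the 2^{k-2} new odd-numbered points; the printed values should match the example on the next slide. Names are illustrative.

# Successive trapezoid refinements R_{1,1}, ..., R_{kmax,1}
import math

def trapezoid_refine(f, a, b, kmax):
    R = [0.5 * (b - a) * (f(a) + f(b))]          # R_{1,1}: a single interval
    for k in range(2, kmax + 1):
        hk = (b - a) / 2**(k - 1)
        new_points = sum(f(a + (2*i - 1) * hk) for i in range(1, 2**(k - 2) + 1))
        R.append(0.5 * R[-1] + hk * new_points)  # R_{k,1} = R_{k-1,1}/2 + h_k * (new points)
    return R

for k, Rk1 in enumerate(trapezoid_refine(math.sin, 0.0, math.pi, 6), start=1):
    print(k, Rk1)   # 0.0, 1.5708, 1.8961, 1.9742, 1.9936, 1.9984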
Consider

\int_0^{\pi} \sin x \, dx = 2

k=1:   R_{1,1} = \frac{\pi}{2}\left[ \sin(0) + \sin(\pi) \right] = 0

k=2:   R_{2,1} = \frac{1}{2} R_{1,1} + \frac{\pi}{2}\sin\left(\frac{\pi}{2}\right) = \frac{\pi}{2} \approx 1.5708

k=3:   R_{3,1} = \frac{1}{2} R_{2,1} + \frac{\pi}{4}\left[ \sin\left(\frac{\pi}{4}\right) + \sin\left(\frac{3\pi}{4}\right) \right] = 1.8962

k=4:   R_{4,1} = 1.9742
k=5:   R_{5,1} = 1.9936
k=6:   R_{6,1} = 1.9984

Converges fairly slowly...
Romberg Integration

Idea is to apply Richardson extrapolation to the series of approximations defined by R_{k,1}
  Powers of h_k^{2i} appear because of the Euler-Maclaurin expansion

Consider

\int_a^b f(x)\,dx = R_{k,1} + \sum_{i=1}^{\infty} K_i h_k^{2i} = R_{k,1} + K_1 h_k^2 + \sum_{i=2}^{\infty} K_i h_k^{2i}    (1)

We also have the expansion for h_{k+1}

\int_a^b f(x)\,dx = R_{k+1,1} + \sum_{i=1}^{\infty} K_i h_{k+1}^{2i} = R_{k+1,1} + K_1 h_{k+1}^2 + \sum_{i=2}^{\infty} K_i h_{k+1}^{2i}

but h_{k+1} = h_k / 2, so

\int_a^b f(x)\,dx = R_{k+1,1} + \frac{K_1 h_k^2}{4} + \sum_{i=2}^{\infty} K_i \frac{h_k^{2i}}{4^i}    (2)
Eliminate the leading error again

So we now subtract 1/4 of (1) from (2) to get

\frac{3}{4}\int_a^b f(x)\,dx = R_{k+1,1} - \frac{1}{4} R_{k,1} + \left( \frac{K_1 h_k^2}{4} - \frac{K_1 h_k^2}{4} \right) + \sum_{i=2}^{\infty} K_i h_k^{2i}\left( \frac{1}{4^i} - \frac{1}{4} \right)
                             = R_{k+1,1} - \frac{1}{4} R_{k,1} + \sum_{i=2}^{\infty} K_i h_k^{2i}\left( \frac{4 - 4^i}{4^{i+1}} \right)

\int_a^b f(x)\,dx = \frac{4}{3} R_{k+1,1} - \frac{1}{3} R_{k,1} + \frac{1}{3}\sum_{i=2}^{\infty} K_i h_k^{2i}\left( \frac{4 - 4^i}{4^i} \right)

                  = R_{k+1,1} + \frac{R_{k+1,1} - R_{k,1}}{3} + \frac{1}{3}\sum_{i=2}^{\infty} K_i h_k^{2i}\left( \frac{4 - 4^i}{4^i} \right)

Define this as R_{k+1,2}
Eliminating the error at stage j

Almost the same as before except now

\int_a^b f(x)\,dx = R_{k,j-1} + \sum_{i=j-1}^{\infty} K_i h_k^{2i} = R_{k,j-1} + K_{j-1} h_k^{2(j-1)} + \sum_{i=j}^{\infty} K_i h_k^{2i}    (A)

and

\int_a^b f(x)\,dx = R_{k-1,j-1} + \sum_{i=j-1}^{\infty} K_i h_{k-1}^{2i}

Substitute h_{k-1} = 2 h_k:

\int_a^b f(x)\,dx = R_{k-1,j-1} + 4^{j-1} K_{j-1} h_k^{2(j-1)} + \sum_{i=j}^{\infty} 4^i K_i h_k^{2i}    (B)

Subtract (B) from 4^{j-1} × (A) to get:

(4^{j-1} - 1) \int_a^b f(x)\,dx = 4^{j-1} R_{k,j-1} - R_{k-1,j-1} + terms of order h_k^{2j} & higher

=>  \int_a^b f(x)\,dx = \frac{4^{j-1} R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1} + terms of order h_k^{2j} & higher

                      = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1} + terms of order h_k^{2j} & higher

Define this as R_{k,j}
Generalize to get Romberg Table

Using our previous definition:

R_{k,j} = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1}

Construct the Romberg Table:

R_{1,1}
R_{2,1}   R_{2,2}
R_{3,1}   R_{3,2}   R_{3,3}
R_{4,1}   R_{4,2}   R_{4,3}   R_{4,4}
...
R_{n,1}   R_{n,2}   R_{n,3}   R_{n,4}   ...   R_{n,n}

The first column is obtained from a single composite trapezoidal integration
Error: R_{k,j} is accurate to O(h_k^{2j}), so I = R_{n,1} + O(h_n^2) while I = R_{n,n} + O(h_n^{2n})
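A compact sketch of the full Romberg table built from the R_{k,1} column with the relation above; the code uses zero-based indices and illustrative names.

# Romberg table: first column by trapezoid refinement, then Richardson steps
import math

def romberg(f, a, b, kmax):
    R = [[0.0] * kmax for _ in range(kmax)]
    R[0][0] = 0.5 * (b - a) * (f(a) + f(b))                # R_{1,1}
    for k in range(1, kmax):
        hk = (b - a) / 2**k
        new = sum(f(a + (2*i - 1) * hk) for i in range(1, 2**(k - 1) + 1))
        R[k][0] = 0.5 * R[k - 1][0] + hk * new             # next R_{k,1}
        for j in range(1, k + 1):                          # extrapolate across the row
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4**j - 1)
    return R

R = romberg(math.sin, 0.0, math.pi, 6)
print(R[5][5])   # ~2.0 to ~1e-10, reproducing the table on the next slide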
Consider previous example

\int_0^{\pi} \sin x \, dx = 2

R_{k,1}       R_{k,2}       R_{k,3}       R_{k,4}       R_{k,5}       R_{k,6}
0.00000000
1.57079633    2.09439511
1.89611890    2.00455976    1.99857073
1.97423160    2.00026917    1.99998313    2.00000555
1.99357034    2.00001659    1.99999975    2.00000001    1.99999999
1.99839336    2.00000103    2.00000000    2.00000000    2.00000000    2.00000000

Error in R_{6,6} is only 6.61026789e-011 – very rapid convergence
  Accurate to O(h_6^{12})
What is numerical integration really calculating?

Consider the definite integral

I = \int_a^b f(x)\,dx

The integral can be approximated by a weighted sum

I \approx \sum_{i=1}^{n} w_i f(x_i)

  The w_i are weights, and the x_i are abscissas
  Assuming that f is finite and continuous on the interval [a,b], numerical integration leads to a unique solution
  The goal of any numerical integration method is to choose abscissas and weights such that errors are minimized for the smallest n possible for a given function
NON EXAMINABLE
See Numerical Recipes for a lengthy discussion
Gaussian Quadrature






Thus far we have considered regularly spaced abscissas, although we have considered the possibility of adapting the spacing
We've also looked solely at closed interval formulas
Gaussian quadrature achieves high accuracy and efficiency by optimally selecting the abscissas
It is usual to apply a change of variables to make the integral map to [-1,1]
There are also a number of different families of Gaussian quadrature; we'll look at Gauss-Legendre
Let's look at a related example first
Midpoint Rule: better error properties than expected

\int_a^b f(x)\,dx \approx (b-a)\, f(x_{mid})

Despite fitting only a single value, the error appears in the second derivative:

\int_a^b f(x)\,dx = (b-a)\, f\left(\frac{a+b}{2}\right) + \frac{(b-a)^3}{24} f''(\xi)

[Figure: f(x) on [a,b] with the single sample at the midpoint x_m]
Coordinate transformation onto [-1,1]

The transformation is a simple linear mapping

t = \frac{b-a}{2}\, x + \frac{b+a}{2}

  x = -1  =>  t = a
  x = +1  =>  t = b

\int_a^b f(t)\,dt = \int_{-1}^{1} f\left( \frac{b-a}{2} x + \frac{b+a}{2} \right) \frac{b-a}{2}\,dx = \int_{-1}^{1} g(x)\,dx
Gaussian Quadrature on [-1, 1]

Recall the original series approximation

I = \int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} c_i f(x_i) = c_1 f(x_1) + c_2 f(x_2) + ... + c_n f(x_n)

Consider n = 2; then we have

\int_{-1}^{1} f(x)\,dx \approx c_1 f(x_1) + c_2 f(x_2)

[Figure: the two abscissas x_1, x_2 inside [-1, 1]]

We have 4 unknowns, c_1, c_2, x_1, x_2, so we can choose these values to yield exact integrals for f(x) = x^0, x, x^2, x^3
Gaussian Quadrature on [-1, 1]

\int_{-1}^{1} f(x)\,dx = c_1 f(x_1) + c_2 f(x_2)

Evaluate the integrals for f = x^0, x^1, x^2, x^3
Yields four equations for the four unknowns

f = 1:    \int_{-1}^{1} 1\,dx   = 2   = c_1 + c_2
f = x:    \int_{-1}^{1} x\,dx   = 0   = c_1 x_1 + c_2 x_2
f = x^2:  \int_{-1}^{1} x^2\,dx = 2/3 = c_1 x_1^2 + c_2 x_2^2
f = x^3:  \int_{-1}^{1} x^3\,dx = 0   = c_1 x_1^3 + c_2 x_2^3

Solving gives

c_1 = 1,  c_2 = 1,  x_1 = -\frac{1}{\sqrt{3}},  x_2 = \frac{1}{\sqrt{3}}

I = \int_{-1}^{1} f(x)\,dx \approx f\left(-\frac{1}{\sqrt{3}}\right) + f\left(\frac{1}{\sqrt{3}}\right)
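A small sketch of the resulting two-point Gauss-Legendre rule, including the linear map from [a,b] onto [-1,1] from the earlier slide; the function name is an illustrative choice.

# Two-point Gauss-Legendre rule on an arbitrary interval [a, b]
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre: exact for cubics."""
    half, mid = 0.5 * (b - a), 0.5 * (b + a)
    x1, x2 = -1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)
    return half * (f(half * x1 + mid) + f(half * x2 + mid))   # c1 = c2 = 1

print(gauss2(math.sin, 0.0, math.pi))            # ~1.9358 with just two evaluations
print(gauss2(lambda x: x**3 - 2*x, -1.0, 2.0))   # 0.75 exactly - a cubic is integrated exactly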
Higher order strategies

Method generalizes in a straightforward way to higher numbers of abscissas
  The set of monomials integrated exactly grows so that there are always as many equations as unknowns (2n for n abscissas)
  Note the midpoint rule is the n=1 formula

Example, for n=3
  Need to find (c_1, c_2, c_3, x_1, x_2, x_3) given the functions f(x) = x^0, x^1, x^2, x^3, x^4, x^5

Gives

I = \int_{-1}^{1} f(x)\,dx \approx \frac{5}{9} f\left(-\sqrt{\frac{3}{5}}\right) + \frac{8}{9} f(0) + \frac{5}{9} f\left(\sqrt{\frac{3}{5}}\right)
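For general n the abscissas are the roots of the Legendre polynomial P_n (see the next slide), and in practice one usually takes the nodes and weights from a library. A hedged sketch using NumPy's Gauss-Legendre helper, assuming NumPy is available:

# n-point Gauss-Legendre via numpy.polynomial.legendre.leggauss
import numpy as np

def gauss_legendre(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map onto [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

print(gauss_legendre(np.sin, 0.0, np.pi, 3))   # ~2.0014 (the 5/9, 8/9, 5/9 rule)
print(gauss_legendre(np.sin, 0.0, np.pi, 6))   # essentially exact (error around 1e-9)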
Alternative Gauss-based strategies

The abscissas are the roots of a polynomial belonging to a class of orthogonal polynomials – in this case Legendre polynomials
Thus far we considered integrals only of a function f (where f was a polynomial)
Can extend this to

\int_a^b W(x) f(x)\,dx \approx \sum_{i=1}^{m} w_i f(x_i)

Example:

I = \int_{-1}^{1} \frac{\exp(-\cos^2 x)}{\sqrt{1-x^2}}\,dx

  We may also need to change the interval to (-1,1) to allow for singularities

Why would we do this?
  To hide integrable singularities in f(x)
  The orthogonal polynomials will also change depending on W(x) (can be Chebyshev, Hermite, ...)
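A hedged sketch of the W(x) = 1/sqrt(1-x^2) case: the matching orthogonal polynomials are Chebyshev, for which the nodes and weights have the simple closed form used below; the helper name and the test integrands are illustrative, not the example from the slide.

# Gauss-Chebyshev quadrature: int_{-1}^{1} f(x)/sqrt(1-x^2) dx
# nodes x_i = cos((2i-1) pi / (2n)), weights w_i = pi/n
import math

def gauss_chebyshev(f, n):
    nodes = [math.cos((2*i - 1) * math.pi / (2*n)) for i in range(1, n + 1)]
    return (math.pi / n) * sum(f(x) for x in nodes)

# The singular 1/sqrt(1-x^2) factor never has to be evaluated directly
print(gauss_chebyshev(lambda x: 1.0, 4))    # pi, since int dx/sqrt(1-x^2) = pi
print(gauss_chebyshev(lambda x: x*x, 4))    # pi/2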
Summary

Low order Newton-Cotes formulae have comparatively slow convergence properties
  Higher order methods have better convergence properties but can suffer from numerical instabilities
  High order ≠ high accuracy
Applying Richardson Extrapolation to the compound trapezoid rule gives Romberg Integration
  Very good convergence properties for a simple method
Gaussian quadrature, and related methods, show good stability and are computationally efficient
  Implemented in many numerical libraries
Next lecture

Numerical integration problems
  Using changes of variable
  Dealing with singularities
Multidimensional integration