Numerical Integration


1 Methods for Numerical Integration of Functions of One Variable

This practical introduces the following:

Recursive Algorithms

Romberg Integration

This practical is designed to:

Illustrate how the error in performing numerical integrations depends systematically on the algorithm and step size for a definite integral involving a function of one variable.

Show you how to handle some special cases: When one of the limits of integration is infinite and when the derivative of the function to be integrated is large or infinite at the limits of integration.

1.1 Introduction

In the Senior Freshman year you were introduced to numerical integration in one dimension using the Trapezoid rule and Simpson's rule. It was probably obvious to you that you obtained a more accurate answer when the interval between sampling points (or step size) was reduced. What is less obvious is the way in which the error in the numerical integration depends on step size. This practical will focus on predicting errors in numerical integration schemes and will allow you to demonstrate that the error in a numerical integration can be reduced systematically to an acceptable value. Of course, the magnitude of the acceptable error depends on the problem.

The trapezoid rule is

\int_a^b f(x)\,dx \approx h \left[ \tfrac{1}{2} f_0 + f_1 + \cdots + f_{N-1} + \tfrac{1}{2} f_N \right], \qquad (1)

where h = (b - a)/N and f_n is the n-th sampling point. But what is the error in this approximation? Can the error be predicted, i.e. does it depend on the function and its limits in some predictable way, or do we simply keep reducing the step size, h, until the integral converges and hope for the best? When we say that the integral has converged we mean that when h is decreased the change in the answer is less than some acceptable value. In fact, the error can be predicted and it depends on the step size and derivatives of f at a and b, the limits of integration. Analysis of errors in numerical integration is based on Taylor series expansions of the function to be integrated. A Taylor series expansion of the function f about the point a has the form

f(a + h) = f(a) + h f'(a) + \frac{h^2}{2!} f''(a) + \cdots. \qquad (2)

The primes denote derivatives of the function and are evaluated at a. One test for convergence of the expansion is that the ratios of magnitudes of successive terms in the expansion should be less than 1 (see, for example, M. Boas, Mathematical Methods in the Physical Sciences, pp 8ff). If a truncated Taylor series expansion is used then we say that the expansion is correct to order n, where the first term omitted from the expansion contains h to the power n:

f(a + h) = f(a) + h f'(a) + O(h^2). \qquad (3)

Since a Taylor series expansion depends on the function and its derivatives, we are implicitly assuming that f is well behaved, i.e. the function and its derivatives are continuous functions, and that the step size chosen is small enough that a Taylor series expansion of the function about the sampling points converges.

1.2 Romberg Integration (deVries pp 149-160)

One way of reducing numerical error, which we will define as the difference between the exact result and the numerical result, is to decrease the step size. However, a second means of reducing the numerical error is to use a better integration rule. For example, for a given step size, Simpson's rule will generally yield a more accurate result than the Trapezoid rule. There is a systematic way of generating better and better integration rules which includes the Trapezoid rule and Simpson's rule as the first two cases. This more general method is called Romberg integration.

The Euler-Maclaurin rule for integration (see deVries pp 153ff) includes higher powers of the step size than the Trapezoid rule and is given by

\int_a^b f(x)\,dx = h \left[ \tfrac{1}{2} f_0 + f_1 + \cdots + f_{N-1} + \tfrac{1}{2} f_N \right] - \frac{h^2}{12} \left[ f'_N - f'_0 \right] + \frac{h^4}{720} \left[ f'''_N - f'''_0 \right] - \cdots. \qquad (4)

You can see that it contains terms corresponding to the Trapezoid rule plus higher terms, which depend on the derivatives of f at the integration limits. We can now derive a new integration scheme - Romberg integration - which is correct to O(h^2), O(h^4), etc. (pronounced order h squared, and so on) depending on the order of the Romberg integration rule chosen. To see how to do this rewrite Eq. 4 as

\int_a^b f(x)\,dx = T_{m,0} + \alpha h^2 + \beta h^4 + \cdots. \qquad (5)


T_{m,0} is the Trapezoid rule part, and the coefficients \alpha and \beta can easily be determined by comparing Eqs. 4 and 5. N = 2^m (m an integer) is the number of intervals the range of integration has been divided into. The meaning of the second subscript on T will become clear shortly. Now rewrite Eq. 5 for a step size, h/2, half of its original size. Obviously the number of intervals in the integration range has doubled, so m increases by 1 and Eq. 5 becomes

\int_a^b f(x)\,dx = T_{m+1,0} + \alpha \left( \frac{h}{2} \right)^2 + \beta \left( \frac{h}{2} \right)^4 + \cdots. \qquad (6)

Now subtract Eq. 5 from 4 \times Eq. 6 and divide the result by 3 to obtain

\int_a^b f(x)\,dx = \frac{4 T_{m+1,0} - T_{m,0}}{3} - \frac{\beta h^4}{4} + \cdots = T_{m,1} + O(h^4). \qquad (7)

We now have an expression which is correct to O(h^4). One set of differences has been used and so the second subscript is incremented from 0 to 1; we will refer to this as the first order Romberg integration rule. We can now subtract two versions of Eq. 7 to eliminate errors up to O(h^6). This procedure can be carried out to higher and higher order. The general relationship between the T coefficients is

\int_a^b f(x)\,dx = T_{m,k} + O(h^{2(k+1)}) = \frac{4^k T_{m+1,k-1} - T_{m,k-1}}{4^k - 1} + O(h^{2(k+1)}). \qquad (8)

This kind of relationship between successive approximations to the integral in Romberg integration is particularly suited to numerical computation, as the next approximation can be computed in terms of the previous one, until some convergence criterion set by the programmer has been reached. In C and Fortran 90 (but not in Fortran 77) it is possible to use a language feature called recursion, in which a function calls itself until some condition is met. An example of a recursive function is the following, which computes N factorial:

    int factr(int N)
    {
        int answer;
        if (N == 1)
            return 1;
        answer = factr(N - 1) * N; /* recursive call */
        return answer;
    }


Exercises 1

(a) Prove that Romberg integration with k =1 is actually Simpson’s rule.

(b) Use the code fragment in romberg.c to construct a programme to perform Romberg integration for a function of one variable. In the code fragment the variable n is equivalent to m + k in Eq. 8. Use your programme to perform the integration

\int_0^\pi \sin x\,dx

numerically and compare to the analytic result.

(c) The difference between the exact result and the numerical result is the numerical error. Plot a graph of log(numerical error) versus log(number of intervals, N) for Romberg integration of sin x for k = 0 (Trapezoid rule) and k = 1 (Simpson's rule). Use N = 2^m intervals with m >= 3. Use the fit function in Gnuplot with f(x) = a + b*x to determine the slopes of the two graphs and hence show that the Trapezoid rule has an error proportional to h^2 and that Simpson's rule has an error proportional to h^4.

(d) Find the constants of proportionality of the error terms in using the Trapezoid rule and Simpson's rule from the intercepts a in part (c). Compare your values to the values predicted by the Euler-Maclaurin rule (Eq. 4).

(e) Modify your Romberg integration programme so that it

Performs the numerical integration to order k

Compares the result at order k to the result at order k-1

Prints the result if the difference in the two results is less than some predetermined value or performs the numerical integration to order k+1 if the difference is larger

1.3 Fresnel diffraction at a straight edge

According to Fresnel’s theory of diffraction, the intensity of light falling on a screen after it has been diffracted by a sharp, straight edge is

F(w) = 0.5 I_0 \left\{ \left[ 0.5 - C(w) \right]^2 + \left[ 0.5 - S(w) \right]^2 \right\},

where w is closely related to the distance from the diffracting edge and I_0 is a geometrical factor. C and S are given by

S(w) = \int_0^w \sin\!\left( \frac{\pi x^2}{2} \right) dx, \qquad C(w) = \int_0^w \cos\!\left( \frac{\pi x^2}{2} \right) dx.

Exercises 2

(a) Use your Romberg integration programme to plot a graph of F(w) versus w for w in the range -4 < w < 4.

(b) For small values of w , the following series expansions may be used:

C(w) = w \left[ 1 - \frac{1}{2!\,5} \left( \frac{\pi w^2}{2} \right)^2 + \frac{1}{4!\,9} \left( \frac{\pi w^2}{2} \right)^4 - \cdots \right]

and

S(w) = w \left[ \frac{1}{3} \left( \frac{\pi w^2}{2} \right) - \frac{1}{3!\,7} \left( \frac{\pi w^2}{2} \right)^3 + \frac{1}{5!\,11} \left( \frac{\pi w^2}{2} \right)^5 - \cdots \right].

Use these expansions to confirm your graphical results for F(w) by plotting appropriate graphs.

1.4 Use of change of variables to improve numerical integration convergence

According to Eq. 4, the coefficient of the error term of order h^2 is the difference of the derivatives of the function to be integrated, evaluated at the limits of integration, divided by 12. If one or both of these derivatives is large (or even infinite), this can lead to slow convergence of the numerical integration with step size, or to incorrect answers. One solution to this problem is to make a change of variables so that the derivative(s) at the limit(s) of integration is (are) small enough. Making a change of variable is equivalent to making the step interval depend on the integration variable.

Exercises 3

(a) Use your Romberg integration programme to integrate

\int_{-1}^{1} \sqrt{1 - x^2}\, dx.

Read the section entitled A Change of Variables beginning on p 157 of deVries. You should reproduce the values in the table on p 158.

(b) Make the change of variables x = \cos\theta suggested and check that the integral converges in fewer Romberg iterations, when compared to part (a).

(c) Now use your programme to integrate

\int_{-1}^{1} \frac{1}{\sqrt{1 - x^2}}\, dx

by a suitable change of variables.


1.5 Improper Integrals

Integrals of the form

I = \int_0^\infty f(x)\, dx

are termed improper and are commonly encountered in physics. For example, C(\infty) or S(\infty) in section 1.3, or the Ewald method for treating the Coulomb interaction in systems with periodic boundary conditions (see, for example, Kittel, Introduction to Solid State Physics, 7th Edition, Appendix B, or Elliott, The Physics and Chemistry of Solids, pp 101ff). If we try to apply the Trapezoid rule, as we have used it so far, we obviously encounter the problem that an infinite number of intervals is required to cover the integration range. However, if the integral I is to be finite then f(x) must tend to zero faster than 1/x at large x. (Note that different criteria apply in more dimensions.) We could therefore integrate to some large value and hope that this result was a good enough approximation to the exact result. However, as we have seen before, there is generally a right way and a wrong way to perform computational tasks. The latter suggestion will be the wrong way in nearly every case. An alternative approach is to divide the integration range into [0, a] and [a, \infty]. The dividing point, a, is chosen to optimise accuracy and speed of the calculation. The integral with limits [0, a] can be done by the methods used already and the integral with limits [a, \infty] can be done by a suitable transformation of variables such as y = 1/x, in which case the limits of integration become [1/a, 0] and the integral becomes

I = \int_0^a f(x)\, dx + \int_0^{1/a} \frac{1}{y^2}\, f\!\left( \frac{1}{y} \right) dy.

Exercises 4

(a) Use this substitution of variables to evaluate

\int_0^\infty \frac{dx}{1 + x^2}

to an accuracy of 8 decimal places. Use a dividing point a = 1. Investigate how the choice of a affects the rate of convergence of the numerical integration, i.e. the order of Romberg integration necessary to obtain an accuracy of 8 decimal places.

APPENDIX A Romberg Integration

Taylor expansion of f about a gives

\int_a^b f(x)\, dx = \int_a^b \left[ f(a) + (x - a) f'(a) + \frac{(x - a)^2}{2!} f''(a) + \frac{(x - a)^3}{3!} f'''(a) + \cdots \right] dx
= h f(a) + \frac{h^2}{2!} f'(a) + \frac{h^3}{3!} f''(a) + \frac{h^4}{4!} f'''(a) + \cdots

A similar expansion of f about b gives

\int_a^b f(x)\, dx = \int_a^b \left[ f(b) + (x - b) f'(b) + \frac{(x - b)^2}{2!} f''(b) + \frac{(x - b)^3}{3!} f'''(b) + \cdots \right] dx
= h f(b) - \frac{h^2}{2!} f'(b) + \frac{h^3}{3!} f''(b) - \frac{h^4}{4!} f'''(b) + \cdots
