

Gauss-Bond Quadrature

2/25/2011

Chad Birger

Table of Contents

Chapter 1: Statement of the Problem

Chapter 2: Quadrature Discussion

Works Cited

Chapter 1: Statement of the Problem

There are many quadrature methods available for approximating integrals. There are the methods that are taught in calculus, including the Trapezoidal Rule and Simpson's Rule. These methods are not very powerful and are often used just to introduce much more sophisticated concepts. Most of the quickly converging methods use some sort of Gaussian rule.

The best quadrature methods are implemented by finding a way to return both an integral approximation and some sort of error estimate for this approximation. The most common way this is done is by finding more than one approximation and then finding a bound on the error term by comparing these approximations. If this error term is “small enough,” then we can return the most accurate of the approximations that were found.

These methods are well documented and are implemented in various computer programs that perform quadrature. Gauss-Legendre polynomials are the basis for each of these methods, which include Gaussian quadrature, Gauss-Kronrod quadrature, and Gauss-Patterson quadrature. There is a very powerful computer package called QUADPACK that uses these methods for numerical approximation.

In 2001, in a paper self-published by Bond, a new quadrature technique was introduced. This new method takes advantage of the very efficient Gaussian rules. It starts with the basic Gauss-Legendre polynomial, but then uses the nodes to create a new approximation without any additional function calls. This is a vast improvement over previous methods, which require at least n + 1 additional function calls.

An important aspect of numerical quadrature that was not included in Bond's work deals with error analysis of this new method. Another unfortunate omission is that tables were only given for certain sizes of n. This leaves users with very limited choices of approximating formulas.

This paper contains a literature review of previous methods and analysis. The most effective techniques are introduced and discussed. There is also background information and a short analysis for the use of QUADPACK.

Following this is an in-depth look into the new Gauss-Bond quadrature technique. It includes methods for finding nodes and weights, error analysis, as well as tables. There is also a listing of computer algorithms written in Matlab that use this new technique to approximate integrals.

Chapter 2: Quadrature Discussion

Integrals are a very important part of mathematics. In calculus, we are taught that one of the applications of an integral is finding the area "underneath" a curve. Evaluating an integral consists of finding an antiderivative of the function we are integrating, and then finding the difference of this antiderivative at the upper and lower limits of integration. This is part of the Fundamental Theorem of Calculus: if f is continuous at every point of [a, b], and if F is any antiderivative of f on [a, b], then

$$\int_a^b f(x)\,dx = F(b) - F(a).$$

This works very well as long as the above conditions are satisfied. However, this is not always the case. There are many examples of functions for which an antiderivative formula is not known or readily accessible. For these functions, we can use numerical approximation techniques to evaluate the integral. We call these approximation methods quadrature.

Many methods of quadrature are available. Each of these methods has certain quadrature rules that follow the form

$$\int_a^b w(x)\,f(x)\,dx = \sum_{i=1}^{n} w_i f(x_i) + E,$$

where w(x) represents a fixed weight function, {x_i} are the abscissae or nodes, {w_i} are the weights, n is the number of nodes, and E is the error term. A quadrature sum is said to be of degree d if it is exact for all functions which are polynomials of degree ≤ d. Two properties that are desirable are that all of the nodes are inside the interval of integration and all of the weights are positive (Piessens, 1983).
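As a small illustration of the general form above, the sketch below (in Python, with w(x) = 1) evaluates a quadrature sum for a given set of nodes and weights. The two-point Gauss-Legendre values on [-1, 1] are used purely as example data:

```python
import math

def quad_sum(f, nodes, weights):
    """Apply a quadrature rule defined by its nodes and weights:
    returns sum_i w_i * f(x_i)."""
    return sum(w * f(x) for x, w in zip(nodes, weights))

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3), weights 1.
nodes = [-1.0 / math.sqrt(3), 1.0 / math.sqrt(3)]
weights = [1.0, 1.0]

# This rule is exact for polynomials of degree <= 2n - 1 = 3; the exact
# integral of x^3 + x^2 + 1 over [-1, 1] is 8/3.
approx = quad_sum(lambda x: x**3 + x**2 + 1, nodes, weights)
```

Because the rule has degree 3, the sum reproduces the integral of this cubic exactly, up to rounding.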

There are three basic steps that quadrature rules should follow. The first is the invention of a suitable method for solving the problem at hand. The second is that the rule should establish the accuracy of the method by finding a suitable truncation error, whether exact or an upper bound. Finally, quadrature rules should verify the stability of the proposed algorithm: the algorithm should not be overly subject to rounding errors. One good way to secure this is to use high-precision arithmetic.

One numerical approximation technique that is taught in calculus is the Trapezoidal Rule. The Trapezoidal Rule fits trapezoids whose vertices are the function values at the subinterval endpoints. The areas of the trapezoids are easily calculated and summed to provide an estimate for the area under the curve. Looking at the error term for the Trapezoidal Rule, we see that

$$|E| \le \frac{b-a}{12}\, h^2 \max_{[a,b]} \left| f^{(2)}(x) \right|,$$

where h = (b − a)/n and the maximum is taken over the second derivative on the interval.
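A minimal sketch of the composite Trapezoidal Rule described above (illustrative Python, not code from the paper):

```python
# Composite Trapezoidal Rule: sum the areas of n trapezoids of width
# h = (b - a) / n fitted to the function values at the subinterval endpoints.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))    # each endpoint belongs to one trapezoid
    for i in range(1, n):
        total += f(a + i * h)      # interior points belong to two trapezoids
    return h * total
```

For f(x) = x² on [0, 1] with n = 100, the error term above bounds the error by (1/12)(0.01)²·2 ≈ 1.7 × 10⁻⁵, which matches what the code produces.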

A second approximation technique from elementary calculus is Simpson's Rule. This rule partitions our region into equally spaced subintervals and then fits a parabola to each pair of subintervals by using the function values at the endpoints as well as the function value halfway between them. The area of each of these regions is then calculated and summed over the entire interval. This technique usually produces much more accurate results than the Trapezoidal Rule. We can see this by looking at the error formula

$$|E| \le \frac{b-a}{180}\, h^4 \max_{[a,b]} \left| f^{(4)}(x) \right|.$$
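The composite version of Simpson's Rule can be sketched the same way (again an illustration, not the paper's code):

```python
# Composite Simpson's Rule: fit a parabola over each pair of subintervals,
# using both endpoints and the midpoint. n (number of subintervals) must be even.
def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Midpoints of parabolas get weight 4, shared endpoints get weight 2.
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0
```

Because the error term involves the fourth derivative, Simpson's Rule is exact for cubics even with a single pair of subintervals.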

There are many other quadrature methods available, but the method we are going to focus on is Gaussian quadrature. In the previous methods, we were setting the nodes at fixed positions, usually the endpoints and n − 1 equally spaced points in between. In Gaussian quadrature, we will leave the nodes free, to be solved for by using a system of equations with unknown weights and nodes. This gives the exact result for every polynomial of degree at most 2n − 1. We can then find the values of the nodes and weights on [0, 1] by solving the nonlinear system of equations A c = b, which requires the rule to integrate each monomial exactly:

$$\sum_{i=1}^{n} w_i x_i^k = \int_0^1 x^k\,dx = \frac{1}{k+1}, \qquad k = 0, 1, \ldots, 2n-1.$$

In matrix form,

$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & & \vdots \\ x_1^{2n-1} & x_2^{2n-1} & \cdots & x_n^{2n-1} \end{pmatrix}, \qquad c = \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 1/2 \\ \vdots \\ 1/(2n) \end{pmatrix}.$$
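For n = 2 on [0, 1], this nonlinear system can be solved directly with Newton's method. The sketch below is an illustration (not code from the source); the known solution is x = (3 ∓ √3)/6 with both weights equal to 1/2:

```python
import numpy as np

def residual(v):
    """Moment equations: w1*x1^k + w2*x2^k = 1/(k+1) for k = 0..3."""
    w1, w2, x1, x2 = v
    return np.array([w1 * x1**k + w2 * x2**k - 1.0 / (k + 1) for k in range(4)])

def jacobian(v):
    """Analytic Jacobian of the moment equations."""
    w1, w2, x1, x2 = v
    J = np.zeros((4, 4))
    for k in range(4):
        J[k] = [x1**k, x2**k,
                k * w1 * x1**(k - 1) if k else 0.0,
                k * w2 * x2**(k - 1) if k else 0.0]
    return J

v = np.array([0.5, 0.5, 0.2, 0.8])   # rough starting guess
for _ in range(50):                   # Newton iteration
    v = v - np.linalg.solve(jacobian(v), residual(v))

w1, w2, x1, x2 = v
```

Newton's method converges quadratically here because the starting guess is near the (locally unique) Gaussian nodes and weights.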

For constructing classical Gaussian quadrature rules, there are algorithms available which take advantage of properties of orthogonal polynomials. These properties require the following: a weight function w(x) ≥ 0 that is integrable on [a, b]; that all moments

$$\mu_i = \int_a^b x^i\,w(x)\,dx$$

exist; and that μ₀ > 0. From these, it is possible to define an orthogonal set of polynomials {π_n}, where π_n is of exact degree n and satisfies

$$\int_a^b w(x)\,\pi_m(x)\,\pi_n(x)\,dx = 0 \quad \text{for } m \ne n.$$

The zeros of π_n(x) with n ≥ 1 are real, simple, and located in the interval [a, b].
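These properties can be checked numerically for the Legendre case (w(x) = 1 on [−1, 1]) using NumPy's Legendre-series helpers. This is a sanity check added for illustration, not part of the original discussion:

```python
import numpy as np
from numpy.polynomial import legendre as L

# P_3(x) represented as a Legendre-series coefficient vector (coefficient 1 on P_3).
p3 = [0, 0, 0, 1]

# Orthogonality: the integral of P_2 * P_3 over [-1, 1] should be zero.
prod = L.legmul([0, 0, 1], p3)   # P_2 * P_3 as a Legendre series
anti = L.legint(prod)            # its antiderivative
integral = L.legval(1.0, anti) - L.legval(-1.0, anti)

# The zeros of P_3 should be real, simple, and lie inside (-1, 1).
zeros = L.legroots(p3)
```

The computed zeros are 0 and ±√(3/5), in agreement with the statement that the zeros of π_n lie inside the interval.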

In Gaussian quadrature, the sums of the highest possible degree, d = 2n − 1, are obtained from the Gauss formulae. The classical orthogonal polynomials each define a classical Gaussian quadrature formula. These formulae are distinguished by their weight function and interval of integration, and they are named after the Legendre, Chebyshev, Jacobi, Laguerre, and Hermite polynomials (Piessens, 1983).

The most commonly used formula to find nodes and weights in Gaussian quadrature is called the Gauss-Legendre rule. This rule covers the interval [−1, 1] and uses the weight function w(x) = 1. Any finite range can be transformed to this interval by using the substitution

$$x = \frac{b-a}{2}\,t + \frac{b+a}{2}.$$
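The substitution can be carried out mechanically. The sketch below maps Gauss-Legendre nodes from [−1, 1] to an arbitrary [a, b], using NumPy's `leggauss` for the reference nodes and weights (an illustration under those assumptions):

```python
import numpy as np

def gauss_on_interval(f, a, b, n):
    """n-point Gauss-Legendre rule on [a, b], via the substitution
    x = (b - a)/2 * t + (b + a)/2, so that dx = (b - a)/2 dt."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)       # mapped nodes on [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))     # Jacobian factor (b - a)/2
```

Five points already integrate eˣ over [0, 1] to roughly twelve digits.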

Legendre polynomials consist of the set {P₀(x), P₁(x), …, P_n(x), …} with the following properties:

1) For each n, P_n(x) is a polynomial of degree n.

2) $$\int_{-1}^{1} P_i(x)\,P_n(x)\,dx = 0 \quad \text{for } i \ne n.$$

There are other methods to find these polynomials. G. Rybicki developed a routine for calculating the nodes and weights of Gauss-Legendre. This routine involves a recurrence relationship and uses Newton's method to find the nodes. It also uses the fact that the nodes and weights are symmetric, so only half of them need to be found.
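A Python sketch of this style of routine is given below. The constants follow the widely published `gauleg` routine; treat the details as illustrative rather than as Rybicki's exact code:

```python
import math

def gauss_legendre_nodes(n, tol=1e-14):
    """Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1],
    found by Newton's method on the Legendre three-term recurrence."""
    nodes = [0.0] * n
    weights = [0.0] * n
    m = (n + 1) // 2                  # symmetry: only half the roots are needed
    for i in range(m):
        # Chebyshev-based initial guess for the i-th largest root.
        x = math.cos(math.pi * (i + 0.75) / (n + 0.5))
        while True:
            # Evaluate P_n(x) and P_{n-1}(x) via the recurrence
            # (j+1) P_{j+1} = (2j+1) x P_j - j P_{j-1}.
            p0, p1 = 1.0, 0.0
            for j in range(n):
                p0, p1 = ((2 * j + 1) * x * p0 - j * p1) / (j + 1), p0
            dp = n * (x * p0 - p1) / (x * x - 1.0)   # derivative P_n'(x)
            dx = p0 / dp
            x -= dx                                   # Newton step
            if abs(dx) < tol:
                break
        nodes[i], nodes[n - 1 - i] = -x, x            # symmetric pair of roots
        w = 2.0 / ((1.0 - x * x) * dp * dp)           # standard weight formula
        weights[i] = weights[n - 1 - i] = w
    return nodes, weights
```

For n = 3 this reproduces the known nodes 0, ±√(3/5) and weights 8/9, 5/9, and the weights of any rule sum to 2, the length of the interval.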

Now that we have covered all of this, we can talk about the Gaussian rules. These apply to each set of polynomials. Six properties of Gaussian rules are listed below.

1) The weights and nodes are generally irrational, with a few exceptions (odd values of n have a node at the midpoint, and n = 2 and n = 3 also have rational weights).

2) The quadrature is open. This means that there are no nodes occurring at either endpoint.

3) The sets of nodes are disjoint. This means that the n-point nodes are distinct from the m-point nodes. The only node that is not disjoint is 0 in an odd-numbered quadrature.

4) If the function is like a polynomial over the given interval, the Gaussian rules will do a good job of integrating it.

5) Gaussian quadrature is interpolatory. If we use the n nodes and function values to interpolate a polynomial, the result obtained by integrating this polynomial is identical to the Gaussian result.

6) There is a relationship between Gaussian quadrature and orthogonal polynomials.

In our case, the n Gaussian nodes will be the zeros of the n-th Legendre polynomial (Kahaner, 1989).

After satisfying all of these rules and obtaining a set of nodes and weights for our Gaussian quadrature, the next step is to find our approximation of the integral. Our quadrature sum yields the approximation

$$Q_n = \sum_{k=1}^{n} w_k f(x_k)$$

(Piessens, 1983). This requires only n function calls and is shown to converge much faster than Simpson's method. The error term for Gaussian quadrature is shown to be

$$E = \frac{(b-a)^{2n+1}\,(n!)^4}{(2n+1)\,\left[(2n)!\right]^3}\, f^{(2n)}(\xi).$$

Analysis of this error term also shows that Gaussian quadrature is much more accurate than previous methods.
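To see the error term in action, the sketch below compares the actual error of a 3-point Gauss-Legendre approximation of ∫₀¹ eˣ dx against the bound, using max|f^(2n)| = e on [0, 1]. The integrand and the use of NumPy's `leggauss` are assumptions made for illustration:

```python
import math
import numpy as np

a, b, n = 0.0, 1.0, 3
t, w = np.polynomial.legendre.leggauss(n)
x = 0.5 * (b - a) * t + 0.5 * (b + a)          # map nodes to [a, b]
approx = 0.5 * (b - a) * np.sum(w * np.exp(x))
actual_error = abs((math.e - 1.0) - approx)     # exact integral is e - 1

# Bound from E = (b-a)^(2n+1) (n!)^4 / ((2n+1) [(2n)!]^3) * f^(2n)(xi),
# with |f^(2n)| <= e on [0, 1] since every derivative of e^x is e^x.
bound = ((b - a) ** (2 * n + 1) * math.factorial(n) ** 4
         / ((2 * n + 1) * math.factorial(2 * n) ** 3) * math.e)
```

With only three function calls the error is already on the order of 10⁻⁶, consistent with the bound.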

Quadrature rules are the "building blocks" on which our approximation algorithms are built. The algorithm itself is how the rules are implemented. Each rule by itself might not be useful, but when put to use collectively in an algorithm, they become important.

In practical quadrature, we cannot look at the absolute error when deciding if our approximation is close enough. It is common practice to estimate the remainder term by combining two or more quadrature rules. For example, if we have two quadrature rules Q_n and Q_m, with m > n, we could say

$$E_n \approx |Q_m - Q_n|,$$

provided m is such that this estimate is sufficient (Monegato, 2001).

If we look at the absolute value of the difference between two approximations, A and B, and this difference is "small" enough, then we can accept the result. This difference is used to estimate the error in the less accurate of the two approximations.

We call these two quadrature approximations a quadrature pair, denoted (A, B). The result of each quadrature pair gives us both an integral estimate and an error estimate. However, using this method, error estimation has a large "heuristic component." This procedure turns out to be closer to an art than a science (Kahaner, 1989).

In Gaussian quadrature, we use the quadrature pair (G_n, G_{2n−1}). This gives us two approximations, with the second rule using roughly twice the number of nodes as the first. We would expect G_{2n−1} to be the more accurate approximation, so we will use that as our estimate and |G_n − G_{2n−1}| as an error estimate.
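A sketch of this pair on [−1, 1], again using NumPy's `leggauss` and cos as an example integrand (both are assumptions for illustration):

```python
import numpy as np

def gauss(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(t))

n = 5
g_coarse = gauss(np.cos, n)          # G_n
g_fine = gauss(np.cos, 2 * n - 1)    # G_{2n-1}, returned as the estimate
err_est = abs(g_fine - g_coarse)     # error estimate for the pair
```

The exact value is 2 sin(1); since G_{2n−1} is far more accurate, the difference essentially measures the true error of G_n.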

Looking at the amount of work related to this method, we see that we will need to make 3n − 1 evaluations (or 3n − 2 if n is odd). Kronrod noticed this in his work and sought a method that could reuse some of the existing nodes. He invented a new quadrature method that placed new nodes between the existing nodes from an n-point Gaussian quadrature. He knew from previous work by Szego that optimal extensions of the Gaussian rules existed. Then in 1965, Kronrod was the first to compute these optimal points of Gaussian rules (Piessens, 1983).

Using Kronrod quadrature, we now have a quadrature pair (G_n, K_{2n+1}), where G_n is the n-point Gaussian rule and K_{2n+1} is the Gauss-Kronrod rule

$$K_{2n+1} = \sum_{i=1}^{n} a_i f(x_i) + \sum_{j=1}^{n+1} b_j f(\xi_j),$$

where the {x_i} are the n Gaussian nodes and the {ξ_j} are the n + 1 new Kronrod nodes. From this, we can see that K_{2n+1} shares n nodes with G_n.

We can use this fact to conclude that only 2n + 1 function calls are required to evaluate this Gauss-Kronrod pair. Also, remember that G_n exactly integrates polynomials of degree 2n − 1, while K_{2n+1} exactly integrates polynomials of degree 3n + 1. From this, we expect K_{2n+1} to be more accurate than G_n, so we can use G_n to calculate the error estimate and return K_{2n+1} as our integral estimate. It has been shown through experimentation that the actual error is closer to

$$\left(200\,|G_n - K_{2n+1}|\right)^{1.5},$$

which still overestimates the actual error (Kahaner, 1989).

Patterson used the results of Kronrod's work and came up with a system of equations that successively extends each rule with new optimal points. If we start with an n-point quadrature, we can then find optimal points for an m = 2n + 1 point quadrature, then a 2m + 1 point quadrature, and so on. The main version used starts with a 10-point Gauss rule, then finds optimal points for rules of order 21, 43, and 87.

QUADPACK is the computational software package that implements these quadrature rules. It was programmed in FORTRAN and published in 1983. It consists of a collection of basic routines that are combined into a very powerful tool for approximating integrals. The package is organized into a web of functions that call each other to give the best possible approximation. There are 12 automatic quadrature programs and 6 non-automatic programs, which use Gauss-Kronrod quadrature. The majority of these functions are adaptive (Piessens, 1983).
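For reference, SciPy's `scipy.integrate.quad` is a widely available wrapper around these QUADPACK routines; like the quadrature pairs above, it returns both an integral estimate and an absolute-error estimate. The integrand here is chosen purely as an example:

```python
import numpy as np
from scipy.integrate import quad

# Integrate e^(-x) * sin(x) over [0, pi]; the exact value is (1 + e^(-pi)) / 2.
result, abserr = quad(lambda x: np.exp(-x) * np.sin(x), 0.0, np.pi)
```

The returned `abserr` is QUADPACK's own estimate of the absolute error, in the same spirit as the |G_n − K_{2n+1}| heuristic discussed above.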

Adaptive integration algorithms recursively divide the region of integration into sub-regions. The integral is then approximated separately for each sub-region using whichever quadrature rule is requested. The algorithm quits and returns when the sum of all local estimates and errors of the sub-regions satisfies the overall requirement.

The adaptive algorithms are used to try to minimize the computational effort by placing more nodes in the part of the region that is most difficult. As a general rule, the algorithm should also check all previously calculated function values to make sure that it is not performing extra work.

There are two main sub-division strategies in adaptive algorithms: global and local. Globally adaptive means that the sum of the error estimates over all sub-regions must be smaller than the given error tolerance. Using this method, all sub-regions are eligible for further subdivision at any time (Krommer, 1998).

Locally adaptive refers to attempting to achieve the required accuracy on each subinterval. Usually a tolerance is given at the start of integration, and each time the region is split, half of the tolerance is sent to each sub-division. In order to achieve accuracy, the sub-regions are designated as active or inactive, depending on whether they meet the local error tolerance. This method actually leads to greater accuracy than is originally required.
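A minimal locally adaptive scheme in this spirit can be sketched as follows. It uses Simpson's rule on each region and halves the tolerance at each split; the division by 15 is the standard adaptive-Simpson error heuristic, not something specific to the text:

```python
import math

def simpson_rule(f, a, b):
    """Basic Simpson's rule on a single region [a, b]."""
    m = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

def adaptive(f, a, b, tol):
    """Locally adaptive Simpson: split the region and send half of the
    tolerance to each sub-division until the local estimate is good enough."""
    m = 0.5 * (a + b)
    whole = simpson_rule(f, a, b)
    left = simpson_rule(f, a, m)
    right = simpson_rule(f, m, b)
    # Standard heuristic: |S_left + S_right - S_whole| / 15 estimates the error.
    if abs(left + right - whole) / 15.0 < tol:
        return left + right
    return adaptive(f, a, m, 0.5 * tol) + adaptive(f, m, b, 0.5 * tol)

# Example: integrate sin over [0, pi]; the exact value is 2.
estimate = adaptive(math.sin, 0.0, math.pi, 1e-9)
```

Because each leaf at depth d receives tol/2ᵈ and there are at most 2ᵈ leaves, the accumulated error stays within the original tolerance, illustrating why the local strategy typically delivers more accuracy than requested.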

Works Cited

Kahaner, D. M. (1989). Numerical Methods and Software. Englewood Cliffs, NJ: Prentice Hall.

Krommer, A. R. (1998). Computational Integration. Philadelphia, PA: SIAM.

Monegato, G. (2001). An overview of the computational aspects of Kronrod quadrature rules. Numerical Algorithms, 26, 173–196.

Piessens, R., et al. (1983). QUADPACK: A Subroutine Package for Automatic Integration. Berlin: Springer-Verlag.
