Introduction to 2-Dim Gaussian Quadrature over a rectangle

Section 4A:
Introduction to iterated 2-dimensional Gaussian
Quadrature over a rectangle
We are now going to use the work on 1-dimensional quadrature to develop a rule for
2-dimensional quadrature. First recall Gaussian Quadrature for an interval [a, b]: for
example, with n = 3, we get 4 points x0, x1, x2, x3 in [a, b], and 4 positive weights w0, w1,
w2, w3.
In general, for a degree 2n+1 rule, we have n+1 points, xi, and weights, wi, and the
quadrature rule is then Q(f) = w0 f(x0) + … + wn f(xn) (with the nodes in [a, b] and the
weights positive). For a continuous function f(x) on [a, b], we then approximate
∫[a,b] f(x) dx ≈ Q(f).
We know that we get exact results if f is any polynomial with deg f ≤ 2n+1.
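As a concrete illustration of this exactness property (a Python sketch, not part of the original Mathematica development; the helper names gauss2 and quad1d are ours), the n = 1 rule on [a, b] uses the two standard Gauss–Legendre nodes (a+b)/2 ± (b−a)/(2√3) with equal weights (b−a)/2, and reproduces the integral of any cubic exactly:

```python
import math

def gauss2(a, b):
    """Two-point Gauss-Legendre rule on [a, b] (n = 1):
    exact for polynomials of degree <= 2n+1 = 3."""
    m, h = (a + b) / 2, (b - a) / 2
    nodes = [m - h / math.sqrt(3), m + h / math.sqrt(3)]
    weights = [h, h]          # each weight equals (b - a)/2
    return nodes, weights

def quad1d(f, a, b):
    """Q(f) = w0 f(x0) + w1 f(x1)."""
    xs, ws = gauss2(a, b)
    return sum(w * f(x) for x, w in zip(xs, ws))

# Exact for a cubic: the integral of x^3 over [0, 1] is 1/4.
print(quad1d(lambda x: x**3, 0.0, 1.0))   # 0.25 up to roundoff
```

For a polynomial of degree 4 the two-point rule is no longer exact, matching the deg f ≤ 2n+1 bound above.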
Now we consider a function f(x, y) defined on a rectangular region R = [a, b] × [c, d].
We want to estimate I = ∫∫R f(x, y) dx dy. In Calculus we learned that this can be
computed as an iterated integral:
I = ∫[c,d] ∫[a,b] f(x, y) dx dy,
or I = ∫[a,b] ∫[c,d] f(x, y) dy dx.
For example, let n = 3 and consider a polynomial f(x, y) with degrees of x and y
separately at most 2n+1 (= 7). We next apply Gaussian Quadrature on [a, b] with
n = 3, to find x0, ….., xn in [a, b], and positive weights w0, ……, wn. Then we apply
Gaussian Quadrature on [c, d] with n = 3 and find y0, ….., yn in [c, d], and positive
weights v0, ……, vn.
Now we define our 2-dimensional quadrature rule by the following formula:
Q[f] = Σ_{0 ≤ i ≤ n} Σ_{0 ≤ j ≤ n} wi vj f(xi, yj).
For example, for n = 1 we get:
Q[f] = w0 v0 f(x0, y0) + w0 v1 f(x0, y1) + w1 v0 f(x1, y0) + w1 v1 f(x1, y1).
We claim that Q[f] = ∫∫R f(x, y) dx dy if the degree of f in x is ≤ 2n+1 and the degree of
f in y is ≤ 2n+1. We will show this for n = 1. Suppose f(x, y) has degree ≤ 3 in x and in y
separately.
Let q(y) = f(x0, y), a polynomial in y of degree at most 3. Therefore,
v0 q(y0) + v1 q(y1) = ∫[c,d] q(y) dy.    (A)
Now let r(y) = f(x1, y), a polynomial in y of degree at most 3. Therefore,
v0 r(y0) + v1 r(y1) = ∫[c,d] r(y) dy.    (B)
Now let p(x) = ∫[c,d] f(x, y) dy, a polynomial in x of degree ≤ 3, so
∫[a,b] p(x) dx = w0 p(x0) + w1 p(x1).    (C)
Now Q[f] = w0 v0 f(x0, y0) + w0 v1 f(x0, y1) + w1 v0 f(x1, y0) + w1 v1 f(x1, y1)
= w0 (v0 f(x0, y0) + v1 f(x0, y1)) + w1 (v0 f(x1, y0) + v1 f(x1, y1))
= w0 (v0 q(y0) + v1 q(y1)) + w1 (v0 r(y0) + v1 r(y1))
= w0 p(x0) + w1 p(x1)    (by (A) & (B))
= ∫[a,b] p(x) dx    (by (C))
= ∫[a,b] ∫[c,d] f(x, y) dy dx
= ∫∫R f(x, y) dy dx.
For n = 1, this completes the proof that Q is a quadrature rule with exact results for
polynomials f(x, y) with degrees at most 2n+1 in x and y separately. For n > 1, the same
kind of proof works. This kind of rule is called 2-dimensional iterated Gaussian
Quadrature. The same kind of rule can be developed for triple integrals or in higher
dimensions. Notice that with n = 1 we have a rule with 4 nodes which is exact for any
polynomial with degrees at most 3 in x and y separately, so the total degree of such a
polynomial is at most 6. But this rule is not exact for all polynomials of total degree 6:
among the degree-6 monomials x^6, x^5 y, x^4 y^2, x^3 y^3, x^2 y^4, x y^5, y^6, only
x^3 y^3 has degree at most 3 in each variable. To obtain a rule that is exact for all
polynomials of degree 6 requires 10 points [2]. The minimum number of nodes required
for exact results up to a given degree > 6 is unknown.
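This exactness boundary can be checked numerically. The following Python sketch (an illustrative aside using the standard two-point Gauss–Legendre nodes; the names gauss2 and q2dim are ours) builds the n = 1 tensor-product rule. It reproduces the integral of x^3 y^3 over [1, 3] × [2, 5] exactly (the exact value is 20 · 609/4 = 3045), but not the integral of x^6, whose degree in x exceeds 3:

```python
import math

def gauss2(a, b):
    # two-point Gauss-Legendre nodes/weights on [a, b] (exact through degree 3)
    m, h = (a + b) / 2, (b - a) / 2
    return [m - h / math.sqrt(3), m + h / math.sqrt(3)], [h, h]

def q2dim(f, a, b, c, d):
    """Iterated (tensor-product) rule: Q[f] = sum_i sum_j w_i v_j f(x_i, y_j)."""
    xs, ws = gauss2(a, b)
    ys, vs = gauss2(c, d)
    return sum(w * v * f(x, y) for x, w in zip(xs, ws)
                               for y, v in zip(ys, vs))

# Degree <= 3 in each variable: exact.
# Integral of x^3 y^3 over [1,3]x[2,5] is (80/4)*(609/4) = 3045.
print(q2dim(lambda x, y: x**3 * y**3, 1, 3, 2, 5))   # 3045 up to roundoff

# Degree 6 in x alone: no longer exact.
exact_x6 = (3**7 - 1) / 7 * (5 - 2)   # exact integral of x^6 over the rectangle
print(q2dim(lambda x, y: x**6, 1, 3, 2, 5), exact_x6)
```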
We conclude this section with Mathematica code that implements this approach to
2-dimensional iterated Gaussian Quadrature. Recall the code for 1-dimensional Gaussian
Quadrature:

(* Compute the densities and nodes for Gaussian quadrature of degree of precision
   2n+1 over [a,b]; generates n+1 nodes and densities.
   Based on R. Curto and L. Fialkow, Houston J. Math. 17 (1991), 603-635,
   MR 93a:47016 *)

Gaussian[n_, a_, b_] := (
  For[i = 0, i <= 2 n + 1, i++,
    bb[i] = (b^(i + 1) - a^(i + 1))/(i + 1)];
  For[i = 0, i <= n, i = i + 1,
    For[j = 0, j <= n, j = j + 1,
      Mtab[i, j] = bb[i + j]]];
  H = Table[Mtab[r, s], {r, 0, n}, {s, 0, n}];
  J = N[Inverse[H]];
  For[i = 0, i <= n, i = i + 1, v[i] = bb[n + 1 + i]];
  w = Table[v[i], {i, 0, n}];
  z = J.w;
  p = 0;
  For[i = 0, i <= n, i = i + 1, p = p + z[[i + 1]] t^i];
  q = t^(n + 1) - p;
  roots = Solve[q == 0, t];
  For[i = 1, i <= n + 1, i = i + 1, x[i - 1] = roots[[i, 1, 2]]];
  For[i = 0, i <= n, i = i + 1,
    For[j = 0, j <= n, j = j + 1, TTab[i, j] = x[j]^i]];
  V = Table[TTab[rr, ss], {rr, 0, n}, {ss, 0, n}];
  For[i = 0, i <= n, i = i + 1, vvv[i] = bb[i]];
  w = Table[vvv[i], {i, 0, n}];
  weights = N[Inverse[V].w];)

The Gaussian nodes are x[0], …, x[n] and the weights are weights[[1]], …, weights[[n+1]].
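The moment-matrix construction above can be mirrored in a few lines of Python for the n = 1 case (a sketch for illustration only; the name gaussian_n1 is ours, and for n = 1 the 2×2 Hankel system and the quadratic for the nodes are solved in closed form, so no general matrix inversion or root finder is needed):

```python
import math

def gaussian_n1(a, b):
    """Two-point Gaussian rule on [a, b] built from moments,
    mirroring the Mathematica routine Gaussian[1, a, b]."""
    # moments bb[i] = integral of t^i over [a, b], i = 0..3
    bb = [(b**(i + 1) - a**(i + 1)) / (i + 1) for i in range(4)]
    # solve the 2x2 Hankel system H z = (bb[2], bb[3]) by Cramer's rule,
    # where H = [[bb0, bb1], [bb1, bb2]]
    det = bb[0] * bb[2] - bb[1] * bb[1]
    z0 = (bb[2] * bb[2] - bb[1] * bb[3]) / det
    z1 = (bb[0] * bb[3] - bb[1] * bb[2]) / det
    # nodes are the roots of q(t) = t^2 - z1*t - z0
    disc = math.sqrt(z1 * z1 + 4 * z0)
    x0, x1 = (z1 - disc) / 2, (z1 + disc) / 2
    # weights solve the Vandermonde system:
    #   w0 + w1 = bb[0],  w0*x0 + w1*x1 = bb[1]
    w1 = (bb[1] - bb[0] * x0) / (x1 - x0)
    w0 = bb[0] - w1
    return [x0, x1], [w0, w1]
```

For example, gaussian_n1(-1, 1) returns the classical two-point Gauss–Legendre rule: nodes ±1/√3 and weights 1, 1.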
We now use the 1-dimensional rule separately in the x-direction and the y-direction to
compute the following 2-dimensional iterated Gaussian quadrature rule.
(* Iterated Gaussian quadrature estimate of degree 2n+1 for the integral of a
   predefined function f[x, y] over the rectangle [a,b] x [c,d];
   exact if f is a polynomial whose degrees in x and y are each <= 2n+1;
   the value is stored in II *)

G2dim[n_, a_, b_, c_, d_] := (
  Gaussian[n, a, b];
  For[i = 0, i <= n, i++,
    xx[i] = x[i];
    xweight[i] = weights[[i + 1]]];
  Gaussian[n, c, d];
  For[i = 0, i <= n, i++,
    yy[i] = x[i];
    yweight[i] = weights[[i + 1]]];
  II = 0;
  For[j = 0, j <= n, j++,
    For[i = 0, i <= n, i++,
      II = II + xweight[j] yweight[i] f[xx[j], yy[i]]]];)
The variable II holds the estimate of ∫[a,b] ∫[c,d] f(x, y) dy dx based on 2-dimensional iterated
Gaussian quadrature. In the next section we illustrate this method in several examples.
Section 4B:
Experiments with iterated 2-dimensional Gaussian
Quadrature for rectangular regions
In these experiments we are trying to find the smallest n which gives a Gaussian relative
error less than 1% in the 2-dimensional Gaussian Quadrature approximation. The error is
based on a comparison with the Mathematica approximation, which we consider to be the
"correct" value.
Throughout these experiments, we use 100-digit precision in order to get accurate results
for the matrix inversion in the Gaussian code (Section 4A). Final results are only shown
to around 6 significant digits (Mathematica's default precision).
Example 1:
f(x, y) = 3x^2 y^2 + 4x^3 y - 2x y^2 + 3x^2 + 2y^2 - 6x y + 3x + 2y + 1,
a = 1, b = 3, c = 2, d = 5.
The graph for this function is:
[surface plot over [1, 3] × [2, 5] omitted]
We see that n = 1 satisfies our objective:

n = 1:
  Mathematica:              1608.
  Iterated Gaussian:        1608.
  Gaussian Relative Error:  1.41402 × 10^-16
The maximum degree in each variable separately for the above function is ≤ 3, so n = 1
should give exact results. Therefore, the Gaussian relative error should be 0, but because
of roundoff error it came out as 1.41402 × 10^-16.
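This can be cross-checked directly. The Python sketch below (illustrative only, using the standard two-point Gauss–Legendre nodes rather than the Mathematica routines above) applies the n = 1 tensor rule to this f; since the degree in each variable is at most 3, the rule reproduces the value 1608 up to roundoff:

```python
import math

def gauss2(a, b):
    # two-point Gauss-Legendre nodes/weights on [a, b]
    m, h = (a + b) / 2, (b - a) / 2
    return [m - h / math.sqrt(3), m + h / math.sqrt(3)], [h, h]

def f(x, y):
    # Example 1 integrand (degree <= 3 in each of x and y)
    return (3*x**2*y**2 + 4*x**3*y - 2*x*y**2 + 3*x**2
            + 2*y**2 - 6*x*y + 3*x + 2*y + 1)

xs, ws = gauss2(1, 3)
ys, vs = gauss2(2, 5)
Q = sum(w * v * f(x, y) for x, w in zip(xs, ws) for y, v in zip(ys, vs))
print(Q)   # 1608 up to roundoff
```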
Example 2:
f(x, y) = x^4 + 3x^3 y + 7y^4 - 6x^2 y^2 + 2x - 4y + 6,
a = 1, b = 3, c = 2, d = 5.
The graph for this function is:
[surface plot over [1, 3] × [2, 5] omitted]
We were able to get a relative error less than 0.01 on our first attempt (with n = 1) as you
can see in the table. Since the maximum degree in each variable separately is 4, n = 2
should give exact results, which the table shows (to within machine roundoff error).
Mathematica: 7383.6

         Iterated Gaussian   Gaussian Relative Error
n = 1    7364.16             0.00263196
n = 2    7383.60             2.46355 × 10^-16
Example 3:
f(x,y) = (x2+y2),
a = 1, b = 6, c = 4, d = 6.
The graph for this function is:
[surface plot over [1, 6] × [4, 6] omitted]
n = 1 gave us a good Gaussian relative error, but we included n = 2 for confirmation.
Mathematica: 62.2769

         Iterated Gaussian   Gaussian Relative Error
n = 1    62.2692             0.000124067
n = 2    62.2773             5.80135 × 10^-6
Example 4:
f(x,y) = (x+y)/(x-y),
a = 1, b = 5, c = 1/4, d = 1/2.
The graph for this function is:
[surface plot over [1, 5] × [1/4, 1/2] omitted]
n = 1 did not give us a Gaussian relative error less than 1%, so we used n = 2 in order to
obtain a better Gaussian relative error. Next, n = 3 confirms the approximation.
Mathematica: 1.38016

         Iterated Gaussian   Gaussian Relative Error
n = 1    1.3577              0.0162743
n = 2    1.3749              0.00381203
n = 3    1.37894             0.000878947
Example 5a:
f(x, y) = Sin(x^2 y + x y^2),
a = 0, b = 3, c = 1, d = 3.
The graph for this function is:
[surface plot over [0, 3] × [1, 3] omitted]
The method seemed to give reasonable results up to n = 2, but it broke down at n = 3,
where the relative error suddenly increased. Moreover, the method had to reach n = 6 in
order to give a Gaussian relative error less than 1%, and as n increases further the
relative error does not stabilize. This difficulty is caused by the extremely irregular
graph: small changes in (x, y) cause big changes in f(x, y).
Mathematica: 0.645902

          Iterated Gaussian   Gaussian Relative Error
n = 1     1.52852             1.36648
n = 2     0.593642            0.0809097
n = 3     -0.333341           1.51609
n = 4     1.30596             1.02192
n = 5     0.752682            0.165318
n = 6     0.6452              0.00108682
n = 7     0.926495            0.43442
n = 8     0.387134            0.400631
n = 20    0.649014            0.00481717
Example 5b:
f(x, y) = Sin(x^2 y + x y^2),
a = 0, b = 1, c = 0, d = 1.
We use the same function as before, but we move the region closer to the origin, where
the graph is more regular.
[surface plot over [0, 1] × [0, 1] omitted]
n = 1 did not satisfy Gaussian relative error less than 1%. However, n = 2 and n = 3
showed small errors.
Mathematica: 0.28955

         Iterated Gaussian   Gaussian Relative Error
n = 1    0.295064            0.0190411
n = 2    0.289447            0.000424248
n = 3    0.289544            0.0000218553
In conclusion, we can compare the tables for Examples 5a and 5b. For the intervals
a = 0, b = 3, c = 1, d = 3, where the graph shows more variation, the results started to
break down; it is difficult to integrate f(x, y) over this region. On the other hand, with
the intervals a = 0, b = 1, c = 0, d = 1, the graph is smoother and the method quickly
gave us a Gaussian relative error less than 1%.