Solutions to Homework 3

Section 3.4
4d) I wrote code, but that was not necessary. For MATLAB output, see the appendix. The free cubic spline with six-digit rounding of the coefficients is
\[
p(x) = \begin{cases}
.861995 + .175638(x+1) + .065651(x+1)^3 & \text{for } x \in [-1, -.5]\\
.958020 + .224876(x+.5) + .098476(x+.5)^2 + .028281(x+.5)^3 & \text{for } x \in [-.5, 0]\\
1.09861 + .344563x + .140898x^2 - .093932x^3 & \text{for } x \in [0, .5].
\end{cases}
\]
8d) I wrote code, but that was not necessary. For MATLAB output, see the appendix. The clamped cubic spline with six-digit rounding of the coefficients is
\[
p(x) = \begin{cases}
.861995 + .155362(x+1) + .065375(x+1)^2 + .016003(x+1)^3 & \text{for } x \in [-1, -.5]\\
.958020 + .232740(x+.5) + .089379(x+.5)^2 + .015020(x+.5)^3 & \text{for } x \in [-.5, 0]\\
1.09861 + .333384x + .111910x^2 + .008758x^3 & \text{for } x \in [0, .5].
\end{cases}
\]
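Both splines can be reproduced outside of MATLAB. The following Python sketch implements the standard natural-spline construction (the tridiagonal solve in the style of Algorithm 3.4); the function name and the Python port are mine — the appendix output actually came from the MATLAB routines `cubicfree` and `cubicclamp`.

```python
def natural_cubic_spline(x, a):
    """Return (b, c, d) so that on [x[j], x[j+1]] the spline is
    a[j] + b[j]*t + c[j]*t**2 + d[j]*t**3 with t = x - x[j]."""
    n = len(x) - 1
    h = [x[j + 1] - x[j] for j in range(n)]
    alpha = [0.0] * (n + 1)
    for j in range(1, n):
        alpha[j] = 3 / h[j] * (a[j + 1] - a[j]) - 3 / h[j - 1] * (a[j] - a[j - 1])
    # Forward sweep of the tridiagonal system for the c-coefficients
    # (natural boundary conditions force c[0] = c[n] = 0).
    l = [1.0] * (n + 1); mu = [0.0] * (n + 1); z = [0.0] * (n + 1)
    for j in range(1, n):
        l[j] = 2 * (x[j + 1] - x[j - 1]) - h[j - 1] * mu[j - 1]
        mu[j] = h[j] / l[j]
        z[j] = (alpha[j] - h[j - 1] * z[j - 1]) / l[j]
    # Back-substitution, then recover b and d from the c's.
    b = [0.0] * n; c = [0.0] * (n + 1); d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (a[j + 1] - a[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return b, c, d

# Data for 4d: the four nodes and function values from the appendix.
xs = [-1.0, -0.5, 0.0, 0.5]
ys = [0.86199480, 0.95802009, 1.0986123, 1.2943767]
b, c, d = natural_cubic_spline(xs, ys)
```

The returned coefficients round to the six-digit values listed above for problem 4d.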
Section 3.5
For 2a and 2c, let (x0 , y0 ) = (0, 0) and (x1 , y1 ) = (5, 2). Let (x̂0 , ŷ0 ) and (x̂1 , ŷ1 ) denote the
left and right guidepoints, respectively. Then α0 = x̂0 − x0 , α1 = x1 − x̂1 , β0 = ŷ0 − y0 , and
β1 = y1 − ŷ1 . MATLAB output and figures are in the appendix.
2a) (x̂0 , ŷ0 ) = (1, 1) and (x̂1 , ŷ1 ) = (6, 1). So, α0 = β0 = β1 = 1 = −α1 . Therefore, (α0 + α1 ) =
0, (α1 + 2α0 ) = 1, (β0 + β1 ) = 2, and (β1 + 2β0 ) = 3. Using Equations (3.24) and (3.25) from page
162, we have
\begin{align*}
x(t) &= [2(-5) + 3(0)]t^3 + [3(5) - 3(1)]t^2 + 3(1)t + 0 = -10t^3 + 12t^2 + 3t,\\
y(t) &= [2(-2) + 3(2)]t^3 + [3(2) - 3(3)]t^2 + 3(1)t + 0 = 2t^3 - 3t^2 + 3t.
\end{align*}
2c) (x̂0 , ŷ0 ) = (1, 1) and (x̂1 , ŷ1 ) = (6, 3). So, x̂0 , x̂1 , α0 , and α1 are unchanged from problem 2a
and therefore so is x(t). Now β0 = 1 = −β1 , so (β0 + β1 ) = 0, and (β1 + 2β0 ) = 1. From Equation
(3.25) from page 162, we have
\[
y(t) = [2(-2) + 3(0)]t^3 + [3(2) - 3(1)]t^2 + 3(1)t + 0 = -4t^3 + 3t^2 + 3t.
\]
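Both parts can also be checked by expanding the Bézier form directly: expanding B(t) = (1−t)³P₀ + 3(1−t)²tG₀ + 3(1−t)t²G₁ + t³P₁ in powers of t is equivalent to Equations (3.24) and (3.25). A Python sketch (the function name is mine; the appendix used the MATLAB routine `bezierplot`):

```python
def bezier_power_coeffs(p0, g0, g1, p1):
    """Coefficients (c0, c1, c2, c3) of c0 + c1*t + c2*t^2 + c3*t^3 for one
    coordinate of a cubic Bezier with endpoints p0, p1 and guidepoints g0, g1."""
    return (p0,
            3 * (g0 - p0),
            3 * p0 - 6 * g0 + 3 * g1,
            -p0 + 3 * g0 - 3 * g1 + p1)

# 2a: endpoints (0,0), (5,2); guidepoints (1,1), (6,1).
x_2a = bezier_power_coeffs(0, 1, 6, 5)   # coefficients of x(t)
y_2a = bezier_power_coeffs(0, 1, 1, 2)   # coefficients of y(t)
# 2c: only the right guidepoint's y-coordinate changes, to 3.
y_2c = bezier_power_coeffs(0, 1, 3, 2)
```

The tuples reproduce the polynomials above: x(t) = −10t³ + 12t² + 3t in both parts, y(t) = 2t³ − 3t² + 3t in 2a, and y(t) = −4t³ + 3t² + 3t in 2c.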
Section 4.1
6b) Use the most accurate 3-point formula to fill in the missing data in the following table:

x     |   7.4    |   7.6    |   7.8    |   8.0
f(x)  | -68.3193 | -71.6982 | -75.1576 | -78.6974
f'(x) |          |          |          |
In the first three cases h = .2, and in the final case h = -.2. For the first and last data points, we must use the forward 3-point formula (where h < 0 actually makes it a backward formula), and the two middle cases use the central 3-point formula:
\begin{align*}
f'(7.4) &\approx \tfrac{1}{.4}\left[-3(-68.3193) + 4(-71.6982) - (-75.1576)\right] = -16.6933,\\
f'(7.6) &\approx \tfrac{1}{.4}\left[-75.1576 - (-68.3193)\right] = -17.0958,\\
f'(7.8) &\approx \tfrac{1}{.4}\left[-78.6974 - (-71.6982)\right] = -17.4980,\\
f'(8.0) &\approx \tfrac{1}{.4}\left[-71.6982 - 4(-75.1576) + 3(-78.6974)\right] = -17.9000.
\end{align*}
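As a quick check, the two 3-point formulas can be coded directly (Python sketch; the function names are mine):

```python
def endpoint_3pt(f0, f1, f2, h):
    """f'(x0) from f(x0), f(x0+h), f(x0+2h); h < 0 gives the backward form."""
    return (-3 * f0 + 4 * f1 - f2) / (2 * h)

def midpoint_3pt(fm, fp, h):
    """f'(x0) from f(x0-h) and f(x0+h)."""
    return (fp - fm) / (2 * h)

fx = {7.4: -68.3193, 7.6: -71.6982, 7.8: -75.1576, 8.0: -78.6974}
d74 = endpoint_3pt(fx[7.4], fx[7.6], fx[7.8], 0.2)   # forward
d76 = midpoint_3pt(fx[7.4], fx[7.8], 0.2)            # central
d78 = midpoint_3pt(fx[7.6], fx[8.0], 0.2)            # central
d80 = endpoint_3pt(fx[8.0], fx[7.8], fx[7.6], -0.2)  # backward
```

The four values round to the table entries above.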
8b) Now we know the data was generated from f(x) = \ln(x+2) - (x+1)^2, so that f'(x) = \frac{1}{x+2} - 2(x+1) and f^{(3)}(x) = \frac{2}{(x+2)^3}. Therefore, our table should actually be

x     |   7.4    |   7.6    |   7.8    |   8.0
f(x)  | -68.3193 | -71.6982 | -75.1576 | -78.6974
f'(x) | -16.6936 | -17.0958 | -17.4980 | -17.9000
Thus, for the final three values, our approximations match the true values to the digits displayed. The absolute error for f'(7.4) is .0004.
Let E(x) denote the theoretical error bound for f'(x). The theoretical bound for the forward 3-point formula is \left|f^{(3)}(\xi)\right|\frac{h^2}{3}, and for the central 3-point formula it is half of that. For x = 7.4 and x = 7.6, \xi \in [7.4, 7.8], and for x = 7.8 and x = 8.0, \xi \in [7.6, 8.0]. Since f^{(3)}(x) is decreasing on both of these intervals, its maximum is achieved at the left endpoint of the appropriate interval. Hence,
\begin{align*}
E(7.4) &\le \frac{2}{(\xi+2)^3}\cdot\frac{.04}{3} \le \frac{2}{(9.4)^3}\cdot\frac{.04}{3} = .00003211,\\
E(7.6) &\le \frac{2}{(\xi+2)^3}\cdot\frac{.04}{6} \le \frac{2}{(9.4)^3}\cdot\frac{.04}{6} = .00001605,\\
E(7.8) &\le \frac{2}{(\xi+2)^3}\cdot\frac{.04}{6} \le \frac{2}{(9.6)^3}\cdot\frac{.04}{6} = .00001507,\\
E(8.0) &\le \frac{2}{(\xi+2)^3}\cdot\frac{.04}{3} \le \frac{2}{(9.6)^3}\cdot\frac{.04}{3} = .00003014.
\end{align*}
Obviously, we meet this bound with the last three approximations. The first approximation, however, does not achieve this bound. This is almost certainly due to the precision of the initial data, as the original data is less precise than the bounds.
G28) To derive an approximation technique for f^{(3)}(x_0), we expand f(x) as a Taylor series centered at x_0 and evaluate at the points x_0 \pm h:
\begin{align*}
f(x_0 + h) &= f(x_0) + f'(x_0)h + \tfrac12 f''(x_0)h^2 + \tfrac16 f^{(3)}(x_0)h^3 + \tfrac{1}{24} f^{(4)}(x_0)h^4 + \tfrac{1}{120} f^{(5)}(x_0)h^5 + \cdots,\\
f(x_0 - h) &= f(x_0) - f'(x_0)h + \tfrac12 f''(x_0)h^2 - \tfrac16 f^{(3)}(x_0)h^3 + \tfrac{1}{24} f^{(4)}(x_0)h^4 - \tfrac{1}{120} f^{(5)}(x_0)h^5 - \cdots,
\end{align*}
so
\[
f(x_0 + h) - f(x_0 - h) = 2f'(x_0)h + \tfrac13 f^{(3)}(x_0)h^3 + \tfrac{1}{60} f^{(5)}(x_0)h^5 + \cdots. \tag{1}
\]
Then solving (1) for f^{(3)}(x_0) results in
\[
f^{(3)}(x_0) = \frac{3}{h^3}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{6}{h^2}f'(x_0) - \frac{1}{20}f^{(5)}(x_0)h^2 + \cdots. \tag{2}
\]
Substituting 2h for h in (2) and multiplying the new equation by 4 provides
\[
4f^{(3)}(x_0) = \frac{3}{2h^3}\left[f(x_0 + 2h) - f(x_0 - 2h)\right] - \frac{6}{h^2}f'(x_0) - \frac{16}{20}f^{(5)}(x_0)h^2 + \cdots. \tag{3}
\]
Subtracting (2) from (3) and dividing the resulting equation by 3 produces a centered approximation for f^{(3)}(x_0) with error of order O(h^2):
\[
f^{(3)}(x_0) = \frac{1}{2h^3}\left[f(x_0 + 2h) - 2f(x_0 + h) + 2f(x_0 - h) - f(x_0 - 2h)\right] - \frac{1}{4}f^{(5)}(x_0)h^2 + \cdots. \tag{4}
\]
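Formula (4) can be sanity-checked numerically: halving h should cut the error by roughly a factor of 4. A Python sketch, using exp at x₀ = 0 as an arbitrary test case (since every derivative of exp at 0 is 1):

```python
import math

def third_derivative(f, x0, h):
    # Centered O(h^2) approximation to f'''(x0), formula (4) above.
    return (f(x0 + 2 * h) - 2 * f(x0 + h) + 2 * f(x0 - h) - f(x0 - 2 * h)) / (2 * h ** 3)

err_h = abs(third_derivative(math.exp, 0.0, 0.1) - 1.0)
err_h2 = abs(third_derivative(math.exp, 0.0, 0.05) - 1.0)
ratio = err_h / err_h2   # should approach 4 as h -> 0, confirming O(h^2)
```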
29) Of course we want to minimize error. Finding the minimum of the error e(h) = \frac{\varepsilon}{h} + \frac{h^2}{6}M is basic calculus: e'(h) = -\frac{\varepsilon}{h^2} + \frac{h}{3}M must equal zero. Thus h^3 = \frac{3\varepsilon}{M}, or h = \sqrt[3]{3\varepsilon/M}. Since h is positive, the second derivative e''(h) = \frac{2\varepsilon}{h^3} + \frac{M}{3} is positive, and therefore e(h) has a minimum at h = \sqrt[3]{3\varepsilon/M}.
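A quick numerical sanity check (Python; the error model e(h) = ε/h + h²M/6 is as I read the exercise, and the sample values of ε and M are arbitrary illustrations, not part of the problem):

```python
eps, M = 1e-16, 1.0
e = lambda h: eps / h + h * h * M / 6

# Critical point from the calculus above; e should increase on either side.
h_opt = (3 * eps / M) ** (1 / 3)
left, mid, right = e(0.5 * h_opt), e(h_opt), e(2 * h_opt)
```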
Section 4.2
8) Let N_1(h) = \frac{1}{h}\left[f(x_0 + h) - f(x_0)\right]. Then
\[
f'(x_0) = N_1(h) - \frac{h}{2}f''(x_0) - \frac{h^2}{6}f'''(x_0) + O(h^3), \tag{5}
\]
so
\[
f'(x_0) = N_1\!\left(\frac{h}{2}\right) - \frac{h}{4}f''(x_0) - \frac{h^2}{24}f'''(x_0) + O(h^3). \tag{6}
\]
So two times equation (6) minus equation (5) gives us
\[
f'(x_0) = 2N_1\!\left(\frac{h}{2}\right) - N_1(h) + \frac{h^2}{12}f'''(x_0) + O(h^3). \tag{7}
\]
Let N_2(h) = 2N_1\!\left(\frac{h}{2}\right) - N_1(h). Then
\[
f'(x_0) = N_2\!\left(\frac{h}{2}\right) + \frac{h^2}{48}f'''(x_0) + O(h^3). \tag{8}
\]
Thus four times equation (8) minus equation (7) gives us
\[
3f'(x_0) = 4N_2\!\left(\frac{h}{2}\right) - N_2(h) + O(h^3). \tag{9}
\]
Now we solve equation (9) for f 0 (x0 ) and unravel the definition of N2 (h) in terms of the original
function f .
\begin{align*}
f'(x_0) &= \frac{4}{3}N_2\!\left(\frac{h}{2}\right) - \frac{1}{3}N_2(h) + O(h^3)\\
&= \frac{4}{3}\left[2N_1\!\left(\frac{h}{4}\right) - N_1\!\left(\frac{h}{2}\right)\right] - \frac{1}{3}\left[2N_1\!\left(\frac{h}{2}\right) - N_1(h)\right] + O(h^3)\\
&= \frac{8}{3}N_1\!\left(\frac{h}{4}\right) - \frac{6}{3}N_1\!\left(\frac{h}{2}\right) + \frac{1}{3}N_1(h) + O(h^3)\\
&= \frac{8\cdot 4}{3h}\left[f\!\left(x_0 + \frac{h}{4}\right) - f(x_0)\right] - \frac{6\cdot 2}{3h}\left[f\!\left(x_0 + \frac{h}{2}\right) - f(x_0)\right] + \frac{1}{3h}\left[f(x_0 + h) - f(x_0)\right] + O(h^3)\\
&= \frac{1}{3h}\left[32f\!\left(x_0 + \frac{h}{4}\right) - 12f\!\left(x_0 + \frac{h}{2}\right) + f(x_0 + h) - 21f(x_0)\right] + O(h^3). \tag{10}
\end{align*}
Substituting 4h for h in equation (10) and arranging the terms in an aesthetically pleasing order gives us an O(h^3) approximation of the derivative of f at x_0:
\[
f'(x_0) = \frac{1}{12h}\left[f(x_0 + 4h) - 12f(x_0 + 2h) + 32f(x_0 + h) - 21f(x_0)\right] + O(h^3).
\]
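The final formula can be verified numerically: halving h should cut the error by roughly a factor of 8. A Python sketch with exp at x₀ = 0 (test function and point chosen by me):

```python
import math

def forward_extrapolated(f, x0, h):
    # One-sided O(h^3) approximation to f'(x0) from the extrapolation above.
    return (f(x0 + 4 * h) - 12 * f(x0 + 2 * h) + 32 * f(x0 + h) - 21 * f(x0)) / (12 * h)

err_h = abs(forward_extrapolated(math.exp, 0.0, 0.02) - 1.0)
err_h2 = abs(forward_extrapolated(math.exp, 0.0, 0.01) - 1.0)
ratio = err_h / err_h2   # should approach 8 as h -> 0, confirming O(h^3)
```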
Section 4.3
2a) With the trapezoid rule, we have h = .5 and
\[
\int_{-.25}^{.25} \cos^2 x\,dx \approx \frac{.5}{2}\left[\cos^2(-.25) + \cos^2(.25)\right] = \frac{\cos^2(.25)}{2}\cdot 2\cdot\frac12 = \frac{\cos^2(.25)}{1}\cdot\frac12 \approx .4693956.
\]
6a) With Simpson's method, we have h = .25 and
\[
\int_{-.25}^{.25} \cos^2 x\,dx \approx \frac{.25}{3}\left[\cos^2(-.25) + 4\cos^2(0) + \cos^2(.25)\right] = \frac{\cos^2(.25) + 2}{6}\cdot 2\cdot\frac12 \approx .4897985.
\]
(In the calculations we used that cos is an even function. The absolute errors are 2a) .0203171 and
6a) .0000858.)
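Both rules are easy to check directly (Python sketch; the exact antiderivative is x/2 + sin(2x)/4):

```python
import math

f = lambda x: math.cos(x) ** 2
a, b = -0.25, 0.25

trap = (b - a) / 2 * (f(a) + f(b))                       # trapezoid, h = .5
simp = (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))  # Simpson, h = .25
exact = 0.25 + math.sin(0.5) / 2                         # x/2 + sin(2x)/4 on [-.25, .25]
```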
16) Here we have h = \frac{b-a}{3}, so \frac34 h = \frac14(b-a). Then our quadrature method can be written as
\[
\int_a^b f(x)\,dx \approx \frac14(b-a)\left[3f(a+h) + f(b)\right].
\]
The degree of accuracy of this quadrature method is 2. We verify that it produces exact results for polynomials of degree 0, 1, and 2 but fails to do so for a polynomial of degree 3.
Suppose f(x) = x^0 = 1. Then \int_a^b f(x)\,dx = b - a. The quadrature approximation is
\[
\frac14(b-a)\left[3(1) + 1\right] = b - a,
\]
so the quadrature is exact for constant functions. Now suppose f(x) = x^1 = x, so that \int_a^b f(x)\,dx = \frac12(b^2 - a^2) = \frac12(b-a)(b+a). The approximation is
\[
\frac14(b-a)\left[3(a+h) + b\right] = \frac14(b-a)\left[3a + (b-a) + b\right] = \frac12(b-a)(b+a),
\]
so the quadrature is exact for polynomials of degree 1.
Suppose f(x) = x^2. Then \int_a^b f(x)\,dx = \frac13(b^3 - a^3) = \frac13(b-a)(b^2 + ab + a^2), so \frac{3}{b-a}\int_a^b x^2\,dx = b^2 + ab + a^2. Applying the quadrature method (scaled the same way), we have
\begin{align*}
\frac{3}{b-a}\int_a^b x^2\,dx &\approx \frac34\left[3(a+h)^2 + b^2\right] = \frac34\left[3(a^2 + 2ah + h^2) + b^2\right]\\
&= \frac34\left[3a^2 + 2a(b-a) + \frac{(b-a)^2}{3} + b^2\right]\\
&= \frac34\left[3a^2 + 2ab - 2a^2 + b^2 + \frac13(b^2 - 2ab + a^2)\right]\\
&= \frac34(a^2 + 2ab + b^2) + \frac14(b^2 - 2ab + a^2) = b^2 + ab + a^2.
\end{align*}
So the method is exact for polynomials of degree 2 as well. Finally, suppose f(x) = x^3. Then \int_a^b f(x)\,dx = \frac14(b^4 - a^4) = \frac14(b-a)(b^3 + ab^2 + a^2b + a^3). So we need only check whether 3f(a+h) + f(b) = b^3 + ab^2 + a^2b + a^3. However,
\[
3f(a+h) + f(b) = 3(a+h)^3 + b^3 = 3(a^3 + 3a^2h + 3ah^2 + h^3) + b^3.
\]
We simply verify that the coefficient of b^3 is incorrect: b^3 appears only in the terms b^3 and 3h^3 = \frac19(b^3 - 3ab^2 + 3a^2b - a^3), so the coefficient of b^3 is \frac{10}{9} in the quadrature formula instead of 1. Therefore, this method does not exactly recover the integral for a cubic.
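The degree-of-accuracy argument can be confirmed numerically on [0, 1] (Python sketch; the interval is my arbitrary choice):

```python
def rule(f, a, b):
    # The quadrature (b-a)/4 * [3 f(a+h) + f(b)] with h = (b-a)/3.
    h = (b - a) / 3
    return (b - a) / 4 * (3 * f(a + h) + f(b))

def exact_power(k, a, b):
    # Exact integral of x^k on [a, b].
    return (b ** (k + 1) - a ** (k + 1)) / (k + 1)

# Errors for f(x) = x^k, k = 0, 1, 2, 3: the first three should vanish.
errs = [abs(rule(lambda x, k=k: x ** k, 0.0, 1.0) - exact_power(k, 0.0, 1.0))
        for k in range(4)]
```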
Section 4.4
14) f (x) = x ln x so f 0 (x) = ln x + 1 and f 00 (x) = x1 . Let ET (h), EM (h), and ES (h) denote
the ABSOLUTE
R 2error for the trapezoidal, midpoint, and Simpson’s methods, respectively, when
approximating 1 f (x)dx. In this case, b − a = 1 and f 00 (x) ≤ 1 for all x ∈ [1, 2]. Part (c) is done
prior to part (b).
00
2
(a) By Theorem 4.5, ET (h) = f 12(µ) h2 ≤ h12 . Since we want ET (h) ≤ 10−5 , we have h2 ≤ .00012
1
or h ≤ .010954 (truncating rather than rounding to ensure the bound). Since h = b−a
n = n , then
n ≥ 92 will suffice.
00
00
(c) By Theorem 4.6, EM (h) = f 6(µ) h2 = 2 f 12(µ) h2 = 2ET (h). Then EM (h) ≤ 10−5 ⇔ ET (h) ≤
.000005 or h2 ≤ .00006. So we need h ≤ .00774 and n ≥ 130.
(b) By Theorem 4.4, ES (h) =
f (4) (µ) 4
180 h .
ES (h) ≤ 10−5
Then n =
1
h
≈
1
.17320
Since f (4) (x) is decreasing on [1, 2], then f (4) (µ) ≤ 2. Thus
1
⇔ h4 ≤ 90 10−5 ⇔ h ≤ (.0009) 4 ≈ .17320.
≥ 5.7735. Therefore, n ≥ 6 will suffice.
The approximations are (a. Trapezoid) .636301186, (c. Midpoint) .636287731, and (b. Simpson)
.636297501 which are all within 10−5 with Simpson’s absolute error about half the other two absolute
errors.
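A Python sketch of the composite rules with the n values found above (the midpoint variant is omitted for brevity; function names are mine):

```python
import math

f = lambda x: x * math.log(x)
exact = 2 * math.log(2) - 0.75   # antiderivative: x^2/2 ln x - x^2/4 on [1, 2]

def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + j * h) for j in range(1, n)) + f(b) / 2)

def composite_simpson(f, a, b, n):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + j * h) for j in range(1, n, 2))
    s += 2 * sum(f(a + j * h) for j in range(2, n, 2))
    return h / 3 * s

trap = composite_trapezoid(f, 1.0, 2.0, 92)
simp = composite_simpson(f, 1.0, 2.0, 6)
```

Both results land within 10⁻⁵ of the true value, reproducing the approximations quoted above.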
22) Let T_o be the set of all times 6m seconds for m = 1, 3, \dots, 13 and S_o the sum of the respective speeds; let T_e be the set of times 6n seconds for n = 2, 4, \dots, 12 and S_e the sum of the respective speeds. Then, since \frac{h}{3} = 2, the length of the track in feet is approximately 2(124 + 4S_o + 2S_e + 123) = 9858.
26) Let g(t) = \frac{1}{\sqrt{2\pi}}e^{-t^2/2}. It is moderately helpful to recognize that this is the pdf of the normal distribution with mean 0 and standard deviation 1, so G(x) = \int_{-\infty}^x g(t)\,dt is the cumulative distribution function. We know that the normal distribution is symmetric about the mean (here 0). Additionally, loosely speaking, the probability of being within 2 standard deviations of the mean is approximately .95 for a normal distribution. Also, for the continuous random variable X, \mathbb{P}(X \le x) = G(x) = .5 + \int_0^x g(t)\,dt. So x = 2 is a reasonable initial guess, since we translate this problem to one asking: for what value x do we have \mathbb{P}(X \le x) = .95?
First I want to ensure that my function f(x) = \int_0^x g(t)\,dt is accurate to within 10^{-5}. By Theorem 4.4, the error can be bounded using f^{(4)}(x) = \frac{1}{\sqrt{2\pi}}(x - x^3)e^{-x^2/2}. By graphing the absolute value of this function (this is fine, as we don't need to be exact, just get some bound), we see that \left|f^{(4)}(x)\right| \le .35 on [0, \infty). So E_S(x) \le \frac{.35}{180}\cdot\frac{x^5}{n^4}, since b = x and a = 0. Now we know that it is very unlikely that x \ge 3, so let's use x = 3. We want E_S(x) \le 10^{-5}, so \frac{.4725}{n^4} \le 10^{-5}, or n \ge 15. So in my composite Simpson method, I use n = 16.
Finally, if F(x) = f(x) - .45, then F'(x) = f'(x) = g(x), and I run Newton's method with tolerance 10^{-5}. From above, I use an initial approximation of 2. My algorithm returns the approximation 1.64485333 which, using Simpson's method, gives an approximation to the integral of .449999997. The error of 3 \times 10^{-8} is certainly within our limits.
For the Trapezoid method, I need E_T(x) \le \frac{.35}{12}\cdot\frac{x^3}{n^2} \le 10^{-5}. Again, using x = 3, n = 281 will suffice. I obtain 1.64485832 as the approximate root of F_T(x) (the same as F above, but using the Trapezoid method). This returns a Trapezoid approximation to the integral of .450000003 with the same absolute error.
One could change the stopping criteria in Newton’s method to a function value tolerance for
this problem.
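A Python sketch of the Simpson-plus-Newton computation described above (this is my reconstruction, not the original code; all function names are mine):

```python
import math

g = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def f(x, n=16):
    # Composite Simpson approximation to the integral of g on [0, x].
    h = x / n
    s = g(0.0) + g(x)
    s += 4 * sum(g(j * h) for j in range(1, n, 2))
    s += 2 * sum(g(j * h) for j in range(2, n, 2))
    return h / 3 * s

# Newton's method on F(x) = f(x) - .45, with F'(x) = g(x), starting at 2.
x = 2.0
for _ in range(50):
    step = (f(x) - 0.45) / g(x)
    x -= step
    if abs(step) < 1e-5:
        break
```

The iteration settles near 1.64485, matching the root reported above.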
Section 4.6
2f) We want to use adaptive quadrature by hand (painfully, to understand exactly how the method works) to approximate the integral \int_1^{1.6} \frac{2x}{x^2 - 4}\,dx with tolerance 10^{-3}. We use the book's notation S(a, b):
\begin{align*}
S(1, 1.6) &= .1\left[-\frac{2(1)}{3} - 4\cdot\frac{2(1.3)}{2.31} - \frac{2(1.6)}{1.44}\right] \approx -.739105339,\\
S(1, 1.3) &= .05\left[-\frac{2(1)}{3} - 4\cdot\frac{2(1.15)}{2.6775} - \frac{2(1.3)}{2.31}\right] \approx -.261412444,\\
S(1.3, 1.6) &= .05\left[-\frac{2(1.3)}{2.31} - 4\cdot\frac{2(1.45)}{1.8975} - \frac{2(1.6)}{1.44}\right] \approx -.473053517.
\end{align*}
Now |S(1, 1.6) - (S(1, 1.3) + S(1.3, 1.6))| = .004639378 < .015 = 15(10^{-3}). Therefore, the theoretical development of adaptive quadrature says we should stop with this second level of approximation, as our estimate of the error meets our tolerance. Thus
\[
\int_1^{1.6} \frac{2x}{x^2 - 4}\,dx \approx S(1, 1.3) + S(1.3, 1.6) = -.734465960.
\]
(The actual value is -.733969175, so the absolute error is .000496785 and we achieve the desired accuracy.)
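The three Simpson values and the stopping test can be checked directly (Python sketch; function names are mine):

```python
def S(f, a, b):
    # Single-interval Simpson's rule on [a, b].
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = lambda x: 2 * x / (x * x - 4)
whole = S(f, 1.0, 1.6)
left, right = S(f, 1.0, 1.3), S(f, 1.3, 1.6)
estimate_ok = abs(whole - (left + right)) < 15e-3   # the adaptive stopping test
approx = left + right
```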
Section 4.7
4f) By a double substitution, \int_1^{1.6} \frac{2x}{x^2 - 4}\,dx = \int_{-3}^{-1.44} \frac{1}{u}\,du = .78\int_{-1}^{1} \frac{1}{.78w - 2.22}\,dw. Then for n = 5, we have
\begin{align*}
\int_1^{1.6} \frac{2x}{x^2 - 4}\,dx \approx .78\bigg[&.2369268850\left(\frac{1}{.78(.9061798459) - 2.22} + \frac{1}{.78(-.9061798459) - 2.22}\right)\\
&+ .4786286705\left(\frac{1}{.78(.5384693101) - 2.22} + \frac{1}{.78(-.5384693101) - 2.22}\right)\\
&- \frac{.5688888889}{2.22}\bigg].
\end{align*}
Evaluating this in MATLAB returns -.733969133. The actual value of the integral is -.733969175, so our absolute error is 4.2 \times 10^{-8}.
Alternatively, without the first substitution and with f(x) = \frac{2x}{x^2 - 4}, we have
\[
\int_1^{1.6} f(x)\,dx = .3\int_{-1}^{1} f(.3x + 1.3)\,dx \approx -.733968725,
\]
with absolute error 4.5 \times 10^{-7}.
(In both Gaussian quadratures above, we use 5 nodes with error less than 4.5 \times 10^{-7}. However, our adaptive quadrature used 6 nodes and has error greater than 4.9 \times 10^{-4}. Optimal nodes certainly improve our accuracy.)
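Both quadratures can be checked with the tabulated 5-point Gauss–Legendre nodes and weights (Python sketch; the node/weight values are the ones used above):

```python
import math

nodes = [0.9061798459, 0.5384693101, 0.0, -0.5384693101, -0.9061798459]
weights = [0.2369268850, 0.4786286705, 0.5688888889, 0.4786286705, 0.2369268850]

def gauss5(f):
    # 5-point Gauss-Legendre quadrature on [-1, 1].
    return sum(w * f(t) for w, t in zip(weights, nodes))

exact = math.log(0.48)   # ln|x^2 - 4| evaluated from 1 to 1.6

# With the double substitution u = x^2 - 4, then u = .78w - 2.22:
g1 = 0.78 * gauss5(lambda w: 1 / (0.78 * w - 2.22))
# Without it, via x = .3w + 1.3:
f = lambda x: 2 * x / (x * x - 4)
g2 = 0.3 * gauss5(lambda w: f(0.3 * w + 1.3))
```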
APPENDIX: MATLAB OUTPUT
3.4 4d)
>> x=[-1 -.5 0 .5];
>> a=[.86199480 .95802009 1.0986123 1.2943767];
>> cubicfree(x,a)
ans =
    0.8620    0.1756         0    0.0657
    0.9580    0.2249    0.0985    0.0283
    1.0986    0.3446    0.1409   -0.0939
    1.2944         0         0         0
>> cubicclamp(x,a,.15536240, .45186276)
ans =
    0.8620    0.1554    0.0654    0.0160
    0.9580    0.2327    0.0894    0.0150
    1.0986    0.3334    0.1119    0.0088
    1.2944         0         0         0
3.5 2a) and 2c) This function also returns the correct values for the polynomial
coefficients in matrix form. The first four terms are the coefficients for t^0 through t^3 in
x(t), while the last four terms are the coefficients for t^0 through t^3 in y(t). The plots of
the curves follow in this same order.
>> aa=[0 0;5 2];
>> bb=[1 1];
>> cc=[6 1];
>> bezierplot(aa,bb,cc)
ans =
     0     3    12   -10     0     3    -3     2
>> dd=[6,3];
>> bezierplot(aa,bb,dd)
ans =
     0     3    12   -10     0     3     3    -4
[Figures: the Bezier curves for 2a and 2c]