EXERCISES-Misc

Chapter 11, Problem 6. (*** solve in class ***)
Apply Cholesky decomposition to the symmetric matrix
6 15 55  a0   152.6 
15 55 225  a    585.6 


 1  



55 225 979 a 2  2488.8
In addition to solving for the Cholesky decomposition, employ it to solve for the a’s.
The decomposition has the form [A] = [L][L]^T, where [L] is the lower triangular matrix

        [ l11                ]
[L] =   [ l21  l22           ]
        [ l31  l32  l33      ]
        [ l41  l42  l43  l44 ]

The off-diagonal elements are computed from

lki = (aki − Σ_{j=1}^{i−1} lij lkj) / lii    for i = 1, 2, ..., k − 1
Chapter 11, Solution 6.
l11 = √6 = 2.449
l21 = 15/2.449 = 6.1237
l22 = √(55 − 6.1237²) = 4.18
l31 = 55/2.449 = 22.454
l32 = (225 − 6.1237(22.454))/4.18 = 20.92
l33 = √(979 − 22.454² − 20.92²) = 6.11

where the diagonal elements follow from

lkk = √(akk − Σ_{j=1}^{k−1} lkj²)

Thus, the Cholesky decomposition is

        [ 2.449              ]
[L] =   [ 6.1237  4.18       ]
        [ 22.454  20.92  6.11]
The solution can then be generated by first using forward substitution to modify the right-hand-side vector,

[L]{D} = {B}

which can be solved for

       [ 62.3 ]
{D} =  [ 48.8 ]
       [11.37 ]

Then, we can use back substitution to determine the final solution,

[L]^T{X} = {D}

which can be solved for

       [2.48]
{X} =  [2.36]
       [1.86]
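The hand computation above can be checked numerically. The following is a sketch, not part of the original solution; it assumes NumPy is available and uses `numpy.linalg.cholesky`, which returns the lower-triangular factor:

```python
import numpy as np

# Coefficient matrix and right-hand side from the problem statement
A = np.array([[6.0, 15.0, 55.0],
              [15.0, 55.0, 225.0],
              [55.0, 225.0, 979.0]])
b = np.array([152.6, 585.6, 2488.8])

# Cholesky factorization: A = L @ L.T, with L lower triangular
L = np.linalg.cholesky(A)

# Forward substitution [L]{D} = {B}, then back substitution [L]^T{X} = {D}
d = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, d)
```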
Chapter 17, Problem 5.
Use least-squares regression to fit a straight line to
x    6    7   11   15   17   21   23   29   29   37   39
y   29   21   29   14   21   15    7    7   13    0    3
Along with the slope and the intercept, compute the standard error of the estimate and the
correlation coefficient. Plot the data and the regression line. If someone made an
additional measurement of (x = 10, y = 10), would you suspect, based on a visual
assessment and the standard error, that the measurement was valid or faulty? Justify your
conclusion.
Chapter 17, Solution 5.
The model is the straight line y = a0 + a1x. The slope is

a1 = (n Σxiyi − Σxi Σyi) / (n Σxi² − (Σxi)²) = (11(2380) − 234(159)) / (11(6262) − (234)²) = −0.78055

The intercept can then be computed as

a0 = ȳ − a1 x̄ = 31.0589

The standard error of the estimate and the correlation coefficient are based on

St = Σ_{i=1}^{n} (yi − ȳ)²
Sr = Σ_{i=1}^{n} ei² = Σ_{i=1}^{n} (yi − a0 − a1xi)²
sy/x = √(Sr/(n − 2))
r² = (St − Sr)/St

The results can be summarized as

y = 31.0589 − 0.78055x    (sy/x = 4.476306; r = 0.901489)
At x = 10, the best fit equation gives 23.2543. The line and data can be plotted along with the
point (10, 10).
[Plot of the data, the regression line, and the point (10, 10); x ranges from 0 to 40, y from 0 to 35.]
The value of 10 is nearly 3 times the standard error away from the line,
23.2543 – 3(4.476306) = 9.824516
Thus, we can tentatively conclude that the value is probably erroneous. It should be noted that
the field of statistics provides related but more rigorous methods to assess whether such
points are “outliers.”
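The regression statistics above can be reproduced with a short script. This is a sketch, not part of the original solution, using only the standard library:

```python
import math

x = [6, 7, 11, 15, 17, 21, 23, 29, 29, 37, 39]
y = [29, 21, 29, 14, 21, 15, 7, 7, 13, 0, 3]
n = len(x)

sx, sy_ = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)

a1 = (n * sxy - sx * sy_) / (n * sxx - sx**2)   # slope
a0 = sy_ / n - a1 * sx / n                      # intercept

Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
St = sum((yi - sy_ / n) ** 2 for yi in y)
syx = math.sqrt(Sr / (n - 2))    # standard error of the estimate
r = math.sqrt((St - Sr) / St)    # correlation coefficient
```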
Chapter 17, Problem 8 (*** solve in class ***)
Fit the following data with (a) a saturation-growth-rate model, (b) a power equation, and (c) a
parabola. In each case, plot the data and the equation.
x   0.75     2     3     4     6     8   8.5
y    1.2  1.95     2   2.4   2.4   2.7   2.6
Chapter 17, Solution 8
(The Excel file contains the computational details.)
(a) Saturation Growth:

y = α3 x/(β3 + x)

This can be linearized by inverting it:

1/y = (β3/α3)(1/x) + 1/α3

We regress 1/y versus 1/x to give

1/y = 0.34154 + 0.36932(1/x)

Therefore, α3 = 1/a0 = 1/0.34154 = 2.9279 and β3 = a1·α3 = 0.36932(2.9279) = 1.0813, and the saturation-growth-rate model is

y = 2.9279 x/(1.0813 + x)
The model and the data can be plotted as

[Plot of the saturation-growth-rate fit and the data; x from 0 to 9, y from 0 to 3.]

(b) Power Equation:
y = α2 x^β2  →  log y = β2 log x + log α2

We regress log10(y) versus log10(x) to give

log10 y = 0.1533 + 0.3114 log10 x

Therefore, α2 = 10^0.1533 = 1.4233 and β2 = 0.3114, and the power model is

y = 1.4233 x^0.3114
The model and the data can be plotted as

[Plot of the power fit (y = 1.4233x^0.3114, R² = 0.9355) and the data; x from 0 to 9, y from 0 to 3.]
(c) Polynomial regression can be applied to develop a best-fit parabola
y = a0 + a1x + a2x²

which minimizes the error:

Sr = Σ_{i=1}^{n} ei² = Σ_{i=1}^{n} (yi − a0 − a1xi − a2xi²)²
Equations to be solved:

[ n     Σxi    Σxi²  ] [a0]   [ Σyi   ]
[ Σxi   Σxi²   Σxi³  ] [a1] = [ Σxiyi ]
[ Σxi²  Σxi³   Σxi⁴  ] [a2]   [ Σxi²yi]

[   7      32.3     201.8 ] [a0]   [ 15.3]
[  32.3   201.8    1441.5 ] [a1] = [ 78.5]
[ 201.8  1441.5   10965.4 ] [a2]   [511.9]

which can be solved for the best-fit parabola

y = 0.990728 + 0.449901x − 0.03069x²
The model and the data can be plotted as

[Plot of the parabola (y = −0.0307x² + 0.4499x + 0.9907, R² = 0.9373) and the data; x from 0 to 9, y from 0 to 3.]
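All three fits can be reproduced with NumPy's `polyfit`. This sketch is not part of the original solution; it assumes NumPy, and the variable names follow the α, β notation used above:

```python
import numpy as np

x = np.array([0.75, 2, 3, 4, 6, 8, 8.5])
y = np.array([1.2, 1.95, 2, 2.4, 2.4, 2.7, 2.6])

# (a) Saturation growth: regress 1/y versus 1/x
# slope = beta3/alpha3, intercept = 1/alpha3
b_sat, a_sat = np.polyfit(1 / x, 1 / y, 1)
alpha3 = 1 / a_sat
beta3 = b_sat * alpha3

# (b) Power model: regress log10(y) versus log10(x)
beta2, loga = np.polyfit(np.log10(x), np.log10(y), 1)
alpha2 = 10 ** loga

# (c) Parabola: second-order polynomial least squares
a2, a1, a0 = np.polyfit(x, y, 2)   # highest degree first
```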
Chapter 18, Problem 6. (*** solve in class ***)
Repeat Probs. 18.1 through 18.3 using the Lagrange polynomial.
Chapter 18, Solution 6.
The first- and second-order Lagrange polynomials are

f1(x) = (x − x1)/(x0 − x1) f(x0) + (x − x0)/(x1 − x0) f(x1)

f2(x) = [(x − x1)(x − x2)]/[(x0 − x1)(x0 − x2)] f(x0)
      + [(x − x0)(x − x2)]/[(x1 − x0)(x1 − x2)] f(x1)
      + [(x − x0)(x − x1)]/[(x2 − x0)(x2 − x1)] f(x2)
18.1: Estimate the common logarithm of 10 (log 10) using linear interpolation. The data are

x0 = 8     f(x0) = 0.90309
x1 = 9     f(x1) = 0.95424
x2 = 11    f(x2) = 1.04139
x3 = 12    f(x3) = 1.07918

18.1 (a): Interpolate between log 8 = 0.9031 and log 12 = 1.0792 (x0 = 8, x1 = 12):

f1(10) = (10 − 12)/(8 − 12) (0.9031) + (10 − 8)/(12 − 8) (1.0792) = 0.991
18.1 (b): Interpolate between log 9 = 0.95424 and log 11 = 1.04139 (x0 = 9, x1 = 11):

f1(10) = (10 − 11)/(9 − 11) (0.95424) + (10 − 9)/(11 − 9) (1.04139) = 0.9978
18.2: Fit a second-order Lagrange polynomial to estimate log 10 using the data from Prob. 18.1 at x = 8, 9, and 11:

x0 = 8     f(x0) = 0.90309
x1 = 9     f(x1) = 0.95424
x2 = 11    f(x2) = 1.04139

f2(10) = [(10 − 9)(10 − 11)]/[(8 − 9)(8 − 11)] (0.90309)
       + [(10 − 8)(10 − 11)]/[(9 − 8)(9 − 11)] (0.95424)
       + [(10 − 8)(10 − 9)]/[(11 − 8)(11 − 9)] (1.04139) = 1.0003434
18.3: Fit a third-order Lagrange polynomial to estimate log 10 using the data from Prob. 18.1:

x0 = 8     f(x0) = 0.90309
x1 = 9     f(x1) = 0.95424
x2 = 11    f(x2) = 1.04139
x3 = 12    f(x3) = 1.07918

f3(10) = [(10 − 9)(10 − 11)(10 − 12)]/[(8 − 9)(8 − 11)(8 − 12)] (0.90309)
       + [(10 − 8)(10 − 11)(10 − 12)]/[(9 − 8)(9 − 11)(9 − 12)] (0.95424)
       + [(10 − 8)(10 − 9)(10 − 12)]/[(11 − 8)(11 − 9)(11 − 12)] (1.04139)
       + [(10 − 8)(10 − 9)(10 − 11)]/[(12 − 8)(12 − 9)(12 − 11)] (1.07918) = 1.0000449
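The three estimates can be checked with a generic Lagrange evaluator. This is a sketch, not from the solution manual; `lagrange` is a helper name chosen here:

```python
import math

def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        term = fi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [8, 9, 11, 12]
fs = [math.log10(v) for v in xs]

f1 = lagrange([9, 11], [fs[1], fs[2]], 10)   # linear, 18.1(b)
f2 = lagrange(xs[:3], fs[:3], 10)            # quadratic, 18.2
f3 = lagrange(xs, fs, 10)                    # cubic, 18.3
```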
Chapter 18, Problem 8.
Employ inverse interpolation using a cubic interpolating polynomial and bisection to determine
the value of x that corresponds to f (x) = 0.23 for the following tabulated data:
x        2       3      4     5       6       7
f(x)   0.5  0.3333   0.25   0.2  0.1667  0.1429
Chapter 18, Solution 8.
The following points are used to generate a cubic interpolating polynomial
x0 = 3
x1 = 4
x2 = 5
x3 = 6
f(x0) = 0.3333
f(x1) = 0.25
f(x2) = 0.2
f(x3) = 0.1667
The polynomial can be generated in a number of ways including Newton’s Divided
Difference Interpolating Polynomials.
i   xi   f(xi)     First     Second     Third
0    3   0.3333   -0.0833   0.01665   -0.0027
1    4   0.25     -0.05     0.00835
2    5   0.2      -0.0333
3    6   0.1667
The result is:
f3(x) = 0.3333 + (x-3)(-0.0833) + (x-3)(x-4)(0.01665) + (x-3)(x-4)(x-5)(-0.0027)
If we simplify the above expression, we get:
f3(x) = 0.943 − 0.3261833x + 0.0491x² − 0.00271667x³

The roots problem can then be developed by setting this polynomial equal to the desired value of 0.23:

0 = 0.713 − 0.3261833x + 0.0491x² − 0.00271667x³
Bisection can then be used to determine the root. Using initial guesses of xl = 4 and xu = 5, the
first five iterations are
i       xl        xu        xr      f(xl)     f(xr)  f(xl)f(xr)      εa
1  4.00000   5.00000   4.50000   0.02000  -0.00811    -0.00016  11.11%
2  4.00000   4.50000   4.25000   0.02000   0.00504     0.00010   5.88%
3  4.25000   4.50000   4.37500   0.00504  -0.00174    -0.00001   2.86%
4  4.25000   4.37500   4.31250   0.00504   0.00160     0.00001   1.45%
5  4.31250   4.37500   4.34375   0.00160  -0.00009     0.00000   0.72%
If the iterations are continued, the final result is x = 4.34213.
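The inverse-interpolation root can be reproduced by bisecting the Newton-form cubic directly. This is a sketch, not part of the original solution; the third coefficient uses the unrounded divided difference −0.0027667, and a simple bisection with the bracket [4, 5] is assumed:

```python
def f3(x):
    # Newton divided-difference form of the cubic through the four points
    return (0.3333 - 0.0833*(x - 3) + 0.01665*(x - 3)*(x - 4)
            - 0.0027667*(x - 3)*(x - 4)*(x - 5))

def bisect(g, xl, xu, tol=1e-6):
    """Bisection for a root of g in [xl, xu], assuming g(xl)*g(xu) < 0."""
    while (xu - xl) / 2 > tol:
        xr = (xl + xu) / 2
        if g(xl) * g(xr) < 0:
            xu = xr     # root lies in [xl, xr]
        else:
            xl = xr     # root lies in [xr, xu]
    return (xl + xu) / 2

# Root of f3(x) = 0.23
root = bisect(lambda x: f3(x) - 0.23, 4, 5)
```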
Chapter 21, Problem 4.
Integrate the following function analytically and using the trapezoidal rule,
with n = 1, 2, 3, and 4:
∫₁² (x + 2/x)² dx
Use the analytical solution to compute true percent relative errors to evaluate the accuracy of
the trapezoidal approximations.
Chapter 21, Solution 4.
Analytical solution:

∫₁² (x + 2/x)² dx = [x³/3 + 4x − 4/x]₁² = 8.33333
Trapezoidal rule for n segments of equal width h = (b − a)/n:

I = h [f(x0) + f(x1)]/2 + h [f(x1) + f(x2)]/2 + ... + h [f(xn−1) + f(xn)]/2

which can be consolidated to

I = (b − a)/(2n) [f(x0) + 2 Σ_{i=1}^{n−1} f(xi) + f(xn)]
I h
99
9
2
t = (9 - 8.33333)/8.33333 = 8 %
For (n=1):
I = (2  1)
For (n=2):
I = (2-1)/4 [ 9+9+ 2*(1.5 + 2/1.5)2 ] = 8.513889
t = (8.5138-8.3333)/8.3333 = 2.16%
The results are summarized below:

n   Integral       εt
1   9              8%
2   8.513889       2.167%
3   8.415185       0.982%
4   8.379725       0.557%
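The table values follow from a direct implementation of the composite trapezoidal rule. This is a sketch, not part of the original solution:

```python
def f(x):
    return (x + 2 / x) ** 2

def trap(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * (f(a) + 2 * inner + f(b))

exact = 25 / 3   # the analytical value, 8.33333
results = {n: trap(f, 1, 2, n) for n in (1, 2, 3, 4)}
errors = {n: abs(I - exact) / exact * 100 for n, I in results.items()}
```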
Let's apply Richardson's extrapolation to these results and see what kind of improvement we get for the error:

n   I with O(h²)
1   9
2   8.513889
3   8.415185
4   8.379725

Combining pairs of O(h²) estimates with

I = (4/3)Im − (1/3)Il

gives the O(h⁴) estimates 8.3517 and 8.3349. Combining these with

I = (16/15)Im − (1/15)Il

gives the O(h⁶) estimate 8.3338, for which εt = 0.006%.

Chapter 13, Problem 9.
Employ the following methods to find the maximum of the function
f x    x 4  2 x 3  8 x 2  5 x
(a) Golden-section search ( xl  2 , xu  1 ,  s  1% ).
(c) Newton’s method ( x0  1 ,  s  1% ).
Chapter 13, Solution 9.
(a) First, the golden ratio can be used to create the interior points,

d = (√5 − 1)/2 (1 − (−2)) = 1.8541
x1 = −2 + 1.8541 = −0.1459
x2 = 1 − 1.8541 = −0.8541

The function can be evaluated at the interior points

f(x2) = f(−0.8541) = −0.8514
f(x1) = f(−0.1459) = 0.5650

Because f(x1) > f(x2), the maximum is in the interval defined by x2, x1, and xu, where x1 is the optimum. The error at this point can be computed as

εa = (1 − 0.61803) |(1 − (−2))/(−0.1459)| × 100% = 785.41%
The process can be repeated and all the iterations summarized as

 i       xl    f(xl)       x2    f(x2)       x1    f(x1)       xu    f(xu)       d     xopt       εa
 1  -2        -22      -0.8541   -0.851  -0.1459    0.565   1       -16.000  1.8541  -0.1459  785.41%
 2  -0.8541   -0.851   -0.1459    0.565   0.2918   -2.197   1       -16.000  1.1459  -0.1459  485.41%
 3  -0.8541   -0.851   -0.4164    0.809  -0.1459    0.565   0.2918   -2.197  0.7082  -0.4164  105.11%
 4  -0.8541   -0.851   -0.5836    0.475  -0.4164    0.809  -0.1459    0.565  0.4377  -0.4164   64.96%
 5  -0.5836    0.475   -0.4164    0.809  -0.3131    0.833  -0.1459    0.565  0.2705  -0.3131   53.40%
 6  -0.4164    0.809   -0.3131    0.833  -0.2492    0.776  -0.1459    0.565  0.1672  -0.3131   33.00%
 7  -0.4164    0.809   -0.3525    0.841  -0.3131    0.833  -0.2492    0.776  0.1033  -0.3525   18.11%
 8  -0.4164    0.809   -0.3769    0.835  -0.3525    0.841  -0.3131    0.833  0.0639  -0.3525   11.19%
 9  -0.3769    0.835   -0.3525    0.841  -0.3375    0.840  -0.3131    0.833  0.0395  -0.3525    6.92%
10  -0.3769    0.835   -0.3619    0.839  -0.3525    0.841  -0.3375    0.840  0.0244  -0.3525    4.28%
11  -0.3619    0.839   -0.3525    0.841  -0.3468    0.841  -0.3375    0.840  0.0151  -0.3468    2.69%
12  -0.3525    0.841   -0.3468    0.841  -0.3432    0.841  -0.3375    0.840  0.0093  -0.3468    1.66%
13  -0.3525    0.841   -0.3490    0.841  -0.3468    0.841  -0.3432    0.841  0.0058  -0.3468    1.03%
14  -0.3490    0.841   -0.3468    0.841  -0.3454    0.841  -0.3432    0.841  0.0036  -0.3468    0.63%
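The golden-section iterations can be reproduced programmatically. This is a simplified sketch (not from the text): it re-evaluates both interior points on every pass instead of reusing one, trading efficiency for clarity, and `golden_max` is a name chosen here:

```python
import math

def f(x):
    return -x**4 - 2*x**3 - 8*x**2 - 5*x

def golden_max(f, xl, xu, es=0.01):
    """Golden-section search for a maximum; stops when the
    approximate relative error drops below es."""
    R = (math.sqrt(5) - 1) / 2          # golden ratio, 0.61803
    while True:
        d = R * (xu - xl)
        x1, x2 = xl + d, xu - d
        if f(x1) > f(x2):
            xopt, xl = x1, x2           # maximum lies in [x2, xu]
        else:
            xopt, xu = x2, x1           # maximum lies in [xl, x1]
        ea = (1 - R) * abs((xu - xl) / xopt)
        if ea < es:
            return xopt

xopt = golden_max(f, -2, 1)
```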
(c) The first and second derivatives of the function can be evaluated as

f'(x) = −4x³ − 6x² − 16x − 5
f''(x) = −12x² − 12x − 16

which can be substituted into Eq. (13.8) to give

x_{i+1} = xi − (−4xi³ − 6xi² − 16xi − 5)/(−12xi² − 12xi − 16) = −1 − 9/(−16) = −0.4375
which has a function value of 0.787094 and an approximate error of εa = 128.571%. The second iteration gives −0.34656, which has a function value of 0.840791 and an approximate error of εa = 26.242%. The process can be repeated, with the results tabulated below:
i          x       f(x)      f'(x)     f''(x)        εa
0         -1         -2          9        -16
1   -0.4375    0.787094   1.186523   -13.0469  128.571%
2  -0.34656    0.840791   -0.00921   -13.2825   26.242%
3  -0.34725    0.840794   -8.8E-07     -13.28    0.200%
Thus, within three iterations, the result is converging on the true value of
f(x) = 0.840794 at x = –0.34725.
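The Newton iteration above can be sketched as follows (not part of the original solution; it uses the stopping criterion εs = 1% from the problem statement):

```python
def fp(x):   # first derivative f'(x)
    return -4*x**3 - 6*x**2 - 16*x - 5

def fpp(x):  # second derivative f''(x)
    return -12*x**2 - 12*x - 16

x = -1.0
for _ in range(20):
    x_new = x - fp(x) / fpp(x)      # Newton step applied to f'(x) = 0
    ea = abs((x_new - x) / x_new)   # approximate relative error
    x = x_new
    if ea < 0.01:                   # es = 1%
        break
```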