437/537 Fall 2012 Homework #1
due 4:00pm, Friday, Sep 7.
Problems are from the 2nd edition of Sauer's book unless the problem number starts with "X",
which means I made it up myself.
0.0
X0. [10 points] Using the examples shown in class on Aug 28 as models, write a Python program
to find an approximate solution to the equation cos(x) = x by the method of repeatedly replacing
x by cos(x), starting with x = 2.0. Repeat enough times that any change from one step to the
next occurs way out in the 15th or 16th digit.
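A minimal sketch of the kind of loop intended (the iteration count and the use of repr to show all the digits are my own choices, not from the in-class examples):

from math import cos

x = 2.0
for i in range(100):      # plenty of steps for the changes to move out to the last digits
    x = cos(x)
    print(i, repr(x))     # repr shows enough digits to watch the 15th/16th place settle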
0.3
X1. A floating point system similar to the IEEE system has: a sign bit, then a 2-bit exponent
biased by 1, then a 3-bit mantissa. Assume all bit-codes are used for numbers, and that the
all-zeros code is used for 0.
(a) [4 points] List all the non-negative machine numbers in this system if denormalization is
NOT used. For each, give the bit-code, and the number it represents - in binary fixed-point form
(e.g. 110.0101), and in decimal notation. You may if you wish use this sheet for your answers.
(OpenOffice spreadsheet version here in case useful.)
(b) [2 points] Repeat part (a) if denormalization IS used.
(c) [1 point] The "machine epsilon" or ε_mach, defined to be the difference between 1 and the next
larger machine number, is a very important characteristic of a floating point number system like
this. What is ε_mach for this system?
(d) [1 point] How big is the "hole at zero" for this system in part (a), i.e. the gap between the
smallest positive number and 0?
(e) How big is the hole at zero in part (b)?
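If you want to check your table for part (a) by brute force, here is a hypothetical tabulation; the value convention (1.fff)_2 * 2^(E-1) for every exponent code E, with the all-zeros code meaning 0, is my reading of the problem statement:

# part (a): sign bit 0, 2-bit exponent code E (bias 1), 3-bit mantissa code f
for E in range(4):
    for f in range(8):
        if E == 0 and f == 0:
            val = 0.0                            # the all-zeros code represents 0
        else:
            val = (1 + f/8.0) * 2.0**(E - 1)     # (1.fff)_2 * 2^(E-1)
        print("0 {:02b} {:03b} -> {}".format(E, f, val))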
2 (a),(b) [3 points each] (Conversion from base-10 to binary, expression as 52-bit-mantissa
floating point.) Extra: Also write in IEEE 754 format, both binary and hex. (You can check your
answers using Octave.)
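The problem suggests Octave for checking; in Python, one quick way is to unpack the 64-bit pattern directly (the value of x below is just a stand-in for the numbers from the exercise):

import struct

x = 9.4                                        # stand-in; use the exercise's numbers
bits, = struct.unpack('>Q', struct.pack('>d', x))
print("{:016x}".format(bits))                  # the IEEE 754 double pattern, in hex
print("{:064b}".format(bits))                  # ... and in binary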
437/537 Fall 2012 Homework #2
due 4:00pm, Friday, Sep 14.
Problems are from the 2nd edition of Sauer's book unless the problem number starts with "X",
which means I made it up myself.
0.4
X0. [5 points] Find a bound - in terms of machine epsilon - on the relative difference between the
product abc of three machine numbers a, b, and c, calculated on the one hand as a*(b*c) and on
the other as (a*b)*c.
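A possible starting point (not the whole argument), using the standard rounding model from the text, in which each machine multiply satisfies fl(x*y) = xy(1+δ) with |δ| ≤ ε_mach/2:

\[
\mathrm{fl}\bigl(a \cdot \mathrm{fl}(bc)\bigr) = abc\,(1+\delta_1)(1+\delta_2), \qquad
\mathrm{fl}\bigl(\mathrm{fl}(ab) \cdot c\bigr) = abc\,(1+\delta_3)(1+\delta_4), \qquad
|\delta_i| \le \epsilon_{\mathrm{mach}}/2 .
\]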
X1. [10 points] If we use the approximation sin x - x ~ -x^3/3! + x^5/5! - x^7/7!, for which
values of x (near 0) will the approximation be evaluated more accurately than the formula
sin x - x itself, if we are using arithmetic with machine epsilon = 2^-52? How many decimal
digits are reliable in the worst case if we use the approximation and direct evaluation in the
appropriate ranges? Show all your work!
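The analysis itself should be done by hand, but if you want a numerical sanity check alongside it (the trial values below are arbitrary):

from math import sin

def series(x):
    # the truncated Taylor polynomial from the problem statement
    return -x**3/6.0 + x**5/120.0 - x**7/5040.0

for x in (1e-2, 1e-3, 1e-4, 1e-5):
    print(x, series(x), sin(x) - x)    # compare the two evaluations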
X2.
(a) [4 points] The expression (log(1+x))/x is evaluated very inaccurately by machine arithmetic
when |x| is very small. Why?
When I originally wrote this question, I was thinking that you would provide an informal
argument rather than a formal error bound. But having now discussed it with a couple of students
who came to my office, I think it may be clearer for you if you do an actual "delta" calculation. In
computing your relative error bound, use Taylor's theorem to simplify the expressions, consider
|x| to be small of the same order as the "delta"s, and neglect anything that is higher order than
linear in the small quantities. Hint to keep things a little simpler: I would leave out the division
by x until the end of the calculation, because we already know that a division (by x) just adds
eps/2 to the relative error.
What alternative formula would you propose using for small |x|?
(b) [5 points] Following the model I gave in class, use Python and NumPy to create a plot that
shows the relative error (not its log) in the direct evaluation of the first formula for x between
10^-16 and 10^-14. Use a logarithmic scale for x. (A sketch of such a plot follows part (c).)
(c) [2 points] Comment on the size of the errors and explain, if you can, any pattern you see in
the plot.
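For (b), a minimal sketch of the kind of plot intended; here np.log1p(x)/x serves as the trusted reference value for the relative error (whether log1p is also a good answer to the last question in (a) is for you to argue):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1e-16, 1e-14, 2000)
direct = np.log(1.0 + x) / x       # the formula evaluated as written
ref = np.log1p(x) / x              # accurate reference value for small x
relerr = np.abs(direct - ref) / ref
plt.semilogx(x, relerr, '.')       # logarithmic scale for x, as requested
plt.xlabel('x')
plt.ylabel('relative error')
plt.show()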
In this, and all future problems that involve coding, include a printout of your code and of the
results obtained by running that exact code.
1.1
Exercise 2. [3 points] (IVT to bracket roots.)
Exercise 4. [2 points] (Bisection method applied to the problems of Exercise 2.) Change "1/8" to
4 times machine epsilon. You may use bisection_method.py:
# Root finding with bisection method

def sign(y):
    if y > 0: return 1
    if y < 0: return -1
    return 0

def bisection(f, a, b, tol):
    sfa = sign(f(a))    # so we won't have to call f or sign multiple times at a or b
    sfb = sign(f(b))
    if sfa == 0: return a    # a is a root
    if sfb == 0: return b    # b is a root
    if sfa == sfb:
        print("You didn't give me a bracket!")
        quit()
    while abs(a - b) > tol:
        print(a, b)    # just to see how things are proceeding
        c = (a + b)/2.    # midpoint
        if c == a or c == b:
            # this can happen if a,b are consecutive machine numbers
            # (would mean you're asking for too much accuracy)
            return c
        sfc = sign(f(c))
        if sfc == 0: return c    # hit a root exactly, by good luck
        if sfc == sfa:
            a = c
        else:
            b = c
    return (a + b)/2.

# Example of use:
from math import *

def myf(x):
    return cos(x) - x

r = bisection(myf, 0, 10, 1.0e-3)
print("Root ~", r)
437/537 Fall 2012 Homework #3
due 4:00pm, Friday, Sep 21.
Problems are from the 2nd edition of Sauer's book unless the problem number starts with "X",
which means I made it up myself.
1.2
X2. (a) [3 points] The functional iteration you did in HW1 with g(x) = cos(x) did converge to a
fixed point of g, i.e. to a root of f(x) = cos(x) - x.
Evaluate g'(r) for that case and explain how this is consistent with Theorem 1.6.
(b) [1 point] The equation 4*x*(1-x) = x has a positive solution. What is it?
(c) [2 points] If you hadn't realized you could do (b) analytically, and tried iterating the function
g(x) = 4*x*(1-x) to try to find the solution numerically, explain why Theorem 1.6 says you will
be unsuccessful.
(d) (i) [2 points] Make a cobweb diagram for the function g1(x) = 2.5*x*(1-x) starting at x = 0.3.
You may use a modification of my code cobweb.py (listed below). Since, unlike the in-class
examples, g1 is not a built-in function, you will have to set it up using the "def" construction
(see previous examples, and the sketch after part (ii)).
(ii) [2 points] Make a cobweb diagram for the function g2(x) = 4*x*(1-x) starting at x = 0.3.
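For part (d), a possible "def" setup; you would also set g = g1 (or g2) in cobweb.py, start at x = 0.3, and adjust the plotting window (lo = 0, hi = 1 is my guess at a sensible window for these functions):

def g1(x):
    return 2.5*x*(1 - x)

def g2(x):
    return 4.0*x*(1 - x)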
1.4
Exercise 2 (a) [4 points] (A few steps of Newton's method by hand.)
X1. (a) [4 points] Using the code I wrote in class on Thursday, Sep 13, we can see that Newton's
method succeeds in finding the root (0) of f(x) = tanh(x) when the initial guess x0 is ± 1.08, but
fails when |x0| ≥ 1.09. Try the function brentq from scipy.optimize (Google it),
from scipy.optimize import brentq
and see if it converges given an initial bracket of (-1.1,1.09). As always, include hard copy of
your code and its output.
(b) [2 points] If brentq does converge, how many function evaluations does it use in order to get
an absolute error of less than 10^-15? (See the sketch below for one way to count.)
(c) [2 points] How does this compare with the "cost" of Newton starting from 1.08?
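One way to do the counting in (b); the counter wrapper is my own scaffolding, not part of scipy (brentq can also report counts itself if called with full_output=True):

from math import tanh
from scipy.optimize import brentq

nevals = 0
def f(x):
    global nevals
    nevals += 1                  # count every call brentq makes
    return tanh(x)

r = brentq(f, -1.1, 1.09, xtol=1e-15)
print("root ~", r, "in", nevals, "function evaluations")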
X3. GRADS ONLY, BUT UNDERGRADS CAN DO FOR EXTRA CREDIT.
(From Atkinson.) We want to create a method for calculating the square root of any (non-negative)
number, a, that uses a fixed number of Newton iterations. We can write
a = b * 2^m
where m is an even integer and 1/4 ≤ b < 1. Then
sqrt(a) = sqrt(b) * 2^(m/2),   1/2 ≤ sqrt(b) < 1.
Thus the problem reduces to calculating sqrt(b) for 1/4 ≤ b < 1. Use the linear interpolating
formula
x0 = (2b+1)/3 for 1/4 ≤ b < 1
as an initial guess for Newton iteration to find sqrt(b).
(a) [2 points] Find the maximum initial error |e0| = |x0 - sqrt(b)|.
(b) [2 points] Show that for any allowed b, xi ≥ 1/2 for all i > 0. Hint: to see what's going on,
make a sketch of Newton's "g" for f(x) = x^2 - b. Then use "Calc I" techniques.
(c) [3 points] Obtain a bound for e_(i+1) in terms of ei. Hint: As we did in class, write Taylor's
theorem with the remainder at quadratic order, expanding at xi and evaluating at sqrt(b), i.e.
f(sqrt(b)) (which = 0) = f(xi) + (sqrt(b) - xi) f'(xi) + R2.
(d) [2 points] From your result in (c), obtain a bound for ei in terms of |e0|.
(e) [2 points] Use your result in (d) to determine how many iterations, n, suffice to be sure that
0 ≤ |en| ≤ 2^-53,
which is the limit of IEEE double precision (64-bit floating point), regardless of the value of b
(as long as it's in [1/4, 1)).
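Putting the whole recipe together, a sketch in Python (math.frexp does the range reduction; the iteration count n is exactly what part (e) is asking you to determine):

import math

def newton_sqrt(a, n):
    # sketch: range reduction plus a fixed number n of Newton steps
    if a == 0.0:
        return 0.0
    b, m = math.frexp(a)          # a = b * 2**m with 1/2 <= b < 1
    if m % 2 != 0:                # force m to be even,
        b /= 2.0                  # pushing b into [1/4, 1)
        m += 1
    x = (2.0*b + 1.0) / 3.0       # the linear interpolating initial guess
    for _ in range(n):
        x = 0.5*(x + b/x)         # Newton step for f(x) = x**2 - b
    return x * 2.0**(m // 2)

print(newton_sqrt(2.0, 4), math.sqrt(2.0))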
2.1
X2. (Cost of GE & BS.) Here are some of the CPU specs of the computer "copper" in my office
that served you this file:
$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core(TM)2 Quad CPU Q6700 @ 2.66GHz
stepping        : 11
cpu MHz         : 1600.000
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
(a) [2 points] In 1 minute on this machine, about how big a matrix could you do Gaussian
Elimination on?
(b) [1 point] In 1 hour?
(c) [2 points] How long would a back-solve take on the same machine for each of these two
matrices?
(d) [2 points] Would these matrices fit in memory (see below)? Explain.
$ cat /proc/meminfo
MemTotal:       6125124 kB
(kB stands for kilobytes. Recall that we use 64 bits = 8 bytes per number.)
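If you want to sanity-check your estimates, a back-of-the-envelope script; it assumes the ~(2/3)n^3 flop count for elimination and ~n^2 for a back-solve, and (a big assumption) a rate of one flop per cycle at the advertised 2.66 GHz; substitute whatever counts and rate we used in class:

# rough cost model; the flop rate is an assumed number, not a measured one
rate = 2.66e9                              # flops/sec if one flop per cycle
for T in (60.0, 3600.0):                   # one minute, one hour
    n = (1.5 * T * rate) ** (1.0/3.0)      # solve (2/3)*n**3 = T*rate for n
    print("%5.0f s: n ~ %.0f, back-solve ~ %.3g s, matrix needs %.2g GB"
          % (T, n, n*n/rate, 8.0*n*n/1e9))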
# cobweb.py
from numpy import *
from pylab import plot, show, xlabel, ylabel, subplot

g = cos    # arccos
lo = -pi
hi = pi
xa = linspace(lo, hi, 2000)
gxa = g(xa)
subplot(111, aspect='equal')
lw = 2.
plot(xa, gxa, linewidth=lw)
plot(xa, xa, 'k', linewidth=lw)
xlabel('x(i)')
ylabel('x(i+1)')
x = 2.0    # 0.74

def up(x):
    plot([x, x], [x, g(x)], 'r', linewidth=lw)

def across(x):
    plot([x, g(x)], [g(x), g(x)], 'g', linewidth=lw)

plot([x, x], [lo, g(x)], 'r', linewidth=lw)    # up to g

for i in range(20):
    if i > 0: up(x)
    across(x)
    x = g(x)
    print(x)

show()