Homework 1. (Due Feb. 25)

Math. 639
In this exercise, you will develop a preconditioned conjugate gradient solver
and apply it to Ax = b with A coming from the variable coefficient finite
difference application and preconditioner B being the inverse of the constant
coefficient problem (developed earlier). As in last week's exercise, the two
dimensional case has n² grid points in (0, 1)². The matrix A comes from the
CRS data files:
n=32: crsvc1
n=64: crsvc2
n=128: crsvc3
n=256: crsvc4
Note that the first line of the file gives the value of n. Implement the
preconditioned conjugate gradient algorithm based on the operator BA and the
inner product ⟨·, ·⟩ = (B⁻¹·, ·), with (·, ·) denoting the dot product. You
should implement the evaluation of A and B as functions denoted aeval(x, Ax)
and beval(w), with beval returning w := Bw (so that the implementation of the
identity preconditioner is trivial). Thus, you can run different problems by
using different aeval.m and beval.m files. You can add other simple arguments
to the argument list, e.g., the number of unknowns, but you must avoid passing
large data sets, or other data not pertinent to the CG routine, through the
arguments. For example, in MATLAB, the CRS sparsity structure should be passed
to aeval by declaring the variables as global in the main program and in aeval.
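As an illustration of the intended calling pattern, here is a rough Python/NumPy sketch (the assignment itself expects MATLAB aeval.m and beval.m files, with the sparsity data shared via global variables; here the CRS arrays are passed as arguments only to keep the sketch self-contained, and their names row_ptr, col_ind, val are hypothetical):

```python
import numpy as np

def aeval(x, row_ptr, col_ind, val):
    """Return Ax for A stored in compressed sparse row (CRS) form.

    row_ptr, col_ind, val are hypothetical names for the arrays read from
    the crsvc* data files; the exact file layout is not reproduced here.
    """
    Ax = np.zeros_like(x)
    for i in range(len(row_ptr) - 1):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            Ax[i] += val[k] * x[col_ind[k]]
    return Ax

def beval(w):
    """Identity preconditioner B = I: the trivial case w := Bw."""
    return w
```

With this split, swapping in the constant coefficient preconditioner only requires replacing beval, exactly as the assignment intends.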
You should include three tolerances in the implementation, namely ε, ε_e,
and ε_p.
The first is the targeted accuracy ε. If possible, you should iterate until
  ‖r_j‖/‖r_0‖ < ε.
Here
  ‖r_j‖² = ⟨r_j, r_j⟩ = ⟨r_j, p_j⟩.
Note that this is the numerator of α_j.
Now, ⟨r_j, p_j⟩ < −ε_e or (Ap_j, p_j) < −ε_e indicates trouble with the
operators or the implementation of the algorithm. After the implementation
has been debugged, this type of problem occurs when one of the operators is
either not symmetric and positive definite or, possibly, extremely badly
conditioned.
The last tolerance, ε_p, is used to avoid using inappropriately small values
of ⟨r_j, r_j⟩ or (Ap_j, p_j). Its goal is to detect when we have essentially
achieved the solution, i.e., if
  |⟨r_j, r_j⟩| < ε_p or |(Ap_j, p_j)| < ε_p
then we agree to say that we have arrived at the solution and shut down the
algorithm. Such a tolerance is needed because computer round-off errors
produce nonzero numbers where exact arithmetic would produce zeros.
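Putting the three tolerances together, the shutdown logic might be organized as follows. This is a minimal Python/NumPy sketch (not the required MATLAB), with aeval and beval passed as function handles and aeval taking only x, purely for self-containedness:

```python
import numpy as np

def pcg(aeval, beval, b, x0, eps=1e-6, eps_e=1e-6, eps_p=1e-10, maxit=10000):
    """Preconditioned CG sketch for Ax = b with the three tolerances.

    aeval(x) returns A*x and beval(r) returns B*r; the assignment's
    MATLAB version instead shares the CRS data through globals.
    Returns (x, iterations, shutdown reason).
    """
    x = x0.copy()
    r = b - aeval(x)
    z = beval(r)
    p = z.copy()
    num = np.dot(r, z)          # <r_j, r_j> = <r_j, p_j>, numerator of alpha_j
    num0 = num
    for j in range(maxit):
        if abs(num) < eps_p:    # essentially at the solution already
            return x, j, 'eps_p'
        Ap = aeval(p)
        den = np.dot(Ap, p)
        if num < -eps_e or den < -eps_e:
            return x, j, 'eps_e'    # operator not SPD, or implementation bug
        if abs(den) < eps_p:
            return x, j, 'eps_p'
        alpha = num / den
        x += alpha * p
        r -= alpha * Ap
        z = beval(r)
        num_new = np.dot(r, z)
        if np.sqrt(abs(num_new) / abs(num0)) < eps:
            return x, j + 1, 'eps'  # target accuracy ||r_j||/||r_0|| < eps
        beta = num_new / num
        p = z + beta * p
        num = num_new
    return x, maxit, 'maxit'
```

Note that with z = Br, the quantity (r, z) equals ⟨r, r⟩ in the B⁻¹ inner product, so no explicit application of B⁻¹ is ever needed.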
I would propose trying ε_p = 10⁻¹⁰ and ε_e = 10⁻⁶, keeping in mind that
these numbers should really depend on the scaling of A (especially when
B = I). You should be computing in double precision arithmetic (I believe
that MATLAB does this automatically).
Problem 1. Run your preconditioned conjugate gradient algorithm with A
and B as discussed above. For each value of n, iterate for the solution of
Ax = 0 with initial iterate x = (1, 1, . . . , 1)ᵗ and use ε = 10⁻⁶. Report
the errors
  ‖r_m‖/‖r_0‖ and √((Ae_m, e_m)/(Ae_0, e_0))
as a function of n. Report also m and the reason for shut down, i.e., either
ε_p or ε. Shut down for ε_e is not acceptable for this problem.
Problem 2. This problem investigates how things can go wrong. We shall use
a notoriously badly conditioned matrix A, namely the Hilbert matrix given by
  A_{i,j} = (i + j − 1)⁻¹,  i, j = 1, . . . , n.
Note that A is symmetric and positive definite since
  A_{i,j} = (x^{i−1}, x^{j−1})_{L²(0,1)} = ∫₀¹ x^{i+j−2} dx,
with (·, ·)_{L²(0,1)} denoting the inner product in L²(0, 1). Use B = I as
the preconditioner. Try to solve our standard problem Ax = 0 (using your
conjugate gradient algorithm) with our standard initial iterate. Try to
break the algorithm by running n = 4, 5, 6, . . . until the algorithm breaks.
Report the number of iterations and final errors, as well as the reason for
shut down. In this case, I would not be surprised if the algorithm shuts
down because of the ε_e condition.
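For a quick sanity check of the matrix itself, a small Python/NumPy sketch (the helper name hilbert is my own) that builds A and confirms it is symmetric positive definite yet terribly conditioned even for modest n:

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix A[i, j] = 1/(i + j - 1), with 1-based i, j."""
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1.0)

A = hilbert(6)
assert np.allclose(A, A.T)        # symmetric
eigs = np.linalg.eigvalsh(A)
print(eigs.min() > 0.0)           # positive definite, but ...
print(eigs.max() / eigs.min())    # ... the condition number is already huge
```

The smallest eigenvalue shrinks so fast with n that in double precision the computed ⟨r_j, p_j⟩ or (Ap_j, p_j) can go negative, which is exactly the ε_e failure mode described above.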
Problem 3. This involves implementing the CG eigenvalue approach. You
can provide this as an additional option in the code created above. In this
case, you need to process the numerator and denominator data as you go to
create the tridiagonal matrix (defined in class). Compute and report the
largest and smallest eigenvalues of the tridiagonal matrix. Use your power
method iteration to obtain some idea of how good a job the CG method
is doing.
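One common way to assemble that tridiagonal matrix from the CG numerator/denominator data is the standard CG–Lanczos relation; the Python sketch below assumes α_j and β_j are the usual CG step and direction-update coefficients, and uses a dense eigenvalue solve in place of your power method purely for illustration (check the entry formulas against the definition given in class before relying on them):

```python
import numpy as np

def cg_tridiagonal(alphas, betas):
    """Eigenvalues of the Lanczos tridiagonal built from CG coefficients.

    One standard form of the entries (verify against the class notes):
      T[0, 0]   = 1/alpha_0
      T[j, j]   = 1/alpha_j + beta_{j-1}/alpha_{j-1}
      T[j, j-1] = T[j-1, j] = sqrt(beta_{j-1})/alpha_{j-1}
    Its extreme eigenvalues approximate those of the (preconditioned)
    operator, which is how CG reveals the conditioning of the problem.
    """
    m = len(alphas)
    T = np.zeros((m, m))
    T[0, 0] = 1.0 / alphas[0]
    for j in range(1, m):
        T[j, j] = 1.0 / alphas[j] + betas[j - 1] / alphas[j - 1]
        off = np.sqrt(betas[j - 1]) / alphas[j - 1]
        T[j, j - 1] = T[j - 1, j] = off
    return np.linalg.eigvalsh(T)  # sorted: smallest ... largest
```

Accumulating the α_j and β_j inside the CG loop costs almost nothing, so this fits naturally as the extra option in the solver above.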