13: A preconditioned min-res iteration for the mixed problem.

Math 639
(updated: January 2, 2012)
The iterative algorithms considered in this section for the mixed problem,
given b ∈ R^{n+m}, find x ∈ R^{n+m} solving

(13.1)        Ax = b,

are based on the corollary of the previous lecture, namely:
Corollary 1. There are constants c_0 and c_1, depending only on α, β, ‖A‖,
and ‖B‖, satisfying

(13.2)        c_0 ‖w‖ ≤ ‖Aw‖_* ≤ c_1 ‖w‖    for all w ∈ R^{n+m}.
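For concreteness, the short numerical sketches in this section use the following small model problem. Everything in it is an illustrative assumption rather than part of the notes: the matrix A is taken in the usual saddle-point block form from the previous lectures (an SPD block A0 and a full-rank block B), M is a crude block-diagonal SPD guess, and the helper make_mixed_system (saved, say, as mixed_setup.py so later sketches can import it) is hypothetical.

import numpy as np

def make_mixed_system(n=8, m=4, seed=0):
    """Build a small model mixed (saddle-point) system A x = b together with
    a block-diagonal SPD preconditioner M.  Purely illustrative."""
    rng = np.random.default_rng(seed)
    A0 = rng.standard_normal((n, n))
    A0 = A0 @ A0.T + n * np.eye(n)              # SPD upper-left block
    B = rng.standard_normal((m, n))             # generically full row rank
    A = np.block([[A0, B.T],
                  [B, np.zeros((m, m))]])       # symmetric, indefinite
    # A crude SPD preconditioner: diag(A0) plus an identity Schur surrogate.
    M = np.block([[np.diag(np.diag(A0)), np.zeros((n, m))],
                  [np.zeros((m, n)), np.eye(m)]])
    b = rng.standard_normal(n + m)
    return A, M, b

if __name__ == "__main__":
    A, M, b = make_mixed_system()
    # (13.2)-(13.3) hold with c_0, c_1 the extreme |eigenvalues| of A and
    # c_2, c_3 the extreme eigenvalues of M.
    print("A symmetric:", np.allclose(A, A.T),
          " M SPD:", np.all(np.linalg.eigvalsh(M) > 0))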
We shall develop preconditioned iterative algorithms for (13.1). To this
end, we assume that M is a symmetric and positive definite (n+m)×(n+m)
matrix satisfying
(13.3)        c_2 ‖w‖² ≤ (Mw, w) ≤ c_3 ‖w‖²    for all w ∈ R^{n+m}.
Setting ‖w‖_M = (Mw, w)^{1/2} (for w ∈ R^{n+m}) and combining (13.2) and (13.3)
gives

        c_5 ‖w‖²_M ≤ ‖Aw‖²_* ≤ c_6 ‖w‖²_M    for all w ∈ R^{n+m}.

Here c_5 = c_3^{-1} c_0² and c_6 = c_1² c_2^{-1}. Now if we define, for w ∈ R^{n+m},

        ‖w‖_{M,*} = sup_{y ∈ R^{n+m}} (w, y) / ‖y‖_M ,

we find that

        c_3^{-1} ‖w‖²_* ≤ ‖w‖²_{M,*} ≤ c_2^{-1} ‖w‖²_* .
Combining the above inequalities gives
(13.4)        c_7 ‖w‖²_M ≤ ‖Aw‖²_{M,*} ≤ c_8 ‖w‖²_M    for all w ∈ R^{n+m}.

Here c_7 = c_3^{-2} c_0² and c_8 = c_1² c_2^{-2}.
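Spelled out, this is just the chaining of the two displayed equivalences:

        ‖Aw‖²_{M,*} ≥ c_3^{-1} ‖Aw‖²_* ≥ c_3^{-1} c_5 ‖w‖²_M = c_3^{-2} c_0² ‖w‖²_M,
        ‖Aw‖²_{M,*} ≤ c_2^{-1} ‖Aw‖²_* ≤ c_2^{-1} c_6 ‖w‖²_M = c_2^{-2} c_1² ‖w‖²_M.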
The whole point of introducing the second dual norm is that it has a
particularly simple form. In fact,
        ‖w‖²_{M,*} = sup_{y ∈ R^{n+m}} (w, y)² / ‖y‖²_M
                   = sup_{y ∈ R^{n+m}} (w, y)² / (My, y)
                   = sup_{z ∈ R^{n+m}} (w, M^{-1/2} z)² / (z, z)
                   = (M^{-1} w, w)    for all w ∈ R^{n+m}.
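A quick numerical check of this identity, as a standalone sketch with a random SPD matrix M: the supremum is attained at y = M^{-1}w, and every other y gives a smaller value.

import numpy as np

rng = np.random.default_rng(1)
k = 10
S = rng.standard_normal((k, k))
M = S @ S.T + k * np.eye(k)                    # a random SPD matrix M
w = rng.standard_normal(k)

lhs = np.linalg.solve(M, w) @ w                # (M^{-1} w, w)

# The sup is attained at y = M^{-1} w:
y = np.linalg.solve(M, w)
print(np.isclose(lhs, (w @ y) ** 2 / (y @ M @ y)))            # True

# ... and any other y gives a smaller value (Cauchy-Schwarz in the M-inner product):
vals = [(w @ z) ** 2 / (z @ M @ z) for z in rng.standard_normal((500, k))]
print(max(vals) <= lhs + 1e-12)                               # True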
Thus, since ‖M^{-1}Aw‖²_M = (Aw, M^{-1}Aw) = ‖Aw‖²_{M,*}, (13.4) can be rewritten

(13.5)        c_7 ‖w‖²_M ≤ ‖M^{-1}Aw‖²_M ≤ c_8 ‖w‖²_M    for all w ∈ R^{n+m}.
This inequality is the basis for our iterative algorithms.
We first consider a simple preconditioned linear iteration for (13.1) based on
the preconditioned normal equations:

(13.6)        M^{-1} A M^{-1} A x = M^{-1} A M^{-1} b.
We note that the matrix M^{-1}AM^{-1}A is symmetric in the (M·, ·) inner product
and is positive definite by (13.5). In fact, (13.5) implies that the condition
number K of M^{-1}AM^{-1}A is bounded by K ≤ c_8/c_7. Thus, we get a rapidly
convergent iteration by applying the preconditioned conjugate gradient method
to (13.6) in the (M·, ·) inner product. The resulting error satisfies the
standard conjugate gradient estimate,

        ‖e_i‖_M ≤ 2ρ^i ‖e_0‖_M,    ρ = (√K − 1)/(√K + 1).
Note that this algorithm can be carried out with two evaluations each of M^{-1}
and A per iteration, and the evaluation of M can be completely avoided.
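A minimal sketch of this conjugate gradient iteration for (13.6) in the (M·, ·) inner product, organized so that each pass through the loop applies A twice and M^{-1} twice while M itself is never applied (the vector h carries Mr). It assumes the hypothetical make_mixed_system helper from the setup sketch; the dense solve standing in for the action of M^{-1} is for illustration only.

import numpy as np
from mixed_setup import make_mixed_system      # hypothetical module holding the setup sketch

def cg_normal(A, M, b, x0=None, maxiter=200, tol=1e-10):
    """Conjugate gradients for the preconditioned normal equations (13.6),
    M^{-1} A M^{-1} A x = M^{-1} A M^{-1} b, in the (M., .) inner product.
    Only A and the action of M^{-1} are applied; M itself never is, because
    the vector h = M r is updated directly."""
    Minv = lambda v: np.linalg.solve(M, v)     # stand-in for a real preconditioner solve
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    u = Minv(b - A @ x)                        # M^{-1}(b - A x)
    h = A @ u                                  # h = M r
    r = Minv(h)                                # residual of (13.6)
    p = r.copy()
    rho = h @ r                                # (M r, r) = ||r||_M^2
    rho0 = rho
    for it in range(1, maxiter + 1):
        v = A @ p                              # 1st application of A
        w = Minv(v)                            # 1st application of M^{-1}
        Aw = A @ w                             # 2nd application of A;  Aw = M T p
        Tp = Minv(Aw)                          # 2nd application of M^{-1};  T p
        alpha = rho / (w @ v)                  # ||r||_M^2 / (T p, p)_M
        x = x + alpha * p
        h = h - alpha * Aw                     # update M r without applying M
        r = r - alpha * Tp
        rho_new = h @ r
        if rho_new <= tol**2 * rho0:
            return x, it
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x, maxiter

A, M, b = make_mixed_system()
x, its = cg_normal(A, M, b)
print(its, np.linalg.norm(A @ x - b))          # residual should be tiny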
We note that M^{-1}A is also a symmetric matrix in the (M·, ·) inner product whose
square equals M^{-1}AM^{-1}A. As (13.5) provides eigenvalue bounds for the square,
we find that the eigenvalues of M^{-1}A lie in the intervals

        {λ : √c_7 ≤ |λ| ≤ √c_8}.
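This can be checked numerically on the model problem: the eigenvalues of M^{-1}A are the generalized eigenvalues Av = λMv, and with the sharpest constants (c_0, c_1 the extreme |eigenvalues| of A, c_2, c_3 the extreme eigenvalues of M) one has √c_7 = c_0/c_3 and √c_8 = c_1/c_2. A sketch, again assuming the hypothetical setup helper:

import numpy as np
from scipy.linalg import eigh
from mixed_setup import make_mixed_system      # hypothetical module holding the setup sketch

A, M, b = make_mixed_system()

lam = eigh(A, M, eigvals_only=True)            # eigenvalues of M^{-1}A:  A v = lambda M v
eigA = np.linalg.eigvalsh(A)
c0, c1 = np.abs(eigA).min(), np.abs(eigA).max()    # sharpest constants in (13.2)
c2, c3 = np.linalg.eigvalsh(M).min(), np.linalg.eigvalsh(M).max()   # constants in (13.3)

# sqrt(c7) = c0/c3 and sqrt(c8) = c1/c2; every |lambda| should land in between.
print(c0 / c3 <= np.abs(lam).min(), np.abs(lam).max() <= c1 / c2)   # True True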
It follows that the min-res method (see below) can be applied to the system

        M^{-1} A x = M^{-1} b.

It would appear that this iteration may be more efficient as it only requires
one evaluation of M^{-1} and of A per iteration. This is somewhat misleading:
it can be shown that for some initial errors, the error after 2k − 1 steps of
min-res coincides with the cg-normal error after k steps.
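As a readily available stand-in for the min-res method described below, SciPy's preconditioned minres can be applied to the model problem; its preconditioner argument is the action of our M^{-1}, so each iteration applies A and M^{-1} once. This is only a sketch under the setup-sketch assumptions, not the notes' algorithm verbatim.

import numpy as np
from scipy.sparse.linalg import LinearOperator, minres
from mixed_setup import make_mixed_system      # hypothetical module holding the setup sketch

A, M, b = make_mixed_system()
N = b.size

# SciPy's preconditioner argument is the *action* applied each iteration,
# i.e. our M^{-1}; it must be symmetric positive definite, which it is.
Minv = LinearOperator((N, N), matvec=lambda v: np.linalg.solve(M, v))

its = []
x, info = minres(A, b, M=Minv, maxiter=200, callback=lambda xk: its.append(1))
print(info, len(its), np.linalg.norm(A @ x - b))    # info == 0 means converged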
The min-res algorithm produces x_j = x_0 + θ, where θ ∈ K_j is the minimizer:

        ‖x − x_j‖_M = min_{ζ ∈ K_j} ‖x − x_0 − ζ‖_M.

Here K_j is the Krylov space

        K_j = span_{i=0}^{j−1} {(M^{-1}A)^i r_0}

with r_0 = M^{-1}(b − Ax_0). This implies the following theorem.
Theorem 1. The min-res iterates for M^{-1}Ax = M^{-1}b satisfy

        ‖x − x_{2j+1}‖_M ≤ 2ρ^j ‖x − x_0‖_M,    ρ = (√K − 1)/(√K + 1),

with K ≤ c_8/c_7 as above.
Proof. The above conjugate gradient algorithm produces x̃_j = x_0 + θ, where
θ ∈ K̃_j is the minimizer:

        ‖x − x̃_j‖_M = min_{ζ ∈ K̃_j} ‖x − x_0 − ζ‖_M.

Here K̃_j is the Krylov space

        K̃_j = span_{i=0}^{j−1} {(M^{-1}AM^{-1}A)^i r̃_0}

with r̃_0 = M^{-1}AM^{-1}(b − Ax_0). The estimate follows from the conjugate
gradient estimate and the observation that

        K̃_j ⊂ K_{2j+1}.
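The inclusion rests on the observation that (M^{-1}AM^{-1}A)^i r̃_0 = (M^{-1}A)^{2i+1} r_0, so the generators of K̃_j are among those of K_{2j+1}. A quick numerical confirmation on the model problem (hypothetical helpers from the setup sketch):

import numpy as np
from mixed_setup import make_mixed_system      # hypothetical module holding the setup sketch

A, M, b = make_mixed_system()
x0 = np.zeros_like(b)
Minv = lambda v: np.linalg.solve(M, v)
T = lambda v: Minv(A @ v)                      # action of M^{-1} A

r0 = Minv(b - A @ x0)                          # r_0
rt0 = T(r0)                                    # r~_0 = M^{-1} A M^{-1}(b - A x0)

j = 3
# Generators of K_{2j+1}: (M^{-1}A)^i r_0, i = 0, ..., 2j
K = [r0]
for _ in range(2 * j):
    K.append(T(K[-1]))

# Generators of K~_j: (M^{-1}A M^{-1}A)^i r~_0, i = 0, ..., j-1
Kt = [rt0]
for _ in range(j - 1):
    Kt.append(T(T(Kt[-1])))

# Each generator of K~_j is the (2i+1)-st generator of K_{2j+1}:
print([np.allclose(Kt[i], K[2 * i + 1]) for i in range(j)])   # all True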