What is Multigrid?
Hans Petter Langtangen and Aslak Tveito
Outline
It works!
Two-point boundary value problem.
Properties
Relaxation
Two-grid method
Multigrid
Relaxation revisited; The heat connection
More examples: it still works!
CPU-measurements of some numerical PDE-applications (version 1.0)
Are Magnus Bruaset, Xing Cai, Hans Petter Langtangen, Klas Samuelson, Aslak Tveito, Gerhard Zumbusch
Identifier   Problem description
PE2          2D Poisson equation
PE3          3D Poisson equation
EB2          2D elliptic problem with variable coefficients
EB3          3D elliptic problem with variable coefficients
DP2          2D pressure equation with discontinuous coefficients
HE2          2D heat conduction equation
WA2          2D nonlinear water wave equation
WA3          3D nonlinear water wave equation
AD3          3D linear advection-diffusion equation
RE1          1D Richards' equation

Table 1: Identifiers for the test problems.
Solution method              Identifier   Diffpack name                 Refs
                                          (basic method; matrix type)
Gauss elim.; banded matrix   BG           GaussElim; MatBand            [15]
Gauss elim.; sparse matrix   SG           GaussElim; MatSparse          [12], [14]
Jacobi iterations            J            Jacobi; MatSparse             [18], [25], [26]
Gauss-Seidel iterations      GS           SOR; MatSparse                [18], [25], [26]
Conjugate Gradient           CG           ConjGrad; MatSparse           [13], [19]
PCG + RILU prec.             RPCG         ConjGrad; MatSparse           [3], [16]
PCG + Fast Fourier prec.     FPCG         ConjGrad; MatSparse           [24]
Nested Multi-grid cycles     NMG          DDSolver; MatSparse           [17]

Table 2: Identifiers for all the solution methods included in the report and Diffpack names used in connection with MenuSystem.
Method    9^3      17^3     33^3       65^3
BG        1.10     72.71    8999.13    -
J         2.40     56.66    2168.64    -
GS        1.53     31.94    1121.55    -
CG        0.75     7.62     94.81      2039.58
RPCG      0.77     6.63     64.16      1307.90
FPCG      0.73     6.20     55.69      1170.64
NMG       0.87     5.81     48.73      509.41

Table 4: Total CPU-time (in seconds) measured for the EB3 problem run on sgi2, on grids with 9, 17, 33, and 65 points in each space direction. The stopping criterion Sc1 is used by the iterative methods.
          9^3            17^3           33^3             65^3
Method    CPU    # it.   CPU    # it.   CPU      # it.   CPU     # it.
BG        0.94   -       71.33  -       8982.24  -       -       -
J         1.77   230     51.68  936     2113.73  3753    -       -
GS        0.91   117     26.65  469     1067.04  1878    -       -
CG        0.14   28      2.43   68      52.27    154     937.56  340
RPCG      0.15   8       1.47   12      21.45    18      228.15  31
FPCG      0.12   11      1.07   13      12.34    14      90.04   15
NMG       0.30   5       0.97   5       7.66     5       59.41   5

Table 5: Solution of the linear system in the EB3 problem run on sgi2; CPU-time (in seconds) and number of iterations. The stopping criterion Sc1 is used by the iterative methods. The result of the NMG method is obtained by using the Sc3 stopping criterion.
Two-point boundary value problem:

$-u''(x) = f(x), \quad 0 < x < 1, \qquad u(0) = u(1) = 0.$

Numerical approximation: we seek $v_j \approx u(x_j)$ on the grid $x_j = jh$, $j = 0, 1, \dots, n+1$, with mesh size $h = 1/(n+1)$ and $v_0 = v_{n+1} = 0$.

Finite difference method:

$-\frac{v_{j-1} - 2v_j + v_{j+1}}{h^2} = f(x_j), \quad j = 1, \dots, n.$
This system can be written on the form

$Av = b,$

where $v = (v_1, \dots, v_n)^T$ and $b = (f(x_1), \dots, f(x_n))^T$. Here

$A = \frac{1}{h^2}\begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 2 \end{pmatrix} = \frac{1}{h^2}\,\mathrm{tridiag}(-1, 2, -1).$

The properties of $A$ are well known; the eigenvalues are

$\lambda_k = \frac{4}{h^2}\sin^2\!\left(\frac{k\pi h}{2}\right),$

and the associated eigenvectors $v^k$ have components $v^k_j = \sin(k\pi x_j)$ for $j = 1, \dots, n$. That is, we have the eigenvalue/eigenvector relation

$A v^k = \lambda_k v^k \quad \text{for } k = 1, \dots, n.$

Defining the inner product $(u, w) = h\sum_{j=1}^n u_j w_j$, we see that the eigenvectors are orthogonal:

$(v^k, v^l) = 0 \quad \text{for } k \neq l.$
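These spectral facts are easy to check numerically. The following small NumPy sketch (our own illustration, with names of our choosing) builds $A$ and verifies the eigenpair relation and the orthogonality:

```python
import numpy as np

n = 8                                   # number of interior grid points
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h             # interior points x_j = j*h

# A = h^{-2} * tridiag(-1, 2, -1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

for k in range(1, n + 1):
    vk = np.sin(k * np.pi * x)                          # components sin(k*pi*x_j)
    lam = 4.0 / h**2 * np.sin(0.5 * k * np.pi * h)**2   # lambda_k
    assert np.allclose(A @ vk, lam * vk)                # A v^k = lambda_k v^k

v1, v2 = np.sin(np.pi * x), np.sin(2 * np.pi * x)
print(h * (v1 @ v2))    # inner product (v^1, v^2): zero up to rounding
```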
Jacobi's method

From the formulation $Av = b$ we get the relation

$v_j = \frac{1}{2}\left(v_{j-1} + v_{j+1} + h^2 f(x_j)\right),$

which motivates the iteration

$v_j^{m+1} = \frac{1}{2}\left(v_{j-1}^m + v_{j+1}^m + h^2 f(x_j)\right), \quad j = 1, \dots, n. \qquad (1)$

Here $v^0$ is given. The iteration (1) is referred to as Jacobi's method.
Numerical example

$-u''(x) = f(x)$, $0 < x < 1$, $u(0) = u(1) = 0$, solved by Jacobi's method. Number of iterations needed to reach a fixed tolerance on the error:

  n     iterations   iterations/n^2
  100      23606        2.36
  200      93500        2.34
  300     209682        2.33
  400     372152        2.33
  500     580910        2.32

Conclusion: The number of iterations is about $2.3\,n^2$, i.e. $O(n^2)$.
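The $O(n^2)$ growth is easy to reproduce. Below is a minimal sketch of iteration (1); here $f = 1$ is chosen so that the discrete solution coincides with the exact one $u(x) = x(1-x)/2$, and the tolerance $10^{-6}$ is our own choice (the exact stopping criterion above is not recoverable from the source):

```python
import numpy as np

def jacobi_count(n, tol=1e-6, max_it=10**7):
    """Jacobi iteration (1) for -u'' = 1, u(0) = u(1) = 0.
    Counts sweeps until the max error drops below tol; for f = 1 the
    exact solution u(x) = x(1-x)/2 equals the discrete one."""
    h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    u = 0.5 * x * (1.0 - x)             # exact (and discrete) solution
    b = h**2 * np.ones(n)               # h^2 * f(x_j) with f = 1
    v = np.zeros(n)
    for m in range(1, max_it + 1):
        vp = np.concatenate(([0.0], v, [0.0]))   # pad with boundary zeros
        v = 0.5 * (vp[:-2] + vp[2:] + b)         # one sweep of iteration (1)
        if np.max(np.abs(v - u)) <= tol:
            return m
    return max_it

for n in (100, 200, 300):
    m = jacobi_count(n)
    print(n, m, m / n**2)    # the ratio settles around 2.3-2.4
```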
Analysis of Jacobi's method

Note that Jacobi's method

$v_j^{m+1} = \frac{1}{2}\left(v_{j-1}^m + v_{j+1}^m + h^2 f(x_j)\right), \quad j = 1, \dots, n,$

can be written on the form

$v^{m+1} = R_J v^m + c,$

where $c = \frac{h^2}{2}\, b$ and

$R_J = \frac{1}{2}\begin{pmatrix} 0 & 1 & & \\ 1 & 0 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 0 \end{pmatrix}.$

We note that

$R_J = I - \frac{h^2}{2} A,$

where $I$ is the identity matrix and $A = h^{-2}\,\mathrm{tridiag}(-1, 2, -1)$.

Let $(\lambda_k, v^k)$ be an eigenvalue/eigenvector pair for $A$, i.e. $A v^k = \lambda_k v^k$. Then

$R_J v^k = \left(I - \frac{h^2}{2}A\right)v^k = \left(1 - \frac{h^2}{2}\lambda_k\right)v^k,$

thus $R_J$ has eigenvalues

$\mu_k = 1 - \frac{h^2}{2}\lambda_k = 1 - 2\sin^2\!\left(\frac{k\pi h}{2}\right) = \cos(k\pi h), \quad k = 1, \dots, n,$

and eigenvectors $v^k$ with components $v^k_j = \sin(k\pi x_j)$.
By subtracting the exact solution $v$, satisfying $v = R_J v + c$, from the approximate solution $v^{m+1} = R_J v^m + c$, we get

$e^{m+1} = R_J e^m,$

where the error is defined by $e^m = v - v^m$. By induction on $m$, we have

$e^m = R_J^m e^0.$

Note that any vector in $\mathbb{R}^n$ can be expanded in terms of the eigenvectors $v^1, \dots, v^n$. In particular, there are scalars $c_1, \dots, c_n$ such that

$e^0 = \sum_{k=1}^n c_k v^k.$

Since the eigenvectors satisfy $R_J v^k = \mu_k v^k$, we get

$e^1 = R_J e^0 = \sum_{k=1}^n c_k \mu_k v^k.$

By using the expansion again, we get

$e^2 = \sum_{k=1}^n c_k \mu_k^2 v^k,$

and in general

$e^m = \sum_{k=1}^n c_k \mu_k^m v^k.$

Note that since $\mu_k = \cos(k\pi h)$, we have $|\mu_k| < 1$ for $k = 1, \dots, n$; thus every "component" of the error is reduced.
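This expansion can be observed directly: apply plain Jacobi sweeps to an error made of one smooth and one oscillatory eigenvector and measure the coefficients afterwards. A small sketch of our own:

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h

e = np.sin(np.pi * x) + np.sin(n * np.pi * x)   # e^0 = v^1 + v^n
for m in range(10):                             # ten Jacobi sweeps: e <- R_J e
    ep = np.concatenate(([0.0], e, [0.0]))
    e = 0.5 * (ep[:-2] + ep[2:])

coeff = lambda k: 2.0 * h * np.sum(e * np.sin(k * np.pi * x))  # c_k via orthogonality
mu = lambda k: np.cos(k * np.pi * h)
print(coeff(1), mu(1)**10)      # ~0.995: the smoothest mode is barely reduced
print(coeff(n), mu(n)**10)      # ~0.995: the most oscillatory mode as well
```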
Numerical example 2

$n = 100$. [Plot of the error components $e^m_j$, $j = 0, \dots, n+1$; values between 0 and 1.]

Numerical example 3

$n = 100$. [Plot of the error components; values between $-1$ and $1$.]

Numerical example 4

$n = 100$. [Plot of the error components; values between $-1$ and $1$.]

Numerical example 5

$n = 100$. [Plot of the error components; values between $-0.2$ and $1.2$.]
We have observed that the error contribution from each eigenvector $v^k$ is governed by the associated eigenvalue $\mu_k$.

[Plot of $\mu_k = \cos(k\pi h)$ for $k = 1, \dots, 100$.]

The plot shows the eigenvalue distribution for $n = 100$. Note that the high and low frequencies are changed very little in absolute value, but when $\mu_k$ is close to zero, i.e. for $k$ near $n/2$, we get significant reduction of the error in each iteration.
Relaxation

Aim: Change Jacobi's method such that all high frequencies are reduced in each iteration.

Since the exact solution satisfies $v = R_J v + c$, we get

$v = (1-\omega)v + \omega(R_J v + c)$

for any $0 < \omega \le 1$. By defining the matrix

$R_\omega = (1-\omega)I + \omega R_J,$

we have

$v = R_\omega v + \omega c.$

This motivates the relaxation scheme

$v^{m+1} = R_\omega v^m + \omega c.$

Since $v = R_\omega v + \omega c$, we get the error recurrence

$e^{m+1} = R_\omega e^m,$

thus, as above, we have

$e^m = R_\omega^m e^0.$
Eigenvalues / eigenvectors for $R_\omega$

Recall that $R_\omega = (1-\omega)I + \omega R_J$, where $R_J v^k = \cos(k\pi h)\, v^k$. Since

$R_\omega v^k = (1-\omega)v^k + \omega R_J v^k = \bigl(1 - \omega + \omega\cos(k\pi h)\bigr)v^k,$

$R_\omega$ has the eigenvectors $v^k$ and eigenvalues

$\mu_k(\omega) = 1 - \omega + \omega\cos(k\pi h) = 1 - 2\omega\sin^2\!\left(\frac{k\pi h}{2}\right), \quad k = 1, \dots, n.$

The error expansion

Since $e^m = R_\omega^m e^0$ and $e^0 = \sum_k c_k v^k$, we get, as above,

$e^m = \sum_{k=1}^n c_k\, \mu_k(\omega)^m\, v^k.$

Thus, the reduction of the $k$-th component is governed by $\mu_k(\omega)$.
The eigenvalue distribution

[Plot of $\mu_k(\omega)$ for $k = 1, \dots, 100$; values between $-1$ and $1$.]

It can be shown that, for $\omega = 2/3$,

$|\mu_k(\omega)| \le \frac{1}{3} \quad \text{for } \frac{n}{2} \le k \le n.$

Thus, for $\omega = 2/3$, all high frequencies are reduced by a factor $1/3$ in each iteration.
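This bound is easy to check numerically. The sketch below (our own) evaluates $\mu_k(\omega)$ over the upper half of the spectrum for plain and relaxed Jacobi:

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
k = np.arange(1, n + 1)

def mu(omega):
    return 1.0 - 2.0 * omega * np.sin(0.5 * k * np.pi * h)**2

high = k * h >= 0.5                     # "high" frequencies: k*pi*h >= pi/2
print(np.abs(mu(1.0)[high]).max())      # plain Jacobi: close to 1
print(np.abs(mu(2.0/3.0)[high]).max())  # omega = 2/3: at most 1/3
```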
Numerical example 6

$n = 100$, relaxed Jacobi. [Plot of the error components; values between 0 and 1.]

Numerical example 7

$n = 100$, relaxed Jacobi. [Plot of the error components; values between $-1$ and $1$.]

Numerical example 8

$n = 100$, relaxed Jacobi. [Plot of the error components; values between $-1$ and $1$.]

Numerical example 9

$n = 100$, relaxed Jacobi. [Plot of the error components; values between $-0.2$ and $1.2$.]
High frequencies

For $\omega = 2/3$, all errors spanned by eigenvectors of high frequencies, $n/2 \le k \le n$, are reduced by a factor $1/3$ in each iteration. For instance, after $m$ iterations such error components are reduced by the factor $3^{-m}$; with $m = 10$ this is about $1.7\cdot 10^{-5}$.

Observe that the reduction does not depend on $h$.
Low frequencies

Errors associated with the low frequencies are very persistent. Consider the case of $k = 1$. Then

$\mu_1(\omega) = 1 - 2\omega\sin^2\!\left(\frac{\pi h}{2}\right),$

and thus, since $\sin(y) \approx y$ for small $y$,

$\mu_1(\omega) \approx 1 - \frac{\omega\pi^2 h^2}{2},$

where $h = 1/(n+1)$.

Since $h$ is small, $\mu_1(\omega)$ is close to 1, and thus the error goes to zero slowly for any choice of $\omega$. In the case of $\omega = 2/3$, we have

$\mu_1 \approx 1 - \frac{\pi^2 h^2}{3},$

and thus, to get $\mu_1^m \le \epsilon$, we need

$m \approx \frac{3\ln(1/\epsilon)}{\pi^2 h^2} = O\!\left(n^2 \ln(1/\epsilon)\right)$

iterations.
Putting $\epsilon = 10^{-4}$, say, and using that $h = 1/(n+1)$, we get

$m \approx \frac{3\ln(10^4)}{\pi^2}\,(n+1)^2 \approx 2.8\,(n+1)^2.$

Conclusion

High frequencies: OK.
Low frequencies: not handled by relaxation.
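The persistence of the smoothest mode can be quantified in a couple of lines (our own check, with $\epsilon = 10^{-4}$ as above):

```python
import numpy as np

n, omega, eps = 100, 2.0/3.0, 1e-4
h = 1.0 / (n + 1)
mu1 = 1.0 - 2.0 * omega * np.sin(0.5 * np.pi * h)**2   # ~ 1 - omega*(pi*h)^2/2
m = int(np.ceil(np.log(eps) / np.log(mu1)))            # smallest m with mu1^m <= eps
print(mu1, m)    # mu1 ~ 0.99968, m ~ 2.9e4, i.e. about 2.8*(n+1)^2 sweeps
```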
The Solution, the Residual and the Error
(The Good, the Bad and the Ugly)

The solution is defined through the system

$Av = b,$

the error is given by

$e = v - \tilde v,$

and the residual is given by

$r = b - A\tilde v.$

The error and the residual are strongly connected:

$Ae = A(v - \tilde v) = Av - A\tilde v = b - A\tilde v = r,$

so $e$ solves

$Ae = r.$

Note that, given an approximation $\tilde v$, the residual $r = b - A\tilde v$ can be computed. Note also that if we can compute an approximation $\tilde e$ of $e$, i.e. an approximation of the solution of $Ae = r$, then, since $v = \tilde v + e$, we can define the new approximation

$\tilde v_{\text{new}} = \tilde v + \tilde e.$

But: Is it simpler to compute an approximate solution of $Ae = r$ than of $Av = b$?

Yes!

Recall that after a few relaxed Jacobi iterations the error $e$ is dominated by the low-frequency eigenvectors, and similarly $r = Ae = \sum_k c_k \lambda_k v^k$ contains only the same low frequencies. Hence both $e$ and $r$ in the equation

$Ae = r \qquad (2)$

are spanned by smooth vectors.

The multigrid idea: Solve (2) on a coarse grid.
Grid to grid operators

Suppose that we have a fine mesh

$\Omega_h = \{x_j = jh,\ j = 0, 1, \dots, n+1\}, \quad h = 1/(n+1),$

where $n$ is odd. Then a coarse grid can be defined by

$\Omega_H = \{x_j = jH,\ j = 0, 1, \dots, (n+1)/2\}, \quad H = 2h.$

(For example, with $n = 5$ the fine grid has the points $x_0, \dots, x_6$ and the coarse grid the points $x_0, \dots, x_3$.)

Let $I_h^H$ be an operator taking an element from $\Omega_h$ to $\Omega_H$: if $v$ is defined on $\Omega_h$, we can define $I_h^H v$ on $\Omega_H$. Similarly, we let $I_H^h$ be a mapping from the coarse grid $\Omega_H$ to the fine mesh $\Omega_h$.
Restriction

Given $v$ on $\Omega_h$, we define $I_h^H v$ on $\Omega_H$ by

$(I_h^H v)_j = v_{2j}, \quad j = 0, 1, \dots, (n+1)/2.$

Weighted restriction

Given $v$ on $\Omega_h$, we define $I_h^H v$ on $\Omega_H$ by

$(I_h^H v)_j = \frac{1}{4}\left(v_{2j-1} + 2v_{2j} + v_{2j+1}\right), \quad j = 1, \dots, (n-1)/2.$

Interpolation

Given $v$ on $\Omega_H$, we define $I_H^h v$ on $\Omega_h$ by

$(I_H^h v)_{2j} = v_j, \qquad (I_H^h v)_{2j+1} = \frac{1}{2}\left(v_j + v_{j+1}\right).$
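These operators translate directly into array operations. A sketch of our own, with the interior values $v_1, \dots, v_n$ ($n$ odd) stored in a NumPy array and homogeneous boundary values implied:

```python
import numpy as np

def restrict(v):
    """Plain restriction: keep every other value, (I_h^H v)_j = v_{2j}."""
    return v[1::2]                      # coarse interior values, j = 1..(n-1)/2

def restrict_weighted(v):
    """Weighted restriction: (I_h^H v)_j = (v_{2j-1} + 2 v_{2j} + v_{2j+1})/4."""
    return 0.25 * (v[0:-2:2] + 2.0 * v[1::2] + v[2::2])

def interpolate(vc):
    """Linear interpolation I_H^h: copy at coarse points, average in between."""
    n = 2 * len(vc) + 1
    vf = np.zeros(n)
    vf[1::2] = vc                           # fine point 2j gets coarse value j
    vf[2:-1:2] = 0.5 * (vc[:-1] + vc[1:])   # odd fine points: averages
    vf[0] = 0.5 * vc[0]                     # neighbours of the boundary points
    vf[-1] = 0.5 * vc[-1]                   # (boundary values are zero)
    return vf
```

For a smooth $v$, `interpolate(restrict(v))` nearly reproduces $v$; for an oscillatory $v$ it does not, which is one way to see why smoothing must precede the coarse-grid correction.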
A two-grid scheme

Given the system

$Av = b,$

we compute an approximation $\tilde v$ by performing $\nu$ relaxed Jacobi iterations, using $\omega = 2/3$. Then, the residual is given by

$r = b - A\tilde v,$

and the error, defined by $e = v - \tilde v$, thus satisfies

$Ae = r.$

Given an approximation $\tilde e$ of $e$, we have $v = \tilde v + e \approx \tilde v + \tilde e$, and a new approximation can be computed as $\tilde v + \tilde e$.

We have seen above that both $e$ and $r$ can be approximated by a linear combination of "smooth" eigenvectors. Hence, we want to solve $Ae = r$ on a coarser mesh. Let $A_H$ be the matrix associated with the coarser grid problem,

$A_H = \frac{1}{H^2}\,\mathrm{tridiag}(-1, 2, -1), \quad H = 2h.$

Define the coarse grid residual by

$r_H = I_h^H r,$

and solve the coarse grid error equation

$A_H e_H = r_H.$

Put

$\tilde e = I_H^h e_H,$

and compute the new approximation $\tilde v + \tilde e$.
Algorithm

(1) $v^0$ is a given initial approximation.
For $m = 0, 1, \dots$ until convergence do
(2) Put $\tilde v = v^m$.
(3) Compute $\nu$ steps of relaxed Jacobi iterations applied to $A\tilde v = b$, using $\omega = 2/3$.
(4) Put $\tilde v$ equal to the resulting approximation.
(5) Compute $r = b - A\tilde v$.
(6) Compute $r_H = I_h^H r$.
(7) Solve $A_H e_H = r_H$.
(8) Compute $\tilde e = I_H^h e_H$.
(9) Compute $\tilde v + \tilde e$.
(10) Put $v^{m+1} = \tilde v + \tilde e$.
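The ten steps fit in a few lines of NumPy. This sketch (our own) uses the weighted restriction and solves the coarse problem with a direct solver; the five smoothing steps and the manufactured right-hand side are illustrative choices:

```python
import numpy as np

def laplacian(n, h):
    """A = h^{-2} tridiag(-1, 2, -1) on n interior points."""
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(v, b, h, nu=5, omega=2.0/3.0):
    """One pass of the ten-step algorithm: smooth, restrict, solve, correct."""
    n = len(v)
    A = laplacian(n, h)
    for _ in range(nu):                                # (3)-(4) relaxed Jacobi
        v = v + 0.5 * omega * h**2 * (b - A @ v)
    r = b - A @ v                                      # (5) residual
    rH = 0.25 * (r[0:-2:2] + 2.0 * r[1::2] + r[2::2])  # (6) weighted restriction
    eH = np.linalg.solve(laplacian(len(rH), 2.0 * h), rH)  # (7) coarse solve
    e = np.zeros(n)                                    # (8) interpolation
    e[1::2] = eH
    e[2:-1:2] = 0.5 * (eH[:-1] + eH[1:])
    e[0], e[-1] = 0.5 * eH[0], 0.5 * eH[-1]
    return v + e                                       # (9)-(10) new approximation

n = 127
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
b = np.pi**2 * np.sin(np.pi * x)                   # f(x) = pi^2 sin(pi x), say
v_star = np.linalg.solve(laplacian(n, h), b)       # exact discrete solution
v = np.zeros(n)
for m in range(5):
    v = two_grid(v, b, h)
    print(m, np.abs(v - v_star).max())             # algebraic error per cycle
```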
Numerical experiment

$-u''(x) = f(x)$, $u(0) = u(1) = 0$, solved by the two-grid scheme with $\omega = 2/3$ in this example.

[Plot: error after 5 relaxed Jacobi iterations; a smooth profile with maximum about 0.14.]

[Plot: error after 1 two-grid iteration; maximum about $2.5\cdot 10^{-5}$.]
Generalizations

Suppose we want to solve the system

$A_\ell v_\ell = b_\ell,$

defined on a fine mesh $\Omega_\ell$ with the associated mesh parameter $h_\ell$. We assume that we have a sequence of grids

$\Omega_0, \Omega_1, \dots, \Omega_\ell,$

with corresponding mesh parameters $h_0 > h_1 > \dots > h_\ell$. We want to formulate multigrid methods for solving $A_\ell v_\ell = b_\ell$ utilizing the grids $\Omega_0, \dots, \Omega_\ell$.

In order to do that, we assume that:

1. The matrices $A_q$, $q = 0, \dots, \ell$, are non-singular.
2. The system $A_0 v = b$ defined on the coarsest grid can be solved very efficiently.
3. On every grid level $q$ there is a smoothing operator which, applied to a vector defined on $\Omega_q$, reduces all high frequencies by a factor $\delta < 1$ independent of $h_q$.
4. There exist interpolation operators $I_{q-1}^{q}$, $q = 1, \dots, \ell$.
5. There exist restriction operators $I_q^{q-1}$, $q = 1, \dots, \ell$.
The two-grid method

Procedure TwoGrid
begin
  for $m = 1$ to $M$ do
    perform $\nu$ smoothing steps on $A_\ell v = b_\ell$
    $r_{\ell-1} := I_\ell^{\ell-1}(b_\ell - A_\ell v)$
    solve $A_{\ell-1} e_{\ell-1} = r_{\ell-1}$
    $v := v + I_{\ell-1}^{\ell}\, e_{\ell-1}$
end

Multigrid V-cycle

Procedure $MGV(q, v_q, b_q)$
begin
  if $q = 0$ then
    solve $A_0 v_0 = b_0$
  else
    perform $\nu_1$ smoothing steps on $A_q v_q = b_q$
    $r_{q-1} := I_q^{q-1}(b_q - A_q v_q)$
    $e_{q-1} := MGV(q-1, 0, r_{q-1})$
    $v_q := v_q + I_{q-1}^{q}\, e_{q-1}$
    perform $\nu_2$ smoothing steps on $A_q v_q = b_q$
end

This is called a V-cycle because the grids are visited from the finest grid down to the coarsest and back up to the finest, tracing a V. Similarly, a W-cycle has the form where each coarse level is revisited, so the grid sequence traces a W.
More general multigrid procedure

Procedure $MG(q, v_q, b_q)$
begin
  if $q = 0$ then
    solve $A_0 v_0 = b_0$
  else
    for $i = 1$ to $\nu_1$ do one smoothing step on $A_q v_q = b_q$   (pre-smoothing)
    $r_{q-1} := I_q^{q-1}(b_q - A_q v_q)$; $e_{q-1} := 0$
    for $i = 1$ to $\gamma$ do $e_{q-1} := MG(q-1, e_{q-1}, r_{q-1})$
    $v_q := v_q + I_{q-1}^{q}\, e_{q-1}$
    for $i = 1$ to $\nu_2$ do one smoothing step on $A_q v_q = b_q$   (post-smoothing)
end

$\nu_1$: number of pre-smoothing steps.
$\nu_2$: number of post-smoothing steps.
$\gamma = 1$: V-cycle. $\gamma = 2$: W-cycle.
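The recursive structure is compact in code. A sketch of the V-cycle ($\gamma = 1$) with $\nu_1 = \nu_2 = 2$, again with weighted restriction and names of our own choosing; a W-cycle would simply make the recursive call twice:

```python
import numpy as np

def laplacian(n, h):
    return (np.diag(2.0 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def v_cycle(v, b, h, nu1=2, nu2=2, omega=2.0/3.0):
    """One V-cycle for A v = b; recurses until a single-point grid."""
    n = len(v)
    A = laplacian(n, h)
    if n == 1:
        return np.linalg.solve(A, b)                 # coarsest grid: exact solve
    for _ in range(nu1):                             # pre-smoothing
        v = v + 0.5 * omega * h**2 * (b - A @ v)
    r = b - A @ v
    rH = 0.25 * (r[0:-2:2] + 2.0 * r[1::2] + r[2::2])   # weighted restriction
    eH = v_cycle(np.zeros(len(rH)), rH, 2.0 * h)     # gamma = 1: recurse once
    e = np.zeros(n)                                  # interpolate the correction
    e[1::2] = eH
    e[2:-1:2] = 0.5 * (eH[:-1] + eH[1:])
    e[0], e[-1] = 0.5 * eH[0], 0.5 * eH[-1]
    v = v + e
    for _ in range(nu2):                             # post-smoothing
        v = v + 0.5 * omega * h**2 * (b - A @ v)
    return v

n = 63                                               # n = 2^6 - 1 interior points
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
b = np.pi**2 * np.sin(np.pi * x)
v = np.zeros(n)
for m in range(5):
    v = v_cycle(v, b, h)
    print(m, np.abs(b - laplacian(n, h) @ v).max())  # residual shrinks per cycle
```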
Relaxation revisited; the heat connection

We have seen above that the error of the relaxed Jacobi scheme satisfies

$e^{m+1} = R_\omega e^m, \qquad (3)$

where

$R_\omega = (1-\omega)I + \omega R_J = I - \frac{\omega h^2}{2}A,$

i.e.

$R_\omega = \begin{pmatrix} 1-\omega & \omega/2 & & \\ \omega/2 & 1-\omega & \omega/2 & \\ & \ddots & \ddots & \ddots \\ & & \omega/2 & 1-\omega \end{pmatrix}.$

On component form:

$e_j^{m+1} = e_j^m + \frac{\omega}{2}\left(e_{j-1}^m - 2e_j^m + e_{j+1}^m\right), \quad j = 1, \dots, n,$

with $e_0^m = e_{n+1}^m = 0$.
Consider the heat equation,

$u_t = u_{xx}, \quad 0 < x < 1,\ t > 0. \qquad (4)$

By introducing a grid in $x$ and $t$ and approximating (4) by an explicit finite difference scheme, we get

$\frac{u_j^{m+1} - u_j^m}{\Delta t} = \frac{u_{j-1}^m - 2u_j^m + u_{j+1}^m}{\Delta x^2}, \qquad (5)$

where $u_j^m \approx u(j\Delta x, m\Delta t)$. Boundary and initial conditions are $u_0^m = u_{n+1}^m = 0$ and $u_j^0$ given.

The scheme (5) can be rewritten on the form

$u_j^{m+1} = u_j^m + r\left(u_{j-1}^m - 2u_j^m + u_{j+1}^m\right), \qquad (6)$

where $r = \Delta t/\Delta x^2$. It is well known that (6) is stable for

$r \le \frac{1}{2}.$

Now, we have the heat scheme

$u_j^{m+1} = u_j^m + r\left(u_{j-1}^m - 2u_j^m + u_{j+1}^m\right),$

and the error of the scheme for the relaxed Jacobi iteration,

$e_j^{m+1} = e_j^m + \frac{\omega}{2}\left(e_{j-1}^m - 2e_j^m + e_{j+1}^m\right).$

Putting $r = \omega/2$, i.e. $\Delta t = \omega h^2/2$ with $\Delta x = h$, these schemes generate exactly the same numbers. Hence, the relaxed Jacobi iteration generates a stable and consistent approximation of the heat equation $u_t = u_{xx}$. The solution of this equation is given by

$u(x,t) = \sum_k c_k\, e^{-(k\pi)^2 t}\sin(k\pi x),$

where the $c_k$ are the Fourier coefficients of the initial condition. Note that all the high frequencies are damped quickly by the $e^{-(k\pi)^2 t}$ term.
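The identification $\Delta t = \omega h^2/2$ can be verified directly: one relaxed Jacobi error sweep and one explicit heat step produce identical numbers (a check of our own):

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
omega = 2.0 / 3.0
dt = omega * h**2 / 2.0        # the identification Delta t = omega h^2 / 2

rng = np.random.default_rng(1)
e = rng.random(n)              # error of the iteration / initial heat profile
u = e.copy()

# one relaxed Jacobi error sweep: e <- e + (omega/2)(e_{j-1} - 2 e_j + e_{j+1})
ep = np.concatenate(([0.0], e, [0.0]))
e = e + 0.5 * omega * (ep[:-2] - 2.0 * ep[1:-1] + ep[2:])

# one explicit heat step: u <- u + (dt/h^2)(u_{j-1} - 2 u_j + u_{j+1})
up = np.concatenate(([0.0], u, [0.0]))
u = u + dt / h**2 * (up[:-2] - 2.0 * up[1:-1] + up[2:])

print(np.max(np.abs(e - u)))   # 0.0: the two schemes generate the same numbers
```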