Lecture 16, 18.086

Multigrid methods (7.3)

• Low frequencies on the fine mesh => high frequencies on the coarse mesh (so they can be damped there).
• The simplest cycle is the v-cycle with just two meshes (grid spacings h and 2h).
• More than two meshes are possible!

The two-grid idea (Strang, Section 7.3): restrict the current residual rh = bh - Ah uh to the coarse grid. Iterate a few times on that grid to approximate the coarse-grid error by E2h. Then interpolate back to Eh on the fine grid, make the correction uh + Eh, and begin again.

v-cycle algorithm

This fine-coarse-fine loop is a two-grid cycle; we call it a v-cycle (small v). Here are the steps (remember that the error solves Ah(u - uh) = bh - Ah uh = rh):

1. Iterate on Ah u = bh to reach uh (say 3 Jacobi or Gauss-Seidel steps).
2. Restrict the residual rh = bh - Ah uh to the coarse grid: r2h = R rh.
3. Solve A2h E2h = r2h (or come close to E2h by 3 iterations starting from E = 0).
4. Interpolate E2h back to Eh = I E2h. Add Eh to uh.
5. Iterate 3 more times on Ah u = bh, starting from the improved uh + Eh.

Steps 2-3-4 give the restriction-coarse solution-interpolation sequence that is the heart of multigrid. Recall the matrices we are working with (notation for the following): ui are values on the fine grid, vi are values on the coarse grid, A = Ah is the original (fine-grid) matrix, R is the restriction matrix (fine grid => coarse grid), and I is the interpolation matrix (coarse grid => fine grid).

Technicalities of multigrid algorithms

• Interpolation matrix I, coarse grid (vi) => fine grid (ui): linear interpolation, assuming Dirichlet BC. (Figure: fine-grid points x1, ..., x7 with values ui and coarse-grid points X1, X2, X3 with values vi; Dirichlet BC at both ends.) With 7 fine points and 3 coarse points,

$$
\begin{pmatrix} u_1\\u_2\\u_3\\u_4\\u_5\\u_6\\u_7 \end{pmatrix}
= I \begin{pmatrix} v_1\\v_2\\v_3 \end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
1 & 0 & 0\\
2 & 0 & 0\\
1 & 1 & 0\\
0 & 2 & 0\\
0 & 1 & 1\\
0 & 0 & 2\\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} v_1\\v_2\\v_3 \end{pmatrix}.
$$

The 2D case works analogously (see blackboard).

• Restriction matrix R, fine grid (ui) => coarse grid (vi), again assuming Dirichlet BC. A possible choice is injection: v1 = u2, etc. A smarter choice is the weighted average, i.e. v1 = (u1 + 2u2 + u3)/4, and so on. Then

$$
R = \tfrac{1}{2}\, I^{T}.
$$

• Restriction of the system matrix A from the fine grid (Ah) to the coarse grid (A2h):

$$
A_{2h} = R\, A_h\, I.
$$

Example using Ah = K5/h^2 (5 fine-grid points => 2 coarse-grid points):

$$
A_h = \frac{1}{h^2}
\begin{pmatrix}
2 & -1 & & \\
-1 & 2 & -1 & \\
 & \ddots & \ddots & \ddots \\
 & & -1 & 2
\end{pmatrix},
\qquad
A_{2h} = R\, A_h\, I = \cdots = \frac{1}{(2h)^2}
\begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix}.
$$

This is just the K matrix on the coarse mesh!
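The grid-transfer matrices are easy to build and test numerically. Below is a minimal NumPy sketch (my own code, not from the lecture; the function names are made up) that assembles the linear-interpolation matrix I, the full-weighting restriction R = I^T/2, and the coarse-grid matrix A2h = R Ah I for the 5-point example above, and checks that A2h is exactly the K matrix on the coarse mesh, K2/(2h)^2.

```python
import numpy as np

def interpolation_matrix(n_coarse):
    """Linear interpolation from n_coarse coarse points to 2*n_coarse + 1 fine points
    (Dirichlet boundary conditions at both ends)."""
    n_fine = 2 * n_coarse + 1
    I = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2 * j + 1            # fine-grid index of the j-th coarse point
        I[i, j] = 1.0            # coarse value copied straight across
        I[i - 1, j] = 0.5        # each fine neighbor gets half
        I[i + 1, j] = 0.5
    return I

def second_difference(n, h):
    """A = K_n / h^2: the 1D second-difference (K) matrix with Dirichlet BC."""
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return K / h**2

n_fine, n_coarse = 5, 2          # 5 fine-grid points => 2 coarse-grid points
h = 1.0 / (n_fine + 1)

A_h = second_difference(n_fine, h)
I = interpolation_matrix(n_coarse)    # 5 x 2
R = 0.5 * I.T                         # full-weighting restriction, R = I^T / 2
A_2h = R @ A_h @ I                    # coarse-grid operator A_2h = R A_h I

print(A_2h)
print(np.allclose(A_2h, second_difference(n_coarse, 2 * h)))   # True: K_2 / (2h)^2
```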
Now multigrid begins. The current fine-grid residual is rh = -Ah e3, where e3 is the error after the three fine-grid smoothing iterations. After restriction to the coarse grid it becomes r2h. Three weighted Jacobi iterations on the coarse-grid error equation A2h E2h = r2h start with the guess E2h = 0. That produces the crucial error reduction shown in the figure contributed by Bill Briggs.

Error behavior

Figure 7.12 (v-cycle): error of the initial guess (left); error eh after 3 fine-grid iterations (center); error e'h after 3 coarse-grid iterations (right). Low frequency survives the 3 fine-grid iterations; it is reduced by the 3 coarse-grid iterations and mapped back to the fine grid. (Figure by Bill Briggs.)

This is how multigrid handles the low frequencies. A perfect smoother followed by a perfect coarse-grid correction (an exact solve at step 3) would leave no error. In reality this will not happen, but a fuller (not so simple) analysis shows that a multigrid cycle with good smoothing reduces the error by a constant factor ρ that is independent of h.

Some notes on performance

• Can show: ||error after step 5|| ≤ ρ ||error before step 1||, with typically ρ ≈ 0.1 independent of h (i.e., of N)!
• Typically for Jacobi alone: ρ ≈ 0.99.

(Plots: error versus iteration number m; error of each iteration on a logarithmic scale from 10^0 down to about 10^-6.)

It is the Holy Grail of numerical analysis to achieve a convergence factor ρ (for the overall iteration matrix) that does not move up to 1 as h decreases; then a given relative accuracy is reached in a fixed number of cycles. Since each cycle requires only O(n) operations on sparse problems of size n, multigrid is an O(n) algorithm. This does not change in higher dimensions.

There is a further point about the number of steps and the required accuracy. We may want the solution error e to be as small as the discretization error made when the original differential equation was replaced by Au = b. With second differences this demands that we continue until e = O(h^2) = O(1/N^2), so a fixed number of v-cycles is no longer enough: reaching ρ^k = O(1/N^2) takes k = O(log N) cycles. Multigrid has an answer for this too. Instead of repeating v-cycles, or nesting them into V-cycles or W-cycles, it is better to use full multigrid: FMG cycles are described below. Then the operation count comes back to O(n) even for this higher required accuracy e = O(h^2).
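To see reduction factors of roughly this size in practice, here is a self-contained sketch (again my own code, with made-up helper names) of the two-grid cycle of steps 1-5 for the 1D model problem, using three weighted Jacobi sweeps before and after an exact coarse-grid solve at step 3. It prints the per-cycle error reduction and, for comparison, the reduction from the same total number of Jacobi sweeps alone; the exact numbers depend on the problem and the smoother, so they will only be near 0.1 and 0.99, not equal to them.

```python
import numpy as np

def model_problem(n_fine):
    """Ah = K/h^2 on n_fine interior points, plus interpolation I and restriction R."""
    h = 1.0 / (n_fine + 1)
    K = lambda n: 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    n_coarse = (n_fine - 1) // 2
    I = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        I[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]   # linear interpolation stencil
    R = 0.5 * I.T                                 # full-weighting restriction
    return K(n_fine) / h**2, I, R

def weighted_jacobi(A, u, b, sweeps=3, omega=2/3):
    """A few weighted Jacobi smoothing sweeps on A u = b."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (b - A @ u) / d
    return u

def two_grid_cycle(A, I, R, u, b):
    """Steps 1-5: smooth, restrict residual, coarse solve, correct, smooth again."""
    u = weighted_jacobi(A, u, b)                  # step 1
    r2h = R @ (b - A @ u)                         # step 2
    A2h = R @ A @ I                               # coarse-grid operator
    E2h = np.linalg.solve(A2h, r2h)               # step 3 (exact coarse solve)
    u = u + I @ E2h                               # step 4
    return weighted_jacobi(A, u, b)               # step 5

n = 63
A, I, R = model_problem(n)
u_exact = np.random.rand(n)
b = A @ u_exact

u = np.zeros(n)
for _ in range(5):
    e_before = np.linalg.norm(u - u_exact)
    u = two_grid_cycle(A, I, R, u, b)
    print("two-grid reduction factor:", np.linalg.norm(u - u_exact) / e_before)

u = np.zeros(n)                                   # plain Jacobi, same 6 fine sweeps
for _ in range(5):
    e_before = np.linalg.norm(u - u_exact)
    u = weighted_jacobi(A, u, b, sweeps=6)
    print("Jacobi-only reduction factor:", np.linalg.norm(u - u_exact) / e_before)
```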
Some notes on performance

• Total cost per complete v-cycle: O(N) (N: number of grid points).
• Since the error is reduced in each cycle by a constant factor independent of h (and thus of N), the total cost to reach arbitrarily low errors is still only O(N): the finest-level grid operations determine the cost! (In 1D the work per cycle is roughly N + N/2 + N/4 + ... < 2N; in 2D it is N + N/4 + N/16 + ... < 4N/3.)

V-Cycles and W-Cycles and Full Multigrid

Clearly multigrid need not stop at two grids. If it did stop, it would miss the remarkable power of the idea. The lowest frequency is still low on the 2h grid, and that part of the error won't decay quickly until we move to 4h or 8h (or a very coarse 512h). The two-grid v-cycle extends in a natural way to more grids. It can go down to coarser grids (2h, 4h, 8h) and back up to (4h, 2h, h). This nested sequence of v-cycles is a V-cycle (capital V). Don't forget that coarse-grid sweeps are much faster than fine-grid sweeps. Analysis shows that time is well spent on the coarse grids, so the W-cycle that stays coarse longer (Figure 7.11b) is generally superior to a V-cycle.

Figure 7.11: V-cycles and W-cycles and FMG use several grids several times.
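The extension from two grids to a V-cycle is just a recursion: the exact solve at step 3 is replaced by another v-cycle on the 2h problem, down to a coarsest grid that is solved directly, and the coarser levels add only a geometrically decreasing amount of work. Here is a rough self-contained sketch of that recursion for the 1D model problem (my own code and parameter choices, e.g. three smoothing sweeps per level and a direct solve below a few points; the exact scheduling of Figure 7.11 and the W-cycle variant are not reproduced).

```python
import numpy as np

def operators(n):
    """1D model problem on n interior points: A = K/h^2, linear interpolation I
    (coarse -> fine) and full-weighting restriction R (fine -> coarse)."""
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    nc = (n - 1) // 2
    I = np.zeros((n, nc))
    for j in range(nc):
        I[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    return A, I, 0.5 * I.T

def smooth(A, u, b, sweeps=3, omega=2/3):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (b - A @ u) / d
    return u

def v_cycle(A, u, b, coarsest=3):
    """Recursive V-cycle: smooth, restrict, recurse on the coarse error equation,
    interpolate the correction back, smooth again."""
    n = len(b)
    if n <= coarsest:                             # coarsest grid: solve directly
        return np.linalg.solve(A, b)
    _, I, R = operators(n)
    u = smooth(A, u, b)                           # pre-smoothing
    r2h = R @ (b - A @ u)                         # restricted residual
    A2h = R @ A @ I                               # coarse-grid operator
    E2h = v_cycle(A2h, np.zeros(len(r2h)), r2h)   # recurse instead of exact solve
    u = u + I @ E2h                               # coarse-grid correction
    return smooth(A, u, b)                        # post-smoothing

n = 127                                           # 2^7 - 1 fine-grid points
A, _, _ = operators(n)
u_exact = np.random.rand(n)
b = A @ u_exact

u = np.zeros(n)
for k in range(8):
    u = v_cycle(A, u, b)
    print("V-cycle", k + 1, "error:", np.linalg.norm(u - u_exact))
```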