Recent perspectives on Discontinuous Galerkin methods
Franco Brezzi
IMATI del CNR, Pavia, Italy
Istituto Universitario di Studi Superiori (IUSS), Pavia, Italy
From joint works with
Antonietti, Arnold, Cockburn, Hughes, Marini, Masud, Süli,....
Warwick, June 30th - July 3rd, 2009
PLAN:
• Original Derivation of DG Methods
• The Weighted Residuals Formulation
• The Choice of the Weights
• Some Numerical Results
AVERAGES AND JUMPS

Th: decomposition of Ω in elements K;   Eh = edges of Th.

[Figure: two elements E+ and E− sharing an internal edge e, with outward unit normals n+ and n−.]

Definition of average and jump on an internal edge:

{v} = (v+ + v−)/2 ;   [[v]] = v+ n+ + v− n−   ∀e ∈ Eh° ≡ internal edges,

{τ} = (τ+ + τ−)/2 ;   [[τ]] = τ+ · n+ + τ− · n−   ∀e ∈ Eh° ≡ internal edges.

On the boundary edges:  [[v]] = v n ;   {τ} = τ.
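A quick numerical illustration of these definitions (my own example, not on the slides): on an internal edge with normals n+ = (1,0) and n− = (−1,0), take the traces v+ = 3, v− = 1 and τ+ = (2,0), τ− = (4,0). Then, in LaTeX notation,

\{v\} = \tfrac{3+1}{2} = 2, \qquad [\![v]\!] = 3\,(1,0) + 1\,(-1,0) = (2,0), \qquad
\{\tau\} = (3,0), \qquad [\![\tau]\!] = (2,0)\cdot(1,0) + (4,0)\cdot(-1,0) = -2,

so the jump of a scalar is a vector (it carries the normal), while the jump of a vector is a scalar.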
A MAGIC FORMULA

Assume that q is an edge-wise smooth scalar, and τ an edge-wise smooth vector. We obviously accept that they have different values on the two sides of the same edge. Then, using the above definitions of jump [[·]] and average {·}, we have

Σ_K ∫_{∂K} q τ·n ds = Σ_{e∈Eh} ∫_e [[q]]·{τ} ds + Σ_{e∈Eh°} ∫_e {q}[[τ]] ds,

where the first sum on the right-hand side ranges over all edges and the second over internal edges only. Clearly, there is nothing magic or deep: just reordering terms in the sum. But it is surely nice.
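The whole content is the following single-edge identity (spelled out here for completeness, in LaTeX notation; recall n− = −n+ on an internal edge):

q^+\,\tau^+\!\cdot n^+ \;+\; q^-\,\tau^-\!\cdot n^- \;=\; [\![q]\!]\cdot\{\tau\} \;+\; \{q\}\,[\![\tau]\!].

Summing the element boundary integrals, every internal edge collects exactly these two contributions, while a boundary edge contributes only [[q]]·{τ} = q τ·n; this is why the second sum runs over internal edges only.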
A HYPERBOLIC MODEL PROBLEM

Let Ω be a bounded polygonal domain in R², and let the advective velocity field β = (β1, β2)^T be a vector-valued function defined on Ω with βi ∈ C¹(Ω), i = 1, 2.
A HYPERBOLIC MODEL PROBLEM

We define the inflow and outflow parts of Γ = ∂Ω in the usual fashion:

Γ− = {x ∈ Γ : β(x) · n(x) < 0} = inflow,
Γ+ = {x ∈ Γ : β(x) · n(x) > 0} = outflow,

where n(x) denotes the unit outward normal vector to Γ at x ∈ Γ.
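As a concrete illustration (my example, not on the slides): take the unit square with a constant horizontal velocity,

\Omega = (0,1)^2, \qquad \beta = (1,0),

so that β·n = −1 on the left side {x1 = 0} and β·n = +1 on the right side {x1 = 1}. Hence Γ− is the left side, Γ+ is the right side, and the horizontal sides, where β·n = 0, belong to neither.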
A HYPERBOLIC MODEL PROBLEM

Let moreover γ ∈ C(Ω) be the reactive term, f ∈ L²(Ω) the external source, and g ∈ L²(Γ−) the Dirichlet datum. As a model problem we will consider the hyperbolic boundary value problem

Lu ≡ div(βu) + γu = f   in Ω,
u = g   on Γ−.

Let us see the DG formulation (Lesaint-Raviart, Reed-Hill).
ORIGINAL DERIVATION OF THE DG METHOD

Multiply the equation by a test function vh, and integrate over Ω:

∫_Ω (div(βu) + γu) vh dx = ∫_Ω f vh dx.

Then integrate by parts the first term:

Σ_{K∈Th} [ ∫_K (−u (β·∇vh) + γ u vh) dx + ∫_{∂K} (β·n) u vh ds ] = ∫_Ω f vh dx.

Using the magic formula, [[βu]] = 0, and u = g on Γ−:

Σ_{K∈Th} ∫_K (−u (β·∇vh) + γ u vh) dx + Σ_e ∫_e {βu}·[[vh]] ds + Σ_{e⊂Γ−} ∫_e β·n g vh ds = ∫_Ω f vh dx.
Then you write the equation putting uh instead of u:

Σ_{K∈Th} ∫_K (−uh (β·∇vh) + γ uh vh) dx + Σ_e ∫_e {βuh}·[[vh]] ds + Σ_{e⊂Γ−} ∫_e β·n g vh ds = ∫_Ω f vh dx.

Finally you substitute the average {βuh} by the upwind average {βuh}up, defined, on every edge, as the value taken from the upwind triangle:

Σ_{K∈Th} ∫_K (−uh (β·∇vh) + γ uh vh) dx + Σ_e ∫_e {βuh}up·[[vh]] ds + Σ_{e⊂Γ−} ∫_e β·n g vh ds = ∫_Ω f vh dx.
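As a sanity check of this construction, here is a minimal 1-D sketch (my own illustration, not code from the talk): piecewise-linear DG with upwind fluxes for u' + γu = 0 on (0,1) with u(0) = g, i.e. β = 1 and f = 0, whose exact solution for γ = 1, g = 1 is e^(−x). Each element owns its two endpoint traces; all function names are illustrative.

import numpy as np

def dg1d_upwind(N, gamma=1.0, g=1.0):
    # P1 DG with upwind fluxes for u' + gamma*u = 0 on (0,1), u(0) = g.
    # Element i owns dofs 2i (left trace) and 2i+1 (right trace).
    h = 1.0 / N
    A = np.zeros((2 * N, 2 * N))
    b = np.zeros(2 * N)
    C = np.array([[-0.5, -0.5], [0.5, 0.5]])      # C[a, b] = int_K phi_a' phi_b dx
    M = h * np.array([[1/3, 1/6], [1/6, 1/3]])    # M[a, b] = int_K phi_a phi_b dx
    for i in range(N):
        d = slice(2 * i, 2 * i + 2)
        A[d, d] += -C + gamma * M                 # volume terms: -(u, v') + gamma (u, v)
        A[2 * i + 1, 2 * i + 1] += 1.0            # upwind/outflow flux at the right node
        if i == 0:
            b[0] += g                             # inflow datum goes to the right-hand side
        else:
            A[2 * i, 2 * i - 1] -= 1.0            # upwind value taken from the left element
    return np.linalg.solve(A, b)

for N in (8, 16, 32):
    uh = dg1d_upwind(N)
    x = np.repeat(np.linspace(0.0, 1.0, N + 1), 2)[1:-1]   # left/right node of each element
    print(N, np.max(np.abs(uh - np.exp(-x))))     # the error should shrink as N grows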
AN ELLIPTIC MODEL PROBLEM (DARCY)

Let κ ∈ L∞(Ω) be the diffusion coefficient, and consider the problem:

A u ≡ −div(κ∇u) = f   in Ω,
u = 0   on Γ = ∂Ω.

It is often convenient to introduce the flux σ = −κ∇u, so that the problem splits in two equations

κ⁻¹σ + ∇u = 0   in Ω,
div σ = f   in Ω,
u = 0   on Γ.

Let us see its DG formulation (Arnold, Wheeler, Douglas-Dupont).
You multiply the equation by a test function vh and integrate over Ω. Then integrate by parts:

∫_Ω κ∇u·∇h vh dx − Σ_K ∫_{∂K} κ∇u·n vh ds = ∫_Ω f vh dx.

Rearranging terms with the magic formula you have

∫_Ω κ∇u·∇h vh − Σ_e ∫_e [[vh]]·{κ∇u} − Σ_e ∫_e [[κ∇u]]{vh} = ∫_Ω f vh.

Now you say: "Oh, but I know that u is smooth: hence [[κ∇u]] is zero and I can forget about it!" You then write

∫_Ω κ∇u·∇h vh dx − Σ_e ∫_e [[vh]]·{κ∇u} ds = ∫_Ω f vh dx.
You had

∫_Ω κ∇u·∇h vh dx − Σ_e ∫_e [[vh]]·{κ∇u} ds = ∫_Ω f vh dx.

Then you say: "Since u is smooth, then also [[u]] = 0!" And you add a term to restore symmetry

∫_Ω κ∇u·∇h vh − Σ_e ∫_e [[vh]]·{κ∇u} − Σ_e ∫_e [[u]]·{κ∇h vh} = ∫_Ω f vh.

Then you write uh in place of u

∫_Ω κ∇h uh·∇h vh − Σ_e ∫_e [[vh]]·{κ∇h uh} − Σ_e ∫_e [[uh]]·{κ∇h vh} = ∫_Ω f vh.
Your discrete problem is now

∫_Ω κ∇h uh·∇h vh − Σ_e ∫_e [[vh]]·{κ∇h uh} − Σ_e ∫_e [[uh]]·{κ∇h vh} = ∫_Ω f vh.

Then you say: "Gosh! My method is unstable! However, since [[u]] = 0, I can add a stabilizing term"

∫_Ω κ∇h uh·∇h vh − Σ_e ∫_e [[vh]]·{κ∇h uh} − Σ_e ∫_e [[uh]]·{κ∇h vh} + Σ_e ∫_e (κγ/|e|) [[uh]]·[[vh]] = ∫_Ω f vh.

And you are happy. This is "IP" (Interior Penalty).
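For concreteness, a minimal 1-D sketch of this IP recipe (my own illustration, not code from the talk), assuming κ = 1, P1 elements on a uniform mesh of (0,1), homogeneous Dirichlet data and the manufactured solution u = sin(πx); the penalty is written σ/h in place of κγ/|e|, and all names are illustrative.

import numpy as np

def sipg_1d(N, sigma=10.0):
    # P1 symmetric interior penalty DG for -u'' = f on (0,1), u(0) = u(1) = 0.
    h = 1.0 / N
    A = np.zeros((2 * N, 2 * N))
    b = np.zeros(2 * N)
    K = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h                  # int_K phi_a' phi_b' dx
    gauss = (0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0))  # 2-point rule on (0,1)
    f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
    for i in range(N):
        d = slice(2 * i, 2 * i + 2)
        A[d, d] += K
        for t in gauss:                            # element load vector by Gauss quadrature
            x = (i + t) * h
            b[2 * i] += 0.5 * h * f(x) * (1.0 - t)
            b[2 * i + 1] += 0.5 * h * f(x) * t

    def face(dofs, J, G):
        # adds -({u'}[[v]] + {v'}[[u]]) + (sigma/h)[[u]][[v]] for one face,
        # where J and G carry the jump and the average derivative of the local dofs
        A[np.ix_(dofs, dofs)] += (-np.outer(J, G) - np.outer(G, J)
                                  + (sigma / h) * np.outer(J, J))

    for i in range(1, N):                          # internal faces
        face([2 * i - 2, 2 * i - 1, 2 * i, 2 * i + 1],
             np.array([0.0, 1.0, -1.0, 0.0]),                 # [[v]] = v^- - v^+
             np.array([-1.0, 1.0, -1.0, 1.0]) / (2 * h))      # {v'}
    face([0, 1], np.array([-1.0, 0.0]), np.array([-1.0, 1.0]) / h)                 # x = 0
    face([2 * N - 2, 2 * N - 1], np.array([0.0, 1.0]), np.array([-1.0, 1.0]) / h)  # x = 1

    uh = np.linalg.solve(A, b)
    x = np.repeat(np.linspace(0.0, 1.0, N + 1), 2)[1:-1]
    return np.max(np.abs(uh - np.sin(np.pi * x)))

for N in (8, 16, 32):
    print(N, sipg_1d(N))                           # the error should behave roughly like h^2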
DG MIXED FORMULATION

Let us see the DG mixed formulation (Bassi-Rebay, Cockburn-Shu).

You multiply the equations κ⁻¹σ + ∇u = 0 and div σ = f by the test functions τ and v, respectively, and integrate by parts:

Σ_K [ ∫_K κ⁻¹σ·τ dx − ∫_K u div τ dx + ∫_{∂K} û τ·n ds ] = 0,

Σ_K [ −∫_K σ·∇v dx + ∫_{∂K} v σ̂·n ds ] = ∫_Ω f v dx,

where û and σ̂ are the numerical fluxes meant to approximate u|∂K and σ|∂K ≡ −κ∇u|∂K (respectively), to be modelled later on.
Then you apply the magic formula, assuming that û and σ̂ are single valued:

∫_Ω κ⁻¹σ·τ − ∫_Ω u divh τ dx + Σ_e ∫_e û [[τ]] ds = 0,

−∫_Ω σ·∇h v dx + Σ_e ∫_e σ̂·[[v]] ds = ∫_Ω f v.

Then you put σh in place of σ and uh in place of u, and you integrate by parts back (!) the first equation:

∫_Ω κ⁻¹σh·τ + ∫_Ω ∇h uh·τ + Σ_e ∫_e {û − uh}[[τ]] ds − Σ_e ∫_e [[uh]]·{τ} ds = 0,

−∫_Ω σh·∇h v dx + Σ_e ∫_e σ̂·[[v]] ds = ∫_Ω f v.
If your choice of û depends only on uh, and if the gradients of the discretized scalars are contained in the space of discretized vectors, you can use the first equation

∫_Ω κ⁻¹σh·τ + ∫_Ω ∇h uh·τ + Σ_e ∫_e {û − uh}[[τ]] ds − Σ_e ∫_e [[uh]]·{τ} ds = 0

to express σh directly as an explicit function of uh,

σh = σh(uh),

and substitute in the second:

−∫_Ω σh(uh)·∇h v dx + Σ_e ∫_e σ̂·[[v]] ds = ∫_Ω f v.

For various choices of û and σ̂ you get a whole ZOO of methods (Arnold-B-Cockburn-Marini).
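As one concrete member of the zoo (reconstructed here from the equations above rather than quoted from the slides, so take the precise signs as an assumption): on internal edges the choice

\hat u = \{u_h\}, \qquad \hat\sigma = -\{\kappa\nabla_h u_h\} + \frac{\kappa\gamma}{|e|}\,[\![u_h]\!]

makes û depend on uh only; eliminating σh from the first equation and substituting in the second then gives back, under the assumptions above on the discrete spaces, the Interior Penalty method of the previous slides, while other pairs (û, σ̂) generate the other members of the family.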
THE WEIGHTED RESIDUALS APPROACH - 1
(B-Cockburn-Marini-Süli - 2006)

Let us see the basic ideas behind it. Assume that we want to solve the problem

div(βu) + γu = f   in Ω ≡ Ω1 ∪ Σ ∪ Ω2,

with, say, β·n u = 0 on Γ−. Then we have

div(βu) + γu = f   in Ωi (i = 1, 2),
[[βu]] = 0   on Σ and on Γ−.

[Figure: Ω split into two subdomains Ω1 and Ω2 by an interface Σ.]
THE WEIGHTED RESIDUALS APPROACH - 2

Accordingly you could choose two operators, B0 and B1, and write

Σ_{i=1}^{2} ∫_{Ωi} (div(βuh) + γuh − f) B0 vh dx + ∫_{Σ∪Γ−} [[βuh]] B1 vh ds = 0   ∀vh.
THE WEIGHTED RESIDUALS APPROACH - 3

Similarly, if you want to solve the problem

−div(κ∇u) = f   in Ω ≡ Ω1 ∪ Σ ∪ Ω2,

with, say, u = 0 on Γ, then you have instead

−div(κ∇u) = f   in Ωi (i = 1, 2),
[[u]] = 0   on Σ and on Γ,
[[κ∇u]] = 0   on Σ.
THE WEIGHTED RESIDUALS APPROACH - 4

Then you could choose three operators, B0, B1 and B2, and write

Σ_{i=1}^{2} ∫_{Ωi} (−div(κ∇uh) − f) B0 vh dx + ∫_{Σ∪Γ} [[uh]]·B1 vh ds + ∫_{Σ} [[κ∇uh]] B2 vh ds = 0   ∀vh.
CHOICE OF THE WEIGHTS - 1

Starting from

((div(βuh) + γuh − f), B0 vh)h + <[[βuh]], B1 vh>− = 0   ∀vh,

you can choose B0 vh ≡ vh and integrate by parts:

(uh, −β·∇vh + γvh)h − (f, vh)h
+ <{βuh}, [[vh]]> + <[[βuh]], {vh}>−
+ <[[βuh]], B1 vh>− = 0   ∀vh.
CHOICE OF THE WEIGHTS - 2

Having

(uh, −β·∇vh + γvh)h − (f, vh)h
+ <{βuh}, [[vh]]> + <[[βuh]], {vh}>−
+ <[[βuh]], B1 vh>− = 0   ∀vh,

you can choose

B1 vh = −{vh} + (|β·n| / (2 β·n)) n·[[vh]]

and get

(uh, −β·∇vh + γvh)h − (f, vh)h + <{βuh}upw, [[vh]]>− = 0   ∀vh.
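To see why this B1 produces the upwind average (a verification added here; write [v] := n·[[v]] for the scalar jump, so that [[βuh]] = (β·n)[uh] on an internal edge):

\{v_h\} + B_1 v_h = \frac{|\beta\cdot n|}{2\,\beta\cdot n}\,[v_h]
\qquad\Longrightarrow\qquad
\langle[\![\beta u_h]\!],\,\{v_h\}+B_1 v_h\rangle
=\tfrac12\,\langle\,|\beta\cdot n|\,[u_h],\,[v_h]\,\rangle,

and adding this to <{βuh}, [[vh]]> gives exactly <{βuh}upw, [[vh]]>, because, edge by edge, {βu}upw·[[v]] = {βu}·[[v]] + ½ |β·n| [u][v].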
CHOICE OF THE WEIGHTS - 3

On the other hand, from

(div(βuh) + γuh − f, B0 vh)h + <[[βuh]], B1 vh>− = 0   ∀vh,

you can (brutally) choose B0 vh ≡ div(βvh) + γvh and B1 vh = μ(h)[[βvh]] to get

(div(βuh) + γuh − f, div(βvh) + γvh)h + μ(h) <[[βuh]], [[βvh]]>− = 0   ∀vh.
CHOICE OF THE WEIGHTS - 4

(div(βuh) + γuh − f, B0 vh)h + <[[βuh]], B1 vh>− = 0   ∀vh

However, you can also take B0 vh ≡ vh and B1 vh = μ(h)[[βvh]], for μ(h) big enough, to get

(div(βuh) + γuh − f, vh)h + μ(h) <[[βuh]], [[βvh]]>− = 0   ∀vh,

which surely works for γ + ∇·β > 0.
CHOICE OF THE WEIGHTS FOR DARCY - 1

Coming back to the Darcy problem −div(κ∇u) = f, with zero Dirichlet boundary conditions, our residual equations are now:

• div(κ∇u) + f = 0   in each element
• [[u]] = 0   on each edge
• [[κ∇u]] = 0   on each internal edge
CHOICE OF THE WEIGHTS FOR DARCY - 2

Then we have to choose three operators B0, B1, and B2, and to write

(A uh − f, B0(vh))h + <[[uh]], B1(vh)>h + <[[κ∇uh]], B2(vh)>h0 = 0   ∀vh,

where again

(u, v)h = Σ_{K∈Th} ∫_K u v dx,   <u, v>h = Σ_{e∈Eh} ∫_e u v ds,

and <u, v>h0 runs only on internal edges.
A POSSIBLE CHOICE FOR B0

Choosing B0(v) ≡ v and using the magic formula, we can write:

(A uh, B0 vh)h ≡ (−div(κ∇uh), vh)h = (κ∇uh, ∇vh)h − Σ_K ∫_{∂K} κ∇uh·n vh ds
= (κ∇uh, ∇vh)h − <{κ∇uh}, [[vh]]>h − <[[κ∇uh]], {vh}>h0.
A POSSIBLE CHOICE FOR B1 AND B2

With the previous choice B0(v) ≡ v our method then becomes

(κ∇uh, ∇vh)h − (f, vh)
− <{κ∇uh}, [[vh]]>h − <[[κ∇uh]], {vh}>h0
+ <[[uh]], B1(vh)>h + <[[κ∇uh]], B2(vh)>h0 = 0

for all vh in the discrete space. The strong temptation is to use B2(vh) = {vh} (in order to kill the term −<[[κ∇uh]], {vh}>h0) and B1(vh) = −{κ∇vh} (in order to symmetrize the term −<{κ∇uh}, [[vh]]>h). But you could also use B1(vh) = +{κ∇vh} to get B-O (Baumann-Oden).
A POSSIBLE CHOICE FOR B1

With B0(v) ≡ v and B1(vh) = −{κ∇vh} the method becomes

(κ∇uh, ∇vh)h − (f, vh)
− <{κ∇uh}, [[vh]]>h − <[[κ∇uh]], {vh}>h0
− <[[uh]], {κ∇vh}>h + <[[κ∇uh]], B2(vh)>h0 = 0

for all vh in the discrete space, and the choice B2(vh) = {vh} then cancels the two terms containing [[κ∇uh]].
A POSSIBLE CHOICE FOR B1

Surely, in all cases, you would need to add some stabilizing term to B1. Hence, for instance, with B0(v) ≡ v, B2(vh) = {vh}, and B1(vh) = −{κ∇vh} + C1(h)[[vh]] the method becomes

(κ∇uh, ∇vh)h − (f, vh)
− <{κ∇uh}, [[vh]]>h
− <[[uh]], {κ∇vh}>h + C1(h) <[[uh]], [[vh]]>h = 0   ∀vh.
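Reading the last formula against the earlier derivation: this is precisely the Interior Penalty method obtained before, with the abstract stabilization weight C1(h) playing the role of the penalty coefficient there,

C_1(h) \;\sim\; \frac{\kappa\gamma}{|e|} \qquad(\text{e.g. } C_1 = 10/h \text{ in the experiments below}).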
OTHER CHOICES OF THE WEIGHTS FOR DARCY

We had to choose three operators B0, B1, and B2 in

(A uh − f, B0(vh))h + <[[uh]], B1(vh)>h + <[[κ∇uh]], B2(vh)>h0 = 0   ∀vh.

A simple-minded but viable choice would be to take B0(vh) = A vh, B1(vh) = C1(h)[[vh]], and B2(vh) = C2(h)[[κ∇vh]] to reach

(A uh − f, A vh)h + C1(h) <[[uh]], [[vh]]>h + C2(h) <[[κ∇uh]], [[κ∇vh]]>h0 = 0   ∀vh.
OTHER CHOICES OF THE WEIGHTS FOR DARCY

We had to choose three operators B0, B1, and B2 in

(A uh − f, B0(vh))h + <[[uh]], B1(vh)>h + <[[κ∇uh]], B2(vh)>h0 = 0   ∀vh.

Now you could "play dumb" and take B0(vh) = vh, B1(vh) = C1(h)[[vh]], and B2(vh) = C2(h)[[κ∇vh]] to reach

(A uh − f, vh)h + C1(h) <[[uh]], [[vh]]>h + C2(h) <[[κ∇uh]], [[κ∇vh]]>h0 = 0   ∀vh.
THE CLASSICAL CHOICE (IP)

[Log-log convergence plots. Left: IP, quadratic elements, C1 = 10/h. Right: IP, cubic elements, C1 = 10/h.]
THE LEAST SQUARES CHOICE

[Log-log convergence plots. Left: LS, quadratic elements, C1 = C2 = 10/h. Right: LS, cubic elements, C1 = C2 = 10/h.]
THE DUMB CHOICE

[Log-log convergence plots. Left: DU, quadratic elements, C1 = C2 = 10/h. Right: DU, cubic elements, C1 = C2 = 10/h.]
THE DUMB CHOICE: SENSITIVITY

[Log-log convergence plots. Left: DU, quadratic elements, C1 = C2 = 1. Right: DU, cubic elements, C1 = C2 = 10.]
THE DUMB CHOICE: SENSITIVITY

[Log-log convergence plots. Left: DU, cubic elements, C1 = 10/h, C2 = 10. Right: DU, cubic elements, C1 = 10, C2 = 10/h.]
OTHER RESULTS

[Log-log convergence plots. Left: Dumb choice, quadratic elements, C1 = C2 = 10/h. Right: Least Squares, cubic elements, C1 = C2 = 1.]
GENERAL CONSIDERATIONS
• You could use WRDG because it makes the derivation easier
• You could use WRDG because it makes the a posteriori analysis
more natural
• You could use WRDG because it suggests new, interesting
methods
• You could use WRDG because it is new, and there is much to be
done
• You could use WRDG because finally you know what you are
doing