MATH 676 – Finite element methods in scientific computing

Wolfgang Bangerth, Texas A&M University
http://www.dealii.org/
Lecture 33.5:
Which quadrature formula to use
The need for quadrature
Recall from lecture 4 and many example programs:
We compute
A_ij = (∇φ_i, ∇φ_j)
F_i  = (φ_i, f)
by mapping back to the reference cell...
A_ij = (∇φ_i, ∇φ_j)
     = ∑_K ∫_K ∇φ_i(x) · ∇φ_j(x) dx
     = ∑_K ∫_K̂ [J_K^{-1}(x̂) ∇̂φ̂_i(x̂)] · [J_K^{-1}(x̂) ∇̂φ̂_j(x̂)] |det J_K(x̂)| dx̂
...and quadrature:
A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
where the product |det J_K(x̂_q)| w_q =: JxW_q.
Similarly for the right hand side F.
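In deal.II, this sum over quadrature points is exactly what a cell assembly loop computes; the product |det J_K(x̂_q)| w_q is what FEValues::JxW(q) returns (hence the name). The following is a minimal, self-contained sketch of such a loop; the degree k, the mesh, and the right hand side f are made-up placeholders, and copying the local contributions into global objects is omitted:

  #include <deal.II/base/quadrature_lib.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_values.h>
  #include <deal.II/grid/grid_generator.h>
  #include <deal.II/grid/tria.h>
  #include <deal.II/lac/full_matrix.h>
  #include <deal.II/lac/vector.h>

  using namespace dealii;

  int main()
  {
    constexpr int      dim = 2;
    const unsigned int k   = 2;                  // polynomial degree (placeholder)

    Triangulation<dim> triangulation;            // placeholder mesh: refined unit square
    GridGenerator::hyper_cube(triangulation);
    triangulation.refine_global(3);

    const FE_Q<dim> fe(k);
    DoFHandler<dim> dof_handler(triangulation);
    dof_handler.distribute_dofs(fe);

    const QGauss<dim> quadrature(k + 1);         // which rule? -- the topic of this lecture
    FEValues<dim>     fe_values(fe, quadrature,
                                update_values | update_gradients |
                                update_quadrature_points | update_JxW_values);

    FullMatrix<double> cell_matrix(fe.dofs_per_cell, fe.dofs_per_cell);
    Vector<double>     cell_rhs(fe.dofs_per_cell);
    const auto         f = [](const Point<dim> &p) { return p.square(); };  // made-up f

    for (const auto &cell : dof_handler.active_cell_iterators())
      {
        fe_values.reinit(cell);
        cell_matrix = 0;
        cell_rhs    = 0;
        for (unsigned int q = 0; q < quadrature.size(); ++q)
          for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
            {
              for (unsigned int j = 0; j < fe.dofs_per_cell; ++j)
                cell_matrix(i, j) += fe_values.shape_grad(i, q) *   // grad phi_i(x_q)
                                     fe_values.shape_grad(j, q) *   // grad phi_j(x_q)
                                     fe_values.JxW(q);              // |det J_K(x_q)| * w_q
              cell_rhs(i) += fe_values.shape_value(i, q) *
                             f(fe_values.quadrature_point(q)) *
                             fe_values.JxW(q);
            }
        // ...copy cell_matrix and cell_rhs into the global matrix and vector...
      }
  }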
The need for quadrature
Question:
When approximating
    A_ij = (∇φ_i, ∇φ_j)
by
    A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
how should we choose the points x̂_q and the weights w_q?
In other words:
Which quadrature rule should we choose?
Considerations
Question: Which quadrature rule should we choose?
A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
Goals:
● Efficient: make Q as small as possible
● Accurate: do not introduce unnecessary errors
About accuracy: in particular, do nothing that affects the convergence order!
1d: The matrix
Question: Which quadrature rule should we choose?
A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
Consider the 1d case:
● We use an element of polynomial degree k
● We use a linear mapping
Then:
● J_K, J_K^{-1}, det J_K are constant
● ∇̂φ̂_j(x̂) is a polynomial of degree k−1
● The integrand has polynomial degree 2(k−1)
1d: The matrix
Question: Which quadrature rule should we choose?
A_ij = (∇φ_i, ∇φ_j)
     ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
Consider the 1d case:
● The integrand has polynomial degree 2(k−1)
● Gauss quadrature with n points is exact for polynomials up to degree 2n−1
Consequence:
We can compute the integral A_ij = (∇φ_i, ∇φ_j)
exactly via Gauss quadrature with n=k points!
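As a quick numerical sanity check of this claim, one can integrate a polynomial of degree 2(k−1) on the reference interval with a k-point Gauss rule and compare against the exact value; a stand-alone sketch (k and the test polynomial are chosen arbitrarily):

  #include <deal.II/base/quadrature_lib.h>
  #include <cmath>
  #include <iostream>

  int main()
  {
    const unsigned int k = 3;               // element degree (arbitrary for this test)
    const dealii::QGauss<1> quadrature(k);  // k points: exact up to degree 2k-1 >= 2(k-1)

    // Test integrand of degree 2(k-1) = 4 on the reference interval [0,1]: p(x) = x^4.
    double integral = 0.;
    for (unsigned int q = 0; q < quadrature.size(); ++q)
      integral += std::pow(quadrature.point(q)[0], 4) * quadrature.weight(q);

    std::cout << "quadrature: " << integral << "   exact: " << 1. / 5. << std::endl;
  }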
1d: The right hand side
Question: How about the right hand side?
F_i = (φ_i, f) ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) f(x_q) |det J_K(x̂_q)| w_q
Consider the 1d case:
● We use an element of polynomial degree k
● We use a linear mapping
Then:
● det J_K is constant
● φ̂_i(x̂) is a polynomial of degree k
● f(x) is not in general a polynomial
● The integrand is not polynomial
1d: The right hand side
Question: What to do here?
F_i = (φ_i, f) ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) f(x_q) |det J_K(x̂_q)| w_q
Consider Gauss integration with n points:
● Integrates polynomials of degree 2n−1 exactly
● For general f(x), it essentially integrates
    F_i = (φ_i, f) ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) f(x_q) |det J_K(x̂_q)| w_q ≈ (φ_i, I_{2n−k} f)
  where I_{2n−k} f = f at the n quadrature points + n−k others
1d: The right hand side
Consider Gauss integration with n points:
● Integrates polynomials of degree 2n−1 exactly
● For general f(x), it essentially integrates
    F_i = (φ_i, f) ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) f(x_q) |det J_K(x̂_q)| w_q = ∑_K ∫_K I_{2n}(φ_i f) ≈ (φ_i, I_{2n−k} f)
  where I_{2n−k} f = f at the n quadrature points + n−k others, on every cell
Consequence:
Inexact integration is equivalent to approximating
the solution of a slightly perturbed problem!
1d: The right hand side
Consider the original and perturbed problems:
    −Δu = f   in Ω,      u = 0   on ∂Ω
    −Δũ = f̃   in Ω,      ũ = 0   on ∂Ω
Consequence:
    ‖u − ũ_h‖_{H^1} ≤ ‖u − ũ‖_{H^1} + ‖ũ − ũ_h‖_{H^1}
where
    ‖u − ũ‖_{H^1} ≤ C_1 ‖f − f̃‖_{H^{-1}} ≤ C_2 h^{2n−k+1} ‖f‖_{H^{2n−k}}
    ‖ũ − ũ_h‖_{H^1} ≤ C_3 h^k ‖ũ‖_{H^{k+1}}
We want the first term to be at least as good as the
second. We need to choose n=k.
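To make that last step explicit, balance the exponents of the two terms:

\[
  C_2\, h^{2n-k+1}\, \|f\|_{H^{2n-k}} \;\lesssim\; C_3\, h^{k}\, \|\tilde u\|_{H^{k+1}}
  \quad\Longleftrightarrow\quad
  2n-k+1 \ge k
  \quad\Longleftrightarrow\quad
  n \ge k - \tfrac{1}{2},
\]

so the smallest admissible integer choice is n = k.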
Higher dimensions: The matrix
Question: Which quadrature rule should we choose?
A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
Consider the higher dimensional case:
● Use an element of polynomial degree k in each direction
● Use a d-linear mapping
Then:
● J_K, det J_K are polynomials of degree k, kd
● J_K^{-1} is a rational function
● ∇̂φ̂_j(x̂) is a polynomial of degree dk−1
● The integrand is rational
● For linear mappings, it is of degree 2(dk−1)
Higher dimensions: The matrix
Question: Which quadrature rule should we choose?
A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
Consider the higher dimensional case:
● The integrand is rational
● For linear mappings, it is of degree 2(dk−1)
● Gauss quadrature with n points per direction is exact for degree 2n−1 in each variable
Nevertheless, using the tensor product structure:
We need to use Gauss quadrature with n=k+1 points per direction.
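In deal.II, this tensor-product rule is QGauss<dim>(k+1), which therefore carries (k+1)^d points per cell; a tiny stand-alone check (dim and k are arbitrary here):

  #include <deal.II/base/quadrature_lib.h>
  #include <iostream>

  int main()
  {
    constexpr int      dim = 2;   // space dimension
    const unsigned int k   = 3;   // polynomial degree of the element
    const dealii::QGauss<dim> quadrature(k + 1);
    std::cout << "points per cell: " << quadrature.size() << std::endl;  // (k+1)^dim = 16
  }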
Higher dimensions: The right hand side
Question: Which quadrature rule should we choose?
F_i = (φ_i, f) ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) f(x_q) |det J_K(x̂_q)| w_q
Similar considerations can be applied:
We need to use Gauss quadrature
with n=k+1 points per direction.
Summary
As a general rule of thumb:
● Gauss quadrature with n=k+1 points per direction is sufficient
  – for the Laplace matrix A_ij = (∇φ_i, ∇φ_j)
  – for the mass matrix M_ij = (φ_i, φ_j)
  – for the right hand side F_i = (φ_i, f)
● It is generally also sufficient with variable coefficients:
    A_ij = (a(x) ∇φ_i, ∇φ_j)
  With n=k+1, the quadrature error does not dominate the overall error (if a(x) is smooth); see the sketch below.
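In deal.II terms, the rule of thumb simply derives the quadrature rule from the element degree. A minimal sketch; the coefficient a and the surrounding assembly loop are assumed to look like the one shown earlier:

  // Rule of thumb: n = k+1 Gauss points per coordinate direction.
  const FE_Q<dim>   fe(k);
  const QGauss<dim> quadrature(fe.degree + 1);
  // The same rule then serves for the stiffness matrix, the mass matrix, and the
  // right hand side; with a (smooth) coefficient a(x) the matrix update becomes
  //   cell_matrix(i, j) += a(fe_values.quadrature_point(q)) *
  //                        fe_values.shape_grad(i, q) *
  //                        fe_values.shape_grad(j, q) *
  //                        fe_values.JxW(q);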
Non-smooth coefficients
What to do with non-smooth terms?
For example
A_ij = (a(x) ∇φ_i, ∇φ_j)
F_i  = (φ_i, f)
where a(x) or f(x) are discontinuous.
Recall: Quadrature is equivalent to exact integration with
an interpolated coefficient.
For discontinuous functions, interpolation
does not help very much: Quadrature
produces large errors.
Non-smooth coefficients
What to do with non-smooth terms?
For example
A_ij = (a(x) ∇φ_i, ∇φ_j)
F_i  = (φ_i, f)
where a(x) or f(x) are discontinuous.
Before:
    ‖u − ũ_h‖_{H^1} ≤ ‖u − ũ‖_{H^1} + ‖ũ − ũ_h‖_{H^1}
where
    ‖u − ũ‖_{H^1} ≤ C_1 ‖f − f̃‖_{H^{-1}} ≤ C_2 h^{2n−k+1} ‖f‖_{H^{2n−k}}
    ‖ũ − ũ_h‖_{H^1} ≤ C_3 h^k ‖ũ‖_{H^{k+1}}
Now: The interpolation step fails! We may only get
    ‖u − ũ_h‖_{H^1} ≤ ‖u − ũ‖_{H^1} + ‖ũ − ũ_h‖_{H^1}
where
    ‖u − ũ‖_{H^1} ≤ C_1 ‖f − f̃‖_{H^{-1}} ≤ C_2 h^{s+1} ‖f‖_{H^s}
    ‖ũ − ũ_h‖_{H^1} ≤ C_3 h^k ‖ũ‖_{H^{k+1}}
Non-smooth coefficients
What to do with non-smooth terms?
For example
A_ij = (a(x) ∇φ_i, ∇φ_j)
F_i  = (φ_i, f)
where a(x) or f(x) are discontinuous.
Now:
    ‖u − ũ_h‖_{H^1} ≤ ‖u − ũ‖_{H^1} + ‖ũ − ũ_h‖_{H^1}
where
    ‖u − ũ‖_{H^1} ≤ C_1 ‖f − f̃‖_{H^{-1}} ≤ C_2 h^{s+1} ‖f‖_{H^s}
    ‖ũ − ũ_h‖_{H^1} ≤ C_3 h^k ‖ũ‖_{H^{k+1}}
Solution: Subdivide the cell into L pieces so that
    C_2 (h/L)^{s+1} ‖f‖_{H^s} ≈ C_3 h^k ‖ũ‖_{H^{k+1}}
This is what the QIterated class does.
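In deal.II, such a composite rule is built by handing a base rule and the number of subdivisions L to QIterated; a small stand-alone illustration (the numbers are arbitrary):

  #include <deal.II/base/quadrature_lib.h>
  #include <iostream>

  int main()
  {
    const unsigned int k = 2;   // element degree
    const unsigned int L = 4;   // number of subdivisions per coordinate direction

    // Apply a (k+1)-point Gauss rule on each of the L subintervals of [0,1],
    // then take the 2d tensor product of the resulting 1d rule.
    const dealii::QGauss<1>    base(k + 1);
    const dealii::QIterated<2> iterated(base, L);

    std::cout << "points per cell: " << iterated.size() << std::endl;  // (L*(k+1))^2 = 144
  }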
Special purpose quadratures
There are situations where we want quadrature
rules other than Gauss:
● To affect the stability properties of a discretization
  – Underintegration for nearly incompressible elasticity
  – Special purpose quadrature for mixed problems
● To improve the sparsity of matrices
  – Make some terms zero
  – Make a matrix diagonal
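For reference, deal.II ships several such rules besides QGauss; a few of them, all declared in <deal.II/base/quadrature_lib.h> (the constructor arguments below are arbitrary examples):

  #include <deal.II/base/quadrature_lib.h>

  int main()
  {
    constexpr int dim = 2;
    const dealii::QGauss<dim>        gauss(3);          // tensor-product Gauss rule
    const dealii::QGaussLobatto<dim> gauss_lobatto(3);  // Gauss rule including the end points
    const dealii::QMidpoint<dim>     midpoint;          // a single point at the cell center
    const dealii::QTrapezoid<dim>    trapezoidal;       // points at the vertices
                                                        // (called QTrapez in older releases)
    const dealii::QSimpson<dim>      simpson;           // tensor product of the 1d Simpson rule
    const dealii::QIterated<dim>     iterated(dealii::QGauss<1>(2), 4);  // composite rule
  }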
Sparsifying matrices
Using the trapezoidal rule for the Laplace matrix:
Assume:
● Uniform mesh with square cells
● Q1 element with shape functions
    φ_1 = (1−x̂)(1−ŷ),   φ_2 = x̂(1−ŷ),   φ_3 = (1−x̂)ŷ,   φ_4 = x̂ŷ
  and gradients
    ∇̂φ̂_1 = (−(1−ŷ), −(1−x̂))ᵀ,   ∇̂φ̂_2 = (1−ŷ, −x̂)ᵀ,
    ∇̂φ̂_3 = (−ŷ, 1−x̂)ᵀ,          ∇̂φ̂_4 = (ŷ, x̂)ᵀ
● Trapezoidal rule with integration points at the vertices
Sparsifying matrices
Using the trapezoidal rule for the Laplace matrix:
● Q1 element with shape gradients
    ∇̂φ̂_1 = (−(1−ŷ), −(1−x̂))ᵀ,   ∇̂φ̂_2 = (1−ŷ, −x̂)ᵀ,
    ∇̂φ̂_3 = (−ŷ, 1−x̂)ᵀ,          ∇̂φ̂_4 = (ŷ, x̂)ᵀ
● Trapezoidal rule with integration points at the vertices:
    A_ij ≈ ∑_K ∑_{q=1}^{Q} [J_K^{-1}(x̂_q) ∇̂φ̂_i(x̂_q)] · [J_K^{-1}(x̂_q) ∇̂φ̂_j(x̂_q)] |det J_K(x̂_q)| w_q
● At all vertices, we have
    ∇̂φ̂_1 · ∇̂φ̂_4 = 0,   ∇̂φ̂_2 · ∇̂φ̂_3 = 0
● Degrees of freedom diagonal across cells do not couple
Sparsifying matrices
Using the trapezoidal rule for the Laplace matrix:
● Degrees of freedom diagonal across cells do not couple:
    A_40 = A_42 = A_46 = A_48 = 0
● We can also show: A_41 = A_43 = A_45 = A_47 = −A_44/4
● This is exactly the 5-point stencil (→ finite differences)!
● In 3d, this leads to the usual 7-point stencil
● This matrix is sparser than normal
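Collecting the entries just listed, the matrix row that belongs to the central degree of freedom 4 of the 3×3 vertex patch has the stencil form

\[
  \frac{A_{44}}{4}
  \begin{pmatrix}
    0 & -1 & 0 \\
    -1 & 4 & -1 \\
    0 & -1 & 0
  \end{pmatrix},
\]

which is precisely the shape of the classical finite difference 5-point stencil for −Δ.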
Diagonal mass matrices
Using the trapezoidal rule for the mass matrix:
● Q1 element with shape values
    φ_1 = (1−x̂)(1−ŷ),   φ_2 = x̂(1−ŷ),   φ_3 = (1−x̂)ŷ,   φ_4 = x̂ŷ
● Trapezoidal rule with integration points at the vertices:
    M_ij ≈ ∑_K ∑_{q=1}^{Q} φ̂_i(x̂_q) φ̂_j(x̂_q) |det J_K(x̂_q)| w_q
● At all vertices, we have
    φ̂_i(x̂_q) φ̂_j(x̂_q) = δ_ij δ_iq
  and thus
    M_ij ≈ ∑_K ( ∑_{q=1}^{Q} |det J_K(x̂_q)| w_q ) δ_ij = ∑_K |K ∩ supp φ_i| δ_ij
This mass matrix is diagonal!
Diagonal mass matrices
Using the trapezoidal rule for the mass matrix:
● This results in a diagonal mass matrix
● This is useful in explicit time stepping schemes
● It generalizes to arbitrary elements by choosing the quadrature points at the nodal interpolation points (as sketched below)
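A minimal sketch of this idea in deal.II terms: use a quadrature rule whose points coincide with the element's support points. For Q1 that is the trapezoidal rule (class QTrapezoid, called QTrapez in older releases); for higher-degree FE_Q elements, whose default support points are the Gauss-Lobatto points, QGaussLobatto plays the same role. The assembly fragment assumes the usual FEValues setup shown earlier:

  // Quadrature points that coincide with the nodal points of the element make
  // the quadrature-based mass matrix diagonal ("mass lumping").
  const FE_Q<dim>          fe(k);
  const QTrapezoid<dim>    vertex_rule;            // nodal points of the Q1 element (k = 1)
  const QGaussLobatto<dim> nodal_rule(k + 1);      // nodal points of FE_Q(k) for general k
  FEValues<dim>            fe_values(fe, nodal_rule,
                                     update_values | update_JxW_values);
  // ...inside the assembly loop:
  //   cell_mass(i, j) += fe_values.shape_value(i, q) *
  //                      fe_values.shape_value(j, q) *
  //                      fe_values.JxW(q);         // nonzero only for i == j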
Summary
General rule:
● Use Gaussian quadrature with n=k+1 points per coordinate direction, where k is the highest polynomial degree in your finite element
● Think about the implications if you have non-smooth coefficients
● Only use quadrature rules other than Gaussian if you know why.