MATH 676 – Finite element methods in scientific computing

Wolfgang Bangerth, Texas A&M University
http://www.dealii.org/

Lecture 31.5: Nonlinear problems
Part 1: Introduction
Nonlinear problems

Reality is nonlinear. Linear equations are only approximations.

Linear equations typically assume that something is small:
● Poisson equation for the displacement of a membrane
  Assumption: small displacement
● Stokes equations
  Assumption: slow flow, incompressible medium
● Maxwell equations
  Assumption: small electromagnetic field strength
Fluid flow example, part 1

Consider the Stokes equations:

  ρ ∂u/∂t − ν Δu + ∇p = f
  ∇⋅u = 0

These equations are the small-velocity approximation of the
nonlinear Navier-Stokes equations:

  ρ (∂u/∂t + u⋅∇u) − ν Δu + ∇p = f
  ∇⋅u = 0
Fluid flow example, part 2

The Navier-Stokes equations

  ρ (∂u/∂t + u⋅∇u) − ν Δu + ∇p = f
  ∇⋅u = 0

are the small-pressure approximation of the variable-density
Navier-Stokes equations:

  ρ (∂u/∂t + u⋅∇u) − ν Δu + ∇p = f
  ∇⋅(ρu) = 0
Fluid flow example, part 3

The variable-density Navier-Stokes equations can be further
generalized:
● The viscosity really depends on
  – pressure
  – strain rate
● Friction converts mechanical energy into heat
● Viscosity and density depend on temperature
● …
1d elastic deformation example

Consider a 1d rubber band:
– Clamped at the ends
– Deformed perpendicularly by a force f(x)
– Leading to a perpendicular displacement u(x)

[Figure: the band over the x axis, displaced by u(x)]
1d elastic deformation example

[Figure: a segment of length Δx is stretched to length
√((Δx)² + (u'(x)Δx)²)]

If a material is linearly elastic, then the energy stored in a
deformation is proportional to its elongation:

  E_deformation(u) = lim_{Δx→0} Σ_{j=1}^{N} A ( √((Δx)² + (u'(x_j)Δx)²) − Δx ),   Δx = (b−a)/N

                   = ∫_a^b A ( √((dx)² + (u'(x)dx)²) − dx )

                   = ∫_a^b A ( √(1 + (u'(x))²) − 1 ) dx
1d elastic deformation example

The total energy is the sum of two terms:
– the deformation energy
– the work against an external force

  E(u) = E_deformation + E_potential
       = ∫_a^b A ( √(1 + (u'(x))²) − 1 ) dx − ∫_a^b f(x) u(x) dx
       = ∫_a^b [ A ( √(1 + (u'(x))²) − 1 ) − f(x) u(x) ] dx
1d elastic deformation example

We seek the displacement u(x) that minimizes the energy

  E(u) = ∫_a^b [ A ( √(1 + (u'(x))²) − 1 ) − f(x) u(x) ] dx

This is equivalent to finding the point u(x) at which every
infinitesimal variation εv(x) leaves the energy unchanged to
first order:

  lim_{ε→0} (1/ε) [ E(u + εv) − E(u) ] = 0   ∀ v ∈ H¹₀

In other words:

  ∫_a^b [ A v'(x) u'(x) / √(1 + (u'(x))²) − v(x) f(x) ] dx = 0   ∀ v ∈ H¹₀
1d elastic deformation example

We seek the displacement u(x) that satisfies

  ∫_a^b [ A v'(x) u'(x) / √(1 + (u'(x))²) − v(x) f(x) ] dx = 0   ∀ v ∈ H¹₀

The strong form of this equation is:

  − ( A u'(x) / √(1 + (u'(x))²) )' = f(x)

In multiple space dimensions, this generalizes to:

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

This is often called the minimal surface equation.
Minimal surface vs. Poisson equation

Note: If the vertical displacement of the membrane is small
and smooth, then

  |∇u|² ≪ 1

In this case, the (nonlinear) minimal surface equation

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

can be approximated by the (linear) Poisson equation:

  −∇⋅( A ∇u ) = f
What makes this complicated?

Start with the minimal surface equation

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

and its weak form:

  ( ∇φ, A ∇u / √(1 + |∇u|²) ) = (φ, f)   ∀ φ ∈ H¹₀

Let's see what happens if we just discretize as always using

  u_h(x) = Σ_j U_j φ_j(x)
What makes this complicated?

Start with the minimal surface equation

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

and discretize as always:

  ( ∇φ_i, A ∇(Σ_j U_j φ_j) / √(1 + |∇(Σ_j U_j φ_j)|²) ) = (φ_i, f)   ∀ i = 1…N

We can pull some coefficients and sums out:

  Σ_j ( ∇φ_i, A ∇φ_j / √(1 + |Σ_k U_k ∇φ_k|²) ) U_j = (φ_i, f)   ∀ i = 1…N

This is a (potentially large) nonlinear system of equations!
What makes this complicated?

Start with the minimal surface equation

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Discretizing as usual yields a system of nonlinear equations:

  Σ_j ( ∇φ_i, A ∇φ_j / √(1 + |Σ_k U_k ∇φ_k|²) ) U_j = (φ_i, f)   ∀ i = 1…N

This can be written as

  A(U) U = F

Problem: We don't know how to solve such systems directly.
That is, we know of no finite sequence of steps that yields the
solution of a general system of nonlinear equations!
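One natural idea for A(U)U = F is to freeze the matrix at the current iterate and repeatedly solve the resulting linear system A(U_k)U_{k+1} = F. A minimal Python sketch on a hypothetical 2×2 system, constructed so that the exact solution is U = (1, 1):

```python
# "Frozen matrix" iteration for A(U) U = F on a hypothetical example:
#   A(U) = [[3 + u1^2, 1], [1, 3 + u2^2]],  F = (5, 5),
# whose exact solution is U = (1, 1). Each step solves the *linear*
# system A(U_k) U_{k+1} = F.

def solve2x2(a11, a12, a21, a22, f1, f2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11 * a22 - a12 * a21
    return ((f1 * a22 - f2 * a12) / det,
            (a11 * f2 - a21 * f1) / det)

u1, u2 = 0.0, 0.0                       # initial guess U_0 = 0
for _ in range(60):
    # freeze the matrix at the current iterate, then solve
    u1, u2 = solve2x2(3 + u1**2, 1.0,
                      1.0, 3 + u2**2,
                      5.0, 5.0)

print(u1, u2)
```

There is no finite recipe hidden here: the loop only produces a sequence of iterates that (for this example) happens to converge to the solution, which is exactly the point of the next slides.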
Nonlinear problems

In general: There is no finite algorithm to find simultaneous
roots of a general system of nonlinear equations:

  f_1(x_1, …, x_N) = 0
  f_2(x_1, …, x_N) = 0
  ⋮
  f_N(x_1, …, x_N) = 0

or, more concisely:

  F(x) = 0

However: Such algorithms exist for the linear case, e.g.,
Gaussian elimination.
Nonlinear problems

In fact: There is no finite algorithm to find a root of even a
single general nonlinear equation:

  f(x) = 0

All algorithms for this kind of problem are iterative:
● Start with an initial guess x₀
● Compute a sequence of iterates {x_k}
● Hope (or prove) that x_k → x, where x is a root of f(·)

From here on: Consider only time-independent problems.
Approach to nonlinear problems

Goal: Find the root x of

  f(x) = 0

Choose a function g(x) so that the fixed points of

  x = g(x)

are also roots of f(x). Then iterate

  x_{k+1} = g(x_k)

This iteration converges if g is a contraction.
Approach to nonlinear problems

Goal: Choose g(x) so that

  x = g(x)  ⇔  f(x) = 0

Examples:
● "Picard iteration" (assume that f(x) = p(x)x − h):

    g(x) = h / p(x)   →   p(x_k) x_{k+1} = h

● Pseudo-timestepping:

    g(x) = x ± Δτ f(x)   →   (x_{k+1} − x_k)/Δτ = ± f(x_k)

● Newton's method:

    g(x) = x − f(x)/f'(x)   →   x_{k+1} = x_k − f(x_k)/f'(x_k)
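All three iterations can be compared on a small scalar example. A minimal Python sketch; the equation f(x) = (2 + x)x − 2 = 0 (i.e. p(x) = 2 + x, h = 2) and the step size Δτ = 0.1 are hypothetical choices for the demo, with positive root −1 + √3:

```python
# Compare Picard, pseudo-timestepping, and Newton on the scalar
# equation f(x) = (2 + x) x - 2 = 0 (hypothetical example;
# p(x) = 2 + x, h = 2, positive root -1 + sqrt(3)).
import math

def f(x):
    return (2 + x) * x - 2

def fprime(x):
    return 2 * x + 2

root = -1 + math.sqrt(3)

def iterate(g, x0, n):
    """Apply x_{k+1} = g(x_k) n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# Picard: p(x_k) x_{k+1} = h  ->  g(x) = h / p(x)
picard = iterate(lambda x: 2 / (2 + x), 1.0, 50)

# Pseudo-timestepping: x_{k+1} = x_k - dtau * f(x_k), small dtau
pseudo = iterate(lambda x: x - 0.1 * f(x), 1.0, 200)

# Newton: x_{k+1} = x_k - f(x_k) / f'(x_k)
newton = iterate(lambda x: x - f(x) / fprime(x), 1.0, 6)

print(abs(picard - root), abs(pseudo - root), abs(newton - root))
```

Note how Newton gets away with a handful of iterations while the other two need dozens or hundreds; this mirrors the convergence statements made for the minimal surface equation below.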
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Picard iteration: Repeatedly solve

  −∇⋅( A ∇u_{k+1} / √(1 + |∇u_k|²) ) = f

or, in weak form:

  ( ∇φ, A ∇u_{k+1} / √(1 + |∇u_k|²) ) = (φ, f)   ∀ φ ∈ H¹₀

This is a linear PDE in u_{k+1}. We know how to solve this.
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Picard iteration: Repeatedly solve

  ( ∇φ, A ∇u_{k+1} / √(1 + |∇u_k|²) ) = (φ, f)   ∀ φ ∈ H¹₀

Pros and cons:
● This is like the Poisson equation with a spatially varying
  coefficient (as in step-6) → SPD matrix, easy to solve
● Converges frequently
● Picard iteration typically converges rather slowly
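As a concrete illustration, here is a minimal finite difference sketch (not the deal.II finite element implementation used in this course) of the Picard iteration for the 1d analogue −(u′/√(1 + u′²))′ = f on (0, 1) with zero boundary values; the choices A = 1, f = 1, and the grid size are assumptions for the demo:

```python
# Picard iteration for the 1d minimal surface problem
#   -( u'(x) / sqrt(1 + u'(x)^2) )' = f(x)  on (0,1),  u(0) = u(1) = 0,
# discretized with central finite differences. A = 1, f = 1, and the
# grid are assumptions made for this demo.
import math

n, h = 99, 1.0 / 100         # 99 interior points, mesh size 1/100
f = [1.0] * n

def solve_tridiag(sub, dia, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    dia, rhs = dia[:], rhs[:]
    for i in range(1, len(rhs)):
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * len(rhs)
    u[-1] = rhs[-1] / dia[-1]
    for i in range(len(rhs) - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return u

u = [0.0] * n                # initial guess u_0 = 0
for _ in range(50):          # Picard sweeps
    up = [0.0] + u + [0.0]   # pad with the boundary values
    # lagged coefficient 1/sqrt(1 + u_k'^2) at the midpoints x_{i+1/2}
    c = [1.0 / math.sqrt(1 + ((up[i + 1] - up[i]) / h) ** 2)
         for i in range(n + 1)]
    sub = [-c[i] / h**2 for i in range(n)]
    dia = [(c[i] + c[i + 1]) / h**2 for i in range(n)]
    sup = [-c[i + 1] / h**2 for i in range(n)]
    u = solve_tridiag(sub, dia, sup, f)   # linear solve for u_{k+1}

print("u(0.5) ≈", u[n // 2])
```

Each sweep is exactly one SPD linear solve with a frozen, spatially varying coefficient, which is what makes this approach so easy to bolt onto an existing Poisson solver.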
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Pseudo-timestepping: Iterate to τ → ∞ the equation

  ∂u(τ)/∂τ − ∇⋅( A ∇u(τ) / √(1 + |∇u(τ)|²) ) = f

For example, using the explicit Euler method:

  (u_{k+1} − u_k)/Δτ − ∇⋅( A ∇u_k / √(1 + |∇u_k|²) ) = f
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Pseudo-timestepping: Iterate to τ → ∞ the equation

  ∂u(τ)/∂τ − ∇⋅( A ∇u(τ) / √(1 + |∇u(τ)|²) ) = f

Alternatively (and better): the semi-implicit Euler method…

  (u_{k+1} − u_k)/Δτ − ∇⋅( A ∇u_{k+1} / √(1 + |∇u_k|²) ) = f

…or some higher-order time stepping method.
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Pseudo-timestepping: Semi-implicit Euler method

  (u_{k+1} − u_k)/Δτ − ∇⋅( A ∇u_{k+1} / √(1 + |∇u_k|²) ) = f

Pros and cons:
● Pseudo-timestepping converges almost always
● Easy to implement (it's just a heat equation)
● With an implicit method, the time step can be made larger
  and larger as the iteration progresses
● Often takes many, many time steps
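The semi-implicit Euler sweep can be sketched in the same 1d finite difference setting (A = 1, f = 1, and a fixed Δτ are assumptions for the demo; deal.II is not used here). Compared to a pure Picard sweep, only the diagonal and the right-hand side pick up the 1/Δτ terms:

```python
# Semi-implicit Euler pseudo-timestepping for the 1d model problem
#   -( u' / sqrt(1 + u'^2) )' = f  on (0,1),  u(0) = u(1) = 0,
# driven to the steady state tau -> infinity. A = 1, f = 1, and a
# fixed pseudo-time step are assumptions made for this demo.
import math

n, h, dtau = 99, 1.0 / 100, 0.5
f = [1.0] * n

def solve_tridiag(sub, dia, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    dia, rhs = dia[:], rhs[:]
    for i in range(1, len(rhs)):
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * len(rhs)
    u[-1] = rhs[-1] / dia[-1]
    for i in range(len(rhs) - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return u

u = [0.0] * n
for _ in range(200):                   # pseudo-time steps
    up = [0.0] + u + [0.0]
    # lagged coefficient evaluated at u_k, diffusion treated implicitly
    c = [1.0 / math.sqrt(1 + ((up[i + 1] - up[i]) / h) ** 2)
         for i in range(n + 1)]
    sub = [-c[i] / h**2 for i in range(n)]
    dia = [1.0 / dtau + (c[i] + c[i + 1]) / h**2 for i in range(n)]
    sup = [-c[i + 1] / h**2 for i in range(n)]
    rhs = [f[i] + u[i] / dtau for i in range(n)]
    u = solve_tridiag(sub, dia, sup, rhs)

print("u(0.5) ≈", u[n // 2])
```

In the limit Δτ → ∞ the 1/Δτ terms drop out and each step reduces to a Picard sweep, which is one way to see why growing the time step accelerates convergence.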
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Newton's method: Consider the residual

  R(u) = f + ∇⋅( A ∇u / √(1 + |∇u|²) )

Solve R(u) = 0 using the iteration

  u_{k+1} = u_k − [R'(u_k)]⁻¹ R(u_k)

or equivalently:

  [R'(u_k)] δu_k = −R(u_k),   u_{k+1} = u_k + δu_k
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Newton's method: Iterate on

  [R'(u_k)] δu_k = −R(u_k),   u_{k+1} = u_k + δu_k

Here, this means:

  −∇⋅( A ∇δu_k / √(1 + |∇u_k|²) )
    + ∇⋅( A (∇u_k⋅∇δu_k) ∇u_k / (1 + |∇u_k|²)^{3/2} )
    = f + ∇⋅( A ∇u_k / √(1 + |∇u_k|²) )

This is in fact a symmetric and positive definite problem.
Application to the minimal surface equation

Goal: Solve

  −∇⋅( A ∇u / √(1 + |∇u|²) ) = f

Newton's method: Iterate on

  [R'(u_k)] δu_k = −R(u_k),   u_{k+1} = u_k + δu_k

Pros and cons:
● Rapid, quadratic convergence
● Only converges when started close enough to the solution
● The operator has a different structure than in the Picard iteration
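For comparison, a minimal 1d finite difference sketch of the Newton iteration for −(u′/√(1 + u′²))′ = f (again A = 1, f = 1, and the grid are assumptions for the demo; deal.II is not used here). In 1d the flux Jacobian is d/du′ [u′/√(1 + u′²)] = 1/(1 + u′²)^{3/2}:

```python
# Newton's method for the 1d model problem
#   -( u' / sqrt(1 + u'^2) )' = f  on (0,1),  u(0) = u(1) = 0.
# A = 1 and f = 1 are assumptions made for this demo.
import math

n, h = 99, 1.0 / 100
f = [1.0] * n

def solve_tridiag(sub, dia, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    dia, rhs = dia[:], rhs[:]
    for i in range(1, len(rhs)):
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * len(rhs)
    u[-1] = rhs[-1] / dia[-1]
    for i in range(len(rhs) - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return u

def residual(u):
    """R_i = f_i + (q_{i+1/2} - q_{i-1/2}) / h with q = u'/sqrt(1+u'^2)."""
    up = [0.0] + u + [0.0]
    s = [(up[i + 1] - up[i]) / h for i in range(n + 1)]
    q = [si / math.sqrt(1 + si * si) for si in s]
    return [f[i] + (q[i + 1] - q[i]) / h for i in range(n)]

u = [0.0] * n
for _ in range(8):                     # Newton steps
    up = [0.0] + u + [0.0]
    s = [(up[i + 1] - up[i]) / h for i in range(n + 1)]
    jac = [1.0 / (1 + si * si) ** 1.5 for si in s]  # flux Jacobian at u_k
    res = residual(u)
    # solve [R'(u_k)] du_k = -R(u_k), i.e. -(jac du')' = R(u_k) here
    sub = [-jac[i] / h**2 for i in range(n)]
    dia = [(jac[i] + jac[i + 1]) / h**2 for i in range(n)]
    sup = [-jac[i + 1] / h**2 for i in range(n)]
    du = solve_tridiag(sub, dia, sup, res)
    u = [u[i] + du[i] for i in range(n)]            # u_{k+1} = u_k + du_k

print("u(0.5) ≈", u[n // 2], " max |R| =", max(abs(r) for r in residual(u)))
```

Starting from u = 0, the first Newton step is simply a Poisson solve, and the residual then drops quadratically to machine precision in a handful of steps.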
Summary for nonlinear problems

● Nonlinear (stationary) PDEs are difficult because there are
  no direct algorithms for nonlinear systems of equations
● In the context of PDEs, we typically use one of three
  classes of methods:
  – Picard iteration
    · converges frequently, but slowly
  – Pseudo-timestepping
    · converges most reliably, but slowly
  – Newton iteration
    · does not always converge
    · if it converges, then very rapidly