1. The Scalar Conservation Law
1.1 Introduction and smooth solution
In this text we consider the initial value problem
    u_t + f(u)_x = 0,   −∞ < x < ∞,  0 < t                                (1.1)
    u(0, x) = u_0(x)

where the function u(t, x) is the unknown and f(u) and u_0(x) are given functions.
It is a generalization of the hyperbolic problem

    u_t + a u_x = 0,   −∞ < x < ∞,  0 < t                                 (1.2)
    u(0, x) = u_0(x)
with which the reader is supposed to be familiar. Problem (1.2) is usually analyzed using Fourier series. Since problem (1.1) is in general nonlinear, Fourier methods cannot be used.
The choice f(u) = u²/2 yields the inviscid Burgers' equation, an equation interesting because of its resemblance to the equations of fluid dynamics. It is widely used as a model problem.
The equation u_t + f(u)_x = 0 is called a conservation law. By integrating over −∞ < x < ∞ one gets

    d/dt ∫_{−∞}^{∞} u(x, t) dx = 0,

assuming that f(u) vanishes as |x| → ∞. Thus the name derives from the fact that the integral of u is conserved in time.
The function f(u) is called the flux function. By integrating over a < x < b one gets

    d/dt ∫_a^b u(x, t) dx = f(u(t, a)) − f(u(t, b))                       (1.3)

which can be given the interpretation that the integral of u over a finite interval can change due to in- or outflow at the boundaries x = a and x = b.
If we carry out the x-differentiation we get

    u_t + a(u) u_x = 0

where a(u) = f′(u). In the same way as for problem (1.2), we can make the definition

Definition 1.1. The characteristics are the curves in the x–t plane defined by

    dx(t)/dt = a(u(t, x(t)))                                              (1.4)
We have a theorem similar to the one for the linear case.
Theorem 1.2. If the solution u(t, x) is differentiable, it is constant along the characteristics.

Proof: The chain rule is used to evaluate the derivative of u along a characteristic curve:

    du(t, x(t))/dt = u_t + (dx(t)/dt) u_x = u_t + a(u) u_x = 0

using (1.4). The derivative is zero and the solution constant.
The theorem and (1.4) imply that the characteristics are straight lines. The following theorem further shows that there are many similarities between (1.1) and (1.2).
Theorem 1.3. The solution u to problem (1.1) satisfies

    u = u_0(x − a(u) t)                                                   (1.5)

if it is differentiable.

Proof: Insert (1.5) into the PDE and use the chain rule. The result from doing this is

    (1 + u_0′(x − a(u)t) a′(u) t)(u_t + a(u) u_x) = 0.

To see this, differentiate (1.5) with respect to time and obtain

    u_t = u_0′(x − a(u)t)(−a′(u) t u_t − a(u)).

Solve for u_t:

    u_t = − u_0′ a / (1 + u_0′ a′ t).

Since we assume that u has a continuous derivative, the denominator 1 + u_0′ a′ t must be different from zero, and thus the factor multiplying u_t + a(u) u_x can be divided out, and the proof is complete.

If the above nonlinear algebraic equation has a unique solution, a very efficient solution procedure for problem (1.1) is to solve (1.5) by Newton's method.
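For Burgers' equation (f(u) = u²/2, so a(u) = u) the algebraic equation (1.5) reads u = u_0(x − ut), and Newton's method applies directly. The following is a minimal sketch under that choice of flux; the function and variable names are our own, not from the text.

```python
import math

def solve_characteristic(x, t, u0, du0, tol=1e-12, max_iter=50):
    """Solve u = u0(x - u*t) for Burgers' equation (a(u) = u) with Newton's
    method.  g(u) = u - u0(x - u*t), g'(u) = 1 + t*u0'(x - u*t)."""
    u = u0(x)  # initial guess: the solution at t = 0
    for _ in range(max_iter):
        step = (u - u0(x - u * t)) / (1.0 + t * du0(x - u * t))
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton iteration did not converge")

# Smooth initial data u0(x) = sin(x); the solution stays smooth for small t.
u = solve_characteristic(0.5, 0.3, math.sin, math.cos)
```

The converged value satisfies the implicit relation u = sin(x − ut) to round-off, which is exactly the statement of (1.5).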
1.2 Non-smoothness, jump condition
The major difference between the linear and the nonlinear equations is that for the latter, the solution in the class of continuous functions may fail to exist after a finite time, no matter how smooth the initial data are. We give three examples to show how this failure occurs.
Example 1.1 (Geometric description of smoothness failure)

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = sin x

By differentiation a(u) = u, and thus the slope of the characteristics is u. Initially, at the point x = π/2 the slope and the solution are 1, and at the point x = 3π/2 the slope and the solution are −1.
[Figure 1.1. Values are transported along the characteristics.]
The value 1 is transported to the right and the value −1 to the left; at some time they will meet, thereby causing a failure of smoothness in the solution.
Example 1.2 (Algebraic description of smoothness failure) Consider (1.5). By implicit differentiation with respect to t we get

    u_t = u_0′(x − a(u)t)(−a′(u) t u_t − a(u)).

Solve for u_t:

    u_t = − u_0′ a / (1 + u_0′ a′ t)                                      (1.6)

If u_0′ a′ is < 0 at some point, we see from (1.6) that there will be a blow-up of the derivative at t = −1/(u_0′ a′).

Example 1.2 shows that under certain conditions, such as e.g. a′(u) > 0 and u_0′(x) > 0, a smooth solution does exist.
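The blow-up time from (1.6) is easy to evaluate numerically. For Burgers' equation a(u) = u, so a′(u) = 1, and with u_0(x) = sin x the earliest blow-up is t* = −1/min_x u_0′(x) = 1. A small sketch (the variable names are ours):

```python
import numpy as np

# For Burgers' equation a'(u) = 1, so by (1.6) the derivative blows up where
# 1 + u0'(x) t = 0; the earliest such time is t* = -1 / min_x u0'(x).
# With u0(x) = sin(x) this gives t* = 1.
x = np.linspace(0.0, 2.0 * np.pi, 100001)
du0 = np.cos(x)                       # u0'(x)
decreasing = du0[du0 < 0.0]           # only decreasing parts cause blow-up
t_star = (-1.0 / decreasing).min()
```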
Example 1.3 (Dynamic description of smoothness failure) The same problem as in Example 1.1 is considered:

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = sin x
The differential equation can be written

    u_t + u u_x = 0

and u can, in analogy with the linear hyperbolic equation, be interpreted as the speed with which the initial data propagate. For the sine wave below, the maxima travel to the right with speed 1 and the minima to the left with speed −1. This causes a gradual sharpening of the gradients with time,
[Fig. 1.2. A solution to Burgers' equation: a) at t = 0, b) at t = 0.1.]
and finally the waves break into discontinuities.
[Fig. 1.2 (cont.). c) at t = 0.5.]
The examples show the necessity of extending the solutions into the class of functions with discontinuities. The partial differential equation does not make sense for non-differentiable functions. We can, however, interpret the derivatives in the sense of distributions. More specifically, this means that the equation is multiplied by a smooth test function φ ∈ C_0^1(R⁺ × R) and then integrated in time and space. Integration by parts afterwards moves the derivatives to the smooth test function. Doing this yields

    ∫_0^∞ ∫_{−∞}^{∞} ( φ_t u + φ_x f(u) ) dx dt + ∫_{−∞}^{∞} φ(0, x) u_0(x) dx = 0          (1.7)

The boundary terms at |x| = ∞ and t = ∞ do not contribute, since φ is assumed to have compact support.
Definition 1.4. A weak solution to (1.1) is a function u(t, x) satisfying (1.7) for all smooth test functions φ ∈ C_0^1.
In the specific case of one discontinuity separating two smooth parts of the solution, we can use the conservation property of the original problem (1.1) to obtain the following theorem.

Theorem 1.5 (Rankine–Hugoniot). Assume that a discontinuity is moving with speed s and that the value of u to the left of the jump is u_L and to the right u_R. Then the following holds:

    s(u_L − u_R) = f(u_L) − f(u_R)
Proof: Use the integrated form (1.3),

    d/dt ∫_a^b u dx = f(u(t, a)) − f(u(t, b)).                            (1.8)

Assume there is one discontinuity moving on the curve x(t) and that the solution is smooth otherwise. Separate (1.8) into smooth parts:

    d/dt ( ∫_a^{x(t)} u dx + ∫_{x(t)}^b u dx ) = f(u(t, a)) − f(u(t, b)).

The differentiation can now be carried out, giving

    ∫_a^{x(t)} u_t dx + u(t, x(t)−) x′(t) + ∫_{x(t)}^b u_t dx − u(t, x(t)+) x′(t) = f(u(t, a)) − f(u(t, b)).

Now use u_t = −f(u)_x in the integrals. Performing the integration gives

    f(u(t, a)) − f(u(t, x(t)−)) + u(t, x(t)−) x′(t) + f(u(t, x(t)+)) − f(u(t, b)) − u(t, x(t)+) x′(t)
        = f(u(t, a)) − f(u(t, b)).

The desired result is obtained by rearranging this expression and using the notations u(t, x(t)−) = u_L, u(t, x(t)+) = u_R, x′(t) = s.
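The jump condition gives a one-line formula for the shock speed. A minimal sketch (the helper name is ours), checked on Burgers' flux:

```python
def shock_speed(f, uL, uR):
    """Rankine-Hugoniot speed: s = (f(uL) - f(uR)) / (uL - uR)."""
    return (f(uL) - f(uR)) / (uL - uR)

f = lambda u: 0.5 * u**2            # Burgers' flux
s = shock_speed(f, 0.0, 1.0)        # a jump between 0 and 1 moves at s = 1/2
```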
1.3. Uniqueness, Entropy condition

When we extend the class of admissible solutions from the differentiable functions to non-differentiable functions, we unfortunately lose uniqueness. The extended class of functions is too large.

We therefore impose an extra condition, the so-called entropy condition, which tells us, in case of multiple solutions, which solution is the correct one. The name derives from the application to gas dynamics, in which case there is only one solution satisfying the physically correct condition of entropy decrease.

As we will see later, entropy conditions are important when we study numerical methods, since some convergent numerical methods do not converge to the solution singled out by the entropy condition.
The theory is considerably simplified if the flux function is convex (f″(u) > 0). Therefore we start with that case. The typical example of non-uniqueness is the following.
Example 1.4 Two possible solutions to the problem

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { 0, x < 0;  1, x > 0 }

are

    u_1(t, x) = { 0, x < t/2;  1, x > t/2 }.

The jump is moving with the speed s = 1/2 obtained from the Rankine–Hugoniot condition, and

    u_2(t, x) = { 0, x < 0;  x/t, 0 ≤ x ≤ t;  1, x > t }.

The second solution is a so-called expansion wave (or rarefaction wave). It is easy to see that these functions solve the problem, by inserting them into the differential equation. By looking at the characteristics in the x–t plane, we get the following picture of the solution u_1 in the example above.
[Fig. 1.3. Diverging characteristics.]
This solution is not a good one, for the following reasons:
1. Sensitivity to perturbations. A small disturbance in the discontinuity will propagate out into the solution and affect the smooth parts.
2. There are characteristics emanating from the discontinuity. We would like the solution to be determined by the initial data. Consequently, if at some time t we trace a characteristic backwards, we should end at some point at time zero. This is not true for this solution.
For the following example, points one and two are resolved in a satisfactory way.
Example 1.5 The problem

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { 1, x < 0;  0, x > 0 }

has a solution

    u(t, x) = { 1, x < t/2;  0, x > t/2 }.

The jump is moving with the speed s = 1/2 obtained from the Rankine–Hugoniot condition. The characteristics are pointing into the jump.
[Fig. 1.4. Converging characteristics.]
Here a small disturbance in the jump will immediately disappear into the discontinuity, and at a given time we can always follow a characteristic backwards to time zero.

Example 1.5 gives a motivation for the following definition.
Definition 1.6. A discontinuity with left state u_L and right state u_R, moving with speed s, for a conservation law with convex flux function, is entropy satisfying if

    f′(u_L) > s > f′(u_R)                                                 (1.9)
This means that the characteristics are going towards the discontinuity as time increases.
An entropy satisfying discontinuity is also called a shock. The significance of the above definition can be seen in the following theorem.

Theorem 1.7. The initial value problem (1.1) with convex flux function and arbitrary integrable initial data has a unique weak solution in the class of functions satisfying (1.9) across all jumps.

Proof: The proof of existence uses the exact solution formula which we describe in the next section. We refer to [17] for the details, and for the uniqueness.
For the non-convex conservation law the condition (1.9) has to be satisfied for all u between u_L and u_R. The flux function could look like in Fig. 1.5.

[Fig. 1.5. Non-convex flux function.]
Here a jump between u_L and u_R satisfies condition (1.9), but is still not the correct solution. It has turned out that it is necessary to require the following entropy condition for a general non-convex conservation law:

    (f(u_L) − f(u)) / (u_L − u)  ≥  (f(u_R) − f(u_L)) / (u_R − u_L),   all u ∈ [u_L, u_R] or [u_R, u_L]     (1.10)

It is important that all values u between u_L and u_R are involved. Intuitively we can understand (1.10) as requiring the characteristics to go into the shock for the entire family of shocks between u_L and u, u ∈ [u_L, u_R]. Geometrically (1.10) can be interpreted as: the graph u → f(u) must lie below the chord between (u_L, f(u_L)) and (u_R, f(u_R)) if u_L > u_R, and above it if u_L < u_R. (1.10) can be derived from the inviscid limit of the problem

    u_t + f(u)_x = ε u_xx                                                 (1.11)

where ε is a positive parameter. (1.11) has a unique smooth solution. The physically relevant solution of (1.1) is defined as the limit of solutions of (1.11) as ε → 0. We give a derivation of (1.10) later in this section.
There is a result similar to Theorem 1.7 for the entropy condition (1.10).

Theorem 1.8. The initial value problem (1.1) with arbitrary integrable initial data has a unique weak solution in the class of functions satisfying (1.10) across all jumps.

Proof: Not given here. We refer to [16].
An example of a conservation law with non-convex flux function is the so-called Buckley–Leverett equation

    u_t + ( u² / (u² + (1 − u)²/4) )_x = 0

which occurs in the theory of flow through porous media.
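Condition (1.10) can be tested numerically by sampling intermediate states u between u_L and u_R and comparing the chord slope from u_L with the shock speed. A discretized sketch (the function names are ours), checked here only for the convex Burgers flux:

```python
import numpy as np

def satisfies_1_10(f, uL, uR, n=10001):
    """Discretized test of the entropy condition (1.10): the chord slope
    from uL to every intermediate state u must be >= the shock speed s."""
    s = (f(uL) - f(uR)) / (uL - uR)
    u = np.linspace(min(uL, uR), max(uL, uR), n)[1:-1]  # interior states
    chord = (f(uL) - f(u)) / (uL - u)
    return bool(np.all(chord >= s - 1e-12))

f_burgers = lambda u: 0.5 * u**2
ok = satisfies_1_10(f_burgers, 1.0, 0.0)    # compressive jump: True
bad = satisfies_1_10(f_burgers, 0.0, 1.0)   # expansive jump: False
```

For a non-convex flux the same routine detects jumps that satisfy (1.9) but violate (1.10), which is the situation of Fig. 1.5.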
There is an alternative way of getting entropy conditions, which we now describe. First the equation

    u_t + f(u)_x = ε u_xx

is multiplied by E′(u), where E(u) is a strictly convex (E″(u) > 0), differentiable function which we will call the entropy function:

    E′(u) u_t + E′(u) f(u)_x = ε E′(u) u_xx.

Define F′(u) = E′(u) f′(u); the equation then takes a form similar to the original one:

    E(u)_t + F(u)_x = ε E′(u) u_xx.

Using the identity

    E(u)_xx = E″(u)(u_x)² + E′(u) u_xx

we rewrite the viscosity term and get

    E(u)_t + F(u)_x = ε( E(u)_xx − E″(u)(u_x)² ) ≤ ε E(u)_xx,

where the last inequality follows from the fact that E(u) is convex. Let now ε → 0 and we arrive at

    E(u)_t + F(u)_x ≤ 0                                                   (1.12)

where the inequality should be understood to be valid in the sense of distributions.

Thus we have shown that if u(t, x) is a solution to the original conservation law, obtained as the vanishing viscosity limit solution of (1.11), the additional inequality (1.12) is valid. As previously mentioned, the vanishing viscosity solution is the physically relevant one, which we want an entropy condition to choose for us. As the entropy condition we take (1.12), or, across jump discontinuities,

    s( E(u_R) − E(u_L) ) − ( F(u_R) − F(u_L) ) ≥ 0                        (1.13)

which follows from (1.12) by calculations similar to the proof of Theorem 1.5. We now have three different entropy conditions, (1.9), (1.10) and (1.13); we finish by investigating the relationship between them. By using the definition E′f′ = F′ it is easy to prove the identity

    s( E(u_R) − E(u_L) ) − ( F(u_R) − F(u_L) ) = ∫_{u_L}^{u_R} E″(u)( s u_L − f(u_L) − (s u − f(u)) ) du.

The function inside the integral is familiar: using the definition of s, the shock speed, we can rewrite entropy condition (1.10) as

    s u − f(u) ≤ s u_L − f(u_L),   u_L < u_R
    s u − f(u) ≥ s u_L − f(u_L),   u_L > u_R

thus we immediately get (1.10) ⇒ (1.13) from

    ∫_{u_L}^{u_R} E″(u)( s u_L − f(u_L) − (s u − f(u)) ) du ≥ 0

and E″(u) > 0. For the implication in the other direction it is necessary to assume that (1.13) is valid for all convex E(u), or at least for a class sufficiently large to assure that

    ∫ E″(u) g(u) du ≥ 0  ⇒  g(u) ≥ 0.
One example of such a class is given in Exercise 5. In the special case of f(u) convex, the sign of s u_L − f(u_L) − (s u − f(u)) does not change over the interval [u_L, u_R], and one convex entropy function is sufficient. Summary:

    (1.10) ⇒ (1.13) for any convex entropy function.
    (1.13) for a "large" class of entropy functions ⇒ (1.10).
    (1.13) with one entropy function ⇔ (1.9).
    (1.10) ⇒ (1.9).
    (1.9) ⇒ (1.10) if f(u) convex.

Here the last two implications are easily shown and left as an exercise.
1.4 Exact solution formulas
For reference we here give some analytic solution formulas without proving them. The equation

    u_t + (u²/2)_x = ε u_xx

can be solved exactly [15]; the formula is not given here. A similar result has been obtained for the problem

    u_t + f(u)_x = 0,   −∞ < x < ∞,  0 < t                                (1.14)
    u(0, x) = u_0(x)

with f(u) convex. The solution at a fixed point (t, x) is obtained from

Theorem 1.9. The solution to (1.14) is given by

    u(t, x) = b((x − y)/t)                                                (1.15)

where b(u) is the inverse function of f′(u) (which exists since f(u) is convex) and y is the value which minimizes ((t, x) are still kept fixed)

    G(x, y, t) = ∫_{−∞}^{y} u_0(s) ds + t h((x − y)/t).

Here h(u) is a function determined from h′(u) = b(u) and h(f′(0)) = 0.

We refer to [17] for a derivation of the formulas.
The problem with piecewise constant initial data will be of importance to some of the numerical methods encountered later on. In the scalar case it is possible to solve the problem

    u_t + f(u)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { u_L, x < 0;  u_R, x > 0 }

analytically for any differentiable flux function f(u); u_L and u_R are constants. First one proves that the solution depends only on x/t. Let the solution be u(t, x) = u(x/t) = u(ξ). The following formulas then give a closed expression for u(ξ):

    u(ξ) = −d/dξ ( min_{w ∈ [u_L, u_R]} ( f(w) − ξw ) ),   u_L < u_R
    u(ξ) = −d/dξ ( max_{w ∈ [u_R, u_L]} ( f(w) − ξw ) ),   u_L > u_R      (1.16)

The differentiation is made in the sense of distributions. We refer to [20] for a derivation of these formulas.
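Formula (1.16) can be evaluated by brute force: by the envelope theorem, −d/dξ of the min (or max) of f(w) − ξw is the extremizing w itself, so sampling w on a fine grid and taking the arg-min/arg-max gives u(ξ) directly. A discretized sketch (function names are ours):

```python
import numpy as np

def riemann_selfsimilar(f, uL, uR, xi, n=2001):
    """Self-similar Riemann solution u(xi), xi = x/t, via (1.16): return
    the discrete arg-min (uL < uR) or arg-max (uL > uR) of f(w) - xi*w
    over the interval between the two states."""
    w = np.linspace(min(uL, uR), max(uL, uR), n)
    g = f(w) - xi * w
    return w[np.argmin(g)] if uL < uR else w[np.argmax(g)]

f = lambda u: 0.5 * u**2
# Burgers: uL = 0 < uR = 1 gives the rarefaction u(xi) = xi on 0 < xi < 1;
# uL = 1 > uR = 0 gives a shock travelling with speed 1/2.
u_mid = riemann_selfsimilar(f, 0.0, 1.0, 0.5)
```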
Exercises

1. In [17] the following entropy condition is given:

       f(αu_L + (1 − α)u_R) ≥ αf(u_L) + (1 − α)f(u_R),   u_L < u_R
       f(αu_L + (1 − α)u_R) ≤ αf(u_L) + (1 − α)f(u_R),   u_L > u_R

   for all α ∈ [0, 1]. Show that this entropy condition is equivalent to (1.10). What is the geometrical interpretation of the above entropy condition?
2. The formula (1.5) cannot be used to solve the problems

       u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
       u(0, x) = { 1, x < 0;  0, x > 0 }

   and

       u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
       u(0, x) = { 0, x < 0;  1, x > 0 }.

   Try to use it anyway, to investigate how the formula fails. Then use formula (1.15) in the last section to obtain a correct solution.
3. Consider the problem

       u_t + f(u)_x = 0,   −∞ < x < ∞,  0 < t
       u(0, x) = { 1, x < 0;  0, x > 0 }

   with f(u) = 1.1u⁴ − 2u³ + u². One possible solution is

       u(t, x) = { 1, x < 0.1t;  0, x > 0.1t }.

   Show that this solution satisfies the entropy condition (1.9) but not (1.10).
4. Solve the problem

       u_t + (u³/3)_x = 0,   −∞ < x < ∞,  0 < t
       u(0, x) = { −1, x < 0;  1, x > 0 }

   using the exact formula (1.16).
5. Show that the entropy condition

       E(u)_t + F(u)_x ≤ 0

   for all

       E(u) = { u − c,  u ≥ c;  0,  u < c }
       F(u) = { f(u) − f(c),  u ≥ c;  0,  u < c }

   with c a real constant, implies the entropy condition (1.10). Thus, instead of requiring (1.13) for all convex E(u), the subclass above can be used.
2. Numerical Methods for the Scalar Conservation Law
2.1 Notations
We will describe some numerical methods applied to a one-dimensional problem using a uniform grid. This is for clarity of exposition; the changes required for more space dimensions and curvilinear grids are straightforward.

We consider a discretization of the x-axis,

    x_j = jΔx,   j = ..., −2, −1, 0, 1, 2, ...

The uniform spacing is Δx = x_{j+1} − x_j. We divide time into time levels t_0 = 0, t_1, t_2, .... The time step Δt = t_{n+1} − t_n will be constant.

We here avoid boundaries by considering the problem on the entire domain −∞ < x < ∞. The analysis below could have been done using periodicity instead, as is usually done in the linear case. In practical computations it is, of course, not possible to use an infinite number of grid points. Thus, in order to verify numerically the results below, it is necessary to use a periodic problem.
The following notations will be used:

    u_j^n = the numerical solution at the point (t_n, x_j)
    D_+ u_j = (u_{j+1} − u_j)/Δx,        Δ_+ u_j = u_{j+1} − u_j
    D_− u_j = D_+ u_{j−1},               Δ_− u_j = Δ_+ u_{j−1}
    D_0 u_j = (1/2)(D_+ + D_−) u_j,      Δ_0 u_j = (1/2)(Δ_+ + Δ_−) u_j

The operators D_+ and D_− approximate ∂/∂x to first order accuracy; D_0 gives second order accuracy. We write this as

    D_+ u_j = u_x(x_j) + O(Δx)
    D_0 u_j = u_x(x_j) + O(Δx²)

Thus e = O(Δx^p) denotes a quantity which goes to zero at the same rate as Δx^p when Δx goes to zero, i.e.

    0 < C_1 ≤ lim_{Δx→0} |e| / Δx^p ≤ C_2

with C_1 and C_2 positive constants.
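The stated accuracy orders can be verified by halving Δx and watching the error ratio: roughly 2 for a first order operator, roughly 4 for a second order one. A small sketch (names are ours) using u(x) = sin x:

```python
import numpy as np

def dplus(u, x, dx):
    return (u(x + dx) - u(x)) / dx             # D+ : first order

def dzero(u, x, dx):
    return (u(x + dx) - u(x - dx)) / (2 * dx)  # D0 : second order

exact = np.cos(1.0)                            # u_x(1) for u(x) = sin(x)
e1 = [abs(dplus(np.sin, 1.0, dx) - exact) for dx in (0.1, 0.05)]
e2 = [abs(dzero(np.sin, 1.0, dx) - exact) for dx in (0.1, 0.05)]
r1 = e1[0] / e1[1]   # ~2: halving dx halves the error (first order)
r2 = e2[0] / e2[1]   # ~4: halving dx quarters the error (second order)
```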
2.2 Definitions and General Results

We now consider the method

    u_j^(1) = u_j^n − Δt D_− f(u_j^n)
    u_j^(2) = u_j^(1) − Δt D_+ f(u_j^(1))
    u_j^{n+1} = ( u_j^(2) + u_j^n ) / 2

which approximates the scalar conservation law

    u_t + f(u)_x = 0

to second order accuracy in both time and space. The method is popular in computational aerodynamics, where it is known as MacCormack's scheme. We use this scheme to demonstrate how the numerical solution can misbehave when the solution to the partial differential equation is discontinuous.
Example 2.1 The solution to the problem

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { 1, x < 0;  0, x > 0 }

is a translation of the initial step function with velocity 1/2 (from the Rankine–Hugoniot condition). The solution satisfies the entropy condition. Below, the solution obtained using MacCormack's scheme is displayed. The solid line is the exact solution, the circles are the numerical solution.
[Fig. 2.1. MacCormack.]   [Fig. 2.2. Leapfrog.]
The scheme does not behave well near the shock. The oscillations around the shock are related to the well-known Gibbs phenomenon in Fourier analysis. There is a small amount of numerical viscosity in this scheme, which keeps the oscillations near the shock. With a scheme like leapfrog, which has only dispersive errors and no numerical damping, the oscillations spread out all over the computational domain. This text describes how difference schemes which give a solution without these erroneous oscillations can be designed.
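The oscillatory behaviour is easy to reproduce. Below is a minimal sketch of MacCormack's scheme for Burgers' equation on the step data of Example 2.1; the grid, time step, and boundary treatment (end values simply held fixed) are our own choices, not from the text.

```python
import numpy as np

def maccormack_burgers(u, dt, dx, steps):
    """MacCormack for u_t + (u^2/2)_x = 0: predictor with D-, corrector
    with D+, then averaging.  End values are simply held fixed."""
    lam = dt / dx
    f = lambda v: 0.5 * v**2
    for _ in range(steps):
        u1 = u.copy()
        u1[1:] = u[1:] - lam * (f(u[1:]) - f(u[:-1]))        # predictor
        u2 = u1.copy()
        u2[:-1] = u1[:-1] - lam * (f(u1[1:]) - f(u1[:-1]))   # corrector
        u = 0.5 * (u2 + u)
    return u

x = np.linspace(-1.0, 1.0, 201)
u0 = np.where(x < 0.0, 1.0, 0.0)             # step data of Example 2.1
u = maccormack_burgers(u0.copy(), dt=0.005, dx=x[1] - x[0], steps=80)
# At t = 0.4 the exact shock sits at x = 0.2; the computed solution is a
# smeared jump with oscillations around it, far-field values untouched.
```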
Example 2.2 The problem

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { −1, x < 0;  1, x > 0 }

has the following solution:

    u(t, x) = { −1, x < −t;  x/t, −t ≤ x ≤ t;  1, x > t }.

However, using MacCormack's scheme we instead get the solution

    u(t, x) = { −1, x < 0;  1, x > 0 }

which is also a weak solution to the problem, but which does not satisfy the entropy condition. The result is plotted in Fig. 2.3.
[Fig. 2.3. MacCormack fails to produce the entropy solution.]
The consistency with the conservation law does not guarantee that a scheme picks up the entropy satisfying solution. We will try to find difference schemes where such a guarantee is available.
One usual standard form for difference approximations to the conservation law is the conservative form,

    u_j^{n+1} = u_j^n − λ( h(u_{j−q+1}^n, ..., u_{j+p+1}^n) − h(u_{j−q}^n, ..., u_{j+p}^n) )     (2.1)

The notation λ = Δt/Δx is used. The function h(u_{j−q}^n, ..., u_{j+p}^n) is called the numerical flux function. By Taylor expansion one can show that consistency with the conservation law requires

    h(u, u, ..., u) = f(u).

Here we mean consistency in the sense that a smooth solution inserted into the difference formula gives a truncation error proportional to O(Δx^p + Δt^q), with p > 0, q > 0. The conservative form implies that (if u_j^n → 0 as |j| → ∞)

    Σ_{j=−∞}^{∞} u_j^{n+1} = Σ_{j=−∞}^{∞} u_j^n,
the discrete counterpart of (1.3) holds.
n ), and thus the conservative apWe often write nj;1=2 = ( nj;q nj;q+1
j +p
proximation becomes
n+1
; nj nj+1=2 ; nj;1=2
j
+
=0
It is possible to invent schemes that are consistent with the dierential equation, but not
on conservative form. For such a scheme one can obtain solution were the shocks move
with incorrect speed. Consistency in the usual sense does not take in to account the
discontinuities, and therefore not the Rankine-Hugoniot condition. Loosely speaking,
we can say that the conservative form means consistency with the Rankine-Hugoniot
condition. The following theorem states this more precisely
Theorem 2.1. If nj is computed with a consistent di erence approximation on conservative form and nj ! ( ) as ! 0 in 1loc(R+ R) then ( ) is a weak
solution to the conservation law.
h
h u
u
u
:::u
h
u
h
t
:
x
u
u
u t x
t
x
Proof: We write the scheme (2.1) as

    (u_j^{n+1} − u_j^n)/Δt + (h_{j+1/2}^n − h_{j−1/2}^n)/Δx = 0.

Multiply with a test function φ ∈ C_0^1(R⁺ × R), and sum over n and j:

    Σ_{n=0}^{∞} Σ_{j=−∞}^{∞} ( ((u_j^{n+1} − u_j^n)/Δt) φ_j^n + ((h_{j+1/2}^n − h_{j−1/2}^n)/Δx) φ_j^n ) = 0.

Now do partial summation, using the rule

    Σ_{j=c}^{d} a_j Δ_+ b_j = − Σ_{j=c}^{d} b_{j+1} Δ_+ a_j + a_{d+1} b_{d+1} − a_c b_c.

We get

    Σ_{n=0}^{∞} Σ_{j=−∞}^{∞} ( −u_j^{n+1} (φ_j^{n+1} − φ_j^n)/Δt − h_{j+1/2}^n (φ_{j+1}^n − φ_j^n)/Δx ) − Σ_{j=−∞}^{∞} u_j^0 φ_j^0 / Δt = 0.

All boundary terms except the one at t = 0 disappear, since φ is compactly supported. Multiply by ΔtΔx and use the assumption that u_j^n → u. The sum will converge towards the integral

    ∫_0^∞ ∫_{−∞}^{∞} ( u φ_t + f(u) φ_x ) dx dt + ∫_{−∞}^{∞} u_0 φ(0, x) dx = 0,

where the consistency h(u, u, ..., u) = f(u) is used. Thus, by Definition 1.4, the limit function is a weak solution of the conservation law.

Remark: The theorem is an if-then statement (implication in one direction). It is possible to have a scheme on non-conservative form, and still get a solution with correctly moving shocks. An example is the approximation u_j D_0 u_j, on non-conservative form, of (u²/2)_x; it is in fact conservative in the sense that Σ_{j=−∞}^{∞} u_j D_0 u_j = 0 if u_j → 0 as |j| → ∞.
Example 2.1 showed that there might be problems near the shocks in certain difference methods. We now turn to the problem of characterizing a good numerical solution without oscillations around the shock. The most popular measure of oscillations is the following.
Definition 2.2. A difference method is called total variation decreasing (TVD) if it produces a solution satisfying

    Σ_{j=−∞}^{∞} |Δ_+ u_j^{n+1}|  ≤  Σ_{j=−∞}^{∞} |Δ_+ u_j^n|

for all n ≥ 0.

We will sometimes use the notation TV(u^n) = Σ_{j=−∞}^{∞} |u_{j+1}^n − u_j^n|. Originally the concept was called total variation non-increasing (TVNI), but TVD has become the standard term. We give an example to clarify the meaning of the definition.
Example 2.3 Consider the problem u_t + (u²/2)_x = 0 with initial data

    u_j^0 = 1,  j < 0
    u_j^0 = 0,  j ≥ 0

Approximate with the Lax–Wendroff scheme at some CFL number. After one time step the solution is

    u_j^1 = 1,     j ≤ −1
    u_0^1 = 1.34
    u_1^1 = 0.23
    u_j^1 = 0,     j ≥ 2

thus the scheme produced a small overshoot. The variation at t_0 was 1. The variation at t_1 is ... + 0 + 0.34 + 1.11 + 0.23 + 0 + ... = 1.68. The overshoot shows up as an increase in the total variation. Thus the Lax–Wendroff scheme is not TVD.

It is natural to require TVD, since the solution to the continuous problem u(t, x) satisfies

    d/dt ∫_{−∞}^{∞} |∂u/∂x| dx ≤ 0.
It has turned out that the TVD criterion is sometimes too restrictive. We will later on
in some cases replace it with
Definition 2.3. A difference method is called essentially non-oscillatory (ENO) if it produces a solution satisfying

    Σ_{j=−∞}^{∞} |Δ_+ u_j^{n+1}|  ≤  Σ_{j=−∞}^{∞} |Δ_+ u_j^n| + O(Δx^p)

for all n ≥ 0 and some p ≥ 1.
Example 2.2 shows that the numerical solution can fail to satisfy the entropy condition. The theorem below provides one way to investigate whether a scheme is entropy satisfying or not.

Theorem 2.4. If a difference method produces a solution which also satisfies the discrete entropy condition

    (E(u_j^{n+1}) − E(u_j^n))/Δt + (H(u_{j−q+1}^n, ..., u_{j+p+1}^n) − H(u_{j−q}^n, ..., u_{j+p}^n))/Δx ≤ 0

with H(u_{j−q}, ..., u_{j+p}) a numerical entropy flux consistent with the entropy flux of the differential equation,

    H(u, u, ..., u) = F(u),    F′(u) = E′(u) f′(u),

then, if u_j^n converges, the limit will satisfy

    E(u)_t + F(u)_x ≤ 0.

Proof: Similar to the proof of Theorem 2.1.
There is an important class of schemes satisfying a discrete entropy condition.

Definition 2.5. The difference scheme

    u_j^{n+1} = u_j^n − λ( h_{j+1/2}^n − h_{j−1/2}^n )

is called an E-scheme if

    h_{j+1/2} ≤ f(u)  for all u ∈ [u_j, u_{j+1}],  if u_j < u_{j+1}
    h_{j+1/2} ≥ f(u)  for all u ∈ [u_{j+1}, u_j],  if u_{j+1} < u_j

This definition is made because of the following theorem.

Theorem 2.6. An E-scheme satisfies the semi-discrete entropy condition

    dE(u_j)/dt + D_+ H_{j−1/2} ≤ 0

for all convex E(u). The numerical entropy flux is given by

    H_{j−1/2} = F(u_j) + E′(u_j)( h_{j−1/2} − f(u_j) ).
Proof: Start from the difference method

    du_j/dt = −( h_{j+1/2} − h_{j−1/2} ) / Δx.

Multiply by E′(u_j), where E(u) is a convex function, so that we get the first term in the semi-discrete entropy condition:

    Δx dE(u_j)/dt = −E′(u_j)( h_{j+1/2} − h_{j−1/2} ),

where we also multiplied by Δx. Introduce the entropy flux F′(u) = E′(u) f′(u) by

    Δx dE(u_j)/dt + F(u_{j+1}) − F(u_j) = −E′(u_j)( h_{j+1/2} − h_{j−1/2} ) + F(u_{j+1}) − F(u_j).     (2.2)

We can write

    F(u_{j+1}) − F(u_j) = ∫_{u_j}^{u_{j+1}} F′(u) du = ∫_{u_j}^{u_{j+1}} E′(u) f′(u) du
                        = [E′ f]_{u_j}^{u_{j+1}} − ∫_{u_j}^{u_{j+1}} E″(u) f(u) du.

Using this expression for F(u_{j+1}) − F(u_j) in the right hand side of (2.2) yields

    Δx dE(u_j)/dt + F(u_{j+1}) − F(u_j) =
        −E′(u_j)( h_{j+1/2} − h_{j−1/2} ) + E′(u_{j+1}) f(u_{j+1}) − E′(u_j) f(u_j) − ∫_{u_j}^{u_{j+1}} E″(u) f(u) du.

Add and subtract E′(u_{j+1}) h_{j+1/2} on the right hand side:

    Δx dE(u_j)/dt + F(u_{j+1}) − F(u_j) =
        −E′(u_{j+1})( h_{j+1/2} − f(u_{j+1}) ) + E′(u_j)( h_{j−1/2} − f(u_j) )
        + ( E′(u_{j+1}) − E′(u_j) ) h_{j+1/2} − ∫_{u_j}^{u_{j+1}} E″(u) f(u) du,

which can be written

    Δx dE(u_j)/dt + F(u_{j+1}) − F(u_j) + Δ_+( E′(u_j)( h_{j−1/2} − f(u_j) ) ) =
        ∫_{u_j}^{u_{j+1}} E″(u) du · h_{j+1/2} − ∫_{u_j}^{u_{j+1}} E″(u) f(u) du,

and thus, by defining H_{j−1/2} = F(u_j) + E′(u_j)( h_{j−1/2} − f(u_j) ), the result follows from

    Δx dE(u_j)/dt + Δ_+ H_{j−1/2} = ∫_{u_j}^{u_{j+1}} E″(u)( h_{j+1/2} − f(u) ) du.

The convexity of E(u) means that E″(u) ≥ 0; thus the right hand side is non-positive if h_{j+1/2} satisfies the requirements in the theorem.

Remark: It is possible to prove the entropy condition for the time-discretized approximation (forward Euler) as well. We gave the proof for the semi-discrete approximation because it gives a clear picture of how Definition 2.5 enters into it, and because the proof is considerably simpler than the proof for the fully discrete case.

It is possible to prove that an E-scheme has at most order of accuracy one.

Note that in the definition of an E-scheme, a statement is made about all values of u between u_j and u_{j+1}. The theory in chapter one makes it probable that for non-convex flux functions it is necessary to have information about how the flux function behaves between the grid points. We give an example to clarify this statement.
Example 2.4 The problem

    u_t + (u²/2)_x = 0,   −∞ < x < ∞,  0 < t
    u(0, x) = { 1, x < 0;  −1, x > 0 }

has the solution

    u(t, x) = { 1, x < 0;  −1, x > 0 }.

Assume that a difference method is given which produces the steady solution profile

    u_j^n = 1,      j ≤ −1
    u_0^n = 0.8
    u_1^n = −0.8
    u_j^n = −1,     j ≥ 2

for all n. Make a deformation of the flux function as in Fig. 2.4 below.

[Fig. 2.4. Deformed flux function.]

The steady shock does not satisfy the entropy condition for the deformed flux function (cf. chapter 1). The deformed flux coincides with f(u) = u²/2 for |u| ≥ 0.8, and a scheme which relies only on flux values at the grid points does not have sufficient information to distinguish between the deformed flux and the quadratic one.

As an example we now give two classes of difference methods, monotone schemes and three-point schemes, where the TVD and entropy properties have been worked out.
2.3 Monotone Schemes

The TVD and ENO properties are usually difficult to investigate for a given scheme. We therefore start by analysing the subclass of monotone schemes, which is easy to characterize. We write an explicit difference method in general as

    u_j^{n+1} = G(u_{j−q}^n, ..., u_{j+p+1}^n).                           (2.3)

With this notation we introduce the class of monotone schemes.

Definition 2.7. The scheme (2.3) is monotone if the function G is an increasing function of all its arguments, i.e.

    ∂G(u_{−q}, ..., u_{p+1}) / ∂u_i ≥ 0,    −q ≤ i ≤ p+1.

We let G_i denote the partial derivative of G with respect to its i-th argument. Note that it is implicitly assumed that G is a differentiable function.
Theorem 2.8. Monotone conservative schemes are TVD.
Proof:
X1 junj+1+1 ; unj +1j = X1 jG(unj;q+1 : : : unj+p+2) ; G(unj;q : : : unj+p+1)j =
j =;1
X1 jG(unj;q + j=;1
+ unj;q : : : unj+p+1 + +unj+p+1) ; G(unj;q : : : unj+p+1)j =
j =;1
X1 j pX+1 Z 1 Gk(unj;q + +unj;q : : : unj+p+1 + +unj+p+1)+unj+k dj j =;1 k=;q 0
X1 pX+1 Z 1 Gk(unj;q + +unj;q : : : unj+p+1 + +unj+p+1)j+unj+kj d =
Use the monitonicity and the triangle inequality
j =;1 k=;q 0
Change index from j to m = j + k
X1 Xp+1
k=;q
Zm=;1
1
G (un
0
k m;k;q + + unm;k;q : : : unm;k+p+1 + + unm;k+p+1) dj+ unm j
The result follows from the fact that
Xp+1 Gk(vm;k;q : : : vm;k+p+1) = 1
k=;q
(2:4)
22
since we then get from above
X1 j nj+1+1 ; nj +1j j =;1
X1 Xp+1
k=;q
Zm=;1
1
n
n
k ( nm;k;q + + nm;k;q
m;k+p+1 + + m;k+p+1)
0
X1 Z 1 j+ nmj = X1 j nm+1 ; nmj
u
G
u
u
d
m=;1 0
:::u
u
u
m=;1
u
u
d
j+ nm j =
u
u
It remains to prove (2.4). To make the formulas simpler, we do this only for the three point scheme
$$G(v_{m-1}, v_m, v_{m+1}) = v_m - \lambda\big(h(v_m, v_{m+1}) - h(v_{m-1}, v_m)\big)$$
where h is the numerical flux function and \lambda = \Delta t/\Delta x. If we denote the derivative of h with respect to its first argument h_1, and let h_2 be the derivative with respect to the second argument, we get
$$G_{-1}(v_{m-1}, v_m, v_{m+1}) = \lambda h_1(v_{m-1}, v_m)$$
$$G_0(v_{m-1}, v_m, v_{m+1}) = 1 - \lambda\big(h_1(v_m, v_{m+1}) - h_2(v_{m-1}, v_m)\big)$$
$$G_1(v_{m-1}, v_m, v_{m+1}) = -\lambda h_2(v_m, v_{m+1}).$$
For the three point scheme (2.4) becomes
$$G_{-1}(v_m, v_{m+1}, v_{m+2}) + G_0(v_{m-1}, v_m, v_{m+1}) + G_1(v_{m-2}, v_{m-1}, v_m) =$$
$$\lambda h_1(v_m, v_{m+1}) + 1 - \lambda\big(h_1(v_m, v_{m+1}) - h_2(v_{m-1}, v_m)\big) - \lambda h_2(v_{m-1}, v_m) = 1$$
and the proof is complete. The general case of (2.4) follows similarly by converting the derivatives of G to derivatives of the numerical flux function h.
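As an illustrative sketch (not part of the formal proof), the two facts just used can be checked numerically for one concrete monotone scheme: the Lax-Friedrichs scheme for Burgers' flux, under a CFL condition. The value of λ and the data range below are assumptions chosen so that the scheme is monotone; the partial derivatives G_k are approximated by central differences.

```python
import numpy as np

lam = 0.5                      # lambda = dt/dx (illustrative value)
f = lambda u: 0.5 * u ** 2     # Burgers' flux

def G(vm1, v0, vp1):
    # Lax-Friedrichs written as G(v_{m-1}, v_m, v_{m+1})
    return 0.5 * (vm1 + vp1) - 0.5 * lam * (f(vp1) - f(vm1))

def Gk(k, args, eps=1e-6):
    # central difference approximation of the partial derivative G_k
    e = np.zeros(3)
    e[k + 1] = eps
    a = np.asarray(args, dtype=float)
    return (G(*(a + e)) - G(*(a - e))) / (2 * eps)

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.uniform(-1.0, 1.0, 5)      # v_{m-2},...,v_{m+2}; CFL: lam*|v| <= 0.5
    for k in (-1, 0, 1):               # monotone: all partials nonnegative
        assert Gk(k, v[k + 1:k + 4]) >= -1e-8
    # identity (2.4), with the shifted arguments of the proof:
    s = Gk(-1, v[2:5]) + Gk(0, v[1:4]) + Gk(1, v[0:3])
    assert abs(s - 1.0) < 1e-6
print("monotonicity and (2.4) verified for Lax-Friedrichs")
```

Note that (2.4) involves G_k evaluated at three differently shifted argument triples; summing the partials at a single fixed point would not give 1 in general.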
Theorem 2.9. Except for one trivial case, monotone schemes are at most first order accurate.
To prove this we first state the following theorem, which does not use the monotonicity of the scheme, and thus holds in general for all first order schemes on conservative form.
Theorem 2.10. The truncation error of the method
$$u_j^{n+1} = u_j^n - \lambda\big(h_{j+1/2}^n - h_{j-1/2}^n\big)$$
is
$$\tau_j^n = -\Delta t^2 \big(q(u)\, u_x\big)_x + \Delta t\, O(\Delta t^2 + \Delta x^2)$$
where
$$q(u) = \Big(\frac{1}{\lambda^2}\sum_{k=-q}^{p+1} k^2 G_k(u, \ldots, u) - f'(u)^2\Big)\Big/2$$
and we denote u = u(t_n, x_j).
Proof:
By definition the truncation error \tau_j^n is
$$\tau_j^n = u(t_{n+1}, x_j) - G\big(u(t_n, x_{j-q}), \ldots, u(t_n, x_{j+p+1})\big) =$$
$$u + \Delta t\, u_t + \frac{\Delta t^2}{2} u_{tt} - \Big(G(u, \ldots, u) + \sum_{k=-q}^{p+1} G_k(u, \ldots, u)\big(u(t_n, x_{j+k}) - u\big)$$
$$+ \frac{1}{2}\sum_{k=-q}^{p+1}\sum_{m=-q}^{p+1} G_{km}(u, \ldots, u)\big(u(t_n, x_{j+k}) - u\big)\big(u(t_n, x_{j+m}) - u\big) + O\big(\Delta t(\Delta t^2 + \Delta x^2)\big)\Big)$$
where we use the notation u = u(t_n, x_j). We have here Taylor expanded the difference scheme in u. Next we expand the functions u in x and arrive at (modulo terms of higher than second order)
$$\tau_j^n = u + \Delta t\, u_t + \frac{\Delta t^2}{2} u_{tt} - \Big(u + \Delta x\, u_x \sum_{k=-q}^{p+1} k G_k + \frac{\Delta x^2}{2} u_{xx} \sum_{k=-q}^{p+1} k^2 G_k + \frac{\Delta x^2}{2} (u_x)^2 \sum_{k=-q}^{p+1}\sum_{m=-q}^{p+1} km\, G_{km}\Big)$$
where G(u, \ldots, u) = u by consistency.
We use G to denote G(u, \ldots, u), and similarly for the derivatives of G. Now add and subtract the expression
$$\frac{\Delta x^2}{2}(u_x)^2 \sum_{k=-q}^{p+1}\sum_{m=-q}^{p+1} k^2 G_{km}. \qquad (2.5)$$
The reason for this is that the last term together with (2.5) becomes
$$\frac{\Delta x^2}{2}(u_x)^2 \sum_{k=-q}^{p+1}\sum_{m=-q}^{p+1} (km - k^2)\, G_{km} = 0. \qquad (2.6)$$
We omit the proof that this sum is zero for the moment. If we accept (2.6) as a fact, we get
$$\tau_j^n = \Delta t\, u_t + \frac{\Delta t^2}{2} u_{tt} - \Big(\Delta x\, u_x \sum_{k=-q}^{p+1} k G_k + \frac{\Delta x^2}{2} u_{xx} \sum_{k=-q}^{p+1} k^2 G_k + \frac{\Delta x^2}{2} (u_x)^2 \sum_{k=-q}^{p+1}\sum_{m=-q}^{p+1} k^2 G_{km}\Big).$$
Now use
$$\sum_{k=-q}^{p+1} k\, G_k(u, \ldots, u) = -\lambda f'(u) \qquad (2.7)$$
to eliminate the zero order terms: \Delta t\, u_t - \Delta x\, u_x \sum k G_k = \Delta t\,(u_t + f'(u) u_x) = 0. We omit the proof of (2.7) as well. The first order terms remain:
$$\tau_j^n = \frac{\Delta t^2}{2} u_{tt} - \frac{\Delta x^2}{2}\Big(u_{xx} \sum_{k=-q}^{p+1} k^2 G_k + u_x \big(\sum_{k=-q}^{p+1} k^2 G_k\big)_x\Big) = \frac{\Delta t^2}{2} u_{tt} - \frac{\Delta x^2}{2}\Big(u_x \sum_{k=-q}^{p+1} k^2 G_k\Big)_x.$$
Finally we remove the time derivatives by substituting ((f')^2 u_x)_x for u_{tt}. This can be done because
$$u_{tt} = -f(u)_{xt} = -\big(f'(u) u_t\big)_x = \big(f'(u) f(u)_x\big)_x = \big((f'(u))^2 u_x\big)_x$$
and the truncation error becomes
$$\tau_j^n = \frac{\Delta x^2}{2}\Big(\big(\lambda^2 (f'(u))^2 - \sum_{k=-q}^{p+1} k^2 G_k\big)\, u_x\Big)_x = -\Delta t^2 \big(q(u)\, u_x\big)_x$$
which is what we wanted to prove. It remains to prove (2.6) and (2.7). That can be done by writing out the conservative form of the method and letting the derivatives of G instead become derivatives of the numerical flux function h. It is a straightforward calculation similar to the one in the last part of the proof of theorem 2.8, and we do not give it here.
Finally we give the proof of theorem 2.9. By (2.7)
$$\lambda^2 (f'(u))^2 = \Big(\sum_{k=-q}^{p+1} k\, G_k\Big)^2 = \Big(\sum_{k=-q}^{p+1} \sqrt{G_k}\; k\sqrt{G_k}\Big)^2$$
where the monotonicity is used to split G_k into square roots. The Cauchy-Schwarz inequality gives
$$\lambda^2 (f'(u))^2 \le \Big(\sum_{k=-q}^{p+1} G_k\Big)\Big(\sum_{k=-q}^{p+1} k^2 G_k\Big) = \sum_{k=-q}^{p+1} k^2 G_k$$
since, by writing out the conservative form, it is easy to see that \sum_{k=-q}^{p+1} G_k = 1. It follows that
$$q(u) = \Big(\frac{1}{\lambda^2}\sum_{k=-q}^{p+1} k^2 G_k(u, \ldots, u) - f'(u)^2\Big)\Big/2 \ge 0$$
where q(u) is the function defined in theorem 2.10, and
$$\tau_j^n = -\Delta t^2 \big(q(u)\, u_x\big)_x.$$
From Cauchy-Schwarz, we know that strict inequality (<) holds except if k G_k = \text{const}\cdot G_k, in which case the method is a pure translation. This is the trivial case mentioned in theorem 2.9. We conclude that strict inequality holds, except in this case, and therefore the truncation error does not vanish. The accuracy is one.
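A small experiment consistent with Theorem 2.9 (a sketch, with λ, the grid and the final time chosen as illustrative assumptions): the monotone Lax-Friedrichs scheme applied to the linear advection equation converges at first order, which we observe by halving the mesh width and measuring the L1 error ratio.

```python
import numpy as np

def lf_error(nx, T=0.5, lam=0.8):
    # Lax-Friedrichs for u_t + u_x = 0 on [0,1), periodic; monotone for lam <= 1
    dx = 1.0 / nx
    dt = lam * dx
    x = np.arange(nx) * dx
    u = np.sin(2 * np.pi * x)
    t = 0.0
    while t < T - 1e-12:
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = 0.5 * (up + um) - 0.5 * lam * (up - um)
        t += dt
    # L1 error against the exact translated profile at the reached time t
    return dx * np.sum(np.abs(u - np.sin(2 * np.pi * (x - t))))

e1, e2 = lf_error(100), lf_error(200)
rate = np.log2(e1 / e2)
print(f"L1 errors {e1:.3e} -> {e2:.3e}, observed order {rate:.2f}")
assert 0.5 < rate < 1.5   # first order, as Theorem 2.9 predicts
```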
Theorem 2.11. Monotone schemes satisfy the discrete entropy condition
$$\frac{E(u_j^{n+1}) - E(u_j^n)}{\Delta t} + \frac{H_{j+1/2}^n - H_{j-1/2}^n}{\Delta x} \le 0$$
for the class of entropies E(u) = |u - c|, all c \in \mathbb{R}, and where the numerical entropy flux H_{j+1/2}^n is consistent with the entropy flux
$$F(u) = \operatorname{sign}(u - c)\big(f(u) - f(c)\big).$$
Proof: Introduce the notation a \vee b = \max(a, b) and a \wedge b = \min(a, b). Define the numerical entropy flux as
$$H_{j-1/2}^n = H(u_{j-q}^n, \ldots, u_{j+p}^n) = h(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p}^n) - h(c \wedge u_{j-q}^n, \ldots, c \wedge u_{j+p}^n)$$
where h is the numerical flux of the scheme. With this numerical entropy flux, we obtain
$$E(u_j^n) - \lambda \Delta_+ H_{j-1/2}^n = |u_j^n - c| - \lambda \Delta_+\big(h(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p}^n) - h(c \wedge u_{j-q}^n, \ldots, c \wedge u_{j+p}^n)\big)$$
and since |u - c| = u \vee c - u \wedge c, we arrive at
$$E(u_j^n) - \lambda \Delta_+ H_{j-1/2}^n = G(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p+1}^n) - G(c \wedge u_{j-q}^n, \ldots, c \wedge u_{j+p+1}^n) \qquad (2.8)$$
where we used that G(v_{j-q}, \ldots, v_{j+p+1}) = v_j - \lambda \Delta_+ h(v_{j-q}, \ldots, v_{j+p}) for any grid function v.
From the monotonicity of the method we get
$$u_j^{n+1} = G(u_{j-q}^n, \ldots, u_{j+p+1}^n) \le G(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p+1}^n)$$
$$c = G(c, \ldots, c) \le G(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p+1}^n)$$
and thus
$$u_j^{n+1} \vee c \le G(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p+1}^n).$$
Similarly we see that
$$-(u_j^{n+1} \wedge c) \le -G(c \wedge u_{j-q}^n, \ldots, c \wedge u_{j+p+1}^n).$$
Finally,
$$E(u_j^{n+1}) = |u_j^{n+1} - c| = u_j^{n+1} \vee c - u_j^{n+1} \wedge c \le G(c \vee u_{j-q}^n, \ldots, c \vee u_{j+p+1}^n) - G(c \wedge u_{j-q}^n, \ldots, c \wedge u_{j+p+1}^n)$$
$$= |u_j^n - c| - \lambda \Delta_+ H_{j-1/2}^n$$
where (2.8) was used in the last equality. This is the desired entropy inequality.
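Because the proof is purely algebraic, the inequality can be checked exactly on random data. The sketch below does this for the Lax-Friedrichs scheme on Burgers' equation (λ, the data range, and the sample values of c are illustrative assumptions; the CFL condition λ|f'| ≤ 1 is respected so the scheme is monotone).

```python
import numpy as np

lam = 0.5
f = lambda u: 0.5 * u ** 2
h = lambda ul, ur: 0.5 * (f(ul) + f(ur)) - (ur - ul) / (2 * lam)   # LF flux

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 50)                 # periodic data; lam*|f'(u)| <= 0.5
up, um = np.roll(u, -1), np.roll(u, 1)
unew = u - lam * (h(u, up) - h(um, u))     # one step of the monotone scheme

for c in (-0.5, 0.0, 0.7):
    # numerical entropy flux H_{j+1/2} built from h as in the proof
    Hp = h(np.maximum(u, c), np.maximum(up, c)) - h(np.minimum(u, c), np.minimum(up, c))
    lhs = np.abs(unew - c) - np.abs(u - c) + lam * (Hp - np.roll(Hp, 1))
    assert np.all(lhs <= 1e-10)   # E(u^{n+1}_j) <= E(u^n_j) - lam*(H_{j+1/2}-H_{j-1/2})
print("discrete entropy inequality verified for Lax-Friedrichs")
```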
Remark: The class of entropy functions in the previous theorem is sufficiently large to ensure that the entropy condition (1.10) holds for the limit solution.
2.4 Three point schemes
For three point schemes (h_{j+1/2} = h(u_{j+1}, u_j)), there is a complete characterization of TVD schemes in terms of the numerical viscosity coefficient.
We first state the theorem on which all proofs that a scheme is TVD are based. To apply this theorem, it is necessary to write the difference method differently. The incremental form or I-form of the difference approximation to (1.1) is
$$u_j^{n+1} = u_j^n + C_{j+1/2}\, \Delta_+ u_j^n - D_{j-1/2}\, \Delta_- u_j^n.$$
From the I-form the TVD property can be obtained through
Theorem 2.12. If
$$C_{j+1/2} \ge 0, \qquad D_{j+1/2} \ge 0, \qquad C_{j+1/2} + D_{j+1/2} \le 1$$
then the method is TVD.
Proof: Apply \Delta_+ to both sides of the I-form and sum over j:
$$\sum_{j=-\infty}^{\infty} |\Delta_+ u_j^{n+1}| = \sum_{j=-\infty}^{\infty} \big|\Delta_+ u_j^n + \Delta_+(C_{j+1/2}\, \Delta_+ u_j^n) - \Delta_+(D_{j-1/2}\, \Delta_- u_j^n)\big|.$$
Rearranging terms gives
$$\sum_{j=-\infty}^{\infty} |\Delta_+ u_j^{n+1}| = \sum_{j=-\infty}^{\infty} \big|C_{j+3/2}\, \Delta_+ u_{j+1}^n + (1 - C_{j+1/2} - D_{j+1/2})\, \Delta_+ u_j^n + D_{j-1/2}\, \Delta_+ u_{j-1}^n\big|.$$
Apply the triangle inequality on the right hand side:
$$\sum_{j=-\infty}^{\infty} |\Delta_+ u_j^{n+1}| \le \sum_{j=-\infty}^{\infty} |C_{j+3/2}\, \Delta_+ u_{j+1}^n| + \sum_{j=-\infty}^{\infty} |(1 - C_{j+1/2} - D_{j+1/2})\, \Delta_+ u_j^n| + \sum_{j=-\infty}^{\infty} |D_{j-1/2}\, \Delta_+ u_{j-1}^n|.$$
Now use the assumption that C_{j+1/2}, D_{j+1/2} are nonnegative and that C_{j+1/2} + D_{j+1/2} \le 1:
$$\sum_{j=-\infty}^{\infty} |\Delta_+ u_j^{n+1}| \le \sum_{j=-\infty}^{\infty} C_{j+3/2}\, |\Delta_+ u_{j+1}^n| + \sum_{j=-\infty}^{\infty} (1 - C_{j+1/2} - D_{j+1/2})\, |\Delta_+ u_j^n| + \sum_{j=-\infty}^{\infty} D_{j-1/2}\, |\Delta_+ u_{j-1}^n|.$$
Finally shift the indices in the first and third sums on the right hand side:
$$\sum_{j=-\infty}^{\infty} |\Delta_+ u_j^{n+1}| \le \sum_{j=-\infty}^{\infty} \big(C_{j+1/2} + 1 - C_{j+1/2} - D_{j+1/2} + D_{j+1/2}\big)\, |\Delta_+ u_j^n| = \sum_{j=-\infty}^{\infty} |\Delta_+ u_j^n|$$
and the TVD property is proved.
Remark: The condition C_{j+1/2} + D_{j+1/2} \le 1 corresponds to the CFL condition in the linear case, and is not required for a semi discrete method of lines approximation.
We now introduce the viscosity form or Q-form of the difference approximation to (1.1) as
$$u_j^{n+1} = u_j^n - \Delta t\, D_0 f(u_j^n) + \frac{1}{2} \Delta_+\big(Q_{j-1/2}\, \Delta_- u_j^n\big) \qquad (2.9)$$
where Q is the numerical viscosity coefficient and D_0 is the central difference operator, D_0 w_j = (w_{j+1} - w_{j-1})/(2\Delta x). A three point scheme is uniquely defined through its coefficient of numerical viscosity, as can be seen from the conversion formulas at the end of this section. Thus there is only one degree of freedom in choosing a three point scheme.
If we rewrite (2.9) on conservative form, the numerical flux function becomes
$$h_{j+1/2} = \frac{1}{2}\big(f(u_{j+1}) + f(u_j)\big) - \frac{1}{2\lambda}\, Q_{j+1/2}\, (u_{j+1} - u_j). \qquad (2.10)$$
This is seen by inserting this numerical flux function into the conservative (or C-) form and rearranging terms.
If we apply theorem 2.12 to the Q-form we get the following characterization of three point TVD schemes.
Theorem 2.13. A three point scheme is TVD if and only if the numerical viscosity coefficient satisfies
$$|a_{j+1/2}| \le Q_{j+1/2} \le 1$$
where a_{j+1/2} is the local wave speed
$$a_{j+1/2} = \begin{cases} \lambda\, \dfrac{f(u_{j+1}) - f(u_j)}{u_{j+1} - u_j} & u_j \ne u_{j+1} \\[2mm] \lambda\, f'(u_j) & u_j = u_{j+1}. \end{cases}$$
:
28
Starting from the conservative form, we add and subtract f (uj ), and get
hnj;1=2 ; f (unj ) n
hnj+1=2 ; f (unj ) n
n+1
n
n
(
uj +1 ; uj ) ;
(
uj ; unj;1 ))
uj = uj ; (
n
n
n
n
;u
u
u ;u
Proof.
j +1
j
Thus we can identify
Cj +1=2
Dj ;1=2
j
= ;
j ;1
hnj+1=2 ; f (unj )
= ;
unj+1 ; unj
hnj;1=2 ; f (unj )
unj ; unj;1
Insert the expression (2.10) for hj+1=2 into these formulas
f (unj+1 ) ; f (unj )
1
+
Qj +1=2 =2 = (Qj +1=2 ; aj +1=2 )
Cj +1=2 = ;
n
n
2(uj+1 ; uj )
2
f (unj;1 ) ; f (unj )
1
Dj ;1=2 = ;
+
Qj ;1=2 =2 = (Qj ;1=2 + aj ;1=2 )
n
n
2(uj ; uj;1)
2
The positivity of Cj+1=2 and Dj+1=2 means that
aj+1=2 and
Qj +1=2 ;aj +1=2
Qj +1=2
which is equivalent to the lower limit in the theorem
Qj +1=2
jaj+1=2 j
The condition Cj+1=2 + Dj+1=2 1 becomes
Qj +1=2
1
and the theorem follows.
The quantity a_{j+1/2} is important and will be used throughout this text. The second order Lax-Wendroff scheme has Q_{j+1/2} = a_{j+1/2}^2, which lies below the TVD limit |a_{j+1/2}| whenever 0 < |a_{j+1/2}| < 1, and is thus clearly outside the TVD region. Thus we get the following result.
Corollary 2.14. Three point TVD schemes are at most first order accurate.
The situation can be viewed in fig. 2.5.
This result and the corresponding result for monotone schemes might seem depressing. First order schemes are not accurate enough to be of use in practice. However, higher order methods are developed using first order methods as building blocks. This is the motivation for the study of first order methods.
Fig. 2.5. TVD domain for numerical viscosity: the region |a| \le Q \le 1 in the (a, Q) plane.
For reference we conclude this section by listing the three different standard forms used to write an approximation to (1.1), and formulas for converting between them: the conservative form (C-form), the incremental form (I-form) and the viscosity form (Q-form).
Q-form to C-form:
$$h_{j+1/2} = \frac{1}{2}\big(f(u_j) + f(u_{j+1})\big) - \frac{1}{2\lambda}\, Q_{j+1/2}\, \Delta_+ u_j$$
C-form to Q-form:
$$Q_{j+1/2} = \lambda\, \frac{f(u_j) + f(u_{j+1}) - 2 h_{j+1/2}}{\Delta_+ u_j}$$
Q-form to I-form:
$$C_{j+1/2} = \frac{1}{2}\big(Q_{j+1/2} - a_{j+1/2}\big), \qquad D_{j+1/2} = \frac{1}{2}\big(Q_{j+1/2} + a_{j+1/2}\big)$$
I-form to Q-form:
$$Q_{j+1/2} = C_{j+1/2} + D_{j+1/2}$$
C-form to I-form:
$$C_{j+1/2} = -\lambda\, \frac{h_{j+1/2}^n - f(u_j^n)}{u_{j+1}^n - u_j^n}, \qquad D_{j+1/2} = -\lambda\, \frac{h_{j+1/2}^n - f(u_{j+1}^n)}{u_{j+1}^n - u_j^n}$$
I-form to C-form:
$$h_{j+1/2} = f(u_j) - \frac{1}{\lambda}\, C_{j+1/2}\, \Delta_+ u_j = f(u_{j+1}) + \frac{1}{\lambda}\, D_{j+1/2}\, \Delta_+ u_j$$
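As a sketch of how the conversion formulas fit together, the snippet below builds a viscosity coefficient Q for Burgers' flux, converts Q-form to C-form and then to I-form, and checks that the I-form coefficients close the loop back to Q. The particular λ, data and choice of Q are illustrative assumptions.

```python
import numpy as np

lam = 0.4
f = lambda u: 0.5 * u ** 2

u = np.array([0.0, 0.3, -0.2, 0.8, 0.5])   # all neighboring values distinct
uj, ujp = u[:-1], u[1:]
a = lam * (f(ujp) - f(uj)) / (ujp - uj)    # local wave speed a_{j+1/2}

Q = np.abs(a) + 0.1                        # some admissible viscosity coefficient
# Q-form to C-form
h = 0.5 * (f(uj) + f(ujp)) - Q * (ujp - uj) / (2 * lam)
# C-form to I-form
C = -lam * (h - f(uj)) / (ujp - uj)
D = -lam * (h - f(ujp)) / (ujp - uj)
# Q-form to I-form, and I-form to Q-form, close the loop
assert np.allclose(C, 0.5 * (Q - a))
assert np.allclose(D, 0.5 * (Q + a))
assert np.allclose(C + D, Q)
# C-form to Q-form
assert np.allclose(lam * (f(uj) + f(ujp) - 2 * h) / (ujp - uj), Q)
print("conversion formulas consistent")
```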
2.5 Some Schemes
Here we give some examples of three point approximations, which can be analyzed using the theorems in section 2.4. The schemes are important in their own right; some of them will come up later in versions of higher order accuracy and in extensions to nonlinear systems of conservation laws.
Example 2.5. The upwind scheme. This scheme is the lower TVD limit in theorem 2.13, i.e.
$$Q_{j+1/2} = |a_{j+1/2}|.$$
Writing out the conservative form, the scheme becomes
$$h_{j+1/2} = \frac{1}{2}\big(f(u_{j+1}) + f(u_j)\big) - \frac{1}{2\lambda}\, |a_{j+1/2}|\, (u_{j+1} - u_j) = \begin{cases} f(u_j) & a_{j+1/2} > 0 \\ f(u_{j+1}) & a_{j+1/2} < 0 \end{cases}$$
and we can see the reason why this is called the upwind scheme. The scheme takes the flux value from the direction of the characteristics. For the linear equation u_t + a u_x = 0 the wave speed a_{j+1/2} = \lambda a is constant and the scheme becomes
$$u_j^{n+1} = u_j^n - a\,\Delta t\, D_- u_j^n \quad (a > 0), \qquad u_j^{n+1} = u_j^n - a\,\Delta t\, D_+ u_j^n \quad (a < 0)$$
or, with a^+ = \max(0, a) and a^- = \min(0, a),
$$u_j^{n+1} = u_j^n - \Delta t\,\big(a^+ D_- u_j^n + a^- D_+ u_j^n\big).$$
By considering the example
$$u_t + (u^2/2)_x = 0, \qquad u(0, x) = \begin{cases} -1 & x < 0 \\ 1 & x > 0 \end{cases}$$
it is easy to see that the upwind scheme does not satisfy the entropy condition. The scheme does not contain enough viscosity to break the expansion shock into an expansion wave. The scheme is attractive because it has the least possible viscosity that still suppresses oscillations.
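The entropy failure is easy to reproduce numerically. In the sketch below (grid, λ and step count are illustrative assumptions) the upwind scheme applied to exactly this initial data produces zero flux differences everywhere, so the entropy-violating expansion shock remains frozen instead of opening into a rarefaction wave.

```python
import numpy as np

lam = 0.5                              # dt/dx
f = lambda u: 0.5 * u ** 2             # Burgers' flux, f'(u) = u

def upwind_flux(ul, ur):
    # the sign of the local wave speed decides which side the flux is taken from
    denom = np.where(ul == ur, 1.0, ur - ul)
    a = np.where(ul == ur, ul, (f(ur) - f(ul)) / denom)   # ul = f'(ul) when equal
    return np.where(a >= 0, f(ul), f(ur))

x = np.linspace(-1, 1, 41)
u0 = np.where(x < 0, -1.0, 1.0)        # expansion shock initial data
u = u0.copy()
for _ in range(20):
    hh = upwind_flux(u[:-1], u[1:])
    u[1:-1] -= lam * (hh[1:] - hh[:-1])
assert np.array_equal(u, u0)           # the entropy violating shock never moves
print("upwind scheme keeps the expansion shock stationary")
```

Here f(u) = 1/2 on both sides of the jump and the wave speed at the jump is zero, so every interface flux equals 1/2 and the update vanishes identically.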
Example 2.6. The Lax-Friedrichs scheme. At the other end of the TVD interval in theorem 2.13 we find the Lax-Friedrichs scheme, which has the viscosity
$$Q_{j+1/2} = 1$$
and numerical flux function
$$h_{j+1/2} = \frac{1}{2}\big(f(u_{j+1}) + f(u_j)\big) - \frac{1}{2\lambda}\,(u_{j+1} - u_j).$$
This scheme is extremely diffusive, and smears shocks enormously. The advantage of the scheme is its simplicity, and the fact that the numerical flux function is infinitely differentiable with respect to its arguments. This is of importance for steady state computations when, in Newton type methods, the Jacobian of the scheme is required. It is also a requirement when the formal order of accuracy is derived.
Example 2.7. The Godunov scheme. This scheme has the viscosity coefficient
$$Q_{j+1/2} = \max_{(u - u_j)(u - u_{j+1}) \le 0} \lambda\, \frac{f(u_{j+1}) + f(u_j) - 2 f(u)}{u_{j+1} - u_j}. \qquad (2.11)$$
The scheme was originally derived for the Euler equations in gas dynamics, where it was constructed by solving a Riemann problem locally between each two grid points. This derivation will be given later. Here we can instead explain the Godunov scheme as the lower limit in the definition of the E schemes,
$$h_{j+1/2} = \begin{cases} \min_{u_j \le u \le u_{j+1}} f(u) & \text{if } u_j < u_{j+1} \\ \max_{u_{j+1} \le u \le u_j} f(u) & \text{if } u_j > u_{j+1}. \end{cases}$$
From this definition it is straightforward to derive the expression (2.11) for the viscosity coefficient. The Godunov scheme is the E scheme with smallest coefficient of viscosity. It is also a TVD scheme.
Example 2.8. The Engquist-Osher scheme. The E-O scheme was designed with the intent of improving the upwind scheme with respect to entropy and convergence to steady state. The viscosity coefficient takes into account all values between u_j and u_{j+1} by integrating over this interval,
$$Q_{j+1/2} = \frac{\lambda}{u_{j+1} - u_j} \int_{u_j}^{u_{j+1}} |f'(u)|\, du.$$
If f' does not change sign between u_j and u_{j+1}, then we see that the viscosity is equal to the viscosity of the upwind scheme. The advantages of this method are that it is an E scheme and that the numerical flux is a C^1 function of its arguments, making it suitable for steady state computations. The scheme is TVD.
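A sketch comparing the viscosity coefficients of the schemes discussed so far, for Burgers' flux and one rarefaction pair and one shock pair of states (λ and the state values are illustrative assumptions; maxima and integrals are approximated on a fine grid rather than evaluated in closed form):

```python
import numpy as np

lam = 0.4
f = lambda u: 0.5 * u ** 2            # Burgers' flux, sonic point at u = 0

def Q_upwind(ul, ur):
    return abs(lam * (f(ur) - f(ul)) / (ur - ul))

def Q_godunov(ul, ur):                # expression (2.11), maximized on a grid
    s = np.linspace(ul, ur, 2001)
    return np.max(lam * (f(ul) + f(ur) - 2 * f(s)) / (ur - ul))

def Q_eo(ul, ur):                     # lam/(ur-ul) times the integral of |f'(u)|
    s = np.linspace(ul, ur, 2001)
    mid, ds = 0.5 * (s[:-1] + s[1:]), s[1] - s[0]
    return lam * np.sum(np.abs(mid)) * ds / (ur - ul)

for ul, ur in [(-0.5, 1.0), (1.0, -0.5)]:   # rarefaction pair, shock pair
    qu, qg, qe = Q_upwind(ul, ur), Q_godunov(ul, ur), Q_eo(ul, ur)
    assert qu <= qg + 1e-6            # upwind is the lower TVD limit
    assert qg <= qe + 1e-6            # Godunov is the least viscous E scheme
    assert qe <= 1.0 + 1e-6           # Lax-Friedrichs (Q = 1) is the upper limit
    print(f"u = {ul},{ur}: Q_up = {qu:.3f}, Q_god = {qg:.3f}, Q_eo = {qe:.3f}")
```

For the shock pair, Godunov and upwind agree while Engquist-Osher carries extra viscosity from integrating across the sonic point.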
Example 2.9. The Lax-Wendroff scheme. The only choice of viscosity that gives a second order accurate approximation (both in space and time) is the Lax-Wendroff scheme,
$$Q_{j+1/2} = a_{j+1/2}^2.$$
The scheme is not TVD, but it is important because of its optimality (it is the only three point second order scheme). Later, when we discuss second order TVD schemes, the Lax-Wendroff scheme will play an important role.
The Godunov, E-O and upwind schemes coincide if the flux function derivative f'(u) does not change sign between u_j and u_{j+1}. The sonic points are the values u for which f'(u) = 0. It is usually around the sonic points that the entropy condition is hard to satisfy. A number of fixes have been suggested for the upwind scheme to satisfy the entropy condition. One which is commonly used in computational fluid dynamics (CFD) is the choice
$$Q_{j+1/2} = \begin{cases} |a_{j+1/2}| & \text{if } |a_{j+1/2}| \ge 2\delta \\ a_{j+1/2}^2/(4\delta) + \delta & \text{if } |a_{j+1/2}| < 2\delta. \end{cases}$$
The viscosity is prevented from going to zero as a_{j+1/2} \to 0, and the viscosity becomes a C^1 function of u_j, u_{j+1}. The disadvantage is that we now have a parameter \delta to tune.
As a summary, the schemes are plotted as a function of increasing viscosity in fig. 2.6.
Fig. 2.6. Properties of methods as a function of the viscosity Q: in order of increasing viscosity, D_0 (central), Lax-Wendroff, upwind, Godunov, Engquist-Osher, Lax-Friedrichs, with the entropy, TVD and CFL-stable ranges indicated.
2.6. Two space dimensions
In two space dimensions we approximate the conservation law
$$u_t + f_1(u)_x + f_2(u)_y = 0$$
on some domain, by the explicit difference method
$$u_{ij}^{n+1} = u_{ij}^n - \lambda_x\, \Delta_+^x h_{i-1/2,j}^n - \lambda_y\, \Delta_+^y g_{i,j-1/2}^n$$
where \lambda_x = \Delta t/\Delta x and \lambda_y = \Delta t/\Delta y. Here h_{i-1/2,j}^n = h(u_{i-q,j}^n, \ldots, u_{i+p,j}^n) is a flux consistent with the flux f_1, and similarly for g_{i,j-1/2}^n. We can choose h_{i-1/2,j}^n as a one dimensional flux formula described in this chapter. Note however that with this straightforward generalization the Lax-Wendroff scheme will not maintain second order accuracy in two dimensions. In the case of Lax-Wendroff it is better to use operator splitting; then second order accuracy can be kept.
For two space dimensions, it is possible to prove that TVD, in the sense that
$$\sum_{ij} \Delta y\, |u_{i+1,j}^n - u_{ij}^n| + \Delta x\, |u_{i,j+1}^n - u_{ij}^n|$$
is decreasing, implies overall first order accuracy. First order is too restrictive. Instead we write the scheme as
$$u_{ij}^{n+1} = u_{ij}^n + A_{i+1/2,j}\, \Delta_+^x u_{ij}^n - B_{i-1/2,j}\, \Delta_-^x u_{ij}^n + C_{i,j+1/2}\, \Delta_+^y u_{ij}^n - D_{i,j-1/2}\, \Delta_-^y u_{ij}^n$$
and take
$$A_{i+1/2,j} \ge 0, \quad B_{i+1/2,j} \ge 0, \quad C_{i,j+1/2} \ge 0, \quad D_{i,j+1/2} \ge 0$$
$$A_{i+1/2,j} + B_{i+1/2,j} + C_{i,j+1/2} + D_{i,j+1/2} \le 1 \qquad (2.12)$$
as a criterion for a scheme with good properties with respect to shocks. Unlike the one dimensional case, (2.12) does not imply TVD, and thus allows for second order accurate schemes in two space dimensions.
Exercises
1. Show that the Engquist-Osher scheme, the Godunov scheme and the upwind scheme coincide when applied to the linear problem
$$u_t + a u_x = 0.$$
2. Determine the smallest constant d that makes the Lax-Wendroff scheme with added viscosity
$$u_j^{n+1} = u_j^n - \lambda\, \Delta_+ h_{j-1/2} + d\, \Delta_+ \Delta_- u_j^n$$
$$h_{j+1/2} = \frac{1}{2}\big(f(u_j^n) + f(u_{j+1}^n)\big) - \frac{1}{2\lambda}\,(a_{j+1/2})^2\, \Delta_+ u_j^n$$
TVD. Determine the CFL stability condition for the resulting TVD scheme. Does the scheme satisfy an entropy condition?
3. Assume the initial data
$$u_j^0 = \begin{cases} u_L & j < 0 \\ u_R & j \ge 0 \end{cases}$$
are given to the general three point scheme
$$u_j^{n+1} = u_j^n - \lambda\big(h_{j+1/2}^n - h_{j-1/2}^n\big), \qquad h_{j+1/2}^n = \frac{1}{2}\big(f(u_j^n) + f(u_{j+1}^n)\big) - \frac{1}{2\lambda}\, Q\,(u_{j+1}^n - u_j^n).$$
Determine conditions on Q such that
$$TV(u^1) \le TV(u^0).$$
Compare with theorem 2.13.
4. Show that the method
$$u_j^{n+1} = u_j^n - \begin{cases} \Delta t\, D_- f(u_j^n) & \text{if } f'(u_j^n) > 0 \\ \Delta t\, D_+ f(u_j^n) & \text{if } f'(u_j^n) < 0 \end{cases}$$
is not conservative.
4. Higher Order of Accuracy
4.1 Point values and cell averages
In this section we will not strictly enforce the TVD constraint. As we have seen, TVD leads to restrictions on the accuracy. It is, however, necessary to use some of the ideas of the previous sections to keep the increase in variation "as small as possible". A pure centered difference approach is not sufficient, as can be seen from the following experiment. We solve u_t + (u^2/2)_x = 0 with a step function as initial data. The scheme
$$\frac{du_j}{dt} = -D_0\Big(I - \frac{\Delta x^2}{6} D_+ D_-\Big) f(u_j) - d\, \Delta x^3 (D_+ D_-)^2 u_j$$
is used, discretized in time using a fourth order Runge-Kutta method. The spatial discretization is a fourth order accurate centered difference together with a fourth order artificial dissipation term. There is a viscosity parameter d left to tune the method for shocks. The result in fig. 4.1 shows the solution after shocks have formed, for some different values of the viscosity d. In the first picture the viscosity parameter is too small to give substantial damping of oscillations. In the second picture, d = 0.2, which was the best value according to subjective judgement by looking at the results. In the last picture d = 0.235, which turned out to be the largest possible viscosity due to the CFL constraint. The dissipation operator was not taken into account in the CFL condition.
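A quick sketch confirming the claimed spatial order of the centered operator (the test function and resolutions are assumptions): applying D_0(I - Δx²/6 D_+D_-) to f(u) on a periodic grid and halving Δx should reduce the error by a factor of about 16.

```python
import numpy as np

def dfdx4(fu, dx):
    # D0 (I - dx^2/6 D+D-) applied to the grid function f(u), periodic grid
    d0 = lambda w: (np.roll(w, -1) - np.roll(w, 1)) / (2 * dx)
    dpdm = (np.roll(fu, -1) - 2 * fu + np.roll(fu, 1)) / dx ** 2
    return d0(fu - dx ** 2 / 6 * dpdm)

def err(n):
    x = 2 * np.pi * np.arange(n) / n
    dx = 2 * np.pi / n
    u = np.sin(x)
    # for Burgers' flux f(u) = u^2/2, the exact derivative is u*u_x
    return np.max(np.abs(dfdx4(0.5 * u ** 2, dx) - u * np.cos(x)))

rate = np.log2(err(64) / err(128))
print(f"observed order of accuracy: {rate:.2f}")
assert 3.5 < rate < 4.5
```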
Fig. 4.1. Fourth order solution of Burger's equation, for d = 0.01, d = 0.200 and d = 0.235.
Note that the results are not particularly good, not even after tuning the viscosity. The higher order methods described in this chapter will give good results for this problem. We must however issue a warning that the theory for higher order non oscillatory schemes is not well developed.
So far we have not made any distinction between cell averages and point values. Consider the grid
$$x_j, \qquad j = \ldots, -1, 0, 1, \ldots$$
with \Delta x = x_j - x_{j-1} = \text{constant}. The numerical approximation u_j^n at (t_n, x_j) can be thought of as an approximation to the point value u(t_n, x_j). Alternatively, we introduce the cells c_j as
$$c_j = \{x \mid x_{j-1/2} \le x \le x_{j+1/2}\}$$
where x_{j+1/2} = (x_j + x_{j+1})/2, and view u_j^n as an approximation to the cell average
$$\frac{1}{\Delta x} \int_{x_{j-1/2}}^{x_{j+1/2}} u(t_n, x)\, dx.$$
The situation is depicted in fig. 4.2.
Fig. 4.2. Grid cells and grid points.
The distinction between these two views is not important for methods with accuracy \le 2, since
$$u(t_n, x_j) = \frac{1}{\Delta x} \int_{x_{j-1/2}}^{x_{j+1/2}} u(t_n, x)\, dx + O(\Delta x^2).$$
In this section, however, we will treat orders of accuracy higher than two. We first analyze semi discrete methods, and save the time discretization until the last section. The cell average based higher order schemes are the generalization of the inner schemes described in the previous chapter. These schemes start from the following exact formula for the cell average. Integrate
$$u_t + f(u)_x = 0$$
with respect to x over one cell at time t. The result is
$$\frac{d}{dt}\, \frac{1}{\Delta x} \int_{x_{j-1/2}}^{x_{j+1/2}} u(t, x)\, dx + \frac{f(u(t, x_{j+1/2})) - f(u(t, x_{j-1/2}))}{\Delta x} = 0. \qquad (4.1)$$
Compare this with the numerical approximation
$$\frac{du_j}{dt} + \frac{h_{j+1/2} - h_{j-1/2}}{\Delta x} = 0. \qquad (4.2)$$
If the numerical flux approximates the flux of the exact solution at the cell interface,
$$h_{j+1/2} = f(u(t, x_{j+1/2})) + O(\Delta x^p),$$
then (4.2) is a p-th order approximation to the PDE in terms of its cell averages.
One usual way to find higher order approximations is to make a piecewise polynomial approximation L(x) of u(t_n, x) from the given cell averages u_j^n. Inside each cell u(t_n, x) is approximated by a polynomial, and at the cell interfaces x_{j+1/2} there may be jumps. From this piecewise polynomial the numerical flux is obtained as h(u_{j+1/2}^R, u_{j+1/2}^L), where h(u_{j+1}, u_j) is the numerical flux of a first order TVD method, and the end values are
$$u_{j+1/2}^R = \lim_{x \to x_{j+1/2}+} L(x), \qquad u_{j+1/2}^L = \lim_{x \to x_{j+1/2}-} L(x).$$
Fig. 4.3 below shows a piecewise parabolic approximation.
Fig. 4.3. Piecewise parabolic reconstruction, with the interface values u_{j+1/2}^R and u_{j+1/2}^L marked.
The point value based higher order methods start from the observation that if
$$f(u(x_j)) = \frac{1}{\Delta x} \int_{x_{j-1/2}}^{x_{j+1/2}} F(x)\, dx$$
for some function F(x), then
$$f(u(x_j))_x = \frac{F(x_{j+1/2}) - F(x_{j-1/2})}{\Delta x}$$
and thus if the numerical flux satisfies
$$h_{j+1/2} = F(x_{j+1/2}) + O(\Delta x^p)$$
the scheme (4.2) is p-th order accurate in terms of point values. The function F(x) can be obtained by interpolation of the grid function
$$G_{j+1/2} = \int_a^{x_{j+1/2}} F(x)\, dx = \sum_{k=a}^{j} f(u_k)\, \Delta x$$
and then taking the derivative of the interpolation polynomial, F(x) = dG(x)/dx. The point based algorithm is much easier to generalize to more than one space dimension.
In section 4.2 we show some different ways to do the piecewise polynomial reconstruction. When the time discretization is made, extra care has to be taken to get the same high order of accuracy as for the spatial approximation. This is the topic of section 4.4.
4.2 Inner interpolation gives a cell average scheme
There are three ingredients in an inner high order scheme:
1. A first order numerical flux.
2. A piecewise polynomial interpolation to find u_{j+1/2}^L and u_{j+1/2}^R.
3. A time discretization.
The topic of this section is 2., polynomial interpolation. We consider the problem of finding the values of the solution at the cell interfaces, u_{j+1/2}^R, u_{j+1/2}^L, j = \ldots, -1, 0, 1, \ldots, from given cell averages. This is done by piecewise polynomial interpolation, in such a way that the variation of the interpolant is as small as possible. Strictly speaking, this is not an interpolation problem, since the function is given as cell averages, while an interpolation problem requires function values at certain points. The term reconstruction is therefore used to denote the process of finding an approximation to a function whose cell averages are given.
One method in this class is the so called piecewise parabolic method (PPM). It contains a reconstruction step using parabolic polynomials. The reconstruction algorithm contains limiters to ensure monotonicity. We here give the algorithm without details, just to give the reader an understanding of the complexity of the PPM reconstruction step.
(a) Define the primitive function V_{j+1/2} = \sum_{k=a}^{j} u_k \Delta x.
(b) Interpolate V_{j+1/2} using a piecewise quartic polynomial.
(c) Define u_{j+1/2}^L = u_{j+1/2}^R = dV(x_{j+1/2})/dx.
(d) Modify the left and right values obtained in (c), so that they both are between u_j and u_{j+1}.
(e) If the parabola in cell j (the parabola through u_{j-1/2}^R, u_{j+1/2}^L and satisfying \int_{c_j} u\, dx = \Delta x\, u_j) has an extreme point inside the cell, modify it so that it becomes monotone inside the cell.
(f) If a cell is inside a discontinuity, replace the parabola with a line, which gives a steeper shock representation than the original parabola. As discontinuity detector the following conditions are used:
if \Delta_+\Delta_- u_{j-1}\, \Delta_+\Delta_- u_{j+1} < 0, and |\Delta_+\Delta_-\Delta_+ u_j| \ge M_1, and \Delta_+\Delta_-\Delta_+ u_j\, \Delta_+ u_j < 0, and |D_0 u_j| \ge M_2,
then there is a discontinuity in cell j.
We next describe another method for obtaining high order interpolation of discontinuous functions. The essentially non oscillatory (ENO) interpolation is a systematic way to incrementally increase the accuracy to any order by adding points to the interpolation polynomial from the left or from the right, depending on in which direction the function is least oscillatory.
The process is described using Newton's form of the interpolation polynomial. Assume that the function g(x) is known at the points x_j, j = \ldots, -1, 0, 1, \ldots. Define the divided differences [x_i, \ldots, x_{i+r}]g recursively by
$$[x_i]g = g(x_i), \qquad [x_i, \ldots, x_{i+r}]g = \frac{[x_{i+1}, \ldots, x_{i+r}]g - [x_i, \ldots, x_{i+r-1}]g}{x_{i+r} - x_i}.$$
Newton's polynomial interpolating g at the points x_1, \ldots, x_n is then given by
$$P^n(x) = \sum_{i=1}^{n} (x - x_1)(x - x_2) \cdots (x - x_{i-1})\, [x_1, \ldots, x_i]g$$
where the empty product (x - x_i) \cdots (x - x_j) = 1 if i > j. This form is convenient, since if we want to add another point to the interpolation problem, we can immediately update the interpolation polynomial using the formula
$$P^{n+1}(x) = P^n(x) + (x - x_1) \cdots (x - x_n)\, [x_1, \ldots, x_{n+1}]g.$$
Proofs of the above statements and descriptions of various interpolation procedures can be found in any textbook on approximation theory.
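A minimal sketch of the Newton form (nodes and test function chosen arbitrarily): build the divided difference table by the recursion above and evaluate P^n. Since divided differences of order n annihilate polynomials of degree below n, four nodes reproduce a cubic exactly.

```python
def divided_differences(xs, gs):
    # table[r][i] = [x_i, ..., x_{i+r}]g, built by the recursion above
    table = [list(gs)]
    for r in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + r] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

def newton_eval(xs, gs, x):
    # P^n(x) = sum_i (x-x_1)...(x-x_{i-1}) [x_1,...,x_i]g
    table = divided_differences(xs, gs)
    total, prod = 0.0, 1.0
    for i in range(len(xs)):
        total += prod * table[i][0]     # leading divided difference [x_1,...,x_i]g
        prod *= (x - xs[i])
    return total

xs, g = [0.0, 1.0, 2.5, 4.0], lambda x: x ** 3 - 2 * x
gs = [g(x) for x in xs]
for xi, gi in zip(xs, gs):              # interpolation at the nodes
    assert abs(newton_eval(xs, gs, xi) - gi) < 1e-12
assert abs(newton_eval(xs, gs, 3.3) - g(3.3)) < 1e-9   # cubic reproduced exactly
print("Newton form verified")
```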
We now give an algorithm for constructing a piecewise polynomial interpolant L(x) of degree N from the given grid function u_j, with
$$L(x_j) = u_j$$
and which introduces as little oscillation as possible. Start by defining the linear polynomial
$$L^1(x) = u_j + (x - x_j)(u_{j+1} - u_j)/\Delta x, \qquad x_j \le x < x_{j+1}$$
and the indices
$$k_{\min}^1 = j, \qquad k_{\max}^1 = j + 1$$
to bookkeep the stencil width. The interpolation proceeds recursively as follows. Define the divided differences
$$a_p = [x_{k_{\min}^{p-1}}, \ldots, x_{k_{\max}^{p-1}+1}]u, \qquad b_p = [x_{k_{\min}^{p-1}-1}, \ldots, x_{k_{\max}^{p-1}}]u$$
where thus we add one point to the right for a_p and one point to the left for b_p. Next use the smallest difference to update the polynomial:
if |a_p| < |b_p| then
$$L^p(x) = L^{p-1}(x) + a_p \prod_{k=k_{\min}^{p-1}}^{k_{\max}^{p-1}} (x - x_k), \qquad k_{\max}^p = k_{\max}^{p-1} + 1, \quad k_{\min}^p = k_{\min}^{p-1}$$
else
$$L^p(x) = L^{p-1}(x) + b_p \prod_{k=k_{\min}^{p-1}}^{k_{\max}^{p-1}} (x - x_k), \qquad k_{\max}^p = k_{\max}^{p-1}, \quad k_{\min}^p = k_{\min}^{p-1} - 1.$$
Thus L^p(x) is a degree p polynomial which interpolates u(x) and which is constructed from the smallest possible divided differences.
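The recursion above can be sketched directly in code (grid, test function and the target order below are illustrative assumptions; the divided differences are computed by the recursive definition rather than a table, which is adequate for small orders):

```python
import math

def eno_interpolant(x_nodes, u_nodes, j, order):
    # Grow the stencil from [x_j, x_{j+1}] one node at a time toward the
    # side with the smaller new divided difference.
    def dd(i, r):                       # [x_i, ..., x_{i+r}]u
        if r == 0:
            return u_nodes[i]
        return (dd(i + 1, r - 1) - dd(i, r - 1)) / (x_nodes[i + r] - x_nodes[i])

    kmin, kmax = j, j + 1
    # Newton terms: (coefficient, list of roots of the attached product)
    coeffs = [(u_nodes[j], []), (dd(j, 1), [x_nodes[j]])]
    for p in range(2, order + 1):
        a = dd(kmin, kmax + 1 - kmin)           # add one point to the right
        b = dd(kmin - 1, kmax - (kmin - 1))     # add one point to the left
        roots = [x_nodes[k] for k in range(kmin, kmax + 1)]
        if abs(a) < abs(b):
            coeffs.append((a, roots)); kmax += 1
        else:
            coeffs.append((b, roots)); kmin -= 1

    def L(x):
        total = 0.0
        for c, rts in coeffs:
            term = c
            for r in rts:
                term *= (x - r)
            total += term
        return total
    return L

xs = [0.1 * i for i in range(12)]
us = [math.sin(x) for x in xs]
L = eno_interpolant(xs, us, 5, 3)       # cubic interpolant built from cell [x_5, x_6]
assert abs(L(xs[5]) - us[5]) < 1e-12    # always interpolates x_j and x_{j+1}
assert abs(L(xs[6]) - us[6]) < 1e-12
assert abs(L(0.55) - math.sin(0.55)) < 1e-4
print("ENO interpolant ok")
```

Because a new Newton term vanishes at all previously used nodes, the interpolation property is preserved no matter which side the stencil grows toward.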
We next show how this interpolation algorithm can be used to solve the reconstruction problem. There are two ways to do this:
1. Reconstruction by primitive function (RP).
2. Reconstruction by deconvolution (RD).
In the first method (RP), we observe that the primitive function
$$U(x_{j+1/2}) = \int_{-\infty}^{x_{j+1/2}} u(t_n, x)\, dx = \sum_{k=-\infty}^{j} u_k^n\, \Delta x$$
is known at the points x_{j+1/2}. The function U(x) is interpolated using the ENO interpolation algorithm above. The interpolation polynomial L(x) is differentiated to get the approximation to u(t_n, x). Thus the left and right values required in the numerical flux are
$$u_{j+1/2}^L = \frac{dL(x_{j+1/2}-)}{dx}, \qquad u_{j+1/2}^R = \frac{dL(x_{j+1/2}+)}{dx}.$$
L(x) is continuous, but the derivatives may have different values from the left and from the right at the break points x_{j+1/2}. In this way the reconstructed function becomes piecewise continuous.
To describe the second method (RD), we first note that
$$\bar{u}(x) = \frac{1}{\Delta x} \int_{x-\Delta x/2}^{x+\Delta x/2} u(y)\, dy = \int_{-1/2}^{1/2} u(x + s\Delta x)\, ds \qquad (4.3)$$
where thus \bar{u}(x) is the cell average. We interpolate the given cell averages, using the ENO interpolation algorithm above, to get an approximation to \bar{u}(x), and then find the approximation to u(x) by inverting ("deconvoluting") (4.3).
To invert (4.3), use the Taylor expansion
$$\bar{u}(x_j) = \int_{-1/2}^{1/2} u(x_j + s\Delta x)\, ds = \sum_{\nu=0}^{N-1} \frac{1}{\nu!}\, \frac{d^\nu u}{dx^\nu}(x_j)\, \Delta x^\nu \int_{-1/2}^{1/2} s^\nu\, ds + O(\Delta x^N).$$
All the derivatives d^\nu u/dx^\nu(x_j) are unknowns, but there is only one equation. To obtain more equations it is necessary to consider the derivatives of \bar{u}(x). Similarly as above one gets
$$\frac{d^k \bar{u}(x_j)}{dx^k} = \sum_{\nu=0}^{N-k-1} \frac{1}{\nu!}\, \frac{d^{\nu+k} u}{dx^{\nu+k}}(x_j)\, \Delta x^\nu \int_{-1/2}^{1/2} s^\nu\, ds + O(\Delta x^{N-k})$$
for k = 1, \ldots, N-1. In this way N equations are obtained for the N unknowns d^\nu u/dx^\nu(x_j), \nu = 0, \ldots, N-1.
The reconstruction polynomial is then defined as
$$L^{N-1}(x) = \sum_{\nu=0}^{N-1} \frac{(x - x_j)^\nu}{\nu!}\, \frac{d^\nu u}{dx^\nu}(x_j), \qquad x_{j-1/2} < x < x_{j+1/2}. \qquad (4.5)$$
In summary the algorithm becomes:
1. Use the ENO interpolation algorithm to interpolate the cell averages u_j. The result is a polynomial Q^N(x) of degree N, piecewise differentiable with breakpoints at x_j.
2. Evaluate the derivatives dQ^N(x_j)/dx. Since the x_j are break points, extra care has to be taken. We define
$$\frac{dQ^N(x_j)}{dx} = \operatorname{minmod}\Big(\frac{dQ^N(x_j-)}{dx}, \frac{dQ^N(x_j+)}{dx}\Big)$$
and similarly for higher order derivatives.
3. Solve the upper triangular linear system of equations
$$\frac{d^k Q^N(x_j)}{dx^k} = \sum_{\nu=0}^{N-k-1} \frac{1}{\nu!}\, \frac{d^{k+\nu} u}{dx^{k+\nu}}(x_j)\, \Delta x^\nu \int_{-1/2}^{1/2} s^\nu\, ds + O(\Delta x^{N-k}), \qquad k = 0, \ldots, N-1$$
to get the derivatives d^\nu u/dx^\nu(x_j).
4. Define (4.5) as the piecewise polynomial reconstruction.
Example 4.1. We derive the second order ENO scheme through RP. Second order means doing piecewise linear reconstruction. Thus the primitive function has to be interpolated using degree 2 polynomials. We obtain
$$U(x) = U_{j+1/2} + (x - x_{j+1/2})\, u_j + \frac{1}{2\Delta x}\,(x - x_{j-1/2})(x - x_{j+1/2})\, m(\Delta_- u_j, \Delta_+ u_j), \qquad x_{j-1/2} < x < x_{j+1/2}$$
where
$$m(x, y) = \begin{cases} x & \text{if } |x| \le |y| \\ y & \text{if } |x| > |y|. \end{cases}$$
The linear approximation inside cell j becomes
$$\frac{dU}{dx} = u_j + \frac{x - x_j}{\Delta x}\, m(\Delta_- u_j, \Delta_+ u_j).$$
This is a scheme of the form treated in chapter 3 (see p. 50), with
$$s_j = m(\Delta_+ u_j, \Delta_- u_j)$$
so the TVD condition (3.25) is satisfied. Thus this is a TVD scheme, and consequently it degenerates to first order at extrema.
In general one can prove that the RP ENO scheme using degree N polynomials has truncation error O(\Delta x^N), except at points where any of the first N - 1 derivatives vanishes; there the truncation error is O(\Delta x^{N-1}). It is also possible to prove that the truncation error for the RD ENO method using degree N polynomials is always O(\Delta x^N).
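Example 4.1 amounts to a very small reconstruction routine. The sketch below (data values are an arbitrary illustration, Δx = 1) computes the interface values given by the slope m(Δ₋u_j, Δ₊u_j) and checks that near a jump the reconstruction never overshoots the data, which is the non oscillatory behavior the choice of the smaller difference is meant to produce.

```python
def m(x, y):
    # the argument of smallest modulus, as in example 4.1
    return x if abs(x) <= abs(y) else y

def eno2_interface_values(u):
    # piecewise linear reconstruction in cell j with slope m(D-u_j, D+u_j);
    # returns the one-sided values u^L_{j+1/2} and u^R_{j-1/2} for interior cells
    uL, uR = [], []
    for j in range(1, len(u) - 1):
        s = m(u[j] - u[j - 1], u[j + 1] - u[j])
        uL.append(u[j] + 0.5 * s)    # value at x_{j+1/2} from the left
        uR.append(u[j] - 0.5 * s)    # value at x_{j-1/2} from the right
    return uL, uR

u = [0.0, 0.0, 0.1, 1.0, 1.0]        # a jump with a small foot point
uL, uR = eno2_interface_values(u)
assert all(min(u) <= v <= max(u) for v in uL + uR)   # no overshoot
print("uL =", uL, " uR =", uR)
```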
4.3 Outer interpolation gives a point value scheme
This scheme is based on interpolation of the numerical fluxes. To achieve the desired order of accuracy, it is necessary that the interpolated fluxes have a sufficient number of derivatives. This is a very important point, which somewhat restricts the possible choices of first order numerical flux to build the method from.
Assume that the point values
$$u_j, \qquad j = \ldots, -2, -1, 0, 1, 2, \ldots$$
are known. The idea of this method was outlined in section 4.1. We form the interpolant of
$$H_{j+1/2} = \Delta x \sum_{k=a}^{j} f(u_k)$$
by using the ENO interpolation algorithm, and then take the numerical flux as
$$h_{j+1/2} = \frac{dH(x_{j+1/2})}{dx}.$$
The interpolation is made piecewise polynomial with break points x_j. This direct approach has to be modified somewhat. If we carry out the above scheme we get
$$H^1(x) = \begin{cases} H_{j+1/2} + (x - x_{j+1/2})\, f_j & \text{if } |f_j| < |f_{j+1}| \\ H_{j+1/2} + (x - x_{j+1/2})\, f_{j+1} & \text{if } |f_{j+1}| < |f_j| \end{cases} \qquad x_j < x < x_{j+1}$$
which leads to
$$h_{j+1/2} = \begin{cases} f_j & \text{if } |f_j| < |f_{j+1}| \\ f_{j+1} & \text{if } |f_{j+1}| < |f_j| \end{cases}$$
if the order of accuracy is chosen p = 1. Although this flux is consistent, the resulting method is not TVD (Exercise 2). It is crucial that the first order approximation is TVD. From numerical experiments, it is possible to verify that this method is not non oscillatory, no matter how high the accuracy of the interpolant. Instead we make the first order version of this method TVD by taking
$$H^1(x) = \begin{cases} H_{j+1/2} + (x - x_{j+1/2})\, f_j & \text{if } a_{j+1/2} \ge 0 \\ H_{j+1/2} + (x - x_{j+1/2})\, f_{j+1} & \text{if } a_{j+1/2} < 0 \end{cases} \qquad x_j < x < x_{j+1};$$
the first order method is then the upwind scheme. Continuing the interpolation to higher order leads to a non oscillatory high order scheme, but the method does not satisfy an entropy condition.
We obtain a more general way of choosing the starting first order polynomial if we
consider a first order TVD flux h_{j+1/2} and split it as

h_{j+1/2} = f_j^+ + f_{j+1}^−

where f^+ corresponds to positive wave speeds and f^− to negative wave speeds. As an
example, the Engquist-Osher scheme (section 2.5) can be written on this form with

f^+(u) = f(u) if f'(u) > 0        f^−(u) = 0    if f'(u) > 0
         0    if f'(u) < 0                 f(u) if f'(u) < 0
Another example is the Lax-Friedrichs scheme, where

f^+(u) = (f(u) + u/λ)/2        f^−(u) = (f(u) − u/λ)/2

or the modified Lax-Friedrichs scheme

f^+(u) = (f(u) + αu)/2        f^−(u) = (f(u) − αu)/2

with α = max |f'(u)|.
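The modified Lax-Friedrichs splitting is easy to state in code; the following is a minimal sketch (not from the text, function names are ours), where the bound alpha ≥ max |f'(u)| must be supplied by the caller:

```python
import numpy as np

def split_flux_mod_lf(f, u, alpha):
    """Modified Lax-Friedrichs splitting f = f^+ + f^-, with
    f^+(u) = (f(u) + alpha*u)/2 and f^-(u) = (f(u) - alpha*u)/2,
    where alpha >= max |f'(u)| makes f^+ nondecreasing and f^- nonincreasing."""
    fu = f(u)
    return 0.5 * (fu + alpha * u), 0.5 * (fu - alpha * u)

def first_order_flux(f, u, alpha):
    """First order numerical flux h_{j+1/2} = f^+(u_j) + f^-(u_{j+1})."""
    fp, fm = split_flux_mod_lf(f, u, alpha)
    return fp[:-1] + fm[1:]
```

For Burgers' flux f(u) = u²/2 one can take alpha = max |u|; by construction f^+ + f^− reproduces f exactly, so the resulting flux is consistent.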
We define the starting polynomials

H^1_−(x) = H_{j+1/2} + (x − x_{j+1/2}) f_{j+1}^−
H^1_+(x) = H_{j+1/2} + (x − x_{j+1/2}) f_j^+

for x_j < x < x_{j+1}, and then continue the ENO interpolation of f^+ and f^− respectively through the points
x_j to arbitrary order of accuracy, p. Finally

h_{j+1/2} = dH_+^p(x_{j+1/2})/dx + dH_−^p(x_{j+1/2})/dx
The truncation error for this method will involve differences of the functions f^+ and
f^−. Thus to achieve the expected accuracy it is necessary to have f^+, f^− ∈ C^p, with p
large enough. Because of this, the scheme has mostly been used together with the C^∞
Lax-Friedrichs numerical flux, or the modified Lax-Friedrichs numerical flux. However,
the Lax-Friedrichs scheme does not always give sufficient shock resolution. Although
the higher order versions, obtained as described above, perform much better than the
first order Lax-Friedrichs scheme, there is still a need for first order TVD methods giving better
shock resolution than Lax-Friedrichs and having more derivatives than the upwind or
the Engquist-Osher schemes, to be used as building blocks for this method.
We conclude with some remarks about two space dimensions. For the problem
ut + f (u)x + g (u)y = 0
the method described in this section can be applied separately in the x- and y-directions
to approximate ∂/∂x and ∂/∂y respectively (see section 2.6). There are no extra complications. For the cell centered scheme, the two dimensional generalization of formula
(4.1) gives an integral around the cell boundary. This integral is required to pth order
accuracy, which can be done by a numerical quadrature formula. If e.g., p = 4, this
means using two values on each cell side. Thus for each cell we need a two dimensional
reconstruction, which is a non-trivial problem in its own right, and then we have 8 flux
evaluations to make, two on each side. The cell centered scheme quickly becomes more
computationally expensive than the point centered scheme.
4.4 Time discretization
The easiest way to obtain a high order time discretization is to use a Runge-Kutta
method. However, it has been observed that e.g., the classical fourth order Runge-Kutta method can cause a large amount of oscillations in the solution, even though the space
discretization is made TVD. Therefore, we have to be extra careful about how to design
Runge-Kutta schemes.
We consider the semi discrete approximation

du/dt = L(u)

to the problem

ut = −f(u)x

where we know that the forward Euler approximation

u^{n+1} = u^n + Δt L(u^n)

leads to a TVD or ENO method. The semi discrete TVD methods treated previously
can all be written

du_j/dt = C_{j+1/2} Δ_+ u_j − D_{j−1/2} Δ_− u_j

with non-negative C_{j+1/2}, D_{j+1/2}. From theorem 2.12 it follows that the forward Euler
time discretization is TVD under the CFL constraint Δt(C_{j+1/2} + D_{j+1/2}) ≤ 1 for all j.
Thus it is not too restrictive to assume TVD for the forward Euler time discretization.
The idea of TVD Runge-Kutta methods is to write the scheme as a convex combination
of forward Euler steps. One general form for explicit m stage Runge-Kutta methods is
u^(0) = u^n
u^(i) = u^(0) + Δt Σ_{k=0}^{i−1} c_{ik} L(u^(k)),   i = 1, 2, ..., m        (4.5)
u^{n+1} = u^(m)

For each stage, the weights α_k^(i), k = 0, ..., i − 1, satisfying

α_k^(i) ≥ 0,   Σ_{k=0}^{i−1} α_k^(i) = 1

are introduced. Then (4.5) can be rewritten

u^(0) = u^n
u^(i) = Σ_{k=0}^{i−1} ( α_k^(i) u^(k) + β_k^(i) Δt L(u^(k)) ),   i = 1, 2, ..., m        (4.6)
u^{n+1} = u^(m)
with β_k^(i) = c_{ik} − Σ_{s=k+1}^{i−1} c_{sk} α_s^(i). By writing

u^(i) = Σ_{k=0}^{i−1} α_k^(i) ( u^(k) + (β_k^(i)/α_k^(i)) Δt L(u^(k)) ),   i = 1, 2, ..., m

it is easy to prove
Theorem 4.1. If β_k^(i) ≥ 0 and the method

u^{n+1} = u^n + Δt L(u^n)

is TVD under the CFL condition λ ≤ λ_0, (λ = Δt/Δx), then the method (4.6) is TVD
under the CFL condition

λ ≤ λ_0 min_{i,k} α_k^(i)/β_k^(i)        (4.7)
Proof: For each stage it holds that

TV(u^(i)) = Σ_j |Δ_+ u_j^(i)| ≤ Σ_j Σ_{k=0}^{i−1} α_k^(i) |Δ_+ ( u_j^(k) + (β_k^(i)/α_k^(i)) Δt L(u_j^(k)) )| ≤ Σ_{k=0}^{i−1} α_k^(i) TV(u^(k))

where we used that α_k^(i) ≥ 0 and that the forward Euler parts in the sum above
are TVD under the constraint (4.7). Use induction by assuming that TV(u^(k)) ≤ TV(u^(0)) for all k ≤ i − 1. This is certainly true for i = 1. The inequality above gives

TV(u^(i)) ≤ ( Σ_{k=0}^{i−1} α_k^(i) ) TV(u^(0)) = TV(u^(0))

Thus we have proved TV(u^(m)) ≤ TV(u^(0)), which is the TVD condition TV(u^{n+1}) ≤ TV(u^n).
If, however, some β_k^(i) < 0 then the step

u^(k) + (β_k^(i)/α_k^(i)) Δt L(u^(k))

corresponds to a reversal of time, and we replace the operator L(u) with an operator
L̂(u) which is such that −L̂(u) approximates the problem

ut = f(u)x

in a TVD (or ENO) fashion. We give an example to show how the operator L̂(u) is
derived.
Example 4.3. We approximate

ut = ux

using the stable TVD approximation

u^{n+1} = u^n + Δt D_+ u^n

Thus, using the formulas above, L(u) = D_+ u. The operator L̂ is obtained from
approximating

ut = −ux

using the stable TVD approximation

u^{n+1} = u^n − Δt D_− u^n

From this we find that −L̂(u) = −D_− u, and thus

L̂(u) = D_− u

The CFL condition for the case of negative β_k^(i), with L̂(u) replacing L(u), is obtained
by using the absolute value |β_k^(i)| in (4.7).
To derive some particular Runge-Kutta methods, we start from (4.6) and investigate, by Taylor expansion, the possible methods. We give one example.
Example 4.4. Require second order accuracy and m = 2. The method is

u^(0) = u^n
u^(1) = u^(0) + Δt β_0^(1) L(u^(0))
u^(2) = α_0^(2) u^(0) + Δt β_0^(2) L(u^(0)) + α_1^(2) u^(1) + Δt β_1^(2) L(u^(1))
u^{n+1} = u^(2)
where we will choose α_k^(i) ≥ 0. To get the accuracy, take an exact solution to ut = L(u),
and insert it into the method. The truncation error in time is (dropping all terms of
O(Δt³))

τ = u(t + Δt) − ( α_0^(2) u + Δt β_0^(2) L(u) + α_1^(2) (u + Δt β_0^(1) L(u)) + Δt β_1^(2) L(u + Δt β_0^(1) L(u)) )

where we use u to denote the exact solution u(t). Taylor expansion gives

τ = (1 − α_0^(2) − α_1^(2)) u + (1 − β_0^(2) − β_1^(2) − α_1^(2) β_0^(1)) Δt ut + (Δt²/2) utt − Δt² β_1^(2) β_0^(1) L(u) L'(u)

The observation utt = (L(u))t = L'(u) ut = L'(u) L(u) gives the conditions for second
order accuracy

α_0^(2) + α_1^(2) = 1
β_0^(2) = 1 − 1/(2β_0^(1)) − α_1^(2) β_0^(1)
β_1^(2) = 1/(2β_0^(1))
The factors

|β_0^(1)|,  |β_0^(2)/α_0^(2)|,  |β_1^(2)/α_1^(2)|

come into the CFL condition. If they all can be made ≤ 1, this Runge-Kutta scheme
will be non-oscillatory under the same CFL condition as the forward Euler scheme.
If furthermore we can choose all β_k^(i) non-negative, we can avoid the
operator L̂(u). We first try β_0^(1) = 1, which gives β_1^(2) = 1/2, and then, to keep the next
CFL factor ≤ 1, we take α_1^(2) = 1/2. This leads to α_0^(2) = 1/2 and β_0^(2) = 0. We have
obtained the method

u^(1) = u^(0) + Δt L(u^(0))
u^(2) = (1/2)( u^(0) + u^(1) + Δt L(u^(1)) )

which is the same as the method given on page 58. This method gives second order in
time and retains the non-oscillatory features of the semi discrete approximation. It
does not require L̂(u), which saves programming effort.
In a similar way we can derive the third order TVD Runge-Kutta method

u^(1) = u^(0) + Δt L(u^(0))
u^(2) = (3/4) u^(0) + (1/4) u^(1) + (Δt/4) L(u^(1))
u^(3) = (1/3) u^(0) + (2/3) u^(2) + (2Δt/3) L(u^(2))
This method has CFL factor one, and is thus stable under the CFL condition obtained
from the forward Euler discretization. For higher accuracy than three, no method not
involving L̂(u) is known. It is recommended that a known high order Runge-Kutta
method is written on the form (4.6), and that L(u) is replaced by L̂(u) wherever
β_k^(i) < 0.
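The second and third order TVD Runge-Kutta methods above are straightforward to implement once the spatial operator L(u) is available; a minimal sketch (function names are ours), advancing one time step:

```python
import numpy as np

def tvd_rk2(u, L, dt):
    """Second order TVD Runge-Kutta step: a convex combination of Euler steps.
    u^(1) = u + dt*L(u);  u^{n+1} = (u + u^(1) + dt*L(u^(1)))/2."""
    u1 = u + dt * L(u)
    return 0.5 * (u + u1 + dt * L(u1))

def tvd_rk3(u, L, dt):
    """Third order TVD Runge-Kutta step with CFL factor one."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
```

For the linear test operator L(u) = −u these steps reproduce the Taylor polynomial of e^{−Δt} to second and third order respectively, which is a quick way to check an implementation.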
The Runge-Kutta approach seems to be the simplest way to do time discretization
and we recommend it. There are, however, other ways. We conclude with a brief discussion of these alternative time discretizations. One example is a Lax-Wendroff type of
method. It relies on the formula

u^{n+1} = u^n + (ut)^n Δt + (utt)^n Δt²/2 + ...

The time derivatives are then replaced by spatial ones,

ut = −f(u)x
utt = (f'(u) f(u)x)x

The spatial derivatives are approximated using the ENO scheme. The procedure becomes very complicated for higher order of accuracy than 2 in time. It is not suited for
steady state computations, but the stencil will not be as wide as for the Runge-Kutta
schemes.
Another, more one dimensional, method can be given by a direct integration of the
conservation law in the x−t plane. This method is derived for schemes based on cell
averages. Consider the conservation law

ut + f(u)x = 0

and integrate around one cell in the x−t plane as depicted in Fig. 4.4.

Fig. 4.4. Path of integration in the x−t plane: the boundary of the cell [x_{j−1/2}, x_{j+1/2}] between t_n and t_{n+1}.

Use Green's formula to obtain

∮_C ( u dx − f(u) dt ) = 0

Evaluate this integral directly,

∫_{x_{j−1/2}}^{x_{j+1/2}} u(t_{n+1}, x) dx − ∫_{x_{j−1/2}}^{x_{j+1/2}} u(t_n, x) dx + ∫_{t_n}^{t_{n+1}} f(u(t, x_{j+1/2})) dt − ∫_{t_n}^{t_{n+1}} f(u(t, x_{j−1/2})) dt = 0

Letting ū_j^n denote the cell average, the above integral becomes

ū_j^{n+1} = ū_j^n − (1/Δx) ∫_{t_n}^{t_{n+1}} ( f(u(t, x_{j+1/2})) − f(u(t, x_{j−1/2})) ) dt

Note that no approximation has yet been made. Assume that the cell averages ū_j^n are
known, and that L(t_n, x) is the function obtained by piecewise polynomial reconstruction from the cell averages. As numerical approximation to the PDE we take

ū_j^{n+1} = ū_j^n − λ ( h_{j+1/2}^n − h_{j−1/2}^n )

Here, h_{j+1/2}^n approximates

(1/Δt) ∫_{t_n}^{t_{n+1}} f(v(t, x_{j+1/2})) dt

with v(t, x) the solution to the PDE for t > t_n using L(t_n, x) as initial data. For first
order accuracy we can approximate

∫_{t_n}^{t_{n+1}} f(v(t, x_{j+1/2})) dt ≈ Δt f(v(t_n, x_{j+1/2})) = Δt h(u_j^n, u_{j+1}^n)

The reconstructed function has break points at x_{j+1/2}, and therefore f(v(t_n, x_{j+1/2})) is
not uniquely defined. We take the semi discrete flux h_{j+1/2} as the flux at (t_n, x_{j+1/2}).
This leads to the usual forward Euler method, which is only first order accurate in time.
For higher order accuracy the flux at other points may be needed; e.g., second order
accuracy can be obtained from the approximation

∫_{t_n}^{t_{n+1}} f(v(t, x_{j+1/2})) dt ≈ (Δt/2) ( f(v(t_n, x_{j+1/2})) + f(v(t_{n+1}, x_{j+1/2})) ) = (Δt/2) ( h(u_{j+1/2}^{nL}, u_{j+1/2}^{nR}) + f(v(t_{n+1}, x_{j+1/2})) )

where the value v(t_{n+1}, x_{j+1/2}) is found by tracing the characteristic through the point
(t_{n+1}, x_{j+1/2}) backward to t = t_n, where the reconstructed function is known, and
h(u_{j+1/2}^{nL}, u_{j+1/2}^{nR}) is the semi discrete flux function evaluated at the known time t_n.
There is an abundance of methods based on this idea. Another example is when
one half step using the first order scheme is taken to obtain a value at (t_{n+1/2}, x_{j+1/2})
and then the approximation

∫_{t_n}^{t_{n+1}} f(v(t, x_{j+1/2})) dt ≈ Δt f(v(t_{n+1/2}, x_{j+1/2}))

is used to get a second order accurate scheme.
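The first order version of the cell average update above is just a conservative forward Euler step; the following sketch (ours, not from the text) shows it for a periodic grid and an arbitrary two-point numerical flux h(uL, uR):

```python
import numpy as np

def fv_forward_euler(ubar, h, dt, dx):
    """One forward Euler step of the cell average scheme
    ubar_j^{n+1} = ubar_j^n - (dt/dx)*(h_{j+1/2} - h_{j-1/2}),
    with periodic boundaries; h(uL, uR) is a two-point numerical flux."""
    ur = np.roll(ubar, -1)                # right neighbour (periodic)
    flux = h(ubar, ur)                    # flux at every interface j+1/2
    return ubar - dt / dx * (flux - np.roll(flux, 1))
```

Because each interface flux appears with opposite signs in the two neighbouring cells, the sum of the cell averages is preserved exactly, mirroring the conservation property derived from Green's formula.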
Exercises

1. The second order cell based RD ENO scheme has second order accuracy everywhere
and is consequently not TVD. Write this scheme on slope limiter form (as on p. 50),
and thus derive the ENO limiter function

B(Δ_+ u_j, Δ_− u_j) = minmod( Δ_+ u_j − (1/2) m(Δ_+Δ_− u_{j+1}, Δ_+Δ_− u_j),  Δ_− u_j + (1/2) m(Δ_+Δ_− u_j, Δ_+Δ_− u_{j−1}) )

with

m(x, y) = x  if |x| < |y|
          y  if |y| ≤ |x|

2. Show that the method using the numerical flux

h_{j+1/2} = f(u_j)      if |f(u_j)| < |f(u_{j+1})|
            f(u_{j+1})  if |f(u_{j+1})| ≤ |f(u_j)|

is not a TVD method.

3. Derive the limiter function Φ(r) corresponding to the slope limiter in example 4.1,

m(x, y) = x  if |x| < |y|
          y  if |y| ≤ |x|

and show that it fits inside the TVD domain given in Fig. 3.3, chapter 3.
5. Systems of conservation laws
5.1 Linear systems
When we apply the methods for scalar problems to systems of hyperbolic partial differential equations, one of the most important facts is that the methods have to be applied
to the characteristic variables. We here give an example of a linear system to illustrate
this.
Example 5.1. Consider the system

( u )     ( 0  1 ) ( u )
( v )t  + ( 1  0 ) ( v )x  = 0        (5.1)

The PDE can be decoupled into two independent scalar problems by a diagonalizing
transformation. It is easy to verify that the eigenvectors and eigenvalues of the matrix

( 0  1 )
( 1  0 )

are

λ_1 = 1,  r_1 = (1, 1)^T        λ_2 = −1,  r_2 = (1, −1)^T
We introduce the diagonalizing matrix

R = ( 1   1 )
    ( 1  −1 )

and multiply (5.1) by R^{−1}. The result is

wt + wx = 0
zt − zx = 0        (5.2)

where the transformed variables are defined as

w = (u + v)/2,  z = (u − v)/2        (5.3)
Next, we consider this PDE on −∞ < x < ∞, t ≥ 0, and give the initial data

u(0, x) = 0        v(0, x) = −1  if x < 0
                              0  if x > 0

which, by (5.3), corresponds to

w(0, x) = −1/2  if x < 0        z(0, x) = 1/2  if x < 0
          0     if x > 0                  0    if x > 0

The solution for t > 0 is

w(t, x) = −1/2  if x − t < 0        z(t, x) = 1/2  if x + t < 0
          0     if x − t > 0                  0    if x + t > 0

as easily found from the diagonal form (5.2). In the variables (u, v) this corresponds to

u(t, x) = −1/2  if −t < x < t        v(t, x) = −1    if x < −t
          0     otherwise                      −1/2  if −t < x < t
                                               0     if x > t

The solution is depicted in Figures 5.1 and 5.2 below.
Fig. 5.1a. Original variables (u, v). Fig. 5.1b. Characteristic variables (w, z). Initial data.

Fig. 5.2a. Original variables (u, v). Fig. 5.2b. Characteristic variables (w, z). Solution at a time t > 0.

The point with this example is that the variable u is zero at t = 0, but immediately
develops a square pulse for t > 0. Thus there is no TVD property in the variable u, and
therefore it is not reasonable to use a TVD scheme componentwise in (u, v). A TVD
method has to be applied in the characteristic variables (w, z).
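The diagonalization used in this example is easy to reproduce numerically; a small sketch (numpy-based, ours) verifying that R^{−1}AR is the diagonal matrix of eigenvalues:

```python
import numpy as np

# The system (u, v)_t + A (u, v)_x = 0 of example 5.1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Diagonalizing matrix built from the eigenvectors r1 = (1, 1), r2 = (1, -1).
R = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# In the characteristic variables (w, z) = R^{-1} (u, v) the system decouples:
# Lam = R^{-1} A R should be diag(1, -1).
Lam = np.linalg.solve(R, A @ R)
```

Any numerical method applied componentwise should therefore act on (w, z), each of which satisfies a scalar advection equation.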
In general we consider linear systems

ut + Aux = 0

where A is a diagonalizable matrix. Then we can perform the transformation

R^{−1} ut + R^{−1} A R R^{−1} ux = 0

where R is the matrix of eigenvectors of A. Introducing the characteristic variable

v = R^{−1} u

we obtain the decoupled system

vt + Λvx = 0        (5.4)

where Λ is the diagonal matrix consisting of the eigenvalues of A. We thus have a set of
m independent scalar equations, (v_k)t + λ_k (v_k)x = 0, which have solutions v_k^0(x − λ_k t)
for given initial data v_k^0(x).
We use the decoupling of the linear system to solve the problem

ut + Aux = 0
u(x, 0) = uL  if x < 0
          uR  if x > 0

where uL and uR are two constant states. A hyperbolic partial differential equation
with initial data consisting of two constant states is called a Riemann problem.
According to the discussion above, the solution can be written

u(x, t) = Σ_{k=1}^{m} v_k^0(x − λ_k t) r_k

where now all the functions v_k^0(x) are step functions with a jump at x = 0. The r_k are the
eigenvectors of A, i.e., the columns of R. We assume that the eigenvalues are enumerated
in increasing order, λ_1 < λ_2 < ... < λ_m. Let us denote

v_k^0(x) = v_kL  if x < 0
           v_kR  if x > 0

The solution is thus piecewise constant, with changes when x − λ_k t changes sign for
some k. From this observation the solution formula

u(x, t) = Σ_{k=1}^{q} v_kR r_k + Σ_{k=q+1}^{m} v_kL r_k = uL + Σ_{k=1}^{q} (v_kR − v_kL) r_k,   λ_q < x/t < λ_{q+1}

follows easily. The solution is thus constant on wedges in the x−t plane, as seen in
Fig. 5.3.
79
t
u2
u1
u3
uL
uR
x
Fig. 5.3. Solution of the linear Riemann problem in the x ; t plane, m = 4.
As seen above, the states inside the wedges are given by
q
X
uq = uL + (vkR ; vkL )rk
k=1
with uL = u0 uR = um.
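The wedge formula can be turned directly into a small solver for the linear Riemann problem. The sketch below (function name ours) assumes A has real, distinct eigenvalues, i.e., a strictly hyperbolic system:

```python
import numpy as np

def linear_riemann(A, uL, uR, x, t):
    """Solution u(x, t) of u_t + A u_x = 0 with data uL (x < 0), uR (x > 0).
    Assumes A has real, distinct eigenvalues."""
    lam, R = np.linalg.eig(A)
    order = np.argsort(lam)               # enumerate lambda_1 < ... < lambda_m
    lam, R = lam[order], R[:, order]
    vL = np.linalg.solve(R, uL)           # characteristic variables of uL
    vR = np.linalg.solve(R, uR)           # characteristic variables of uR
    # u = uL + sum of (v_kR - v_kL) r_k over all waves with lambda_k < x/t
    u = uL.astype(float).copy()
    for k in range(len(lam)):
        if lam[k] * t < x:
            u += (vR[k] - vL[k]) * R[:, k]
    return u
```

To the left of all waves the formula returns uL; to the right of all waves the jumps sum to uR − uL, so it returns uR, as it must.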
5.2 Non linear systems

If the coefficient matrix is diagonalizable, a linear system can be decoupled into a number
of independent scalar problems. This is however not true for a non linear system; the
diagonalizing transformation R is now a function of u(x, t), and we can not use a relation
like (Rv)t = Rvt, which was essential in deriving (5.4). The non linear system is more
complicated than a collection of scalar non linear problems.

We consider the equation

ut + f(u)x = 0

where the solution vector is u = (u_1(x, t), ..., u_m(x, t))^T. The Jacobian matrix of the
flux function is denoted A(u) = ∂f/∂u. The eigenvalues λ_i(u) of A(u) are assumed to
be real, distinct, and ordered in increasing order,

λ_1(u) < λ_2(u) < ... < λ_m(u)

The corresponding eigenvectors are denoted r_1(u), ..., r_m(u).

We generalize the convexity condition f''(u) ≠ 0 to systems as follows.

Definition 5.1. The kth field is genuinely non linear if r_k^T ∇_u λ_k(u) ≠ 0 for all u.

If a scalar problem is linear then f''(u) = 0. This condition is generalized to

Definition 5.2. The kth field is linearly degenerate if r_k^T ∇_u λ_k(u) = 0 for all u.

Here ∇_u is the gradient operator with respect to u,

∇_u a = ( ∂a/∂u_1, ..., ∂a/∂u_m )

It is not hard to verify that definitions 5.1 and 5.2 degenerate to the convexity and
linearity conditions respectively in the scalar case m = 1.
We will here discuss three types of solutions.
1. Shocks.
2. Rarefaction waves.
3. Contact discontinuities.
In section 5.3 we will show how these three types of solutions can be pieced together
to form a solution of the Riemann problem for the non linear system. For the scalar
equation we have seen a shock solution in example 1.5 and an expansion wave solution,
for f(u) = u²/2, in example 1.4.

We first describe shock solutions. These satisfy the Rankine-Hugoniot condition,

s(uL − uR) = f(uL) − f(uR)        (5.5)

which is derived in the same way as for the scalar problem. We also require an entropy
condition. Since we are dealing with the generalization of the convex conservation law,
we will look for an entropy condition which generalizes the condition (1.9), i.e., the
characteristics should point into the shock.

Definition 5.3. Let k be a genuinely non linear field. A k-shock is a discontinuity
satisfying (5.5) and for which it holds that

λ_k(uL) > s > λ_k(uR)
λ_{k−1}(uL) < s < λ_{k+1}(uR)

The meaning of this definition is first that the shock is in the kth characteristic
variable, and second that the number of undetermined quantities at the shock (i.e.,
the number of characteristics pointing out from the shock) is equal to the number of
equations given by (5.5). If we consider the shock as a boundary, we see that definition
5.3 means that the characteristics 1, ..., k − 1 are inflow quantities into the region on
the left of the shock. The characteristics k + 1, ..., m are inflow quantities into the
region on the right of the shock. Thus there are m − 1 inflow variables which we must
specify. Eliminating s from (5.5) gives m − 1 equations; thus the number of equations
and unknowns are equal.

Assume that uL is given; we investigate which states uR can be connected to uL
through a shock wave. (5.5) is a system of m equations for the m + 1 unknowns uR, s.
We expect to find a one parameter family of solutions uR. Furthermore, it is natural to
have one such family of solutions for each eigenvalue, corresponding in the linear case
to placing the discontinuity in any of the m characteristic variables v_i, i = 1, ..., m.
These intuitive ideas are stated in the following theorem. The proof is not given here.
See e.g., [18] for a proof.

Theorem 5.4. Assume that the kth field is genuinely non linear. The set of states uR
near uL which can be connected to uL through a k-shock form a smooth one parameter
family uR = u(p), −p_0 ≤ p ≤ 0, with uR(0) = uL. The shock speed s is also a smooth
function of p.

Formally we could use (5.5) to obtain a shock solution for p > 0 as well, but it turns
out that the entropy condition is not satisfied for p positive. The situation is similar
to the scalar equation, where the entropy condition imposes the restriction that shocks
only can jump downwards (see examples 1.4 and 1.5).
We next investigate the rarefaction wave solutions. A rarefaction wave centered at
x = 0 is a solution which depends only on x/t, i.e., u(x, t) = b(x/t). Inserting this ansatz
into the equation gives

−(x/t²) b' + (1/t) A(b) b' = 0

We denote ξ = x/t and b' = db/dξ, and we thus have

(A(b) − ξ) b' = 0

The solution is given in terms of eigenvalues and eigenvectors,

ξ = λ(b(ξ)),  b' = c r(b)

where c is a constant. Here it is possible to use genuine non linearity to show that c = 1. For
a given state uL, we thus can solve the ordinary differential equation

b'(ξ) = r(b(ξ)),   ξ_0 ≤ ξ ≤ ξ_0 + p        (5.6)
ξ_0 = λ(b(ξ_0))

to some final point ξ_0 + p, where p is a sufficiently small parameter value. The state
uR = b(ξ_0 + p) is in this way connected to the state uL = b(ξ_0) through a k-rarefaction
wave. From the above computations we obtain the following theorem.

Theorem 5.5. Assume that the kth field is genuinely non linear. The set of states uR
near uL which can be connected to uL through a k-rarefaction wave form a smooth one
parameter family uR = u(p), 0 ≤ p ≤ p_0, with uR(0) = uL.

Fig. 5.4a shows an example of the k-characteristics for a rarefaction wave and
Fig. 5.4b shows one example of a component of the solution u(x/t) at a fixed time.

Fig. 5.4a. Characteristics in one field. Fig. 5.4b. Solution at a time t > 0.

We summarize the shock and rarefaction cases above as follows.
Theorem 5.6. Assume that the kth field is genuinely non linear. Given the state
uL, there is a one parameter family of states uR = u(p), −p_0 ≤ p ≤ p_0, which can be
connected to uL through a k-shock (p ≤ 0) or a k-rarefaction wave (p ≥ 0). u(p) is
twice continuously differentiable.

The differentiability is proved by expanding the function u(p) around p = 0, and
can be found in e.g., [18]. For the example m = 2, the situation is displayed in Fig. 5.5.
The curves show where it is possible to place uR in order to connect it to the given
state uL through a shock or a rarefaction wave.

Fig. 5.5. Phase plane. 2-S = 2-shock. 1-R = 1-rarefaction.
Next we define the Riemann invariants. They are quantities which are constant on
rarefaction waves.

Definition 5.7. A k-Riemann invariant is a smooth scalar function w(u_1, ..., u_m)
such that

r_k^T ∇_u w = 0

i.e., the gradient of w is perpendicular to the kth eigenvector of A.

Theorem 5.8. There exist m − 1 k-Riemann invariants with linearly independent gradients.

Proof: The vector field

r_k^T ∇_u = Σ_{i=1}^{m} r_i(u) ∂/∂u_i

can by a coordinate transformation u = u(v) be written

∂/∂v_1

and we choose

w_1(v) = v_2,  w_2(v) = v_3,  ...,  w_{m−1}(v) = v_m

The functions w_i, i = 1, ..., m − 1, will then satisfy

∂w_i/∂v_1 = 0

and have linearly independent gradients. Transforming back yields functions w_i(u)
with the desired properties.
Riemann invariants are used for computing the states across a rarefaction wave.
The useful property is given in the following theorem.

Theorem 5.9. The k-Riemann invariants are constant on a k-rarefaction wave.

Proof: We have seen above that on a k-rarefaction wave, the solution is a function
of ξ = x/t and satisfies

u'(ξ) = r_k(u(ξ))

Let w be a k-Riemann invariant. We obtain

dw/dξ = Σ_i (du_i/dξ) ∂w/∂u_i = u'(ξ)^T ∇_u w = r_k(u)^T ∇_u w = 0

by the definition of a Riemann invariant. Thus dw/dξ = 0 and w is constant on the
k-rarefaction wave.

Theorem 5.9 gives the following equations for two states connected by a k-rarefaction
wave:

w_i(uL) = w_i(uR),   i = 1, ..., m − 1        (5.7)

where the w_i are the k-Riemann invariants. For a rarefaction wave, these relations can be
used similarly as (5.5) is used for a shock, e.g., to determine the state uR from a given
uL.
Let us finally investigate the linearly degenerate fields. Assume that the kth field
is linearly degenerate, and define the curve u(p) through

du(p)/dp = r_k(u(p))        (5.8)

The kth eigenvalue is constant on this curve, since the linear degeneracy gives

dλ_k(u)/dp = (du/dp)^T ∇_u λ_k = r_k(u)^T ∇_u λ_k = 0

Theorem 5.10. Assume that the kth field is linearly degenerate. The states on the
curve (5.8) can all be connected to uL through a discontinuity moving with speed
s = λ_k(uL) = λ_k(u(p)).

Proof: Define the function

G(u(p)) = f(u(p)) − s u(p)

which appears in the Rankine-Hugoniot condition. Differentiate this function with
respect to p to obtain

dG/dp = (A(u(p)) − s) du/dp

which is zero due to (5.8) and the definition of s. Thus

f(u(p)) − s u(p) = const. = f(uL) − s uL

and the Rankine-Hugoniot condition is satisfied.
These discontinuities are called contact discontinuities. The kth characteristics are
parallel to the discontinuity. This wave has many similarities with the solution of a
linear problem ut + aux = 0 with a single discontinuity as initial data: the discontinuity
propagates along the characteristics with the wave speed a. There is no entropy
condition for a linearly degenerate field; as in the linear equation, the solution in the
weak sense is unique.

For systems which do not consist of linearly degenerate and genuinely non linear
fields, the entropy condition in definition 5.3 has to be replaced with something more
general. The situation is similar to the scalar equation, where the entropy condition (1.9)
is not enough for non-convex conservation laws. We can define the more general entropy
condition for systems in the same way as for the scalar equation, i.e., in terms of a class
of entropy functions E(u), which satisfy an inequality

E(u)t + F(u)x ≤ 0

with ∇_u F^T = ∇_u E^T A(u). The formulas are similar to the formulas for the scalar case.

We will apply the numerical methods to the equations of gas dynamics, which
consist of genuinely non linear and linearly degenerate fields. Thus definition 5.3 will
be sufficient, and we do not here develop more general entropy conditions.
5.3. The Riemann problem for non linear hyperbolic systems

We here solve the Riemann problem

ut + f(u)x = 0
u(x, 0) = uL  if x < 0
          uR  if x > 0

where uL and uR are two constant states. The solution of this problem will sometimes
be used in numerical methods, where we solve a Riemann problem locally between the
grid points.

We assume that all characteristic fields are genuinely non linear, and that uL and
uR are sufficiently close, such that we can apply the parametrization in theorem 5.6.
The solution is similar to the solution for the linear equation, in the sense that it consists
of m + 1 constant states separated by shocks or rarefaction waves.

To construct the solution, we connect uL to a new state u_1 by a 1-wave (shock or
rarefaction). We write this as

u_1 = u(p_1; uL)

Next the state u_1 is connected to a state u_2 by a 2-wave,

u_2 = u(p_2; u_1) = u(p_1, p_2; uL)

We continue this to the mth state, given as a function of m parameters and the left
state,

u_m = u(p_1, p_2, ..., p_m; uL)

By requiring

uR = u_m

we obtain the system of m algebraic equations

uR = u(p_1, p_2, ..., p_m; uL)        (5.9)

for the m unknowns p_1, ..., p_m. If uL and uR are close enough, it follows from the
inverse mapping theorem that we can always solve (5.9). To see this it is necessary to
check that the Jacobian of the mapping

f(p_1, p_2, ..., p_m) = −uR + u(p_1, p_2, ..., p_m; uL)

is non-singular at (0, ..., 0). From the rarefaction wave solutions (5.6), we see that

u(p) = uL + p r_k + O(p²)

which by the smoothness at p = 0 (theorem 5.6) must hold for p both positive and
negative. For the Riemann problem we thus obtain

u_1 = uL + p_1 r_1 + O(p_1²)
u_2 = u_1 + p_2 r_2 + O(p_2²) = uL + p_1 r_1 + p_2 r_2 + O((p_1 + p_2)²)
...
uR = uL + p_1 r_1 + p_2 r_2 + ... + p_m r_m + O((p_1 + p_2 + ... + p_m)²)

which shows that the Jacobian of the mapping at (0, ..., 0) is R, the matrix of eigenvectors. This matrix is non-singular by the hyperbolicity. Thus the inverse mapping
theorem applies, and we have a solution of the type shown in Fig. 5.6.
Fig. 5.6. Example wave structure in the solution of the Riemann problem: states uL, u1, u2, uR separated by a 1-shock, a 2-rarefaction, and a 3-shock.

In Fig. 5.7a we give an example of a solution for m = 2 in the phase plane. Fig. 5.7b
indicates the wave structure in the x−t plane, and Fig. 5.7c shows the corresponding
solution in the variable u_1(x, t) as a function of x for a given time.

Fig. 5.7a. Example of a phase plane plot (1-shock, 2-shock, 1-rarefaction, and 2-rarefaction curves connecting uL and uR through uM). Fig. 5.7b. Corresponding wave structure.

Fig. 5.7c. Solution at a time t > 0 for the waves in Fig. 5.7b.
5.4. Existence of solution

We saw in section 5.3 that there exists a solution to the Riemann problem if the states
uL and uR are sufficiently close. The only known result on existence for the problem

ut + f(u)x = 0
u(0, x) = u_0(x)

has been proved under a similar assumption, namely that the initial data have sufficiently small variation.

Theorem 5.10. Assume a non linear hyperbolic system is given, with all the fields
genuinely non linear. There are constants K and δ such that if the variation of the initial
data is sufficiently small in the sense that

||u_0 − c||_∞ + TV(u_0) ≤ δ

for some constant state c, then a weak solution u(x, t) exists, and is such that

TV(u(·, t)) ≤ K TV(u_0)

It is not known whether this solution is unique or satisfies the entropy condition.
The theorem is proved by showing convergence of the random choice difference method.
The method is interesting because of its convergence properties, but is in practical
cases outperformed by most other methods. Therefore, we describe the method here
and not in the chapter on difference methods for systems.

The method is defined on a staggered grid; the approximation is u_{2j}^0 at t_0, u_{2j−1}^1
at t_1, etc., see Fig. 5.8.

Fig. 5.8. Staggered grid.
Given a solution ..., u_{j−2}^n, u_j^n, u_{j+2}^n, ..., the random choice method consists of the
following steps to determine the solution value u_{j+1}^{n+1}:
1. Solve the Riemann problem at x_{j+1} with u_j^n as left state and u_{j+2}^n as right state.
2. Let the new time level t_{n+1} be such that no waves from (t_n, x_{j+1}) are outside the
interval [x_j, x_{j+2}] at t_{n+1}. This corresponds to a CFL condition.
3. Choose a point (t_{n+1}, x(θ)) = (t_{n+1}, x_j + θ(x_{j+2} − x_j)), where θ is a random number
in [0, 1]. The same random number is used for all cells.
4. Define the new value u_{j+1}^{n+1} as the value of the solution of the Riemann problem
at this random point.
The same procedure is then repeated to get u_j^{n+2} from u_{j−1}^{n+1}, u_{j+1}^{n+1},
etc.
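For the scalar linear advection equation ut + aux = 0, where the local Riemann solution is a single jump moving with speed a, one staggered step of these four steps can be sketched as follows (a simplified scalar illustration of ours, not the general system algorithm):

```python
import numpy as np

def random_choice_step(u, a, dt, dx, rng):
    """One staggered step of the random choice method for u_t + a*u_x = 0.
    u[j] holds values at x_0, x_2, x_4, ...; the return value holds the new
    values at the intermediate points x_1, x_3, ...  Between two neighbours
    the exact Riemann solution is a single jump starting at the midpoint and
    moving with speed a; it is sampled at one random point per step."""
    theta = rng.random()                  # same random number for all cells
    xi = theta * 2.0 * dx                 # sample position within [x_j, x_{j+2}]
    unew = np.empty(len(u) - 1)
    for j in range(len(u) - 1):
        # at t_{n+1} the jump sits at dx + a*dt, measured from x_j
        unew[j] = u[j] if xi < dx + a * dt else u[j + 1]
    return unew
```

Note that every new value is one of the old values, so no intermediate states are created at discontinuities, which is the characteristic property of the method.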
In Fig. 5.9, we give a picture of the local Riemann problems in the x−t plane as
obtained by this algorithm.

Fig. 5.9. Riemann problems are solved locally at cell interfaces.
The main advantage of the random choice method is that all grid values are obtained
as solutions of local Riemann problems. Thus no new intermediate values are introduced
in shocks, which in some applications can be of value. Because of the local Riemann
problems, control of the variation can be achieved by using estimates for the solutions of
the Riemann problem. We do not give the proof of theorem 5.10 here. It is technically
complicated, but does not rely on any advanced mathematical concepts.
6. Numerical methods for systems of conservation laws
6.1. Simple waves in gas dynamics
We will consider the generalization of the rst order schemes in chapter 2 to systems of
equations. For the special case of the gas dynamics equations
0 1 0 m 1 001
@ m A + @ u2 + p A = @ 0 A
(6:1)
(e + p)u x
0
e t
specic formulas will be given. In (6.1) m = u is the momentum, the density, u
the velocity, p the pressure and e the total energy of an inviscid uid. An additional
relation to link p to the other variables m e is obtained by assuming the perfect gas
law
p = ( ; 1)(e ; 21 u2)
where is a constant specic for the uid in question. For air one usually takes = 1:4.
Since there is not sucient theory available to derive a systematic treatment, the
ideas for systems are based on the TVD ideas for scalar equations. The methods for
systems are derived in a heuristic way. Thus this chapter can only describe \how to"
derive methods for system and not \if" or \why" the methods will give correct answers.
We will give rst order accurate methods. Similar to the scalar case second order
methods can be derived from the rst order ones by piecewise linear interpolation.
In the random choice method and the Godunov method, we have to solve a Riemann
problem exactly. In section 6.2 we show how to do this for (6.1) through an iterative
procedure. We begin by giving some formulas for simple wave solutions of (6.1) i.e.,
shocks, rarefaction waves, and contact discontinuities. We noted in chapter 5 that the
eigenvalues and eigenvectors are important for the wave structure. Thus we begin by
nding these quantites for (6.1).
Theorem 6.1. For the gas dynamics equations (6.1), where
$$p = (\gamma-1)(e - \tfrac{1}{2}\rho u^2),$$
the eigenvalues and eigenvectors of the Jacobian $\partial f/\partial u$ are
$$\lambda_1 = u - c \qquad \lambda_2 = u \qquad \lambda_3 = u + c$$
$$r_1 = \begin{pmatrix} 1 \\ u-c \\ h-uc \end{pmatrix} \qquad r_2 = \begin{pmatrix} 1 \\ u \\ u^2/2 \end{pmatrix} \qquad r_3 = \begin{pmatrix} 1 \\ u+c \\ h+uc \end{pmatrix}$$
where the speed of sound, $c$, and the enthalpy, $h$, are defined by
$$c = \sqrt{\frac{\gamma p}{\rho}} \qquad h = \frac{e+p}{\rho}.$$
Proof: Straightforward calculations, not given here.
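As a quick numerical check of theorem 6.1, the stated eigenpairs can be verified against the analytic Jacobian of (6.1). The following Python sketch (the state values are arbitrary test data, and the Jacobian entries are the standard closed form for a perfect gas) confirms $A r_k = \lambda_k r_k$:

```python
import numpy as np

gamma = 1.4

def jacobian(rho, u, p):
    """Analytic Jacobian of the Euler flux in the variables (rho, m, e)."""
    e = p / (gamma - 1) + 0.5 * rho * u**2
    H = (e + p) / rho                      # enthalpy h of theorem 6.1
    return np.array([
        [0.0, 1.0, 0.0],
        [0.5 * (gamma - 3) * u**2, (3 - gamma) * u, gamma - 1],
        [(0.5 * (gamma - 1) * u**2 - H) * u, H - (gamma - 1) * u**2, gamma * u],
    ])

def eigenpairs(rho, u, p):
    """Eigenvalues and eigenvectors as stated in theorem 6.1."""
    e = p / (gamma - 1) + 0.5 * rho * u**2
    c = np.sqrt(gamma * p / rho)
    h = (e + p) / rho
    lams = [u - c, u, u + c]
    rs = [np.array([1.0, u - c, h - u * c]),
          np.array([1.0, u, 0.5 * u**2]),
          np.array([1.0, u + c, h + u * c])]
    return lams, rs

rho, u, p = 1.0, 0.5, 1.0      # arbitrary test state
A = jacobian(rho, u, p)
for lam, r in zip(*eigenpairs(rho, u, p)):
    assert np.allclose(A @ r, lam * r)
```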
The formulas for the eigenvectors and eigenvalues enable us to verify that
$$\nabla\lambda_1^T r_1 = -\frac{(\gamma+1)c}{2\rho} \neq 0 \qquad \nabla\lambda_2^T r_2 = 0 \qquad \nabla\lambda_3^T r_3 = \frac{(\gamma+1)c}{2\rho} \neq 0.$$
For example
$$\nabla\lambda_2^T r_2 = \frac{\partial u}{\partial \rho} + u\frac{\partial u}{\partial m} + \frac{u^2}{2}\frac{\partial u}{\partial e} = -\frac{m}{\rho^2} + \frac{u}{\rho} + 0 = 0.$$
Thus the 1 and 3 fields are genuinely nonlinear and can cause a shock wave or a rarefaction wave to appear in the solution. The 2 field is linearly degenerate and can only give rise to contact discontinuities.
The jump condition is
$$s[\rho] = [m]$$
$$s[m] = [\rho u^2 + p] \qquad (6.2)$$
$$s[e] = [u(e+p)]$$
where we use the notation $[q] = q_R - q_L$. The jump condition can be rewritten in the following form
$$[\rho v] = 0 \qquad (6.3a)$$
$$[\rho v^2 + p] = 0 \qquad (6.3b)$$
$$\left[\frac{2}{\gamma-1}c^2 + v^2\right] = 0 \qquad (6.3c)$$
where we have defined $v = u - s$ as the speed relative to the shock wave. The derivation of (6.3) from (6.2) is somewhat tedious and we omit it here. The form (6.3) is easier to use in proving some of the theorems below.
Finally, to obtain conditions which connect states separated by a rarefaction wave, we need the Riemann invariants. From the definition of the Riemann invariant $w_k$,
$$r_k^T \nabla w_k = 0,$$
we get by straightforward calculations the following results. For the 1-field
$$w_1^{(1)} = u + \frac{2}{\gamma-1}c \qquad w_1^{(2)} = p\rho^{-\gamma} \qquad (6.4a)$$
for the 2-field
$$w_2^{(1)} = u \qquad w_2^{(2)} = p \qquad (6.4b)$$
and finally for the 3-field
$$w_3^{(1)} = u - \frac{2}{\gamma-1}c \qquad w_3^{(2)} = p\rho^{-\gamma}. \qquad (6.4c)$$
Thus there are two Riemann invariants for each characteristic field. As seen in the previous chapter the Riemann invariants are constant across rarefaction waves, so that e.g.,
for two states $u_L$, $u_R$ separated by a 1-rarefaction,
$$u_L + \frac{2}{\gamma-1}c_L = u_R + \frac{2}{\gamma-1}c_R$$
$$p_L\rho_L^{-\gamma} = p_R\rho_R^{-\gamma},$$
and similarly for the other fields.
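The defining property $r_k^T \nabla w_k = 0$ is easy to check numerically. The sketch below (an arbitrary test state, gradient taken by central differences with respect to the conserved variables) verifies it for the first 1-field invariant:

```python
import numpy as np

gamma = 1.4

def w1(v):
    """First 1-field Riemann invariant u + 2c/(gamma-1), conserved variables."""
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    c = np.sqrt(gamma * p / rho)
    return u + 2.0 * c / (gamma - 1)

def grad(f, v, eps=1e-6):
    """Central-difference gradient with respect to (rho, m, e)."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (f(v + d) - f(v - d)) / (2 * eps)
    return g

rho, u, p = 1.0, 0.3, 1.0                      # arbitrary test state
e = p / (gamma - 1) + 0.5 * rho * u**2
c = np.sqrt(gamma * p / rho)
h = (e + p) / rho
v = np.array([rho, rho * u, e])
r1 = np.array([1.0, u - c, h - u * c])          # eigenvector from theorem 6.1
assert abs(grad(w1, v) @ r1) < 1e-5             # r_1^T grad w_1 = 0
```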
The conditions that connect two states are different depending on whether the separating wave is a shock wave or a rarefaction wave. It is therefore necessary to distinguish between these two cases when solving the Riemann problem. One useful criterion is derived in the following theorem.
Theorem 6.2. For 1-waves
$$p_L < p_R \ \text{for shocks} \qquad p_L > p_R \ \text{for rarefaction waves,}$$
for 3-waves
$$p_L > p_R \ \text{for shocks} \qquad p_L < p_R \ \text{for rarefaction waves,}$$
and for the contact discontinuity
$$p_L = p_R.$$
Proof: The conditions for shocks are derived from the jump condition (6.3) and the entropy inequalities. We prove the theorem for the 1-waves. First assume a 1-shock. The entropy condition according to definition 5.3 is
$$u_L - c_L > s > u_R - c_R$$
from which
$$s < u_R, \qquad 0 < v_R < c_R$$
follows. Note that $v_L$ and $v_R$ are positive, and that the entropy condition also gives $v_L > c_L$. (6.3c) gives
$$\frac{\gamma+1}{\gamma-1}c_L^2 < \frac{2}{\gamma-1}c_L^2 + v_L^2 = \frac{2}{\gamma-1}c_R^2 + v_R^2 < \frac{\gamma+1}{\gamma-1}c_R^2$$
and hence
$$c_L < c_R.$$
Use (6.3c) again to obtain
$$0 < \frac{c_R^2}{\gamma-1} - \frac{c_L^2}{\gamma-1} = \frac{1}{2}(v_L^2 - v_R^2).$$
Since $v_L$, $v_R$ are positive we find that
$$v_L > v_R.$$
From (6.3b) we obtain the pressure difference
$$p_L - p_R = \rho_R v_R^2 - \rho_L v_L^2$$
which by (6.3a) is
$$p_L - p_R = \rho_L v_L(v_R - v_L) < 0.$$
The result $p_L < p_R$ has been obtained. Next consider a 1-rarefaction wave. We have the inequality
$$u_L - c_L < u_R - c_R$$
which states that the head of the wave travels faster than its tail, see Fig. 6.1 below. The slope of the 1-characteristics is $u_L - c_L$ to the left of the wave and $u_R - c_R$ to the right.

[Figure: fan of 1-characteristics in the $x$-$t$ plane, with slopes $u_L - c_L$ to the left and $u_R - c_R$ to the right.]
Fig. 6.1. 1-rarefaction wave for (6.1).

From the 1-Riemann invariant (6.4a) we obtain
$$p_R\rho_R^{-\gamma} = p_L\rho_L^{-\gamma}, \qquad \frac{p_R}{p_L} = \left(\frac{\rho_R}{\rho_L}\right)^{\gamma}.$$
We use the definition $c^2 = \gamma p/\rho$ to eliminate $\rho$, with the result
$$\frac{p_R}{p_L} = \left(\frac{c_R}{c_L}\right)^{2\gamma/(\gamma-1)}.$$
The second 1-Riemann invariant gives
$$u_L - c_L + \frac{\gamma+1}{\gamma-1}c_L = u_L + \frac{2}{\gamma-1}c_L = u_R + \frac{2}{\gamma-1}c_R = u_R - c_R + \frac{\gamma+1}{\gamma-1}c_R > u_L - c_L + \frac{\gamma+1}{\gamma-1}c_R$$
and hence
$$c_L > c_R. \qquad (6.5)$$
(6.5) finally gives the result
$$p_L > p_R.$$
The proof for the 3-waves is similar and we omit it. For the contact discontinuity, $p$ is a Riemann invariant and, according to chapter 5, does not change across it.
6.2. The Riemann problem in gas dynamics
We are now ready to solve the Riemann problem in gas dynamics. Assume that the initial data
$$u(0,x) = \begin{cases} u_L & x < 0 \\ u_R & x > 0 \end{cases}$$
are given. We want to solve (6.1) for these data forwards in time. The solution consists of a 1-wave, a contact discontinuity and a 3-wave, as seen in Fig. 6.2 below.

[Figure: a 1-wave, a contact discontinuity, and a 3-wave emanating from the origin in the $x$-$t$ plane; the pressure is $p_L$ to the left of the 1-wave, $p_c$ between the 1-wave and the 3-wave, and $p_R$ to the right of the 3-wave.]
Fig. 6.2. General wave structure for (6.1).
The 1-wave and the 3-wave can be either a shock or a rarefaction wave. The 2-wave is always a contact discontinuity. The pressure does not change across the contact discontinuity, and therefore it should be easier to find an equation for the single unknown pressure than for a quantity which changes across the contact and thus has two unknown states. We begin by finding the intermediate pressure $p_c$ from an iterative process. The rest of the state variables can then be found from direct formulas.
Theorem 6.3. Define
$$M_L = -\frac{p_L - p_c}{u_L - u_c} \qquad M_R = \frac{p_R - p_c}{u_R - u_c};$$
then the formulas
$$M_L = \sqrt{\rho_L p_L}\,\phi(p_c/p_L) \qquad M_R = \sqrt{\rho_R p_R}\,\phi(p_c/p_R)$$
are valid. The function $\phi(x)$ is defined as
$$\phi(x) = \begin{cases} \sqrt{\dfrac{\gamma+1}{2}x + \dfrac{\gamma-1}{2}} & \text{if } x \ge 1 \\[2mm] \dfrac{\gamma-1}{2\sqrt{\gamma}}\,\dfrac{1-x}{1-x^{(\gamma-1)/(2\gamma)}} & \text{if } x < 1. \end{cases}$$
Proof: We prove the theorem for the left wave; the proof for the right wave is similar and leads to the same conclusion. First assume that the left wave is a shock. The jump conditions (6.3a,b) give
$$p_L - p_c = \rho_L v_L(v_c - v_L).$$
Since $v = u - s$, this means that
$$M_L = \rho_L v_L. \qquad (6.6)$$
Solve (6.3a) for $\rho_c$ and insert into (6.3b,c). We obtain
$$p_L + \rho_L v_L^2 = p_c + \rho_L v_L v_c$$
$$\frac{\gamma p_L}{(\gamma-1)\rho_L} + \frac{1}{2}v_L^2 = \frac{\gamma p_c v_c}{(\gamma-1)\rho_L v_L} + \frac{1}{2}v_c^2.$$
Next solve the first equation above for $v_c$ and insert it into the second. After some simplifying algebra the result is
$$\frac{\rho_L v_L^2}{p_L} = \frac{\gamma-1}{2} + \frac{\gamma+1}{2}\frac{p_c}{p_L}.$$
Using (6.6), one gets
$$\frac{M_L^2}{\rho_L p_L} = \frac{\gamma-1}{2} + \frac{\gamma+1}{2}\frac{p_c}{p_L}.$$
From theorem 6.2, $p_c/p_L > 1$, since the left wave was assumed to be a shock. After taking the square root the positive sign should be chosen, since by (6.6) $M_L$ is positive ($v_L$ positive can be seen from the proof of theorem 6.2). Thus we have proved that
$$M_L = \sqrt{\rho_L p_L}\,\phi(p_c/p_L)$$
if $p_c/p_L > 1$.

Next assume that the left wave is a rarefaction wave. Then the first Riemann invariant for the 1-wave gives
$$u_L + \frac{2}{\gamma-1}c_L = u_c + \frac{2}{\gamma-1}c_c$$
$$\Leftrightarrow \quad \frac{u_L - u_c}{c_L} = \frac{2}{\gamma-1}\left(\frac{c_c}{c_L} - 1\right)$$
$$\Leftrightarrow \quad -\frac{p_L - p_c}{M_L c_L} = \frac{2}{\gamma-1}\left(\frac{c_c}{c_L} - 1\right). \qquad (6.7)$$
The second Riemann invariant gives
$$p_L\rho_L^{-\gamma} = p_c\rho_c^{-\gamma}.$$
By using the definition $c^2 = \gamma p/\rho$ to eliminate $\rho$, we can rewrite this as
$$\frac{c_c}{c_L} = \left(\frac{p_c}{p_L}\right)^{(\gamma-1)/(2\gamma)}.$$
Inserting into (6.7) yields
$$-\frac{p_L - p_c}{M_L c_L} = \frac{2}{\gamma-1}\left(\left(\frac{p_c}{p_L}\right)^{(\gamma-1)/(2\gamma)} - 1\right).$$
Finally use $c^2 = \gamma p/\rho$ to eliminate $c_L$ from the left hand side. We know from theorem 6.2 that $p_c/p_L < 1$, and thus the obtained result
$$\frac{\sqrt{\rho_L p_L}}{\sqrt{\gamma}\,M_L}\left(\frac{p_c}{p_L} - 1\right) = \frac{2}{\gamma-1}\left(\left(\frac{p_c}{p_L}\right)^{(\gamma-1)/(2\gamma)} - 1\right)$$
is easily seen to be equivalent to
$$M_L = \sqrt{\rho_L p_L}\,\phi(p_c/p_L).$$
The derivation of the expressions for $M_R$ is analogous and not given here.
The different cases $x < 1$ and $x \ge 1$ correspond to rarefaction waves and shock waves respectively, as seen from theorem 6.2. Note that the computation of
$$\frac{1-x}{1-x^p}, \qquad p = \frac{\gamma-1}{2\gamma},$$
becomes numerically ill conditioned for $x$ close to one. It is therefore good practice in a computer program to replace this function by the polynomial approximation
$$\frac{1}{p} + \frac{p-1}{2p}(1-x)$$
if $1-\varepsilon < x < 1$, where $\varepsilon$ is a small number which depends on the machine precision used.
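The function $\phi$ with this safeguard is only a few lines of code. A Python sketch (the threshold $\varepsilon = 10^{-6}$ is an arbitrary choice for double precision); the assertions check that the two branches match at $x = 1$, where $\phi(1) = \sqrt{\gamma}$:

```python
import numpy as np

gamma = 1.4
P = (gamma - 1) / (2 * gamma)

def phi(x, eps=1e-6):
    """The function phi of theorem 6.3, with the series replacement
    1/p + (p-1)/(2p)*(1-x) for the ill conditioned quotient near x = 1."""
    if x >= 1.0:                      # shock branch
        return np.sqrt(0.5 * (gamma + 1) * x + 0.5 * (gamma - 1))
    if x > 1.0 - eps:                 # rarefaction branch, x close to 1
        q = 1.0 / P + (P - 1) / (2 * P) * (1.0 - x)
    else:                             # rarefaction branch
        q = (1.0 - x) / (1.0 - x**P)
    return (gamma - 1) / (2 * np.sqrt(gamma)) * q

# both branches give phi(1) = sqrt(gamma)
assert abs(phi(1.0) - np.sqrt(gamma)) < 1e-12
assert abs(phi(1.0 - 1e-8) - np.sqrt(gamma)) < 1e-6
```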
Eliminating $u_c$ from the definitions of $M_L$ and $M_R$ finally gives the formula
$$p_c = \frac{u_L - u_R + p_R/M_R + p_L/M_L}{1/M_R + 1/M_L}.$$
The iterative method for finding $p_c$ is defined as
$$\tilde{p}_c = \frac{u_L - u_R + p_R/M_R^k + p_L/M_L^k}{1/M_R^k + 1/M_L^k} \qquad (6.8a)$$
$$p_c^{k+1} = \max(\tilde{p}_c, \varepsilon_1) \qquad (6.8b)$$
where $\varepsilon_1$ is introduced to prevent negative pressure during the iteration. Theorem 6.3 is used to evaluate $M_L^k = \sqrt{\rho_L p_L}\,\phi(p_c^k/p_L)$ and $M_R^k = \sqrt{\rho_R p_R}\,\phi(p_c^k/p_R)$. The initial guess $p_c^0 = (p_R + p_L)/2$ has turned out to work well in computer programs. Convergence is usually fast, but for strong rarefactions degradation in the convergence rate has been observed. If no convergence is achieved after a fixed number of iterations, we replace (6.8b) by
$$p_c^{k+1} = \omega\max(\varepsilon_1, \tilde{p}_c) + (1-\omega)p_c^k$$
where $\omega = 1/2$. If there are still convergence problems, we reduce $\omega$ further.
After $p_c$ is found, we compute
$$u_c = u_R - \frac{p_R - p_c}{M_R}.$$
Theorem 6.2 gives complete information about the wave configuration. If $p_c/p_L \le 1$ the 1-wave is a rarefaction, otherwise a shock, and if $p_c/p_R \le 1$ the 3-wave is a rarefaction, otherwise a shock. The contact discontinuity lies between the 1-wave and the 3-wave, and propagates with velocity $u_c$.
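As a sketch of the whole procedure, the fixed point iteration (6.8) can be coded in a few lines. The Python fragment below is a simplified version (plain iteration (6.8a,b) only, without the damped fallback or the series expansion of $\phi$ near $x = 1$); the test data are Sod's well known shock tube states, for which the intermediate pressure is known to be approximately 0.30313:

```python
import numpy as np

gamma = 1.4

def phi(x):
    """phi from theorem 6.3 (no safeguard near x = 1 in this sketch)."""
    if x >= 1.0:
        return np.sqrt(0.5 * (gamma + 1) * x + 0.5 * (gamma - 1))
    p = (gamma - 1) / (2 * gamma)
    return (gamma - 1) / (2 * np.sqrt(gamma)) * (1 - x) / (1 - x**p)

def star_pressure(rhoL, uL, pL, rhoR, uR, pR, eps1=1e-10, nmax=50):
    """Fixed point iteration (6.8) for the intermediate pressure p_c."""
    pc = 0.5 * (pL + pR)                       # initial guess
    for _ in range(nmax):
        ML = np.sqrt(rhoL * pL) * phi(pc / pL)
        MR = np.sqrt(rhoR * pR) * phi(pc / pR)
        pc_new = (uL - uR + pR / MR + pL / ML) / (1 / MR + 1 / ML)
        pc_new = max(pc_new, eps1)             # (6.8b): keep pressure positive
        if abs(pc_new - pc) < 1e-12:
            return pc_new
        pc = pc_new
    return pc

# Sod's shock tube: a 1-rarefaction, a contact, and a 3-shock
pc = star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
assert abs(pc - 0.30313) < 1e-3
```

For these data the iteration settles in fewer than ten passes, illustrating the fast convergence claimed above.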
For each point $(x,t)$ in which we want to compute the solution, we make tests to decide whether the point is
a) To the left of the 1-wave.
b) Inside the 1-wave, if it is a rarefaction.
c) To the right of the 1-wave but to the left of the contact discontinuity.
d) To the right of the contact discontinuity but to the left of the 3-wave.
e) Inside the 3-wave, if it is a rarefaction.
f) To the right of the 3-wave.
The jump condition, or the invariance of the Riemann invariants over the rarefaction waves, gives formulas for the intermediate quantities. Because we know the intermediate pressure $p_c$, it turns out that there is no need to solve any equations; all required quantities are found from direct formulas. A Fortran program which solves the Riemann problem in gas dynamics is supplied in the appendix.
The formulas to determine $u(x,t)$ will be different in each of the different cases. The computer program will thus contain a certain number of formulas, but the execution time will be reasonable, since only one branch of the alternatives is actually executed. On a vector computer the situation becomes more troublesome, since there are difficulties in making IF statements vectorize.
We have now constructed a solution of the Riemann problem. By further analysis of the solution procedure it is possible to prove
Theorem 6.4. There is a unique solution of the Riemann problem for the gas dynamics equations (6.1) if
$$u_R - u_L < \frac{2}{\gamma-1}(c_L + c_R). \qquad (6.9)$$
When (6.9) is not satisfied, there will be a vacuum present in the solution and the intermediate state will therefore not be well defined.
6.3. The Godunov, Roe, and Osher methods
In this section we give a description of three of the best shock capturing methods for systems of conservation laws. First the Godunov scheme is described; since its main feature is the solution of a Riemann problem, most of the description has already been given in section 6.2. This scheme is important since other methods are often thought of as simplifications of it. However, it is not necessary to make this interpretation. Second we give the generalization of the upwind scheme to systems, known as Roe's method. Finally the Engquist-Osher scheme for systems is described; it is usually called Osher's method.
6.3.1. Godunov's method. Godunov's method has many features in common with the random choice method. The following algorithm describes the method.
1. We start from $u_j^n$, the given numerical solution at time $t_n$. The solution is defined for all $x$ by piecewise constant interpolation
$$u^n(x) = u_j^n, \qquad x_{j-1/2} < x < x_{j+1/2}.$$
2. We then solve the Riemann problems at all break points $x_{j+1/2}$. The next time level $t_{n+1}$ is made small enough that no waves from two different Riemann problems interact. This gives a CFL condition $\Delta t/\Delta x \le \mathrm{const}$. Let
$$w((x - x_{j+1/2})/(t - t_n),\, u_j,\, u_{j+1})$$
denote the solution of the Riemann problem at $(x_{j+1/2}, t_n)$.
3. The new solution is defined as the average over cell $j$ of the solution of the Riemann problems in 2, i.e.,
$$u_j^{n+1} = \frac{1}{\Delta x}\left(\int_{x_{j-1/2}}^{x_j} w((x - x_{j-1/2})/(t_{n+1} - t_n),\, u_{j-1},\, u_j)\, dx + \int_{x_j}^{x_{j+1/2}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx\right).$$
We write this algorithm in conservative form,
$$u_j^{n+1} = u_j^n - \lambda\Delta_+ h_{j-1/2}^n,$$
by taking a contour integral around one cell in the $x$-$t$ plane. Since the solution in $t_n \le t < t_{n+1}$ satisfies the PDE exactly, the integral is zero, and we get
$$\int_{x_{j-1/2}}^{x_{j+1/2}} u(t_n, x)\, dx - \int_{x_{j-1/2}}^{x_{j+1/2}} u(t_{n+1}, x)\, dx + \int_{t_n}^{t_{n+1}} f(w(0, u_{j-1}, u_j))\, dt - \int_{t_n}^{t_{n+1}} f(w(0, u_j, u_{j+1}))\, dt = 0. \qquad (6.10)$$
Since we have defined $u_j^{n+1}$ as the cell average of the solution, and since the integrals in time have time independent integrands, we can rewrite (6.10) as a difference approximation in conservative form with
$$h_{j+1/2}^n = f(w(0, u_j, u_{j+1})).$$
Thus Godunov's method is implemented by using the solution procedure in section 6.2 to solve one Riemann problem at each break point $x_{j+1/2}$. This solution is evaluated at $x/t = 0$, and the flux function is evaluated with that solution as argument.
It has often been argued that in Godunov's method a large amount of work is spent on solving a Riemann problem exactly. The information thus computed is mostly wasted, since it is only used to form an average.
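To make the conservative update and the flux $f(w(0, u_j, u_{j+1}))$ concrete in a setting where the Riemann solution is cheap, here is a sketch of Godunov's method for the scalar Burgers equation $f(u) = u^2/2$ (a scalar illustration, not the system (6.1)); for a convex scalar flux the Godunov flux reduces to the min/max formula used below:

```python
import numpy as np

def f(u):                       # Burgers flux
    return 0.5 * u * u

def godunov_flux(uL, uR):
    """f(w(0; uL, uR)) for the convex flux u^2/2."""
    return np.maximum(f(np.maximum(uL, 0.0)), f(np.minimum(uR, 0.0)))

def step(u, lam):
    """One conservative Godunov step, periodic boundaries; lam = dt/dx."""
    h = godunov_flux(u, np.roll(u, -1))        # h[j] stands for h_{j+1/2}
    return u - lam * (h - np.roll(h, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)    # right-moving shock data
total = u.sum()
for _ in range(20):
    u = step(u, 0.5)                           # CFL = 0.5 since |u| <= 1
assert abs(u.sum() - total) < 1e-12            # cell averages are conserved
```

The same two-line update applies verbatim to systems, with `godunov_flux` replaced by the exact solver of section 6.2.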
In this context the generalizations of the upwind and the Engquist-Osher schemes to systems, which we now proceed to describe, can be viewed as Godunov schemes with a simplified solution of the Riemann problem. For each of the scalar methods there are usually a number of different generalizations to systems, some complicated and some very simplified.
6.3.2. Roe's method. We now describe the upwind scheme. For a scalar conservation law, this is the method with numerical flux
$$h_{j+1/2}^n = \frac{1}{2}(f_{j+1} + f_j) - \frac{1}{2}|a_{j+1/2}|(u_{j+1}^n - u_j^n).$$
This scheme is generalized using the eigenvalues of a Jacobian matrix as wave speeds. A matrix
$$A_{j+1/2} = A(u_j, u_{j+1})$$
with $A(u, u) = A(u) = \partial f/\partial u$ is defined, and the scheme becomes
$$h_{j+1/2}^n = \frac{1}{2}(f_{j+1} + f_j) - \frac{1}{2}|A_{j+1/2}|(u_{j+1}^n - u_j^n)$$
where the absolute value of the matrix is defined as
$$|A| = R\begin{pmatrix} |\lambda_1| & 0 & \cdots & 0 \\ 0 & |\lambda_2| & \cdots & 0 \\ 0 & \cdots & \ddots & 0 \\ 0 & \cdots & 0 & |\lambda_m| \end{pmatrix} R^{-1}.$$
Here the $\lambda_j$ are the eigenvalues and $R$ is the matrix with the eigenvectors as columns. We can see this as a local diagonalization of the system. The matrix can be chosen as $A_{j+1/2} = A((u_{j+1} + u_j)/2)$, but the best result seems to be obtained by using a matrix which satisfies the straightforward generalization of a division in the scalar case,
$$f(u_{j+1}) - f(u_j) = A_{j+1/2}(u_{j+1} - u_j).$$
P. Roe has shown how to construct such matrices; the upwind scheme is therefore sometimes called Roe's method. The Roe matrix for the gas dynamics equations is found by evaluating the Jacobian at a weighted average,
$$A_{j+1/2} = A(m(u_{j+1}, u_j))$$
where $m(u, v)$ is the weighting procedure described below. The mean value density, velocity and enthalpy are computed using the weights
$$w_1 = \frac{\sqrt{\rho_j}}{\sqrt{\rho_j} + \sqrt{\rho_{j+1}}} \qquad w_2 = \frac{\sqrt{\rho_{j+1}}}{\sqrt{\rho_j} + \sqrt{\rho_{j+1}}}.$$
Thus we compute
$$\rho = w_1\rho_j + w_2\rho_{j+1}$$
$$u = w_1 u_j + w_2 u_{j+1}$$
$$h = w_1 h_j + w_2 h_{j+1}$$
$$c^2 = (\gamma-1)(h - \tfrac{1}{2}u^2)$$
from which the eigenvalues and eigenvectors of $A_{j+1/2}$ are found using theorem 6.1. In practice the term
$$|A|\Delta_+ u_j = R|\Lambda|R^{-1}\Delta_+ u_j$$
is evaluated as
$$\sum_{k=1}^m |\lambda_k|\alpha_k r_k$$
where $\alpha$ is the solution of the linear system of equations
$$R\alpha = \Delta_+ u_j.$$
For the Euler equations, $\alpha_k$ can be derived analytically with the following result
$$\alpha_2 = \frac{\gamma-1}{c^2}\left((h - u^2)\Delta_+\rho_j + u\Delta_+ m_j - \Delta_+ e_j\right)$$
$$\alpha_1 = \frac{1}{2}\left(\Delta_+\rho_j - \alpha_2 + \frac{u\Delta_+\rho_j - \Delta_+ m_j}{c}\right)$$
$$\alpha_3 = \frac{1}{2}\left(\Delta_+\rho_j - \alpha_2 - \frac{u\Delta_+\rho_j - \Delta_+ m_j}{c}\right).$$
It is assumed that the matrix $R$ contains the eigenvectors of the Jacobian $A(u)$. The quantities without index thus belong to the state in which the Jacobian is evaluated.
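The coefficient formulas can be checked against Roe's condition $f(u_{j+1}) - f(u_j) = A_{j+1/2}(u_{j+1} - u_j)$: with the $\sqrt{\rho}$-weighted averages, $\sum_k \lambda_k\alpha_k r_k$ must reproduce the flux difference exactly. A Python sketch with two arbitrary test states:

```python
import numpy as np

gamma = 1.4

def flux(v):
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    return np.array([m, m * u + p, u * (e + p)])

def roe_decomposition(vL, vR):
    """Roe average: eigenvalues lam_k, eigenvectors r_k, coefficients alpha_k."""
    rhoL, mL, eL = vL
    rhoR, mR, eR = vR
    uL, uR = mL / rhoL, mR / rhoR
    hL = (eL + (gamma - 1) * (eL - 0.5 * rhoL * uL**2)) / rhoL
    hR = (eR + (gamma - 1) * (eR - 0.5 * rhoR * uR**2)) / rhoR
    w1 = np.sqrt(rhoL) / (np.sqrt(rhoL) + np.sqrt(rhoR))
    w2 = 1.0 - w1
    u, h = w1 * uL + w2 * uR, w1 * hL + w2 * hR
    c = np.sqrt((gamma - 1) * (h - 0.5 * u**2))
    lam = np.array([u - c, u, u + c])
    r = [np.array([1.0, u - c, h - u * c]),
         np.array([1.0, u, 0.5 * u**2]),
         np.array([1.0, u + c, h + u * c])]
    drho, dm, de = vR - vL
    a2 = (gamma - 1) / c**2 * ((h - u**2) * drho + u * dm - de)
    a1 = 0.5 * (drho - a2 + (u * drho - dm) / c)
    a3 = 0.5 * (drho - a2 - (u * drho - dm) / c)
    return lam, r, np.array([a1, a2, a3])

vL = np.array([1.0, 0.0, 2.5])        # rho=1, u=0, p=1
vR = np.array([0.125, 0.0, 0.25])     # rho=0.125, u=0, p=0.1
lam, r, a = roe_decomposition(vL, vR)
df = sum(l * ak * rk for l, ak, rk in zip(lam, a, r))
assert np.allclose(df, flux(vR) - flux(vL))      # Roe property
```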
6.3.3. Osher's method. The Engquist-Osher scheme for systems is usually called the Osher scheme. The numerical flux is
$$h_{j+1/2}^n = \frac{1}{2}(f_{j+1} + f_j) - \frac{1}{2}\int_{u_j}^{u_{j+1}} |A(u)|\, du.$$
The integral of the absolute value of the Jacobian,
$$\int_{u_j}^{u_{j+1}} |A(u)|\, du, \qquad (6.11)$$
is not path independent, and thus we have to describe an integration path in order to define the method. Osher chose a path which follows the eigenvectors. If $u(s)$, $0 \le s \le 1$, is a parametrization of the integration path, then the following formulas describe the curve:
$$\frac{du}{ds} = r_1 \qquad 0 \le s \le s_1$$
$$\frac{du}{ds} = r_2 \qquad s_1 \le s \le s_2$$
$$\cdots$$
$$\frac{du}{ds} = r_m \qquad s_{m-1} \le s \le 1.$$
The first step in the algorithm consists of determining the points $u(s_k)$. The Riemann invariants $w_k$ are constant on path $k$, because
$$\frac{dw_k(u)}{ds} = \nabla w_k^T \frac{du}{ds} = \nabla w_k^T r_k = 0.$$
Thus we have $m - 1$ relations
$$w_k^{(j)}(u_k) = w_k^{(j)}(u_{k+1}), \qquad j = 1, \ldots, m-1, \qquad (6.12)$$
for each subpath $s_k < s < s_{k+1}$. The total number of equations is $m(m-1)$. The unknowns are the intermediate states $u_k$, $k = 1, \ldots, m-1$. Since each state is a vector of $m$ components, the total number of unknowns is also $m(m-1)$. We begin by solving the non linear system of equations (6.12) for the unknowns $u_k$. Second, we evaluate the integral
$$\int_{u_j}^{u_{j+1}} |A|\, du = \sum_{k=1}^m \int_{u_{k-1}}^{u_k} |A|\, du$$
where each subpath integral is evaluated using the formula
$$\int_{u_{k-1}}^{u_k} |A|\, du = \int_{s_{k-1}}^{s_k} |A| r_k\, ds = \int_{s_{k-1}}^{s_k} |\lambda_k| r_k\, ds.$$
The subpath integral over $[s_{k-1}, s_k]$ is further divided into pieces where $\lambda_k$ has constant sign. Without the absolute value the integrals are easy to evaluate. As an example, assume that $\lambda_k$ is positive on $[s_{k-1}, s^*]$ and negative on $[s^*, s_k]$, where $s^* \in [s_{k-1}, s_k]$. Define $u^* = u(s^*)$. The subpath integral becomes
$$\int_{s_{k-1}}^{s^*} \lambda_k r_k\, ds - \int_{s^*}^{s_k} \lambda_k r_k\, ds = \int_{u_{k-1}}^{u^*} A\, du - \int_{u^*}^{u_k} A\, du = -f(u_k) - f(u_{k-1}) + 2f(u^*).$$
In a computer program, the integral (6.11) is determined by adding or subtracting a number of terms $f(u_c)$, where the $u_c$ are points of changing subpath or points where $\lambda_k$ changes sign on a subpath.
We next describe how to implement this method for the Euler equations of gas dynamics. In one space dimension $m = 3$ and we have the following path of integration
for the integral (6.11):
$$\frac{du}{ds} = r_1 \qquad 0 \le s \le s_1$$
$$\frac{du}{ds} = r_2 \qquad s_1 \le s \le s_2$$
$$\frac{du}{ds} = r_3 \qquad s_2 \le s \le 1.$$
The following notation has become standard:
$$u_j = u(0) = u_0 \qquad u(s_1) = u_{1/3} \qquad u(s_2) = u_{2/3} \qquad u_{j+1} = u(1) = u_1.$$
We start by determining the intermediate states $u_{1/3}$ and $u_{2/3}$. The Riemann invariants (6.4) lead to the following system of equations:
$$u_0 + \frac{2}{\gamma-1}c_0 = u_{1/3} + \frac{2}{\gamma-1}c_{1/3}$$
$$p_0\rho_0^{-\gamma} = p_{1/3}\rho_{1/3}^{-\gamma}$$
$$u_{1/3} = u_{2/3}$$
$$p_{1/3} = p_{2/3}$$
$$u_1 - \frac{2}{\gamma-1}c_1 = u_{2/3} - \frac{2}{\gamma-1}c_{2/3}$$
$$p_1\rho_1^{-\gamma} = p_{2/3}\rho_{2/3}^{-\gamma}.$$
First define
$$\alpha = \left(\frac{p_0\rho_0^{-\gamma}}{p_1\rho_1^{-\gamma}}\right)^{1/(2\gamma)},$$
where we use $p_{1/3} = p_{2/3}$. From the definition $c^2 = \gamma p/\rho$ it follows that
$$\frac{c_{1/3}}{c_{2/3}} = \alpha.$$
This equation together with
$$u_{1/3} = u_0 + \frac{2}{\gamma-1}(c_0 - c_{1/3}) \qquad u_{2/3} = u_1 - \frac{2}{\gamma-1}(c_1 - c_{2/3}) \qquad (6.13)$$
leads to the following formula for $u_{1/3}$ (since we know that $u_{2/3} = u_{1/3}$):
$$u_{1/3} = \frac{u_0 + \frac{2}{\gamma-1}c_0 + \alpha\left(u_1 - \frac{2}{\gamma-1}c_1\right)}{1 + \alpha}. \qquad (6.14)$$
Thus in a program, one first forms $\alpha$ and then uses (6.14) to find $u_{1/3}$. $c_{1/3}$ and $c_{2/3}$ are then easily evaluated from (6.13). Then the densities are computed as
$$\rho_{1/3} = \rho_0\left(\frac{c_{1/3}}{c_0}\right)^{2/(\gamma-1)} \qquad \rho_{2/3} = \rho_1\left(\frac{c_{2/3}}{c_1}\right)^{2/(\gamma-1)}.$$
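These direct formulas are easy to sketch in code. The Python fragment below computes the intermediate states from (6.13)-(6.14) for two arbitrary test states and checks the defining relations (equal pressure in the middle, and the preserved path invariants):

```python
import numpy as np

gamma = 1.4
g1 = 2.0 / (gamma - 1)

def intermediate_states(rho0, u0, p0, rho1, u1, p1):
    """States u_{1/3} and u_{2/3} of Osher's path, from (6.13) and (6.14)."""
    c0 = np.sqrt(gamma * p0 / rho0)
    c1 = np.sqrt(gamma * p1 / rho1)
    alpha = ((p0 * rho0**-gamma) / (p1 * rho1**-gamma))**(1.0 / (2 * gamma))
    u13 = (u0 + g1 * c0 + alpha * (u1 - g1 * c1)) / (1 + alpha)   # (6.14)
    c13 = (u0 + g1 * c0 - u13) / g1     # first equation of (6.13)
    c23 = c13 / alpha                   # c_{1/3}/c_{2/3} = alpha
    rho13 = rho0 * (c13 / c0)**g1       # note 2/(gamma-1) = g1
    rho23 = rho1 * (c23 / c1)**g1
    return (u13, c13, rho13), (u13, c23, rho23)

s1, s2 = intermediate_states(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
p13 = s1[2] * s1[1]**2 / gamma          # pressures from c^2 = gamma*p/rho
p23 = s2[2] * s2[1]**2 / gamma
assert abs(p13 - p23) < 1e-10                      # p_{1/3} = p_{2/3}
assert abs(p13 * s1[2]**-gamma - 1.0) < 1e-10      # 1-path invariant preserved
```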
We then have complete information about the states $u_{1/3}$ and $u_{2/3}$. We can proceed to the evaluation of the numerical viscosity integral,
$$\int_{u_j}^{u_{j+1}} |A(u)|\, du = \int_{u_0}^{u_{1/3}} |A(u)|\, du + \int_{u_{1/3}}^{u_{2/3}} |A(u)|\, du + \int_{u_{2/3}}^{u_1} |A(u)|\, du. \qquad (6.15)$$
For the Euler equations of gas dynamics, because of the genuine nonlinearity, the eigenvalues $u - c$ and $u + c$ can change sign in at most one point on their paths. The eigenvalue $u$ is a Riemann invariant and is constant on path 2.
We start with the smallest eigenvalue $\lambda_1 = u - c$ for the first path. The sign of $\lambda_1$ is either constant or changes at the single sonic point where
$$u_{s1} - c_{s1} = 0.$$
Thus if $(u_0 - c_0)(u_{1/3} - c_{1/3}) < 0$ we include the sonic point in the path and obtain for the first third of the integral
$$\int_{u_0}^{u_{1/3}} |A(u)|\, du = \mathrm{sign}(u_0 - c_0)\left(2f(u_{s1}) - f(u_j) - f(u_{1/3})\right).$$
Here the sonic state $u_{s1}$ is found from the Riemann invariants and the sonic condition
$$u_0 + \frac{2}{\gamma-1}c_0 = u_{s1} + \frac{2}{\gamma-1}c_{s1}$$
$$p_0\rho_0^{-\gamma} = p_{s1}\rho_{s1}^{-\gamma}$$
$$u_{s1} - c_{s1} = 0.$$
This system is easy to solve for the sonic state. If $\lambda_1$ does not change sign between the 0 and the 1/3 state then
$$\int_{u_0}^{u_{1/3}} |A(u)|\, du = \mathrm{sign}(u_0 - c_0)\left(f(u_{1/3}) - f(u_j)\right).$$
For the second part, the eigenvalue $\lambda_2 = u$ is constant, and the integral becomes
$$\int_{u_{1/3}}^{u_{2/3}} |A(u)|\, du = \mathrm{sign}(u_{1/3})\left(f(u_{2/3}) - f(u_{1/3})\right).$$
The third part of the integral is similar to the first part; if $(u_{2/3} + c_{2/3})(u_1 + c_1) < 0$ then the sonic point where
$$u_{s2} + c_{s2} = 0$$
is included and the integral from 2/3 to 1 becomes
$$\int_{u_{2/3}}^{u_1} |A(u)|\, du = \mathrm{sign}(u_{2/3} + c_{2/3})\left(2f(u_{s2}) - f(u_{2/3}) - f(u_{j+1})\right).$$
The sonic state is found in the same way as the state $s1$ on path 1. If $\lambda_3$ does not change sign we obtain
$$\int_{u_{2/3}}^{u_1} |A(u)|\, du = \mathrm{sign}(u_{2/3} + c_{2/3})\left(f(u_{j+1}) - f(u_{2/3})\right).$$
Summing the three integrals according to (6.15) yields the final result for the integral from $u_j$ to $u_{j+1}$. From this integral we obtain the numerical flux as
$$h_{j+1/2}^n = \frac{1}{2}(f_{j+1}^n + f_j^n) - \frac{1}{2}\int_{u_j}^{u_{j+1}} |A(u)|\, du.$$
Remark: Osher originally proposed to order the integration path with the largest eigenvalue first. In practice it has turned out that the method described above, with the smallest eigenvalue first, works much better. The method starting with the smallest eigenvalue is sometimes called the P-version of the scheme, while Osher's original ordering is called the O-version.
6.4. Flux vector splitting
We here give some methods which are somewhat simpler than the methods in the previous section. They are all based on flux vector splitting, the idea of which can be understood from the Engquist-Osher scheme for a scalar problem. The numerical flux for the scalar E-O scheme can be written
$$h_{j+1/2}^n = \frac{1}{2}(f_{j+1} + f_j) - \frac{1}{2}\int_{u_j}^{u_{j+1}} |f'(s)|\, ds = f^+(u_j) + f^-(u_{j+1})$$
with
$$f^+(u) = f(0) + \int_0^u \frac{1}{2}\left(f'(s) + |f'(s)|\right) ds \qquad f^-(u) = f(u) - f^+(u).$$
Thus the flux function is split into two parts corresponding to positive and negative wave speeds respectively,
$$f(u) = f^+(u) + f^-(u).$$
The approximation of the flux derivative becomes
$$D_- h_{j+1/2}^n = D_+ f^-(u_j) + D_- f^+(u_j)$$
such that the derivative is approximated in a stable, upwind way. The Osher scheme for systems can similarly be written as a splitting of the flux function into one part corresponding to positive wave speeds and one part corresponding to negative wave speeds.
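For the Burgers flux $f(u) = u^2/2$ the scalar splitting above can be written in closed form, since $f'(s) = s$: the integral definitions give $f^+(u) = \max(u,0)^2/2$ and $f^-(u) = \min(u,0)^2/2$. A small sketch:

```python
import numpy as np

def fplus(u):                 # f^+(u) = int_0^u max(s, 0) ds
    return 0.5 * np.maximum(u, 0.0)**2

def fminus(u):                # f^-(u) = f(u) - f^+(u) = int_0^u min(s, 0) ds
    return 0.5 * np.minimum(u, 0.0)**2

u = np.linspace(-2.0, 2.0, 81)
assert np.allclose(fplus(u) + fminus(u), 0.5 * u**2)     # f = f^+ + f^-
assert np.all(np.diff(fplus(u)) >= 0)    # f^+ nondecreasing: positive speeds
assert np.all(np.diff(fminus(u)) <= 0)   # f^- nonincreasing: negative speeds
```

The E-O flux is then simply `fplus(u[j]) + fminus(u[j+1])`.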
In this section we use the flux splitting technique to obtain other, simpler methods than the Osher scheme. The methods are all based on the idea that we split the flux vector
$$f(u) = f^+(u) + f^-(u)$$
where we try to achieve that the matrices
$$A^+ = \partial f^+/\partial u \qquad A^- = \partial f^-/\partial u$$
are such that $A^+$ has positive eigenvalues and $A^-$ has negative eigenvalues. The numerical flux
$$h_{j+1/2} = f^+(u_j) + f^-(u_{j+1})$$
then defines a method of upwind type.
6.4.1. Steger-Warming splitting. For the first method given here, we need an additional property of the problem to be approximated. It is not hard to prove that for the Euler equations
$$f(u) = A(u)u \qquad (6.16)$$
where $A$ is the Jacobian matrix $\partial f/\partial u$ (show the homogeneity $f(\theta u) = \theta f(u)$, then differentiate with respect to $\theta$). If (6.16) holds we define a flux splitting as
$$f(u) = A^+ u + A^- u$$
where
$$A^+ = R\begin{pmatrix} \lambda_1^+ & 0 & \cdots & 0 \\ 0 & \lambda_2^+ & \cdots & 0 \\ 0 & \cdots & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_m^+ \end{pmatrix} R^{-1}$$
and similarly for $A^-$. As usual the $\lambda_i$ are the eigenvalues and $R$ the matrix of eigenvectors of the Jacobian matrix $A$. For a scalar $\lambda$ we define $\lambda^+ = \max(0, \lambda)$ and $\lambda^- = \min(0, \lambda)$. The flux splitting above is named after Steger and Warming.
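A sketch of the Steger-Warming splitting, built directly from the eigendecomposition of theorem 6.1 (the eigenvector matrix is inverted numerically here for simplicity, instead of the closed form expressions normally used in production codes):

```python
import numpy as np

gamma = 1.4

def flux(v):
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    return np.array([m, m * u + p, u * (e + p)])

def steger_warming(v):
    """f^+ = A^+ u and f^- = A^- u, with A^± = R Lambda^± R^{-1}."""
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    c = np.sqrt(gamma * p / rho)
    h = (e + p) / rho
    lam = np.array([u - c, u, u + c])
    R = np.array([[1.0, 1.0, 1.0],
                  [u - c, u, u + c],
                  [h - u * c, 0.5 * u**2, h + u * c]])
    Rinv = np.linalg.inv(R)
    Aplus = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
    Aminus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv
    return Aplus @ v, Aminus @ v

v = np.array([1.0, 0.5, 2.625])     # rho=1, u=0.5, p=1: a subsonic state
fp, fm = steger_warming(v)
assert np.allclose(fp + fm, flux(v))    # homogeneity (6.16): A u = f(u)
```

For a supersonic state all eigenvalues are positive and $f^-$ vanishes, so the flux reduces to pure upwinding.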
6.4.2. van Leer splitting. Another example, which does not require the property (6.16), is the so called van Leer flux vector splitting. The method is, however, closely linked to the structure of the Euler equations. In the van Leer splitting, we use the (signed) Mach number
$$M = \frac{u}{c}$$
to determine the number of positive and negative eigenvalues. If $M > 1$ then the flow is supersonic, and all eigenvalues are positive. We then define
$$f^+ = f \qquad f^- = 0.$$
Similarly we define
$$f^+ = 0 \qquad f^- = f$$
in the case $M < -1$.
If $|M| < 1$ then there are both positive and negative eigenvalues. We define a splitting using the identity
$$M = \left((M+1)^2 - (M-1)^2\right)/4.$$
The first component of the flux vector is
$$f_1 = \rho u = \rho cM = \rho c\left((M+1)^2 - (M-1)^2\right)/4$$
and the definition
$$f_1^+ = \rho c(M+1)^2/4 \qquad f_1^- = -\rho c(M-1)^2/4$$
gives the property $f_1^+ = f_1$ for $M \to 1$, $f_1^- = f_1$ for $M \to -1$.
The other components are treated similarly; the algebra becomes more complicated and we do not give the derivation here. The result is the following formulas:
$$f^+ = \frac{\rho(u+c)^2}{4c}\begin{pmatrix} 1 \\ (2c + (\gamma-1)u)/\gamma \\ (2c + (\gamma-1)u)^2/(2(\gamma^2-1)) \end{pmatrix}$$
$$f^- = -\frac{\rho(u-c)^2}{4c}\begin{pmatrix} 1 \\ -(2c - (\gamma-1)u)/\gamma \\ (2c - (\gamma-1)u)^2/(2(\gamma^2-1)) \end{pmatrix}.$$
It is an easy exercise to verify that $f = f^+ + f^-$. The common factors of the flux vectors can be written
$$\frac{\rho(u+c)^2}{4c} = \frac{\rho c}{4}(M+1)^2 \qquad \frac{\rho(u-c)^2}{4c} = \frac{\rho c}{4}(M-1)^2$$
so that we have the desired property $f^+ \to 0$ when $M \to -1$ and $f^- \to 0$ when $M \to 1$. The van Leer flux splitting is less expensive than the Osher scheme, but gives slightly worse accuracy, especially for the resolution of contact discontinuities.
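The exercise $f = f^+ + f^-$ is easy to carry out numerically. A Python sketch for an arbitrary subsonic test state:

```python
import numpy as np

gamma = 1.4

def flux(rho, u, p):
    e = p / (gamma - 1) + 0.5 * rho * u**2
    return np.array([rho * u, rho * u**2 + p, u * (e + p)])

def van_leer(rho, u, p):
    """Subsonic (|M| < 1) van Leer split fluxes f^+ and f^-."""
    c = np.sqrt(gamma * p / rho)
    fp = rho * (u + c)**2 / (4 * c) * np.array(
        [1.0,
         (2 * c + (gamma - 1) * u) / gamma,
         (2 * c + (gamma - 1) * u)**2 / (2 * (gamma**2 - 1))])
    fm = -rho * (u - c)**2 / (4 * c) * np.array(
        [1.0,
         -(2 * c - (gamma - 1) * u) / gamma,
         (2 * c - (gamma - 1) * u)**2 / (2 * (gamma**2 - 1))])
    return fp, fm

fp, fm = van_leer(1.0, 0.5, 1.0)
assert np.allclose(fp + fm, flux(1.0, 0.5, 1.0))    # consistency f = f^+ + f^-
assert fp[0] >= 0.0 and fm[0] <= 0.0                # split mass flux signs
```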
6.4.3. Pressure splitting. We next describe the method of pressure splitting. It is based on the observation that
$$f(u) = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u(e+p) \end{pmatrix} = u\begin{pmatrix} \rho \\ \rho u \\ e+p \end{pmatrix} + \begin{pmatrix} 0 \\ p \\ 0 \end{pmatrix} = u f_c + f_p.$$
It turns out that the eigenvalues of the Jacobian of the first term, $uf_c$, are $u$, $u$ and $u$. The eigenvalues of the Jacobian of the second term are $0$, $0$ and $-(\gamma-1)u$. Thus it is natural to try a splitting according to the sign of $u$:
$$f^+ = u^+ f_c + \frac{1 - \mathrm{sign}(u)}{2} f_p$$
$$f^- = u^- f_c + \frac{1 + \mathrm{sign}(u)}{2} f_p.$$
However, when the discontinuity in the switch $(1 + \mathrm{sign}(u))/2$ is differenced, the method becomes unstable. It is important that the switch changes sign continuously, as in the first term, where $u^+$ and $u^-$ are continuous at $u = 0$. To overcome this difficulty, we replace the sign function by a smoother version. In many applications the function
$$g(M) = \begin{cases} M(3 - M^2)/2 & |M| < 1 \\ \mathrm{sign}(M) & |M| \ge 1 \end{cases}$$
is used instead of the sign function. The signed Mach number $M = u/c$ has of course the same sign as $u$. The total pressure splitting method then becomes
$$f^+ = u^+ f_c + \frac{1 - g(M)}{2} f_p$$
$$f^- = u^- f_c + \frac{1 + g(M)}{2} f_p.$$
Usually it is necessary to add an entropy fix, i.e., to increase the amount of artificial dissipation, for the $uf_c$ terms when $u$ is near zero, in the same way as is done for the upwind method when the wave speed changes sign. The advantage of the pressure splitting method is that it requires relatively few arithmetic operations.
6.4.4. Lax-Friedrichs splitting. Finally we show how the Lax-Friedrichs scheme can be viewed as a flux splitting method. The numerical flux for a system is
$$h_{j+1/2} = \frac{1}{2}(f_{j+1} + f_j) - \frac{1}{2\lambda}(u_{j+1} - u_j)$$
which we split into two parts by defining
$$f^+(u) = \frac{1}{2}\left(f(u) + \frac{1}{\lambda}u\right) \qquad f^-(u) = \frac{1}{2}\left(f(u) - \frac{1}{\lambda}u\right)$$
where now $\lambda = \Delta t/\Delta x$.
Since the CFL condition $\lambda\max|a_k| \le 1$ is used, with $a_k$ the eigenvalues of the Jacobian matrix, we see that the matrices
$$\frac{\partial f^+}{\partial u} \qquad \frac{\partial f^-}{\partial u}$$
have positive and negative eigenvalues respectively. Thus the definition is reasonable. We can generalize this and define
$$f^+(u_j) = \frac{1}{2}\left(f(u_j) + \sigma_j u_j\right) \qquad f^-(u_j) = \frac{1}{2}\left(f(u_j) - \sigma_j u_j\right)$$
with $\sigma_j > 0$. If $\sigma_j = \max_k |a_k(u_j)|$, the largest eigenvalue of the Jacobian matrix at $u_j$, the scheme is called Rusanov's method.
All the methods described in this section are inferior in shock resolution to the methods in the previous section (Godunov, Roe and Osher). For example, some of the methods in this section do not admit a steady shock solution spread over a fixed number of grid points; instead all steady shocks will be smeared out over a large number of grid points. The better schemes do admit such steady shock profiles. The schemes in this section are however somewhat simpler to implement and use fewer arithmetic operations.
6.5. Interpretation as approximate Riemann solver
The schemes described in this chapter can be viewed as approximations of the Godunov scheme. We can construct methods in the same way as we defined the Godunov scheme in section 6.3, but with $w(x/t)$, the solution of the Riemann problem, replaced by an approximate solution of the same Riemann problem. Let
$$w((x - x_{j-1/2})/(t - t_n),\, u_{j-1},\, u_j)$$
be an approximate solution of the Riemann problem between the states $u_{j-1}$ and $u_j$. Assume the same set up as in the description of the Godunov scheme in section 6.3. We thus define the cell average $u_j^{n+1}$ on the new time level as
$$u_j^{n+1} = \frac{1}{\Delta x}\left(\int_{x_{j-1/2}}^{x_j} w((x - x_{j-1/2})/(t_{n+1} - t_n),\, u_{j-1},\, u_j)\, dx + \int_{x_j}^{x_{j+1/2}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx\right). \qquad (6.17)$$
First we give a necessary condition that such an approximate solver has to satisfy.
Theorem 6.5. If the approximate solution of the Riemann problem between $u_L$ and $u_R$ with jump at $x = 0$, $w(x/t, u_L, u_R)$, is consistent with the conservation law in the sense that
$$\int_{-\Delta x/2}^{\Delta x/2} w(x/\Delta t, u_L, u_R)\, dx = \frac{\Delta x}{2}(u_L + u_R) - \Delta t(f_R - f_L), \qquad (6.18)$$
then (6.17) defines a scheme in conservative form, consistent with the conservation law.
We do not give the proof of this theorem, but we derive a formula for the numerical flux associated with a given approximate Riemann solver.
Start by integrating around the rectangle $[x_j, x_{j+1/2}] \times [t_n, t_{n+1}]$ and set the integral equal to zero. We obtain
$$\int_{x_j}^{x_{j+1/2}} u_j^n\, dx - \int_{x_j}^{x_{j+1/2}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx - \int_{t_n}^{t_{n+1}} h_{j+1/2}\, dt + \int_{t_n}^{t_{n+1}} f(u_j^n)\, dt = 0$$
$$\Leftrightarrow \quad h_{j+1/2} = \frac{\Delta x}{2\Delta t}u_j^n + f_j - \frac{1}{\Delta t}\int_{x_j}^{x_{j+1/2}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx \qquad (6.19)$$
where the numerical flux $h_{j+1/2}$ is now the unknown. We only use the approximate Riemann solution on the time level $t_{n+1}$, but not in between the time levels. The formula (6.18) guarantees that if we integrate around the rectangle $[x_{j+1/2}, x_{j+1}] \times [t_n, t_{n+1}]$ instead, we
obtain the same numerical flux. Integrate
$$\int_{x_{j+1/2}}^{x_{j+1}} u_{j+1}^n\, dx - \int_{x_{j+1/2}}^{x_{j+1}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx - \int_{t_n}^{t_{n+1}} f(u_{j+1}^n)\, dt + \int_{t_n}^{t_{n+1}} h'_{j+1/2}\, dt = 0$$
$$\Leftrightarrow \quad h'_{j+1/2} = -\frac{\Delta x}{2\Delta t}u_{j+1}^n + f_{j+1} + \frac{1}{\Delta t}\int_{x_{j+1/2}}^{x_{j+1}} w((x - x_{j+1/2})/(t_{n+1} - t_n),\, u_j,\, u_{j+1})\, dx. \qquad (6.20)$$
If we combine (6.19) and (6.20), it is easy to see that (6.18) gives $h_{j+1/2} = h'_{j+1/2}$.
Note that we can not, as we did for the Godunov scheme, obtain the numerical flux as $f(w(0, u_{j-1}, u_j))$. This is because the solution between the time levels is not an exact solution of the continuous problem, and thus the contour integral (6.10) is not equal to zero. The numerical flux is uniquely determined from $w$ by the formula (6.19).
It is not hard to show that the upwind scheme can be obtained in this way, if we define $w(x/t, u_L, u_R)$ as the solution of the linearized Riemann problem
$$u_t + A(u_L, u_R)u_x = 0$$
$$u(0,x) = \begin{cases} u_L & x < 0 \\ u_R & x \ge 0 \end{cases}$$
where $A(u, v)$ is e.g., the Roe matrix (exercise).
Next we derive a simplified scheme for the Euler equations of gas dynamics. Assume an approximate solution of the following form
$$w(x/t, u_L, u_R) = \begin{cases} u_L & x < b_1 t \\ u_m & b_1 t \le x \le b_2 t \\ u_R & b_2 t < x; \end{cases}$$
we thus assume that there are two waves moving with speeds $b_1$ and $b_2$, and we require $b_1 < b_2$. The intermediate state $u_m$ is determined from the consistency condition (6.18), with the result
$$u_m = \frac{b_2 u_R - b_1 u_L}{b_2 - b_1} - \frac{f_R - f_L}{b_2 - b_1}.$$
By evaluating the integrals (6.19) we obtain the numerical flux for this method
$$h_{j+1/2} = \frac{b_{j+1/2}^+ f_j - b_{j+1/2}^- f_{j+1}}{b_{j+1/2}^+ - b_{j+1/2}^-} + \frac{b_{j+1/2}^+ b_{j+1/2}^-}{b_{j+1/2}^+ - b_{j+1/2}^-}(u_{j+1} - u_j)$$
where $b_{j+1/2}^+ = \max(b_{2,j+1/2}, 0)$ and $b_{j+1/2}^- = \min(b_{1,j+1/2}, 0)$. The wave speeds are parameters we can tune to obtain a method with desired properties. For the Euler equations, we can use the largest and smallest eigenvalues
$$b_{1,j+1/2} = u_{j+1/2} - c_{j+1/2} \qquad b_{2,j+1/2} = u_{j+1/2} + c_{j+1/2}$$
where $u_{j+1/2}$ and $c_{j+1/2}$ are the velocity and the speed of sound evaluated at an intermediate point. One reasonable choice is to use the averaging procedure in Roe's method, i.e.,
$$u_{j+1/2} = w_{1,j+1/2}u_j + w_{2,j+1/2}u_{j+1}$$
$$h_{j+1/2} = w_{1,j+1/2}h_j + w_{2,j+1/2}h_{j+1}$$
$$c_{j+1/2}^2 = (\gamma-1)(h_{j+1/2} - \tfrac{1}{2}u_{j+1/2}^2)$$
where $h$ is the enthalpy, $h = (p+e)/\rho$, and the weights are
$$w_{1,j+1/2} = \frac{\sqrt{\rho_j}}{\sqrt{\rho_j} + \sqrt{\rho_{j+1}}} \qquad w_{2,j+1/2} = \frac{\sqrt{\rho_{j+1}}}{\sqrt{\rho_j} + \sqrt{\rho_{j+1}}}$$
(c.f. section 6.3). This scheme is sometimes called the HLL scheme from the initials of its inventors (Harten, Lax, van Leer).
Note that the wave speeds $b_1 = -1/\lambda$ and $b_2 = 1/\lambda$ give the Lax-Friedrichs scheme.
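The two wave flux above can be sketched compactly. In the Python fragment below the wave speed estimates are taken as the simpler bounds $b_1 = \min(u_j - c_j, u_{j+1} - c_{j+1})$, $b_2 = \max(u_j + c_j, u_{j+1} + c_{j+1})$, rather than the Roe average described in the text; the assertions check consistency ($h(u,u) = f(u)$) and the pure upwind limit for supersonic flow:

```python
import numpy as np

gamma = 1.4

def flux(v):
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    return np.array([m, m * u + p, u * (e + p)])

def sound(v):
    rho, m, e = v
    u = m / rho
    p = (gamma - 1) * (e - 0.5 * rho * u**2)
    return np.sqrt(gamma * p / rho)

def hll(vL, vR):
    """HLL numerical flux with simple wave speed bounds."""
    uL, uR = vL[1] / vL[0], vR[1] / vR[0]
    b1 = min(uL - sound(vL), uR - sound(vR))
    b2 = max(uL + sound(vL), uR + sound(vR))
    bp, bm = max(b2, 0.0), min(b1, 0.0)
    return (bp * flux(vL) - bm * flux(vR)) / (bp - bm) \
        + bp * bm / (bp - bm) * (vR - vL)

v = np.array([1.0, 0.5, 2.625])       # rho=1, u=0.5, p=1
assert np.allclose(hll(v, v), flux(v))               # consistency
w = np.array([1.0, 5.0, 15.0])        # supersonic to the right: u - c > 0
assert np.allclose(hll(w, w * 0.9), flux(w))         # flux is the left flux
```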
6.6. Generalization to second and higher order of accuracy
The same ideas as were used in chapter 3 are here used for systems. Assume that a rst
order method is given. We obtain a second order method by using the numerical ux
hj +1=2 = h(uj +1 ; sj +1 =2 uj + sj =2)
where s are slopes of a piecewise linear reconstruction and where hnj+1=2 is the ux of
the rst order method. We can use inner ux limiters as described in chapter 3, adapted
to systems analogously, but here we only describe piecewise linear interpolation.
One additional diculty is to determine a good coordinate system for the interpolation. The following strategies are in use
(a) Do interpolation componentwise in the conserved variables ( m e).
(b) Do interpolation componentwise in the variables ( u p).
(c) For each uj dene the characteristic coordinate system spanned by the left eigenvectors of the jacobian A evaluated at uj , lk (uj ), k = 1 : : : m. All the interpolation
or limiting for the cell j is then made in this coordinate system. e.g., dene the
characteristic variables
cki = lk (uj )T uj+i i = ;1 0 1 k = 1 : : : m
and then the componentwise slopes
skj = B(ck1 ; ck0 ck0 ; ck;1)
where B(x y) is a limiter function described in chapter 3. Finally the slopes in the
original coordinates are obtained as
m
X
sj =
skj rk (uj )
k=1
with rk , the right eigenvectors to A.
(d) Use the Roe matrix and associated quantities. For a description of the matrix, see Section 6.3. The quantities \alpha^k_{j+1/2} (cf. Section 6.3) are used to represent \Delta_+ u_j in characteristic coordinates. The slope limiting can be defined as

s^k_j = B(\alpha^k_{j+1/2}, \alpha^k_{j-1/2})

with B(x, y) a limiter function. In the physical coordinates the slopes are added to u at the interfaces j + 1/2, and we thus take

u_j + s_j/2 = u_j + \frac{1}{2} \sum_{k=1}^m s^k_j r_{k,j+1/2}.

At the j - 1/2 interface we use instead the eigenvectors r_{k,j-1/2} of the matrix A_{j-1/2}.
Note that in (c) the coordinate system is kept fixed at a cell j when the slopes in j are computed, while in (d) the limiting is done on quantities belonging to different coordinate systems (e.g., j+3/2 and j+1/2).
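Strategy (c) can be sketched in a few lines. This is a minimal illustration with names of my own choosing: the right eigenvectors r_k of the local Jacobian are the columns of R, so the rows of R^{-1} are the left eigenvectors l_k, the characteristic variables are c = R^{-1} u, and the limiter B is taken to be minmod.

```python
import numpy as np

def minmod(x, y):
    """Minmod limiter B(x, y), applied componentwise."""
    return np.where(x * y > 0.0,
                    np.sign(x) * np.minimum(np.abs(x), np.abs(y)),
                    0.0)

def characteristic_slopes(u_prev, u_mid, u_next, R):
    """Strategy (c): limit in the characteristic coordinates of cell j.

    R holds the right eigenvectors r_k(u_j) as columns; R^{-1} has the
    left eigenvectors l_k(u_j) as rows."""
    L = np.linalg.inv(R)
    c_prev, c_mid, c_next = L @ u_prev, L @ u_mid, L @ u_next
    s_char = minmod(c_next - c_mid, c_mid - c_prev)  # one slope per wave family
    return R @ s_char  # slope s_j in the original (conserved) coordinates
```

With R equal to the identity this degenerates to componentwise limiting in the conserved variables, i.e., strategy (a).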
The scheme with outer limiter (3.30) can be generalized to systems in a similar way. We can use the Roe matrix decomposition or a fixed characteristic coordinate system.
Strategy (d) can also be viewed as a general way to generalize schemes for scalar problems to systems. All occurrences of \Delta_+ u_j in the scalar method are replaced by \alpha^k_{j+1/2} for all components k. Finally the numerical flux function h_{j+1/2} is evaluated using the coordinate system spanned by the r_{k,j+1/2}. This method is usually applied to the Lax-Wendroff type TVD schemes described in Chapter 3. We give an example to clarify this.
Example 6.1. The method with the following numerical flux is a second order ENO scheme based on the Lax-Wendroff scheme, derived for a scalar conservation law:

h^n_{j+1/2} = \frac{1}{2}(f(u^n_{j+1}) + f(u^n_j)) + \frac{1}{2} a^+_{j+1/2} (1 - \lambda a_{j-1/2}) / (1 + \lambda(a_{j+1/2} - a_{j-1/2})) s_j
            - \frac{1}{2} a^-_{j+1/2} (1 - \lambda a_{j+3/2}) / (1 + \lambda(a_{j+3/2} - a_{j+1/2})) s_{j+1}

where

s_j = minmod(\Delta_+ u^n_j, \Delta_- u^n_j).

Here a_{j+1/2} is as usual the local wave speed (cf. Chapter 2), and \lambda = \Delta t / \Delta x. The derivation is omitted, since we just want to show how to generalize this method to systems using (d) above. For systems the wave speeds are the eigenvalues of the Jacobian. We thus replace a_{j+1/2} with a^k_{j+1/2}, the eigenvalues of the Roe matrix. The coefficients \alpha^k_{j+1/2} are used instead of \Delta_+ u_j, since by definition

R \alpha = \Delta_+ u_j

or, written in vector form,

\sum_{k=1}^m \alpha^k_{j+1/2} r_{k,j+1/2} = \Delta_+ u_j.
The matrix R has the eigenvectors of the Roe matrix, r_{k,j+1/2}, as columns. The numerical flux for a system becomes

h^n_{j+1/2} = \frac{1}{2}(f_j + f_{j+1}) + \frac{1}{2} \sum_{k=1}^m ( a^{+,k}_{j+1/2} (1 - \lambda a^k_{j-1/2}) / (1 + \lambda(a^k_{j+1/2} - a^k_{j-1/2})) s^k_j
            - a^{-,k}_{j+1/2} (1 - \lambda a^k_{j+3/2}) / (1 + \lambda(a^k_{j+3/2} - a^k_{j+1/2})) s^k_{j+1} ) r_{k,j+1/2}

with the slopes limited as

s^k_j = minmod(\alpha^k_{j+1/2}, \alpha^k_{j-1/2}).

Here we use the notation a^{+,k}_{j+1/2} = max(0, a^k_{j+1/2}) and similarly for a^{-,k}_{j+1/2}.
It has been observed that the ENO scheme can give a solution with spurious oscillations if the interpolation is not made in the characteristic variables.
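The core of strategy (d) is the expansion R \alpha = \Delta_+ u_j followed by limiting per wave family. A small sketch, with function names of my own and minmod as the limiter B:

```python
import numpy as np

def wave_strengths(R, du):
    """Coefficients alpha with R @ alpha = du, i.e. the jump du expanded
    in the columns r_k of R (eigenvectors of the Roe matrix)."""
    return np.linalg.solve(R, du)

def minmod(x, y):
    """Minmod limiter, applied componentwise."""
    return np.where(x * y > 0.0,
                    np.sign(x) * np.minimum(np.abs(x), np.abs(y)),
                    0.0)

def limited_slopes(R_plus, du_plus, R_minus, du_minus):
    """s_j^k = minmod(alpha^k_{j+1/2}, alpha^k_{j-1/2}); note that the two
    strength vectors belong to different Roe matrices, as remarked above."""
    return minmod(wave_strengths(R_plus, du_plus),
                  wave_strengths(R_minus, du_minus))
```

By construction the strengths reproduce the jump exactly: R @ wave_strengths(R, du) returns du.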
6.7. Some test problems
In this section we have collected some test problems for the one dimensional Euler equations. We will always let u denote the vector with components (\rho, m, e): the density, momentum and total energy.
First we give some Riemann problems, which have become standard to use in tests of numerical methods. The first problem is
u_1(0, x) = (0.445, 0.311, 8.928)^T,  x < 0
          = (0.5, 0, 1.4275)^T,       x >= 0        (6.21)

The solution of this problem at a time t > 0 is given in Fig. 6.3.
[Figure: density, pressure and velocity profiles on -1 <= x <= 1.]
Fig. 6.3. Solution of Riemann problem (6.21).
A 1-rarefaction wave is travelling to the left. Travelling to the right, a 3-shock is followed by a contact discontinuity. Note that the pressure does not change across the contact discontinuity. The intermediate states are

u_1 = (0.345, 0.528, 6.571)^T,   u_2 = (1.304, 1.995, 7.693)^T

and the wave speeds

s = 2.480 for the shock,
u_c = 1.5290 for the contact discontinuity,
u_L - c_L = -2.6326, u_1 - c_1 = -1.6365 across the rarefaction wave.
Problem number two is the following
80 1 1
>
>
@ A
0
>
< 205
1
u2 (0 ) = 0
(6 22)
>
0
125
>
>
: @ 0 025 A 0
for which the solution is displayed in Fig. 6.4.
[Figure: density, pressure and velocity profiles on -1 <= x <= 1.]
Fig. 6.4. Solution of Riemann problem (6.22).
It has a similar structure to the previous problem, with a 1-rarefaction followed by a contact discontinuity and a 3-shock. The intermediate states are

u_1 = (0.42632, 0.39539, 0.94118)^T,   u_2 = (0.26557, 0.24631, 0.87204)^T
and the wave speeds

s = 1.75222 for the shock,
u_c = 0.92745 for the contact discontinuity,
u_L - c_L = -1.183216, u_1 - c_1 = -0.07027 across the rarefaction wave.
The second problem contains a rarefaction wave which is close to being transonic, and
is a good test for the entropy condition.
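The quoted intermediate states can be checked against the contact-discontinuity conditions: velocity and pressure must be continuous across the contact, and the velocity must equal the contact speed 0.92745. A short check, assuming an ideal gas with \gamma = 1.4 and a helper of my own naming:

```python
GAMMA = 1.4  # ratio of specific heats for the ideal gas

def primitives(rho, m, e):
    """Convert conserved (rho, m, e) to primitive (rho, u, p)."""
    u = m / rho
    p = (GAMMA - 1.0) * (e - 0.5 * m * u)
    return rho, u, p

# Intermediate states of Riemann problem (6.22), as quoted in the text.
_, v1, p1 = primitives(0.42632, 0.39539, 0.94118)
_, v2, p2 = primitives(0.26557, 0.24631, 0.87204)
# v1 and v2 agree with the contact speed 0.92745, and p1 equals p2.
```

This is exactly the statement above that the pressure does not change across the contact discontinuity.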
These two problems can usually be solved without difficulties, although the quality of the solution of course differs between methods. A more difficult problem is the following:
u_3(0, x) = (1, -2, 3)^T,   x < 0
          = (1, 2, 3)^T,    x >= 0        (6.23)

The solution is displayed in Fig. 6.5.
[Figure: density, velocity and pressure profiles on -1 <= x <= 1.]
Fig. 6.5. Solution of Riemann problem (6.23).
The intermediate state is

(0.02185, 0, 0.004735)^T.

The state of low density and pressure will sometimes lead to difficulties with negative pressure. Of the schemes described in this chapter, only the Godunov scheme and the P-version of the Osher scheme can solve this problem without crashing because of negative pressure.
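The failure mode is easy to state: the pressure recovered from the conserved variables goes nonpositive. A small check (ideal gas, \gamma = 1.4, function name mine) applied to the intermediate state of (6.23):

```python
GAMMA = 1.4  # ratio of specific heats for the ideal gas

def pressure(rho, m, e):
    """Pressure from the conserved variables (rho, m, e); a nonpositive
    value here is what makes schemes crash on problems like (6.23)."""
    return (GAMMA - 1.0) * (e - 0.5 * m * m / rho)

# The intermediate state quoted above: tiny but still positive pressure.
p_star = pressure(0.02185, 0.0, 0.004735)
```

Any scheme whose update undershoots this already tiny value produces a negative pressure and an imaginary sound speed.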
Another common test problem is the so called blast wave problem. This problem is defined on 0 <= x <= 1 with solid walls at x = 0 and x = 1. The initial data is

u_4(0, x) = (1, 0, 2500)^T,   0 < x < 0.1
          = (1, 0, 0.025)^T,  0.1 < x < 0.9        (6.24)
          = (1, 0, 250)^T,    0.9 < x < 1
At the walls the boundary conditions for the velocity,

u(t, 0) = 0,   u(t, 1) = 0,

are imposed. A shock wave and a rarefaction wave are formed; at t = 0.038 they have interacted to produce the solution in Fig. 6.6
[Figure: density profile on 0 <= x <= 1.]
Fig. 6.6. Solution of the blast wave problem (6.24) at t = 0.038.
after one reflection at the boundary. The very large differences in pressure and the more complex structure of the solution make this a more challenging problem than a single Riemann problem.
The fifth problem is

u_5(0, x) = (3.857143, 10.141852, 39.166666)^T,   x < -4
          = (1 + \epsilon \sin 5x, 0, 2.5)^T,      x >= -4        (6.25)

Here one shock wave interacts with a sine wave of small amplitude. If \epsilon = 0 this is a shock wave moving to the right. Usually one takes \epsilon = 0.2. The solution for this choice is
[Figure: density profile on -5 <= x <= 5.]
Fig. 6.7. Solution of the oscillatory problem (6.25).
The difficulty lies in resolving the oscillations that are formed behind the shock wave in the density. This is a good test for resolution; usually the higher order accurate ENO methods perform much better than second order TVD methods on this problem.
The solutions of the last two problems cannot be found analytically, but in one space dimension it is possible to obtain a converged solution by using a very large number of grid points. We do this to find the "exact" solution for comparison.
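For convenience, the Riemann test data above can be collected in one place. A sketch (names and layout mine) that stores the left/right conserved states of problems (6.21)-(6.23) and samples the piecewise constant initial data on a grid:

```python
import numpy as np

# Left/right conserved states (rho, m, e) of Riemann problems (6.21)-(6.23),
# as given in the text above.
RIEMANN_DATA = {
    "6.21": ((0.445, 0.311, 8.928), (0.5, 0.0, 1.4275)),
    "6.22": ((1.0, 0.0, 2.5), (0.125, 0.0, 0.25)),
    "6.23": ((1.0, -2.0, 3.0), (1.0, 2.0, 3.0)),
}

def initial_data(name, x):
    """Sample u(0, x) on a grid x; rows are the states at each grid point."""
    left, right = (np.array(s) for s in RIEMANN_DATA[name])
    x = np.asarray(x, dtype=float)
    return np.where(x[:, None] < 0.0, left, right)
```

A scheme under test is then started from initial_data(name, x) and compared against the wave speeds and intermediate states listed above.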
Exercises
1. Solve the linear Riemann problem

ut + Aux = 0
u(0, x) = u_L,  x < 0
        = u_R,  x >= 0

where A is a constant matrix with a basis of eigenvectors.
2. We define an approximate Riemann solver w as the solution of the Riemann problem for a linear equation with the Roe matrix as coefficient matrix. The techniques in Section 6.4 are used to define a difference method from this approximate Riemann solver. Show that the method thus obtained is the same as Roe's method, described in Section 6.2.
3. Give an example which shows that for the Osher scheme applied to the Euler equations

h_{j+1/2} != f_j

even if all wave speeds in u_{j+1} and u_j are positive.