One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes

J Sci Comput
DOI 10.1007/s10915-014-9946-6
Jennifer K. Ryan · Xiaozhou Li ·
Robert M. Kirby · Kees Vuik
Received: 14 January 2014 / Revised: 17 October 2014 / Accepted: 2 November 2014
© Springer Science+Business Media New York 2014
Abstract In this paper, we introduce a new position-dependent smoothness-increasing
accuracy-conserving (SIAC) filter that retains the benefits of position dependence as proposed in van Slingerland et al. (SIAM J Sci Comput 33:802–825, 2011) while ameliorating
some of its shortcomings. As in the previous position-dependent filter, our new filter can
be applied near domain boundaries, near a discontinuity in the solution, or at the interface
of different mesh sizes; and as before, in general, it numerically enhances the accuracy and
increases the smoothness of approximations obtained using the discontinuous Galerkin (dG)
method. However, the previously proposed position-dependent one-sided filter had two significant disadvantages: (1) increased computational cost (in terms of function evaluations),
brought about by the use of 4k + 1 central B-splines near a boundary (leading to increased
kernel support) and (2) increased numerical conditioning issues that necessitated the use of
quadruple precision for polynomial degrees of k ≥ 3 for the reported accuracy benefits to be
realizable numerically. Our new filter addresses both of these issues—maintaining the same
support size and with similar function evaluation characteristics as the symmetric filter in
Jennifer K. Ryan, Xiaozhou Li: Supported by the Air Force Office of Scientific Research (AFOSR), Air
Force Material Command, USAF, under Grant No. FA8655-13-1-3017.
Robert M. Kirby: Supported by the Air Force Office of Scientific Research (AFOSR), Computational
Mathematics Program (Program Manager: Dr. Fariba Fahroo), under Grant No. FA9550-08-1-0156.
J. K. Ryan (B)
School of Mathematics, University of East Anglia, Norwich NR4 7TJ, UK
e-mail: Jennifer.Ryan@uea.ac.uk
X. Li · K. Vuik
Delft Institute of Applied Mathematics, Delft University of Technology, 2628 CD Delft, The Netherlands
e-mail: X.Li-2@tudelft.nl
R. M. Kirby
School of Computing, University of Utah, Salt Lake City, UT, USA
e-mail: kirby@cs.utah.edu
K. Vuik
e-mail: C.Vuik@tudelft.nl
a way that has better numerical conditioning—making it, unlike its predecessor, amenable
for GPU computing. Our new filter was conceived by revisiting the original error analysis
for superconvergence of SIAC filters and by examining the role of the B-splines and their
weights in the SIAC filtering kernel. We demonstrate, in the uniform mesh case, that our
new filter is globally superconvergent for k = 1 and superconvergent in the interior (i.e., the
region excluding the boundary) for k ≥ 2. Furthermore, we present the first theoretical
proof of superconvergence for postprocessing over smoothly varying meshes, and explain
the accuracy-order conserving nature of this new filter when applied to certain non-uniform
mesh cases. We provide numerical examples supporting our theoretical results and demonstrating that our new filter, in general, enhances the smoothness and accuracy of the solution.
Numerical results are presented for solutions of both linear and nonlinear equations solved
on both uniform and non-uniform one- and two-dimensional meshes.
Keywords Discontinuous Galerkin method · Post-processing · SIAC filtering ·
Superconvergence · Uniform meshes · Smoothly varying meshes · Non-uniform meshes
1 Introduction
Computational considerations are always a concern when dealing with the implementation
of numerical methods that claim to have practical (engineering) value. The focus of this
paper is the previously introduced smoothness-increasing accuracy-conserving (SIAC) class of
filters, which exhibit superconvergence behavior when applied to discontinuous
Galerkin (dG) solutions. Although the previously proposed position-dependent filter (which
we will, henceforth, call the SRV filter) introduced in van Slingerland et al. [20] met its stated
goals of demonstrating superconvergence, it contained two deficiencies which often made it
impractical for implementation and usage within engineering scenarios. The first deficiency
of the SRV filter was its reliance on 4k + 1 central B-splines, which increased both the width of
the stencil generated and the computational cost (in terms of function evaluations)
by a disproportionate amount compared to the symmetric SIAC filter. The second deficiency
is one of numerical conditioning: the SRV filter requires the use of quadruple precision to
obtain consistent and meaningful results, which makes it unsuitable for practical CPU-based
computations and for GPU computing. In this paper, we introduce a position-dependent SIAC
filter that, like the SRV filter, allows for one-sided post-processing to be used near boundaries
and solution discontinuities and which exhibits superconvergent behavior; however, our new
filter addresses the two stated deficiencies: it has a smaller spatial support with a reduced
number of function evaluations, and it does not require extended precision for error reduction
to be realized.
To give context to what we will propose, let us review how we arrived at the currently
available and used one-sided filter given in van Slingerland et al. [20]. The SIAC filter has
its roots in the finite element superconvergence extraction technique for elliptic equations
proposed by Bramble and Schatz [1], Mock and Lax [12] and Thomée [19]. The linear
hyperbolic system counterpart for discontinuous Galerkin (dG) methods was introduced by
Cockburn et al. [5] and extended to more general applications in [9–11,13,14,18]. The post-processing technique can enhance the accuracy order of dG approximations from k + 1 to
2k + 1 in the L^2-norm. This symmetric post-processor uses 2k + 1 central B-splines of
order k + 1. However, a limitation of this symmetric post-processor was that it required a
symmetric amount of information around the location being post-processed. To overcome
this problem, Ryan and Shu [15] used the same ideas in Cockburn et al. [5] to develop a
one-sided post-processor that could be applied near boundaries and discontinuities in the
exact solution. However, their results were not very satisfactory: the errors had a stair-stepping-type structure, and the errors themselves were not reduced when the post-processor
was applied to some dG solutions over coarse meshes. Later, van Slingerland et al. [20] recast
this formulation as a position-dependent SIAC filter by introducing a smooth shift function
λ(x̄) that aided in redefining the filter nodes and helped to ease the stair-stepping structure of the errors. In an attempt to reduce the errors, the authors doubled to 4k + 1 the
number of central B-splines used in the filter when near a boundary. Further, they introduced
a convex function that allowed for a smooth transition between boundary and symmetric
regions.
The results obtained with this strategy were good for linear hyperbolic equations over
uniform meshes, but new challenges arose. Issues were manifest when the position-dependent
filter was applied to equations whose solution lacked the (high) degree of regularity required
for valid post-processing. In some cases, this filter applied to certain dG solutions gave worse
results than the original one-sided filter, which used only 2k + 1 central B-splines [15].
Furthermore, it was observed that in order for the superconvergence properties expressed in
van Slingerland et al. [20] to be fully realized, extended precision (beyond traditional double
precision) had to be used. Lastly, the addition of more B-splines did not come without cost.
Figure 1 shows the difference between the symmetric filter, which was introduced in [1,5]
and is applied in the domain interior, and the SRV filter when applied to the left boundary. The
solution being filtered is at x = 0, and the filter extends into the domain. Upon examination,
one sees that the position-dependent filter has a larger filter support and, by necessity, is not
symmetric near the boundary. The vast discrepancy in spatial extent is due to the number
of B-splines used: 2k + 1 in the symmetric case versus 4k + 1 in the one-sided case. The
practical implications of this discrepancy are two-fold: (1) filtering at the boundary with the
4k + 1 filter is noticeably more costly (in terms of function evaluations) than filtering in the
interior with the symmetric filter; and (2) the spatial extent of the one-sided filter forces one
to use the one-sided filter over a larger area/volume before being able to transition over to
the symmetric filter.
Fig. 1 Comparison of (a) the symmetric filter centered around x = 0 when the kernel is applied in the domain
interior and (b) the position-dependent filter at the boundary (represented by x = 0) before convolution with
a quadratic approximation. Notice that the boundary filter requires a larger spatial support, its amplitude is
significantly larger in magnitude, and it does not emphasize the point x = 0, which is the point being
post-processed
Recall that the SRV filter added two new features when attempting to improve the original Ryan and Shu one-sided filter: the authors added position-dependence to the filter and
increased the number of B-splines used. In attempting to overcome the deficiencies of the
filter in van Slingerland et al. [20], we reverted to the 2k +1 B-spline approach as in Ryan and
Shu [15] but added position-dependence. Although going back to 2k +1 B-splines does make
the filter less costly in the function evaluation sense, unfortunately, this approach did not lead
to a filter that reduced the errors in the way we had hoped (i.e., at least error-order conserving,
if not also superconvergent). We were forced to reconsider altogether how superconvergence
is obtained in the SIAC filter context; we outline a sketch of our thinking chronologically in
what follows.
To conceive of a new one-sided filter containing all the benefits mentioned above with
none of the deficiencies, we had to harken back to the fundamentals of SIAC filtering. We
not only examined the filter itself (e.g., its construction, etc.), but also the error analysis in
[5,8]. From the analysis in [5,8] we can conclude that the main source of the aforementioned
conditioning issue (expressed in terms of the necessity for increased precision) is the constant
term found in the expression for the error, which relies on the B-spline coefficients c_γ^{(2k+1,ℓ)} in the SIAC filtering kernel,

∑_γ | c_γ^{(2k+1,ℓ)} |.

This quantity increases when the number of B-splines is increased. Further, the condition number of the system used to calculate c_γ^{(2k+1,ℓ)} becomes quite large, on the order of 10^24 for P^4, increasing the possibility for round-off errors and thus requiring higher levels of
precision. As mentioned before, we attempted to still use 2k + 1 position-dependent central
B-splines, but this approach did not lead to error reduction. Indeed, the constant in the error
term remains quite large at the boundaries. To reduce the error term, we concluded that one
needed to add to the set of central B-splines the following: one non-central B-spline near
the boundary. This general B-spline allows the filter to maintain the same spatial support
throughout the domain, including boundary regions; it provides only a slight increase in
computational cost as there are now 2k + 2 B-splines to evaluate as part of the filter; and
possibly most importantly, it allows for error reduction. We note that our modifications to the
previous filter (e.g., going beyond central B-splines) do come with a compromise: we must
relax the assumption of obtaining superconvergence. Instead, we merely require a reduction
in error and a smoother solution. This new filter remains globally superconvergent for k = 1.
The new contributions of this paper are:
• A new one-sided position-dependent SIAC filter that allows filtering up to boundaries and
that ameliorates the two principal deficiencies identified in the previous 4k + 1 one-sided
position-dependent filter;
• Examination and documentation of the reasoning concerning the constant term in the error
analysis that led to the proposed work;
• Demonstration that for the linear polynomial case the filtered approximation is always
superconvergent for a uniform mesh; and
• Application of the scaled new filter to both smoothly varying and non-uniform (random)
meshes. In the smoothly varying case, we prove and demonstrate that we obtain superconvergence. For the general non-uniform case, we still observe significant improvement in
the smoothness and an error reduction over the original dG solution, although full superconvergence is not always achieved. We show, however, that we remain accuracy-order
conserving.
These important results are presented as follows: first, as we present the SIAC filter in the
context of discontinuous Galerkin approximations, we review the important properties of the
dG method and the position-dependent SIAC filter in Sect. 2. In Sect. 3, we introduce the
newly proposed filter and further establish some theoretical error estimates for the uniform
and non-uniform (smoothly varying) cases. We present the numerical results over uniform and
non-uniform one-dimensional mesh structures in Sect. 4 and two-dimensional quadrilateral
mesh structures in Sect. 5. Finally, conclusions are given in Sect. 6.
2 Background
In this section, we present the relevant background for understanding how to improve the
SIAC filter, which includes the important properties of the discontinuous Galerkin (dG)
method that make the application of SIAC filtering attractive as well as the building blocks
of SIAC filtering—B-splines and the symmetric filter.
2.1 Important Properties of Discontinuous Galerkin Methods
We frame the discussion of the properties of dG methods in the context of a one-dimensional
problem as the ideas easily extend to multiple dimensions. Further details about the discontinuous Galerkin method can be found in [2–4].
Consider a one-dimensional hyperbolic equation such as

u_t + a_1 u_x + a_0 u = 0,   x ∈ Ω = [x_L, x_R],   (2.1)
u(x, 0) = u_0(x).   (2.2)

To obtain a dG approximation, we first decompose Ω as Ω = ∪_{j=1}^{N} I_j, where I_j = [x_{j−1/2}, x_{j+1/2}] = [x_j − (1/2)Δx_j, x_j + (1/2)Δx_j]. Then Eq. (2.1) is multiplied by a test function and integrated by parts. The test function is chosen from the same function space as the trial functions, a piecewise polynomial basis. The approximation can then be written as

u_h(x, t) = ∑_{ℓ=0}^{k} u_j^{(ℓ)}(t) φ_j^{(ℓ)}(x),   for x ∈ I_j.

Herein, we choose the basis functions φ_j^{(ℓ)}(x) = P^{(ℓ)}(2(x − x_j)/Δx_j), where P^{(ℓ)} is the Legendre polynomial of degree ℓ over [−1, 1]. For simplicity, throughout this paper we represent polynomials of degree less than or equal to k by P^k.
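At a fixed time, the dG coefficients u_j^{(ℓ)} are just scaled Legendre moments of the solution on each element. As a small illustration (our own sketch, not code from the paper; function names are ours), the following computes the L^2 projection onto this piecewise Legendre basis, which is also the standard choice for the discrete initial condition u_h(·, 0); halving h should reduce the error by roughly 2^{k+1}:

```python
import numpy as np

def legendre_projection(f, a, b, N, k):
    """L2-project f onto piecewise polynomials of degree k on a uniform
    N-element mesh of [a, b], using the scaled Legendre basis
    phi_j^(l)(x) = P^(l)(2(x - x_j)/dx)."""
    dx = (b - a) / N
    q, w = np.polynomial.legendre.leggauss(k + 3)  # reference-element quadrature
    coeffs = np.zeros((N, k + 1))
    for j in range(N):
        xj = a + (j + 0.5) * dx                    # element midpoint x_j
        fq = f(xj + 0.5 * dx * q)                  # f at mapped quadrature points
        for l in range(k + 1):
            Pl = np.polynomial.legendre.Legendre.basis(l)(q)
            # orthogonality on [-1, 1]: integral of P^(l)^2 is 2/(2l + 1)
            coeffs[j, l] = 0.5 * (2 * l + 1) * np.dot(w, fq * Pl)
    return coeffs

def eval_projection(coeffs, a, b, x):
    """Evaluate the piecewise-polynomial projection at the points x."""
    N, kp1 = coeffs.shape
    dx = (b - a) / N
    j = np.clip(((x - a) // dx).astype(int), 0, N - 1)
    xi = 2.0 * (x - (a + (j + 0.5) * dx)) / dx     # map to [-1, 1]
    vals = np.zeros_like(xi)
    for l in range(kp1):
        vals += coeffs[j, l] * np.polynomial.legendre.Legendre.basis(l)(xi)
    return vals
```

For a smooth function such as sin(2πx) with k = 1, doubling N from 8 to 16 reduces the maximum error by a factor close to 2^{k+1} = 4.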
In order to investigate the superconvergence property of the dG solution, it is important
to look at the usual convergence rate of the dG method. By estimating the error of the dG
solution, we obtain ‖u − u_h‖ ∼ O(h^{k+1}) in the L^2-norm for sufficiently smooth initial data u_0 [2]:

‖u − u_h‖_0 ≤ C h^{k+1} ‖u_0‖_{H^{k+2}},

where h is the measure of the elements, h = Δx for a uniform mesh and h = max_j Δx_j for
non-uniform meshes. Another useful property is the superconvergence of the dG solution in
the negative-order norm [5], where we have

‖∂_h^α (u − u_h)‖_{−(k+1),Ω} ≤ C h^{2k+1} ‖∂_h^α u_0‖,
for linear hyperbolic equations. This expression explains why accuracy enhancement
through post-processing is possible. Unfortunately, this superconvergence property does not
hold for non-uniform meshes when α ≥ 1, which makes extracting the superconvergence over
non-uniform meshes challenging. However, we prove that, for certain non-uniform meshes
containing smoothness in their construction (i.e., smoothly varying meshes), the accuracy
enhancement through the SIAC filtering is still possible.
2.2 A Review of B-splines
As the SIAC filter relies heavily on B-splines, here we review the definition of B-splines
given by de Boor in [7] as well as central B-splines. The reader is also directed to [17] for
more on splines.
Definition 2.1 (B-spline) Let t := (t_j) be a nondecreasing sequence of real numbers that
create a so-called knot sequence. The jth B-spline of order ℓ for the knot sequence t is
denoted by B_{j,ℓ,t} and is defined, for ℓ = 1, by the rule

B_{j,1,t}(x) = { 1,  t_j ≤ x < t_{j+1};
                0,  otherwise.

In particular, t_j = t_{j+1} leads to B_{j,1,t} = 0. For ℓ > 1,

B_{j,ℓ,t}(x) = ω_{j,ℓ,t} B_{j,ℓ−1,t} + (1 − ω_{j+1,ℓ,t}) B_{j+1,ℓ−1,t},   (2.3)

with

ω_{j,ℓ,t}(x) = (x − t_j) / (t_{j+ℓ−1} − t_j).
This notation will be used to create a new kernel near the boundaries.
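Definition 2.1 translates directly into code. The following recursive sketch (our own illustration, not from the paper) drops terms whose knot span has zero width, consistent with the convention that t_j = t_{j+1} gives B_{j,1,t} = 0:

```python
def bspline(j, ell, t, x):
    """Evaluate B_{j,ell,t}(x) by the recurrence of Definition 2.1
    (ell is the order; the spline has degree ell - 1)."""
    if ell == 1:
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    val = 0.0
    # omega_{j,ell,t}(x) = (x - t_j) / (t_{j+ell-1} - t_j); terms with a
    # zero-width denominator are dropped (their B-spline factor vanishes)
    if t[j + ell - 1] > t[j]:
        val += (x - t[j]) / (t[j + ell - 1] - t[j]) * bspline(j, ell - 1, t, x)
    if t[j + ell] > t[j + 1]:
        val += (t[j + ell] - x) / (t[j + ell] - t[j + 1]) * bspline(j + 1, ell - 1, t, x)
    return val
```

For the knot sequence t = (0, 1, 2) this produces the familiar hat function, peaking at 1 at the middle knot.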
The original symmetric filter [5,16] relied on central B-splines of order ℓ whose knot
sequence was uniformly spaced and symmetrically distributed, t = (−ℓ/2, −(ℓ−2)/2, ..., (ℓ−2)/2, ℓ/2),
yielding the following recurrence relation for central B-splines:

ψ^{(1)}(x) = χ_{[−1/2,1/2]}(x),
ψ^{(ℓ+1)}(x) = (ψ^{(1)} ⋆ ψ^{(ℓ)})(x)
            = (1/ℓ) [ ( (ℓ+1)/2 + x ) ψ^{(ℓ)}(x + 1/2) + ( (ℓ+1)/2 − x ) ψ^{(ℓ)}(x − 1/2) ],   ℓ ≥ 1.   (2.4)
For the purposes of this paper, it is convenient to relate the recurrence relation for central B-splines to the definition of general B-splines given in Definition 2.1. This can be done by defining t = (t_0, ..., t_ℓ) to be a knot sequence and denoting by ψ_t^{(ℓ)}(x) the 0th B-spline of order ℓ for the knot sequence t,

ψ_t^{(ℓ)}(x) = B_{0,ℓ,t}(x).

Note that the knot sequence t also represents the so-called breaks of the B-spline. The B-spline in the region [t_i, t_{i+1}), i = 0, ..., ℓ − 1, is a polynomial of degree ℓ − 1, but in
the entire support [t_0, t_ℓ] the B-spline is a piecewise polynomial. When the knots (t_j) are
sampled in a symmetric and equidistant fashion, the B-spline is called a central B-spline.
Notice that Eq. (2.4) for a central B-spline is a subset of the general B-spline definition where
the knots are equally spaced. This new notation provides more flexibility than the previous
central B-spline notation.
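The recurrence (2.4) is just as direct to implement. This sketch (ours, for illustration; the indicator is taken half-open for well-definedness) evaluates ψ^{(ℓ)} pointwise; a quick consistency check is the partition of unity satisfied by the integer translates of a central B-spline:

```python
def psi(ell, x):
    """Central B-spline of order ell via the recurrence (2.4):
    psi^(1) = chi_[-1/2, 1/2), and
    psi^(ell)(x) = ( (ell/2 + x) psi^(ell-1)(x + 1/2)
                   + (ell/2 - x) psi^(ell-1)(x - 1/2) ) / (ell - 1)."""
    if ell == 1:
        return 1.0 if -0.5 <= x < 0.5 else 0.0
    return ((ell / 2 + x) * psi(ell - 1, x + 0.5)
            + (ell / 2 - x) * psi(ell - 1, x - 0.5)) / (ell - 1)
```

For example, ψ^{(2)} is the hat function with ψ^{(2)}(0) = 1, and ψ^{(3)}(0) = 3/4.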
2.3 Position-Dependent SIAC Filtering
The original position-dependent SIAC filter is a convolution of the dG approximation with a
central B-spline kernel,

u^*(x̄) = ( K_h^{(2k+1,ℓ)} ⋆ u_h )(x̄).   (2.5)

The convolution kernel is given by

K^{(2k+1,ℓ)}(x) = ∑_{γ=0}^{2k} c_γ^{(2k+1,ℓ)} ψ^{(ℓ)}(x − x_γ),   (2.6)

where 2k + 1 represents the number of central B-splines, ℓ the order of the B-splines, and
K_h = (1/h) K(x/h). The coefficients c_γ^{(2k+1,ℓ)} are obtained from the property that the kernel
reproduces polynomials of degree ≤ 2k. For the symmetric central B-spline filter [5,16],
ℓ = k + 1 and x_γ = −k + γ, where k is the highest degree of the polynomial used in the dG
approximation. More explicitly, the symmetric kernel is given by

K^{(2k+1,ℓ)}(x) = ∑_{γ=0}^{2k} c_γ^{(2k+1,ℓ)} ψ^{(ℓ)}(x − (−k + γ)).   (2.7)
Note that this kernel is by construction symmetric and uses an equal amount of information
from the neighborhood around the point being post-processed. While being symmetric is
suitable in the interior domain when the function is smooth, it is not suitable for application
near a boundary, or when the solution contains a discontinuity.
The one-sided position-dependent SRV filter defined in van Slingerland et al. [20] is called
“position-dependent” because of its change of support according to the location of the point
being post-processed. For example, near a boundary or discontinuity, a translation of the
filter is done so that the support of the kernel remains inside the domain. Furthermore, in
these regions, a greater number of central B-splines is required. Using more B-splines aids in
improving the magnitude of the errors near the boundary, while allowing superconvergence.
In addition, the authors in van Slingerland et al. [20] increased the number of B-splines
used in the construction of the kernel to be 4k + 1. The position-dependent (SRV) filter for
elements near the boundaries can then be written as¹

K^{(4k+1,ℓ)}(x) = ∑_{γ=0}^{4k} c_γ^{(4k+1,ℓ)} ψ^{(ℓ)}(x − x_γ),   (2.8)

where x_γ depends on the location of the evaluation point x̄ used in Eq. (2.5) and at the
boundaries is given by

x_γ = −4k + γ + λ(x̄),

with

λ(x̄) = { min( 0, −(4k + ℓ)/2 + (x̄ − x_L)/h ),   x̄ ∈ [x_L, (x_L + x_R)/2);
         max( 0,  (4k + ℓ)/2 + (x̄ − x_R)/h ),   x̄ ∈ [(x_L + x_R)/2, x_R].   (2.9)

Here x_L and x_R are the left and right boundaries, respectively.
¹ Note that the notation used in the current manuscript is slightly different from the notation used in van
Slingerland et al. [20]. Instead of using r₂ = 4k to denote the SRV filter, we chose to use the number of
B-splines directly for the clarity of the discussion.
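In code, the shift function λ(x̄) of Eq. (2.9) is a pair of clamps; this sketch (ours, with illustrative argument names) is zero wherever the shifted kernel support fits inside the domain and ramps to −(4k + ℓ)/2 at the left boundary and +(4k + ℓ)/2 at the right:

```python
def srv_shift(xbar, xL, xR, h, k, ell):
    """lambda(xbar) from Eq. (2.9): zero in the interior, clamped near the
    boundaries so the 4k+1-spline kernel support stays inside [xL, xR]."""
    if xbar < 0.5 * (xL + xR):
        return min(0.0, -(4 * k + ell) / 2 + (xbar - xL) / h)
    return max(0.0, (4 * k + ell) / 2 + (xbar - xR) / h)
```

For k = 1, ℓ = 2 on [0, 1] with h = 0.01, the shift is −3 at x̄ = 0, zero in the interior, and +3 at x̄ = 1.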
The authors chose 4k + 1 central B-splines because, in their experience, using fewer
(central) B-splines was insufficient for enhancing the error. Furthermore, in order to provide
a smooth transition from the boundary kernel to the interior kernel, a convex combination of
the two kernels was used:
u_h^*(x) = θ(x) ( K_h^{(2k+1,ℓ)} ⋆ u_h )(x) + (1 − θ(x)) ( K_h^{(4k+1,ℓ)} ⋆ u_h )(x),   (2.10)

where θ(x) ∈ C^{k−1} such that θ = 1 in the interior and θ = 0 in the boundary regions. This
position-dependent filter demonstrated better behavior in terms of error than the original
one-sided filter given by Ryan and Shu in [15]. Throughout the article, we will refer to the
position-dependent filter using 4k + 1 central B-splines as the SRV filter.
3 Proposed One-Sided Position-Dependent SIAC Filter
In this section, we propose a new one-sided position-dependent filter for application near
boundaries. We first discuss the deficiencies in the current position-dependent SIAC filter.
We then propose a new position-dependent filter that ameliorates the deficiencies of the SRV
filter; however, our new filter must make some compromises with regards to superconvergence
(which will be discussed). Lastly, we prove that our new filter is globally superconvergent
for k = 1 and superconvergent in the interior of the domain for k ≥ 2.
3.1 Deficiencies of the Previous Position-Dependent SIAC Filter
The SRV filter was reported to reduce the errors when filtering near a boundary. However,
applying this filter to higher-order dG solutions (e.g., P^4- or even P^3-polynomials in some
cases) required using a multi-precision package (or at least quadruple precision) to reduce
round-off error, leading to significantly increased computational time. Figure 2 shows the significant round-off error near the boundaries when using double precision for post-processing
the initial condition. The multi-precision requirement also makes the position-dependent
kernel [20] unsuitable for GPU computing near the boundaries.
To discover why this challenge arises requires revisiting the foundations of the filter—in
particular, the existing error estimates. The L^2-error estimate given in Cockburn et al. [5],
Fig. 2 Comparison of the pointwise errors in log scale of (a) the original L^2 projection solution and (b) the SRV
filter in van Slingerland et al. [20] for the 2D L^2 projection using basis polynomials of degree k = 4 on an
80 × 80 mesh. Double precision was used in these computations
‖u − u_h^*‖_{0,Ω} ≤ C h^{2k+1},   (3.1)

provides us insight into the cause of the issue by examining the constant C in more detail.
The constant C depends on

κ^{(r+1,ℓ)} = ∑_{γ=0}^{r} | c_γ^{(r+1,ℓ)} |,   (3.2)

where c_γ denotes the kernel coefficients and the value of r depends on the number of B-splines used to construct the kernel. The kernel coefficients are obtained by ensuring that the
kernel reproduces polynomials of degree r through the convolution:

K^{(r+1,ℓ)}(x) ⋆ x^p = x^p,   p = 0, 1, ..., r.   (3.3)
(3.3)
We note that for the error estimates to hold, it is enough to ensure that the kernel reproduces
polynomials up to degree 2k (for r ≥ 2k), although near the boundaries it was required that
the kernel reproduces polynomials of degree 4k (r = 4k) in [8,20]. For filtering near the
boundary, the value κ defined in Eq. (3.2) is on the order of 10^5 for k = 3 and 10^7 for k = 4,
as can be seen in Fig. 4. This indicates one possible avenue (i.e., lowering κ) by which we
might generate an improved filter. A second avenue is by investigating the round-off error
stemming from the large condition number of the linear system generated to satisfy Eq. (3.3)
and solved to find the kernel coefficients. The condition number of the generated matrix is on
the order of 10^24 for k = 4. This leads to significant round-off error (e.g., the rule of thumb
in this particular case being that 24 digits of accuracy are lost due to the conditioning of
this system), hence requiring the use of high-precision/extended precision libraries for SIAC
filtering to remain accurate (in both its construction and usage).
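Both observations can be reproduced with a short computation. The sketch below (our own illustration, not the authors' code) assembles the linear system of Eq. (3.3) in moment form: since (K ⋆ x^p)(y) = ∑_m C(p, m) y^{p−m} ∫ K(x)(−x)^m dx, reproducing polynomials up to degree r is equivalent to K having unit mass and vanishing moments 1 through r. It uses the fact that ψ^{(ℓ)} is the density of a sum of ℓ independent Uniform(−1/2, 1/2) variables; the node layouts are illustrative stand-ins for the symmetric and one-sided kernels:

```python
import numpy as np
from math import comb

def central_moments(ell, max_m):
    """Moments mu_m = int psi^(ell)(x) x^m dx of the order-ell central
    B-spline, via its characterization as the density of a sum of ell
    independent Uniform(-1/2, 1/2) variables."""
    nu = [(0.5 ** m) / (m + 1) if m % 2 == 0 else 0.0 for m in range(max_m + 1)]
    mu = nu[:]
    for _ in range(ell - 1):  # convolve in ell - 1 more uniforms
        mu = [sum(comb(m, i) * mu[i] * nu[m - i] for i in range(m + 1))
              for m in range(max_m + 1)]
    return mu

def kernel_coefficients(nodes, ell):
    """Coefficients c_gamma for a kernel of order-ell central B-splines
    centered at `nodes`, from the reproduction conditions K * x^p = x^p,
    p = 0..r (equivalently: unit mass and vanishing moments 1..r)."""
    r = len(nodes) - 1
    mu = central_moments(ell, r)
    # A[p, g] = int psi^(ell)(x - x_g) x^p dx = sum_m C(p, m) x_g^(p-m) mu_m
    A = np.array([[sum(comb(p, m) * xg ** (p - m) * mu[m] for m in range(p + 1))
                   for xg in nodes] for p in range(r + 1)])
    rhs = np.zeros(r + 1)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs), np.linalg.cond(A)

# symmetric kernel, k = 1: 2k+1 = 3 splines of order ell = k+1, nodes -k+gamma
c_sym, cond_sym = kernel_coefficients([-1.0, 0.0, 1.0], 2)
kappa_sym = np.abs(c_sym).sum()  # the constant of Eq. (3.2)
# one-sided 4k+1-spline layout for k = 4 (illustrative nodes, order ell = 5)
c_srv, cond_srv = kernel_coefficients([float(g - 16) for g in range(17)], 5)
```

The 3 × 3 symmetric system returns the classical coefficients (−1/12, 7/6, −1/12) with κ = 4/3, while the 17 × 17 one-sided system is worse conditioned by many orders of magnitude, in line with the figures quoted above.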
The requirement of using extended precision in our computations increases the computational cost. In addition, the aforementioned discrepancy in the spatial extent of the filters due
to the number of B-splines used—2k + 1 in the symmetric case versus 4k + 1 in the one-sided
case—leads to the boundary filter costing even more due to extra function evaluations. These
extra function evaluations have led us to reconsider the position-dependent filter and propose
a better conditioned and less computationally intensive alternative.
In order to apply SIAC filters near boundaries, we first no longer restrict ourselves to
using only central B-splines. Secondly, we seek to maintain a constant support size for both
the interior of the domain and the boundaries. The idea we propose is to add one general
B-spline for boundary regions, which is located within the already defined support size.
Using general B-splines provides greater flexibility and improves the numerical evaluation
(eliminating the explicit need for precision beyond double precision). To introduce our new
position-dependent one-sided SIAC filter, we discuss the one-dimensional case and how to
modify the current definition of the SIAC filter. Multi-dimensional SIAC filters are a tensor
product of the one-dimensional case.
3.2 The New Position-Dependent One-Sided Kernel
Before we provide the definition of the new position-dependent one-sided kernel, we first
introduce a new concept, that of a knot matrix. The definition of a knot matrix helps us to
introduce the new position-dependent one-sided kernel in a concise and compact form. It
also aids in demonstrating the differences between the new position-dependent kernel, the
symmetric kernel and the SRV filter. Informally, the idea behind introducing a knot matrix is to
exploit the definition of B-splines in terms of their corresponding knot sequence t := (t j ), in
the definition of the SIAC filter. In order to introduce a knot matrix, we will use the following
notation: ψ_t^{(ℓ)}(x) = B_{0,ℓ,t}(x) for the zeroth B-spline of order ℓ with knot sequence t.

Definition 3.1 (Knot matrix) A knot matrix, T, is an n × m matrix such that the γth row,
T(γ), of the matrix T is a knot sequence with ℓ + 1 elements (i.e., m = ℓ + 1) that are used
to create the B-spline ψ_{T(γ)}^{(ℓ)}(x). The number of rows n is specified based on the number of
B-splines used to construct the kernel.
To provide some context for the necessity of the definition of a knot matrix, we first
redefine some of the previously discussed SIAC kernels in terms of their knot matrices. Recall
that the general definition of the SIAC kernel relies on r + 1 central B-splines of order ℓ.
Therefore, we can use Definition 3.1 to rewrite the symmetric kernel given in Eq. (2.7) in
terms of a knot matrix as follows:

K_{Tsym}^{(2k+1,ℓ)}(x) = ∑_{γ=0}^{2k} c_γ^{(2k+1,ℓ)} ψ_{Tsym(γ)}^{(ℓ)}(x),   (3.4)

where Tsym in this relation is a (2k + 1) × (ℓ + 1) matrix. Each row in Tsym corresponds to the
knot sequence of one of the constituent B-splines in the symmetric kernel. More specifically,
the elements of Tsym are defined as

Tsym(i, j) = −ℓ/2 + j + i − k,   i = 0, ..., 2k;  j = 0, ..., ℓ.

For instance, for the first-order symmetric SIAC kernel we have ℓ = 2 and k = 1. Therefore,
the corresponding knot matrix can be defined as

Tsym = ( −2 −1  0
         −1  0  1
          0  1  2 ).   (3.5)
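The element formula for Tsym is easy to sanity-check programmatically; this small sketch (ours, for illustration) reproduces the matrix of Eq. (3.5) for k = 1, ℓ = 2:

```python
def symmetric_knot_matrix(k, ell):
    """Knot matrix of the symmetric kernel, Tsym(i, j) = -ell/2 + j + i - k:
    one row of ell + 1 knots per central B-spline, 2k + 1 rows in total."""
    return [[-ell / 2 + j + i - k for j in range(ell + 1)]
            for i in range(2 * k + 1)]
```

Successive rows are unit shifts of each other, so the kernel support is [−ℓ/2 − k, ℓ/2 + k].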
The definition of a SIAC kernel in terms of a knot matrix can also be used to rewrite the
original boundary filter [15], which uses only 2k + 1 central B-splines at the left boundary.
The knot matrix Tone for this case is given by

Tone = ( −4 −3 −2
         −3 −2 −1
         −2 −1  0 ).   (3.6)
Now we can define our new position-dependent one-sided kernel by generating a knot
matrix. The new position-dependent one-sided kernel consists of r + 1 = 2k + 1 central
B-splines and one general B-spline, and hence the knot matrix is of size (2k + 2) × (ℓ + 1).
At a high level, using the scaling of the kernel, the new position-dependent one-sided kernel
can be written as

K_{hT}^{(r+1,ℓ)}(x) = ∑_{γ=0}^{r+1} c_γ^{(r+1,ℓ)} ψ_{hT(γ)}^{(ℓ)}(x),   (3.7)

where T(γ) represents the γth row of the knot matrix T, which is the knot sequence
T(γ, 0), ..., T(γ, ℓ). For the central B-splines, γ = 0, ..., 2k, and

ψ_{hT(γ)}^{(ℓ)}(x) = (1/h) ψ_{T(γ)}^{(ℓ)}(x/h).
The added B-spline is a monomial defined as

ψ_{hT(r+1)}^{(ℓ)}(x) = (1/h) x_{T(r+1)}^{ℓ−1}(x/h),

where

x_{T(r+1)}^{ℓ−1} = { (x − T(r+1, 0))^{ℓ−1},   T(r+1, 0) ≤ x ≤ T(r+1, ℓ);
                    0,   otherwise.

Therefore, near the left boundary, the kernel in Eq. (3.7) can be rewritten as

K_{hT}^{(r+1,ℓ)}(x) = ∑_{γ=0}^{r} c_γ^{(r+1,ℓ)} ψ_{hT(γ)}^{(ℓ)}(x) + c_{r+1}^{(r+1,ℓ)} ψ_{hT(r+1)}^{(ℓ)}(x),   (3.8)

where the sum is the position-dependent kernel built from r + 1 = 2k + 1 central B-splines
and the last term is the general B-spline.
The kernel coefficients c_γ^{(r+1,ℓ)}, γ = 0, ..., r + 1, are obtained through reproducing polynomials of degree up to r + 1. We have imposed further restrictions on the knot matrix for the
definition of the new position-dependent one-sided kernel. First, for convenience we require

T(γ, 0) ≤ T(γ, 1) ≤ ··· ≤ T(γ, ℓ),   for γ = 0, ..., r + 1,

and

T(γ + 1, 0) ≤ T(γ, ℓ),   for γ = 0, ..., r.

Second, the knot matrix, T, should satisfy

T(0, 0) ≥ (x̄ − x_R)/h   and   T(r, ℓ) ≤ (x̄ − x_L)/h,
where h is the element size in a uniform mesh. This requirement is derived from the support
of the B-spline as well as the support needing to remain inside the domain. Recall that
the support of the B-spline ψ_{T(γ)}^{(ℓ)} is [T(γ, 0), T(γ, ℓ)], and the support of the kernel is
[T(0, 0), T(r, ℓ)]. For any x̄ ∈ [x_L, x_R], the post-processed solution at point x̄ can then be
written as

u^*(x̄) = ( K_{hT}^{(r+1,ℓ)} ⋆ u_h )(x̄) = ∫_{−∞}^{∞} K_{hT}^{(r+1,ℓ)}(x̄ − ξ) u_h(ξ) dξ
       = ∫_{x̄ − hT(r,ℓ)}^{x̄ − hT(0,0)} K_{hT}^{(r+1,ℓ)}(x̄ − ξ) u_h(ξ) dξ,   (3.9)
where hT represents the scaled knot matrix. For the boundary regions, we force the interval
[x̄ − hT(r, ℓ), x̄ − hT(0, 0)] to be inside the domain Ω = [x_L, x_R]. This implies that

x_L ≤ x̄ − hT(r, ℓ),   x̄ − hT(0, 0) ≤ x_R,

and hence the requirement that T(0, 0) ≥ (x̄ − x_R)/h and T(r, ℓ) ≤ (x̄ − x_L)/h. Finally, we require that
the kernel remain as symmetric as possible. This means the knots should be chosen as

Left:  T ← T − ( T(r, ℓ) − (x̄ − x_L)/h ),   for (x̄ − x_L)/h < (3k + 1)/2,   (3.10)
Right: T ← T − ( T(0, 0) − (x̄ − x_R)/h ),   for (x_R − x̄)/h < (3k + 1)/2.   (3.11)
This shifting will increase the error, and it is therefore still necessary to increase the number
of B-splines used in the kernel.
Because the symmetric filter yields superconvergence results, we wish to retain the original
form of the kernel as much as possible. Near the boundary, where the symmetric filter cannot
be applied, we keep the 2k + 1 shifted central B-splines and add only one general B-spline.
To avoid increasing the spatial support of the filter, we will choose the knots of this general
B-spline dependent upon the knots of the 2k + 1 central B-splines in the following way:
near the left boundary, we let the first 2k + 1 B-splines be central B-splines whereas the last
B-spline will be a general spline. The elements of the knot matrix are then given by
$$T(i,j) = \begin{cases} i + j - r - \ell + \dfrac{\bar{x} - x_L}{h}, & 0 \le i \le r,\ 0 \le j \le \ell; \\[4pt] \dfrac{\bar{x} - x_L}{h} - 1, & i = r+1,\ j = 0; \\[4pt] \dfrac{\bar{x} - x_L}{h}, & i = r+1,\ j = 1, \ldots, \ell. \end{cases} \qquad (3.12)$$
Similarly, we can design the new kernel near the right boundary, where the general B-spline is given by
$$\psi^{(\ell)}_{T(0)}(x) = \begin{cases} \left( T(0,\ell) - x \right)^{\ell-1}, & T(0,0) \le x \le T(0,\ell); \\ 0, & \text{otherwise}. \end{cases}$$
The elements of the knot matrix for the right boundary kernel are defined as
$$T(i,j) = \begin{cases} \dfrac{\bar{x} - x_R}{h}, & i = 0,\ j = 0, \ldots, \ell-1; \\[4pt] \dfrac{\bar{x} - x_R}{h} + 1, & i = 0,\ j = \ell; \\[4pt] i + j - 1 + \dfrac{\bar{x} - x_R}{h}, & 1 \le i \le r+1,\ 0 \le j \le \ell, \end{cases} \qquad (3.13)$$
and the form of the kernel is then
$$K^{(r+1,\ell)}_{hT}(x) = c^{(r+1,\ell)}_0 \psi^{(\ell)}_{hT(0)}(x) + \sum_{\gamma=1}^{r+1} c^{(r+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x). \qquad (3.14)$$
We note that this “extra” B-spline is used only when $\frac{\bar{x} - x_L}{h} < \frac{3k+1}{2}$ or $\frac{x_R - \bar{x}}{h} < \frac{3k+1}{2}$; otherwise, the symmetric central B-spline kernel is used.
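Both the central B-splines and the general B-spline with repeated knots can be evaluated from their knot rows with the standard Cox–de Boor recursion. The sketch below is an illustration under that standard recursion, not the authors' implementation; 0/0 terms are taken as 0 so that the repeated knots of the general B-spline are handled correctly:

```python
def bspline(t, x):
    """Evaluate the B-spline of order ell = len(t) - 1 (degree ell - 1) with
    knot vector t at the point x, via the Cox-de Boor recursion.  Divisions
    by zero arising from repeated knots are treated as contributing 0."""
    ell = len(t) - 1
    if ell == 1:                        # order-1 B-spline: indicator of [t0, t1)
        return 1.0 if t[0] <= x < t[1] else 0.0
    left = right = 0.0
    if t[ell - 1] > t[0]:
        left = (x - t[0]) / (t[ell - 1] - t[0]) * bspline(t[:-1], x)
    if t[ell] > t[1]:
        right = (t[ell] - x) / (t[ell] - t[1]) * bspline(t[1:], x)
    return left + right
```

With the repeated-knot row (0, 0, 1) of the right-boundary general B-spline for ℓ = 2, this evaluates to (1 − x) on [0, 1], matching the closed form above.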
We present a concrete example for the $P^1$ case with $\ell = 2$. In this case, the knot matrices for our newly proposed filter at the left and right boundaries are
$$T_{\text{Left}} = \begin{pmatrix} -4 & -3 & -2 \\ -3 & -2 & -1 \\ -2 & -1 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad T_{\text{Right}} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 2 & 3 \\ 2 & 3 & 4 \end{pmatrix}. \qquad (3.15)$$
These new knot matrices are 4×3 matrices where, in the case of the filter for the left boundary,
the first three rows express the knots of the three central B-splines and the last row expresses
the knots of the general B-spline. For the filter applied to the right boundary, the first row
expresses the knots of the general B-spline and the last three rows express the knots of the
central B-splines.
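The knot-matrix construction of Eqs. (3.12) and (3.13) can be sketched as follows (hypothetical helper names; s denotes the scaled boundary distance, (x̄ − x_L)/h at the left boundary or (x̄ − x_R)/h at the right):

```python
def left_knot_matrix(k, s):
    """Knot matrix (3.12) at the left boundary; s = (xbar - x_L)/h.
    Rows 0..r hold the 2k+1 shifted central B-splines; the last row holds
    the general B-spline, with an ell-fold repeated knot at s."""
    ell, r = k + 1, 2 * k
    rows = [[i + j - r - ell + s for j in range(ell + 1)] for i in range(r + 1)]
    rows.append([s - 1] + [s] * ell)
    return rows

def right_knot_matrix(k, s):
    """Knot matrix (3.13) at the right boundary; s = (xbar - x_R)/h."""
    ell, r = k + 1, 2 * k
    rows = [[s] * ell + [s + 1]]                       # general B-spline first
    rows += [[i + j - 1 + s for j in range(ell + 1)] for i in range(1, r + 2)]
    return rows
```

For k = 1 and a point on the boundary (s = 0), these reproduce the two matrices of Eq. (3.15).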
Fig. 3 Comparison of the two boundary filters before convolution. (a) The SRV kernel and (b) the new kernel
at the boundary with k = 2. The boundary is represented by x = 0. The new filter has reduced support and
magnitude
If we use the same form of the knot matrix to express the SRV kernel introduced in van Slingerland et al. [20] at the left boundary for $k = 1$, we would have
$$T_{\text{SRV}} = \begin{pmatrix} -6 & -5 & -4 \\ -5 & -4 & -3 \\ -4 & -3 & -2 \\ -3 & -2 & -1 \\ -2 & -1 & 0 \end{pmatrix}. \qquad (3.16)$$
Comparing the new knot matrix with the one used to obtain the SRV filter, we can see that
they have the same number of columns, which indicates that they use the same order of
B-splines. There are fewer rows in the new matrix (2k + 2) than the number of rows from
the original position-dependent filter (4k + 1). This indicates that the new filter uses fewer
B-splines than the SRV filter.
To compare the new filter and the SRV filter, we plot the kernels used at the left boundary for
k = 2. Figure 3 illustrates that the new position-dependent SIAC kernel places more weight
on the evaluation point than the SRV kernel, and the SRV kernel has a significantly larger
magnitude and support which we observed to cause problems, especially for higher-order
polynomials (such as $P^3$ or $P^4$). For this example, using the filter for quadratic approximations, the original position-dependent SIAC filter ranges from −150 to 150, versus −6 to 6 for the newly proposed filter.
3.3 Theoretical Results in the Uniform Case
The previous section introduced a new filter to reduce the errors of dG approximations
while attempting to ameliorate the issues concerning the old filter. In this section, we discuss
the theoretical results for the newly defined boundary kernel. Specifically, for k = 1 it is
globally superconvergent of order three. For higher degree polynomials, it is possible to
obtain superconvergence only in the interior of the domain.
Recall from Eq. (3.7) that the new one-sided scaled kernel has the form
$$K^{(r+1,\ell)}_{hT}(x) = \sum_{\gamma=0}^{r+1} c^{(r+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x). \qquad (3.17)$$
In the interior of the domain the symmetric SIAC kernel is used, which consists of 2k + 1 central B-splines:
$$K^{(2k+1,\ell)}_{hT}(x) = \sum_{\gamma=0}^{2k} c^{(2k+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x), \qquad (3.18)$$
and, near the left boundary, the new one-sided kernel can be written as
$$K^{(r+1,\ell)}_{hT}(x) = \left( \sum_{\gamma=0}^{r} c^{(r+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x) \right) + c^{(r+1,\ell)}_{r+1} \psi^{(\ell)}_{hT(r+1)}(x), \qquad r = 2k,$$
where 2k + 1 central B-splines are used together with one general B-spline. The scaled kernel $K^{(r+1,\ell)}_{hT}$ has the property that the convolution $K^{(r+1,\ell)}_{hT} \star u_h$ only uses information inside the domain $\Omega$.
Theorem 3.1 Let u be the exact solution to the linear hyperbolic equation
$$u_t + \sum_{i=1}^{d} A_i u_{x_i} + A_0 u = 0, \quad x \in \Omega \times (0,T], \qquad u(x,0) = u_0(x), \quad x \in \Omega, \qquad (3.19)$$
where the initial condition $u_0(x)$ is a sufficiently smooth function. Here, $\Omega \subset \mathbb{R}^d$. Let $u_h$ be the numerical solution to Eq. (3.19), obtained using a discontinuous Galerkin scheme with an upwind flux over a uniform mesh with mesh spacing h. Let $u_h^{\star}(\bar{x}) = (K^{(r+1,\ell)}_{hT} \star u_h)(\bar{x})$ be the solution obtained by applying our newly proposed filter, which uses r + 1 = 2k + 1 central B-splines of order $\ell = k + 1$ and one general B-spline in the boundary regions. Then the SIAC-filtered dG solution has the following properties:
(i) $\|(u - u_h^{\star})(\bar{x})\|_{0,\Omega} \le C h^3$ for k = 1. That is, $u_h^{\star}(\bar{x})$ is globally superconvergent of order three for linear approximations.
(ii) $\|(u - u_h^{\star})(\bar{x})\|_{0,\Omega \setminus \mathrm{supp}\{K_s\}} \le C h^{r+1}$ when $r + 1 \le 2k + 1$ central B-splines are used in the kernel. Here $\mathrm{supp}\{K_s\}$ represents the support of the symmetric kernel. Thus, $u_h^{\star}(\bar{x})$ is superconvergent in the interior of the domain.
(iii) $\|(u - u_h^{\star})(\bar{x})\|_{0,\Omega} \le C h^{k+1}$ globally.
Proof We omit the proofs of properties (i) and (ii), as they are similar to the proofs in Cockburn et al. [5] and Ji et al. [8]. Instead we concentrate on $\|(u - u_h^{\star})(\bar{x})\| \le C h^{k+1}$, which is rather straightforward.
Consider the one-dimensional case (d = 1). Then the error can be written as
$$\left\| u - K^{(r+1,\ell)}_{hT} \star u_h \right\|_{0,\Omega} \le \Theta_{h,1} + \Theta_{h,2},$$
where
$$\Theta_{h,1} = \left\| u - K^{(r+1,\ell)}_{hT} \star u \right\|_{0,\Omega} \quad \text{and} \quad \Theta_{h,2} = \left\| K^{(r+1,\ell)}_{hT} \star (u - u_h) \right\|_{0,\Omega}.$$
The proof of higher-order convergence for the first term, $\Theta_{h,1}$, is the same as in Cockburn et al. [5], as the requirement on $K_{hT}$ does not change (reproduction of polynomials up to degree r + 1). This means that
$$\Theta_{h,1} \le \frac{h^{r+1}}{(r+1)!} C_1 |u|_{r+1,\Omega}.$$
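The coefficients $c_\gamma$ are fixed by the polynomial reproduction requirement: the kernel must have unit mass and vanishing higher moments (for the symmetric kernel of Eq. (3.18) with k = 1, three conditions, through degree 2k = 2). The following is a minimal numerical sketch of that moment system for the symmetric k = 1 kernel (three linear B-splines centered at −1, 0, 1); it is an illustration, not the authors' code:

```python
import numpy as np

GAUSS_X, GAUSS_W = np.polynomial.legendre.leggauss(6)

def hat(c, x):
    """Linear B-spline (order 2) with knots (c - 1, c, c + 1)."""
    return np.maximum(0.0, 1.0 - np.abs(x - c))

def moment(c, m):
    """Moment  int hat_c(x) x^m dx, exact via Gauss quadrature per knot span."""
    total = 0.0
    for a in (c - 1.0, c):                  # the two polynomial pieces
        x = 0.5 * (GAUSS_X + 1.0) + a       # map [-1, 1] -> [a, a + 1]
        total += 0.5 * np.sum(GAUSS_W * hat(c, x) * x**m)
    return total

# Unit mass and vanishing first and second moments:
# sum_g c_g * moment(center_g, m) = delta_{m,0},  m = 0, 1, 2.
centers = [-1.0, 0.0, 1.0]
M = np.array([[moment(c, m) for c in centers] for m in range(3)])
coeffs = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))
# coeffs ~ [-1/12, 7/6, -1/12], the classical k = 1 symmetric kernel weights
```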
Now consider the second term, $\Theta_{h,2}$. Without loss of generality, we consider the filter for the left boundary in order to estimate $\Theta_{h,2}$; the proofs for the filter in the interior and at the right boundary are similar. We use the form of the kernel given in Eq. (3.8), which decomposes the new filter into two parts: 2k + 1 central B-splines and one general B-spline. That is, we write
$$K^{(r+1,\ell)}_{hT}(x) = \underbrace{\left( \sum_{\gamma=0}^{r} c^{(r+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x) \right)}_{\text{central B-splines}} + \underbrace{c^{(r+1,\ell)}_{r+1} \psi^{(\ell)}_{hT(r+1)}(x)}_{\text{general B-spline}}.$$
Setting $e(x) = u(x) - u_h(x)$, then
$$\Theta_{h,2} = \left\| K^{(r+1,\ell)}_{hT} \star e \right\|_{0,\Omega} \le \left\| K^{(r+1,\ell)}_{hT} \right\|_{L^1} \|e\|_{0} \le \sup_{x \in \Omega} \left( \sum_{\gamma=0}^{r} \left| c^{(r+1,\ell)}_{\gamma} \right| + \left| c^{(r+1,\ell)}_{2k+1} \right| \right) \|e\|_{0}.$$
Hence
$$\Theta_{h,2} \le C \sup_{x \in \Omega} \left( \sum_{\gamma=0}^{r} \left| c^{(r+1,\ell)}_{\gamma} \right| + \left| c^{(r+1,\ell)}_{2k+1} \right| \right) h^{k+1}.$$
Remark 3.1 Note that in this analysis we steered away from the negative-order norm argument. Technically, the terms involving the central B-splines have a convergence rate of $r + 1 \le 2k + 1$, as given in [5,8]. It is the new addition, the term involving the general B-spline, that presents the limitation and reduces the convergence rate to that of the dG approximation itself (i.e., it is accuracy-order conserving).
To extend this to the multidimensional case (d > 1), given an arbitrary $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$, we set
$$\psi^{(\ell)}_{T(\gamma)}(x) = \prod_{i=1}^{d} \psi^{(\ell)}_{T(\gamma)}(x_i).$$
The filter for the multidimensional space considered is of the form
$$K^{(r+1,\ell)}_{hT}(x) = \sum_{\gamma=0}^{r+1} c^{(r+1,\ell)}_{\gamma} \psi^{(\ell)}_{hT(\gamma)}(x),$$
where the coefficients $c^{(\ell)}_{\gamma}$ are tensor products of the one-dimensional coefficients. To emphasize the general B-spline used near the boundary, we assume, without loss of generality, that the general B-spline is needed in the $x_{k_1}, \ldots, x_{k_{d_0}}$ directions, where $0 \le d_0 \le d$. Then
$$\psi^{(\ell)}_{hT(2k+1)} = \prod_{i=1}^{d_0} \psi^{(\ell)}_{hT(2k+1)}(x_{k_i}).$$
By applying the same idea we used for the one-dimensional case, the theorem also holds in the multidimensional case.
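The tensor-product construction above amounts to one line of code; the sketch below uses the linear B-spline as an illustrative stand-in for $\psi^{(\ell)}$:

```python
import numpy as np

def hat(x):
    """1D linear B-spline with knots (-1, 0, 1), a stand-in for psi."""
    return float(np.maximum(0.0, 1.0 - np.abs(x)))

def tensor_bspline(point):
    """Multidimensional B-spline as the product of 1D B-splines,
    psi(x) = prod_i psi(x_i), as used for the multivariate kernel."""
    return float(np.prod([hat(xi) for xi in point]))
```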
We note that the constant in the final estimate is a product of two other constants: one of them is determined by the filter, $\sup_{x \in \Omega} \big( \sum_{\gamma=0}^{r} |c^{(r+1,\ell)}_{\gamma}| + |c^{(r+1,\ell)}_{r+1}| \big)$, and the other one is determined
Fig. 4 Plots demonstrating the effect of the coefficients on the error estimate for (a) $P^3$- and (b) $P^4$-polynomials. Shown is $\sum_{\gamma=0}^{r} |c^{(r+1,\ell)}_{\gamma}|$ using 2k + 1 central B-splines (blue dashed), using 4k + 1 central B-splines (green dash dot-dot), and $\sum_{\gamma=0}^{r} |c^{(r+1,\ell)}_{\gamma}| + |c^{(r+1,\ell)}_{r+1}|$ for the new filter (red line)
by the dG approximation. To further illustrate the necessity of examining the part of the constant in the error term that is contributed by the filter, we provide Fig. 4. This figure demonstrates the difference between $\sum_{\gamma=0}^{r} |c^{(r+1,\ell)}_{\gamma}|$ for the previously introduced filters and our new filter, for which the constant is modified to $\sum_{\gamma=0}^{r} |c^{(r+1,\ell)}_{\gamma}| + |c^{(r+1,\ell)}_{r+1}|$. In Fig. 4, one can clearly see that by adding a general B-spline to the r + 1 central B-splines, we are able, in the boundary regions, to reduce the constant in the error term significantly.
3.4 Theoretical Results in the Non-uniform (Smoothly Varying) Case
In this section, we give a theoretical interpretation of the computational results presented in Curtis et al. [6]. This is done by applying the newly proposed filter to non-uniform meshes and showing that the new position-dependent filter maintains the superconvergence property in the interior of the domain for smoothly varying meshes and is accuracy-order conserving near the boundaries for non-uniform meshes. We begin by defining what we mean by a smoothly varying mesh.
Definition 3.2 (Smoothly varying mesh) Let $\xi$ be a variable defined over a uniform mesh on a domain $\Omega \subset \mathbb{R}$; then a smoothly varying mesh defined over $\Omega$ is a non-uniform mesh whose variable x satisfies
$$x = \xi + f(\xi), \qquad (3.20)$$
where f is a sufficiently smooth function that satisfies
$$f'(\xi) > -1, \qquad \xi \in \partial\Omega \iff \xi + f(\xi) \in \partial\Omega.$$
For example, we can choose $f(\xi) = 0.5\sin(\xi)$ over $[0, 2\pi]$. The multidimensional definition can be given in the same way.
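For this example the conditions of the definition can be checked directly; the snippet below (illustrative only) builds the mesh and its Jacobian:

```python
import numpy as np

# Example of Definition 3.2 with f(xi) = 0.5*sin(xi) on [0, 2*pi]:
# f'(xi) = 0.5*cos(xi) > -1, so 1 + f' > 0 and the map is monotone,
# and f vanishes at the endpoints, so boundary points map to boundary points.
xi = np.linspace(0.0, 2.0 * np.pi, 41)    # uniform mesh with 40 elements
x = xi + 0.5 * np.sin(xi)                 # smoothly varying mesh (Mesh 4.1 below)
jacobian = 1.0 + 0.5 * np.cos(xi)         # 1 + f'(xi), bounded away from zero
```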
Lemma 3.2 Let u be the exact solution of a linear hyperbolic equation
$$u_t + \sum_{n=1}^{d} A_n u_{x_n} + A_0 u = 0, \quad x \in \Omega \times (0,T], \qquad (3.21)$$
with a sufficiently smooth initial function and $\Omega \subset \mathbb{R}^d$. Let $\xi$ be the variable for the uniform mesh defined on $\Omega$ with size h, and let x be the variable of the smoothly varying mesh defined in (3.20). Let $u_h(\xi)$ be the numerical solution to Eq. (3.21) over the uniform mesh $\xi$, and $u_h(x)$ the approximation over the smoothly varying mesh x, both obtained by using the discontinuous Galerkin scheme. Then the post-processed solutions obtained by applying the SIAC filter $K_h(\xi)$ to $u_h(\xi)$ and $K_H(x)$ to $u_h(x)$, with a proper scaling H, are related by
$$\|u(x) - K_H \star u_h(x)\|_{0,\Omega} \le C \|u(\xi) - K_h \star u_h(\xi)\|_{0,\Omega}.$$
Here, the filter K can be any filter mentioned in the previous section (the symmetric filter, the SRV filter, or the newly proposed position-dependent filter). Note that this means we obtain the full 2k + 1 superconvergence rate behavior for both the SRV and symmetric filters.
Proof The proof is straightforward. If the scaling H is properly chosen, a simple mapping can be done from the smoothly varying mesh to the corresponding uniform mesh. The result holds if the Jacobian is bounded (by the definition of a smoothly varying mesh):
$$\|u(x) - K_H \star u_h(x)\|^2_{0,\Omega} = \int_{\Omega} \left( u(x) - K_H \star u_h(x) \right)^2 dx$$
$$\overset{x \to \xi}{=} \int_{\tilde{\Omega}} \left( u(\xi) - K_h \star u_h(\xi) \right)^2 \left( 1 + f'(\xi) \right) d\xi \le \|u(\xi) - K_h \star u_h(\xi)\|^2_{0,\tilde{\Omega}} \cdot \max \left| 1 + f'(\xi) \right|.$$
According to the definition of a smoothly varying mesh, $\Omega = \tilde{\Omega}$, and we have
$$\|u(x) - K_H \star u_h(x)\|_{0,\Omega} \le C \|u(\xi) - K_h \star u_h(\xi)\|_{0,\Omega},$$
where $C = \max |1 + f'|^{1/2}$.
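The inequality in the proof can also be checked numerically; the sketch below uses a hypothetical stand-in function g for the filtered error $u - K_h \star u_h$ and a hand-rolled trapezoid rule:

```python
import numpy as np

def integ(y, x):
    """Composite trapezoid rule (a portable stand-in for numpy's trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Check ||e||_{0, x-mesh} <= max|1 + f'|^{1/2} * ||e||_{0, xi-mesh} with
# f(xi) = 0.5*sin(xi) and an arbitrary stand-in error function g(xi).
xi = np.linspace(0.0, 2.0 * np.pi, 20001)
fp = 0.5 * np.cos(xi)                          # f'(xi)
g = np.exp(-xi) * np.sin(5.0 * xi)             # stand-in for u - K_h * u_h

lhs = np.sqrt(integ(g**2 * (1.0 + fp), xi))    # norm after the substitution
rhs = np.sqrt(np.max(1.0 + fp)) * np.sqrt(integ(g**2, xi))
```

Since $1 + f'(\xi)$ never exceeds its maximum, the weighted integral is bounded term by term, so `lhs` cannot exceed `rhs`.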
Remark 3.2 The proof seems obvious, but it is important to choose a proper scaling H in the computations. Due to the smoothness and computational cost requirements, we need to keep H constant when treating points within the same element. Under this condition, the natural choice is $H = \Delta x_j$ when post-processing the element $I_j$. It is then easy to see that there exists a position c in the element $I_j$ such that
$$H = \Delta x_j = h \left( 1 + f'(c) \right).$$
Remark 3.3 Note that Theorem 3.1 (iii) still holds for generalized non-uniform meshes. This is because the proof does not rely on the properties (i.e., the structure) of the mesh.
We have now shown that superconvergence can be achieved for interior solutions over
smoothly varying meshes. In the subsequent sections, we present numerical results that
confirm our results on uniform and non-uniform (smoothly varying) meshes.
4 Numerical Results for One Dimension
The previous section introduced a new SIAC kernel by adding a general B-spline to a modified
central B-spline kernel. The addition of a general B-spline helps to maintain a consistent
support size for the kernel throughout the domain and eliminates the need for a multi-precision
package. This section illustrates the performance of the new position-dependent SIAC filter
on one-dimensional uniform and non-uniform (smoothly varying and random) meshes. We
compare our results to the SRV filter [20]. In order to provide a fair comparison between
123
J Sci Comput
the SRV and new filters, we mainly show the results using quadruple precision for a few
one-dimensional cases. Furthermore, in order to reduce the computational cost of the filter
that uses 4k +1 central B-splines, we neglect to implement the convex combination described
in Eq. (2.10). This is not necessary for the new filter, and was implemented in the SRV filter
to ensure the transition from the one-sided filter to the symmetric filter was smooth.
This is the first time that the position-dependent filters have been tested on non-uniform meshes. Although tests were performed using the scalings $H = \Delta x_j$ and $H = \max_j \Delta x_j$, we only present the results using the scaling $H = \Delta x_j$. This scaling provides better results in the boundary regions, which is one of the motivations of this paper. We note that the errors produced using the scaling $H = \max_j \Delta x_j$ are quite similar and are often smoother in the interior of the domain for smoothly varying meshes.
Remark 4.1 The SRV filter requires quadruple precision in the computations to eliminate round-off error, which typically involves more computational effort than natively supported double precision. The new filter requires only double precision. In order to give a fair comparison between the SRV filter and the new filter, for the one-dimensional examples we have used quadruple precision to maintain a consistent computational environment.
4.1 Linear Transport Equation Over a Uniform Mesh
The first equation that we consider is a linear hyperbolic equation with periodic boundary
conditions,
$$u_t + u_x = 0, \quad (x,t) \in [0,1] \times (0,T], \qquad (4.1)$$
$$u(x,0) = \sin(2\pi x), \quad x \in [0,1]. \qquad (4.2)$$
The exact solution is a periodic translation of the sine function,
$$u(x,t) = \sin(2\pi(x - t)).$$
For T = 0, this is simply the L 2 -projection of the initial condition. Here, we consider a final
time of T = 1 and note that we expect similar results at later times.
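The T = 0 case can be sketched directly: an element-wise L2 projection onto piecewise polynomials, written here for P1 with a Legendre basis on the reference element (illustrative helper names, not the authors' code; f is assumed vectorized):

```python
import numpy as np

GX, GW = np.polynomial.legendre.leggauss(5)

def l2_project_p1(f, a, b, n):
    """Element-wise L2 projection of f onto the piecewise-linear (P1) space
    on a uniform mesh of n elements over [a, b].  Returns per-element
    coefficients (c0, c1) in the basis {1, t} on the reference element
    [-1, 1], obtained from <f, phi>/<phi, phi> via Gauss quadrature."""
    h = (b - a) / n
    coeffs = np.empty((n, 2))
    for j in range(n):
        x = a + j * h + 0.5 * h * (GX + 1.0)     # Gauss points on element j
        fx = f(x)
        coeffs[j, 0] = 0.5 * np.sum(GW * fx)           # <f, 1> / <1, 1>
        coeffs[j, 1] = 1.5 * np.sum(GW * fx * GX)      # <f, t> / <t, t>
    return coeffs
```

By construction the projection reproduces any linear function exactly, which gives a quick sanity check.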
The discontinuous Galerkin approximation error and the position-dependent SIAC filtered error results are shown in Tables 1 and 2 for both quadruple precision and double precision. Using quadruple precision, both filters reduce the errors in the post-processed solution, although the error reduction for the new filter is slightly smaller. However, using double precision, only the new filter can maintain this error reduction for $P^3$- and $P^4$-polynomials. We note that we concentrate on the results for $P^3$- and $P^4$-polynomials, as there is no noticeable difference between double and quadruple precision for $P^1$- and $P^2$-polynomials.
The pointwise error plots are given in Figs. 5 and 6. When quadruple precision is used, as in Fig. 5, the SRV filter reduces the error of the dG solution better than the new filter on fine meshes. However, it uses 2k − 1 more B-splines than the newly generated filter. This difference is noticeable when using double precision, which is almost ten times faster than quadruple precision for $P^3$ and $P^4$. For such examples the new filter performs better both computationally and numerically (in terms of error). Tables 1 and 2 show that, in double precision, the SRV filter can only reduce the error on fine meshes when using $P^4$ piecewise polynomials. The new filter performs as well as it does in quadruple precision and reduces the error magnitude at a reduced computational cost.
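The "Order" columns reported in the tables are the observed convergence rates between successive mesh refinements; a small helper (assuming mesh doubling between rows) makes the convention explicit:

```python
import math

def order(err_coarse, err_fine):
    """Observed convergence order between meshes of N and 2N elements,
    as reported in the tables: log2(e_N / e_2N)."""
    return math.log2(err_coarse / err_fine)

# e.g. the dG P3 L2 errors on 20 and 40 elements give an order close to 4
rate = order(2.06e-06, 1.29e-07)
```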
Table 1 L2- and L∞-errors for the dG approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 1, . . . , 4 (quadruple precision) over uniform meshes

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P1, 20 | 4.02E−03 | – | 1.45E−02 | – | 1.98E−03 | – | 2.80E−03 | – | 1.98E−03 | – | 2.80E−03 | – |
| P1, 40 | 1.02E−03 | 1.97 | 3.82E−03 | 1.92 | 2.44E−04 | 3.02 | 3.46E−04 | 3.02 | 2.44E−04 | 3.02 | 3.46E−04 | 3.02 |
| P1, 80 | 2.58E−04 | 1.99 | 9.79E−04 | 1.96 | 3.02E−05 | 3.01 | 4.28E−05 | 3.01 | 3.03E−05 | 3.01 | 4.28E−05 | 3.01 |
| P2, 20 | 1.07E−04 | – | 3.67E−04 | – | 3.73E−06 | – | 5.82E−06 | – | 1.21E−05 | – | 8.27E−05 | – |
| P2, 40 | 1.34E−05 | 3.00 | 4.62E−05 | 2.99 | 9.42E−08 | 5.31 | 1.34E−07 | 5.44 | 5.52E−07 | 4.45 | 5.31E−06 | 3.96 |
| P2, 80 | 1.67E−06 | 3.00 | 5.78E−06 | 3.00 | 2.48E−09 | 5.24 | 3.52E−09 | 5.26 | 4.79E−08 | 3.53 | 6.19E−07 | 3.10 |
| P3, 20 | 2.06E−06 | – | 6.04E−06 | – | 1.53E−07 | – | 1.02E−06 | – | 2.30E−06 | – | 8.71E−06 | – |
| P3, 40 | 1.29E−07 | 4.00 | 3.80E−07 | 3.99 | 2.70E−10 | 9.15 | 4.00E−10 | 11.32 | 4.14E−09 | 9.12 | 2.27E−08 | 8.58 |
| P3, 80 | 8.07E−09 | 4.00 | 2.38E−08 | 4.00 | 1.22E−12 | 7.79 | 1.73E−12 | 7.85 | 8.18E−12 | 8.98 | 1.20E−10 | 7.56 |
| P4, 20 | 3.19E−08 | – | 7.02E−08 | – | 7.53E−03 | – | 7.33E−02 | – | 5.31E−07 | – | 1.99E−06 | – |
| P4, 40 | 1.00E−09 | 4.99 | 2.25E−09 | 4.97 | 1.99E−12 | 31.82 | 3.12E−12 | 34.45 | 2.97E−10 | 10.80 | 1.58E−09 | 10.30 |
| P4, 80 | 3.14E−11 | 5.00 | 7.14E−11 | 4.98 | 2.23E−15 | 9.80 | 3.19E−15 | 9.93 | 1.37E−13 | 11.08 | 1.55E−12 | 9.99 |
Table 2 L2- and L∞-errors for the dG approximation together with the SRV and new filters for the linear transport equation using polynomials of degree k = 3, 4 (double precision) over uniform meshes

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P3, 20 | 2.06E−06 | – | 6.04E−06 | – | 1.53E−07 | – | 1.02E−06 | – | 2.30E−06 | – | 8.71E−06 | – |
| P3, 40 | 1.29E−07 | 4.00 | 3.80E−07 | 3.99 | 2.70E−10 | 9.15 | 4.00E−10 | 11.32 | 4.14E−09 | 9.12 | 2.27E−08 | 8.58 |
| P3, 80 | 8.07E−09 | 4.00 | 2.38E−08 | 4.00 | 1.25E−12 | 7.75 | 3.85E−12 | 6.70 | 8.18E−12 | 8.98 | 1.20E−10 | 7.56 |
| P4, 20 | 3.19E−08 | – | 7.02E−08 | – | 7.53E−03 | – | 7.33E−02 | – | 5.31E−07 | – | 1.99E−06 | – |
| P4, 40 | 1.00E−09 | 4.99 | 2.25E−09 | 4.97 | 3.97E−11 | 27.50 | 6.14E−10 | 26.83 | 2.97E−10 | 10.80 | 1.58E−09 | 10.30 |
| P4, 80 | 3.14E−11 | 5.00 | 7.14E−11 | 4.98 | 1.48E−11 | 1.42 | 3.28E−10 | 0.90 | 1.37E−13 | 11.08 | 1.55E−12 | 9.99 |
Fig. 5 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Quadruple precision was used in the computations

Fig. 6 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport equation over uniform meshes using polynomials of degree k = 3, 4 (top and bottom rows, respectively). Double precision was used in the computations
Additionally, we point out that the accuracy of the SRV filter depends on (1) higher regularity of the solution, $C^{4k+1}$; (2) a well-resolved dG solution; and (3) a wide enough support (at least 5k + 1 elements). The same phenomenon will also be observed in the following tests, such as for a nonlinear equation. For the new filter, the support size remains the same throughout the domain (3k + 1 elements), and a higher degree of regularity is not necessary.
4.2 Non-uniform Meshes
We begin by defining three non-uniform meshes that are used in the numerical examples.
The meshes tested are:
Mesh 4.1 Smoothly varying mesh with periodicity The first mesh is a simple smoothly
varying mesh. It is defined by x = ξ + b sin(ξ ), where ξ is a uniform mesh variable and
b = 0.5 as in Curtis et al. [6]. We note that the tests were also performed for different values
of b; similar results were attained in all cases. This mesh has the nice feature that it is a
periodic mesh and that the elements near the boundaries have a larger element size.
Mesh 4.2 Smooth polynomial mesh The second mesh is also a smoothly varying mesh but does not have a periodic structure. It is defined by $x = \xi - 0.05(\xi - 2\pi)\xi$. For this mesh, the element size gradually decreases from left to right.
Mesh 4.3 Randomly varying mesh The third mesh is a mesh with randomly distributed elements. The element size varies within $[0.8h, 1.2h]$, where h is the uniform mesh size.
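A mesh of the Mesh 4.3 type can be generated as follows (a sketch with a hypothetical helper; note that the final rescaling can nudge element sizes slightly outside the nominal interval):

```python
import numpy as np

def random_mesh(a, b, n, seed=0):
    """Randomly varying mesh in the spirit of Mesh 4.3: element sizes drawn
    from [0.8h, 1.2h] with h = (b - a)/n, then rescaled so that the elements
    exactly tile [a, b]."""
    rng = np.random.default_rng(seed)
    h = (b - a) / n
    w = rng.uniform(0.8 * h, 1.2 * h, n)      # random element widths
    w *= (b - a) / w.sum()                    # rescale to fill the domain
    return np.concatenate(([a], a + np.cumsum(w)))
```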
We will now present numerical results demonstrating the usefulness of the position-dependent SIAC filter of van Slingerland et al. [20] and of our new one-sided SIAC filter for the aforementioned meshes.
4.3 Linear Transport Equation
The first example that we consider is a linear transport equation,
$$u_t + u_x = 0, \quad (x,t) \in [0, 2\pi] \times (0,T], \qquad u(x,0) = \sin(x), \qquad (4.3)$$
with periodic boundary conditions and the errors calculated at $T = 2\pi$. We calculate the discontinuous Galerkin approximations for this equation over the three different non-uniform meshes (Meshes 4.1, 4.2, 4.3). The approximation is then post-processed at the final time in order to analyze the numerical errors. We note that although the boundary conditions for the equation are periodic, in the boundary regions we apply the one-sided filter of van Slingerland et al. [20] (the SRV filter) and compare it with the new filter presented above.
The pointwise error plots for the periodic smoothly varying mesh are given in Fig. 7, with the corresponding errors presented in Table 3. In the boundary regions, the SRV filter behaves slightly better than the new filter for coarse meshes. However, we recall that this filter essentially doubles the support in the boundary regions. Additionally, we see that the new filter has a convergence rate higher than k + 1, which is better than the theoretically predicted rate.
For the smooth polynomial Mesh 4.2 (without the periodicity property), the results using the scaling $H = \Delta x_j$ are presented in Fig. 8 and Table 3. Unlike the previous example, without the periodicity property the SRV filter performs significantly worse: it no longer enhances the accuracy order and has larger errors near the boundaries.
Fig. 7 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over the smoothly varying mesh (Mesh 4.1). The kernel scaling $H = \Delta x_j$ and quadruple precision were used in the computations
Table 3 L2- and L∞-errors for the dG approximation together with the SRV and the new filters for the linear transport Eq. (4.3) over the three Meshes 4.1, 4.2 and 4.3

Mesh 4.1 (smoothly varying mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P1, 20 | 7.13E−03 | – | 1.97E−02 | – | 3.57E−03 | – | 5.83E−03 | – | 3.54E−03 | – | 5.29E−03 | – |
| P1, 40 | 1.65E−03 | 2.11 | 5.43E−03 | 1.86 | 4.35E−04 | 3.04 | 6.42E−04 | 3.18 | 4.34E−04 | 3.03 | 6.42E−04 | 3.04 |
| P1, 80 | 4.04E−04 | 2.03 | 1.43E−03 | 1.93 | 5.37E−05 | 3.02 | 7.94E−05 | 3.02 | 5.37E−05 | 3.02 | 7.94E−05 | 3.02 |
| P2, 20 | 2.43E−04 | – | 1.22E−03 | – | 2.41E−04 | – | 1.51E−03 | – | 1.56E−04 | – | 7.32E−04 | – |
| P2, 40 | 3.02E−05 | 3.01 | 1.55E−04 | 2.98 | 9.97E−07 | 7.92 | 9.73E−06 | 7.28 | 2.55E−06 | 5.94 | 2.29E−05 | 5.00 |
| P2, 80 | 3.77E−06 | 3.00 | 1.95E−05 | 3.00 | 8.30E−09 | 6.91 | 1.34E−08 | 9.50 | 1.98E−07 | 3.68 | 2.13E−06 | 3.42 |
| P3, 20 | 5.45E−06 | – | 1.87E−05 | – | 6.43E−06 | – | 5.51E−05 | – | 6.36E−05 | – | 2.02E−04 | – |
| P3, 40 | 3.39E−07 | 4.01 | 1.20E−06 | 3.96 | 3.11E−07 | 4.37 | 3.25E−06 | 4.09 | 1.72E−07 | 8.53 | 7.62E−07 | 8.05 |
| P3, 80 | 2.12E−08 | 4.00 | 7.48E−08 | 4.01 | 1.45E−10 | 11.06 | 1.81E−09 | 10.81 | 2.81E−10 | 9.26 | 2.16E−09 | 8.46 |
| P4, 20 | 1.56E−07 | – | 5.20E−07 | – | 5.15E−07 | – | 4.11E−06 | – | 1.72E−05 | – | 5.41E−05 | – |
| P4, 40 | 4.83E−09 | 5.01 | 1.66E−08 | 4.97 | 6.43E−09 | 6.32 | 6.56E−08 | 5.97 | 2.56E−08 | 9.39 | 1.12E−07 | 8.91 |
| P4, 80 | 1.51E−10 | 5.00 | 5.22E−10 | 4.99 | 1.25E−11 | 9.01 | 1.80E−10 | 8.51 | 1.15E−11 | 11.13 | 7.77E−11 | 10.50 |

Mesh 4.2 (smooth polynomial mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P1, 20 | 5.36E−03 | – | 1.68E−02 | – | 2.38E−03 | – | 3.48E−03 | – | 2.39E−03 | – | 3.48E−03 | – |
| P1, 40 | 1.26E−03 | 2.10 | 4.69E−03 | 1.84 | 2.93E−04 | 3.02 | 4.37E−04 | 2.99 | 2.94E−04 | 3.03 | 4.37E−04 | 2.99 |
| P1, 80 | 3.08E−04 | 2.03 | 1.23E−03 | 1.93 | 3.63E−05 | 3.01 | 5.43E−05 | 3.01 | 3.64E−05 | 3.01 | 5.43E−05 | 3.01 |
| P2, 20 | 1.43E−04 | – | 8.00E−04 | – | 2.88E−05 | – | 2.45E−04 | – | 4.62E−05 | – | 2.18E−04 | – |
| P2, 40 | 1.80E−05 | 2.99 | 1.03E−04 | 2.95 | 2.24E−06 | 3.68 | 2.43E−05 | 3.33 | 1.22E−06 | 5.24 | 9.29E−06 | 4.55 |
| P2, 80 | 2.25E−06 | 3.00 | 1.31E−05 | 2.99 | 3.91E−07 | 2.52 | 6.37E−06 | 1.93 | 9.79E−08 | 3.64 | 1.37E−06 | 2.76 |
| P3, 20 | 3.15E−06 | – | 1.25E−05 | – | 1.52E−05 | – | 1.75E−04 | – | 1.53E−05 | – | 7.35E−05 | – |
| P3, 40 | 1.96E−07 | 4.01 | 8.05E−07 | 3.96 | 9.80E−08 | 7.27 | 1.50E−06 | 6.86 | 3.50E−08 | 8.77 | 2.34E−07 | 8.29 |
| P3, 80 | 1.22E−08 | 4.00 | 4.97E−08 | 4.02 | 2.31E−09 | 5.40 | 3.77E−08 | 5.32 | 5.63E−11 | 9.28 | 8.26E−10 | 8.15 |
| P4, 20 | 6.25E−08 | – | 2.67E−07 | – | 6.79E−07 | – | 5.38E−06 | – | 4.45E−06 | – | 2.13E−05 | – |
| P4, 40 | 1.96E−09 | 5.00 | 8.77E−09 | 4.93 | 2.13E−09 | 8.32 | 2.34E−08 | 7.85 | 4.12E−09 | 10.08 | 2.76E−08 | 9.59 |
| P4, 80 | 6.14E−11 | 5.00 | 2.79E−10 | 4.97 | 3.03E−11 | 6.13 | 5.01E−10 | 5.54 | 1.74E−12 | 11.21 | 1.59E−11 | 10.76 |

Mesh 4.3 (randomly varying mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P1, 20 | 4.88E−03 | – | 1.49E−02 | – | 2.31E−03 | – | 6.73E−03 | – | 2.18E−03 | – | 3.83E−03 | – |
| P1, 40 | 1.15E−03 | 2.09 | 4.45E−03 | 1.74 | 2.84E−04 | 3.02 | 8.11E−04 | 3.05 | 2.76E−04 | 2.98 | 6.41E−04 | 2.58 |
| P1, 80 | 2.87E−04 | 2.00 | 1.20E−03 | 1.89 | 4.85E−05 | 2.55 | 1.60E−04 | 2.35 | 4.71E−05 | 2.55 | 1.37E−04 | 2.23 |
| P2, 20 | 1.17E−04 | – | 5.63E−04 | – | 3.78E−04 | – | 3.57E−03 | – | 2.23E−05 | – | 1.06E−04 | – |
| P2, 40 | 1.52E−05 | 2.95 | 7.90E−05 | 2.83 | 3.35E−05 | 3.50 | 3.45E−04 | 3.37 | 9.58E−07 | 4.54 | 7.44E−06 | 3.83 |
| P2, 80 | 1.93E−06 | 2.98 | 9.85E−06 | 3.00 | 8.04E−06 | 2.06 | 1.17E−04 | 1.56 | 1.27E−07 | 2.92 | 8.51E−07 | 3.13 |
| P3, 20 | 2.49E−06 | – | 1.06E−05 | – | 3.35E−05 | – | 3.61E−04 | – | 5.64E−06 | – | 2.90E−05 | – |
| P3, 40 | 1.55E−07 | 4.00 | 7.46E−07 | 3.83 | 7.42E−07 | 5.50 | 9.11E−06 | 5.31 | 6.23E−09 | 9.82 | 3.80E−08 | 9.58 |
| P3, 80 | 1.02E−08 | 3.93 | 4.67E−08 | 4.00 | 2.46E−08 | 4.91 | 4.57E−07 | 4.32 | 1.54E−10 | 5.34 | 8.71E−10 | 5.45 |
| P4, 20 | 4.03E−08 | – | 1.49E−07 | – | 1.40E−06 | – | 1.47E−05 | – | 1.52E−06 | – | 7.85E−06 | – |
| P4, 40 | 1.37E−09 | 4.88 | 5.25E−09 | 4.83 | 1.42E−08 | 6.63 | 1.78E−07 | 6.36 | 3.20E−10 | 12.21 | 1.98E−09 | 11.95 |
| P4, 80 | 4.40E−11 | 4.96 | 1.70E−10 | 4.95 | 4.03E−10 | 5.13 | 7.93E−09 | 4.49 | 3.68E−13 | 9.77 | 3.95E−12 | 8.97 |

A scaling of $H = \Delta x_j$ along with quadruple precision was used in the computations
Fig. 8 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling $H = \Delta x_j$ and quadruple precision were used in the computations
J Sci Comput
On the other hand, the new filter still improves accuracy when the mesh is sufficiently refined
(N = 40). Numerically the new filter obtains higher accuracy order than k + 1. For higher
order polynomials, P3 and P4 , we see that it achieves accuracy order of 2k + 1, but this is
not theoretically guaranteed.
Lastly, the filters were applied to dG solutions over a randomly distributed mesh. For this randomly varying mesh, the new filter again reduces the errors, except on a very coarse mesh (see Table 3). The accuracy order is decreased compared to the smoothly varying mesh example, but it is still higher than k + 1. Unlike the smoothly varying mesh, there are more oscillations in the errors (Fig. 9). However, the oscillations are still reduced compared to the dG solutions. We note that the results suggest that the SRV filter may only be suitable for uniform meshes.
4.4 Variable Coefficient Equation
In this example, we consider the variable coefficient equation
$$u_t + (a u)_x = f, \quad (x,t) \in [0, 2\pi] \times (0,T], \qquad a(x,t) = 2 + \sin(x + t), \qquad u(x,0) = \sin(x), \qquad (4.4)$$
at $T = 2\pi$. As for the previous constant coefficient Eq. (4.3), we also test this variable coefficient Eq. (4.4) over the three different non-uniform meshes (Meshes 4.1, 4.2, 4.3). Since the results are similar to those for the linear transport Eq. (4.3), we do not re-describe the details here. We note only that the results for the variable coefficient equation have more wiggles than those for the constant coefficient equation. This may be an important issue in extending these ideas to nonlinear equations. To save space, we show only the $P^3$ and $P^4$ results; the $P^1$ and $P^2$ results are similar to the previous examples.
Figure 10 shows the pointwise error plots for the dG and post-processed approximations over a smoothly varying mesh. The corresponding errors are given in Table 4. The results are similar to those for the linear transport equation: the two filters perform similarly, with the new filter using fewer function evaluations.
For the smooth polynomial Mesh 4.2, we show the pointwise error plots in Fig. 11. The corresponding errors are given in Table 4. In this example we see that the new filter behaves better at the boundaries than the SRV filter. This may be due to its more compact kernel support.
Finally, we test the variable coefficient Eq. (4.4) over the randomly varying Mesh 4.3. Similar to the linear transport example, the pointwise error plots (Fig. 12) show more oscillations than the smoothly varying mesh examples. We again see that the new filter has better errors at the boundaries than the SRV filter.
5 Numerical Results for Two Dimensions
5.1 Linear Transport Equation Over a Uniform Mesh
To demonstrate the performance of the new filter in two dimensions, we consider the solution to a linear transport equation,
$$u_t + u_x + u_y = 0, \quad (x,y) \in [0, 2\pi] \times [0, 2\pi], \qquad (5.1)$$
Fig. 9 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the linear transport Eq. (4.3) over the randomly varying Mesh 4.3. The kernel scaling $H = \Delta x_j$ and quadruple precision were used in the computations
Fig. 10 Comparison of the pointwise errors in log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the smoothly varying mesh (Mesh 4.1). The kernel scaling $H = \Delta x_j$ and quadruple precision were used in the computations
Table 4 L2- and L∞-errors for the dG approximation together with the SRV and the new filters for the variable coefficient Eq. (4.4) using a dG approximation of polynomial degree k = 3, 4 over the three Meshes 4.1, 4.2 and 4.3

Mesh 4.1 (smoothly varying mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P3, 20 | 5.54E−06 | – | 1.93E−05 | – | 4.40E−06 | – | 3.66E−05 | – | 6.36E−05 | – | 2.02E−04 | – |
| P3, 40 | 3.41E−07 | 4.02 | 1.21E−06 | 4.00 | 3.14E−07 | 3.81 | 3.25E−06 | 3.49 | 1.72E−07 | 8.53 | 7.61E−07 | 8.05 |
| P3, 80 | 2.12E−08 | 4.01 | 7.50E−08 | 4.01 | 1.45E−10 | 11.08 | 1.81E−09 | 10.81 | 2.78E−10 | 9.27 | 2.05E−09 | 8.53 |
| P4, 20 | 1.62E−07 | – | 5.69E−07 | – | 1.89E−05 | – | 1.44E−04 | – | 1.72E−05 | – | 5.41E−05 | – |
| P4, 40 | 4.95E−09 | 5.03 | 1.77E−08 | 5.00 | 5.74E−09 | 11.68 | 5.82E−08 | 11.28 | 2.56E−08 | 9.39 | 1.12E−07 | 8.91 |
| P4, 80 | 1.53E−10 | 5.01 | 5.48E−10 | 5.02 | 1.26E−11 | 8.83 | 1.76E−10 | 8.37 | 1.16E−11 | 11.11 | 7.26E−11 | 10.59 |

Mesh 4.2 (smooth polynomial mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P3, 20 | 3.15E−06 | – | 1.27E−05 | – | 2.70E−05 | – | 3.05E−04 | – | 1.53E−05 | – | 7.36E−05 | – |
| P3, 40 | 1.96E−07 | 4.01 | 8.06E−07 | 3.98 | 1.31E−07 | 7.69 | 1.54E−06 | 7.62 | 3.55E−08 | 8.75 | 2.38E−07 | 8.28 |
| P3, 80 | 1.22E−08 | 4.00 | 4.98E−08 | 4.02 | 7.51E−09 | 4.13 | 1.27E−07 | 3.60 | 6.25E−11 | 9.15 | 7.84E−10 | 8.24 |
| P4, 20 | 6.40E−08 | – | 2.82E−07 | – | 2.95E−06 | – | 2.42E−05 | – | 4.45E−06 | – | 2.13E−05 | – |
| P4, 40 | 1.98E−09 | 5.01 | 8.94E−09 | 4.98 | 6.84E−07 | 2.11 | 1.12E−05 | 1.10 | 4.12E−09 | 10.08 | 2.76E−08 | 9.60 |
| P4, 80 | 6.18E−11 | 5.00 | 2.80E−10 | 5.00 | 1.51E−09 | 8.83 | 3.50E−08 | 8.33 | 1.59E−12 | 11.34 | 1.55E−11 | 10.80 |

Mesh 4.3 (randomly varying mesh):

| Mesh | dG L2 | Order | dG L∞ | Order | SRV L2 | Order | SRV L∞ | Order | New L2 | Order | New L∞ | Order |
|------|-------|-------|-------|-------|--------|-------|--------|-------|--------|-------|--------|-------|
| P3, 20 | 2.49E−06 | – | 9.61E−06 | – | 1.11E−04 | – | 8.98E−04 | – | 5.63E−06 | – | 2.90E−05 | – |
| P3, 40 | 1.56E−07 | 4.00 | 7.18E−07 | 3.74 | 2.12E−06 | 5.71 | 2.55E−05 | 5.14 | 7.96E−09 | 9.47 | 4.31E−08 | 9.39 |
| P3, 80 | 1.02E−08 | 3.93 | 4.72E−08 | 3.93 | 5.91E−08 | 5.17 | 1.06E−06 | 4.59 | 3.15E−10 | 4.66 | 1.91E−09 | 4.50 |
| P4, 20 | 4.07E−08 | – | 1.56E−07 | – | 2.45E−05 | – | 1.96E−04 | – | 1.52E−06 | – | 7.85E−06 | – |
| P4, 40 | 1.37E−09 | 4.89 | 5.31E−09 | 4.87 | 4.48E−07 | 5.77 | 6.18E−06 | 4.99 | 2.98E−10 | 12.31 | 1.79E−09 | 12.09 |
| P4, 80 | 4.41E−11 | 4.96 | 1.73E−10 | 4.94 | 1.26E−09 | 8.48 | 1.91E−08 | 8.33 | 2.64E−12 | 6.82 | 2.19E−11 | 6.35 |

The filters use the scaling $H = \Delta x_j$. Quadruple precision was used in the computations
Fig. 11 Comparison of the pointwise errors on a log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the smooth polynomial mesh (Mesh 4.2). The kernel scaling H = x_j and quadruple precision were used in the computations
Fig. 12 Comparison of the pointwise errors on a log scale of the original dG solution (left column), the SRV filter (middle column) and the new filter (right column) for the variable coefficient Eq. (4.4) over the randomly varying mesh (Mesh 4.3). The kernel scaling H = x_j and quadruple precision were used in the computations
Table 5 L2- and L∞-errors for the dG approximation together with the SRV and new filters for a 2D linear transport equation solved over a uniform mesh using polynomials of degree k = 3, 4 on 20 × 20, 40 × 40 and 80 × 80 elements. Double precision was used in the computations.
Fig. 13 Comparison of the pointwise errors on a log scale of the original dG solution (left), the SRV filter (middle) and the new filter (right) for the 2D linear transport equation using polynomials of degree k = 4 and a uniform 80 × 80 mesh. Double precision was used in the computations
Table 6 L2- and L∞-errors for the dG approximation together with the SRV and new filters for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over the three meshes, Meshes 4.1, 4.2 and 4.3 (smoothly varying, smooth polynomial and randomly varying), with 20 × 20, 40 × 40 and 80 × 80 elements.
Table 6 continued. The filters use the scaling Hx = x_j in the x-direction and Hy = y_j in the y-direction. Double precision was used in the computations.
Fig. 14 Comparison of the pointwise errors on a log scale of the original dG solution (left), the SRV filter (middle) and the new filter (right) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a smoothly varying 80 × 80 mesh (Mesh 4.2). The filters use the scaling Hx = x_j in the x-direction and Hy = y_j in the y-direction. Double precision was used in the computations
Fig. 15 Comparison of the pointwise errors on a log scale between the SRV and new filters for the 2D linear transport equation using polynomials of degree k = 3, 4. A smoothly varying mesh defined by x = ξ − b(ξ − 2π)ξ, y = ξ − b(ξ − 2π)ξ with b = 0.05 was used. Filter scaling was based upon the local element size. Double precision was used in the computations
Fig. 16 Comparison of the pointwise errors on a log scale of the original dG solution (a), the SRV filter (b) and the new filter (c) for the 2D linear transport Eq. (5.1) using polynomials of degree k = 3, 4 over a randomly varying 80 × 80 mesh. The filters use the scaling Hx = x_j in the x-direction and Hy = y_j in the y-direction. Double precision was used in the computations
u(x, y, 0) = sin(x + y)    (5.2)

at T = 2π. Due to the computational cost of obtaining the post-processed solution, we present the 2D results using double precision only. Table 5 shows that the accuracy is affected by round-off error, especially for the previous one-sided position-dependent filter, for which the round-off error appears to destroy the accuracy. Although the error magnitude near the boundaries is larger than in the regions where the symmetric filter is used, the new filter reduces the error and improves the smoothness of the dG solution (Fig. 13).
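As a minimal illustration of how error measures of this kind are formed, the sketch below computes discrete L2- and L∞-errors and the log-scale pointwise error on an 80 × 80 grid. The unit advection speeds (so that the assumed exact solution u(x, y, t) = sin(x + y − 2t) returns to its initial state at T = 2π on the periodic domain), the cell-center sampling, and the perturbed stand-in field `u_h` are assumptions for illustration only; this is not the paper's dG solver.

```python
import numpy as np

# Exact solution of u_t + u_x + u_y = 0 with u(x, y, 0) = sin(x + y);
# unit advection speeds are an assumption made for this illustration.
def u_exact(x, y, t=2 * np.pi):
    return np.sin(x + y - 2 * t)

n = 80                                     # an 80 x 80 mesh, as in Fig. 13
h = 2 * np.pi / n
x = np.linspace(0, 2 * np.pi, n, endpoint=False) + h / 2   # cell centers
X, Y = np.meshgrid(x, x, indexing="ij")

u_h = u_exact(X, Y) + 1e-9 * np.cos(X)     # stand-in for a filtered dG field
err = u_h - u_exact(X, Y)

L2 = np.sqrt(np.sum(err ** 2) * h * h)     # discrete L2-error
Linf = np.abs(err).max()                   # L-infinity error
log_err = np.log10(np.abs(err) + 1e-300)   # pointwise error on a log scale
```

With the synthetic perturbation above, the discrete L2-error evaluates exactly to 1e-9 · π√2, which is a useful sanity check on the norm implementation.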
5.2 Linear Transport Equation Over a Non-uniform Mesh
For the 2D example, we consider the same linear transport equation as above, now over non-uniform meshes. The non-uniform meshes we consider are rectangular grids in which the tessellations in the x- and y-directions are generated similarly to Meshes 4.1, 4.2 and 4.3. Double precision was used for all two-dimensional computations. Unlike in the one-dimensional example, the results of the SRV filter are significantly affected by round-off error, especially near the four corners of the grids. This round-off error completely destroys the accuracy and smoothness near the boundaries. Compared to the SRV filter, the new filter performs much better. In the following examples, we can clearly see the improvement in accuracy and smoothness over the original dG approximations. All of the tests we performed indicate that the new filter is better suited than the SRV filter to non-uniform meshes, and that its practical performance exceeds the theoretical prediction.
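As a concrete illustration of such tessellations, the sketch below generates a smoothly varying grid via the mapping x = ξ − b(ξ − 2π)ξ with b = 0.05 (the mapping quoted for Fig. 15) and a randomly perturbed grid. The perturbation amplitude and the [0, 2π] domain are assumptions for illustration; the random mesh is a stand-in, not the paper's exact Mesh 4.3. The local element sizes Δx_j then supply the position-dependent filter scaling.

```python
import numpy as np

def smoothly_varying_mesh(n, b=0.05, L=2 * np.pi):
    # Map a uniform grid xi in [0, L] through x = xi - b*(xi - L)*xi,
    # the smoothly varying mapping quoted for Fig. 15 (with L = 2*pi).
    xi = np.linspace(0.0, L, n + 1)
    return xi - b * (xi - L) * xi

def randomly_varying_mesh(n, amp=0.3, L=2 * np.pi, seed=0):
    # Uniform grid whose interior nodes are perturbed by up to amp*h.
    # This is an assumed construction for illustration only.
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, n + 1)
    h = L / n
    x[1:-1] += amp * h * rng.uniform(-1.0, 1.0, n - 1)
    return x

x = smoothly_varying_mesh(40)
H = np.diff(x)   # local element sizes Delta x_j, used as the filter scaling H
```

For b = 0.05 the mapping is monotone on [0, 2π], so the element sizes stay positive; an amplitude below 0.5h likewise keeps the randomly perturbed mesh valid.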
For the P3 case, because of the periodicity, the SRV filter appears slightly better in the L2 norm than the new filter. However, in the L∞ norm the new filter still performs better than the SRV filter (see Table 6). For the P4 case, even the ideal periodicity cannot hide the fact that the SRV filter is not suitable for non-uniform meshes: the SRV filter is worse than the new filter and even than the original dG solution. In Fig. 14, the round-off error of the SRV filter is clearly visible. The new filter yields better errors in both the L2 and L∞ norms once the mesh is sufficiently refined.

Unlike the smoothly varying mesh used in the previous example, the smooth-polynomial mesh and the randomly varying mesh do not have the nice periodicity property, which is exactly the situation in which a one-sided filter is needed. Here the deficiencies of the SRV filter become significant: its results near the boundaries are worse than the original dG solution (Figs. 15, 16). The previous filters work well when all of their preconditions are met; however, if any one of their assumptions is violated, we may not obtain the full benefit of the filter. The new filter appears to still perform well in such circumstances.
6 Conclusion
In this paper, we have proposed a new position-dependent SIAC filter applied to discontinuous Galerkin approximations over uniform and non-uniform meshes. The new filter was devised by analyzing the constant in the previous error estimates. It is constructed by adding one extra general B-spline to a filter consisting of 2k + 1 central B-splines. This strategy overcomes two shortcomings of the SRV filter: we can now reliably use double precision to both construct and apply the filter, and the new filter has a smaller geometric footprint and hence costs less (in terms of operations) to evaluate. We have, for the first time, proved the accuracy-order-conserving nature of the SIAC filter globally and shown that this boundary filter does not affect the interior superconvergence properties.
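The symmetric part of such a filter can be sketched numerically: the kernel is a linear combination of 2k + 1 central B-splines of order k + 1, with coefficients chosen so that the kernel reproduces polynomials of degree up to 2k. The sketch below computes these coefficients by solving the moment conditions; it omits the extra general B-spline and the position-dependent shifts of the new filter, and the quadrature resolution is an implementation choice.

```python
import numpy as np
from math import comb

def central_bspline(x, q):
    # Central B-spline of order q (support [-q/2, q/2]), Cox-de Boor recursion.
    if q == 1:
        return np.where((x >= -0.5) & (x < 0.5), 1.0, 0.0)
    return ((x + q / 2.0) * central_bspline(x + 0.5, q - 1)
            + (q / 2.0 - x) * central_bspline(x - 0.5, q - 1)) / (q - 1.0)

def symmetric_siac_coefficients(k):
    # Coefficients c_gamma, gamma = -k..k, of the symmetric SIAC kernel
    # K(x) = sum_gamma c_gamma * psi^{(k+1)}(x - gamma), chosen so that
    # K reproduces polynomials of degree <= 2k (moment conditions).
    q = k + 1
    xs = np.linspace(-q / 2.0, q / 2.0, 40001)
    psi = central_bspline(xs, q)
    dx = xs[1] - xs[0]
    # Moments mu_i of the unshifted B-spline; the sum equals the trapezoid
    # rule here because the integrand vanishes at both endpoints.
    mu = np.array([np.sum(psi * xs ** i) * dx for i in range(2 * k + 1)])
    gammas = np.arange(-k, k + 1)
    # M[m, j] = integral of x^m * psi(x - gamma_j) dx, by binomial expansion.
    M = np.array([[sum(comb(m, i) * g ** (m - i) * mu[i] for i in range(m + 1))
                   for g in gammas] for m in range(2 * k + 1)])
    rhs = np.zeros(2 * k + 1)
    rhs[0] = 1.0   # reproduce constants; all higher moments must vanish
    return gammas, np.linalg.solve(M, rhs)

gammas, c = symmetric_siac_coefficients(1)
# For k = 1 this recovers the classical weights (-1/12, 7/6, -1/12).
```

The resulting coefficients are symmetric and sum to one (so the kernel has unit mass), which is the property the position-dependent variants must preserve near boundaries.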
Additionally, we are able, for the first time, to extend our proofs of superconvergence to our symmetric and SRV SIAC filters applied over smoothly varying meshes. We demonstrated the applicability of the position-dependent filter to non-uniform meshes by choosing a proper scaling, H, obtained by analyzing smoothly varying meshes. Numerical results indicate that this scaling idea works even in the random-mesh case (although no proof exists to assert this). Future work will concentrate on extending these concepts to derivative filtering.
Acknowledgments We would like to first thank Dr. Mahsa Mirzargar for helping us drive this draft to completion. Without her help, we would possibly never have finished this manuscript. We would also like to thank Dr. Hanieh Mirzaee for her discovery and discussion concerning the conditioning issues of the SRV filter. The first and second authors are sponsored by the Air Force Office of Scientific Research (AFOSR), Air Force Material Command, USAF, under Grant No. FA8655-13-1-3017. The third author is sponsored in part by the Air Force Office of Scientific Research (AFOSR), Computational Mathematics Program (Program Manager: Dr. Fariba Fahroo), under Grant No. FA9550-12-1-0428. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
References
1. Bramble, J.H., Schatz, A.H.: Higher order local accuracy by averaging in the finite element method. Math.
Comput. 31, 94–111 (1977)
2. Cockburn, B.: Discontinuous Galerkin methods for convection-dominated problems. In: High-Order Methods for Computational Physics, Lecture Notes in Computational Science and Engineering, vol. 9. Springer, New York (1999)
3. Cockburn, B., Hou, S., Shu, C.-W.: The Runge–Kutta local projection discontinuous Galerkin finite
element method for conservation laws IV: the multidimensional case. Math. Comput. 54, 545–581 (1990)
4. Cockburn, B., Lin, S.-Y., Shu, C.-W.: TVB Runge–Kutta local projection discontinuous Galerkin finite
element method for conservation laws II: one dimensional systems. J. Comput. Phys. 84, 90–113 (1989)
5. Cockburn, B., Luskin, M., Shu, C.-W., Süli, E.: Enhanced accuracy by post-processing for finite element
methods for hyperbolic equations. Math. Comput. 72, 577–606 (2003)
6. Curtis, S., Kirby, R.M., Ryan, J.K., Shu, C.-W.: Post-processing for the discontinuous Galerkin method
over nonuniform meshes. SIAM J. Sci. Comput. 30, 272–289 (2007)
7. de Boor, C.: A Practical Guide to Splines, Revised Edition. Springer, New York (2001)
8. Ji, L., van Slingerland, P., Ryan, J.K., Vuik, C.: Superconvergent error estimates for a position-dependent
smoothness-increasing accuracy-conserving filter for dG solutions. Math. Comput. 84, 2239–2262 (2014)
9. King, J., Mirzaee, H., Ryan, J.K., Kirby, R.M.: Smoothness-increasing accuracy-conserving (SIAC) filtering for discontinuous Galerkin solutions: improved errors versus higher-order accuracy. J. Sci. Comput. 53, 129–149 (2012)
10. Mirzaee, H., Ryan, J.K., Kirby, R.M.: Quantification of errors introduced in the numerical approximation
and implementation of smoothness-increasing accuracy-conserving (SIAC) filtering of discontinuous
Galerkin (dG) fields. J. Sci. Comput. 45, 447–470 (2010)
11. Mirzaee, H., Ryan, J.K., Kirby, R.M.: Efficient implementation of smoothness-increasing accuracy-conserving filters for discontinuous Galerkin solutions. J. Sci. Comput. 52, 85–112 (2012)
12. Mock, M.S., Lax, P.D.: The computation of discontinuous solutions of linear hyperbolic equations. Comm.
Pure Appl. Math. 31, 423–430 (1978)
13. Ryan, J.K.: Local Derivative Post-processing: Challenges for a Non-uniform Mesh, Report 10–18. Delft
University of Technology, Delft, The Netherlands (2010)
14. Ryan, J.K., Cockburn, B.: Local derivative post-processing for the discontinuous Galerkin method. J.
Comput. Phys. 228, 8642–8664 (2009)
15. Ryan, J.K., Shu, C.-W.: One-sided post-processing for the discontinuous Galerkin methods. Methods
Appl. Anal. 10, 295–307 (2003)
16. Ryan, J.K., Shu, C.-W., Atkins, H.L.: Extension of a post-processing technique for the discontinuous
Galerkin method for hyperbolic equations with application to an aeroacoustic problem. SIAM J. Sci.
Comput. 26, 821–843 (2005)
17. Schumaker, L.: Spline Functions: Basic Theory, 3rd edn. Cambridge University Press, Cambridge (2007)
18. Steffen, M., Curtis, S., Kirby, R.M., Ryan, J.K.: Investigation of smoothness-enhancing accuracy-conserving filters for improving streamline integration through discontinuous fields. IEEE Trans. Vis. Comput. Graph. 14, 680–692 (2008)
19. Thomée, V.: High order local approximations to derivatives in the finite element method. Math. Comput.
31, 652–660 (1977)
20. van Slingerland, P., Ryan, J.K., Vuik, C.: Position-dependent smoothness-increasing accuracy-conserving (SIAC) filtering for improving discontinuous Galerkin solutions. SIAM J. Sci. Comput. 33, 802–825 (2011)