Variable order panel clustering (extended version)

Max-Planck-Institut
für Mathematik
in den Naturwissenschaften
Leipzig
Variable order panel clustering
(extended version)
(revised version: September 1999)
by
Stefan A. Sauter
Preprint no. 52, 1999
Variable order panel clustering (extended version)
Stefan A. Sauter
Abstract
We present a new version of the panel clustering method for a sparse representation of boundary integral equations. Instead of applying the algorithm separately for each matrix row (as in the classical version of the algorithm), we employ more general block partitionings. Furthermore, a variable order of approximation is used, depending on the size of the blocks.
We apply this algorithm to a second kind Fredholm integral equation and show that the complexity of the method depends only linearly on the number, say $n$, of unknowns. The complexity of the classical matrix-oriented approach is $O(n^2)$ while, for the classical panel clustering algorithm, it is $O(n \log^7 n)$.
1 Introduction
Elliptic boundary value problems with constant coefficients can be transformed into integral equations on the boundary of the domain via the method of integral equations. From the numerical point of view, this approach is especially interesting for problems on unbounded domains, where the direct discretization with finite elements or finite differences is not straightforward.
In many engineering applications, boundary integral equations are discretised via the boundary element method by lifting conventional finite element spaces onto the surface of the domain. Due to the non-local nature of the integral operators, the arising system of equations is fully populated. Hence, the work for the classical matrix-oriented approach grows quadratically in the number $n$ of unknowns.
In [6], [7], and [11], the panel clustering algorithm was introduced for collocation methods. By using polynomial approximations of the kernel function of the integral operator, it was possible to separate the dependence on the integration variable from the source points. The algorithm was applied to each matrix row separately. In [7], it was shown that the complexity of the algorithm is proportional to $O(n \log^{\kappa} n)$ with moderate $\kappa$. In [15], [8], [13], [5], the panel clustering algorithm was introduced for the Galerkin discretization of boundary integral equations. The key role is played by a symmetric factorization of the kernel function with respect to both variables. Again, the algorithm is applied to each matrix row separately.
In [12], [2], the fast multipole method was introduced for the efficient evaluation of sums in multiple particle systems. Here, the algorithm is applied not pointwise but with appropriate block partitionings. The complexity of the algorithm is again proportional to $O(n \log^{\kappa} n)$. In [10], a block version of the panel clustering algorithm was introduced. The complexity is still $O(n \log^{\kappa} n)$ while the constants in the complexity estimates are smaller than for the classical approach.
In our paper, we introduce a variable order approximation on the clusters, resulting in an algorithm with complexity $O(n)$. As a model problem, we consider a Galerkin discretization of a second kind Fredholm integral equation. The fact that boundary integral equations can be realized (with full stability and consistency) in $O(n)$ operations, whilst the classical matrix-oriented approach has complexity $O(n^2)$, seems to be of interest. Generalizations of our approach to more general integral equations are the topic of future research.
Another approach to the sparse approximation of boundary integral operators is given by wavelet discretizations. In the past decade, they were intensively developed for boundary integral equations. There are versions for second kind integral equations by [18], [17] (complexity $O(n \log n)$). The approach presented in [16] reduces the complexity to $O(n)$. However, the efficiency of wavelet methods depends on the number of (smooth) charts being employed for the representation of the surface. If the surface is rough and complicated, the efficiency breaks down, while the panel clustering method works especially well for complicated surfaces.
An algebraic approach to the data-sparse realization of non-local operators is given by H-matrices (see [3], [4]). Matrix blocks are approximated by low rank matrices. The choice of the approximation system can be based on a singular value decomposition and is suited to approximate inverses of sparse matrices efficiently.
In this paper, various notations and conventions will be used. In order to improve readability, we have collected below the most relevant ones along with a link to their first appearance.
Notations:
$k$ — kernel function (see (2)),
$k_{\mathbf c}^{(m)}$ — panel clustering approximation (see Assumption 20 and (27)),
$\Phi_\alpha^{(c)}$, $\vec\Psi_\beta^{(c)}$ — expansion system (see Definition 32),
$\tilde\kappa_{\alpha,\tilde\alpha}^{\tilde c}$ — coefficients appearing in the shift of the expansion system (see (45)),
$\hat k_{\mathbf c}^{(m)}$ — auxiliary Taylor approximation of $k$ (see (22)),
$\hat\Phi_\alpha^{(c)}$, $\hat\Psi_\beta^{(c)}$ — auxiliary (Taylor) expansion system (see (23)),
$N_c(\cdot)$ — auxiliary expansion functions (see (71), (72)),
$A_\alpha(\mathbf c)[u]$, $B_c(\cdot)[u]$, $B$ — partial sums related to the panel clustering representation (see (29), (50), (51)),
$B_\omega$, $M_\omega$, $\rho_\omega$ — Čebyšev ball, centre, radius (cluster ball, centre, radius) (see Definition 5),
$\tilde\rho_c$ — approximate cluster radius (see (38)),
$s_c$ — maximal side length of the circumscribing box $b_c$ (see (38)),
$\rho$, $h$ — minimal element radius, step size (see (31), (32)),
$P^{(2)}$, $N$, $F$ — covering of $\Gamma \times \Gamma$, nearfield, farfield (see Definition 14 and (8)),
level, $L$ — cluster/block level, maximal depth of the tree (see Definition 10 and (35)),
$m(\mathbf c)$, $m(c)$, $m(\ell)$ — distribution of approximation order (see (16), Remark 41, Definition 60),
$I_m$, $I_m^{I}$, $I_m^{II}$ — index sets (see Assumption 20, Definition 32, Notation 34),
ref, invref — mapping / pullback to reference tree (see (36)),
$L_k^{(i)}(\omega)$ — layers around $\omega$ (see Definition 26).
For a set $M$ of sets, we write $\bigcup M$ short for $\bigcup_{m \in M} m$. The area of a surface piece $\sigma \subset \Gamma$ is denoted by $|\sigma|$.
Throughout the paper, we use the convention that, for a finite element function $u$, its coefficient vector in the basis representation is denoted by $\mathbf u$.
Similarly, a block of clusters is denoted by $\mathbf c$ (bold) while its components are $c_1$, $c_2$, indicating the correspondence $\mathbf c = (c_1, c_2)$.
Two approximation systems will appear for the approximation of the kernel function. One corresponds to the variable order panel clustering approximation and one has auxiliary character. All quantities related to the auxiliary function system are marked with a hat, as, e.g., $\hat k$, $\hat\Phi$, etc., while the true expansion system is denoted without a hat.
2 Setting
Let $\Gamma \subset \mathbb R^3$ denote an orientable, sufficiently smooth manifold ($\Gamma \in C^2$ is sufficient). On $\Gamma$, we consider the integral equation with the classical double layer potential in the weak form: For given $f \in L^2(\Gamma)$, find $u \in L^2(\Gamma)$ so that
$$\tfrac12 (v, u)_{0,\Gamma} + (v, Ku)_{0,\Gamma} = (v, f)_{0,\Gamma} \qquad \forall v \in L^2(\Gamma) \tag{1}$$
holds with
$$K[u](x) = \int_\Gamma k(x,y)\, u(y)\, ds_y, \qquad k(x,y) = \frac{\partial}{\partial n(y)}\, \|x - y\|^{-1}. \tag{2}$$
Here, $L^2(\Gamma)$ denotes the set of all measurable functions on $\Gamma$ which are square integrable, and $(\cdot,\cdot)_{0,\Gamma}$ the $L^2$-scalar product on $\Gamma$. The vector field $n(y)$ denotes the oriented normal vector field at a surface point $y \in \Gamma$.
The Galerkin discretization of (1) is given by replacing the infinite dimensional space $L^2(\Gamma)$ by a finite dimensional subspace $V$. The Galerkin solution $u_G$ is the solution of
$$\tfrac12 (v, u_G)_{0,\Gamma} + (v, K u_G)_{0,\Gamma} = (v, f)_{0,\Gamma} \qquad \forall v \in V. \tag{3}$$
Our aim is to use finite element spaces lifted to the manifold $\Gamma$ as the subspace $V$. Finite element spaces are defined on finite element grids. We introduce the two-dimensional master triangle $Q$ having the vertices $(0,0)^\top$, $(1,0)^\top$, $(1,1)^\top$.
Definition 1 A finite element grid of $\Gamma$ is a set $G = \{\tau_1, \tau_2, \ldots, \tau_n\}$ consisting of disjoint, open surface pieces $\tau_i \subset \Gamma$ satisfying
- $\overline\Gamma = \bigcup_{\tau \in G} \overline\tau$,
- for all $\tau \in G$, there exists a $C^k$-diffeomorphism $\chi_\tau : Q \to \tau$ ($k$ sufficiently large) which can be extended to a $C^k$-diffeomorphism $\chi_\tau^\star : Q^\star \to \tau^\star$ for some open neighbourhoods $\tau^\star \supset \overline\tau$ and $Q^\star \supset \overline Q$.

Notation 2 The elements of a finite element grid are called "geometric finite elements". In the context of boundary element methods, they are alternatively called "panels".
In this paper, we restrict ourselves to piecewise constant approximations on triangulations.

Definition 3 The space $S^{-1,0}$ is given by
$$S^{-1,0} = \left\{ v \in L^2(\Gamma) : \forall \tau \in G : v|_\tau = \text{const} \right\}.$$
A local basis of $S^{-1,0}$ is formed by the characteristic functions on the triangles:
$$b_\tau : \Gamma \to \mathbb R, \qquad b_\tau(x) = \begin{cases} 1 & x \in \tau, \\ 0 & \text{otherwise}, \end{cases} \qquad \tau \in G.$$
By using the basis representation
$$u_G(x) = \sum_{\tau \in G} u_G(\tau)\, b_\tau(x) \tag{4}$$
the Galerkin discretization can be transformed into a system of linear equations:
$$(M + K)\, \mathbf u_G = \mathbf g,$$
where $M, K \in \mathbb R^{G \times G}$ and $\mathbf u_G, \mathbf g \in \mathbb R^G$ are given, for all $\tau, t \in G$, by
$$M_{t\tau} = \tfrac12 (b_\tau, b_t)_{0,\Gamma}, \qquad \mathbf u_G = (u_G(\tau))_{\tau \in G}, \qquad K_{t\tau} = (b_\tau, K[b_t])_{0,\Gamma}, \qquad \mathbf g = ((b_\tau, f)_{0,\Gamma})_{\tau \in G}.$$
The matrix $M$ is diagonal while $K$ is a fully populated $n \times n$ matrix. Hence, the classical matrix-oriented approach costs (at least) $O(n^2)$ operations.
The idea of the panel clustering method is to use an alternative representation of the discrete integral operator, which can be written in the form
$$K \approx N + B^\top F C \tag{5}$$
where the matrix $N$ is sparse, containing only $O(n)$ non-zero entries. Furthermore, $B, C \in \mathbb C^{m \times n}$ with $m \ll n$ and $F \in \mathbb C^{m \times m}$. Note that, by using this representation, the matrix elements of $K$ are not known, i.e., direct solvers cannot be applied to this system. However, for large $n$, iterative solvers are much more efficient than direct solvers and should be used instead. For iterative solvers, the matrix elements of $K$ are not required. Matrix-vector multiplications appear as elementary operations, which can be performed efficiently by using the splitting (5). The rest of the paper is concerned with the definition and analysis of an approximate factorization of the integral operator in (3).
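As an illustration (not from the paper; the sizes, the random data, and the diagonal nearfield matrix are placeholder assumptions), the following sketch shows why the splitting (5) makes a matrix-vector multiplication cheap: the dense $n \times n$ matrix is never formed, and the cost drops from $O(n^2)$ to $O(mn)$.

```python
import numpy as np

# Hypothetical sizes: n unknowns, m << n expansion coefficients.
n, m = 1000, 40
rng = np.random.default_rng(0)

# N: sparse nearfield part (a diagonal stand-in with O(n) entries);
# B, C: m x n transfer matrices; F: m x m coupling matrix.
N_diag = rng.standard_normal(n)
B = rng.standard_normal((m, n))
C = rng.standard_normal((m, n))
F = rng.standard_normal((m, m))

def apply_K(u):
    """Apply K ~ N + B^T F C without assembling the dense n x n matrix."""
    return N_diag * u + B.T @ (F @ (C @ u))   # O(n) + O(mn) work

u = rng.standard_normal(n)
v = apply_K(u)

# Consistency check against the explicitly assembled matrix.
K_full = np.diag(N_diag) + B.T @ F @ C
assert np.allclose(v, K_full @ u)
```

In an iterative solver, only `apply_K` is needed, which is exactly the observation made in the text.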
First, we have to introduce some geometric notations.

Definition 4 A cluster is the union of one or more panels.

The geometric size of a cluster can be described via the Čebyšev radius of the cluster.

Definition 5 For a subset $\omega \subset \mathbb R^d$, the Čebyšev ball $B_\omega$ is the ball with minimal radius containing $\overline\omega$. The Čebyšev centre $M_\omega$ is the midpoint of this ball and the Čebyšev radius $\rho_\omega$ its radius.
Notation 6 For a cluster $c$, the Čebyšev ball, Čebyšev centre, and Čebyšev radius are alternatively denoted by cluster ball, cluster centre, and cluster radius.

For the efficiency of the algorithm, it is important to organize the clusters in a hierarchical tree. To this end, a set (the set of sons) has to be associated with each cluster.
Definition 7 A set of sons $\sigma(c)$ associated with a cluster $c$
1. is either the empty set,
2. or is a set of one or more disjoint clusters satisfying
$$c = \bigcup \sigma(c).$$
3. If $\sigma(c) = \emptyset$ then $c \in G$.

A cluster $c$ with $\sigma(c) = \emptyset$ is called a leaf.

Definition 8 A cluster tree $T$ corresponding to a grid $G$ consists of clusters with associated sets of sons satisfying:
1. $\Gamma \in T$.
2. Any $c \in T$ with associated set of sons $\sigma(c)$ satisfies either
(a) $\sigma(c) = \emptyset$, or
(b) $c = \bigcup \sigma(c)$.

Remark 9 We do not require that $\#\sigma(c) \ne 1$. For the later constructions, it will be convenient to allow $\#\sigma(c) = 1$, implying $\tilde c = c$ for $\tilde c \in \sigma(c)$.
In the next step, we will associate to each cluster a level indicating its depth in the cluster tree. Since the largest cluster, i.e., the surface $\Gamma$, is subdivided recursively into smaller clusters, it is natural to use the depth of a cluster as an indication of its geometric size as well.

Definition 10 The function level $: T \to \mathbb N_0$ is the recursive function
$$\text{level}(\Gamma) = 0, \qquad \text{level}(\tilde c) = \text{level}(c) + 1 \quad \forall \tilde c \in \sigma(c),\ \forall c \in T \setminus G.$$
The depth of the cluster tree is
$$L = \max \{ \text{level}(c) : c \in T \}$$
while the minimal depth is given by
$$L_{\min} := \min \{ \text{level}(\tau) : \tau \in G \}. \tag{6}$$
For $0 \le \ell \le L$, the tree level $T(\ell)$ contains all clusters $c \in T$ with level$(c) = \ell$.
The term $(v, Ku)_{0,\Gamma}$ in (3) contains an integral over $\Gamma \times \Gamma$:
$$(v, Ku)_{0,\Gamma} = \int_{\Gamma \times \Gamma} v(x)\, u(y)\, k(x,y)\, ds_y\, ds_x.$$
In the next step, the product $\Gamma \times \Gamma$ is partitioned into pairs of clusters, defining a block partitioning of $\Gamma \times \Gamma$. A pair $\mathbf c = (c_1, c_2) \in T \times T$ is called a block.
Definition 11 Let $\eta \in (0,1)$. A block $\mathbf c \in T \times T$ is $\eta$-admissible if
$$\max \{ \rho_{c_1}, \rho_{c_2} \} \le \eta\, \text{dist}(c_1, c_2) \tag{7}$$
holds with $\rho_{c_1}$, $\rho_{c_2}$ as in Definition 5.

If there is no ambiguity, we write "admissible" short for "$\eta$-admissible".
Definition 12 Let $\mathbf c = (c_1, c_2) \in T \times T$. The set of sons of $\mathbf c$ is given by
$$\sigma(\mathbf c) = \begin{cases} \sigma(c_1) \times \sigma(c_2) & \text{provided } \sigma(c_1) \ne \emptyset \text{ and } \sigma(c_2) \ne \emptyset, \\ \sigma(c_1) \times \{c_2\} & \text{provided } \sigma(c_1) \ne \emptyset \text{ and } \sigma(c_2) = \emptyset, \\ \{c_1\} \times \sigma(c_2) & \text{provided } \sigma(c_1) = \emptyset \text{ and } \sigma(c_2) \ne \emptyset, \\ \emptyset & \text{provided } \sigma(c_1) = \sigma(c_2) = \emptyset. \end{cases}$$
A block $\mathbf c \in T \times T$ is called a leaf if $\sigma(\mathbf c) = \emptyset$. The tree $T$ induces a block cluster tree $T^{(2)}$ of $\Gamma \times \Gamma$.

Definition 13 $T^{(2)}$ is a block cluster tree if
- $(\Gamma, \Gamma) \in T^{(2)}$,
- every $\mathbf c \in T^{(2)}$ satisfies one of the alternatives:
  - $\mathbf c$ is a leaf,
  - $\mathbf c = \bigcup \sigma(\mathbf c)$.

Note that the block cluster tree $T^{(2)}$ is fully determined by the cluster tree $T$.

Definition 14 A subset $P^{(2)} \subset T^{(2)}$ is a block partitioning of $\Gamma \times \Gamma$ if the elements of $P^{(2)}$ are disjoint and
$$\Gamma \times \Gamma = \bigcup P^{(2)}.$$
It is an $\eta$-admissible block partitioning if every $\mathbf c \in P^{(2)}$ satisfies one of the alternatives:
- $\mathbf c$ is a leaf,
- $\mathbf c$ is $\eta$-admissible.

It is a minimal, $\eta$-admissible block partitioning if there is no $\eta$-admissible block partitioning with fewer elements.
Algorithm 15 The minimal, $\eta$-admissible block partitioning of $\Gamma \times \Gamma$ is obtained as the result of the procedure divide$((\Gamma, \Gamma), P^{(2)})$ defined by (see [7])

procedure divide(c, P);
begin
  if (c is a leaf) then P := P ∪ {c}
  else if (c is admissible) then P := P ∪ {c}
  else for all c̃ ∈ σ(c) do divide(c̃, P)
end;

The partitioning $P_{\min}$ contains non-admissible leaves and admissible blocks. These subsets are denoted by $N$ (nearfield) and $F$ (farfield):
$$N := \{ \mathbf c \in P_{\min} : \mathbf c \text{ is non-admissible} \}, \qquad F := P_{\min} \setminus N. \tag{8}$$
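A minimal sketch of Algorithm 15 in Python (an illustration, not the paper's implementation): the 3-D geometry is replaced by dyadic subintervals of $[0,1)$, leaves are blocks of length $1/4$, and the admissibility test is condition (7) with $\eta = 1$; all helper names are hypothetical.

```python
def divide(c, sons, is_leaf, is_admissible, P):
    """Recursive block subdivision as in Algorithm 15: a block is accepted
    into the partitioning P if it is a leaf or admissible; otherwise it is
    split into its sons."""
    if is_leaf(c) or is_admissible(c):
        P.append(c)
    else:
        for c_son in sons(c):
            divide(c_son, sons, is_leaf, is_admissible, P)

# Toy 1-D stand-in: a cluster is an interval (a, b); a block is a pair.
def sons(c):
    (a1, b1), (a2, b2) = c
    m1, m2 = (a1 + b1) / 2, (a2 + b2) / 2
    return [(i1, i2)
            for i1 in ((a1, m1), (m1, b1))
            for i2 in ((a2, m2), (m2, b2))]

def is_leaf(c):
    (a1, b1), (a2, b2) = c
    return max(b1 - a1, b2 - a2) <= 0.25

def is_admissible(c):
    (a1, b1), (a2, b2) = c
    dist = max(a2 - b1, a1 - b2, 0.0)
    return max(b1 - a1, b2 - a2) <= dist    # condition (7) with eta = 1

P_min = []
divide(((0.0, 1.0), (0.0, 1.0)), sons, is_leaf, is_admissible, P_min)
nearfield = [c for c in P_min if not is_admissible(c)]   # N in (8)
farfield = [c for c in P_min if is_admissible(c)]        # F in (8)
assert len(P_min) == 16 and len(nearfield) == 10 and len(farfield) == 6
```

Here the root block is subdivided twice; of the resulting 16 leaf blocks, the 6 well-separated ones end up in the farfield and the 10 touching ones in the nearfield, mirroring the split (8).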
On blocks $\mathbf c \in F$, the kernel function will be replaced by an approximation of a certain order. The idea is that, on blocks consisting of clusters of similar size, the approximation order is the same and, in addition, the approximation order is smaller on smaller blocks.

Definition 16 Let $L_{\min}$ be as in (6). The order distribution function $m : F \to \mathbb N_0$ depends on two constants $a, b \in \mathbb N_0$ and is given by
$$m(\mathbf c) := a\, (L_{\min} - \ell_{\min}(\mathbf c))_+ + b \tag{9}$$
with
$$\ell_{\min}(\mathbf c) = \min \{ \text{level}(c_1), \text{level}(c_2) \}$$
and
$$(x)_+ = \max \{ 0, x \}.$$
The order distribution is extended to a function $m : F \cup T \to \mathbb N_0$ by
$$m(c) = \max \{ m(\mathbf c) : \mathbf c \in F \wedge c \in \{c_1, c_2\} \}, \qquad c \in T. \tag{10}$$

Remark 17 One could generalise the function $m$ by allowing $a, b \in \mathbb R_{\ge 0}$ and defining $m(\mathbf c) := \lceil a\, (L_{\min} - \ell_{\min}(\mathbf c))_+ + b \rceil$, where $\lceil x \rceil$ denotes the smallest integer larger than or equal to $x$.
Remark 18 Definition 16 implies that the approximation order on a block $(c_1, c_2)$ is determined by the "larger cluster" $c = \arg\min \{ \text{level}(c_1), \text{level}(c_2) \}$. The approximation order is high on large clusters, e.g., $m(\Gamma, \Gamma) = a L_{\min} + b$, and small for small clusters, e.g.,¹
$$m(c_1, c_2) = b$$
for all $c_1, c_2$ satisfying level$(c_1)$, level$(c_2) \ge L_{\min}$.

Remark 19 In Subsection 3.1, a construction for the sets $T$, $P^{(2)}$ is presented which always guarantees that $(c_1, c_2) \in P^{(2)}$ implies that $c_1$ and $c_2$ belong to the same $T(\ell)$ for some $\ell$. In this case, the order distribution $m$ only depends on the level $\ell$.
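The order distribution (9) can be sketched as follows; the values $a = 1$, $b = 2$ are placeholders, since the paper fixes $a$ and $b$ only in Definition 60.

```python
def order(level_c1, level_c2, L_min, a=1, b=2):
    """Order distribution (9): m(c) = a * (L_min - l_min(c))_+ + b, where
    l_min(c) is the smaller of the two cluster levels, i.e. the level of
    the larger cluster.  a, b are placeholder values (cf. Definition 60)."""
    l_min = min(level_c1, level_c2)
    return a * max(L_min - l_min, 0) + b

L_min = 5
# Largest block (Gamma x Gamma, both levels 0): maximal order a*L_min + b.
assert order(0, 0, L_min) == 1 * 5 + 2
# Blocks at or below level L_min get the minimal order b.
assert order(5, 6, L_min) == 2
assert order(7, 7, L_min) == 2
```

This reproduces the behaviour described in Remark 18: high order on large clusters, the constant order $b$ on the finest levels.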
3 The variable order panel clustering algorithm

In this section, we will define the panel clustering algorithm. In the previous section, we have defined a partitioning of $\Gamma \times \Gamma$ into a minimal, $\eta$-admissible block partitioning $P_{\min} = N \cup F$. On the portion $\bigcup N \subset \Gamma \times \Gamma$, the standard, matrix-oriented approach is used while, on $\bigcup F$, the kernel function is approximated by suitable expansions.
Let the kernel function $k$ be as in (2).
Assumption 20 There exist positive constants $C_1, C_3, C_4, C_5 \in \mathbb R_{>0}$, integers $\gamma_1, \gamma_I, \gamma_{II}, d_I, d_{II} \in \mathbb N_0$ and $C_2 \in (0,1)$ having the following properties. For all $\eta \in (0, \hat\eta)$ and all $\eta$-admissible block partitionings $P^{(2)}$ of $\Gamma \times \Gamma$, for all $\mathbf c \in F$, there is a family of approximations $k_{\mathbf c}^{(m)}$, $m \in \mathbb N_0$, of the kernel function $k$ satisfying
$$\left| k(x,y) - k_{\mathbf c}^{(m)}(x,y) \right| \le C_1 C_2^m\, \text{dist}^{-1}(c_1, c_2) \qquad \forall (x,y) \in \mathbf c, \tag{11}$$
having the form
$$k_{\mathbf c}^{(m)}(x,y) = \sum_{(\alpha,\beta) \in I_m} \kappa_{\alpha,\beta}^{(m)}(\mathbf c)\, \Phi_\alpha^{(c_1)}(x)\, \Psi_\beta^{(c_2)}(y) \tag{12}$$
with index sets $I_m \subset \mathbb N_0^{d_I} \times \mathbb N_0^{d_{II}}$, $m \in \mathbb N_0$, satisfying
$$\# I_m \le C_3 (m+1)^{\gamma_1}, \tag{13}$$
$$I_m^I := \left\{ \alpha \mid \exists \beta \in \mathbb N_0^{d_{II}} : (\alpha, \beta) \in I_m \right\}, \tag{14}$$
$$I_m^{II} := \left\{ \beta \mid \exists \alpha \in \mathbb N_0^{d_I} : (\alpha, \beta) \in I_m \right\}, \tag{15}$$
$$\# I_m^s \le C_4 (m+1)^{\gamma_s}, \qquad s \in \{I, II\}, \tag{16}$$
$$|\alpha| \le C_5 (m+1) \qquad \forall \alpha \in I_m^s,\ s \in \{I, II\}, \tag{17}$$
$$I_m^I \subset I_M^I, \quad I_m^{II} \subset I_M^{II} \qquad \forall\, 0 \le m \le M. \tag{18}$$

¹ For a block $\mathbf c = (c_1, c_2)$, the notation $m(c_1, c_2)$ is short for $m((c_1, c_2))$.
The approximation of the kernel function is based on a modification of Taylor expansions. In this light, we begin with analysing the (true) Taylor approximation $\hat k_{\mathbf c}^{(m)}$ of the kernel function and, then, explain the modification. We begin with introducing some notations.
For $\mathbf c \in F$, the difference domain $d(\mathbf c)$ is given by
$$d(\mathbf c) = c_1 - c_2 = \left\{ z \in \mathbb R^3 \mid \exists (x,y) \in \mathbf c : z = x - y \right\}. \tag{19}$$
Put $z_{\mathbf c} = M_{c_1} - M_{c_2}$ (cf. Definition 5). One easily checks that, since $\mathbf c$ is admissible, $z_{\mathbf c} \ne 0$. The kernel function in relative coordinates defines the function $k_{\text{rel}} : \overline{c_1} \times d(\mathbf c) \to \mathbb R$,
$$k_{\text{rel}}(y, z) = \langle n(y), z \rangle\, k_3(z) \tag{20}$$
with
$$k_3(z) := \|z\|^{-3}.$$
Taylor expansion of $k_3$ about $z_{\mathbf c}$ yields (writing $n$ short for $n(y)$):
$$\hat k_{\text{rel}}^{(m)}(y, z) = \langle n, z \rangle \sum_{|\alpha| < m} \frac{(z - z_{\mathbf c})^\alpha}{\alpha!}\, k_3^{(\alpha)}(z_{\mathbf c}) \tag{21}$$
where we employed the usual multi-index notation for $\alpha \in \mathbb N_0^3$. Re-substituting $z = x - y$, factorizing $(x - y - z_{\mathbf c})^\alpha$ with respect to $x - M_{c_1}$ and $y - M_{c_2}$, and rearranging the terms results in²
$$\hat k_{\mathbf c}^{(m)}(x, y) := \hat k_{\text{rel}}^{(m)}(y, x - y) = \sum_{i=1}^3 \sum_{|\alpha| + |\beta| \le m} (x - M_{c_1})^\alpha\, n_i(y)\, (y - M_{c_2})^\beta\, \kappa_{\alpha,\beta,i}^{(m)}(\mathbf c) \tag{22}$$
with
$$\kappa_{\alpha,\beta,i}^{(m)}(\mathbf c) = \frac{(-1)^{|\beta|}}{\alpha!\, \beta!} \begin{cases} (\alpha_i + \beta_i)\, k_3^{(\alpha+\beta-e_i)}(z_{\mathbf c}) + (z_{\mathbf c})_i\, k_3^{(\alpha+\beta)}(z_{\mathbf c}), & |\alpha| + |\beta| < m, \\ (\alpha_i + \beta_i)\, k_3^{(\alpha+\beta-e_i)}(z_{\mathbf c}), & |\alpha| + |\beta| = m. \end{cases}$$
Here, $\{e_i\}_{i=1}^3$ denotes the set of canonical unit vectors in $\mathbb R^3$. Introducing the seven-dimensional index set
$$\tilde I_m = \left\{ (\alpha, \beta, i) \in \mathbb N_0^3 \times \mathbb N_0^3 \times \{1,2,3\} : |\alpha| + |\beta| \le m \right\}$$
and the function system
$$\hat\Phi_\alpha^{(c_1)}(x) = (x - M_{c_1})^\alpha, \qquad \hat\Psi_{(\beta,i)}^{(c_2)}(y) = (y - M_{c_2})^\beta\, n_i(y) \tag{23}$$

² For the variable order panel clustering algorithm, the functions $(\cdot - M_c)^\alpha$ will be replaced by suitable approximations. This is the reason why we denote the function in (22) by $\hat k_{\mathbf c}^{(m)}$ instead of $k_{\mathbf c}^{(m)}$.
results in an expansion of the form (12). In order to reduce the number of indices, the three-dimensional coefficients $\vec\kappa$ and functions $\vec{\hat\Psi}$ are introduced by
$$\vec\kappa_{\alpha,\beta}^{(m)}(\mathbf c) = \left\{ \kappa_{\alpha,\beta,i}^{(m)}(\mathbf c) \right\}_{i=1}^3, \qquad \vec{\hat\Psi}_\beta^{(c_2)}(y) = \left\{ \hat\Psi_{(\beta,i)}^{(c_2)}(y) \right\}_{i=1}^3.$$
The expansion (22) can be rewritten as
$$\hat k_{\mathbf c}^{(m)} = \sum_{(\alpha,\beta) \in I_m} \hat\Phi_\alpha^{(c_1)}\, \vec{\hat\Psi}_\beta^{(c_2)} \cdot \vec\kappa_{\alpha,\beta}^{(m)}(\mathbf c)$$
with the six-dimensional index set
$$I_m = \left\{ (\alpha, \beta) \in \mathbb N_0^3 \times \mathbb N_0^3 : |\alpha| + |\beta| \le m \right\}$$
and the convention
$$\hat\Phi_\alpha^{(c_1)}\, \vec{\hat\Psi}_\beta^{(c_2)} \cdot \vec\kappa_{\alpha,\beta}^{(m)}(\mathbf c) = \sum_{i=1}^3 \hat\Phi_\alpha^{(c_1)}\, \hat\Psi_{(\beta,i)}^{(c_2)}\, \kappa_{\alpha,\beta,i}^{(m)}(\mathbf c).$$
In [7, Appendix A], it was proved that there exist constants $\tilde C_1$, $\tilde C_2$, and $\tilde\eta_0$ so that
$$\left| k(x,y) - \hat k_{\mathbf c}^{(m)}(x,y) \right| \le \tilde C_1 \left( \tilde C_2\, \tilde\eta \right)^m |k(x,y)| \tag{24}$$
holds for all $(x,y) \in \mathbf c$ satisfying $\|x - y - z_{\mathbf c}\| \le \tilde\eta\, \|z_{\mathbf c}\|$ and all $\tilde\eta \in (0, \tilde\eta_0)$.

Lemma 21 Let $P^{(2)}$ denote an $\eta$-admissible block partitioning of $\Gamma \times \Gamma$ with $\eta \in (0, \hat\eta)$ and
$$\hat\eta := \min \left\{ \frac14,\ \frac{\tilde\eta_0}{4},\ \frac{1}{5 \tilde C_2} \right\}.$$
Then, Assumption 20 is satisfied.
Proof. Since the block partitioning was assumed to be $\eta$-admissible, we conclude:
$$\|x - y - z_{\mathbf c}\| \le \|x - M_{c_1}\| + \|y - M_{c_2}\| \le \rho_{c_1} + \rho_{c_2} \overset{(7)}{\le} 2\eta\, \text{dist}(c_1, c_2).$$
The distance can be estimated by:
$$\text{dist}(c_1, c_2) \le \|M_{c_1} - M_{c_2}\| + \rho_{c_2} + \rho_{c_1} \le \|M_{c_1} - M_{c_2}\| + 2\eta\, \text{dist}(c_1, c_2).$$
Using $\eta < \frac14$, we get
$$\text{dist}(c_1, c_2) \le 2\, \|M_{c_1} - M_{c_2}\| = 2\, \|z_{\mathbf c}\|$$
and, finally,
$$\|x - y - z_{\mathbf c}\| \le 4\eta\, \|z_{\mathbf c}\|.$$
Since $\eta < \hat\eta \le \tilde\eta_0 / 4$, with $\tilde\eta_0$ as in (24), this yields $4\eta =: \tilde\eta \in (0, \tilde\eta_0)$ in (24). Hence, (11) holds with $C_2 = 4 \tilde C_2 \eta < 4/5$. Let $C_\Gamma$ denote the smallest constant so that, for all $\mathbf c \in P^{(2)}$ and all $(x,y) \in \mathbf c$:
$$|\langle n(y), x - y \rangle| \le C_\Gamma\, \|x - y\|^2.$$
Then,
$$|k(x,y)| = \frac{|\langle n(y), x - y \rangle|}{\|x - y\|^3} \le \frac{C_\Gamma}{\|x - y\|} \le C_\Gamma\, \text{dist}^{-1}(c_1, c_2) \qquad \forall (x,y) \in \mathbf c$$
and Assumption 20 is satisfied for $k_{\mathbf c}^{(m)} = \hat k_{\mathbf c}^{(m)}$ with $C_1 = \tilde C_1 C_\Gamma$.
Some combinatorial manipulations yield
$$\# \tilde I_m \le 3 (m+1)^6, \qquad \text{i.e., } C_3 = 3,\ \gamma_1 = 6.$$
Obviously:
$$\tilde I_m^I = \left\{ \alpha \in \mathbb N_0^3 : |\alpha| \le m \right\}, \tag{25}$$
$$\tilde I_m^{II} = \left\{ (\beta, i) \in \mathbb N_0^3 \times \{1,2,3\} : |\beta| \le m \right\}. \tag{26}$$
Again, some combinatorial manipulations lead to
$$C_4 = 3, \qquad \gamma_I = \gamma_{II} = 3, \qquad d_I = 3, \qquad d_{II} = 4.$$
Finally, $C_5 = 2$ is trivial.
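The combinatorial bounds above can be checked numerically; the following sketch (an illustration, not part of the paper) enumerates the index sets by brute force and verifies the exact cardinalities together with the bounds (13) and (16).

```python
from itertools import product
from math import comb

def multi_indices(dim, bound):
    """All multi-indices alpha in N_0^dim with |alpha| <= bound."""
    return [a for a in product(range(bound + 1), repeat=dim)
            if sum(a) <= bound]

for m in range(6):
    # ~I_m = {(alpha, beta, i) : |alpha| + |beta| <= m, i in {1, 2, 3}}
    I_m = [(a, b, i)
           for a in multi_indices(3, m)
           for b in multi_indices(3, m)
           for i in (1, 2, 3)
           if sum(a) + sum(b) <= m]
    assert len(I_m) == 3 * comb(m + 6, 6)       # exact count
    assert len(I_m) <= 3 * (m + 1) ** 6         # bound (13): C3 = 3, gamma1 = 6
    I_m_I = {a for (a, b, i) in I_m}            # projection (14), cf. (25)
    I_m_II = {(b, i) for (a, b, i) in I_m}      # projection (15), cf. (26)
    assert len(I_m_I) == comb(m + 3, 3)
    assert len(I_m_II) == 3 * comb(m + 3, 3)
    assert max(len(I_m_I), len(I_m_II)) <= 3 * (m + 1) ** 3   # (16): C4 = 3
```

The exact counts are binomial numbers; the powers $(m+1)^6$ and $(m+1)^3$ are the convenient polynomial envelopes used in the complexity analysis.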
In [15], [8], it was proved that all kernel functions corresponding to elliptic boundary value problems admit an approximation satisfying Assumption 20.

Remark 22 For the variable order panel clustering algorithm, the Taylor-based expansion derived in the previous example will be modified by replacing the expansion functions (23) by approximations having a more hierarchical structure with respect to the order $m$.

Remark 23 The panel clustering method is by no means linked to Taylor-based expansions. Other expansions, e.g., expansions in spherical harmonics, could be preferable for special applications.
The panel clustering approximation of $(v, Ku)_{0,\Gamma}$ is given by
$$(v, Ku)_{0,\Gamma} \approx \sum_{\mathbf c \in N} \int_{\mathbf c} v(x)\, u(y)\, k(x,y)\, ds_x\, ds_y + \sum_{\mathbf c \in F} \sum_{(\alpha,\beta) \in I_{m(\mathbf c)}} \left( \int_{c_1} \Phi_\alpha^{(c_1)}(x)\, v(x)\, ds_x \right) \vec\kappa_{\alpha,\beta}^{(m(\mathbf c))}(\mathbf c) \cdot \int_{c_2} \vec\Psi_\beta^{(c_2)}(y)\, u(y)\, ds_y. \tag{27}$$
The function $m(\mathbf c)$ determines the order of approximation on blocks $\mathbf c \in F$. It was defined in Definition 16, while the constants $a, b \in \mathbb N_0$ will be fixed in Definition 60. Assumption 20 implies
$$I_{m(\mathbf c)} \subset I_{m(\mathbf c)}^I \times I_{m(\mathbf c)}^{II}.$$
Let $\mathbf c = (c_1, c_2) \in F$. Then, $m(c_i) \ge m(\mathbf c)$ for $i = 1, 2$ (cf. (10)), resulting in
$$I_{m(\mathbf c)} \subset I_{m(c_1)}^I \times I_{m(c_2)}^{II}. \tag{28}$$
Property (28) will allow us to decompose the computations related to the index set $I_{m(\mathbf c)}$ into separate computations on the index sets $I_{m(c_1)}^I$, $I_{m(c_2)}^{II}$.
For the evaluation of a matrix-vector multiplication, expression (27) has to be evaluated for all basis functions $v = b_\tau$, $\tau \in G$.
The variable order panel clustering algorithm

The variable order panel clustering algorithm depends on various parameters:
- $\eta$: the constant appearing in the definition of $\eta$-admissibility,
- the constants $a, b$ in the definition of $m = m(\mathbf c)$ (as in (9)) in (11). The precise choice of $a$ and $b$ is given in Definition 60.

Setup phase:
1. For a given mesh $G$, build up the cluster tree $T$ and compute all cluster radii and cluster centres.
2. Compute $P_{\min}$ by using the procedure divide of Algorithm 15.
3. For all farfield blocks $\mathbf c \in F$ and $(\alpha, \beta) \in I_{m(\mathbf c)}$: compute the coefficients $\vec\kappa_{\alpha,\beta}^{(m(\mathbf c))}(\mathbf c)$.
4. Compute the nearfield matrix entries:
$$N_{\tau t} = \int_\tau \int_t k(x,y)\, ds_y\, ds_x \qquad \forall (\tau, t) \in N.$$
5. For all $\tau \in G$: compute the basis farfield coefficients:
$$J_{\tau,\alpha}^I = \int_\tau \Phi_\alpha^{(\tau)}(x)\, ds_x \qquad \forall \alpha \in I_{m(\tau)}^I,$$
$$\vec J_{\tau,\beta}^{II} = \int_\tau \vec\Psi_\beta^{(\tau)}(x)\, ds_x \qquad \forall \beta \in I_{m(\tau)}^{II}.$$

Evaluation phase:
Let $u \in S^{-1,0}$ and $\mathbf u \in \mathbb R^G$ so that $u = \sum_{\tau \in G} \mathbf u(\tau)\, b_\tau$ as in (4).
1. Compute the farfield coefficients: for all $c \in T$:
$$\vec J_{c,\beta}^{II}[u] := \int_c \vec\Psi_\beta^{(c)}(x)\, u(x)\, ds_x \qquad \forall \beta \in I_{m(c)}^{II}.$$
2. For all $\mathbf c = (c_1, c_2) \in P^{(2)}$ and $\alpha \in I_{m(\mathbf c)}^I$:
$$A_\alpha(\mathbf c)[u] := \sum_{\beta : (\alpha,\beta) \in I_{m(\mathbf c)}} \vec\kappa_{\alpha,\beta}^{(m(\mathbf c))}(\mathbf c) \cdot \vec J_{c_2,\beta}^{II}[u]. \tag{29}$$
3. Approximate a matrix-vector multiplication by
$$\sum_{t \in G} N_{\tau t}\, \mathbf u(t) + \sum_{\mathbf c \in F} \sum_{\alpha \in I_{m(\mathbf c)}^I} A_\alpha(\mathbf c)[u] \int_{c_1} \Phi_\alpha^{(c_1)}(x)\, b_\tau(x)\, ds_x \qquad \forall \tau \in G. \tag{30}$$
Remark 24 Let $\mathbf c = (c_1, c_2) \in F$ and $m_i = m(c_i)$, $i = 1, 2$. For the realization of the algorithm, it is essential that
$$I_{m(\mathbf c)} \subset I_{m_1}^I \times I_{m_2}^{II}$$
holds. This condition is guaranteed since, in view of (10), we have $m_i \ge m(\mathbf c)$ and
$$I_{m(\mathbf c)} \subset I_{m(\mathbf c)}^I \times I_{m(\mathbf c)}^{II} \subset I_{m_1}^I \times I_{m_2}^{II}.$$
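A one-dimensional toy version of the evaluation phase (an illustration only: the points, centres, and the kernel $1/(x-y)$ are assumptions, not the 3-D double layer kernel of the paper) shows the three steps in action. Once the kernel is replaced by a separated expansion of the form (12), the double sum over sources and targets factorizes into a source pass (step 1), a coupling step (cf. (29)), and a target pass (cf. (30)).

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
# Two well-separated 1-D "clusters": sources y (cluster c2), targets x (c1).
x = rng.uniform(0.0, 1.0, 50)
y = rng.uniform(4.0, 5.0, 60)
M1, M2 = 0.5, 4.5            # cluster centres
z_c = M1 - M2                # cf. z_c = M_{c1} - M_{c2}
u = rng.standard_normal(60)

m = 16
# Degenerate-kernel coefficients from the Taylor expansion of 1/z about z_c:
# k(x, y) = 1/(x - y) ~ sum_{a+b<=m} kappa[a,b] * (x-M1)^a * (y-M2)^b.
kappa = {(a, b): (-1) ** a * comb(a + b, a) / z_c ** (a + b + 1)
         for a in range(m + 1) for b in range(m + 1 - a)}

# Step 1: farfield coefficients J_b[u] -- one pass over the sources.
J = {b: ((y - M2) ** b) @ u for b in range(m + 1)}
# Step 2: coupling, A_a[u] = sum_b kappa[a,b] * J_b[u]  (cf. (29)).
A = {a: sum(kappa[a, b] * J[b] for b in range(m + 1 - a))
     for a in range(m + 1)}
# Step 3: target-side summation (cf. (30)).
v_fast = sum(A[a] * (x - M1) ** a for a in range(m + 1))

v_exact = (1.0 / (x[:, None] - y[None, :])) @ u
assert np.allclose(v_fast, v_exact, rtol=1e-8, atol=1e-8)
```

The cost is $O((\#x + \#y)\, m)$ instead of $O(\#x \cdot \#y)$, which is the mechanism behind the complexity gains discussed above.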
In the sequel, we will comment on the realization of the single steps in the algorithm, which is essential for both the practical implementation and the complexity analysis. Some further approximations and relaxations will occur.
3.1 Construction of the cluster tree

Let $G$ denote the given mesh of $\Gamma$. In a first step, one has to compute the centre and radius (cf. Definition 5) of each panel $\tau \in G$. The smallest radius defines the quantity
$$\rho := \min_{\tau \in G} \rho_\tau \tag{31}$$
while the "step size" $h$ of $G$ is given by
$$h := \max_{\tau \in G} \text{diam}\, \tau. \tag{32}$$
We give a construction based on an auxiliary uniform grid with a uniform partitioning. This grid is not needed in the true computations but imposes a simple logical structure on the true grid $G$. Let $Q$ denote the smallest cube containing $\Gamma$ with edges parallel to the coordinate axes. Without loss of generality we may assume that $Q = (0,1)^3$. We introduce a sequence of physically and logically nested grids on $Q$.
For $\ell \in \mathbb N_0$, let $h_\ell = 2^{-\ell}$ and $n_\ell = 2^\ell$. The interval $\tau_i^\ell$ is defined, for $1 \le i \le n_\ell$, by $\tau_i^\ell := ((i-1) h_\ell, i h_\ell)$. For $\delta \in (\mathbb N_{\le n_\ell})^3$, a cell $q_\delta^\ell$ is given by
$$q_\delta^\ell = \tau_{\delta_1}^\ell \times \tau_{\delta_2}^\ell \times \tau_{\delta_3}^\ell.$$
Lemma 25 For $\delta \in (\mathbb N_{\le n_\ell})^3$ and $\ell \in \mathbb N_0$, the radius and centre of $q_\delta^\ell$ are given by
$$\rho^\ell := \rho_{q_\delta^\ell} = \sqrt 3\, \frac{h_\ell}{2}, \qquad M_\delta^\ell := M_{q_\delta^\ell} = h_\ell \left( \delta - 2^{-1} (1,1,1)^\top \right). \tag{33}$$
The reference grid $Q_\ell$ is defined by
$$Q_\ell := \left\{ q_\delta^\ell : \delta \in (\mathbb N_{\le n_\ell})^3 \right\}. \tag{34}$$
Obviously, each element $q \in Q_\ell$ has exactly eight sons in $Q_{\ell+1}$ satisfying
$$q = \bigcup \sigma(q).$$
In other words, $\{Q_\ell\}_{\ell \in \mathbb N_0}$ is an oct-tree. This tree will be associated to $G$. Let $L$ denote the smallest number so that
$$\sqrt 3\, \frac{h_L}{2} \le \rho \tag{35}$$
holds with $\rho$ as in (31). Hence, a cluster tree for the auxiliary grid $Q_L$ is given by $Q = \{Q_\ell\}_{0 \le \ell \le L}$.
Any element $\tau \in G$ is associated to that element $q \in Q_L$ containing the centre of $\tau$. (If there are multiple possibilities, choose one of them.) This defines a mapping ref$: G \to Q_L$. Since the mapping ref is injective (cf. [4, Remark 5.1]), the pullback is well-defined on Range(ref). In this light, we define invref$: Q_L \to G \cup \{\emptyset\}$ via
$$\text{invref}(q) = \begin{cases} \tau & \text{if } q = \text{ref}(\tau), \\ \emptyset & \text{otherwise}. \end{cases} \tag{36}$$
The following procedure builds up the cluster tree along with the tree levels. Before we present the formal description of the algorithm, we explain the underlying ideas. Our aim is to generate a balanced tree with the additional properties that
1. the number of sons of any cluster is different from one,
2. the geometric size of a cluster on level $T(\ell)$ is of order $2^{-\ell}$, i.e., there exists $C_7 \ge 1$ so that, for all $c \in T(\ell)$:
$$C_7^{-1} 2^{-\ell} \le \rho_c \le C_7 2^{-\ell}. \tag{37}$$

The cluster ball, centre, and radius are approximated as follows. A box is a rectangular parallelepiped with axes parallel to the coordinate system. For a cluster, it is quite simple to determine the minimal box $b(c)$ containing $c$. The approximate cluster ball, centre, and radius are defined as the Čebyšev ball, centre, and radius of $b(c)$ and are denoted by $\tilde B(c)$, $\tilde M_c$, and $\tilde\rho_c$. By this construction it is guaranteed that
$$c \subset \tilde B(c), \qquad s_c / 2 \le \rho_c \le \tilde\rho_c, \tag{38}$$
where $s_c$ denotes the maximal side length of $b(c)$.
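The box-based approximation (38) can be sketched as follows (an illustration with random points standing in for a cluster; the function name is hypothetical).

```python
import numpy as np

def approximate_cluster_ball(points):
    """Approximate cluster ball as in (38): take the minimal axis-parallel
    box b(c) containing the cluster; the Chebyshev centre and radius of the
    box are its midpoint and half its diagonal."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    centre = (lo + hi) / 2
    radius = np.linalg.norm(hi - lo) / 2   # rho_tilde: half the box diagonal
    s_max = (hi - lo).max()                # maximal side length s_c
    return centre, radius, s_max

rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(200, 3))
centre, r_tilde, s_max = approximate_cluster_ball(pts)

# The approximate ball contains the cluster ...
assert np.all(np.linalg.norm(pts - centre, axis=1) <= r_tilde + 1e-12)
# ... and s_c/2 <= rho_c <= rho_tilde, cf. (38): s_c/2 is a lower bound for
# the true Chebyshev radius, which is bounded above by r_tilde.
assert s_max / 2 <= r_tilde
```

Only the box, its midpoint, and two scalars per cluster have to be stored, which is what makes this approximation attractive in the tree construction below.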
The clusters (corresponding to a reference cube $q \in Q_\ell$) are built recursively by collecting the clusters $\bigcup_{\tilde q \in \sigma(q)} \{\text{invref}(\tilde q)\}$. However, if a cluster contains only one son or the maximal side length $s_c$ is so small that (37) is violated, this cluster is absorbed into a neighbouring cluster. The choice of the neighbouring cluster involves the definition of layers around a set $\omega$.

Definition 26 Let $Q_\ell$ be as in (34). For $\omega \subset \mathbb R^3$, the layers $L_k^i$ around $\omega$ are given by $L_k^0(\omega) := \omega$ and, for $0 \le k \le L$, $i \in \mathbb N$, by the recursion:
$$L_k^1(\omega) := \bigcup \left\{ q \in Q_k \mid \overline q \cap \overline\omega \ne \emptyset \right\}, \qquad L_k^{i+1}(\omega) := L_k^1 \left( L_k^i(\omega) \right).$$

If a cluster $c \in T(\ell)$ has only one son or is too small, it will be "absorbed" into a "neighbouring" cluster $\tilde c \in T(\ell)$ (with reference cluster $\tilde q := \text{ref}(\tilde c)$) satisfying
1. $c \subset L_{\ell+1}^1(\tilde q)$, i.e., $\tilde c$ is "close" to $c$,
2. $s_{\tilde c} \ge c_{\min} 2^{-\ell}$, i.e., $\tilde c$ is "sufficiently big" (cf. (38)).

The algorithm depends on the parameter $c_{\min} < 1$ controlling the relative smallness of a cluster. The precise choice of $c_{\min}$ is given in Lemma 52.
The recursion starts on the panel level, and we put $T(L) = G$ and $T(L-1) = \emptyset$. On the panel level, we assume that the cluster centres, balls, and radii are computed exactly. Then, the procedure build cluster tree generates a coarser level from the finer level recursively. As a side result, the mapping ref is extended to all clusters and the pullback invref is defined. The procedure is called by

ℓ := L − 1;
while T(ℓ+1) ≠ ∅ do begin
  build cluster tree(T(ℓ+1), T(ℓ), ℓ); ℓ := ℓ − 1
end;

while the procedure build cluster tree is defined by

procedure build cluster tree(T(ℓ+1), T(ℓ), ℓ);
begin
  for all q ∈ Q_ℓ do begin
    c := ∅; σ(c) := ∅; invref(q) := ∅;
    Comment: The sons of the reference cluster q will be collected.
    for all q̃ ∈ σ(q) do begin
      c̃ := invref(q̃); c := c ∪ c̃;
      if c̃ ≠ ∅ then σ(c) := σ(c) ∪ {c̃}
    end;
    if c ≠ ∅ then begin
      T(ℓ) := T(ℓ) ∪ {c}; ref(c) := q
    end;
    invref(q) := c; level(c) := ℓ
  end;
  Comment: Clusters having only one son or too small a radius are absorbed into a neighbouring cluster.
  for all c ∈ T(ℓ) do begin
    compute b(c) as the minimal box containing ⋃_{c̃ ∈ σ(c)} b(c̃),
      the approximate cluster centre M̃_c,
      the approximate cluster radius ρ̃_c,
      and the maximal side length s_c of b(c);
    if ♯σ(c) = 1 or s_c ≤ c_min 2^{−ℓ} then begin³
      N(c) := { c̃ ∈ T(ℓ) | c ⊂ L¹_{ℓ+1}(q_c̃) };   (39)
      if N(c) ≠ ∅ then begin
        determine c̃ ∈ N(c) so that s_c̃ ≥ s_{c′} ∀ c′ ∈ N(c);
        T(ℓ) := T(ℓ) ∖ {c}; c̃ := c̃ ∪ c;
        σ(c̃) := σ(c̃) ∪ σ(c); invref(q_c) := ∅;
        update b(c̃), M̃_{c̃}, ρ̃_{c̃}, and s_{c̃}
      end
    end
  end
end;
The approximations of the cluster radii and cluster centres will be employed to check whether a pair of clusters is $\eta$-admissible. A sufficient condition is given in the next lemma.

Lemma 27 Let the approximate cluster centre, radius and ball be as in the procedure build cluster tree. Let $c_1, c_2 \in T$ and put, for $i = 1, 2$, $\tilde\rho_i := \tilde\rho_{c_i}$ and $\tilde B_i := \tilde B(c_i)$. Then, the condition
$$\max \{ \tilde\rho_1, \tilde\rho_2 \} \le \eta\, \text{dist} \left( \tilde B_1, \tilde B_2 \right) \tag{40}$$
implies that the block $(c_1, c_2)$ is $\eta$-admissible.

³ We employ the notation that, for a cluster $c$, the reference cube is denoted by $q_c := \text{ref}(c)$ and, for a cube $q \in Q_\ell$, the pullback by $c_q := \text{invref}(q)$.
Proof. Let $i \in \{1, 2\}$. Our construction directly implies that the minimal ball $B_i$ containing $c_i$ is contained in $\tilde B_i$. Hence, the true cluster radii $\rho_1, \rho_2$ can be estimated by
$$\max \{ \rho_1, \rho_2 \} \le \max \{ \tilde\rho_1, \tilde\rho_2 \}.$$
Since $c_i$ is contained in $\tilde B_i$, evidently
$$\text{dist} \left( \tilde B_1, \tilde B_2 \right) \le \text{dist}(c_1, c_2).$$
We have proved that condition (40) implies
$$\max \{ \rho_1, \rho_2 \} \le \max \{ \tilde\rho_1, \tilde\rho_2 \} \le \eta\, \text{dist} \left( \tilde B_1, \tilde B_2 \right) \le \eta\, \text{dist}(c_1, c_2)$$
and $(c_1, c_2)$ is $\eta$-admissible.
The lemma above motivates the definition of strong $\eta$-admissibility. We employ the same notation as in that lemma.

Definition 28 A block $\mathbf c = (c_1, c_2) \in T^{(2)}$ is strongly $\eta$-admissible iff
$$\max \{ \tilde\rho_1, \tilde\rho_2 \} \le \eta\, \text{dist} \left( \tilde B_1, \tilde B_2 \right).$$

In order to check the strong $\eta$-admissibility, the approximate centre and radius of the clusters have to be stored. The computation of an $\eta$-admissible block partitioning $P^{(2)}$ of $\Gamma \times \Gamma$ is performed by using Algorithm 15, where the check of $\eta$-admissibility is replaced by checking the strong $\eta$-admissibility.
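A minimal sketch of the strong admissibility test on approximate ball data (centres and radii are placeholder inputs; the function name is hypothetical):

```python
import numpy as np

def strongly_admissible(M1, r1, M2, r2, eta):
    """Strong eta-admissibility in the spirit of (40)/Definition 28: the
    clusters are replaced by their approximate Chebyshev balls (centre M_i,
    radius r_i).  Since the ball distance is a lower bound for the true
    cluster distance, the test errs on the safe side (Lemma 27)."""
    ball_dist = max(np.linalg.norm(np.asarray(M1, float) - np.asarray(M2, float))
                    - r1 - r2, 0.0)
    return max(r1, r2) <= eta * ball_dist

# Well-separated balls: admissible for eta = 0.5 (dist = 8, max radius = 1).
assert strongly_admissible((0, 0, 0), 1.0, (10, 0, 0), 1.0, 0.5)
# Balls that are too close fail the test (dist = 1.3, but max radius = 1).
assert not strongly_admissible((0, 0, 0), 1.0, (2.5, 0, 0), 0.2, 0.5)
```

Only two scalars and one point per cluster enter the test, which is exactly why the approximate quantities from the tree construction suffice.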
Remark 29 If the clusters along with the associated sets of sons are constructed by the algorithm build cluster tree, then $T$ is a cluster tree.

Remark 30 For $0 \le \ell \le L$, the construction of the cluster tree implies that the tree levels
$$T(\ell) = \{ c \in T \mid \text{level}(c) = \ell \}$$
satisfy
$$\Gamma = \bigcup T(\ell) \quad \forall\, 0 \le \ell \le L, \qquad G = T(L).$$
The block cluster tree $T^{(2)}$ is determined from $T$ (cf. Definition 13).

Remark 31 All blocks $(c_1, c_2) \in T^{(2)}$ consist of clusters of the same level:
$$\text{level}(c_1) = \text{level}(c_2). \tag{41}$$
3.2 Computation of the expansion coefficients, nearfield matrix and basis farfield coefficients

In Lemma 21, we have shown that expansion (22) has the required approximation property (Assumption 20). For the variable order panel clustering method, the expansion system (23) in (22) will be replaced by approximations to it (see Subsection 3.3), while the corresponding expansion coefficients $\vec\kappa_{\alpha,\beta}^{(m)}(\mathbf c)$ are as in (22). For the realization of the algorithm, they have to be computed and stored for each farfield block. Efficient algorithms for computing $\vec\kappa_{\alpha,\beta}^{(m)}(\mathbf c)$ are developed, for collocation discretizations, in [7] and [14] and, for Galerkin discretizations, in [8], [15], [9]. We do not recall here the details of the algorithms.
It will turn out from the error analysis that, in the special situation under consideration (discretization of an integral equation of zero order with piecewise constant elements), the nearfield matrix $N$ can be replaced by the zero matrix. No work at all is needed for this step.
It remains to compute the basis farfield coefficients. We consider here only the more involved case $II$:
$$\vec J_{\tau,\beta}^{II} = \int_\tau \vec\Psi_\beta^{(\tau)}(y)\, ds_y \qquad \forall \beta \in I_{m(\tau)}^{II},\ \forall \tau \in G.$$
It will turn out that, on the panel level, we restrict to polynomial expansions, i.e.,
$$\Phi_\alpha^{(\tau)}(x) = (x - M_\tau)^\alpha, \qquad \vec\Psi_\beta^{(\tau)}(y) = (y - M_\tau)^\beta\, n(y).$$
In the case of flat panels, the normal vector $n$ is constant on $\tau$ and the integration can be performed analytically (cf. [14]). For more general parametrisations, the integrals have to be evaluated numerically. Transforming onto the master element $Q$ (cf. Definition 1) yields:
$$\vec J_{\tau,\beta}^{II} = \int_Q g(\xi)\, \vec\Psi_\beta^{(\tau)}(\chi_\tau(\xi))\, d\xi$$
where $g$ denotes the surface element. Since $\chi_\tau$ and $n$ are smooth, the integrand is smooth as well. Due to (17) we know
$$|\beta| \le C_5 (m(L) + 1) \overset{(9)}{=} C_5 (b + 1).$$
Hence, standard quadrature formulae, e.g., conical Gauß rules, could be applied. For given $\beta$, the number of quadrature points for conical Gauß rules approximating $\vec J_{\tau,\beta}^{II}$ with an accuracy of $O(\text{diam}\, \tau)$ is independent of $\text{diam}\, \tau$.
3.3 Computation of the farfield coefficients

In this subsection, we will define precisely the approximation system for the variable order panel clustering method. Before we present the formal algorithm, we start with some motivations. We have shown that, on a cluster $c \in T$, the expansion system⁴ $\hat\Psi_{(\alpha,i)}^{(c)} = n_i\, \hat\Phi_\alpha^{(c)}$ (cf. (23)) has the required approximation property. In order to compute the farfield coefficients $J_{c,(\alpha,i)}[u]$ one has to evaluate the integrals
$$J_{c,(\alpha,i)}[u] = \int_c n_i(y)\, \hat\Phi_\alpha^{(c)}(y)\, u(y)\, ds_y. \tag{42}$$
For the efficiency of the panel clustering method, the hierarchical structure of the cluster tree $T$ plays a key role. The splitting of the integral
$$J_{c,(\alpha,i)}[u] = \sum_{\tilde c \in \sigma(c)} \int_{\tilde c} n_i(y)\, \hat\Phi_\alpha^{(c)}(y)\, u(y)\, ds_y$$
is based on this tree structure. Assume that the "consistency condition"
$$\hat\Phi_\alpha^{(c)} \big|_{\tilde c} \in \text{span} \left\{ \hat\Phi_{\tilde\alpha}^{(\tilde c)} : \tilde\alpha \in I_{m(\tilde c)}^I \right\} \qquad \forall \alpha \in I_{m(c)}^I \tag{43}$$
holds, i.e.,
$$\hat\Phi_\alpha^{(c)} \big|_{\tilde c} = \sum_{\tilde\alpha \in I_{m(\tilde c)}^I} \tilde\kappa_{\alpha,\tilde\alpha}^{\tilde c}\, \hat\Phi_{\tilde\alpha}^{(\tilde c)}. \tag{44}$$
Then, (42) can be evaluated by the recursion:
$$J_{c,(\alpha,i)}[u] = \sum_{\tilde c \in \sigma(c)} \sum_{\tilde\alpha \in I_{m(\tilde c)}^I} \tilde\kappa_{\alpha,\tilde\alpha}^{\tilde c}\, J_{\tilde c,(\tilde\alpha,i)}[u],$$
yielding a fast tree algorithm. Since (c) are polynomials, the restrictions (c) jc~ are
polynomials as well and (43) holds provided Im(c) Im(~c) . Some algebraic manipulation yields that, in this case, the coe cients ~c~ in (44) are given by
~ec :=
;
(M
~
0
c
e
; Mc);~
~ otherwise.
(45)
For the variable order panel clustering algorithm, the expansion order m (c) satises
(9) resulting in Im(c) 6 Im(~c) and, in general, (43) is violated.
This is the reason why we modify the expansion system for the variable order
panel clustering. The system (c) should satisfy the consistency condition and have
the same approximation property as the polynomials (c) . On the panel level we put
() = () , while on the larger cluster, the consistency condition (44) with ~c~ as
in (45) is explicitly used as the recursive de nition of (c) .
4
The same arguments apply to the function system (c ) .
20
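The shift relation (44) with the coefficients (45) is just a re-centred multinomial expansion and can be checked numerically. The following sketch (Python; all helper names are illustrative, not from the paper) evaluates $(x-M_c)^\alpha$ through the expansion in monomials centred at $M_{\tilde c}$ and compares with the direct value; multi-indices are 3-tuples.

```python
import math
from itertools import product

def kappa(alpha, atilde, Mc, Mct):
    # Shift coefficient from (45): binom(alpha, alpha~) * (M_c~ - M_c)^(alpha - alpha~),
    # zero unless alpha~ <= alpha componentwise; multi-index conventions act per component.
    if any(at > a for a, at in zip(alpha, atilde)):
        return 0.0
    k = 1.0
    for a, at, d in zip(alpha, atilde, (m1 - m0 for m0, m1 in zip(Mc, Mct))):
        k *= math.comb(a, at) * d ** (a - at)
    return k

def monomial(alpha, M, x):
    # (x - M)^alpha in multi-index notation
    p = 1.0
    for xi, Mi, a in zip(x, M, alpha):
        p *= (xi - Mi) ** a
    return p

def shifted_monomial(alpha, Mc, Mct, x):
    # Evaluate (x - M_c)^alpha through the re-centred expansion (44)
    return sum(kappa(alpha, atilde, Mc, Mct) * monomial(atilde, Mct, x)
               for atilde in product(*(range(a + 1) for a in alpha)))
```

The expansion is exact whenever the target index set contains all $\tilde\alpha \le \alpha$, which is precisely the condition $I_{m(c)} \subset I_{m(\tilde c)}$ discussed above.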
Definition 32: The index sets $I_m$, $I^{II}_m$, $\tilde I_m$ are given by

$$ I_m := I^{II}_m := \bigl\{\alpha \in \mathbb N_0^3 : |\alpha| \le m\bigr\}, \qquad \tilde I_m := \bigl\{(\alpha,\beta) \in \mathbb N_0^3 \times \mathbb N_0^3 : |\alpha| + |\beta| \le m\bigr\} \tag{46} $$

and the expansion functions $\phi^{(c)}_\alpha$, $\vec\phi^{(c)}_\beta$ by the recursion:

for the panels $\tau \in \mathcal G$:

$$ \phi^{(\tau)}_\alpha(x) = (x - M_\tau)^\alpha \quad \forall\alpha \in I_{m(\tau)}, \qquad \vec\phi^{(\tau)}_\beta(y) = (y - M_\tau)^\beta\, n(y) \quad \forall\beta \in I^{II}_{m(\tau)}; \tag{47} $$

for the clusters $c \in T\setminus\mathcal G$:

$$ \phi^{(c)}_\alpha\big|_{\tilde c} = \sum_{\tilde\alpha \in I_{m(\tilde c)}} \kappa^{c,\tilde c}_{\alpha,\tilde\alpha}\, \phi^{(\tilde c)}_{\tilde\alpha} \quad \forall\alpha \in I_{m(c)},\ \forall\tilde c \in \sigma(c), \qquad \vec\phi^{(c)}_\beta\big|_{\tilde c} = \sum_{\tilde\beta \in I^{II}_{m(\tilde c)}} \kappa^{c,\tilde c}_{\beta,\tilde\beta}\, \vec\phi^{(\tilde c)}_{\tilde\beta} \quad \forall\beta \in I^{II}_{m(c)},\ \forall\tilde c \in \sigma(c). \tag{48} $$

Remark 33: Since the functions $\phi^{(c)}_\alpha$, $\vec\phi^{(c)}_\beta$ are defined separately for each son $\tilde c \in \sigma(c)$, they are, in general, discontinuous. On each panel $\tau \subset c$, they are polynomials of degree $b$ (cf. (9)). $\phi^{(c)}_\alpha$ can be regarded as an approximation to the polynomial $\omega^{(c)}_\alpha = (x - M_c)^\alpha$ by piecewise polynomials of degree $b$.

Notation 34: Due to (46) we skip the superscripts I, II in $I^{I,II}_{m(c)}$. Furthermore, we write $\kappa_c$ short for $I_{m(c)}$ and, for a block $c \in F$, $\kappa_c$ short for $\tilde I_{m(c)}$ (cf. Definition 16).

For the computation of the farfield coefficients, the hierarchical definition of the expansion functions is used. The initial step is performed on the panel level $\mathcal G$. Compute and store, for all $\tau \in \mathcal G$:

$$ \vec J^{II}_{\tau,\beta}[u] := u|_\tau\; \vec J^{II}_{\tau,\beta} \qquad \forall\beta \in \kappa_\tau. $$

Assume inductively that all coefficients $\vec J^{II}_{\tilde c,\tilde\beta}[u]$ are computed for all $\tilde c \in \sigma(c)$ and $c \in T\setminus\mathcal G$. Then

$$ \vec J^{II}_{c,\beta}[u] = \int_c \vec\phi^{(c)}_\beta(y)\, u(y)\, ds_y = \sum_{\tilde c \in \sigma(c)} \int_{\tilde c} \vec\phi^{(c)}_\beta(y)\, u(y)\, ds_y \qquad \forall\beta \in \kappa_c. $$

By using (48) we obtain

$$ \vec J^{II}_{c,\beta}[u] = \sum_{\tilde c \in \sigma(c)}\ \sum_{\tilde\beta \in \kappa_{\tilde c}} \kappa^{c,\tilde c}_{\beta,\tilde\beta}\, \vec J^{II}_{\tilde c,\tilde\beta}[u] \qquad \forall\beta \in \kappa_c. \tag{49} $$
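The upward recursion (49) can be prototyped in a one-dimensional setting, where a panel's farfield coefficients are local moments of the density and a father cluster collects the shifted moments of its sons. All names below are illustrative and the point masses stand in for the panel integrals:

```python
import math

def kappa(a, at, shift):
    # One-dimensional analogue of (45): C(a, at) * shift^(a - at), shift = M_son - M_father
    return math.comb(a, at) * shift ** (a - at) if at <= a else 0.0

def leaf_coeffs(points, weights, M, m):
    # Panel-level farfield coefficients: local moments sum_y w(y) * (y - M)^a, a = 0..m
    return [sum(w * (y - M) ** a for y, w in zip(points, weights)) for a in range(m + 1)]

def upward(sons, M_father, m_father):
    # Recursion (49): accumulate the sons' moments, re-centred at the father's centre
    J = [0.0] * (m_father + 1)
    for M_son, J_son in sons:
        for a in range(m_father + 1):
            for at in range(min(a, len(J_son) - 1) + 1):
                J[a] += kappa(a, at, M_son - M_father) * J_son[at]
    return J
```

With equal orders on father and sons the pass is exact: the father's moments computed by the recursion agree with the moments computed directly from all points, which is the property the consistency condition (44) encodes.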
3.4 Evaluation of a matrix vector multiplication

The computation of the quantities $A^{(\alpha)}_c[u]$ is straightforward. It will turn out that it is preferable to compute and store directly the quantities⁵

$$ B^{(\alpha)}_{c_1}[u] := \sum_{c_2\,:\,(c_1,c_2)\in F} \tilde A^{(\alpha)}_c[u] \qquad \forall c_1 \in T,\ \forall\alpha \in \kappa_{c_1} \tag{50} $$

with

$$ \tilde A^{(\alpha)}_c[u] := \begin{cases} A^{(\alpha)}_c[u] & \text{if } \alpha \in \kappa_c, \\ 0 & \text{if } \alpha \in \kappa_{c_1}\setminus\kappa_c, \end{cases} \qquad \forall c = (c_1,c_2) \in F. $$

We turn to the evaluation of the sum in (30). Since we replaced the nearfield matrix $N$ by zero, the sum in (30) consists only of the farfield evaluation:

$$ v_\tau = \sum_{c\in F}\ \sum_{\alpha\in\kappa_c} A^{(\alpha)}_c[u] \int_{c_1} \phi^{(c_1)}_\alpha(x)\, b_\tau(x)\, ds_x. \tag{51} $$

In view of (50), it is advantageous to rewrite this formula as

$$ v_\tau = \sum_{c_1\in T}\ \sum_{\alpha\in\kappa_{c_1}} B^{(\alpha)}_{c_1}[u] \int_{c_1} \phi^{(c_1)}_\alpha(x)\, b_\tau(x)\, ds_x. \tag{52} $$

In the next step, we will derive a hierarchical representation of this formula. The summation over $c_1 \in T\setminus\mathcal G$ in (52) can be split into a sum over the sons $\sigma(c_1)$. We obtain

$$ \sum_{\alpha\in\kappa_{c_1}} \int_{c_1} B^{(\alpha)}_{c_1}[u]\, \phi^{(c_1)}_\alpha(x)\, b_\tau(x)\, ds_x = \sum_{\tilde c_1\in\sigma(c_1)}\ \sum_{\alpha\in\kappa_{c_1}} \int_{\tilde c_1} B^{(\alpha)}_{c_1}[u]\, \phi^{(c_1)}_\alpha(x)\, b_\tau(x)\, ds_x. \tag{53} $$

On the other hand, the summation in (52) contains a partial sum over $\sigma(c_1)$ of the form

$$ \sum_{\tilde c_1\in\sigma(c_1)}\ \sum_{\tilde\alpha\in\kappa_{\tilde c_1}} \int_{\tilde c_1} B^{(\tilde\alpha)}_{\tilde c_1}[u]\, \phi^{(\tilde c_1)}_{\tilde\alpha}(x)\, b_\tau(x)\, ds_x. \tag{54} $$

In the next step, the right-hand side in (53) will be added to (54). Plugging (48) into (53) and re-organizing the terms shows that the sum in (53) equals

$$ \sum_{\tilde c_1\in\sigma(c_1)}\ \sum_{\tilde\alpha\in\kappa_{\tilde c_1}} \int_{\tilde c_1} R^{(\tilde\alpha)}_{\tilde c_1}[u]\, \phi^{(\tilde c_1)}_{\tilde\alpha}(x)\, b_\tau(x)\, ds_x $$

with

$$ R^{(\tilde\alpha)}_{\tilde c_1}[u] = \sum_{\alpha\in\kappa_{c_1}} \kappa^{c_1,\tilde c_1}_{\alpha,\tilde\alpha}\, B^{(\alpha)}_{c_1}[u]. $$

⁵Recall that, for $c = (c_1,c_2) \in F$, we have $m(c_1) \ge m(c)$ (cf. (35), (18)), implying $I_{m(c)} \subset I_{m(c_1)}$.

Hence, (53) and (54) can be added, resulting in

$$ \sum_{\tilde c_1\in\sigma(c_1)}\ \sum_{\tilde\alpha\in\kappa_{\tilde c_1}} \int_{\tilde c_1} \bigl( R^{(\tilde\alpha)}_{\tilde c_1}[u] + B^{(\tilde\alpha)}_{\tilde c_1}[u] \bigr)\, \phi^{(\tilde c_1)}_{\tilde\alpha}(x)\, b_\tau(x)\, ds_x. $$

Iterating this algorithm over the hierarchical structure of $T$ leads to Algorithm 35 for the evaluation of $v_\tau$. The tree levels $T(\ell)$ are as in Definition 10 and $B^{(\alpha)}_c[u]$ as in (50).

Algorithm 35: The procedure evaluate_sums shifts the expansions from the coarse levels to the finer ones and accumulates the coefficients on the finest level.

procedure evaluate_sums
begin
  for all $\alpha \in \kappa_\Gamma$ do $R^{(\alpha)}_\Gamma[u] := B^{(\alpha)}_\Gamma[u]$;
  for $\ell := 0$ to $L-1$ do
    for all $c \in T(\ell)$ do
      for all $\tilde c \in \sigma(c)$ do begin
        for all $\tilde\alpha \in \kappa_{\tilde c}$ do
          $R^{(\tilde\alpha)}_{\tilde c}[u] := B^{(\tilde\alpha)}_{\tilde c}[u] + \sum_{\alpha\in\kappa_c} \kappa^{c,\tilde c}_{\alpha,\tilde\alpha}\, R^{(\alpha)}_c[u]$
      end;
  for all $\tau \in \mathcal G$ do
    $v_\tau := \sum_{\alpha\in\kappa_\tau} R^{(\alpha)}_\tau[u]\; J^{I}_{\tau,\alpha}$
end
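The core step of Algorithm 35 can likewise be sketched in one dimension: the father's accumulated coefficients are re-expanded in each son's basis and added to the son's own contribution. Function names below are illustrative:

```python
import math

def kappa(a, at, shift):
    # Shift coefficient (45), one-dimensional analogue; shift = M_son - M_father
    return math.comb(a, at) * shift ** (a - at) if at <= a else 0.0

def push_down(B_father, M_father, M_son, B_son):
    # One step of Algorithm 35: R_son = B_son + father's coefficients re-expanded at M_son
    R = list(B_son)
    for at in range(len(R)):
        R[at] += sum(kappa(a, at, M_son - M_father) * B_father[a]
                     for a in range(at, len(B_father)))
    return R

def poly(coeffs, M, x):
    # Evaluate sum_a coeffs[a] * (x - M)^a
    return sum(c * (x - M) ** a for a, c in enumerate(coeffs))
```

Because the son's order is at least the father's (footnote 5), the shift is exact: evaluating the son's combined coefficients reproduces the sum of both expansions pointwise.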
4 Error analysis

4.1 Abstract error estimates

We have presented a variable order panel clustering algorithm based on block partitionings of $\Gamma\times\Gamma$ for the discretization of the second kind integral equation in (1). The discretization is based on piecewise constant finite element spaces. It is well known in the theory of boundary elements that the Galerkin solution to this problem converges as

$$ \|u - u_G\|_{0,\Gamma} \le C h \|f\|_{1,\Gamma} \tag{55} $$

provided $f \in H^1(\Gamma)$, where $\|\cdot\|_{1,\Gamma}$ denotes the $H^1$-norm. Let $\tilde u_G \in S^{-1,0}$ denote the solution if the integral operator in (3) is replaced by the panel clustering approximation. In this section, we will prove that, under the abstract Assumption 20, the solution $\tilde u_G$ exists and satisfies the error estimate (55), too, with a possibly larger constant $C$. In this section, the error estimates will be derived from abstract assumptions, while, in Sections 4.2 and 4.3, we will show that these assumptions are satisfied for shape regular, quasi-uniform meshes.
Definition 36: The uniformity of a mesh $\mathcal G$ is characterized by the smallest constant $C_u$ satisfying

$$ h_\tau \le C_u h \qquad \forall\tau\in\mathcal G, $$

where $h$ is as in (32) and $h_\tau := \operatorname{diam}\tau$.

Definition 37: The quality of panels is characterized by the smallest constant $C_q$ satisfying

$$ h_\tau^2 \le C_q\,|\tau| \qquad \forall\tau\in\mathcal G. $$

Remark 38: Since $\mathcal G$ only contains finitely many panels, the constants $C_u$, $C_q$ are always bounded. However, it will turn out that the constants in the estimates below behave critically with increasing values of $C_q$, $C_u$, and we assume here that $C_q$ and $C_u$ are of moderate size.

Assumption 39: The tree $T$ is balanced in the sense that all panels $\tau\in\mathcal G$ have the same depth in the tree:

$$ \operatorname{level}(\tau) = L \qquad \forall\tau\in\mathcal G. $$

Remark 40: By using the construction of Subsection 3.1, Assumption 39 is always guaranteed.
Remark 41: Definitions 13 and 14 imply that all blocks $c = (c_1,c_2)\in P^{(2)}$ consist of clusters of the same level:

$$ \operatorname{level}(c_1) = \operatorname{level}(c_2). $$

For $0\le\ell\le L$, we introduce the farfield levels $F(\ell)$ by

$$ F(\ell) = \{(c_1,c_2)\in F : \operatorname{level}(c_1) = \operatorname{level}(c_2) = \ell\}. $$

Then, the function $m : F\to\mathbb N_0$ as in Definition 16 only depends on the level $\ell$. For $c\in F(\ell)$, we have

$$ m(c) = a(L-\ell)+b. \tag{56} $$

The right-hand side in (56) defines a function $\tilde m : \mathbb N_0\to\mathbb N_0$. If there is no ambiguity we write again $m$ instead of $\tilde m$.

Assumption 42: There exist constants $C_6 < \infty$ and $1 < C_7 < \infty$ so that, for all $0\le\ell\le L$ and any $c\in T(\ell)$:

$$ C_7^{-1}\,2^{-\ell} \le \operatorname{diam} c \le 2\rho_c \le C_7\,2^{-\ell} \le C_6\,h\,2^{L-\ell}. $$

Assumption 43: The constant $a$ in (9) is chosen so that $a > 1$ and $2C_2^a =: C_8 < 1$ holds, with $C_2$ as in Assumption 20.

We need an assumption estimating, for $c_1\in T(\ell)$, the number of clusters $c_2$ forming a block $(c_1,c_2)$ in $F(\ell)$.

Assumption 44: There exist positive constants $C_9^I, C_9^{II} < \infty$ so that, for all $0\le\ell\le L$ and all $c\in T(\ell)$:

$$ \#\{c'\in F(\ell) : c'_1 = c\} \le C_9^I, \qquad \#\{c'\in F(\ell) : c'_2 = c\} \le C_9^{II}. $$

The nearfield matrix is replaced by zero. In order to estimate the arising error we need an assumption concerning the number of nearfield matrix entries.

Assumption 45: There exist positive constants $C_{10}^I, C_{10}^{II} < \infty$ so that, for all $0\le\ell\le L$ and all $\tau\in\mathcal G$:

$$ \#\{t\in\mathcal G : (\tau,t)\in N\} \le C_{10}^I, \qquad \#\{t\in\mathcal G : (t,\tau)\in N\} \le C_{10}^{II}, $$

with $N$ as in (8).
The error estimate of the Galerkin discretization including panel clustering is based on the second Strang lemma [1]. For $u, v \in S^{-1,0}$, let

$$ E := \bigl(v,\, K[u] - \tilde K[u]\bigr)_{0,\Gamma}, $$

where $\tilde K$ denotes the panel clustering approximation to $K$. In order to estimate $E$, we need an auxiliary result.

Lemma 46: Let Assumptions 20, 39, and 42 be satisfied. There exists a constant $C_{11} < \infty$ so that, for all $0\le\ell\le L$ and every $c\in F(\ell)$,

$$ \sqrt{|c_1|\,|c_2|}\; C_2^{m(\ell)}\, \operatorname{dist}^{-1}(c_1,c_2) \le C_{11}\, h\, C_8^{L-\ell}. \tag{57} $$

Proof: Recall that $0 \le C_2 < 1$. Let $c\in F(\ell)$. Without loss of generality we assume that

$$ \rho_{c_1} = \max\{\rho_{c_1}, \rho_{c_2}\}. $$

Hence,

$$ \sqrt{|c_1|\,|c_2|} \le C\,\rho_{c_1}^2, $$

where $C$ depends only on (the curvature of) the surface $\Gamma$. Using (7), Assumptions 42 and 39 we obtain

$$ \sqrt{|c_1|\,|c_2|}\; C_2^{m(\ell)}\, \operatorname{dist}^{-1}(c_1,c_2) \le C\,\eta\,\rho_{c_1}\, C_2^{m(\ell)} \le C\,C_6\, h\, 2^{L-\ell}\, C_2^{m(\ell)} $$

and, by employing Assumption 43,

$$ \sqrt{|c_1|\,|c_2|}\; C_2^{m(\ell)}\, \operatorname{dist}^{-1}(c_1,c_2) \le C\,C_6\, h\, (2C_2^a)^{L-\ell} = C\,C_6\, h\, C_8^{L-\ell}. $$
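The geometric factor $C_2^{m(\ell)}$ in (57) originates from the kernel expansion error bound (11). A one-dimensional analogue makes the decay visible: expanding $k(x,y) = 1/(x-y)$ in $y$ around a centre $M$ gives an error of size $(|y-M|/|x-M|)^{m+1}\,|x-y|^{-1}$, i.e., geometric decay in the expansion order. This is a toy model, not the three-dimensional kernel of the paper:

```python
def kernel(x, y):
    # Toy 1D kernel 1/(x - y), standing in for the singular kernel k(x, y)
    return 1.0 / (x - y)

def taylor_kernel(x, y, M, m):
    # Degree-m Taylor expansion of y -> 1/(x - y) around M:
    # 1/(x - y) = sum_{j >= 0} (y - M)^j / (x - M)^(j + 1)
    return sum((y - M) ** j / (x - M) ** (j + 1) for j in range(m + 1))

# errors for increasing expansion order on an admissible pair (x far from y)
errs = [abs(kernel(3.0, 0.4) - taylor_kernel(3.0, 0.4, 0.0, m)) for m in range(9)]
```

Here the ratio $|y-M|/|x-M| = 0.4/3$ plays the role of $C_2$: each extra order shrinks the error by that factor, which is what makes the level-dependent order $m(\ell)$ affordable.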
Lemma 47: Let Assumptions 20, 39, 42, 43, 44, and 45 be satisfied. There exists a constant $C$ so that, for all $u, v \in S^{-1,0}$:

$$ E \le C\, h\, \|u\|_{0,\Gamma}\, \|v\|_{0,\Gamma}. $$

Proof: We employ the splitting $E = E_1 + E_2$ with

$$ E_1 = \sum_{\ell=0}^{L}\ \sum_{c\in F(\ell)} \int_{c_1}\int_{c_2} u(x)\, v(y)\, \bigl(k(x,y) - k_c^{(m(\ell))}(x,y)\bigr)\, ds_y\, ds_x, $$

$$ E_2 = \sum_{(t,\tau)\in N} \int_{t}\int_{\tau} u(x)\, v(y)\, k(x,y)\, ds_y\, ds_x, $$

and estimate $E_1$, $E_2$ separately.

$$ E_1 \le \sum_{\ell=0}^{L}\ \sum_{c\in F(\ell)} \int_{c_1}\int_{c_2} |u(x)|\,|v(y)|\,\bigl|k(x,y) - k_c^{(m(\ell))}(x,y)\bigr|\, ds_y\, ds_x $$

$$ \overset{(11)}{\le} \sum_{\ell=0}^{L}\ \sum_{c\in F(\ell)} \sqrt{|c_1|\,|c_2|}\; C_1 C_2^{m(\ell)}\, \operatorname{dist}^{-1}(c_1,c_2)\, \|u\|_{0,c_1} \|v\|_{0,c_2} $$

$$ \overset{(57)}{\le} C_{11} C_1 h \sum_{\ell=0}^{L} C_8^{L-\ell} \sum_{c\in F(\ell)} \|u\|_{0,c_1} \|v\|_{0,c_2} $$

$$ \le C_{11} C_1 h \sum_{\ell=0}^{L} C_8^{L-\ell} \Bigl\{\sum_{c\in F(\ell)} \|u\|_{0,c_1}^2\Bigr\}^{1/2} \Bigl\{\sum_{c\in F(\ell)} \|v\|_{0,c_2}^2\Bigr\}^{1/2} $$

$$ \le C_{11} C_1 h \sum_{\ell=0}^{L} C_8^{L-\ell} \Bigl\{\sum_{c_1\in T(\ell)} \|u\|_{0,c_1}^2 \sum_{c_2 : c\in F(\ell)} 1\Bigr\}^{1/2} \Bigl\{\sum_{c_2\in T(\ell)} \|v\|_{0,c_2}^2 \sum_{c_1 : c\in F(\ell)} 1\Bigr\}^{1/2} $$

$$ \le C_{11} C_1 \sqrt{C_9^I C_9^{II}}\; h\, \|u\|_{0,\Gamma} \|v\|_{0,\Gamma} \sum_{\ell=0}^{L} C_8^{L-\ell} \le \frac{C_{11} C_1 \sqrt{C_9^I C_9^{II}}}{1 - C_8}\; h\, \|u\|_{0,\Gamma} \|v\|_{0,\Gamma}. $$
For the estimate of $E_2$ we begin with considering a single pair of panels $(t,\tau)\in N$:

$$ \Bigl|\int_t\int_\tau v(x)\, u(y)\, k(x,y)\, ds_y\, ds_x\Bigr| \le C_\Gamma\, \|v\|_{\infty,t}\, \|u\|_{\infty,\tau} \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x. \tag{58} $$

Since $u$ is constant on $t$ and $v$ on $\tau$, we get

$$ \|v\|_{\infty,t}\, \|u\|_{\infty,\tau} = \frac{\|v\|_{0,t}\, \|u\|_{0,\tau}}{\sqrt{|t|\,|\tau|}}. $$

We turn to the integral in (58). We distinguish two cases:

(a) $\operatorname{dist}(\tau,t) > 0$. The shape regularity and the quasi-uniformity of the meshes imply

$$ \operatorname{dist}(\tau,t) \ge C^{-1} h. $$

Hence,

$$ \|v\|_{\infty,t}\, \|u\|_{\infty,\tau} \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x \le C\, h^{-1} \sqrt{|t|\,|\tau|}\; \|v\|_{0,t}\, \|u\|_{0,\tau} \le C\, h\, \|v\|_{0,t}\, \|u\|_{0,\tau}. $$

(b) $\operatorname{dist}(\tau,t) = 0$. There exists a mapping $\chi : \mathbb R^2 \to \mathbb R^3$ which is sufficiently smooth, independent of $h$, along with a subset $U \subset \mathbb R^2$ with $\chi(U) = t\cup\tau$. Furthermore, we may assume that there exists a constant $C$ independent of $h$ so that $U$ is contained in a ball $B$ centred at the origin with radius $Ch$. Then,

$$ \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x \le C \int_B\int_B \|\chi(\xi) - \chi(\eta)\|^{-1}\, d\xi\, d\eta. $$

We introduce polar coordinates at $\eta$:

$$ \xi = \eta + r\,\omega(\varphi) $$

with $\omega(\varphi) = (\cos\varphi, \sin\varphi)^\top$. Hence,

$$ \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x \le C \int_B \int_0^{Ch}\int_0^{2\pi} r\, \|\chi(\eta) - \chi(\eta + r\omega)\|^{-1}\, d\varphi\, dr\, d\eta. $$

The quotient

$$ \frac{r}{\|\chi(\eta) - \chi(\eta + r\omega)\|} $$

stays bounded as $r\to0$ as a consequence of the regularity of $\Gamma$. Thus,

$$ \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x \le C \int_B \int_0^{Ch}\int_0^{2\pi} 1\, d\varphi\, dr\, d\eta \le C h^3 $$

and

$$ \|v\|_{\infty,t}\, \|u\|_{\infty,\tau} \int_t\int_\tau \|x-y\|^{-1}\, ds_y\, ds_x \le C h\, \|v\|_{0,t}\, \|u\|_{0,\tau}. $$

Summing all nearfield entries yields:

$$ E_2 \le C h \sum_{(t,\tau)\in N} \|v\|_{0,t}\, \|u\|_{0,\tau} \le C h \Bigl\{\sum_{(t,\tau)\in N} \|v\|_{0,t}^2\Bigr\}^{1/2} \Bigl\{\sum_{(t,\tau)\in N} \|u\|_{0,\tau}^2\Bigr\}^{1/2} $$

$$ \le C h \Bigl\{\sum_{t\in\mathcal G} \|v\|_{0,t}^2 \sum_{\tau:(t,\tau)\in N} 1\Bigr\}^{1/2} \Bigl\{\sum_{\tau\in\mathcal G} \|u\|_{0,\tau}^2 \sum_{t:(t,\tau)\in N} 1\Bigr\}^{1/2} \le C \sqrt{C_{10}^I C_{10}^{II}}\; h\, \|v\|_{0,\Gamma}\, \|u\|_{0,\Gamma}. $$
Theorem 48: Let the assumptions of Lemma 47 be satisfied and $h$ sufficiently small. Then, the solution $\tilde u_G$ to (3) with $K$ replaced by the panel clustering approximation exists for any $f\in L^2(\Gamma)$. If $f\in H^1(\Gamma)$, the error estimate

$$ \|u - \tilde u_G\|_{0,\Gamma} \le C h \|f\|_{1,\Gamma} $$

holds.

Proof: In view of Lemma 47 the result follows from [1, Lemma 4.1.1].
4.2 Verifying the assumptions on the cluster tree and the partitioning $P^{(2)}$

In this subsection, the abstract geometric Assumptions 42, 44, and 45 are proved for shape regular, quasi-uniform meshes. In [7], related results have been proved by formulating certain abstract assumptions (see [7, Criteria (B.1 d), (B.7)]) for the cluster tree. Then, it was shown that surface meshes being images of Cartesian grids in the parameter plane satisfy these assumptions. Our construction (procedure build_cluster_tree) applies to any quasi-uniform surface mesh, while the analysis is different from [7].

We impose two further geometric assumptions on the surface $\Gamma$: the first one is satisfied for all reasonable surfaces and the second one is imposed to reduce technicalities.

Notation 49: The three-dimensional ball (with respect to the maximum norm) centred at $x\in\mathbb R^3$ with radius $r$ is denoted by $B^\infty_r(x)$. For $r>0$, the $r$-neighbourhood of $\Gamma$ is

$$ U_r(\Gamma) := \bigl\{x\in\mathbb R^3 \mid \exists y\in\Gamma : \|x-y\|\le r\bigr\}. $$

Assumption 50: There exist positive constants $c_\Gamma$, $C_\Gamma$ so that, for all $x\in\Gamma$ and all $0 < r \le \operatorname{diam}\Gamma$:

$$ |B^\infty_r(x)\cap\Gamma| \ge c_\Gamma\, r^2, \qquad |U_r(\Gamma)| \le C_\Gamma\, r. $$

For all subsets $\omega\subset\Gamma$, the diameter $\operatorname{diam}\omega$ can be estimated by

$$ \operatorname{diam}\omega \ge \sqrt{c_\Gamma\,|\omega|}, $$

where $|\omega|$ denotes the two-dimensional surface measure of $\omega$.
Assumption 51: $\Gamma$ is a closed, simply connected surface.

Lemma 52: Let Assumptions 50 and 51 be satisfied. Let $\ell \le L - \lambda$ with $\lambda := \operatorname{lb}\bigl(16\sqrt3\,C_u\bigr)$. In procedure build_cluster_tree choose $c_{\min} \le \min\bigl\{\tfrac18,\ c_\Gamma/(32\sqrt2)\bigr\}$. Assume $c\in T(\ell)$ satisfies

$$ s_c < c_{\min}\,2^{-\ell}. $$

Then, there exists $\tilde c\in T(\ell)$ (with reference cluster $\tilde q := \operatorname{ref}(\tilde c)$) satisfying (cf. (26)):

$$ c \subset L^1_{\ell+1}(\tilde q), \qquad s_{\tilde c} \ge c_{\min}\,2^{-\ell}. $$

Proof: We may assume that $\ell > 0$ since, for $\ell = 0$, we have $c = \Gamma$ and $s_\Gamma = 1 \ge c_{\min}$. Let $c\in T(\ell)$ with $\ell \le L - \lambda$ ($\lambda$ as above). Put $q_c := \operatorname{ref}(c)$. Since $\Gamma$ is closed (cf. Assumption 51) there exists $\tau\in\mathcal G$ with $\bar\tau\cap\bar c \ne \emptyset$ and $\tau\cap c = \emptyset$. Hence, $\bar\tau\cap\partial q_c \ne \emptyset$, implying

$$ \operatorname{dist}(\partial q_c, c) \le h_\tau. $$

Choose $x\in\bar\tau\cap\bar c$ (implying $x\in\Gamma$). Hence, $c\subset B^\infty_{r_1}(x)$ (cf. Notation 49) with $r_1 = h_\tau + s_c$. Condition (35) and Definition 36 imply:

$$ r_1 \le 2\sqrt3\,C_u\,2^{-L} + c_{\min}\,2^{-\ell} = 2^{-\ell-2}\bigl(8\sqrt3\,C_u\,2^{\ell-L} + 4c_{\min}\bigr) \le 2^{-\ell-2}. \tag{59} $$

Now, consider $B^\infty_{r_2}(x)$ with $r_2 = 2^{-\ell-3}$ and put $\gamma = \Gamma\cap B^\infty_{r_2}(x)$. Denote

$$ \mathcal G(\gamma) := \{\tau\in\mathcal G : \tau\cap\gamma \ne \emptyset\}. $$

Hence, all approximate cluster centres of $\tau\in\mathcal G(\gamma)$ are contained in $B^\infty_{r_3}(x)$ with $r_3 = 2^{-\ell-3} + h_\tau$. Similarly to the estimate of $r_1$ in (59) one derives $r_3 \le 2^{-\ell-2}$. Define

$$ N(x) := \bigl\{\tilde q\in Q_\ell \mid B^\infty_{r_3}(x)\cap\tilde q \ne \emptyset\bigr\}. $$

Then, all $\tilde q\in N(x)$ have the property $c\subset L^1_{\ell+1}(\tilde q)$. Assumption 50 implies that

$$ \Bigl|\bigcup_{\tilde q\in N(x)} \operatorname{invref}(\tilde q)\Bigr| \ge |B^\infty_{r_2}(x)\cap\Gamma| \ge c_\Gamma\, r_2^2 $$

holds. Obviously, $\# N(x) \le 8$ and, hence, there exists $\tilde q\in N(x)$ with associated cluster $\tilde c := \operatorname{invref}(\tilde q)$ satisfying $|\tilde c| \ge r_2^2\, c_\Gamma/8$. By using Assumption 50 we obtain:

$$ s_{\tilde c} \ge \rho_{\tilde c} \ge \tfrac12\operatorname{diam}\tilde c \ge \tfrac12\sqrt{c_\Gamma\,|\tilde c|} \ge \tfrac12\sqrt{\frac{c_\Gamma^2\, r_2^2}{8}} = \frac{c_\Gamma}{4\sqrt2}\,2^{-\ell-3} = \frac{c_\Gamma}{32\sqrt2}\,2^{-\ell} \ge c_{\min}\,2^{-\ell}. $$

In view of this lemma we formulate the assumption concerning the choice of $c_{\min}$.

Assumption 53: The constant $c_{\min}$ in the procedure build_cluster_tree is chosen so that

$$ c_{\min} \le \min\Bigl\{\frac18,\ \frac{c_\Gamma}{32\sqrt2}\Bigr\}. $$
Lemma 54: Let the cluster tree be generated by the procedure build_cluster_tree. Let Assumptions 50, 51, and 53 be satisfied. Then, there exist constants $C_6 < \infty$ and $1 \le C_7 < \infty$ so that, for all $0\le\ell\le L$ and every $c\in T(\ell)$:

$$ C_7^{-1}\,2^{-\ell} \le \operatorname{diam} c \le 2\rho_c \le C_7\,2^{-\ell}, \tag{60} $$

$$ \operatorname{diam} c \le C_6\, h\, 2^{L-\ell}. \tag{61} $$

Proof: (a) The largest cluster $c = \Gamma$ clearly satisfies (60) and we may restrict to the case $c\ne\Gamma$.

(b) We first prove the left-hand side in (60). Lemma 52 implies that, for all $\ell \le L-\lambda$, the procedure build_cluster_tree guarantees

$$ s_c \ge c_{\min}\,2^{-\ell} \qquad \forall c\in T(\ell). $$

For the cluster radius we obtain

$$ \rho_c \ge \tfrac12\operatorname{diam} c \ge s_c/2 \ge \tfrac12\, c_{\min}\,2^{-\ell}. $$

For $\ell > L-\lambda$ and $c\in T(\ell)$ we obtain, with $\lambda$ as in (31),

$$ \rho_c \ge \frac{\sqrt3\, h_L}{2} = \frac{\sqrt3}{2}\,2^{\ell-L}\,2^{-\ell} \ge \frac{\sqrt3}{2}\,2^{-\lambda}\,2^{-\ell} = \frac{1}{32C_u}\,2^{-\ell}. $$

(c) Next, we will prove the right-hand side in (60). First, we will show that any $c\in T(\ell)$ is contained in

$$ \bigl(L^{dL}_L \circ L^1_{L-1} \circ L^1_{L-2} \circ \cdots \circ L^1_{\ell+1}\bigr)(q_c) $$

with $q_c := \operatorname{ref}(c)$ and $d = 2\sqrt3\,C_u$. The proof of this assertion is given by induction with respect to $\ell$.

$\ell = L$: Then, $T(L) = \mathcal G$. Choose $\tau\in T(L)$ and put $q := \operatorname{ref}(\tau)$. The side length of $q$ is $h_L$. Let $t\in\mathcal G$ be the panel with $\operatorname{ref}(t) = q$ (cf. (35)). Due to (35) we know

$$ h_\tau \le C_u\, h_t \le 2\sqrt3\, C_u\, h_L $$

and, hence, $\tau$ is contained in $L^{dL}(q)$ with $d = 2\sqrt3\,C_u$.

Assume that the assertion holds for $k = \ell+1, \ell+2, \dots, L$. The induction hypothesis implies that each $\tilde c\in T(\ell+1)$ is contained in

$$ \bigl(L^{dL}_L \circ L^1_{L-1} \circ L^1_{L-2} \circ \cdots \circ L^1_{\ell+2}\bigr)(\tilde q) $$

with $\tilde q := \operatorname{ref}(\tilde c)$. Since $c$ is either not "absorbed" (cf. procedure build_cluster_tree) or absorbed in a cluster $c'$ satisfying $c\subset L^1_{\ell+1}(q')$ with $q' := \operatorname{ref}(c')$, the cluster $c$ is contained in

$$ \bigl(L^{dL}_L \circ L^1_{L-1} \circ L^1_{L-2} \circ \cdots \circ L^1_{\ell+1}\bigr)(q). \tag{62} $$

(d) The right-hand side in (62) is a cube with side length

$$ h_q + 2\bigl(h_q 2^{-1} + h_q 2^{-2} + \cdots + h_q 2^{1+\ell-L} + d\, h_q 2^{\ell-L}\bigr) \le 4d\, h_q = 4d\,2^{-\ell}. \tag{63} $$

Hence, the cluster radius of any $c\in T(\ell)$ can be estimated by

$$ 2\rho_c \le \operatorname{diam} c \le 4\sqrt3\, d\, 2^{-\ell} \le \bigl(24C_u + 4\sqrt3\bigr)\,2^{-\ell} \tag{64} $$

and the assertion holds with $C_7 = 24C_u + 4\sqrt3$.

(e) The estimate (61) follows from (64) via

$$ \operatorname{diam} c \le C_7\,2^{-\ell} = C_7\, h_L\, 2^{L-\ell} \overset{(35)}{\le} C_7\,\frac{2}{\sqrt3}\, h\, 2^{L-\ell} =: C_6\, h\, 2^{L-\ell}. $$
Lemma 55: Let the assumptions of Lemma 54 be satisfied and the construction of $P^{(2)}$ be based on the strong admissibility condition. Then, there exist positive constants $C_9^I, C_9^{II} < \infty$ so that, for all $0\le\ell\le L$ and all $c\in T(\ell)$:

$$ \#\{c'\in F(\ell) : c = c'_1\} \le C_9^I, \qquad \#\{c'\in F(\ell) : c = c'_2\} \le C_9^{II}. \tag{65} $$

Proof: For $\ell = 0$, the left-hand sides in (65) are zero since $F(0) = \emptyset$ due to $T^{(2)}(0) = \{(\Gamma,\Gamma)\}$ and $(\Gamma,\Gamma)$ is non-admissible.

Let $\ell \ge 1$ and $\tilde c_1\in T(\ell)$. The father of $\tilde c_1$ is denoted by $c_1\in T(\ell-1)$ and characterized by $\tilde c_1\in\sigma(c_1)$. Let $\tilde c_2\in T(\ell)$ with $\tilde c = (\tilde c_1,\tilde c_2)\in F(\ell)$. The father of $\tilde c_2$ is denoted by $c_2$. Since the construction of $P^{(2)}$ was based on strong admissibility, we have

$$ \max\{\tilde\rho_{c_1},\tilde\rho_{c_2}\} > \eta\operatorname{dist}\bigl(\tilde B(c_1),\tilde B(c_2)\bigr), \qquad \max\{\tilde\rho_{\tilde c_1},\tilde\rho_{\tilde c_2}\} \le \eta\operatorname{dist}\bigl(\tilde B(\tilde c_1),\tilde B(\tilde c_2)\bigr). \tag{66} $$

Using (33) we get

$$ \operatorname{dist}\bigl(\tilde B(\tilde c_1),\tilde B(\tilde c_2)\bigr) \overset{(66)}{\le} \operatorname{dist}\bigl(\tilde B(c_1),\tilde B(c_2)\bigr) + 2\tilde\rho_{c_1} + 2\tilde\rho_{c_2} < (2 + 1/\eta)\,\bigl(\tilde\rho_{c_1} + \tilde\rho_{c_2}\bigr). $$

The approximate radius $\tilde\rho_{c_1}$ was defined as the Chebyshev radius of the minimal cube containing $c_1$. This cube is smaller than the box (62), yielding (by using (63))

$$ \tilde\rho_{c_1} \le \sqrt3\, s_{c_1} \le 4d\sqrt3\,2^{-\ell}. $$

This implies that, for fixed $\tilde c_1$, all strongly $\eta$-admissible clusters $\tilde c_2$ are contained in the ball $B(c_1)$ centred at the approximate cluster centre $\tilde z_{\tilde c_1}$ with radius

$$ \tilde\rho_{c_1} + (2 + 1/\eta)\bigl(\tilde\rho_{c_1} + \tilde\rho_{c_2}\bigr) + 2\tilde\rho_{c_2} \le \tilde C\,2^{-\ell}. $$

The number of clusters $\tilde c_2$ being strongly $\eta$-admissible with respect to $\tilde c_1$ is bounded by the number of cubes $q_\ell\in Q_\ell$ touching $B(c_1)$. Since $|B(c_1)| \le \frac43\pi\bigl(\tilde C\,2^{-\ell}\bigr)^3$ and $|q_\ell| = h_\ell^3$ with $h_\ell = 2^{-\ell}$, this number is bounded by a constant $C_9^I$ independent of $\ell$ and $c_1$. The estimate of $C_9^{II}$ is just a repetition of the arguments.
Lemma 56: Let the assumptions of Lemma 54 be satisfied and the construction of $P^{(2)}$ be based on the strong admissibility condition. Then, there exist positive constants $C_{10}^I, C_{10}^{II} < \infty$ so that, for all $0\le\ell\le L$ and all $\tau\in\mathcal G$:

$$ \#\{t\in\mathcal G : (\tau,t)\in N\} \le C_{10}^I, \qquad \#\{t\in\mathcal G : (t,\tau)\in N\} \le C_{10}^{II}. $$

Proof: Let $\tau\in\mathcal G$. Every $t\in\mathcal G$ with $(\tau,t)\in N$ satisfies

$$ \max\{\rho_t, \rho_\tau\} > \eta\operatorname{dist}(B_t, B_\tau), $$

since, on the panel level, the cluster balls and radii are computed exactly. Hence,

$$ \operatorname{dist}(t,\tau) \le \operatorname{dist}(B_t, B_\tau) + 2\rho_t + 2\rho_\tau \le \Bigl(\frac{1}{2\eta} + 4\Bigr) C_u\, h \le \Bigl(\frac{1}{2\eta} + 4\Bigr) C_u\,\sqrt3\, h_L. $$

Any $t\in\mathcal G$ with $(\tau,t)\in N$ is therefore contained in a ball $B(\tau)$ centred at the centre of $\tau$ with radius

$$ \Bigl(\frac{1}{2\eta} + 4\Bigr) C_u\,\sqrt3\, h_L + \rho_\tau + h_t \le \Bigl(\frac{1}{2\eta} + 6\Bigr) C_u\,\sqrt3\, h_L. $$

The number of those panels $t$ is bounded by the number of cubes $q\in Q_L$ touching $B(\tau)$. Since $|B(\tau)| = O(h_L^3)$ and $|q| = h_L^3$, this number is bounded independently of $\tau$ and $\ell$.
4.3 Verifying the approximation property of the expansion system

In this subsection, we will prove that the expansion system defined in Definition 32 satisfies Assumption 20 with $m(\ell)$ as in Definition 60. In order to reduce the technicalities in the proofs below we impose a weak assumption (Assumption 58) on the sizes of the sons of a cluster.

Definition 57: Let $c\in T(i)$ and $\tilde c\in T(j)$ with $j > i$ and $\tilde c\subset c$. The chain

$$ K_{\tilde c,c} = (c_j, c_{j-1}, c_{j-2}, \dots, c_i) $$

is given by the recursion:

$$ c_j = \tilde c, \qquad c_{k-1} \text{ determined by } \sigma(c_{k-1}) \ni c_k, \quad k = j, j-1, \dots, i+1. $$

Assumption 58: There exist positive constants $c_{12}, \omega < 1$ so that, for all clusters $c\in T$ and all sons $\tilde c\in\sigma(c)$, either $\tilde c = c$ or the ratio of the cluster radii satisfies:

$$ c_{12} \le \rho_{\tilde c}/\rho_c \le \omega. \tag{67} $$

For all clusters $c\in T$ and all panels $\tilde c\in\mathcal G$ with $\tilde c\subset c$, the number of repeated clusters in the chain $K_{\tilde c,c}$ is bounded:

$$ \sup_{c\in T}\ \sup_{\tilde c\in\mathcal G,\,\tilde c\subset c} n_{\tilde c,c} \le n_\omega \tag{68} $$

with

$$ n_{\tilde c,c} := \#\bigl\{c'\in K_{\tilde c,c} \mid \#\sigma(c') = 1\bigr\}. \tag{69} $$
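Definition 57 and the count (69) are straightforward to realize on an explicitly stored tree. The sketch below (illustrative names only) climbs from a panel to an ancestor via the father map and counts the chain members that are copied unchanged, i.e., have exactly one son:

```python
def chain(father, start, ancestor):
    # K_{c~,c} from Definition 57: climb from c~ up to c via the father map
    nodes = [start]
    while nodes[-1] != ancestor:
        nodes.append(father[nodes[-1]])
    return nodes

def n_repeated(nodes, sons):
    # n_{c~,c} from (69): chain members with exactly one son (panels have no sons here)
    return sum(1 for c in nodes if len(sons.get(c, [])) == 1)
```

Assumption 58 then asks that this count stay uniformly bounded over all panel-to-cluster chains in the tree.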
Definition 59: Let $\omega\subset\mathbb R^d$ with centre $M_\omega$. The Taylor operator $T^{(m)}_\omega$ is given formally by

$$ T^{(m)}_\omega[f](x) = \sum_{|\alpha|\le m} \lambda^\omega_\alpha[f]\, \omega^{(\omega)}_\alpha(x) \tag{70} $$

with

$$ \lambda^\omega_\alpha[f] = \frac{1}{\alpha!}\, f^{(\alpha)}(M_\omega) $$

and

$$ \omega^{(\omega)}_\alpha(x) = (x - M_\omega)^\alpha. $$

The auxiliary functions $\vec\omega^{(\omega)}_\alpha$ are defined by

$$ \vec\omega^{(\omega)}_\alpha(y) := \omega^{(\omega)}_\alpha(y)\, n(y). $$

The expansion functions $\phi^{(c)}_\alpha$ and $\vec\phi^{(c)}_\alpha$ (cf. Definition 32) can be regarded as approximations to the functions $\omega^{(c)}_\alpha$ and $\vec\omega^{(c)}_\alpha$. The precision is the concern of Lemma 61. The normal moments of the Taylor polynomials are denoted by

$$ N^{(\alpha)}_c := \sum_{i=1}^{3} n_i(y)\, \omega^{(c)}_{\alpha+e_i}(y) = \langle n,\, y - M_c\rangle\, \omega^{(c)}_\alpha, \tag{71} $$

while the analogous quantity for the true expansion system is defined by

$$ \mathcal N^{(\alpha)}_c := \sum_{i=1}^{3} n_i(y)\, \phi^{(c)}_{\alpha+e_i}(y). \tag{72} $$

It remains to define the constants $a, b$ for the function $m(\ell) = a(L-\ell)+b$ determining the degree of approximation on a block.

Definition 60: Let Assumption 20 be satisfied. For $0\le\ell\le L$, the function $m(\ell)$ determining the variable order of approximation is given by

$$ m(\ell) = a(L-\ell)+b \tag{73} $$

with $a, b\in\mathbb N_0$ chosen so that

$$ a > 1 \quad\text{and}\quad 2C_2^a =: C_8 < 1 \tag{74} $$

(cf. Remark 65) and

$$ b \ge \max\Bigl\{ \frac{\log\delta_4}{|\log\omega|},\ 1 + \frac{a\log4}{|\log\omega|},\ 1 + \frac{a\log4 - \log2}{|\log\omega|} \Bigr\} \tag{75} $$

with $\delta_4 > 8$.

Note that the conditions on $a$ and $b$ stem from the proof of the approximation property and we expect them to be by far too restrictive. In a forthcoming paper, the results of numerical experiments will be presented dealing with the optimal choice of $a$, $b$, $\omega$ for practical problems.
Lemma 61: Let Assumptions 20, 42, 58, 50, 51, and 53 be satisfied. Let $\delta_4 \ge c_{12}^{-1}$ (cf. (67)) and let $\omega_\star$ and $\omega_2$ be sufficiently large constants depending only on $a$, $\omega$, $\delta_4$, $C_7$, and $C_\Gamma$. Then, for all $0\le\ell\le L$ and all $c\in T(\ell)$, the estimates

$$ \bigl\|\omega^{(c)}_\alpha - \phi^{(c)}_\alpha\bigr\|_{L^\infty(c)} \le \delta_4^{a n_\omega - m_\ell}\,(\omega_\star\rho_\ell)^{|\alpha|} \qquad \forall\alpha : |\alpha| \le m(\ell), \tag{76} $$

$$ \bigl\|N^{(\alpha)}_c - \mathcal N^{(\alpha)}_c\bigr\|_{L^\infty(c)} \le \frac{C_\Gamma\,\delta_4^{a n_\omega - m_\ell}}{1 - \delta_4^{-a}}\,\bigl(\omega_2\,2^{-\ell}\bigr)^{|\alpha|+2} \qquad \forall\alpha : |\alpha| \le m(\ell)-1 \tag{77} $$

hold, with $n_\omega$ as in (68).

Since the proof of this lemma is rather technical it is postponed to the Appendix.
The approximation of the kernel function on a block $c = (c_1,c_2)\in P^{(2)}(\ell)$ is given by

$$ k(x,y) \approx \tilde k^{(m)}_c(x,y) := \sum_{(\alpha,\beta)\in\tilde I_m} \vec\omega^{(m)}_{\alpha,\beta}(c)\, \phi^{(c_1)}_\alpha(x)\, \vec\phi^{(c_2)}_\beta(y) \tag{78} $$

with $m = m(\ell)$ and $\vec\omega^{(m)}_{\alpha,\beta}(c)$ as in (22) and $\phi^{(c_1)}_\alpha$, $\vec\phi^{(c_2)}_\beta$ as in Definition 32. The error analysis consists of a consistency and a stability part.

For the error analysis, it is preferable to write the Taylor approximation according to (22) in a different form (with $n = n(y)$)⁶:

$$ k^{(m)}_c(x,y) = \sum_{i=1}^{3} \sum_{|\alpha|+|\beta|\le m-1} n_i\, \omega^{(c_1)}_{\alpha+e_i}(x)\, \omega^{(c_2)}_\beta(y)\, \mu^{(m)}_{\alpha,\beta}(c) $$

$$ \qquad - \sum_{i=1}^{3} \sum_{|\alpha|+|\beta|\le m-1} \omega^{(c_1)}_\alpha(x)\, n_i\, \omega^{(c_2)}_{\beta+e_i}(y)\, \mu^{(m)}_{\alpha,\beta}(c) \tag{79} $$

$$ \qquad + \langle n, z_c\rangle \sum_{|\alpha|+|\beta|\le m-1} \omega^{(c_2)}_\beta(y)\, \omega^{(c_1)}_\alpha(x)\, \mu^{(m)}_{\alpha,\beta}(c) $$

$$ =: \sum_{s\in\{I,II,III\}} k^{(m)}_{c,s}(x,y) \tag{80} $$

with

$$ \mu^{(m)}_{\alpha,\beta}(c) := \frac{(-1)^{|\beta|}}{\alpha!\,\beta!}\, k_3^{(\alpha+\beta)}(z_c) \tag{81} $$

and $\omega_\alpha$, $\vec\omega_\beta$ as in Definition 59.

⁶This expansion is derived by re-substituting $z - z_c = (x - M_{c_1}) - (y - M_{c_2})$ in (21), writing $\langle n, z\rangle = \langle n, x - M_{c_1}\rangle - \langle n, y - M_{c_2}\rangle + \langle n, z_c\rangle$, and re-organizing the sums and products.

Proposition 62: The approximation $\tilde k^{(m)}_c$ can be written in the form (79) by replacing the Taylor polynomials $\omega^{(c_\iota)}_\alpha$ by the hierarchical approximations $\phi^{(c_\iota)}_\alpha$.

Proof: By definition, $\tilde k^{(m)}_c$ has the representation

$$ \tilde k^{(m)}_c(x,y) := \sum_{i=1}^{3} \sum_{|\alpha|+|\beta|\le m} \phi^{(c_2)}_\beta(y)\, \phi^{(c_1)}_\alpha(x)\, n_i\,\alpha_i\, \frac{(-1)^{|\beta|}}{\alpha!\,\beta!}\, k_3^{(\alpha+\beta-e_i)}(z_c) $$

$$ \quad + \sum_{i=1}^{3} \sum_{|\alpha|+|\beta|\le m} \phi^{(c_2)}_\beta(y)\, \phi^{(c_1)}_\alpha(x)\, n_i\,\beta_i\, \frac{(-1)^{|\beta|}}{\alpha!\,\beta!}\, k_3^{(\alpha+\beta-e_i)}(z_c) $$

$$ \quad + \langle n, z_c\rangle \sum_{|\alpha|+|\beta|<m} \phi^{(c_2)}_\beta(y)\, \phi^{(c_1)}_\alpha(x)\, \frac{(-1)^{|\beta|}}{\alpha!\,\beta!}\, k_3^{(\alpha+\beta)}(z_c) =: \sum_{s\in\{I,II,III\}} \tilde k^{(m)}_{c,s}(x,y). \tag{82} $$

Performing the same index manipulations as for the derivation of (22) yields the assertion.
For the estimate of the approximation error, we employ the splitting:

$$ e_c(x,y) := k(x,y) - \tilde k^{(m)}_c(x,y) = \underbrace{k(x,y) - k^{(m)}_c(x,y)}_{=:\,e^I_c(x,y)} + \underbrace{k^{(m)}_c(x,y) - \tilde k^{(m)}_c(x,y)}_{=:\,e^{II}_c(x,y)}. \tag{83} $$

The estimate of $e^I_c(x,y)$ directly follows from Lemma 21 and we proceed with considering $e^{II}_c(x,y)$. By employing (79), (80), and (82) the difference $k^{(m)}_c - \tilde k^{(m)}_c$ can be split into three parts:

$$ e^{II}_c = k^{(m)}_c - \tilde k^{(m)}_c = \sum_{s\in\{I,II,III\}} \bigl(k^{(m)}_{c,s} - \tilde k^{(m)}_{c,s}\bigr) =: \sum_{s\in\{III,IV,V\}} e^s_c. \tag{84} $$

We work out the details only for the case $e^{III}_c$, while the estimate of the errors $e^{IV}_c$, $e^{V}_c$ is just a repetition of the arguments:

$$ e^{III}_c(x,y) = \sum_{|\alpha|+|\beta|<m} \bigl(\omega^{(c_1)}_\alpha(x) - \phi^{(c_1)}_\alpha(x)\bigr) \Bigl(\sum_{i=1}^{3} n_i\,\omega^{(c_2)}_{\beta+e_i}(y)\Bigr)\, \mu^{(m)}_{\alpha,\beta}(c) $$

$$ \quad + \sum_{|\alpha|+|\beta|<m} \phi^{(c_1)}_\alpha(x) \Bigl(\sum_{i=1}^{3} n_i\bigl(\omega^{(c_2)}_{\beta+e_i}(y) - \phi^{(c_2)}_{\beta+e_i}(y)\bigr)\Bigr)\, \mu^{(m)}_{\alpha,\beta}(c) $$

$$ =: e^{VI}_c(x,y) + e^{VII}_c(x,y). \tag{85} $$

In order to estimate $e^{VI}_c$, $e^{VII}_c$ we need an auxiliary result estimating the size of $\mu^{(m)}_{\alpha,\beta}(c)$.
Lemma 63: Let $c = (c_1,c_2)$ be $\eta$-admissible. Then,

$$ \bigl|\mu^{(m)}_{\alpha,\beta}(c)\bigr| \le 27\,\frac{(\alpha+\beta)!}{\alpha!\,\beta!}\, \Bigl(\frac{4}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+3}. $$

Proof: We start with estimating the derivatives of the function $k_3 : d_c\to\mathbb R^3$ as in (20). Note that all $z\in d_c$ satisfy $\|z\| \ge \operatorname{dist}(c_1,c_2)$. For any $w\in\mathbb C^3$ with $\|w\|_\infty \le \|z_c\|/(2\sqrt3)$ (with $z_c = M_{c_1} - M_{c_2}$), we have

$$ \|z_c + w\| \ge \|z_c\| - \|w\| \ge \|z_c\| - \sqrt3\,\|w\|_\infty \ge \|z_c\|/2 \ge \tfrac12\operatorname{dist}(c_1,c_2). $$

Hence, the function

$$ g_3(w) := \|z_c + w\|^{-3} $$

is holomorphic with respect to each component in $B_{r_1}(0)$, i.e., in the ball in the complex plane centred at the origin with radius $r_1 := \|z_c\|/(2\sqrt3)$. Applying Cauchy's integral formula in each component results in

$$ \frac{1}{\nu!}\, g_3^{(\nu)}(w) = \frac{1}{(2\pi i)^3} \oint_{|v_1|=r}\oint_{|v_2|=r}\oint_{|v_3|=r} \frac{g_3(v)}{(v-w)^{\nu+\mathbf 1}}\, dv $$

with $\mathbf 1 = (1,1,1)^\top$ and $r = 3\|z_c\|/4$. The function $g_3$ can be estimated by

$$ |g_3(v)| \le \bigl(\tfrac14\|z_c\|\bigr)^{-3}. $$

The denominator satisfies

$$ \prod_{i=1}^{3} \bigl|(v-w)_i\bigr|^{\nu_i+1} \ge \prod_{i=1}^{3} \bigl(|v_i| - |w_i|\bigr)^{\nu_i+1} \ge \bigl(\tfrac14\|z_c\|\bigr)^{|\nu|+3}, $$

and the length of a single integral path is $2\pi r = 3\pi\|z_c\|/2$. Hence,

$$ \Bigl|\frac{1}{\nu!}\, g_3^{(\nu)}(w)\Bigr| \le \frac{1}{(2\pi)^3}\,\Bigl(\frac{3\pi\|z_c\|}{2}\Bigr)^3\, \bigl(\tfrac14\|z_c\|\bigr)^{-3}\, \bigl(\tfrac14\|z_c\|\bigr)^{-(|\nu|+3)}\,\|z_c\|^{|\nu|+3}\cdot\|z_c\|^{-(|\nu|+3)} = 27\,\Bigl(\frac{4}{\|z_c\|}\Bigr)^{|\nu|+3}. $$

The connection between $g_3$ and $k_3$ is given by $k_3^{(\nu)}(z_c) = g_3^{(\nu)}(0)$ and, hence,

$$ \bigl|k_3^{(\nu)}(z_c)\bigr| = \bigl|g_3^{(\nu)}(0)\bigr| \le 27\,\nu!\,\Bigl(\frac{4}{\|z_c\|}\Bigr)^{|\nu|+3}. $$

In view of (81), we obtain the assertion:

$$ \bigl|\mu^{(m)}_{\alpha,\beta}(c)\bigr| \le 27\,\frac{(\alpha+\beta)!}{\alpha!\,\beta!}\,\Bigl(\frac{4}{\|z_c\|}\Bigr)^{|\alpha+\beta|+3} \le 27\,\frac{(\alpha+\beta)!}{\alpha!\,\beta!}\,\Bigl(\frac{4}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+3}. $$
Theorem 64: Let $\tilde I_m$, $I_m$, $I^{II}_m$ and the expansion systems $\phi^{(c)}_\alpha$, $\vec\phi^{(c)}_\beta$ be chosen as in Definition 32 and the distribution of the expansion order as in Definition 60. Let the assumptions of Lemma 61 be satisfied. Then, there exists $\eta$ depending only on $C_6$, $C_7$, $\delta_4$, $a$, $b$, $c_{12}$, $\omega$ so that the expansion (78) satisfies Assumption 20.

Proof: In view of (83) along with Lemma 21 we may restrict to the estimate of $e^{II}_c$. As before, we work out the proof only for the partial error $e^{III}_c$ in (84), while the estimate of $e^{IV}_c$, $e^{V}_c$ is just a repetition of the arguments. Hence, it is sufficient to estimate the errors $e^{VI}_c$ and $e^{VII}_c$ (see (85)). Using Lemmata 61 and 63 along with

$$ \Bigl|\sum_{i=1}^{3} n_i\,\omega^{(c)}_{\beta+e_i}(y)\Bigr| = \bigl|\langle n(y),\, y - M_c\rangle\,(y - M_c)^\beta\bigr| \le C_\Gamma\,\rho_\ell^{|\beta|+2} \qquad \forall y\in c,\ c\in T(\ell), $$

we obtain (putting $m_\ell = m(\ell)$):

$$ \bigl|e^{VI}_c(x,y)\bigr| \le 27\,C_\Gamma\,\delta_4^{a n_\omega - m_\ell} \sum_{|\alpha|+|\beta|<m_\ell} (\omega_\star\rho_\ell)^{|\alpha|}\,\rho_\ell^{|\beta|+2}\,\frac{(\alpha+\beta)!}{\alpha!\,\beta!}\,\Bigl(\frac{4}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+3} $$

$$ \le \tilde C\,\operatorname{dist}^{-1}(c_1,c_2)\,\delta_4^{-m_\ell} \sum_{|\alpha|+|\beta|<m_\ell} \binom{\alpha+\beta}{\alpha}\,\Bigl(\frac{4\omega_\star\rho_\ell}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+2} $$

with $\tilde C = 4\cdot27\,C_\Gamma\,\delta_4^{a n_\omega}$. By using $\rho_\ell \le \eta\operatorname{dist}(c_1,c_2)$ and choosing $\eta < (4\omega_\star)^{-1}$ we get

$$ \sum_{|\alpha|+|\beta|<m_\ell} \binom{\alpha+\beta}{\alpha}\,\Bigl(\frac{4\omega_\star\rho_\ell}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+2} \le \sum_{|\alpha|+|\beta|<m_\ell} \binom{\alpha+\beta}{\alpha} \le 2^{3m_\ell} = 8^{m_\ell}. $$

Thus,

$$ \bigl|e^{VI}_c(x,y)\bigr| \le \tilde C\,\operatorname{dist}^{-1}(c_1,c_2)\,\bigl(8\,\delta_4^{-1}\bigr)^{m_\ell}. \tag{86} $$

By choosing $\delta_4 > 8$, we have proven an estimate of the form (11).

It remains to estimate $e^{VII}_c$. The norm of the expansion functions $\phi^{(c_1)}_\alpha$ can be estimated by

$$ \bigl\|\phi^{(c_1)}_\alpha\bigr\|_{L^\infty(c_1)} \le \bigl\|\omega^{(c_1)}_\alpha\bigr\|_{L^\infty(c_1)} + \bigl\|\omega^{(c_1)}_\alpha - \phi^{(c_1)}_\alpha\bigr\|_{L^\infty(c_1)} \le \rho_\ell^{|\alpha|} + \delta_4^{a n_\omega - m_\ell}\,(\omega_\star\rho_\ell)^{|\alpha|} \le \rho_\ell^{|\alpha|}\bigl(1 + \delta_4^{a n_\omega}\omega_\star\bigr)^{|\alpha|} =: (\tilde\omega\rho_\ell)^{|\alpha|} \tag{87} $$

with suitable $\tilde\omega$. Hence, by using (85) in combination with (87), (77), Lemma 63, and Assumption 42, we get

$$ \bigl|e^{VII}_c(x,y)\bigr| \le \frac{27\cdot4\,C_\Gamma\,\delta_4^{a n_\omega}}{1-\delta_4^{-a}}\,\delta_4^{-m_\ell} \sum_{|\alpha|+|\beta|<m_\ell} (\tilde\omega\rho_\ell)^{|\alpha|}\,\bigl(\omega_2 2^{-\ell}\bigr)^{|\beta|+2}\,\frac{(\alpha+\beta)!}{\alpha!\,\beta!}\,\Bigl(\frac{4}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+2} $$

$$ \le C\,\delta_4^{-m_\ell}\,\operatorname{dist}^{-1}(c_1,c_2) \sum_{|\alpha|+|\beta|<m_\ell} \frac{(\alpha+\beta)!}{\alpha!\,\beta!}\,\Bigl(\frac{4 M_\omega\,2^{-\ell}}{\operatorname{dist}(c_1,c_2)}\Bigr)^{|\alpha+\beta|+2}, $$

where $M_\omega = \max\{C_7\tilde\omega,\,\omega_2\}$. Assumption 42 implies $2^{-\ell} \le C_7\rho_\ell$ and, hence, the rest of the estimate is just a repetition of the arguments used for proving (86): for sufficiently small $\eta$, an estimate of the form

$$ \bigl|e^{III}_c(x,y)\bigr| \le C\tilde C\,\bigl(C_2^{III}\bigr)^{m_\ell}\,\operatorname{dist}^{-1}(c_1,c_2) $$

holds with $C_2^{III} < 1$. Similarly, the error contributions $e^{IV}_c$ and $e^{V}_c$ (cf. (84)) can be estimated.

Remark 65: For the approximation $\tilde k^{(m)}_c(x,y)$ as in (78), the constant $C_2$ in Assumption 20 is independent of $a$ in (56) (cf. (86)). Hence, the constant $a$ can be chosen so that

$$ 2C_2^a =: C_8 < 1. $$
5 Complexity analysis

In this section, we will prove that, for quasi-uniform and shape regular meshes, the storage amount and complexity of the variable order panel clustering method depend only linearly on the number of unknowns, without any logarithmic terms. The key role in these proofs is played by sharp estimates on the number of blocks contained in the farfield levels $F(\ell)$ and in the nearfield $N$. Let $n$ denote the number of panels, i.e., $n = \#\mathcal G = \dim S^{-1,0}$.

Lemma 66: Let Assumption 50 be satisfied. There exist positive constants $C_{13}$, $C_{14}$ so that, for all $0\le\ell\le L$, the number of nearfield and farfield blocks is bounded by

$$ \#F(\ell) \le C_{13}\,4^\ell, \tag{88} $$

$$ \#N \le C_{14}\,n. \tag{89} $$

Proof: First, we prove (88). By construction (cf. procedure build_cluster_tree) the cluster tree $T$ is balanced, implying $F(\ell) \subset T(\ell)\times T(\ell)$. In view of Lemma 55 and Lemma 56 it is sufficient to prove that there exists a constant $C_{15}$ so that, for all $0\le\ell\le L$,

$$ \#T(\ell) \le C_{15}\,4^\ell. $$

All cluster centres $M_\tau$ of panels $\tau\in\mathcal G$ are contained in an $h$-neighbourhood $U_h(\Gamma)$ of $\Gamma$, which was already introduced in Notation 49. The number of clusters contained in $T(\ell)$ is bounded from above by the number of cubes $q\in Q_\ell$ satisfying

$$ U_h(\Gamma)\cap q \ne \emptyset. \tag{90} $$

All cubes with this property are contained in $U_{h+d}(\Gamma)$ with $d = \sqrt3\,2^{-\ell}$. Due to the quasi-uniformity of the grid, there exists $C < \infty$ so that $h + d \le C\,2^{-\ell}$. Hence, all cubes with property (90) are contained in $U_{C2^{-\ell}}(\Gamma)$. Due to Assumption 50 we have

$$ \bigl|U_{C2^{-\ell}}(\Gamma)\bigr| \le C_\Gamma\,C\,2^{-\ell}. $$

The volume of $q$ is $2^{-3\ell}$ and, hence, the number of such cubes is bounded from above by

$$ \frac{|U_{C2^{-\ell}}(\Gamma)|}{2^{-3\ell}} \le C_\Gamma\,C\,4^\ell. $$

It remains to prove (89). The estimate follows directly from $\#\mathcal G = n$ and Lemma 56.

The depth of the cluster tree is the concern of the next lemma.

Lemma 67: Let Assumptions 50, 51, and 53 be satisfied and the cluster tree constructed by the procedure build_cluster_tree. Then,

$$ 4^L \le \frac{12\,C_u^2}{|\Gamma|}\,n. $$

Proof: Condition (35) implies $h \le 2\sqrt3\,C_u\,h_L$, i.e., $4^L = h_L^{-2} \le 12\,C_u^2\,h^{-2}$. Taking into account Definitions 36 and 37 along with

$$ |\Gamma| = \sum_{\tau\in\mathcal G} |\tau| \le n\,h^2, $$

we obtain

$$ 4^L \le 12\,C_u^2\,h^{-2} \le \frac{12\,C_u^2}{|\Gamma|}\,n. $$
The following lemma estimates the amount of work per tree and farfield level. It has auxiliary character and will be used in the complexity estimates below.

Lemma 68: Let $a, b, s \ge 0$. Then,

$$ \sum_{\ell=0}^{L} \bigl(a(L-\ell)+b\bigr)^s\, 4^\ell \le 2\,\Bigl(\frac{s\,(a+b)}{\ln2}\Bigr)^s\, 4^L. $$

Proof: Simple analysis yields:

$$ \sum_{\ell=0}^{L} \bigl(a(L-\ell)+b\bigr)^s\, 4^\ell \le (a+b)^s \sum_{\ell=0}^{L} (L-\ell+1)^s\, 4^\ell = (a+b)^s\, 4^L \sum_{\ell=0}^{L} (\ell+1)^s\, 4^{-\ell} $$

$$ \le \Bigl(\frac{s}{\ln2}\Bigr)^s (a+b)^s\, 4^L \sum_{\ell=0}^{L} 2^\ell\, 4^{-\ell} \le 2\,\Bigl(\frac{s}{\ln2}\Bigr)^s (a+b)^s\, 4^L. $$
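The bound of Lemma 68 is easy to confirm numerically; the script below compares both sides for the exponent $s = 7$ that appears in step (b) of the complexity analysis. This is a sanity check, not part of the proof:

```python
import math

def level_sum(a, b, s, L):
    # Left-hand side of Lemma 68: sum over tree levels of (a*(L-l)+b)^s * 4^l
    return sum((a * (L - l) + b) ** s * 4 ** l for l in range(L + 1))

def lemma68_bound(a, b, s, L):
    # Right-hand side: 2 * (s * (a + b) / ln 2)^s * 4^L, valid for s >= 1
    return 2 * (s * (a + b) / math.log(2)) ** s * 4 ** L
```

The essential point is that the sum is dominated by its finest levels, so the total is proportional to $4^L$, i.e., to the number of panels.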
In the sequel, we will estimate the number of operations in the single steps of the variable order panel clustering algorithm.

(a) Procedure build_cluster_tree.
Clearly, the complexity of the procedure is proportional to the number of elements in the cluster tree:

$$ \#T \le \sum_{\ell=0}^{L} \#T(\ell) \le C_{15} \sum_{\ell=0}^{L} 4^\ell \le \frac43\, C_{15}\, 4^L \le \frac{16\,C_u^2\,C_{15}}{|\Gamma|}\,n. $$

(b) Computation of the expansion coefficients.
In [8], [15], [9], algorithms are presented where the computation of

$$ \vec\omega^{(m)}_{\alpha,\beta}(c), \qquad (\alpha,\beta)\in\tilde I_m, $$

can be performed in $O(m^7)$ operations. Hence, the computation for all coefficients and all farfield blocks costs (cf. (88) and Lemma 68)

$$ \sum_{\ell=0}^{L} m^7(\ell)\,\#F(\ell) \le C_{13} \sum_{\ell=0}^{L} \bigl(a(L-\ell)+b\bigr)^7\, 4^\ell \le C\,n, \tag{91} $$

where $C$ only depends on $a$, $b$, $C_{13}$, $C_u$, and $|\Gamma|$.

(c) Computation of the farfield coefficients.
The computation of all farfield coefficients $J^{I}_{\tau,\alpha}$ and $J^{II}_{\tau,\beta}$ for the panels $\tau\in\mathcal G$ and $\alpha\in I_{m(L)}$, $\beta\in I^{II}_{m(L)}$ is proportional to $n$ due to $m(L) = b$. The procedure build_cluster_tree implies that the number of sons of a cluster is bounded from above by⁷ 64. Thus, the evaluation of the recursion (49) costs $O(1)$ operations per farfield coefficient. Similar computations as in (91) yield that the number of operations is proportional to $n$.

(d) Computation of the recursion coefficients $\kappa^{c,\tilde c}_{\alpha,\tilde\alpha}$ in (48). From (45), it follows directly that the amount of computational work per coefficient is $O(1)$, while the total number of coefficients is bounded by $O(n)$.

(e) Evaluation of a matrix vector multiplication.
By similar considerations, one obtains that the evaluation of a matrix vector multiplication, i.e., Algorithm 35, costs $O(n)$ operations.

(f) Storage amount.
By using the same technique as for the computational complexity one can prove that the amount of memory for storing the quantities $\mathcal G$, $T$, $F$, $m(c)$, $J^{I}_c$, $J^{II}_c$, and $\kappa^{c,\tilde c}_{\alpha,\tilde\alpha}$ (as in (45)) is proportional to $n$.

⁷Let $c\in T(\ell)$ with reference cube $q = \operatorname{ref}(c)\in Q_\ell$. The possible sons are the pullbacks of all cubes $\tilde q\in Q_{\ell+1}$ with $\bar{\tilde q}\cap\bar q \ne \emptyset$. The number of those cubes is 64.

Theorem 69: Let Assumptions 50, 51, and 53 be satisfied and the cluster tree constructed by the procedure build_cluster_tree. The variable order panel clustering algorithm has linear complexity with respect to the computing time and the memory consumption.
6 Appendix
In this appendix, we prove Lemma 61 and some auxiliary estimates. We adopt the
notation of the proof of Lemma 61.
Proof of Lemma 61:
Let m~ = m (` + 1). We dene an intermediate approximation to (c) by restricting
the sum in (44) to ~ 2 c~:
~ (c~) :=
X
~2 c~
ec~ (~c~)
8c~ 2 (c)
with ~c~ as in (45). On c~, the representation
(c) ; (c) = (c) ; ~ (c~) +
X
~2 c~
~c~ (~c~) ; (~c~)
7) Let c \in T^{(\ell)} with reference cube q = ref(c) \in Q_\ell. The possible sons are the pullbacks of all cubes \tilde q \in Q_{\ell+1} with \tilde q \cap q \ne \emptyset. The number of those cubes is 64.
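The bound of 64 sons can be verified by counting: a cube of the next level has half the side length, so in each of the three coordinate directions there are four positions in which it still intersects the parent cube. A small enumeration on the unit reference cube (illustrative):

```python
from itertools import product

def intersects_parent(i):
    # Child cube [i/2, (i+1)/2] on the grid of step 1/2 versus parent [0, 1];
    # closed cubes, so touching at a face counts as intersecting.
    return (i + 1) / 2 >= 0 and i / 2 <= 1

positions = [i for i in range(-3, 5) if intersects_parent(i)]
assert positions == [-1, 0, 1, 2]                       # four positions per direction
assert len(list(product(positions, repeat=3))) == 64    # 4^3 = 64 cubes in 3D
```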
holds. Let \tau \in G so that \tilde c \in \sigma(c), and let the chain K_c be as in Definition 57. For \ell \le i \le L, define
\[
  \varepsilon_i^{(\alpha)} := \bigl\| \Phi^{(\alpha)}(t_i) - \hat\Phi^{(\alpha)}(t_i) \bigr\|_{L^\infty(\tau)}, \qquad
  \hat\varepsilon_i^{(\alpha)} := \bigl\| \Phi^{(\alpha)}(t_i) - \tilde\Phi^{(\alpha)}(t_{i+1}) \bigr\|_{L^\infty(\tau)}.   (92)
\]
Then,
\[
  \varepsilon_i^{(\alpha)} \le \hat\varepsilon_i^{(\alpha)}
  + \sum_{\tilde\alpha \in \iota_{m(i+1)}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \varepsilon_{i+1}^{(\tilde\alpha)}.
\]
The cluster radius of t_i is abbreviated by \rho_i. In Lemma 70, we prove
\[
  \varepsilon_i^{(\alpha)} \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}   (93)
\]
for some \gamma > 1 and sufficiently large \omega depending only on a, \kappa, and \gamma. Since \tau \in G was chosen arbitrarily, we have shown that
\[
  \bigl\| \Phi^{(\alpha)}(c) - \hat\Phi^{(\alpha)}(c) \bigr\|_{L^\infty(c)} \le \gamma^{a n_\ell - m_\ell} (\omega \rho_\ell)^{|\alpha|}
\]
holds.
We turn to the second estimate in (77) and introduce the intermediate approximation to N_c^{(\alpha)} (cf. (71)):
\[
  \tilde N_c^{(\alpha)}(y) := \sum_{i=1}^{3} n_i(y) \, \tilde\Phi^{(\alpha+e_i)}(c)(y).   (94)
\]
In Lemma 72, the recursions (for x \in \tilde c)
\[
  \tilde N_c^{(\alpha)}(x) = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \tilde\Phi^{(\tilde\alpha)}(\tilde c)(x) + \sum_{|\tilde\alpha| \le \tilde m - 1} \lambda^{\tilde c}_{\tilde\alpha,\alpha} N_{\tilde c}^{(\tilde\alpha)}(x),
\]
\[
  N_c^{(\alpha)}(x) = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \Phi^{(\tilde\alpha)}_{\tilde c}(x) + \sum_{|\tilde\alpha| \le \tilde m - 1} \lambda^{\tilde c}_{\tilde\alpha,\alpha} N_{\tilde c}^{(\tilde\alpha)}(x)   (95)
\]
are proved. For \ell \le i \le L, define
\[
  \delta_i^{(\alpha)} := \bigl\| N^{(\alpha)}_{t_i} - \hat N^{(\alpha)}_{t_i} \bigr\|_{L^\infty(\tau)}, \qquad
  \eta_i^{(\alpha)} := \bigl\| N^{(\alpha)}_{t_i} - \tilde N^{(\alpha)}_{t_{i+1}} \bigr\|_{L^\infty(\tau)}.   (96)
\]
Equations (71), (72), and (94) imply that
\[
  \delta_i^{(\alpha)} = 0 \qquad \forall\alpha : |\alpha| + 1 \le b, \quad \forall \ell \le i \le L.   (97)
\]
Hence, it remains to consider the case |\alpha| \ge b. The error \delta_i^{(\alpha)} can be estimated by
\[
  \delta_i^{(\alpha)} \le \eta_i^{(\alpha)}
  + \bigl| \langle n, M_{t_{i+1}} - M_{t_i} \rangle \bigr| \sum_{|\tilde\alpha| \le m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \varepsilon_{i+1}^{(\tilde\alpha)}
  + \sum_{|\tilde\alpha| < m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \delta_{i+1}^{(\tilde\alpha)}.   (98)
\]
In Lemma 73, we will prove that
\[
  \eta_i^{(\alpha)} \le \begin{cases}
    0 & \forall\alpha : |\alpha| < m_{i+1}, \\
    C_i \binom{|\alpha|}{m_{i+1}+1} \rho_i^{|\alpha|+2} \bigl( \rho_{i+1}/\rho_i \bigr)^{m_{i+1}+1} & \forall\alpha : m_{i+1} \le |\alpha| < m_i
  \end{cases}   (99)
\]
holds with C_i = C_\Gamma (m_{i+1} + 3)^3.
In the next step, the second term in (98) will be estimated. The smoothness of the surface and Lemma 70 imply
\[
  \bigl| \langle n, M_{t_{i+1}} - M_{t_i} \rangle \bigr| \sum_{|\tilde\alpha| \le m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \varepsilon_{i+1}^{(\tilde\alpha)}
  \le C_\Gamma \rho_i^2 \, \gamma^{a n_{i+1} - m_{i+1}} \sum_{\tilde\alpha \le \alpha} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} (\omega \rho_{i+1})^{|\tilde\alpha|}
  \le C_\Gamma \gamma^{a n_{i+1} - m_{i+1}} (\tilde\omega \rho_i)^{|\alpha|+2} =: f_i^{(\alpha)}   (100)
\]
with \tilde\omega = 1 + \omega. For |\alpha| < m_{i+1}, estimate (99) implies V_i^{(\alpha)} := \eta_i^{(\alpha)} + f_i^{(\alpha)} = f_i^{(\alpha)} and, for m_{i+1} \le |\alpha| < m_i, we have
\[
  \eta_i^{(\alpha)} + f_i^{(\alpha)}
  \le 2 C_i \gamma^{a n_{i+1} - m_{i+1}} (\tilde\omega \rho_i)^{|\alpha|+2} \binom{|\alpha|}{m_{i+1}+1} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{m_{i+1}+1} =: V_i^{(\alpha)}   (101)
\]
provided \gamma \le c_{12}^{-1} (cf. Assumption 58). Together, we have proved that the error \delta_i^{(\alpha)} can be estimated by
\[
  \delta_i^{(\alpha)} \le V_i^{(\alpha)} + \sum_{|\tilde\alpha| < m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \delta_{i+1}^{(\tilde\alpha)} \qquad \forall\alpha : |\alpha| < m_i.   (102)
\]
In Lemma 71, we will prove that
\[
  \delta_i^{(\alpha)} \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} + S_i^{(\alpha)}
  \qquad\text{with}\qquad
  S_i^{(\alpha)} = \sum_{k=i}^{L-1} f_k^{(\alpha)}   (103)
\]
holds. By using (120) and \gamma^{-a} < 1, we obtain
\[
  \delta_i^{(\alpha)} \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \Bigl( 1 + \frac{C_\Gamma \gamma^{-a}}{1 - \gamma^{-a}} \Bigr).
\]
Lemma 70 Let the assumptions of Lemma 61 be satisfied. Then, the coefficients \varepsilon_i^{(\alpha)} defined in (92) can be estimated by
\[
  \varepsilon_i^{(\alpha)} \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}
\]
with n_i := n_{t_i} (cf. Definition 58).
Proof. Set m = m_i and \tilde m = m_{i+1}. Estimates (122) and (123) imply (for |\alpha| > \tilde m):
\[
  \bigl| \Phi^{(\alpha)}(c) - \tilde\Phi^{(\alpha)}(\tilde c) \bigr|
  \le \tilde C_i \binom{|\alpha|}{\tilde m + 1} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{\tilde m + 1} \rho_i^{|\alpha|}   (104)
\]
with \tilde C_i = (\tilde m + 3)(\tilde m + 2)/2, resulting in
\[
  \hat\varepsilon_i^{(\alpha)} \le \begin{cases}
    0 & \forall\alpha \in \iota_{\tilde m}, \\
    \tilde C_i \binom{|\alpha|}{\tilde m + 1} \bigl( \rho_{i+1}/\rho_i \bigr)^{\tilde m + 1} \rho_i^{|\alpha|} & \forall\alpha \in \iota_m \setminus \iota_{\tilde m}.
  \end{cases}   (105)
\]
The coefficients \lambda^{\tilde c}_{\tilde\alpha,\alpha} can be estimated, for \tilde c = c, by
\[
  \bigl| \lambda^{c}_{\tilde\alpha,\alpha} \bigr| = \begin{cases} 1 & \tilde\alpha = \alpha, \\ 0 & \text{otherwise}, \end{cases}   (106)
\]
and, for \tilde c \in \sigma(c) with \tilde c \ne c, by
\[
  \bigl| \lambda^{\tilde c}_{\tilde\alpha,\alpha} \bigr| \le \begin{cases} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} & \text{if } \tilde\alpha \le \alpha, \\ 0 & \text{otherwise}. \end{cases}   (107)
\]
Next, the coefficients \varepsilon_i^{(\alpha)} are estimated. Definition 32 implies that the Taylor polynomials \hat\Phi^{(\alpha)}(c) coincide with \Phi^{(\alpha)}(c) up to the order b (cf. Definition 60):
\[
  \varepsilon_i^{(\alpha)} = 0 \qquad \forall\alpha \in \iota_{m(L)}, \quad \forall \ell \le i \le L.   (108)
\]
Recall that m(L) = b, i.e., \iota_{m(L)} = \iota_b = \{ \alpha \in \mathbb{N}_0^3 : |\alpha| \le b \}. Hence, we may assume below that
\[
  |\alpha| > b   (109)
\]
due to (73). The proof for |\alpha| > b is based on an induction over i = L, L-1, \dots, \ell.
i = L: The assertion directly follows from (108).
Assumption: The assertion holds up to an index i + 1:
\[
  \varepsilon_k^{(\alpha)} \le \gamma^{a n_k - m_k} (\omega \rho_k)^{|\alpha|} \qquad \forall\alpha \in \iota_{m(k)}, \quad \forall k = i+1, i+2, \dots, L,   (110)
\]
where n_k = n_{t_k} denotes the number of repeated clusters in the chain K_{t_k} (cf. (69)).
i + 1 \to i: We first consider the case that \sigma(t_i) = \{ t_{i+1} \}, i.e., t_i = t_{i+1} (cf. Assumption 58). This implies that the number of repeated clusters is increased by one: n_{t_{i+1}} + 1 = n_{t_i} (cf. (69)). We write, for short, n_{i+1} = n_{t_{i+1}} and n_i = n_{t_i}, and note that n_i \ge 1 holds. Taking into account (106), the definition of \varepsilon_i^{(\alpha)} reduces to
\[
  \varepsilon_i^{(\alpha)} = \begin{cases} \varepsilon_{i+1}^{(\alpha)} & \forall\alpha \in \iota_{m(i+1)}, \\ \hat\varepsilon_i^{(\alpha)} & \forall\alpha \in \iota_{m(i)} \setminus \iota_{m(i+1)}. \end{cases}   (111)
\]
For \alpha \in \iota_{m(i+1)}, we employ (110) and obtain, by using (109):
\[
  \varepsilon_i^{(\alpha)} = \varepsilon_{i+1}^{(\alpha)}
  \le \gamma^{a n_{i+1} - m_{i+1}} (\omega \rho_{i+1})^{|\alpha|}
  = \gamma^{a} \gamma^{a n_{i+1} - m_i} (\omega \rho_i)^{|\alpha|}
  = \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}.
\]
It remains to consider \alpha \in \iota_{m(i)} \setminus \iota_{m(i+1)}, which implies
\[
  m_i - a < |\alpha| \le m_i.   (112)
\]
Simple combinatorial manipulations show that (cf. (105))
\[
  \hat\varepsilon_i^{(\alpha)} \le \tilde C_i \binom{|\alpha|}{m_i - a + 1} \rho_i^{|\alpha|}
  \le \tilde C_i \frac{m_i^{a-1}}{(a-1)!} \rho_i^{|\alpha|}
  \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}
  \underbrace{ \tilde C_i \frac{m_i^{a-1}}{(a-1)!} \gamma^{m_i - a n_i} \omega^{-|\alpha|} }_{=: \hat q}.   (113)
\]
It remains to prove that \hat q is bounded by 1 for sufficiently large \omega. Estimate (112) and (105) along with n_i \ge 1 imply
\[
  \hat q \le \tilde C_i \frac{m_i^{a-1}}{(a-1)!} \gamma^{m_i - a} \omega^{-(m_i - a + 1)}
  \le \frac{1}{2\omega} \frac{(m_i + 3)^{a+1}}{(a-1)!} \Bigl( \frac{\gamma}{\omega} \Bigr)^{m_i - a}.
\]
Simple analysis yields (with \gamma/\omega < 1/e and \omega \ge (a+3)^{a+1} / (2 (a-1)!))
\[
  \hat q \le \frac{1}{\omega} \frac{(a+3)^{a+1}}{2 (a-1)!} \le 1.
\]
It remains to discuss the case t_i \ne t_{i+1}, implying \rho_{i+1}/\rho_i \le \kappa < 1 and n_i = n_{i+1}. The coefficients \varepsilon_i^{(\alpha)} can be estimated by
\[
  \varepsilon_i^{(\alpha)} \le \sum_{\tilde\alpha \in \iota_{m(i+1)}} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} \varepsilon_{i+1}^{(\tilde\alpha)} \qquad \forall\alpha \in \iota_{m(i+1)},   (114)
\]
\[
  \varepsilon_i^{(\alpha)} \le \tilde C_i \binom{|\alpha|}{m_{i+1}+1} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{m_{i+1}+1} \rho_i^{|\alpha|}
  + \sum_{\tilde\alpha \in \iota_{m(i+1)}} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} \varepsilon_{i+1}^{(\tilde\alpha)} \qquad \forall\alpha \in \iota_{m(i)} \setminus \iota_{m(i+1)}.   (115)
\]
Using the assumption (110), we obtain for the sum in (115) the estimate:
\[
  \sum_{\tilde\alpha \in \iota_{m(i+1)}} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} \varepsilon_{i+1}^{(\tilde\alpha)}
  \le \sum_{\tilde\alpha \le \alpha} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} \gamma^{a n_{i+1} - m_{i+1}} (\omega \rho_{i+1})^{|\tilde\alpha|}   (116)
\]
\[
  = \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}
  \underbrace{ \gamma^{a} \sum_{\tilde\alpha \le \alpha} \binom{\alpha}{\tilde\alpha} \omega^{-|\alpha - \tilde\alpha|} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{|\tilde\alpha|} }_{=: q}.
\]
The quantity q can be estimated by using Assumption 58:
\[
  q \le \gamma^{a} \Bigl( \kappa + \frac{1}{\omega} \Bigr)^{|\alpha|}.
\]
Choosing \omega \ge 2 / (1 - \kappa) yields
\[
  \kappa + \frac{1}{\omega} \le \frac{1 + \kappa}{2} < 1.
\]
Taking into account (109) and choosing b \ge 1 + a \ln\gamma / \ln\frac{2}{1+\kappa} results in
\[
  \gamma^{a} \Bigl( \frac{1 + \kappa}{2} \Bigr)^{b} \le \frac{1 + \kappa}{2} \le 1.
\]
Similarly as in (113), along with \rho_{i+1}/\rho_i \le \kappa and \omega \ge (a+3)^{a+1} / (2 (a-1)!), the quantity \hat\varepsilon_i^{(\alpha)} can be estimated by
\[
  \hat\varepsilon_i^{(\alpha)}
  \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|} \frac{1}{\omega} \frac{(a+3)^{a+1}}{2 (a-1)!} \kappa^{m_{i+1}+1}
  \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|} \kappa^{b+1}.
\]
Choosing b \ge \log 2 / |\log \kappa| results in
\[
  \hat\varepsilon_i^{(\alpha)} + q \, \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|} \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}.
\]
Altogether, we have proved that
\[
  \varepsilon_i^{(\alpha)} \le \gamma^{a n_i - m_i} (\omega \rho_i)^{|\alpha|}.
\]
The analogous estimate for the error \delta_i^{(\alpha)} is proved in Lemma 71.
Lemma 71 Let the assumptions of Lemma 61 be satisfied. Then, the coefficients \delta_i^{(\alpha)} defined in (96) can be estimated by
\[
  \delta_i^{(\alpha)} \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} + S_i^{(\alpha)}
\]
with n_i := n_{t_i} (cf. Definition 58) and S_i^{(\alpha)} as in (103).
Proof. The proof is similar to the proof of Lemma 70 and is based on an induction over i = L, L-1, \dots, \ell as well. We adopt the notation from that proof.
i = L: The assertion follows directly from (97).
Assumption: The assertion holds up to an index i + 1:
\[
  \delta_k^{(\alpha)} \le \gamma^{a n_k - m_k} \bigl( \omega_2 2^{-k} \bigr)^{|\alpha|+2} + S_k^{(\alpha)} \qquad \forall\alpha : |\alpha| < m_k, \quad \forall k = i+1, i+2, \dots, L.
\]
We first consider the case that \sigma(t_i) = \{ t_{i+1} \}, i.e., t_i = t_{i+1}. This implies n_i = n_{i+1} + 1 and n_i \ge 1. Similarly as in (111), one derives, with f_i^{(\alpha)}, V_i^{(\alpha)} as in (100), (101):
\[
  \delta_i^{(\alpha)} \le \begin{cases} f_i^{(\alpha)} + \delta_{i+1}^{(\alpha)} & \forall\alpha : |\alpha| < m_{i+1}, \\ V_i^{(\alpha)} & \forall\alpha : m_{i+1} \le |\alpha| < m_i. \end{cases}
\]
In the upper case, we obtain (by using a n_{i+1} - m_{i+1} = a n_i - m_i)
\[
  \delta_i^{(\alpha)} \le f_i^{(\alpha)} + \delta_{i+1}^{(\alpha)}
  \le f_i^{(\alpha)} + \gamma^{a n_{i+1} - m_{i+1}} \bigl( \omega_2 2^{-i-1} \bigr)^{|\alpha|+2} + S_{i+1}^{(\alpha)}
  \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} + S_i^{(\alpha)}.
\]
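The bookkeeping behind the repeated-cluster case is that the exponent a·n_i − m_i is invariant when passing from i + 1 to i: the variable order m_i = a(L − i) + b grows by a, while n_i grows by one. A one-line check with illustrative values (not taken from the paper's assumptions):

```python
def m(i, L, a, b):
    return a * (L - i) + b          # variable expansion order per level

L, a, b = 10, 2, 3                  # illustrative values only
n = {L: 0}
for i in range(L - 1, -1, -1):      # a repeated cluster at every step: n_i = n_{i+1} + 1
    n[i] = n[i + 1] + 1

exponents = [a * n[i] - m(i, L, a, b) for i in range(L + 1)]
assert len(set(exponents)) == 1     # a*n_i - m_i is constant along the chain
```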
For the lower case, the quantity V_i^{(\alpha)} has to be estimated similarly as \hat\varepsilon_i^{(\alpha)} in (113), by using (101), \rho_{i+1} \le \rho_i, (113), and putting q = \omega_2 / (C_7 \tilde\omega):
\[
  V_i^{(\alpha)} = 2 C_i \gamma^{a n_{i+1} - m_{i+1}} (\tilde\omega \rho_i)^{|\alpha|+2} \binom{|\alpha|}{m_{i+1}+1} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{m_{i+1}+1}
  \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} \frac{2 C_\Gamma (m_i + 3)^{a+2}}{q^2 (a-1)!} \Bigl( \frac{\gamma}{q} \Bigr)^{m_i - a}.   (117)
\]
Provided \gamma/q \le 1/e and q \ge 2 C_\Gamma (a+3)^{a+2} / (a-1)!, we obtain
\[
  \delta_i^{(\alpha)} \le V_i^{(\alpha)} \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} + S_i^{(\alpha)}.   (118)
\]
This concludes the case of t_{i+1} = t_i.
It remains to consider t_i \ne t_{i+1}, i.e., \rho_{i+1}/\rho_i \le \kappa < 1. We begin with estimating the sum in (102) by using the assumption of the induction:
\[
  \sum_{|\tilde\alpha| < m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \delta_{i+1}^{(\tilde\alpha)}
  \le \sum_{|\tilde\alpha| < m_{i+1}} \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|}
  \Bigl( \gamma^{a n_{i+1} - m_{i+1}} \bigl( \omega_2 2^{-i-1} \bigr)^{|\tilde\alpha|+2} + S_{i+1}^{(\tilde\alpha)} \Bigr).   (119)
\]
We begin with considering the sum S_i^{(\alpha)} as in (103). A single summation term f_k^{(\alpha)} satisfies, by putting q = \omega_2 / (C_7 \tilde\omega) and by using Assumption 42:
\[
  f_k^{(\alpha)} = C_\Gamma \gamma^{a n_{k+1} - m_{k+1}} (\tilde\omega \rho_k)^{|\alpha|+2}
  \le C_\Gamma \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \underbrace{ \gamma^{a (n_{k+1} - n_i) + m_i - m_{k+1}} \Bigl( \frac{2^{i-k}}{q} \Bigr)^{|\alpha|+2} }_{=: \varphi}.
\]
The quantity \varphi can be estimated by using q \ge 2 and b \ge 2 ( a \log\gamma / \log 2 - 1 ):
\[
  \varphi \le \gamma^{a (k+1-i)} \bigl( 2^{i-k-1} \bigr)^{b+2}
  \le \gamma^{a (k+1-i)} \, 2^{2 a (i-k-1) \log\gamma / \log 2}
  = \gamma^{-a (k+1-i)}.
\]
This leads to
\[
  S_i^{(\alpha)} \le C_\Gamma \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} \sum_{k=i}^{\infty} \gamma^{-a (k+1-i)}
  = \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} \frac{C_\Gamma \gamma^{-a}}{1 - \gamma^{-a}}.   (120)
\]
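The geometric series used in (120) can be checked directly: Σ_{k=i}^∞ γ^{−a(k+1−i)} = γ^{−a} / (1 − γ^{−a}) for any γ > 1. A quick numerical confirmation (γ and a are illustrative):

```python
gamma, a, i = 1.5, 2, 3             # any gamma > 1, a >= 1 will do
# Partial sum of gamma^{-a(k+1-i)} over k = i, i+1, ...; the terms form a
# geometric series with ratio gamma^{-a} < 1.
partial = sum(gamma ** (-a * (k + 1 - i)) for k in range(i, i + 200))
closed_form = gamma ** (-a) / (1 - gamma ** (-a))
assert abs(partial - closed_form) < 1e-12
```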
Hence, a single term in the brackets of (119) can be estimated by
\[
  \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|}
  \Bigl( \gamma^{a n_{i+1} - m_{i+1}} \bigl( \omega_2 2^{-i-1} \bigr)^{|\tilde\alpha|+2} + S_{i+1}^{(\tilde\alpha)} \Bigr)
  \le \binom{\alpha}{\tilde\alpha} \rho_i^{|\alpha - \tilde\alpha|} \gamma^{a n_{i+1} - m_{i+1}} \bigl( \omega_2 2^{-i-1} \bigr)^{|\tilde\alpha|+2}
  \frac{\gamma^{a} + C_\Gamma - 1}{\gamma^{a} - 1}
\]
and the sum by
\[
  \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \Bigl( \frac{C_7}{\omega_2} + \frac{1}{2} \Bigr)^{|\alpha|}
  \gamma^{a} \frac{\gamma^{a} + C_\Gamma - 1}{\gamma^{a} - 1}.
\]
The condition \omega_2 \ge 4 C_7 implies
\[
  \sum_{|\tilde\alpha| < m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \delta_{i+1}^{(\tilde\alpha)}
  \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \Bigl( \frac{3}{4} \Bigr)^{|\alpha|} \gamma^{a} \frac{\gamma^{a} + C_\Gamma - 1}{\gamma^{a} - 1}.
\]
It remains to estimate V_i^{(\alpha)} in (102). Similarly as in (117), (118) we obtain
  - for |\alpha| < m_{i+1}:  V_i^{(\alpha)} = f_i^{(\alpha)} \le S_i^{(\alpha)},
  - for m_{i+1} \le |\alpha| < m_i:  V_i^{(\alpha)} \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2} \kappa^{b}.
Altogether, we have proved that the right-hand side in (102) satisfies
\[
  V_i^{(\alpha)} + \sum_{|\tilde\alpha| < m_{i+1}} \bigl| \lambda^{t_{i+1}}_{\tilde\alpha,\alpha} \bigr| \, \delta_{i+1}^{(\tilde\alpha)}
  \le \gamma^{a n_i - m_i} \bigl( \omega_2 2^{-i} \bigr)^{|\alpha|+2}
  \underbrace{ \Bigl( \Bigl( \frac{3}{4} \Bigr)^{b} \gamma^{a} \frac{\gamma^{a} + C_\Gamma - 1}{\gamma^{a} - 1} + \kappa^{b} \Bigr) }_{=: \Theta}.
\]
The condition
\[
  b \ge \max\Bigl\{ \log\Bigl( 2 \gamma^{a} \frac{\gamma^{a} + C_\Gamma - 1}{\gamma^{a} - 1} \Bigr) \Big/ \log\frac{4}{3}, \; \log 2 \Big/ \log\frac{1}{\kappa} \Bigr\}
\]
implies \Theta \le 1, yielding the proof.
Next, we will prove the representation formulae for N_c^{(\alpha)} and \tilde N_c^{(\alpha)} (cf. (95)).
Lemma 72 Let N_c^{(\alpha)}, \tilde N_c^{(\alpha)} be as in (72), (94). Then, the representation and recursion formulae
\[
  \tilde N_c^{(\alpha)}(x) = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \tilde\Phi^{(\tilde\alpha)}(\tilde c)(x) + \sum_{|\tilde\alpha| \le \tilde m - 1} \lambda^{\tilde c}_{\tilde\alpha,\alpha} N_{\tilde c}^{(\tilde\alpha)}(x),
\]
\[
  N_c^{(\alpha)}(x) = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \Phi^{(\tilde\alpha)}_{\tilde c}(x) + \sum_{|\tilde\alpha| \le \tilde m - 1} \lambda^{\tilde c}_{\tilde\alpha,\alpha} N_{\tilde c}^{(\tilde\alpha)}(x)
\]
hold.
Proof. Let c \in T^{(\ell)} and |\alpha| < m(\ell). Put \tilde m = m(\ell+1) and n = n(x). Leibniz' rule for the differentiation of products yields
\[
  \tilde N_c^{(\alpha)}(x)
  = \sum_{i=1}^{3} n_i \sum_{|\tilde\alpha| \le \tilde m} \Phi^{(\tilde\alpha)}_{\tilde c}(x) \frac{1}{\tilde\alpha!} \partial^{\tilde\alpha} \Phi^{(\alpha+e_i)}(c)(M_{\tilde c})
  = \sum_{|\tilde\alpha| \le \tilde m} \Phi^{(\tilde\alpha)}_{\tilde c}(x) \frac{1}{\tilde\alpha!} \partial^{\tilde\alpha} \bigl[ \langle n, \cdot - M_c \rangle \Phi^{(\alpha)}(c) \bigr](M_{\tilde c})
\]
\[
  = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \Phi^{(\tilde\alpha)}_{\tilde c}(x) \frac{1}{\tilde\alpha!} \partial^{\tilde\alpha} \Phi^{(\alpha)}(c)(M_{\tilde c})
  + \sum_{|\tilde\alpha| \le \tilde m} \Phi^{(\tilde\alpha)}_{\tilde c}(x) \sum_{i=1}^{3} n_i \frac{\tilde\alpha_i \, \partial^{\tilde\alpha - e_i} \Phi^{(\alpha)}(c)(M_{\tilde c})}{\tilde\alpha!}
\]
\[
  = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \tilde\Phi^{(\tilde\alpha)}(\tilde c)(x)
  + \sum_{i=1}^{3} \sum_{|\tilde\alpha| \le \tilde m - 1} n_i \, \Phi^{(\tilde\alpha + e_i)}_{\tilde c}(x) \frac{1}{\tilde\alpha!} \partial^{\tilde\alpha} \Phi^{(\alpha)}(c)(M_{\tilde c})
\]
\[
  = \langle n, M_{\tilde c} - M_c \rangle \sum_{|\tilde\alpha| \le \tilde m} \lambda^{\tilde c}_{\tilde\alpha,\alpha} \tilde\Phi^{(\tilde\alpha)}(\tilde c)(x)
  + \sum_{|\tilde\alpha| \le \tilde m - 1} \lambda^{\tilde c}_{\tilde\alpha,\alpha} N_{\tilde c}^{(\tilde\alpha)}(x).
\]
In the same fashion, the second recursion is proved.
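The recursions above rest on re-expanding a Taylor polynomial about the son's center; in one dimension the shift coefficients are binomial, and the re-expansion is exact. A small sketch of this shift relation (1D, monomial basis, all values illustrative):

```python
from math import comb

def shift_coeffs(coeffs, h):
    """Re-expand p(x) = sum_a c_a (x - M)^a about the new center M~ = M + h:
    c~_b = sum_{a >= b} c_a * C(a, b) * h^(a - b)."""
    n = len(coeffs)
    return [sum(coeffs[a] * comb(a, b) * h ** (a - b) for a in range(b, n))
            for b in range(n)]

coeffs, M, h = [2.0, -1.0, 0.5, 3.0], 0.0, 0.7   # p about M, then about M + h
shifted = shift_coeffs(coeffs, h)

x = 1.3
p_old = sum(c * (x - M) ** a for a, c in enumerate(coeffs))
p_new = sum(c * (x - (M + h)) ** b for b, c in enumerate(shifted))
assert abs(p_old - p_new) < 1e-12    # the shifted expansion reproduces p exactly
```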
Lemma 73 Let \eta_i^{(\alpha)} be as in (96). Then, the estimate
\[
  \eta_i^{(\alpha)} \le \begin{cases}
    0 & \forall\alpha : |\alpha| \le m_{i+1} - 1, \\
    C_i \binom{|\alpha|}{m_{i+1}+1} \rho_i^{|\alpha|+2} \bigl( \rho_{i+1}/\rho_i \bigr)^{m_{i+1}+1} & \forall\alpha : m_{i+1} \le |\alpha| \le m_i - 1
  \end{cases}   (121)
\]
holds with C_i = C_\Gamma (m_{i+1} + 3)^3.
Proof. \tilde\Phi^{(\alpha)}(t_{i+1}) is the Taylor expansion of \Phi^{(\alpha)}(t_i)|_{t_{i+1}} of order \tilde m = m(i+1) about M_{i+1} := M_{t_{i+1}}. Hence,
\[
  \eta_i^{(\alpha)} = 0 \qquad \forall\alpha : |\alpha| \le \tilde m - 1.
\]
We introduce the abbreviations
\[
  \Phi^{(\alpha)}_{i+1} = \Phi^{(\alpha)}(t_i)|_{t_{i+1}}, \qquad \tilde\Phi^{(\alpha)}_{i+1} = \tilde\Phi^{(\alpha)}(t_{i+1}), \qquad n = n(y).
\]
For |\alpha| \ge \tilde m, the Taylor remainder \Delta^{(\alpha)}_{i+1} := N^{(\alpha)}_{t_i} - \tilde N^{(\alpha)}_{t_{i+1}} can be written in the form
\[
  \Delta^{(\alpha)}_{i+1}(y)
  = \sum_{j=1}^{3} n_j \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m + 1}}{(\tilde m + 1)!} \Phi^{(\alpha+e_j)}_{i+1}(z)
  = \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m + 1}}{(\tilde m + 1)!} \bigl[ \langle n, z - M_i \rangle \Phi^{(\alpha)}_{i+1}(z) \bigr]
\]
at an intermediate value z = \zeta = \zeta(y) \in [y, M_{i+1}]. Applying Leibniz' rule for the differentiation of products results in
\[
  \Delta^{(\alpha)}_{i+1}(y)
  = \langle n, \zeta - M_i \rangle \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m + 1}}{(\tilde m + 1)!} \Phi^{(\alpha)}_{i+1}(\zeta)
  + \langle n, y - M_{i+1} \rangle \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m}}{\tilde m!} \Phi^{(\alpha)}_{i+1}(\zeta).
\]
The smoothness of the surface \Gamma implies that there exists C_\Gamma so that
\[
  \bigl| \langle n, \zeta - M_i \rangle \bigr| \le C_\Gamma \rho_i^2, \qquad
  \bigl| \langle n, y - M_{i+1} \rangle \bigr| \le C_\Gamma \rho_{i+1}^2.
\]
The derivatives can be estimated by
\[
  \Bigl| \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m + 1}}{(\tilde m + 1)!} \Phi^{(\alpha)}_{i+1}(\zeta) \Bigr|
  \le \sum_{|\tilde\beta| = \tilde m + 1} \binom{\alpha}{\tilde\beta} \bigl| (y - M_{i+1})^{\tilde\beta} \bigr| \, \bigl| (\zeta - M_i)^{\alpha - \tilde\beta} \bigr|
  \le \rho_i^{|\alpha|} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{\tilde m + 1} \sum_{|\tilde\beta| = \tilde m + 1} \binom{\alpha}{\tilde\beta}.   (122)
\]
Some combinatorial manipulations yield:
\[
  \sum_{|\tilde\beta| = \tilde m + 1} \binom{\alpha}{\tilde\beta}
  \le \binom{|\alpha|}{\tilde m + 1} \sum_{|\tilde\beta| = \tilde m + 1} 1
  = \frac{(\tilde m + 3)(\tilde m + 2)}{2} \binom{|\alpha|}{\tilde m + 1}.   (123)
\]
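The count Σ_{|β̃|=m̃+1} 1 = (m̃+3)(m̃+2)/2 is the number of multi-indices in N_0^3 of length m̃+1, i.e., the binomial coefficient C(m̃+3, 2) from the stars-and-bars argument. A quick enumeration confirms this:

```python
from itertools import product
from math import comb

def count_multiindices(s, d=3):
    """Number of multi-indices in N_0^d with |beta| = s (stars and bars: C(s+d-1, d-1))."""
    return sum(1 for beta in product(range(s + 1), repeat=d) if sum(beta) == s)

for m in range(8):
    s = m + 1
    assert count_multiindices(s) == (m + 3) * (m + 2) // 2 == comb(s + 2, 2)
```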
Similarly,
\[
  \Bigl| \frac{\langle y - M_{i+1}, \nabla_z \rangle^{\tilde m}}{\tilde m!} \Phi^{(\alpha)}_{i+1}(\zeta) \Bigr|
  \le \frac{(\tilde m + 2)(\tilde m + 1)}{2} \binom{|\alpha|}{\tilde m} \rho_i^{|\alpha|} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{\tilde m}
\]
is derived. Altogether, we have proved that
\[
  \eta_i^{(\alpha)} = \bigl\| \Delta^{(\alpha)}_{i+1} \bigr\|_{L^\infty(\tau)}
  \le \tilde C \rho_i^{|\alpha|+2} \Bigl( \frac{\rho_{i+1}}{\rho_i} \Bigr)^{\tilde m + 1} \binom{|\alpha|}{\tilde m + 1}
\]
with \tilde C = C_\Gamma (\tilde m + 3)(\tilde m + 2)(\tilde m + 1).
Acknowledgments: It is a pleasure to acknowledge the support provided by the Mathematisches Forschungsinstitut Oberwolfach. Most of the results presented in this paper were derived during a stay of the author as a guest of the institute. Furthermore, I would like to thank my colleagues D. Braess, W. Hackbusch, and B. Khoromskij for fruitful discussions concerning the subject of the paper.
References
[1] P. Ciarlet. The Finite Element Method for Elliptic Problems. North-Holland, 1987.
[2] L. Greengard. The Rapid Evaluation of Potential Fields in Particle Systems. MIT Press, Cambridge, USA, 1988.
[3] W. Hackbusch. A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices. Computing, 62:89–108, 1999.
[4] W. Hackbusch and B. Khoromskij. A sparse H-matrix arithmetic. Part II: Applications to multi-dimensional problems. Technical Report 22, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, 1999.
[5] W. Hackbusch, C. Lage, and S. Sauter. On the Efficient Realization of Sparse Matrix Techniques for Integral Equations with Focus on Panel Clustering, Cubature and Software Design Aspects. In W. Wendland, editor, Boundary Element Topics, number 95-4, pages 51–76, Berlin, 1997. Springer-Verlag.
[6] W. Hackbusch and Z. Nowak. On the Complexity of the Panel Method (in Russian). In Proc. of the Conference: Modern Problems in Numerical Analysis, Moscow, Sept. 1986.
[7] W. Hackbusch and Z. Nowak. On the Fast Matrix Multiplication in the Boundary Element Method by Panel-Clustering. Numerische Mathematik, 54:463–491, 1989.
[8] W. Hackbusch and S. Sauter. On the Efficient Use of the Galerkin Method to Solve Fredholm Integral Equations. Applications of Mathematics, 38(4-5):301–322, 1993.
[9] C. Lage. Software Development for Boundary Element Methods: Analysis and Design of Efficient Techniques (in German). PhD thesis, Lehrstuhl Prakt. Math., Universität Kiel, 1995.
[10] C. Lage and C. Schwab. Advanced boundary element algorithms. Technical Report 11, SAM, ETH Zürich, 1999. To appear in: Highlights of the MAFELAP X Conference, J. Whiteman (ed.), Elsevier Publ. (2000).
[11] Z. Nowak. Efficient Panel Methods for the Potential Flow Problems in the Three Space Dimensions. Technical Report 8815, Universität Kiel, 1988.
[12] V. Rokhlin. Rapid solution of integral equations of classical potential theory. Journal of Computational Physics, 60(2):187–207, 1985.
[13] S. Sauter. The Panel Clustering Method in 3-d BEM. In G. Papanicolaou, editor, Wave Propagation in Complex Media, pages 199–224, Berlin, 1998. IMA Volumes in Mathematics and Its Applications, Springer-Verlag.
[14] S. A. Sauter. Der Aufwand der Panel-Clustering-Methode für Integralgleichungen. Technical Report 9115, Institut für Praktische Mathematik, University of Kiel, 24118 Kiel, Germany, 1991.
[15] S. A. Sauter. Über die effiziente Verwendung des Galerkinverfahrens zur Lösung Fredholmscher Integralgleichungen. PhD thesis, Inst. f. Prakt. Math., Universität Kiel, 1992.
[16] R. Schneider. Multiskalen- und Wavelet-Matrixkompression. Teubner, Stuttgart, 1998.
[17] R. Schneider, T. von Petersdorff, and C. Schwab. Multiwavelets for Second Kind Integral Equations. SIAM J. Numer. Anal., 34:2212–2227, 1997.
[18] T. von Petersdorff and C. Schwab. Fully Discrete Multiscale Galerkin BEM. In W. Dahmen, P. Kurdila, and P. Oswald, editors, Multiresolution Analysis and Partial Differential Equations, pages 287–346, New York, 1997. Academic Press.