Seventh Mississippi State - UAB Conference on Differential Equations and Computational Simulations, Electronic Journal of Differential Equations, Conference 17 (2009), pp. 133–148.
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
ftp ejde.math.txstate.edu

A MULTILEVEL ADAPTIVE MESH GENERATION SCHEME USING KD-TREES

ALFONSO LIMON, HEDLEY MORRIS

Abstract. We introduce a mesh refinement strategy for PDE-based simulations that benefits from a multilevel decomposition. Using Harten's MRA in terms of the Schröder-Pander linear multiresolution analysis [20], we are able to bound discontinuities in $\mathbb{R}$. This MRA is extended to $\mathbb{R}^n$ in terms of $n$ orthogonal linear transforms and utilized to identify cells that contain a codimension-one discontinuity. These refinement cells become leaf nodes in a balanced Kd-tree, such that a local dyadic MRA is produced in $\mathbb{R}^n$ while maintaining a minimal computational footprint. The nodes in the tree form an adaptive mesh whose density increases in the vicinity of a discontinuity.

2000 Mathematics Subject Classification. 35R05, 65N50.
Key words and phrases. Adaptive grid refinement; wavelet refined mesh; quadtree grids; multilevel decomposition; codimension-one discontinuities.
(c) 2009 Texas State University - San Marcos. Published April 15, 2009.

1. Multilevel Representation

There are several methods by which to locate fast transitions across multiple scales. Most commonly used are wavelets [6, 9, 22] and their lifted extensions [7, 15, 23, 24]. Also used are the adaptive stencil selection methods first proposed by Harten [13] and their subsequent extensions by Aràndiga et al. [1] and Schröder-Pander et al. [20] to form a general multiresolution analysis.

1.1. Harten's MRA. The following multiresolution analysis (MRA) holds for Banach spaces [20]; however, for expository purposes, we restrict this introduction to finite sets of linear operators on Euclidean spaces. Let $\{D_0, \dots, D_k, \dots, D_L\}$ be a set of linear operators such that $D_k : V \to V^k$, where $V$ and $V^k$ are Euclidean spaces. The operators $\{D_k\}_{k=0}^{L}$ have the following two properties:
(1) $D_k$ is onto for all $k$;
(2) if $D_k f = 0$, then $D_{k-1} f = 0$ for any $f \in V$.
Operators having these two properties are said to be nested.

A restriction operator $R^{k-1}_k$ projects elements from a finite-dimensional linear space $V^k$ to $V^{k-1}$, and has the property that $R^{k-1}_k v^k = D_{k-1} f$ for all $v^k$, where $v^k = D_k f \in V^k$. Moreover, this restriction operator is well defined because $R^{k-1}_k$ is independent of the particular $f \in V$: note that there is at least one element $f \in V$ such that $D_k f = v^k$ because $D_k$ is onto; therefore, supposing $D_k f_1 = D_k f_2 = v^k$ for $f_1 \neq f_2$ with $f_1, f_2 \in V$, by linearity $D_k(f_1 - f_2) = 0$ and by the nested property $D_{k-1}(f_1 - f_2) = 0$. Thus, $R^{k-1}_k$ is independent of $f \in V$ [1].

In the other direction, a prolongation operator $P^k_{k-1}$ is defined to be a right inverse of $R^{k-1}_k$, so that $P^k_{k-1} : V^{k-1} \to V^k$. By the definition of a restriction operator, $R^{k-1}_k(D_k f) = D_{k-1} f$. The existence of an inverse $D_k^{-1} : V^k \to V$ for every $D_k$ implies that a prolongation operator can be defined as $P^k_{k-1} = D_k D_{k-1}^{-1}$. Note that $D_k^{-1}$ is well defined as a set map, because $D_k$ is onto from $V$ to $V^k$, so $D_k(D_k^{-1}(W^k)) = W^k$ for all $W^k \subset V^k$, where $D_k^{-1}(W^k) \subset V$ is defined by $D_k f \in W^k$ for $f \in D_k^{-1}(W^k)$.
Thus, $R^{k-1}_k(D_k f) = D_{k-1} f$ for any $f \in V$, and the same holds for $f = D_{k-1}^{-1} v^{k-1}$, so $R^{k-1}_k(D_k D_{k-1}^{-1} v^{k-1}) = D_{k-1} D_{k-1}^{-1} v^{k-1} = v^{k-1}$. Therefore, $P^k_{k-1} v^{k-1} = D_k D_{k-1}^{-1} v^{k-1}$ satisfies $R^{k-1}_k P^k_{k-1} v^{k-1} = I_{k-1} v^{k-1}$, and hence $P^k_{k-1}$ is a right inverse of $R^{k-1}_k$ [20].

These operators provide a framework by which to approximate $v^k$ in terms of the space $V^{k-1}$. Suppose $v^L \in V^L$ and, via repeated restrictions, $v^{k-1} = R^{k-1}_k v^k$ for $k = L, \dots, 1$. Then $D_{k-1}^{-1} v^{k-1}$ is a recovery of $f \in V$ via the space $V^{k-1}$. Therefore, any element in $V^{k-1}$ can be used to compute an approximation error by comparing its prolongation to the corresponding element in $V^k$, i.e., $e^k = v^k - P^k_{k-1} v^{k-1}$. In fact, $e^k$ is in the null space of the restriction operator, because
$$R^{k-1}_k e^k = R^{k-1}_k v^k - R^{k-1}_k P^k_{k-1} v^{k-1} = 0,$$
so it can be rewritten in terms of a basis $\{\mu^k_j\}$ of $N(R^{k-1}_k)$. Thus, $e^k = \sum_{j=1}^{S} d^k_j \mu^k_j \equiv E_k d^k$, where $S \equiv \dim N(R^{k-1}_k) = \dim V^k - \dim V^{k-1}$, and the $d^k$ are known as the scale coefficients [20]. Therefore, there exists a one-to-one correspondence between $v^k$ and the pair $(d^k, v^{k-1})$, because
$$v^{k-1} = R^{k-1}_k v^k, \qquad d^k = E_k^{-1}\bigl(v^k - P^k_{k-1} R^{k-1}_k v^k\bigr). \quad (1.1)$$
Conversely, given $(d^k, v^{k-1})$, $v^k$ can be recovered via
$$P^k_{k-1} v^{k-1} + E_k d^k = P^k_{k-1} R^{k-1}_k v^k + E_k E_k^{-1}\bigl(v^k - P^k_{k-1} R^{k-1}_k v^k\bigr) = v^k. \quad (1.2)$$
Therefore, any element $v^k \in V^k$ can be decomposed across $L$ levels into $\{v^0, d^1, \dots, d^L\}$, forming a multiresolution representation of $v^L$ in terms of the scale coefficients.

1.2. Application of Harten's MRA. Suppose we are given an infinite-dimensional space of functions $V \subset \{f \mid f : \Omega \subset \mathbb{R}^m \to \mathbb{R}\}$, where $\Omega$ is a bounded region, and $f$ is sampled by $D_k$ onto a finite linear space $V^k$. Using Harten's MRA, a multiresolution scheme adapted specifically to sequences obtained from $D_k$ can be constructed. Recall that $D_k$ and its inverse provide a mechanism by which to construct different resolution versions of $V$,
$$V^k \;\underset{D_k}{\overset{D_k^{-1}}{\rightleftarrows}}\; V \;\underset{D_{k-1}^{-1}}{\overset{D_{k-1}}{\rightleftarrows}}\; V^{k-1}, \quad (1.3)$$
where $\dim(V^{k-1}) < \dim(V^k)$. For our applications, $V$ is not directly accessible, so we need a mechanism by which to represent $f \in V$ using only the space $V^L$, which represents some finest sampled version of $V$.

Given $R^{k-1}_k$ and $P^k_{k-1}$, a nested set of spaces $\{V^0, \dots, V^{k-1}, V^k, \dots, V^L\}$ can be constructed, resulting in
$$V^k \;\underset{P^k_{k-1}}{\overset{R^{k-1}_k}{\rightleftarrows}}\; V^{k-1}. \quad (1.4)$$
Using this framework, $v^k \in V^k$ can be approximated by $P^k_{k-1} R^{k-1}_k : V^k \to V^k$; the information lost across the loop $V^k \to V^{k-1} \to V^k$ is the approximation error $e^k = (I_k - P^k_{k-1} R^{k-1}_k) v^k$. From the MRA, $e^k$ is in the null space of $R^{k-1}_k$, and as a consequence, expressing $e^k$ in terms of a basis of $V^k$ results in redundant information. However, this redundant information can be discarded by projecting onto $N(R^{k-1}_k)$, so $e^k = E_k d^k$, and hence the approximation error is encoded in terms of the scale coefficients. In other words, the scale coefficients $d^k$ represent the information at $V^k$ that cannot be reconstructed by $P^k_{k-1}$ from $V^{k-1}$. Using the one-to-one correspondence between $v^k$ and $(d^k, v^{k-1})$, the two-level scheme can be expanded to form the pyramid scheme
$$\begin{array}{ccccccc}
v^k & \longrightarrow & v^{k-1} & \longrightarrow & v^{k-2} & \longrightarrow & \cdots \\
 & \searrow & & \searrow & & \searrow & \\
 & & d^k & & d^{k-1} & & \cdots
\end{array} \quad (1.5)$$
where any element $v^k \in V^k$ can be decomposed across $L$ levels, thereby forming a multiresolution representation of $V^L$.
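To make the two-level step (1.1)–(1.2) concrete, the sketch below (ours, not part of the paper) takes point values on a uniform dyadic grid, uses decimation as the restriction operator and midpoint interpolation as the prolongation, in the spirit of the compact linear predictor described later in Section 2.4; the function names and the test function are illustrative only.

```python
import numpy as np

def restrict(v):
    """R^{k-1}_k: keep the even-indexed samples (decimation)."""
    return v[::2]

def prolong(vc):
    """P^k_{k-1}: linear (midpoint) interpolation back to the fine grid."""
    vf = np.empty(2 * len(vc) - 1)
    vf[::2] = vc                          # scale branch copied exactly
    vf[1::2] = 0.5 * (vc[:-1] + vc[1:])   # odd entries are midpoint predictions
    return vf

def decompose(v):
    """Two-level step (1.1): v^k -> (v^{k-1}, d^k)."""
    vc = restrict(v)
    dk = (v - prolong(vc))[1::2]          # details live at the odd entries
    return vc, dk

def reconstruct(vc, dk):
    """Inverse step (1.2): (v^{k-1}, d^k) -> v^k, exactly."""
    vf = prolong(vc)
    vf[1::2] += dk
    return vf

# pyramid scheme (1.5): repeat the two-level step on the coarse branch
x = np.linspace(0.0, 1.0, 2**6 + 1)
v = np.sin(2 * np.pi * x)
details = []
for _ in range(3):
    v, d = decompose(v)
    details.append(d)
for d in reversed(details):               # perfect reconstruction up to round-off
    v = reconstruct(v, d)
print(np.max(np.abs(v - np.sin(2 * np.pi * x))))
```

Repeating the two-level step on the coarse branch realizes the pyramid (1.5); retaining only the large entries of each $d^k$ yields the thresholded representation discussed next.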
More importantly, the scale coefficients represent the information content at each level that is not reproducible by the projection operator; hence, if the information loss is small, so are the elements $d^k_j$ for certain $j$. Thus, a compressed version of $f$ can be produced via scale-coefficient thresholding, mimicking Donoho's wavelet thresholding strategy [9].

2. Adaptive Multilevel Refinement in $\mathbb{R}$

Section 1 introduced Harten's generalized multiresolution analysis in terms of linear operators; the following sections link this MRA to grid generation by interpreting scale coefficients of large magnitude as grid refinement points. These ideas are extended to define a dual transform capable of bounding jumps by two grid points in $V^L$, so as to segment the adaptive grid into smooth disjoint subsets. The dual transform is then compared to several multiresolution schemes in $\mathbb{R}$. In Section 3, we extend these ideas to higher dimensions.

2.1. Ideas From Wavelet Refinement. A refinement process is achieved by controlling the magnitude of the wavelet coefficients. By eliminating small wavelet coefficients, functions in $\mathbb{R}$ can be represented more compactly, and the error between the original function and its compressed version differs by at most a constant [9]. However, a problem arises with classical wavelet thresholding methods: applying them changes the nature of the grid. This is because the thresholding procedure leaves gaps and thus destroys the dyadic constraint used to enforce MRA stability.

Theorem 2.1 (Multi-Level Stability [5]). Suppose the scaling bases are uniformly stable and the MRA is dense in $L^2$, and suppose that $W_t$ is a wavelet transform within this MRA; then the corresponding wavelet basis is a Riesz basis if and only if $W_t$ is uniformly stable.

Theorem 2.1 implies that the distance between neighboring grid points across all levels must satisfy a dyadic measure. However, when the grid is non-uniform, the wavelet transform is no longer simply a sequence of identical operators, so it is not sufficient to require that $W_t$ be non-singular to guarantee stability. Therefore, a weaker necessary condition for stability is needed for non-uniform grids. This condition is stated in Theorem 2.2 as a relation between consecutive multiscale levels.

Theorem 2.2 (Single-Level Stability [21]). A single transform $W^k_t$ at level $k$ is uniformly stable if the wavelet basis is uniformly stable within the detail space $W^k$ it spans, and the detail space $W^k$ is a stable complement of the scale space $S^k$; i.e., the angle between $S^k$ and $W^k$ must be uniformly bounded away from zero.

This theorem suggests a strategy to stabilize the wavelet transform on non-uniform grids: every operator across a single level must be bounded and boundedly invertible, the subspaces must form a stable splitting [16], and the sequence of subsequent subspaces should change in some homogeneous fashion so as to yield a stable MRA. The first point can be resolved by constructing operators with singular values bounded away from zero and infinity. The second and third points require a homogeneity measure on both the grid at level $k$ and the grid at the subsequent level. Hence, the grid refinement strategy must prevent gaps from growing arbitrarily large between neighboring points, which mixes scales and destabilizes the transform [8]. Therefore, the homogeneity of the grid must vary smoothly enough to form a stable wavelet transform. In $\mathbb{R}$, a measure of the homogeneity of a grid at level $k$ can be defined as
$$\gamma^k = \frac{\max\bigl(x^k_{j+2} - x^k_{j+1},\; x^k_j - x^k_{j-1}\bigr)}{x^k_{j+1} - x^k_j}. \quad (2.1)$$
Note that $\gamma^k \ge 1$, and the grid is uniform when equality holds. Therefore, by restricting $\gamma^k < M$, we have a method to homogenize the grid at level $k$ [8]. Because of the simple splitting between levels via the Lazy wavelet, this homogeneity constant can be interpreted as the ratio between the scales supported by two consecutive levels. Thus, we can enforce the maximum overlap between scales across subsequent levels by a similar homogeneity measure,
$$\beta^k = \frac{\min\bigl(x^{k+1}_{2j+1} - x^{k+1}_{2j},\; x^{k+1}_{2j+2} - x^{k+1}_{2j+1}\bigr)}{x^k_{j+1} - x^k_j}. \quad (2.2)$$
In this case, $0 \le \beta^k \le 1/2$, and when $\beta^k = 1/2$ the grid is dyadic [8]. In practice, it is a good idea to initialize the grid refinement procedure by setting $\beta = 1/2$ and $\gamma = 1$ so as to follow the AMR cell constraint [3]. As we relax $\gamma$ and $\beta$, the magnitude of the wavelet coefficients is more likely to be contaminated by spurious neighboring modes, so care must be taken when relaxing the homogenization constants.

2.2. From Global to Locally Dyadic Spaces. In Section 2.1, we assumed the MRA structure was defined on a set of uniform grids across $k$ levels, where the spacing between levels changes by a factor of two. This globally dyadic scheme implies that the underlying function requires uniform sampling across all scales; however, this is not the case for applications that benefit from adaptive grid refinement. After all, in regions where the solution is smooth, $P^k_{k-1}$ reconstructs $f$ well, so a coarser grid can be used, thus limiting the number of grid points used to represent $f \in V^k$. In places where $f$ cannot be reconstructed by $P^k_{k-1}$, i.e., where the scale coefficients are large, the grid is refined to some finest level $L$ so as to capture the essential features of $f \in V^L$.

In Harten's multiscale framework, a function can be decomposed across $L$ scales as $f = (v^0, d^0, \dots, d^{L-1})$. Assuming the function is discontinuous at finitely many places, a compressed version $\tilde f$ can be obtained by applying a hard threshold to each level, i.e., retaining only the scale coefficients $d^k_J$, where $J = \{j : 2^k |d^k_j| > \tau\}$ and $\tau$ is a predefined tolerance, and discarding the rest. This compressed version no longer has a full set of $d^k_j$ at each level $k$; instead, each level has an associated index set $I^k$ that assigns each remaining $d^k_j$ to a location on the non-uniform grid $G^k$ at level $k$, where $\cup_k G^k$ forms the set of function values from which $P^k_{k-1}$ forms $\tilde f$. As a consequence of the thresholding procedure, each $G^k$ may contain gaps larger than $2^{k-L} 2^N$ for a grid that initially had $2^N + 1$ uniform points. This results in multiple frequencies being associated with each level $k$ if the gaps larger than $2^{k-L} 2^N$ are not discarded. Hence, each $G^k$ containing large gaps should be interpreted as a disjoint set of local grids $V^k$ indexed by $I^k$. On each of these local $V^k$, the multiresolution framework acts as previously defined, with single operators depending only on the level $k$. Concatenating across all levels, $\cup_k G^k$ forms the adaptive grid $G^{\cup k}$, whose point density changes with the smoothness of $f$. Figure 1 illustrates the relationship between the active grid points, as defined by $d^k_J$ for all levels, and the adaptive grid $G^{\cup k}$.

Figure 1. Locally dyadic grid with large scale coefficients in black.

However, locally dyadic grids tend to generate hanging nodes, i.e., places where the distance between grid points in $G^{\cup k}$ changes abruptly (e.g., the third node from the left in Figure 1).
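As an illustration, the homogeneity measures (2.1) and (2.2) can be evaluated directly on arrays of grid coordinates. The following minimal sketch (ours) assumes sorted one-dimensional grids, interior indices, and that level $k+1$ refines level $k$ by inserting one point per cell; the function names are not from the paper.

```python
import numpy as np

def gamma(x, j):
    """Homogeneity measure (2.1) at an interior point j of the sorted grid x."""
    left  = x[j]     - x[j - 1]
    mid   = x[j + 1] - x[j]
    right = x[j + 2] - x[j + 1]
    return max(right, left) / mid

def beta(x_coarse, x_fine, j):
    """Overlap measure (2.2): how evenly the new level-(k+1) point splits
    the coarse cell [x_coarse[j], x_coarse[j+1]]."""
    lo, mid, hi = x_fine[2 * j], x_fine[2 * j + 1], x_fine[2 * j + 2]
    return min(mid - lo, hi - mid) / (x_coarse[j + 1] - x_coarse[j])

# a uniform dyadic refinement gives gamma = 1 and beta = 1/2
xc = np.linspace(0.0, 1.0, 9)     # level k
xf = np.linspace(0.0, 1.0, 17)    # level k + 1
print(gamma(xc, 3), beta(xc, xf, 3))   # 1.0 0.5
```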
The adaptive grid is non-uniform, so we can apply ideas from wavelet refinement to form a framework for balancing the grid-point distribution. Note that each grid point $x_j \in G^{\cup k}$ has been assigned a local level $k$ depending on the gap size between adjacent points, i.e., by reverse application of the homogenization criterion (2.2) for fixed $\beta$. Using the homogenization criterion (2.1) on the adaptive grid $G^{\cup k}$ with $\gamma \le 2$ balances the grid by providing the locations where nodes should be added to enforce the $\gamma$ constraint.

There is one problem with the previous method: the addition of nodes is not uniquely defined. As illustrated by Figure 2, $G^{\cup k}$ can be balanced (i.e., made to satisfy $\gamma \le 2$) either by setting the fourth level-$(L-1)$ coefficient $d^{L-1}$ active (bracketed node), or by activating both the seventh and eighth level-$L$ coefficients $d^L$ (hatted nodes). By choosing to activate nodes in $V^L$, we over-resolve discontinuities and decrease the compression ratio. Thus, activating nodes in $V^L$ should be suppressed unless there is some added benefit. We explore this question further in Section 2.3, where we introduce a dual transform.

Figure 2. Balanced locally dyadic grid.

2.3. Bounding Jumps with Dual Transforms. Local dyadic grids generate hanging nodes, and therefore a procedure is required to balance the grid. The balancing procedure described in Section 2.2 cannot uniquely identify the nodes used to balance the adaptive grid $G^{\cup k}$. Supposing the discontinuity (or large gradient) is located between grid points, the associated large $d^k_j$ for a linear $P^k_{k-1}$ is located either before or after the jump, so there is ambiguity as to the location of the discontinuity due to the coloring scheme defined by $R^{k-1}_k$. Recall that the Lazy wavelet defines the scale and detail branches via a predefined ordering scheme (odd vs. even in $\mathbb{R}$), so there is no smoothness information about $f$ at the time the branch splitting occurs. Therefore, whether we use branch $V^L$ or $V^k$ depends on the location of the discontinuity in relation to the location of the hanging node. Because we are using a point-based method built on a simple splitting scheme, we require additional information to find the exact location of the discontinuity.

Suppose $f$ contains a single jump in the function's value, as opposed to its derivative; then, for a sufficiently small threshold tolerance, there exists one $d^L_j$ which locates the jump to within one grid point of its true location, because $P^k_{k-1}$ is linear. Therefore, we need only check the immediate left or right neighbor to locate the second grid point that bounds the discontinuity (or large gradient). This requires that we shift the coloring scheme by one grid point and apply a second local transform around the node associated with $d^L_j$; we refer to this procedure as the dual transform.

By bounding the discontinuity, we are able to define a new coloring scheme that takes into account the smoothness of $f$ before splitting the domain into scale and detail branches. The coloring scheme utilizes the dual transform to locate the left and right neighbors in $V^L$ that bound the jump in $f \in V$. The associated grid-point pair thus defines disjoint segments within $G^{\cup k}$ where $f$ is smooth. Figure 3 illustrates classical coarsening via odd/even splitting (plot a) for a function containing a large gradient (plot b), and compares classical splitting to our dual-transform coarsening scheme (plot c), which splits the domain into smooth disjoint segments by first bounding the large gradient and then coarsening the domain.

Figure 3. (a) classical coarsening; (b) sketch of $f$; (c) find jump, color, then coarsen.
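To see why a single detail coefficient localizes a value jump to within one grid point when the predictor is linear, consider the following worked example (our illustration, using the midpoint predictor corresponding to the compact linear stencil of Section 2.4). The detail at the odd node $x_{2j+1}$ is
$$d^L_j = v^L_{2j+1} - \tfrac{1}{2}\bigl(v^L_{2j} + v^L_{2j+2}\bigr).$$
If $f$ has a single value jump of height $h$ between $x_{2j+1}$ and $x_{2j+2}$ and is locally constant otherwise, then $v^L_{2j} = v^L_{2j+1} = a$ and $v^L_{2j+2} = a + h$, so
$$d^L_j = a - \tfrac{1}{2}\bigl(a + (a + h)\bigr) = -\tfrac{h}{2},$$
while the details whose stencils do not straddle the jump vanish. The single large coefficient $|d^L_j| = h/2$ therefore flags the jump to within one grid point, and the dual transform need only decide whether the bounding partner of $x_{2j+1}$ is $x_{2j}$ or $x_{2j+2}$.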
Moreover, by using smooth disjoint sets in $G^{\cup k}$ and linear dyadic operators, no prolongation operation crosses any discontinuity as defined by $d^L_J$. Hence, no Gibbs effects contaminate the compressed version $\tilde f$ of $f \in V$ produced by our dual transform.

2.4. Locating Jumps in $\mathbb{R}$. We restrict Harten's multiresolution framework to compact locally dyadic subspaces of $\mathbb{R}$ and suppose that all discontinuities have codimension one and occur at non-dyadic points. This construction assures that any jump in $f \in V$ has two neighboring points on the dyadic mesh $V^k$, and that one of those points, when projected to $V^{k-1}$, is in the detail branch of the multiresolution transform. This point in the detail branch can be located by Donoho's hard-thresholding strategy [9]. The second neighboring point that bounds the discontinuity can be located by applying the same procedure to the scale branch of the transform, or by the local dual transform detailed in Section 2.3. In this way, we are able to locate the two points that bound the discontinuity in $\mathbb{R}$, and thus construct an adaptive grid $G^{\cup k}$.

Note that the application of the dual transform must be computationally inexpensive, because the adaptive grid $G^{\cup k}$ will be constructed countless times over the course of a single simulation. Hence, the down-sampling is accomplished by the restriction operator $R^{k-1}_k$, and, consistent with the goal of remaining computationally minimal, the actions of $R^{k-1}_k$ and $E_k$ on elements $v^k$ require no computation: the operators simply discard even or odd entries, respectively. The $d^k$ coefficients are found by applying the forward transform $d^k = E_k^T\bigl(v^k - P^k_{k-1} R^{k-1}_k v^k\bigr)$, Eq. (1.1). Only the linear distance-weighted prolongator $P^k_{k-1}$ requires computation; however, even this requires only a compact three-element stencil.

Therefore, given a discontinuous $f$, the dual transform first locates the large $d^k_j$ by applying a single forward transform. These coefficients lie in the detail branch and are labeled whenever $2^k |d^k_j| > \tau$ for a predefined tolerance $\tau$. Any labeled $d^k_j$, viewed as a point of $V^L$, is defined as a refinement-point boundary, i.e., the first half of the bounding jump pair. To locate the second point in the pair, one could apply the same procedure to the scale branch, thereby computing a full wavelet packet, but this is unnecessary because $P^k_{k-1}$ has a compact stencil by design. Therefore, the second point in the pair can be uniquely identified by switching from the detail to the scale branch, computing $|d^k_{j-1/2}|$ and $|d^k_{j+1/2}|$, and determining which is larger. This procedure defines the bounding point pair for any well-resolved jump and is summarized as Algorithm 2.3. We define well-resolved jumps to be discontinuities (or large gradients) that have no other jump nearer than two scale-branch nodes for $P^k_{k-1}$ linear.

Algorithm 2.3 (Locating Jump Boundaries).
(1) Given $f : \mathbb{R} \to \mathbb{R}$ and a tolerance $\tau$,
(2) apply the forward transform (1.1) to $f$;
(3) for all $|d^k_j| > \tau/2^k$, compute $d^k_{j-1/2}$ and $d^k_{j+1/2}$;
(4) if $|d^k_{j-1/2}| > |d^k_{j+1/2}|$, return $[v^k_j, v^k_{j-1}]$;
(5) else return $[v^k_j, v^k_{j+1}]$.

This localized search lowers the complexity from $O(2N)$ to $O(N + N_J)$, where $N$ is the number of data points and $N_J$ the number of jumps. Furthermore, as $N \to \infty$, $N_J/N \to 0$, so the extra work needed to localize the second neighboring point becomes negligible for large problems.
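The following is a minimal single-level sketch of Algorithm 2.3 (ours), assuming point values on a uniform finest grid, the midpoint predictor, and only crude treatment of the boundaries; the step-function test and all names are illustrative.

```python
import numpy as np

def jump_pairs(f, tau):
    """Single-level sketch of Algorithm 2.3: return index pairs of grid points
    that bound each well-resolved jump of the sampled values f."""
    # forward transform, detail branch: predict odd samples from even neighbors
    d = f[1::2] - 0.5 * (f[0:-2:2] + f[2::2])            # d_j sits at node 2j + 1
    pairs = []
    for j in np.flatnonzero(np.abs(d) > tau):
        n = 2 * int(j) + 1                                # flagged detail node
        # dual (shifted) details on the scale branch, one node left and right
        d_left  = f[n - 1] - 0.5 * (f[n - 2] + f[n]) if n >= 2 else 0.0
        d_right = f[n + 1] - 0.5 * (f[n] + f[n + 2]) if n + 2 < len(f) else 0.0
        partner = n - 1 if abs(d_left) > abs(d_right) else n + 1
        pairs.append((n, partner))
    return pairs

# a step function sampled on 2^6 + 1 uniform points; the jump lies between samples
x = np.linspace(0.0, 1.0, 65)
f = np.where(x < 0.37, 0.0, 1.0)
print(jump_pairs(f, tau=0.1))   # [(23, 24)]: the pair bounding the jump
```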
This scheme thus achieves the same complexity as Chan's ENO-Wavelet method [4], which combines wavelets with another of Harten's ideas, the essentially non-oscillatory (ENO) scheme [14], to achieve good reconstructions for functions with jumps. In Section 2.5, we compare both of these methods to another of Harten's methods [11, 12], which utilizes a hierarchy of nested grids obtained through dyadic coarsening to locate discontinuities.

2.5. Balancing $G^{\cup k}$ and Comparing Representations. Section 2.4 detailed a method to locate grid-point pairs that bound well-resolved jumps in $f$. These point pairs in $V^L$ segment $G^{\cup k}$ into smooth intervals where the MRA can be applied without generating Gibbs phenomena. However, the adaptive grid still needs to be balanced, as it may possess hanging nodes, so we detail a method by which to span nodes with dyadic intervals (or dyadic gaps). Suppose we have two $V^L$ nodes that are ten elements apart, as measured by elements in $V^L$; then the sequence of gaps should be $\{1, 2, 4, 2, 1\}$ to maintain a dyadic measure. Hence, for this symmetric dyadic sequence the largest gap is four and, once we know it, we know the whole gap sequence. In general, for symmetric dyadic sequences, the maximum gap size $2^{\bar n}$ determines the total distance $N = 3(2^{\bar n}) - 2$. Solving for $\bar n$ yields the maximum dyadic level between nodes $v_1, v_2 \in V^L$.

Using the dual transform to find jump-point bounding pairs and then balancing hanging nodes via dyadic sequences, the adaptive grid $G^{\cup k}$ can be segmented into local $V^k$ spaces. Each of these local spaces is smooth and thus well represented by $P^k_{k-1}$. The largest overlap between local spaces is no more than one level; therefore, the homogenization measure of an adaptive grid is always bounded by two.
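The symmetric dyadic gap sequence can be generated directly from the distance between two refinement nodes; a small sketch (ours) follows. The paper specifies only the exact symmetric case $N = 3(2^{\bar n}) - 2$, so the unit-gap padding used here for other distances is our own simplification.

```python
import math

def dyadic_gaps(distance):
    """Symmetric dyadic gap sequence spanning `distance` fine-grid cells
    between two refinement nodes in V^L, e.g. 10 -> [1, 2, 4, 2, 1]."""
    # the largest admissible gap 2**nbar satisfies 3 * 2**nbar - 2 <= distance
    nbar = int(math.floor(math.log2((distance + 2) / 3)))
    gaps = ([2**i for i in range(nbar)] + [2**nbar]
            + [2**i for i in reversed(range(nbar))])
    gaps += [1] * (distance - sum(gaps))   # unit-gap padding (our simplification)
    return gaps

print(dyadic_gaps(10))   # [1, 2, 4, 2, 1]
```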
The next task is to show how well $G^{\cup k}$ represents $f$ when compared to other multiresolution methods akin to our balanced dual-transform method. A function $f$ proposed by Chan [4] is illustrated in Figure 4 (top). A coarser version $\tilde f$ on $G^{\cup k}$ with five levels is shown in the middle subfigure, and the absolute error between $f$ and $\tilde f$ is plotted in the bottom subfigure. The original function $f \in V^L$ is uniformly sampled using 1,024 points. The application of Algorithm 2.3 to $f$ returns all associated jump pairs satisfying $|d^L| > 0.02$. The resulting grid is then balanced using dyadic sequences, and the resulting adaptive grid represents $\tilde f$ using only 170 points. Projecting from $G^{\cup k}$ to $V^L$, our balanced dual transform achieves $|f - \tilde f|_\infty \approx 4 \times 10^{-14}$ on Chan's function.

Figure 4. (top) Chan's original $f$; (middle) $\tilde f$ using five levels; (bottom) absolute error.

Utilizing the 170 points as a baseline for comparison, we repeat the approximation $|f - \tilde f|_\infty$ using the various multiresolution methods listed in Table 1.

Traditional Wavelets      ENO Type Wavelets      Harten Type Schemes
Haar            1.E+00    ENO-Haar     8.E-02    ENO-SR (pts)     1.E-07
DB4             1.E+00    DB4-ENO      1.E-04    ENO-SR (cell)    1.E-09
DB6             1.E+00    DB6-ENO      2.E-06    Dual Transform   4.E-14

Table 1. Comparing the dual transform to wavelets, ENO-wavelets and ENO-SR.

Using classical wavelets (i.e., Haar and Daubechies' fourth- and sixth-order [6]) results in large approximation errors due to Gibbs effects near jumps (first column). These errors can be reduced by incorporating ENO-type function extensions to the scaling (or wavelet) coefficients near the jumps, as proposed by Chan [4]. These extensions reduce the approximation errors according to the accuracy of the method used to extend $f$ across the jumps. This can be seen in the second column of Table 1, where Haar-ENO is essentially first-order accurate, while DB4-ENO is fourth-order and DB6-ENO is sixth-order accurate. The third column lists the approximations achieved by Harten's methods: the first ENO-SR method is based on point values, while the second is based on cell-averaged values. Both use single-transform methods and do very well, but our dual transform outperforms both. By bounding the discontinuity (or large gradient), our dual-transform method is able to reconstruct Chan's function up to the discretization level. In Section 3, we extend these ideas to higher dimensions, where $G^{\cup k}$ becomes a balanced Kd-tree.

3. Adaptive Multilevel Refinement in $\mathbb{R}^n$

In Section 2, Harten's generalized multiresolution analysis in terms of linear operators was used to form an adaptive mesh refinement procedure in which large scale coefficients are treated as grid refinement points. This procedure, in terms of our dual transforms, is capable of bounding jumps by two grid points in $V^L$ so as to segment the adaptive grid into smooth disjoint subsets, which provide smooth regions where the MRA can reproduce $f$ without Gibbs effects. In this section, we extend those ideas to $\mathbb{R}^n$ and balance the resulting adaptive multilevel grid using a global Kd-tree structure.

3.1. Locating Jumps in $\mathbb{R}^n$. Utilizing the same construction as in Section 2.4, the discontinuities are assumed to have codimension one, the grid is locally dyadic, and jumps occur at non-dyadic points. Here, as before, we use Algorithm 2.3 to return two points that bound the jump point in $\mathbb{R}$. However, in $\mathbb{R}^n$ for $n > 1$, the locations of the detail coefficients are, in general, no longer collinear with the discontinuities, which makes the correspondence between the norm of the detail coefficients and the jump locations no longer one-to-one. This issue can be circumvented by using $n$ orthogonal one-dimensional transforms to locate the discontinuity in $\mathbb{R}^n$, taking advantage of the coordinate direction that produced each detail coefficient.

Given $\Omega$, a tiling is constructed in terms of cells $C^k$. Each cell contains $2^n$ vertices; assigned to each vertex is an element of $v^k$, and each edge in $C^k$ is assigned a detail coefficient. However, because $d^k \notin V^k$, a one-to-one correspondence between edges and details is not immediate. This lack of detail coefficients can be circumvented by using multiple orthogonal transforms, and in this way we can achieve a correspondence between edges and details. These $n$ transforms encode the projection error along the standard basis $\{e_1, \dots, e_n\}$ of $\mathbb{R}^n$, serving as an error flux across cell edges.

Suppose $f : \mathbb{R}^2 \to \mathbb{R}$ is a discontinuous surface and $f|_{e_1}$ is a multi-vector representation of $f$ sorted according to direction $e_1$. Applying Algorithm 2.3 to $f|_{e_1}$ labels all points corresponding to cell edges parallel to $e_1$ whose detail coefficients satisfy $|d^k_j|_{e_1}| > \tau/2^k$ for some predefined tolerance $\tau$. Repeating the same procedure along $e_2$ labels all points corresponding to cell edges parallel to $e_2$. Note that we retain the detail coefficients $d^k_{e_1}$ and $d^k_{e_2}$, and hence each cell edge can be assigned a detail coefficient depending on its orientation. Because any codimension-one discontinuity passing through $C^k \subset \mathbb{R}^n$ must cross at least one edge, $C^k$ will contain some $|d^k_{e_i}| > \tau/2^k$. Therefore, only one $d^k_j|_{e_i} \in C^k$ is required to determine whether the cell contains a discontinuity.
This simplifies our labeling strategy by eliminating the need to store all $d^k_j|_{e_i}$, as with the locally dyadic spaces of Section 2.2, where $G^{\cup k}$ did not carry a full set of $d^k$. Thus, each node containing a discontinuity can be labeled by marking a single node on a dual grid [19] whenever $2^k |d^k_{e_i}| > \tau$. Figure 5a illustrates this grid/dual-grid construct in $\mathbb{R}^2$, and Figure 5b depicts the large-$d^k$ directions together with the associated jump curve.

Figure 5. Cells $C^k$ (•), smooth dual-cells $\tilde C^k$, and discontinuous dual-cells; (a) grid vs. dual grid, (b) with the associated jump curve.

In general, given a discontinuous surface $f$, all edges in $C^k$ intersected by a codimension-one discontinuity can be localized, and any one of those edges can be used to mark $\tilde C^k$ as discontinuous. This is performed by applying Algorithm 2.3 to the array $f|_{e_i}$ $n$ times, thus encoding all discontinuities onto the dual grid $\cup_i \tilde C^k_i$, where $i$ enumerates the cells $C^k$ that tile $\Omega$. The identification of cells containing discontinuities is expressed in Algorithm 3.1.

Algorithm 3.1 (Locating Jump Boundaries in $\mathbb{R}^n$).
(1) Given $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}$,
(2) for $m = 1, \dots, n$, apply Algorithm 2.3 to the array $f|_{e_m}$;
(3) any edge returned marks the corresponding $\tilde C^k$ as active;
(4) return $\cup_i \tilde C^k_i$.

Our next task is to limit the number of cells used in smooth regions, akin to the procedure used in Section 2.5 to increase the gap size away from refinement points in $\mathbb{R}$. Thus, in $\mathbb{R}^n$, the size of the cells that tile $\Omega$ should increase with distance from nodes that have an active $\tilde C^k$ element. This coarsening procedure, which uses the smoothness information provided by Algorithm 3.1, is explored in Section 3.2.

3.2. Cell Based Coarsening. Section 3.1 detailed our method for locating jumps and labeling the cells that contain discontinuities. In this section, we coarsen the domain using the information in each $\tilde C^k$ and add a constraint on the largest size a neighboring cell can have, based on the homogenization techniques introduced in Section 2.1. The resulting adaptive grid $C^{\cup k}$ is constructed using a balanced quadtree structure, which bounds the growth of the wavenumbers across neighboring cells [3].

We construct the adaptive grid by reinterpreting Algorithm 3.1, as applied to $\Omega \subset \mathbb{R}^2$, as a wavelet packet, so that all subbands form a quadtree and individual subbands are in one-to-one correspondence with rectangular regions in wavenumber space. This provides the tiling procedure with a global dyadic structure while using only one-dimensional transforms to provide local smoothness information about the location of jumps. Note that although we present the quadtree construction in $\mathbb{R}^2$, the tree structure can be extended to three dimensions, becoming an octree, or more generally a Kd-tree in $K$ dimensions [18]. Moreover, the balanced tree structure enables us to construct an adaptive grid $C^{\cup k}$ in $\mathbb{R}^n$ that is locally dyadic, such that the MRA structure is preserved locally. Algorithm 3.2 summarizes the construction of a balanced tree using the detail coefficients associated with the domain $\Omega$.

Algorithm 3.2 (Constructing an Adaptive Grid).
(1) Given $\Omega \subset \mathbb{R}^n$, apply Algorithm 3.1 to $\Omega$;
(2) construct an $n$d-tree $T$ using $\cup_i \tilde C^k_i$;
(3) return $\tilde C^{\cup k}$, a balanced version of $T$;
(4) the nodes in $T$ form $C^{\cup k}$.

To construct an adaptive grid $C^{\cup k}$, we need to address the construction and the balancing of the tree structure $T$.
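Before turning to the tree structure, the cell-flagging step that supplies its leaves can be sketched as follows. This is our illustration of Algorithm 3.1 in $\mathbb{R}^2$, assuming point values stored in a two-dimensional array; for brevity, the directional passes evaluate the midpoint predictor at every interior vertex rather than only on the odd branch of Algorithm 2.3.

```python
import numpy as np

def flag_cells(F, tau):
    """Sketch of Algorithm 3.1 in R^2: F holds point values on a uniform vertex
    grid; a cell is flagged when one of its vertices carries a large directional
    detail coefficient in either the e1 (row) or e2 (column) pass."""
    ny, nx = F.shape
    big = np.zeros((ny, nx), dtype=bool)
    # pass along e1: midpoint-predictor details at interior vertices of each row
    big[:, 1:-1] |= np.abs(F[:, 1:-1] - 0.5 * (F[:, :-2] + F[:, 2:])) > tau
    # pass along e2: the same predictor applied column-wise
    big[1:-1, :] |= np.abs(F[1:-1, :] - 0.5 * (F[:-2, :] + F[2:, :])) > tau
    # a cell C^k is active if any of its four vertices was flagged
    return big[:-1, :-1] | big[1:, :-1] | big[:-1, 1:] | big[1:, 1:]

# a straight codimension-one jump through the unit square
x = np.linspace(0.0, 1.0, 33)
X, Y = np.meshgrid(x, x)
F = np.where(X + 0.3 * Y < 0.5, 0.0, 1.0)
print(flag_cells(F, tau=0.1).sum(), "of", 32 * 32, "cells flagged")
```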
In Section 3.3, we define a quadtree structure $T$ in $\mathbb{R}^2$ in terms of domain subdivision, following the work of Samet [17, 18] and the exposition in [2]. In Section 3.4, we redefine the quadtree scheme in terms of an indexing scheme tailored to Algorithm 3.2. The general Kd-tree structure follows by extending the subdivision scheme from four children in $\mathbb{R}^2$ to $2^n$ in $\mathbb{R}^n$.

3.3. Quadtrees. A quadtree is a rooted tree in which each node has four children in $\mathbb{R}^2$, and each node represents a segment of the domain $\Omega$. It was developed by Finkel and Bentley in 1974 [10]. The quadtree data structure is capable of representing different types of data, including points, lines, curves and tiles [17]; however, we limit the exposition to partitions of $\mathbb{R}^2$ obtained by decomposing the region into four equal quadrants. Figure 6 depicts a balanced quadtree with four levels; note that neighboring tiles never differ by more than a factor of two, a neighboring tile being defined as any node (or nodes) sharing a common edge.

Figure 6. Four-level quadtree.

In addition to the segmentation of $\Omega$ by recursive local dyadic tiling, we can use the quadtree structure to represent grid points. Note that a quadtree node is similar to a binary tree node, where the left and right tags in $\mathbb{R}$ are replaced by north-west, north-east, south-west and south-east tags in $\mathbb{R}^2$. To add $x$ and $y$ coordinates, an additional key is added, so that the usual data structure requires seven fields: $NW$, $NE$, $SW$, $SE$, $x$, $y$ and an associated name or function value.

Building an unbalanced quadtree for point-based data via recursion is accomplished by subdividing $\Omega$ into four initial quadrants. Each quadrant containing data points is again subdivided, until each subquadrant contains only a single point. The depth $k$ of the associated tree will be at most $\log(s/c) + 3/2$, where $c$ is the minimum distance between data points and $s$ is the side length of the domain $\Omega$ [18]. For our application, the data points represent nodes in $\tilde C^k$, the domain length is $2^L$, and the tree depth corresponds to the local MRA level $k$.

Besides the local dyadic segmentation of the domain, which divides subbands in correspondence with tiles in wavenumber space, quadtrees are advantageous in terms of their computational complexity. Theorem 3.3 details the computational cost of building an unbalanced quadtree and the subsequent cost of balancing it. The balancing of the tree $T$ is done recursively by comparing the depths of neighboring nodes; new approaches to finding nearest neighbors in a computationally efficient manner are still being researched [17]. We define a shift-based variant in Section 3.4 that is tailored to the multilevel refinement method introduced in Section 3.2.

Theorem 3.3 (Quadtree Construction and Balancing [2]). A quadtree of depth $k$ storing a set of $N$ points has $O((k + 1)N)$ nodes and can be constructed in $O((k + 1)N)$ time. Moreover, a quadtree $T$ with $O(M)$ nodes can be balanced in $O((k + 1)M)$ time.
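A minimal sketch (ours) of the recursive region-quadtree construction described above: the domain is split into four equal quadrants until each leaf holds at most one data point. Field and function names are illustrative, points are assumed distinct and inside the half-open square, and only the unbalanced construction is shown.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class QuadNode:
    """A region-quadtree node: the square it covers, its four children
    (NW, NE, SW, SE) once subdivided, and the data point it holds as a leaf."""
    x0: float
    y0: float
    size: float
    children: Optional[List["QuadNode"]] = None
    point: Optional[Point] = None

def build(node: QuadNode, points: List[Point]) -> QuadNode:
    """Subdivide into four equal quadrants until each leaf holds at most one
    data point (points are assumed distinct and inside the half-open square)."""
    if len(points) <= 1:
        node.point = points[0] if points else None
        return node
    h = node.size / 2.0
    node.children = [QuadNode(node.x0,     node.y0 + h, h),   # NW
                     QuadNode(node.x0 + h, node.y0 + h, h),   # NE
                     QuadNode(node.x0,     node.y0,     h),   # SW
                     QuadNode(node.x0 + h, node.y0,     h)]   # SE
    for child in node.children:
        inside = [p for p in points
                  if child.x0 <= p[0] < child.x0 + h and child.y0 <= p[1] < child.y0 + h]
        build(child, inside)
    return node

root = build(QuadNode(0.0, 0.0, 1.0), [(0.1, 0.2), (0.9, 0.8), (0.6, 0.55)])
```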
3.4. Index Based Quadtree. In Section 3.3, the recursive version of the quadtree was constructed by searching for data points in each subquadrant. However, for our application the data-point locations are known a priori, because they are encoded in $\tilde C^k$. Thus there is no reason to search the domain twice, as Algorithm 3.1 has already coordinatized the cells containing discontinuities (or large gradients). What follows is a method by which to compute a balanced quadtree based on $\tilde C^k$.

We replace the $NW$, $SW$, $SE$ and $NE$ labels with $\{0, 1, 2, 3\}$ and add to these the tag $\{1, 0\}$ depending on whether or not the node has children. So the children of node 1 are represented by the index set $\{10, 11, 12, 13\}$. In this way, the entire tree can be labeled by digits in $[0, 3]$ in $\mathbb{R}^2$, or $[0, 2^n - 1]$ in $\mathbb{R}^n$. Because each index is extended by one digit each time the node is split, the number of digits in the index key defines the level; for example, the index $\{2011\}$ defines a node at level four. Moreover, the longest index key defines the width of the domain. This scheme also gives us a direct method by which to compute grid coordinates, owing to the direct link between the indexing scheme and the level and width of the domain. Coordinates can be found by using the following atlas: $\{0\} = (0, w)$, $\{1\} = (0, 0)$, $\{2\} = (w, 0)$ and $\{3\} = (w, w)$, where $w$ is the width of the domain and the digit at depth $m$ contributes its atlas offset scaled by $2^{-m}$. For example, the index $\{2013\}$ gives $(w, 0)/2 + (0, w)/4 + (0, 0)/8 + (w, w)/16 = (9w/16, 5w/16) = (9, 5)$, because $w = 2^4$. This coordinate decomposition is thus dyadic, and it has the added benefit that ascending the tree is trivial: dropping the right-most digit gives the parent, so the parent of $\{2013\}$ is $\{201\}$. This means that once the location of a jump has been localized by $\tilde C^k$ and transferred into tree-index notation, all associated parent nodes can be found by deleting the right-most digit until only a single digit is left. Hence, building an unbalanced quadtree can be done directly via the atlas and the indexing scheme.

Algorithm 3.4 (Unbalanced Tree).
(1) Given the domain size and the jump coordinates,
(2) compute the associated jump indexes via the atlas;
(3) while the index size is greater than one:
(4) eliminate repeated jump indexes;
(5) find parents by dropping the right-most digit.

Once the unbalanced version of the tree is produced, it can be balanced by adding nodes to non-conforming regions using the same recursive method discussed in Section 3.3.
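The indexing scheme and Algorithm 3.4 can be sketched as follows, under the atlas convention reconstructed above (the digit at depth $m$ contributes its atlas offset scaled by $2^{-m}$); this is our illustration, the names are not from the paper, and the balancing step is omitted.

```python
def coords(index, w):
    """Coordinates of a quadtree index under the atlas of Section 3.4
    (0 = NW, 1 = SW, 2 = SE, 3 = NE): the digit at depth m contributes
    its atlas offset scaled by 1 / 2**m."""
    atlas = {"0": (0.0, 1.0), "1": (0.0, 0.0), "2": (1.0, 0.0), "3": (1.0, 1.0)}
    x = y = 0.0
    for m, digit in enumerate(index, start=1):
        ax, ay = atlas[digit]
        x += ax * w / 2**m
        y += ay * w / 2**m
    return x, y

def ancestors(index):
    """All parent indexes, obtained by repeatedly dropping the right-most digit."""
    return [index[:m] for m in range(len(index) - 1, 0, -1)]

def unbalanced_tree(jump_indexes):
    """Sketch of Algorithm 3.4: leaf indexes plus every ancestor, deduplicated."""
    nodes = set(jump_indexes)
    for idx in jump_indexes:
        nodes.update(ancestors(idx))
    return sorted(nodes, key=lambda s: (len(s), s))

print(coords("2013", w=2**4))               # (9.0, 5.0) under this convention
print(unbalanced_tree(["2013", "2011"]))    # ['2', '20', '201', '2011', '2013']
```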
Figure 7. Comparing detail coefficients to the associated quadtree tiling of $f$: (a) fine-scale surface $f \in C^L$; (b) plot of all associated $d^k_j$; (c) three-level balanced quadtree via $d^L_J$.

3.5. Balancing and Coarsening. In this section, we use Algorithm 3.1 to locate cells that enclose a codimension-one discontinuity on the piecewise-smooth surface illustrated in Figure 7a. An important point to note, in terms of Figure 7b, is that the coefficients $d^k_j \in \tilde C^{\cup k}$ are not smoothly distributed over the surface. As a result, there are large deviations between the horizontal and vertical spacings of the nodes. Any direct tiling in correspondence with $\tilde C^{\cup k}$ will produce large spurious modes, due to the violation of the homogenization constraints, and hence will distort the reconstruction of $f$ in terms of the sparse version $\tilde f$.

By applying Algorithm 3.2, the jump information encoded in $\tilde C^{\cup k}$ is used to construct $d^L_J \in \tilde C^L$ (illustrated in Figure 7c as solid points). These nodes are encoded into a quadtree and balanced to produce the adapted grid in Figure 7c (open circles). Note that the balanced three-level tree tiles the domain and maintains the locally dyadic constraint; hence, no large spurious modes will be generated and $f$ will be well reconstructed on the grid $C^{\cup k}$.

Our last example, Figure 8, illustrates our adaptive grid scheme applied to a more complicated set $\tilde C^L$. In this case, $\tilde C^L$ is chosen to be Sierpinski's triangle, sampled on a $256 \times 256$ grid. The number of elements in the refinement set is given by $3^n\bigl((2^L/2^n)^2 + 2^L/2^n\bigr)/2$ for domain size $2^L$ and iterate $n = 0, \dots, L$. Iterating Sierpinski's map to unit cell length (i.e., $n = 8$) results in 6581 refinement points (or about 10% fill). We apply Algorithm 3.4 to the jump coordinates $\tilde C^L$, and the resulting balanced dyadic grid $C^{\cup k}$ is illustrated in Figure 8.

Figure 8. Balanced grid with refinement points sampled from Sierpinski's triangle.

Conclusions

We introduced a mesh refinement strategy based on Harten's multiresolution analysis. The MRA was shown to locate the points that bound discontinuities in $\mathbb{R}$, and, by using multiple one-dimensional transforms in $\mathbb{R}^n$, it was extended to identify cells that contain a codimension-one discontinuity. These refinement cells were then used to form a balanced Kd-tree whose nodes constitute an adaptive mesh whose density increases in the vicinity of a discontinuity. In a forthcoming issue, we will show how this mesh refinement scheme can be coupled to radial basis functions to provide a framework for building adaptive PDE solvers on Kd-trees.

References

[1] F. Aràndiga, R. Donat; Nonlinear Multiscale Decompositions: the approach of A. Harten, J. Num. Alg. 23, 175–216, 2000.
[2] M. de Berg, M. van Kreveld, M. Overmars, O. Schwarzkopf; Computational Geometry: algorithms and applications, Chap. 14, second ed., Springer-Verlag, Berlin, 2000.
[3] M. Berger, P. Colella; Local Adaptive Mesh Refinement for Shock Hydrodynamics, J. Comput. Phys. 53, 64–84, 1984.
[4] T. Chan, H. Zhou; ENO-Wavelet Transforms for Piecewise Smooth Functions, SIAM J. Num. Anal. 40, 1369–1404, 2002.
[5] W. Dahmen; Stability of Multiscale Transforms, J. Fourier Anal. Appl. 2, 341–361, 1996.
[6] I. Daubechies; Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
[7] I. Daubechies, W. Sweldens; Factoring Wavelet Transforms into Lifting Steps, J. Fourier Anal. Appl. 4, 245–267, 1998.
[8] I. Daubechies, I. Guskov, W. Sweldens; Regularity of Irregular Subdivision, Constructive Approx. 15, 381–426, 1999.
[9] D. Donoho; Interpolating Wavelet Transforms, Tech. Report 408, Dept. of Statistics, Stanford University, 1992.
[10] R. Finkel, J. Bentley; Quad Trees: a data structure for retrieval on composite keys, Acta Inform. 4, 1–9, 1974.
[11] A. Harten; ENO Scheme with Subcell Resolution, J. Comput. Phys. 83, 148–184, 1989.
[12] A. Harten; Adaptive Multiresolution Schemes for Shock Computations, J. Comput. Phys. 115, 319–338, 1994.
[13] A. Harten; Discrete Multiresolution Analysis and Generalized Wavelets, Appl. Num. Math. 12, 153–192, 1993.
[14] A. Harten, S. Osher, B. Engquist, S. Chakravarthy; Some Results on Uniformly High-order Accurate Essentially Nonoscillatory Schemes, Appl. Num. Math. 2, 347–377, 1986.
[15] M. Jansen, P. Oonincx; Second Generation Wavelets and Applications, Springer-Verlag, London, 2005.
[16] P. Oswald; A Stable Subspace Splitting for Sobolev Space and Domain Decomposition Algorithm, Proc. 7th Int. Conf. on Domain Decomp. 180, 87–98, 1994.
[17] H. Samet; Foundations of Multidimensional and Metric Data Structures, Chap. 3, Morgan-Kaufmann, San Francisco, 2006.
[18] H. Samet; The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA, 1990.
[19] S. Schaefer, J. Warren; Dual Marching Cubes: primal contouring of dual grids, IEEE Proc. of Pacific Graphics 2004, 24, 195–201, 2005.
[20] F. Schröder-Pander, T. Sonar, O. Friedrich; Generalized Multiresolution Analysis on Unstructured Grids, Num. Math. 86, 685–715, 2000.
[21] J. Simoens, S. Vandewalle; On the Stability of Wavelet Bases in the Lifting Scheme, TW Report 306, Dept. of Computer Science, Katholieke Universiteit Leuven, Belgium, 2000.
[22] G. Strang, T. Nguyen; Wavelets and Filter Banks, Wellesley-Cambridge Press, Wellesley, 1996.
[23] W. Sweldens; The Lifting Scheme: a new philosophy in biorthogonal wavelet construction, Proc. SPIE 2569, 68–79, 1995.
[24] W. Sweldens; The Lifting Scheme: a construction of second generation wavelets, SIAM J. Math. Anal. 29, 511–546, 1997.

Alfonso Limon
School of Mathematical Sciences, Claremont Graduate University, CA 91711, USA
E-mail address: alfonso.limon@cgu.edu

Hedley Morris
School of Mathematical Sciences, Claremont Graduate University, CA 91711, USA
E-mail address: hedley.morris@cgu.edu