Parameter Estimations for
Direct Optimization Grid-Generation
Using the Support Operator Method
Jose Castillo
Terrence McGuinness
Computational Science Research Center, San Diego State University
Abstract. This paper incorporates variational, geometry- and solution-adaptive grid-generation technology with a new class of finite-volume methods to numerically solve general elliptic partial differential equations. Current numerical grid-generation technology provides a powerful tool for adapting grids to complex geometries. The support-operators finite-volume methods have been proven effective in solving problems on general grids. The main goal of this research is to combine these two technologies to produce algorithms for fast and accurate solutions on non-trivial geometries.
INTRODUCTION
Whenever discrete points over a physical domain are free to move, better numerical results using finite difference schemes can be achieved by selecting the most appropriate grid. The direct optimization grid-generation method developed by Castillo requires a set of parameters to generate such grids. The support operator method for numerically approximating the solutions of partial differential equations will be integrated with the direct optimization method for grid generation to achieve such results. To do so, we must be confident that the most appropriate parameters have been selected. In this paper we show how such a selection process can be automated, so that comparisons of different techniques within the method can be assessed. It has been shown that the support operator method can solve diffusion problems on structured (logically rectangular) non-orthogonal grids with second-order convergence rates and is exact for smooth (piecewise linear) grids [3]. Direct optimization methods for grid generation produce smooth grids on non-trivial geometries by controlling uniformity of length, area, and orthogonality [2]. The direct optimization method can also incorporate solution adaptation as part of this control. As a result, combining these two technologies can generate accurate and stable finite difference schemes for solving difficult numerical problems on non-trivial domains. Moreover, the best numerical results are obtained when the most appropriate parameters for the numerically generated grid are chosen.
1 Direct Optimization Grid-Generation Method
The direct optimization grid-generation method produces grids of higher quality (see Steinberg and Roache, Symmetric Operators in General Coordinates, Technical Report, University of New Mexico, Albuquerque, NM, 1991, for metrics on quality measures) by minimizing the variation in length, area, and orthogonality within each of the cells. The points of the grid are defined as

z = (x, y), where x = [x_ij], y = [y_ij], for 1 ≤ i ≤ I, 1 ≤ j ≤ J,
which also correspond to the discrete variables in the
domain of the functional that is to be minimized. In
general, this functional is a linear combination of three
functionals
min_z  α F_L(z) + β F_A(z) + γ F_O(z),   (1.0)

for α, β, γ ≥ 0,
where

F_L(z) = (1/2) Σ_{E_ij} [ (L^h_ij)² + (L^v_ij)² ]

is the length functional, with

(L^h_ij)² = (x_{i+1,j} − x_{i,j})² + (y_{i+1,j} − y_{i,j})²
representing the horizontal edges, and
F_A(z) = Σ_{C_ij} A_ij², with

A_ij = (1/2) [ (x_{i,j} − x_{i+1,j+1})(y_{i+1,j} − y_{i,j+1}) − (y_{i,j} − y_{i+1,j+1})(x_{i+1,j} − x_{i,j+1}) ]

as the area functional, and

F_O(z) = Σ_{C_ij} O_ij, with O_ij = Σ_{k=1}^{4} (Q^(k)_ij)², Q^(k)_ij = ⟨u_k, v_k⟩,

which represents the orthogonality functional, with u_k and v_k being the edges meeting at each corner of the cell.
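As a concrete illustration, the three functionals can be sketched in NumPy for a logically rectangular grid. This is a hypothetical re-implementation (the paper's actual code is written in FORTRAN 90), and the function names are ours:

```python
import numpy as np

def length_functional(x, y):
    # F_L: half the sum of squared horizontal and vertical edge lengths
    lh2 = (x[1:, :] - x[:-1, :])**2 + (y[1:, :] - y[:-1, :])**2
    lv2 = (x[:, 1:] - x[:, :-1])**2 + (y[:, 1:] - y[:, :-1])**2
    return 0.5 * (lh2.sum() + lv2.sum())

def area_functional(x, y):
    # F_A: sum of squared signed cell areas; A_ij is half the cross
    # product of the two cell diagonals, per the formula in the text
    a = 0.5 * ((x[:-1, :-1] - x[1:, 1:]) * (y[1:, :-1] - y[:-1, 1:])
               - (y[:-1, :-1] - y[1:, 1:]) * (x[1:, :-1] - x[:-1, 1:]))
    return (a**2).sum()

def orthogonality_functional(x, y):
    # F_O: sum over cells of the squared dot products of the two edges
    # meeting at each of the four corners
    total = 0.0
    m, n = x.shape
    for i in range(m - 1):
        for j in range(n - 1):
            c = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
            for k in range(4):
                p, q, r = c[k], c[(k + 1) % 4], c[(k - 1) % 4]
                u = (x[q] - x[p], y[q] - y[p])
                v = (x[r] - x[p], y[r] - y[p])
                total += (u[0] * v[0] + u[1] * v[1])**2
    return total

def objective(x, y, alpha, beta, gamma):
    # the linear combination minimized in (1.0)
    return (alpha * length_functional(x, y)
            + beta * area_functional(x, y)
            + gamma * orthogonality_functional(x, y))

# uniform 3x3 grid on the unit square: 12 edges of length 1/2,
# 4 cells of area 1/4, and all corner angles orthogonal
s = np.linspace(0.0, 1.0, 3)
x, y = np.meshgrid(s, s, indexing="ij")
print(length_functional(x, y))         # 0.5 * 12 * 0.25 = 1.5
print(area_functional(x, y))           # 4 * (1/4)^2 = 0.25
print(orthogonality_functional(x, y))  # 0.0 on an orthogonal grid
```

On a uniform orthogonal grid the orthogonality term vanishes and the other two reduce to simple sums, which makes such grids a convenient sanity check for any implementation of these functionals.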
Reference grids are introduced to provide additional flexibility and control over the outcome of the optimized grid [2]. The reference grids have the same dimensionality as the physical grids, and they scale the functionals by the corresponding nodal displacements. We introduce reference lengths l^h_ij and l^v_ij for the length functional, and reference areas a_ij for the individual cells in the area functional. The new weighted functionals then become

F_L(z) = (1/2) Σ_{E_ij} [ (L^h_ij)²/(l^h_ij)² + (L^v_ij)²/(l^v_ij)² ],   (1.1)

F_A(z) = Σ_{C_ij} A_ij²/a_ij².   (1.2)

The length functional in (1.1) can be further generalized so that the x and y components of the vertical and horizontal edges can be weighted independently, as

F_L(x, y) = Σ_{E_ij} [ (x_{i+1,j} − x_{i,j})²/(A₁² h_s²) + (x_{i,j+1} − x_{i,j})²/(B₁² h_t²) + (y_{i+1,j} − y_{i,j})²/(A₂² h_s²) + (y_{i,j+1} − y_{i,j})²/(B₂² h_t²) ].   (1.3)

Between the weights in the objective functional (1.0), α, β, and γ, and the weights of the generalized length functional (1.3), A₁², A₂², B₁², and B₂², there are a total of seven parameters, or settings, required to determine a particular grid of highest quality.

2 Support Operator Method

The support operator method works well on general grids because its formulation is based on the discretization of invariant differential operators such as the divergence, gradient, and curl. This is also true because no information on the structure of the grid is used in its formulation, only information on the individual cells. In addition, the discrete analogs of these differential operators are formed using conservation laws, giving rise to stable finite difference schemes. These discrete differential operators are determined by imposing the vector identities found in mathematical physics. This is done by an appropriate choice of one of the operators as a prime operator, and then constructing the other operator, referred to as the supported operator, from a particular vector identity. For example, for general elliptic problems we begin by discretizing the divergence operator. From this we construct the discrete analog of the gradient via an application of a discrete analog of the Divergence Theorem, which relates the two operators. A general elliptic problem is then formed by the composition of the divergence with the gradient,

div(K grad u) = f(x, y),

where K is a symmetric matrix. This is then numerically approximated by the composition of the discrete analogs of the two operators,

DIV(K GRAD U) = F(x, y).

We begin by introducing the conservation law that leads to the particular vector identity used to construct the supported operator from the prime operator. The Divergence Theorem is expressed by equating the rate of change of a conserved quantity contained within a fixed domain Ω to the flux of that quantity across the boundary of Ω, denoted ∂Ω. Letting F be the flux, we then have

∫_Ω div F dV = ∮_∂Ω ⟨F, n̂⟩ dS,

where n̂ is the outward normal to ∂Ω and dS is the surface element on ∂Ω. Letting F = φA, we have

div(φA) = φ div A + ⟨A, grad φ⟩,

so that, written in vector notation, the above conservation law becomes

∫_Ω φ div A dV + ∫_Ω ⟨A, grad φ⟩ dV = ∮_∂Ω φ ⟨A, n̂⟩ dS.

As a simplification we use systems with vanishing Dirichlet boundary conditions, so the above becomes

∫_Ω φ div A dV = − ∫_Ω ⟨A, grad φ⟩ dV.

A general elliptic equation is represented by the composition of the divergence and the gradient,

L φ = div(K grad φ) = f,

where L is the differential operator

L = div(K grad).

With u and v both being scalar fields, and A and B being vector fields defined on the closed region Ω, we define the inner products

⟨u, v⟩ = ∫_Ω u v dV  and  ⟨A, B⟩ = ∫_Ω ⟨A, B⟩ dV,

where the first inner product is defined on the space of scalar functions and the second on the space of vector functions. Then the inner product of v with L acting on u is

⟨Lu, v⟩ = ∫_Ω v div(K grad u) dV = − ∫_Ω ⟨K grad u, grad v⟩ dV;

hence

⟨Lu, v⟩ = ⟨u, Lv⟩  and  ⟨Lu, u⟩ ≤ 0,

if and only if

⟨div A, u⟩ = − ⟨A, grad u⟩.

Therefore the linear differential operator L is self-adjoint, and −L is symmetric positive definite. This is a direct consequence of the gradient being the negative adjoint of the divergence. If this negative-adjoint relationship is maintained between the discrete analogs of the divergence and the gradient, then the discrete analog of the L operator is also self-adjoint and, up to sign, symmetric positive definite. As a result, the matrix representation of the discrete approximation to the L operator is symmetric and diagonally dominant, which ensures a stable finite difference scheme.

3 SOM & DOM Code Implementation

A software package was written as part of this study and is available in three parts: the PDE solver (Hosted 2D) for the support operator method and the direct optimal grid generator, written in FORTRAN 90; a graphical user interface (Interface), written in Visual Basic; and a graphic viewer (Graph it), written in C++ using OpenGL. The graphical user interface allows the user to select the required settings, which are saved to a file that is read by the numerical code written in FORTRAN 90 (see FIGURE 1). The numerical code in turn generates data files for the graphic viewer to display. Each of the codes may be launched via the graphical user interface.

[Flow diagram: Interface (Visual Basic) → Settings → Hosted 2D (Fortran 90) → Solution Data → Graph it (C++/OpenGL)]

FIGURE 1. Software package flow for DOM and SOM.
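The negative-adjoint construction described in Section 2 can be sketched in one dimension. The following is a schematic NumPy illustration under simplifying assumptions (uniform cells, homogeneous Dirichlet data, plain-transpose adjoints), not the Hosted 2D implementation:

```python
import numpy as np

n, h = 8, 1.0 / 8                  # n uniform cells on [0, 1]

# prime operator: discrete divergence DIV, mapping the n+1 face
# fluxes to the n cell centers, (DIV W)_i = (W_{i+1} - W_i) / h
D = np.zeros((n, n + 1))
for i in range(n):
    D[i, i], D[i, i + 1] = -1.0 / h, 1.0 / h

# supported operator: the discrete gradient is forced to satisfy the
# negative-adjoint identity <DIV W, u> = -<W, GRAD u>; with uniform
# volumes this reduces to GRAD = -DIV^T (zero Dirichlet data built in)
G = -D.T

K = np.eye(n + 1)                  # scalar conductivity K = 1 at the faces
A = D @ K @ G                      # discrete analog of div(K grad)

# the discrete operator inherits self-adjointness, and -A is symmetric
# positive definite; row 0 of A is the familiar [−2, 1]/h^2 stencil
assert np.allclose(A, A.T)
assert np.min(np.linalg.eigvalsh(-A)) > 0.0
```

The point of the sketch is that symmetry and positive definiteness of the discrete operator are not checked after the fact; they follow automatically from defining GRAD through the adjoint relation rather than discretizing it independently.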
4 L2 Error Norm as Objective Functional

In order to produce a grid of highest quality for a particular problem using the direct optimization method, a set of seven parameters, or settings, S = (α, β, γ, A₁², A₂², B₁², B₂²), must be determined. To guarantee that the best grid is produced, the parameters S are chosen such that an objective function F : S → ℝ is a minimum. We construct F by a call to Hosted 2D with a particular set of parameters, or settings, and then calculating the L2 norm for the image of F from the solution data, as illustrated in FIGURE 1. The implementation of the support operator method used here evaluates scalar functions at the cell centers. As a result, a projection operator applied to the continuum functions is used to calculate the error between the approximated and the exact solutions,

(p_h u)_ij = u(x^c_ij, y^c_ij),

where x^c_ij, y^c_ij are the geometric centers of each cell and u is the exact solution. Then the L2 (mean-square) norm of the error is determined by

E_L2 = ‖U − p_h u‖_L2 = [ Σ_{i=1}^{M−1} Σ_{j=1}^{N−1} (U_ij − (p_h u)_ij)² VC_ij ]^{1/2},

where U is the solution of the finite-difference approximation and VC_ij is the volume of cell ij.
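The discrete error measure above can be sketched as follows (a hypothetical NumPy version; `l2_error` and the stand-in exact solution are our illustrative names, not the paper's code):

```python
import numpy as np

def l2_error(U, exact, xc, yc, vol):
    # project the exact solution to the cell centers, (p_h u)_ij,
    # then take the cell-volume-weighted root-sum-of-squares
    ph_u = exact(xc, yc)
    return np.sqrt((((U - ph_u)**2) * vol).sum())

# 4x4 uniform cells on the unit square, each of volume 1/16
n = 4
s = (np.arange(n) + 0.5) / n                # geometric cell centers
xc, yc = np.meshgrid(s, s, indexing="ij")
vol = np.full((n, n), 1.0 / n**2)
exact = lambda x, y: x + y                  # a stand-in exact solution
U = exact(xc, yc).copy()
U[0, 0] += 0.4                              # inject a known defect
print(l2_error(U, exact, xc, yc, vol))      # 0.4 * sqrt(1/16) ≈ 0.1
```

Weighting by the cell volumes VC_ij keeps the measure comparable across the non-uniform grids that the optimizer produces, which is why the plain unweighted sum of squares would be a poor objective here.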
5 Powell Algorithm for Determining Optimal Parameters

Powell's method, known as a direction set method for determining a minimum of a multidimensional objective function, is selected because it does not require the calculation of a gradient. The idea behind direction set methods is to determine a conjugate set of directions towards a local minimum by minimizing along each direction separately as a one-dimensional problem. Starting with an initial approximation to the minimum x_0, and letting u_1, ..., u_n be the columns of the identity matrix, a single iteration of the algorithm is as follows [1]:

1. For i = 1, ..., n, compute λ_i to minimize f(x_{i−1} + λ_i u_i), and define x_i = x_{i−1} + λ_i u_i.
2. For i = 1, ..., n−1, replace u_i by u_{i+1}.
3. Replace u_n by x_n − x_0.
4. Compute λ to minimize f(x_0 + λ u_n),

which is repeated until some stopping criterion is met. One limitation of the Powell method is that it will find a local minimum that depends on the starting point or direction. A notably inefficient approach used here was to determine multiple minima by using all the permutations of the identity matrix for the initial direction, and then choosing the minimum from the set of results. FIGURE 2 illustrates the 24 separate runs of the Powell method, one for each permutation of an initial direction, for determining the 4 optimal parameters A₁², A₂², B₁², and B₂². The y-axis represents the value of the L2 error for each call to the objective function, against the number of calls on the x-axis.

[FIGURE 2. 24 separate runs of Powell's Algorithm, one for each permutation of 4 parameters. The y-axis shows the L2 error (approximately 5.0 to 5.7); the x-axis shows the number of function calls (0 to 800).]

Conclusion

The parameters required for selecting a particular grid using the direct optimization method, for solving a specific numerical problem on general grids with the support operator method, were successfully determined by applying the Powell algorithm. An exhaustive approach of searching over all directions separately was made possible by concurrent programming techniques on a multiprocessor computer. This approach was used to ensure an optimal grid for comparative analysis within the theoretical framework of direct optimization grid methods.

ACKNOWLEDGMENTS

This work was performed at the computational science lab, made possible thanks to a Cooperative Research and Development Agreement (CRADA) with Compaq. The FORTRAN 90 code for the Powell method was borrowed from Press, W. H., et al., Numerical Recipes, Cambridge University Press, 1992.

REFERENCES

1. Brent, R. P., Algorithms for Minimization without Derivatives, Englewood Cliffs, NJ: Prentice-Hall (1973).
2. Castillo, J. E., A Practical Guide to Direct Optimization for Planar Grid Generation, 37, 123-156 (1999).
3. Shashkov, M., Conservative Finite-Difference Schemes on General Grids, Boca Raton, FL: CRC Press (1996).
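The restart-over-permutations strategy of Section 5 can be sketched as follows. This is a self-contained toy, not the Numerical Recipes code used in the study: `powell` here is a simplified direction-set loop with golden-section line searches, and the separable quadratic stands in for the L2-error objective:

```python
import itertools
import numpy as np

def line_min(f, x, u, lo=-5.0, hi=5.0, tol=1e-6):
    # golden-section search for the step a minimizing f(x + a*u)
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(x + c * u) < f(x + d * u):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

def powell(f, x0, dirs, iters=20):
    # direction-set iteration in the spirit of Powell's method:
    # minimize along each direction in turn, then discard the first
    # direction and append the net displacement x_n - x_0 of the sweep
    x = x0.copy()
    U = [u.copy() for u in dirs]
    for _ in range(iters):
        x_start = x.copy()
        for u in U:
            x = x + line_min(f, x, u) * u
        U = U[1:] + [x - x_start]
        if np.linalg.norm(x - x_start) < 1e-9:
            break
    return x

# toy objective standing in for the L2-error functional; minimum at (1,2,3,4)
target = np.array([1.0, 2.0, 3.0, 4.0])
f = lambda p: float(np.sum((p - target)**2))

# exhaustive restart: one run per permutation of the 4 identity
# directions (4! = 24 runs), keeping the best result found
best = min((powell(f, np.zeros(4), [np.eye(4)[i] for i in perm])
            for perm in itertools.permutations(range(4))),
           key=f)
print(np.round(best, 3))   # -> [1. 2. 3. 4.]
```

Because each permutation's run is independent, this restart loop parallelizes trivially, which is consistent with the concurrent multiprocessor approach described in the Conclusion.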