An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations Nicholas Zabaras and Xiang Ma Materials Process Design and Control Laboratory Sibley School of Mechanical and Aerospace Engineering 101 Frank H. T. Rhodes Hall Cornell University Ithaca, NY 14853-3801 Email: zabaras@cornell.edu URL: http://mpdc.mae.cornell.edu/ Symposium on `Numerical Methods for Stochastic Computation and Uncertainty Quantification' in the 2008 SIAM Annual Meeting, San Diego CA, July 7-11, 2008 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 1 Outline of the presentation o Stochastic analysis o Issues with gPC and ME-gPc o Collocation based approach o Basics of stochastic collocation o Issues with existing stochastic sparse grid collocation method (Smolyak algorithm) o Adaptive sparse grid collocation method o Some numerical examples CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 2 Motivation All physical systems have an inherent associated randomness SOURCES OF UNCERTAINTIES Process •Multiscale material information – inherently statistical in nature. Engineering component •Uncertainties in process conditions •Input data •Model formulation – approximations, assumptions. Heterogeneous random Microstructural features Why uncertainty modeling ? Assess product and process reliability. Estimate confidence level in model predictions. Identify relative sources of randomness. Control? Provide robust design solutions. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 3 Problem definition Define a complete probability space , F , P . We are interested to find a u : D such that for P-almost everywhere (a.e.) stochastic function the following equation hold: , L x, ; u f x, , x D B x, u g x , where x x1 , x2 , , xd are the coordinates in operator, and B is a boundary operators. x D d , L is (linear/nonlinear) differential In the most general case, the operator L and B as well as the driving terms f and g, can be assumed random. In general, we require an infinite number of random variables to completely characterize a stochastic process. This poses a numerical challenge in modeling uncertainty in physical quantities that have spatio-temporal variations, hence necessitating the need for a reduced-order representation. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 4 Representing Randomness Interpreting random variables as functions 1. Interpreting random variables 2. Distribution of the random variable Random variable x E.g. inlet velocity, inlet temperature MAP S Real line Sample space of elementary events Collection of all possible outcomes o 1 0.1x Each outcome is mapped to a corresponding real value A general stochastic process is a random field with variations along space and time – A function with domain (Ω, Τ, S) 3. Correlated data Ex. Presence of impurities, porosity Usually represented with a correlation function We specifically concentrate on this. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 5 Representing Randomness Karhunen-Loèvè expansion Based on the spectral decomposition of the covariance kernel of the stochastic process 1. Representation of random process - Karhunen-Loeve, Polynomial Chaos expansions 2. 
Infinite dimensions to finite dimensions Random process - depends on the covariance Mean Set of random variables to be found • Need to know covariance • Converges uniformly to any second order process Eigenpairs of covariance kernel Set the number of stochastic dimensions, N Dependence of variables Pose the (N+d) dimensional problem CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 6 Karhunen-Loeve expansion X ( x, t , ) X ( x, t ) X i ( x, t )Yi ( ) i 1 ON random variables Deterministic functions Stochastic Mean process function Deterministic functions ~ eigen-values, eigenvectors of the covariance function Orthonormal random variables ~ type of stochastic process In practice, we truncate (KL) to first N terms X ( x, t , ) fn( x, t , Y1 , CORNELL U N I V E R S I T Y , YN ) Materials Process Design and Control Laboratory 7 The finite-dimensional noise assumption By using the Doob-Dynkin lemma, the solution of the problem can be described by the same set of random variables, i.e. u x, u x, Y1 , , YN So the original problem can be restated as: Find the stochastic function u : D such that L x, Y; u f x, Y , x D B x, Y; u g x, Y , x D In this work, we assume that Yi i 1 are independent random variables with probability density function i . Let i be the image of Yi . Then N N Y i Yi , Y i 1 Y Y1 , is the joint probability density of , YN with support N i N i1 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 8 Uncertainty analysis techniques Monte-Carlo : Simple to implement, computationally expensive Perturbation, Neumann expansions : Limited to small fluctuations, tedious for higher order statistics Sensitivity analysis, method of moments : Probabilistic information is indirect, small fluctuations Spectral stochastic uncertainty representation: Basis in probability and functional analysis, can address second order stochastic processes, can handle large fluctuations, derivations are general Stochastic collocation: Results in decoupled equations CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 9 Spectral stochastic presentation A stochastic process = spatially, temporally varying random function X : ( x, t , ) CHOOSE APPROPRIATE BASIS FOR THE PROBABILITY SPACE HYPERGEOMETRIC ASKEY POLYNOMIALS PIECEWISE POLYNOMIALS (FE TYPE) GENERALIZED POLYNOMIAL CHAOS EXPANSION SUPPORT-SPACE REPRESENTATION SPECTRAL DECOMPOSITION KARHUNEN-LOÈVE EXPANSION COLLOCATION, MC (DELTA FUNCTIONS) SMOLYAK QUADRATURE, CUBATURE, LH CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 10 Generalized polynomial chaos (gPC) Generalized polynomial chaos expansion is used to represent the stochastic output in terms of the input X ( x, t , ) fn( x, t , Y1 , P , YN ) Stochastic input Z ( x, t , ) Z i ( x, t ) i (Y ( )) i 0 Stochastic output Askey polynomials in input Deterministic functions Askey polynomials ~ type of input stochastic process Usually, Hermite, Legendre, Jacobi etc. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 11 Generalized polynomial chaos (gPC) Advantages: 1) Fast convergence1. 2) Very successful in solving various problems in solid and fluid mechanics – well understood 2,3. 3) Applicable to most random distributions 4,5. 4) Theoretical basis easily extendable to other phenomena 1. 2. 3. 4. 5. P Spanos and R Ghanem, Stochastic finite elements: A spectral approach, Springer-Verlag (1991) D. Xiu and G. 
Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, Comput. Methods Appl. Mech. Engrg. 187 (2003) 137--167. S. Acharjee and N. Zabaras, Uncertainty propagation in finite deformations -- A spectral stochastic Lagrangian approach, Comput. Methods Appl. Mech. Engrg. 195 (2006) 2289--2312. D. Xiu and G. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comp. 24 (2002) 619-644. X. Wan and G.E. Karniadakis, Beyond Wiener-Askey expansions: Handling arbitrary PDFs, SIAM J Sci Comp 28(3) (2006) 455-464. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 12 Generalized polynomial chaos (gPC) Disadvantages: 1) Coupled nature of the formulation makes implementation nontrivial1. 2) Substantial effort into coding the stochastics: Intrusive 3) As the order of expansion (p) is increased, the number of unknowns (P) increases factorially with the stochastic dimension (N): combinatorial explosion2,3. P 1 N p ! N ! p! 4) Furthermore, the curse of dimensionality makes this approach particularly difficult for large stochastic dimensions 1. 2. 3. B. Ganapathysubramaniana and N. Zabaras, Sparse grid collocation schemes for stochastic natural convection problems, J. Comp Physics, 2007, doi:10.1016/j.jcp.2006.12.014 D Xiu, D Lucor, C.-H. Su, and G Karniadakis, Performance evaluation of generalized polynomial chaos, P.M.A. Sloot et al. (Eds.): ICCS 2003, LNCS 2660 (2003) 346-354 B. J. Debusschere, H. N. Najm, P. P. Pebray, O. M. Knio, R. G. Ghanem and O. P. Le Maître, Numerical Challenges in the Use of Polynomial Chaos Representations for Stochastic Processes, SIAM Journal of Scientific Computing 26(2) (2004) 698-719. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 13 Failure of gPC Gibbs phenomenon: Due to its global support, it fails to capture the stochastic discontinuity in the stochastic space. 0.25 0 0.25 0.5 X 0.75 1 9.3E-07 1.4E-06 1.9E-06 2.4E-06 2.9E-06 0.75 Y 0.5 0 -6.4E-08 4.3E-07 1 Y-velocity 0.75 Y Capture unsteady equilibrium in natural convection X-velocity 9.2E-07 5.7E-06 1.0E-05 1.5E-05 2.0E-05 2.5E-05 3.0E-05 3.4E-05 1 0.5 0.25 0 0 0.25 0.5 0.75 1 X Therefore, we need some method which can resolve the discontinuity locally. This motivates the following Multi - element gPC method. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 14 3.4E-06 Multi-element generalized polynomial chaos (ME-gPC) Basic concept: Decompose the space of random inputs into small elements. In each element of the support space, we generate a new random variable and apply gPCE. The global PDF is also simultaneously discretized. Thus, the global PC basis is not orthogonal with the local PDF in a random element. We need to reconstruct the orthogonal basis numerically in each random element. The only exception is the uniform distribution whose PDF is a constant. We can also decompose the random space adaptively to increase its efficiency. 1. X. Wan and G.E. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, SIAM J Sci Comp 209 (2006) 901-928. 
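To make the factorial growth noted above concrete, the number of gPC expansion terms P + 1 = (N + p)!/(N! p!) can be tabulated directly. A minimal illustrative snippet (the function name is ours, not from any gPC library):

```cpp
#include <cstdio>

// Number of gPC terms: P + 1 = (N + p)! / (N! p!) = C(N + p, p).
// The running product C(N + k, k) is always an integer, so the division is exact.
static unsigned long long gpcTerms(unsigned N, unsigned p) {
    unsigned long long count = 1;
    for (unsigned k = 1; k <= p; ++k)
        count = count * (N + k) / k;
    return count;
}

int main() {
    // Growth of the coupled gPC system size with stochastic dimension N at fixed order p = 4.
    for (unsigned N : {2u, 5u, 10u, 20u, 50u, 100u})
        std::printf("N = %3u, p = 4  ->  %llu terms\n", N, gpcTerms(N, 4));
}
```

At a fixed order p = 4 the count grows from 15 terms at N = 2 to roughly 4.6 million terms at N = 100, which is the combinatorial explosion that motivates the collocation approach discussed next.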
CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 15 Decomposition of the random space We define a decomposition B of as k [ak ,1 , bk ,1 ) [ak ,2 , bk ,2 ) B Mk1 k k1 k2 [ak ,d , bk ,d ] where k1 , k2 1, 2 , M Based on the decomposition B , we define the following indicator random variables: Y k , 1 k 1, 2 , M I k 0 otherwise Then a local random vector Yi is defined in each element i subject to a conditional PDF fˆk (Yk | I k 1) where CORNELL U N I V E R S I T Y f (Yk ) Pr( I k 1) J k Pr( I Bk 1) 0 Materials Process Design and Control Laboratory 16 An adaptive procedure Let us assume that the gPC expansion of random field in element k is Np uˆk (ξ k ) uˆk ,i k ,i (ξ k ) i 0 From the orthogonality of gPC, the local variance approximated by PC with order p is given by Np 2 k , p uˆk2,i i2 i 1 The approximate global mean u and variance 2 can be expressed by N N u uˆk ,0 Pr( I Bk 1), [ k2, p (uˆk ,0 u ) 2 ]Pr( I B 1) 2 k 1 k 1 k Let k denote the error of local variance. We can write the exact global variance as N 2 2 k Pr( I B 1) k 1 CORNELL U N I V E R S I T Y k Materials Process Design and Control Laboratory 17 An adaptive procedure We define the local decay rate of relative error of the gPC approximation in each element as follows: k 2 k, p k2, p 2 k , p 1 Np i2 uˆ 2 i N p1 1 k ,i k2, p Based on k and the scaled parameter Pr( I B 1) , a random element will be selected for potential refinement when the following condition is satisfied k k Pr( I B 1) 1, k 0 1 where and 1 are prescribed constants. Furthermore, in the selected random element k , we use another threshold parameter 2 to choose the most sensitive random dimension. We define the sensitivity of each random dimension as (uˆi , p ) 2 i2, p ri N , i 1, 2, , d , 2 2 j N 1 uˆ j j where we drop the subscript k for clarity and the subscript i , p denotes the mode consisting only of random dimension Yi with polynomial order p p 1 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 18 p An adaptive procedure All random dimensions which satisfy ri 2 max rj , j 1, d 0 2 1, i 1, 2, , d, will be split into two equal random elements in the next time step while all other random dimensions will remain unchanged. If Yk corresponds to element [a, b] in the original random space [1,1] , the ab ab element [a, 2 ] and [ 2 , b] will be generated in the next level if the two criteria are both satisfied. Through this adaptive procedure, we can reduce the total element number while gaining the same accuracy and efficiency. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 19 An Example: K-O problem So it can successfully resolve the discontinuity in stochastic space. However, due to its gPC nature in each element and the tree like adaptive refinement, this method still suffers from the curse of dimensionality. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 20 First summary The traditional gPC method fails when there are steep gradients/finite discontinuities in stochastic space. gPC related method suffers from the so called curse of dimensionality when the input stochastic dimension is large. High CPU cost for large scale problems. Although the h-adaptive ME-gPC can successfully resolve discontinuity in stochastic space, the number of element grows fast at the order O (2 N ) . So the method is still a dimension-dependent method. 
Therefore, an adaptive framework utilizing local basis functions that scales linearly with increasing dimensionality and has the decoupled nature of the MC method, offers greater promise in efficiently and accurately representing high-dimensional non-smooth stochastic functions. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 21 Collocation based framework The stochastic variable at a point in the domain is an N dimensional N function. f : The gPC method approximates this N dimensional function using a spectral element discretization (which causes the coupled set of equations) Spectral descretization represents the function as a linear combination of sets of orthogonal polynomials. Instead of a spectral discretization, use other kinds of polynomial approximations Use simple interpolation! f: N CORNELL U N I V E R S I T Y That is, sample the function at some points and construct a polynomial approximation Materials Process Design and Control Laboratory 22 Collocation based framework Need to represent this function f: N Sample the function at a finite (i ) M set of points Θ M {Y }i1 Use polynomials (Lagrange polynomials) to get a approximate representation M I f (Y) f (Y (i ) )ai (Y (i ) ) i 1 Function value at any point is simply I f (x ) Spatial domain is approximated using a FE, FD, or FV discretization. Stochastic function in 2 dimensions Stochastic domain is approximated using multidimensional interpolating functions CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 23 From one-dimension to higher dimensions Denote the one dimensional interpolation formula as with the set of support nodes: In higher dimension, a simple case is the tensor product formula For instance, if M=10 dimensions and we use k points in each direction This quickly becomes impossible to use. Number of points Total number of in each direction, k sampling points 2 1024 3 59049 4 1.05x106 5 9.76x106 10 1x1010 CORNELL U N I V E R S I T Y One idea is only to pick the most important points from the tensor product grid. Materials Process Design and Control Laboratory 24 Conventional sparse grid collocation (CSGC) In higher dimensions, not all the points are equally important. Some points contribute less to the accuracy of the solution (e.g. points where the function is very smooth, regions where the function is linear, etc.). Discard the points that contribute less: SPARSE GRID COLLOCATION 1D sampling Tensor product 2D sampling CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 25 Conventional sparse grid collocation Both piecewise linear as well as polynomial functions, L, can be used to construct the full tensor product formula above. The Smolyak construction starts from this full tensor product representation and discards some multi-indices to construct the minimal representation of the function Define the differential interpolant, ∆ as the difference between two interpolating approximations with U 0 0 i U i U i 1 The sparse grid interpolation of the function f is given by Aq , N ( f ) (i1 iN )( f ) i q where i i1 id , is the multi-index representation of the support nodes. N is the dimension of the function f and q is an integer (q>N). k = q-N is called the depth of the interpolation. As q is increased more and more points are sampled. 
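A small sketch of this comparison, assuming the nested 1D rules used later in the talk (1 point at level 1, 2^(i-1)+1 points at level i >= 2): it counts the points of the isotropic Smolyak grid of depth k in N = 10 dimensions against the full tensor grid built from the same 1D rule. The routine names and the particular depths are our own illustrative choices.

```cpp
#include <cmath>
#include <cstdio>

// New points added by the nested 1D rule at level i (Clenshaw-Curtis or the
// equidistant Newton-Cotes rule used later): 1 at level 1, 2 at level 2,
// 2^(i-2) afterwards, so the full 1D rule at level i has 2^(i-1)+1 points.
static unsigned long long newPoints1D(int level) {
    if (level == 1) return 1;
    if (level == 2) return 2;
    return 1ull << (level - 2);
}

// Point count of the nested sparse grid: sum over all multi-indices i with
// |i| <= budget (each component >= 1) of the product of per-level new points.
static unsigned long long sparseGridSize(int dim, int budget) {
    if (dim == 0) return 1;                                  // empty product
    unsigned long long total = 0;
    for (int level = 1; level <= budget - (dim - 1); ++level)
        total += newPoints1D(level) * sparseGridSize(dim - 1, budget - level);
    return total;
}

int main() {
    const int N = 10;                                        // dimension used in the table above
    for (int k = 1; k <= 4; ++k) {
        const double m = std::ldexp(1.0, k) + 1.0;           // points per direction of the same 1D rule
        std::printf("depth k = %d: sparse grid %llu points vs full tensor %.3g points\n",
                    k, sparseGridSize(N, N + k), std::pow(m, N));
    }
}
```

For N = 10 this gives 21 sparse grid points at depth 1 against 3^10 = 59049 tensor points, and 221 points at depth 2 against roughly 9.8 million, consistent with the table above.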
CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 26 Conventional sparse grid collocation The interpolant can be represented in a recursive manner as Aq, N ( f ) (i1 i q iN )( f ) Aq 1, N ( f ) (i1 i q iN )( f ) with AN 1, N 0 Can increase the accuracy of the interpolant based on Smolyak’s construction WITHOUT having to discard previous results. Can build on previous interpolants due to the recursive formulation To compute the interpolant Aq , N , one only needs the function values at the sparse grid point given by: H q,N ( X i1 ... X iN ) q N 1|i| q One could select the sets X i , in a nested fashion such that X i X i 1 Similar to the interpolant representation in a recursive form, the sparse grid points can be represented as: H q , M H q 1, M H q , M with CORNELL U N I V E R S I T Y H q , M |i| q ( X i1 ... X iM ) i i i 1 and X X \ X Materials Process Design and Control Laboratory 27 Choice of collocation points and nodal basis functions One of the choice is the Clenshaw-Curtis grid at non-equidistant extrema of the Chebyshev polynomials. ( D.Xiu etc, 2005, F. Nobile etc, 2006) By doing so, the resulting sets are nested. The corresponding univariate nodal basis functions are Lagrangian characteristic polynomials: There are two problems with this grid: (1) The basis function is of global support, so it will fail when there is discontinuity in the stochastic space. (2) The nodes are pre-determined, so it is not suitable for implementation of adaptivity. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 28 Choice of collocation points and nodal basis functions Therefore, we propose to use the Newton-Cotes grid using equidistant support nodes and use the linear hat function as the univariate nodal 1 basis. Yji 21i i 1i Y ji Y j 2 It is noted that the resulting grid points are also nested and the grid has the same number of points as that of the Clenshaw-Curtis grid. Comparing with the Lagrangian polynomial, the piecewise linear hat function has a local support, so it is expected to resolve discontinuities in the stochastic space. The N-dimensional multi-linear basis functions can be constructed using tensor products: CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 29 Comparison of the two grids Clenshaw-Curtis grid CORNELL U N I V E R S I T Y Newton-Cotes grid Materials Process Design and Control Laboratory 30 From nodal basis to multivariate hierarchical basis We start from the 1D interpolating formula. By definition, we have with and we can obtain since we have For simplifying the notation, we consecutively number the elements in and denote the j-th point of X i as Y ji X i hierarchical surplus CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 31 , Nodal basis vs. hierarchical basis Nodal basis CORNELL U N I V E R S I T Y Hierarchical basis Materials Process Design and Control Laboratory 32 Multi-dimensional hierarchical basis For the multi-dimensional case, we define a new multi-index set which leads to the following definition of hierarchical basis Now apply the 1D hierarchical interpolation formula to obtain the sparse grid interpolation formula for the multivariate case in a hierarchical form: Here we define the hierarchical surplus as: CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 33 Error estimate Depending on the order of the one-dimensional interpolant, one can come up with error bounds on the approximating quality of the interpolant1,2. 
If piecewise linear 1D interpolating functions are used to construct the sparse interpolant, the error estimate is given by: Where N is the number of sampling points, If 1D polynomial interpolation functions are used, the error bound is: assuming that f has k continuous derivatives 1. 2. V. Barthelmann, E. Novak and K. Ritter, High dimensional polynomial interpolation on sparse grids, Adv. Comput. Math. 12 (2000), 273–288 E. Novak, K. Ritter, R. Schmitt and A. Steinbauer, On an interpolatory method for high dimensional integration, J. Comp. Appl. Math. 112 (1999), 215–228 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 34 Hierarchical integration So now, any function u can now be approximated by the following reduced form It is just a simple weighted sum of the value of the basis for all collocation points in the current sparse grid. Therefore, we can easily extract the useful statistics of the solution from it. For example, we can sample independently N times from the uniform distribution [0,1] to obtain one random vector Y, then we can place this vector into the above expression to obtain one realization of the solution. In this way, it is easy to plot realizations of the solution as well as the PDF. The mean of the random solution can be evaluated as follows: where the probability density function Y is 1 since we have assumed uniform random variables on a unit hypercube [0,1]N . CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 35 Hierarchical integration The 1D version of the integral in the equation above can be evaluated analytically: Since the random variables are assumed independent of each other, the value of the multi-dimensional integral is simply the product of the 1D integrals. Denoting we rewrite the mean as Thus, the mean is just an arithmetic sum of the hierarchical surpluses and the integral weights at each interpolation point. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 36 Hierarchical integration To obtain the variance of the solution, we need to first obtain an approximate expression for u 2 Then the variance of the solution can be computed as Therefore, it is easy to extract the mean and variance analytically, thus we only have the interpolation error. This is one of the advantages to use this interpolation based collocation method rather than quadrature based collocation method. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 37 Adaptive sparse grid collocation (ASGC) Let us first revisit the 1D hierarchical interpolation For smooth functions, the hierarchical surpluses tend to zero as the interpolation level increases. On the other hand, for non-smooth function, steep gradients/finite discontinuities are indicated by the magnitude of the hierarchical surplus. The bigger the magnitude is, the stronger the underlying discontinuity is. Therefore, the hierarchical surplus is a natural candidate for error control and implementation of adaptivity. If the hierarchical surplus is larger than a pre-defined value (threshold), we simply add the 2N neighbor points of the current point. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 38 Adaptive sparse grid collocation As we mentioned before, by using the equidistant nodes, it is easy to refine the grid locally around the non-smooth region. 
It is easy to see this if we consider the 1D equidistant points of the sparse grid as a tree-like data structure Then we can consider the interpolation level of a grid point Y as the depth of the tree D(Y). For example, the level of point 0.25 is 3. Denote the father of a grid point as F(Y), where the father of the root 0.5 is itself, i.e. F(0.5) = 0.5. Thus the conventional sparse grid in Ndimensional random space can be reconsidered as CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 39 Adaptive sparse grid collocation Thus, we denote the sons of a grid point Y Y1 , , YN by From this definition, it is noted that, in general, for each grid point there are two sons in each dimension, therefore, for a grid point in a Ndimensional stochastic space, there are 2N sons. The sons are also the neighbor points of the father, which are just the support nodes of the hierarchical basis functions in the next interpolation level. By adding the neighbor points, we actually add the support nodes from the next interpolation level, i.e. we perform interpolation from level |i| to level |i|+1. Therefore, in this way, we refine the grid locally while not violating the developments of the Smolyak algorithm CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 40 Adaptive sparse grid collocation To construct the sparse grid, we can either work through the multi-index or the hierarchical basis. However, in order to adaptively refine the grid, we need to work through the hierarchical basis. Here we first show the equivalence of these two ways in a two dimensional domain. We always start from the multi-index (1, 1), which means the interpolation level in first dimension is 1, in second dimension is also 1. For level 1, the associated point is only 0.5. So the tensor product of the coordinate is 0.5 0.5 (0.5, 0.5) Thus, for interpolation level 0 of smolyak construction, we have (1,1) A( N , N ) w(0.5,0.5)a(1,1) CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 41 Adaptive sparse grid collocation Recalling, we define the interpolation level k as AN k , N . So, now we come to the next interpolation level, k 1, i 3 For this requirement, the following two multi-indices are satisfied: (1, 2), (2,1) Let us look at the index (1,2) i1 1 The associated set of points is 0.5 (j=1). i2 2 The associated set of points is 0 (j=1) and 1 (j=2). So the tensor product is (0.5) (0,1) (0.5, 0), (0.5,1) Then the associated sum is (1,2) 1,2 w(0.5,0)a(1,1) w(0.5,1)a1,2 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 42 Adaptive sparse grid collocation In the same way, for multi-index (2,1), we have the corresponding collocation points (0,1) (0.5) (0, 0.5), (1, 0.5) So the associated sum is (2,1) 2,1 w(0,0.5)a(1,1) w(1,0.5)a2,1 i Recall Aq , N ( f ) ( 1 iN )( f ) i q So for Smolyak interpolation level 1, we have A21,2 i1 i 3 (1,1) (1,2) iN ( f ) w(0.5, 0.5) a(1,1) w(0.5, 0) a(1,1) 1,2 (2,1) 2,1 w(0.5,1) a1,2 w(0, 0.5) a(1,1) w(1, 0.5) a2,1 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 43 Adaptive sparse grid collocation So, from this equation, it is noted that we can store the points with the surplus sequentially. When calculating interpolating value, we only need to perform a simple sum of each term. We don’t need to search for the point. Thus, it is very efficient to generate the realizations. Here, we give the first three level of multi-index and corresponding nodes. 
(1,1) (0.5, 0.5) (1, 2) (0.5, 0), (0.5,1) (2,1) (0, 0.5), (1, 0.5) (2, 2) (0, 0), (1, 0), (0,1), (1,1) (1,3) (0.5, 0.25), (0.5, 0.75) (3,1) (0.25, 0.5), (0.75, 0.5) CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 44 Adaptive sparse grid collocation In fact, the most important formula is the following From this equation, in order to calculate the surplus, we need the interpolation value at just previous level. That is why it requires that we construct the sparse grid level by level. Now, let us revisit the algorithm in a different view. we start from the hierarchical basis. We still use the two-dimension grid as an example. Recalling CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 45 Adaptive sparse grid collocation As previous given, the first level in a sparse grid is always a point (0.5,0.5). There is a possibility that the function value at this point is 0 and thus the refinement terminates immediately. In order to avoid the early stop for the refinement process, we always refine the first level and keep a provision on the first few hierarchical surpluses. It is noted here, the associated multi-index with this point (0.5, 0.5) is (1,1),i.e. (0.5, 0.5) (1,1) and it corresponds to level of interpolation k = 0. So the associated sum is still (1,1) A( N , N ) w(0.5,0.5)a(1,1) Now if the surplus is larger than the threshold, we add sons (neighbors) of this point to the grid. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 46 Adaptive sparse grid collocation In one-dimension, the sons of the point 0.5 are 0,1. According to the definition, the sons of the point (0.5,0.5) are (0, 0.5), (1, 0.5), (0.5, 0), (0.5,1) which are just the points in the level 1 of sparse grid. Indeed, the associated multi-index is (0, 0.5), (0,1) (2,1), (0.5, 0), (0.5,1) (1, 2) Recalling when we start from multi-index (1, 2) (0.5, 0), (0.5,1), (2,1) (0, 0.5), (1, 0.5) So actually, we can see they are equivalent. Now, let us summarize what we have k 1: (0, 0.5), (1, 0.5), (0.5, 0), (0.5,1) So the associated sum is (1,2) 1,2 (2,1) 2,1 w(0.5,0)a(1,1) w(0.5,1)a1,2 w(0,0.5)a(1,1) w(1,0.5)a2,1 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 47 Adaptive sparse grid collocation After calculating the hierarchical surplus, now we need to refine. Here we assume we refine all of the points, i.e. threshold is 0. Notice here, all of the son points are actually from next interpolation level, so we still construct the sparse grid level by level. First, in one dimension, the sons of point 0 is 0.25, 1 is 0.75. So the sons of each point in this level are: (0, 0.5) (0.25, 0.5), (0,1), (0, 0) (1, 0.5) (0.75, 0.5), (1,1), (1, 0) (0.5, 0) (0.5, 0.25), (1, 0), (0, 0) (0.5,1) (0.5, 0.75), (1,1), (0,1) Notice here, we have some redundant points. Therefore, it is important to keep track of the uniqueness of the newly generated (active) points. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 48 Adaptive sparse grid collocation After delete the same points, and arrange ,we can have (0, 0), (1, 0), (0,1), (1,1) (2, 2) (0.5, 0.25), (0.5, 0.75) (1,3) (0.25, 0.5), (0.75, 0.5) (3,1) which are just all of the support nodes in k=2. Therefore, we can see by setting the threshold equal zero, conventional sparse grid is exactly recovered. We actually work on point by point instead of the multi-index. In this way, we can control the local refinement of the point. 
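A minimal sketch of this son/neighbor generation (our own illustration; the struct and function names are not from any particular code): in each dimension the two candidates Y_j +/- 2^(-level_j) are generated and kept only if they remain inside [0, 1], which reproduces the 1D tree 0.5 -> {0, 1}, 0 -> {0.25}, 1 -> {0.75}, 0.25 -> {0.125, 0.375} used above.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// A collocation point in [0,1]^N together with its per-dimension levels
// (the depth in the tree of each coordinate, as described above).
struct GridPoint {
    std::vector<double> coord;
    std::vector<int>    level;
};

// Generate the sons (neighbors) of a point: in each dimension j the two
// candidates coord[j] +/- 2^(-level[j]) are tried and kept if they stay
// inside [0,1].
static std::vector<GridPoint> sons(const GridPoint& p) {
    std::vector<GridPoint> out;
    const int N = static_cast<int>(p.coord.size());
    for (int j = 0; j < N; ++j) {
        const double h = std::ldexp(1.0, -p.level[j]);        // 2^(-level[j])
        for (double candidate : { p.coord[j] - h, p.coord[j] + h }) {
            if (candidate < 0.0 || candidate > 1.0) continue;
            GridPoint son = p;
            son.coord[j] = candidate;
            son.level[j] = p.level[j] + 1;                    // next interpolation level
            out.push_back(son);
        }
    }
    return out;   // at most 2N points; fewer on the boundary of [0,1]^N
}

int main() {
    GridPoint root{ {0.5, 0.5}, {1, 1} };                     // multi-index (1,1)
    for (const auto& s : sons(root))
        std::printf("(%g, %g)\n", s.coord[0], s.coord[1]);    // (0,0.5),(1,0.5),(0.5,0),(0.5,1)
}
```

As noted above, different fathers can generate the same son, so newly created points must be checked for uniqueness before they enter the active set.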
Here, we equally treat each dimension locally, i.e., we still add the two neighbor points in all N dimensions. So there are generally 2N son (neighbor) points. If the discontinuity is along one dimension, this method can still detect it. So in this way, we say this method can also detect the important dimension. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 49 Adaptive sparse grid collocation: implementation As shown before, we need to keep track of the uniqueness of the newly active points. To this end, we use the data structure <set> from the standard template library in C++ to store all the active points and we refer to this as the active index set. Due to the sorted nature of the <set>, the searching and inserting is always very efficient. Another advantage of using this data structure is that it is easy for a parallel code implementation. Since we store all of the active points from the next level in the <set>, we can evaluate the surplus for each point in parallel, which increases the performance significantly. In addition, when the discontinuity is very strong, the hierarchical surpluses may decrease very slowly and the algorithm may not stop until a sufficiently high interpolation level. Therefore, a maximum interpolation level is always specified as another stopping criterion. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 50 Adaptive sparse grid collocation: Algorithms Let 0 be the parameter for the adaptive refinement threshold. We propose the following iterative algorithm beginning with a coarsest possible sparse grid , i.e. with the N-dimensional multi-index i = (1,…,1), which is just the point (0.5,…,0.5). 1) Set level of Smolyak construction k = 0. 2) Construct the first level adaptive sparse grid . • Calculate the function value at the point (0.5, …, 0.5). • Generate the 2N neighbor points and add them to the active index set. • Set k = k + 1; 3) • While k kmax and the active index set is not empty Copy the points in the active index set to an old index set and clear the active index set. • Calculate in parallel the hierarchical surplus of each point in the old index set according to Here, we use all of the existing collocation points in the current adaptive sparse grid This allows us to evaluate the surplus for each point from the old index set in parallel. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 51 Adaptive sparse grid collocation: Algorithms • For each point in the old index set, if Generate 2N neighbor points of the current active point according to the above equation; Add them to the active index set. • Add the points in the old index set to the existing adaptive sparse grid adaptive sparse grid becomes Now the • Set k = k + 1; 4) Calculate the mean and variance, the PDF and if needed realizations of the solution. In practice, instead of using surplus of the function value, it is sometimes preferable to use surplus of the square of the function value as the error indicator. This is because the hierarchical surplus is related to the calculation of the variance. In principle, we can consider it like a local variance. Thus, is more sensitive to the local variation of the stochastic function than . 
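Once the iteration terminates, the surrogate and its statistics are assembled directly from the stored surpluses (step 4 of the algorithm). A minimal self-contained sketch with our own type and function names; the 1D integral weights (1 at level 1, 1/4 at level 2, 2^(1-i) at level i >= 3) are our own evaluation of the hat-function integrals, consistent with the earlier statement that the mean is an arithmetic sum of surpluses times integral weights:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One stored collocation point: coordinates in [0,1]^N, per-dimension levels,
// and the hierarchical surplus computed when the point was added.
struct CollocationPoint {
    std::vector<double> coord;
    std::vector<int>    level;
    double              surplus;
};

// 1D hierarchical hat function of a node x at a given level, evaluated at Y.
// Level 1 is the constant function; level i >= 2 has support half-width 2^(1-i).
static double hat1D(double Y, double x, int level) {
    if (level == 1) return 1.0;
    const double slope = std::ldexp(1.0, level - 1);          // 1 / half-width
    return std::max(0.0, 1.0 - slope * std::fabs(Y - x));
}

// Evaluate the hierarchical interpolant at any Y in [0,1]^N:
// a plain sum of surplus * tensor-product basis over all stored points.
static double interpolate(const std::vector<CollocationPoint>& grid,
                          const std::vector<double>& Y) {
    double value = 0.0;
    for (const auto& p : grid) {
        double b = 1.0;
        for (size_t j = 0; j < Y.size() && b > 0.0; ++j)
            b *= hat1D(Y[j], p.coord[j], p.level[j]);
        value += p.surplus * b;
    }
    return value;
}

// 1D integral of the hat function over [0,1]: 1 at level 1, 1/4 at level 2
// (one-sided boundary hats), 2^(1-i) for interior levels i >= 3.
static double weight1D(int level) {
    if (level == 1) return 1.0;
    if (level == 2) return 0.25;
    return std::ldexp(1.0, 1 - level);
}

// Mean of the surrogate under independent U(0,1) inputs: an arithmetic sum of
// hierarchical surpluses times the product of the 1D integral weights.
static double mean(const std::vector<CollocationPoint>& grid) {
    double m = 0.0;
    for (const auto& p : grid) {
        double w = 1.0;
        for (int lv : p.level) w *= weight1D(lv);
        m += p.surplus * w;
    }
    return m;
}

int main() {
    // Trivial 1D surrogate with the single root point (0.5) at level 1:
    std::vector<CollocationPoint> grid = { { {0.5}, {1}, 2.0 } };
    std::printf("u(0.3) = %g, mean = %g\n", interpolate(grid, {0.3}), mean(grid));
}
```

The variance is obtained in the same way once surpluses of u^2 are stored alongside those of u, as described on the hierarchical integration slides above.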
Error estimate of the adaptive interpolant CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 52 NUMERICAL EXAMPLES CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 53 Adaptive sparse grid interpolation Ability to detect and reconstruct steep gradients CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 54 A dynamic system with discontinuity We consider the following governing equation for a particle moving under the influence of a potential field and a friction force: $\frac{d^2X}{dt^2} + f\frac{dX}{dt} = -\frac{dh}{dx}$, with $X|_{t=0} = x_0$, $dX/dt|_{t=0} = v_0$. If we set the potential field as $h(x) = \frac{35}{8}x^4 - \frac{15}{4}x^2$, then the differential equation has two stable fixed points $x = \pm\sqrt{15/35}$ and an unstable fixed point at $x = 0$. The stochastic version of this problem, with random initial position in the interval $[x_1, x_2]$, can be expressed as $\frac{d^2X}{dt^2} + f\frac{dX}{dt} = -\frac{35}{2}X^3 + \frac{15}{2}X$, with stochastic initial conditions $X(t=0, Y) = X_0 + \sigma_X Y$, $\frac{dX}{dt}|_{t=0} = 0$, where $X(t, Y)$ denotes the response of the stochastic system, $X_0 = (x_1 + x_2)/2$, $\sigma_X = |x_1 - x_2|/2$, and $Y$ is a uniformly distributed random variable over $[-1, 1]$. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 55 A dynamic system with discontinuity If we choose $X_0 = 0.05$, $\sigma_X = 0.2$ and a relatively large friction coefficient $f = 2$, a steady-state solution is reached in a short time. The analytical steady-state solution is $X(t\to\infty, Y) = \sqrt{15/35}$ for $Y > -0.25$ and $X(t\to\infty, Y) = -\sqrt{15/35}$ for $Y < -0.25$. This results in the following statistical moments: $\mathbb{E}[X] = \frac{1}{4}\sqrt{15/35}$ and $\mathrm{Var}[X] = \frac{45}{112}$. The analytic PDF consists of two Dirac masses of unequal strength. In the following computations, the time integration of the ODE is performed with a fourth-order Runge-Kutta scheme and a time step $\Delta t = 0.001$. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 56 Failure of gPC expansion [Figures: evolution of $X(t, Y)$ for $0 \le t \le 10$, solution at $t = 10$, and PDF at $t = 10$ for gPC orders p = 10 and p = 20.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 57 Failure of gPC expansion [Figures: mean and variance of X versus expansion order, gPC against the exact values.] The mean and variance exhibit oscillations as the expansion order increases. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 58 Failure of gPC expansion [Figure: steady-state solution at $t = 25$ for p = 5, 10, 15, 20 against the exact solution.] Therefore, the gPC expansion fails to converge to the exact solution as the expansion order increases. It fails to capture the discontinuity in the stochastic space due to its global support; this is the well-known Gibbs phenomenon that arises when a global spectral decomposition is applied to a problem with a discontinuity in the random space. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 59 Failure of Lagrange polynomial interpolation [Figures: evolution of $X(t, Y)$ for $0 \le t \le 10$, solution at $t = 10$, and PDF at $t = 10$ for interpolation levels k = 5 and k = 10.] Much better than gPC, but the solution still exhibits oscillations near the discontinuity point.
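For reference, the exact moments used in these comparisons follow in two lines from the two-valued steady state given above and the uniform density of Y on [-1, 1]:

```latex
% Steady state: X = +\sqrt{15/35} if Y > -0.25, X = -\sqrt{15/35} if Y < -0.25, Y ~ U(-1,1)
P(Y > -0.25) = \tfrac{1.25}{2} = 0.625, \qquad P(Y < -0.25) = 0.375,
\qquad
\mathbb{E}[X] = \sqrt{\tfrac{15}{35}}\,(0.625 - 0.375) = \tfrac{1}{4}\sqrt{\tfrac{15}{35}} \approx 0.164,
\qquad
\mathrm{Var}[X] = \tfrac{15}{35} - \tfrac{1}{16}\cdot\tfrac{15}{35} = \tfrac{45}{112} \approx 0.402.
```

These are the values labelled "Exact" in the convergence plots that follow.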
CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 60 Failure of Lagrange polynomial interpolation [Figures: mean and variance of X versus interpolation level for Lagrange polynomial interpolation against the exact values.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 61 Failure of Lagrange polynomial interpolation [Figure: steady-state solution at $t = 25$ for levels k = 3, 5, 7, 9 against the exact solution.] Similarly, the CSGC method with Lagrange polynomials also fails to capture the discontinuity. However, it gives better predictions of the low-order moments than the gPC method. Though the results indeed converge, the converged values are not accurate. Near the discontinuity point of the solution there are still several wiggles, which do not disappear even at higher interpolation levels. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 62 Sparse grid collocation with linear basis functions [Figures: evolution of $X(t, Y)$ for $0 \le t \le 10$ at interpolation levels k = 5 and k = 10.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 63 Sparse grid collocation with linear basis functions [Figures: solution and PDF at $t = 25$ for levels k = 3, 5, 7, 9 against the exact solution.] Thus, the linear basis functions correctly predict the discontinuous behavior, and it is also noted that there are no oscillations in the solutions shown here. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 64 Adaptivity [Figures: absolute error of the variance versus number of collocation points for CSGC and ASGC with $\epsilon = 10^{-2}$, and interpolation level of the adaptive points versus Y.] For the CSGC method, a maximum interpolation level of 15 is used and 32769 points are needed to reach an accuracy of $1 \times 10^{-5}$. For the ASGC method, a maximum interpolation level of 30 is used, and only 115 points are required to achieve an accuracy of $3 \times 10^{-10}$. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 65 Highly discontinuous solution Now we consider a more difficult problem. We set $f = 0.05$, $X_0 = 1$ and $\sigma_X = 0.1$; all other conditions remain as before. The reduction of the friction coefficient, together with the higher initial energy, results in the particle oscillating for several cycles between the two stable equilibrium points. [Figures: failure of gPC (left) and of Lagrange polynomial interpolation (right) at t = 250 s; the failure of the Lagrange polynomial interpolation is much more obvious.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 66 Highly discontinuous solution [Figures: results using the CSGC method with linear basis functions at levels k = 4, 6, 8 and 10.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 67 The Kraichnan-Orszag three-mode problem The governing equations are (X. Wan, G.
Karniadakis, 2005): $\frac{dx_1}{dt} = x_2 x_3$, $\frac{dx_2}{dt} = x_1 x_3$, $\frac{dx_3}{dt} = -2 x_1 x_2$, subject to stochastic initial conditions $x_1(0) = x_1(0;\omega)$, $x_2(0) = x_2(0;\omega)$, $x_3(0) = x_3(0;\omega)$. There is a discontinuity when the random initial condition approaches $x_1(0) = 1$ and $x_2(0) = 1$ (i.e. $x_1 = x_2$). So let us first look at the discontinuity in the deterministic solution. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 68 The Kraichnan-Orszag three-mode problem [Figure: deterministic evolution of $x_2$ for $x_1(0) = 1$, $x_3(0) = 1$ and $x_2(0) = 0.95, 0.97, 0.99, 1.00$; singularity at the plane $x_2(0) = 1$.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 69 The Kraichnan-Orszag three-mode problem For computational convenience and clarity we first perform the following transformation (X. Wan, G. Karniadakis, 2005): $\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0 \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$. The resulting governing equations become $\frac{dy_1}{dt} = y_1 y_3$, $\frac{dy_2}{dt} = -y_2 y_3$, $\frac{dy_3}{dt} = -y_1^2 + y_2^2$. The discontinuity then occurs at the planes $y_1 = 0$ and $y_2 = 0$. Thus, we first study the stochastic response subject to the following random 1D initial input: $y_1(0;\omega) = 1$, $y_2(0;\omega) = 0.1\,Y(\omega)$, $y_3(0;\omega) = 0$, where $Y \sim U(-1, 1)$. Since the random initial data $y_2(0;\omega)$ can cross the plane $y_2 = 0$, we know that gPC will fail for this case. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 70 The Kraichnan-Orszag three-mode problem [Figure: deterministic evolution of $y_1$ for $y_2(0) = 1.0$, $y_3(0) = 1.0$ and $y_1(0) = 0.1, 0.05, 0.02, 0.0$; singularity at the plane $y_1(0) = 0$.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 71 One-dimensional random input: variance [Figures: $\mathrm{Var}(y_1)$, $\mathrm{Var}(y_2)$ and $\mathrm{Var}(y_3)$ versus time for MC-SOBOL (1,000,000 iterations), ASGC with $\epsilon = 10^{-1}, 10^{-2}, 10^{-3}$, and gPC with p = 30.] gPC fails after t = 6 s, while the ASGC method converges even with a large threshold. The solid line is the result from the MC-SOBOL sequence with $10^6$ iterations. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 72 One-dimensional random input: adaptive sparse grid [Figure: adaptive sparse grid, interpolation level versus Y, with $\epsilon = 10^{-2}$.] From the adaptive sparse grid, it is seen that most of the refinement after level 8 occurs around the discontinuity point Y = 0.0, which is consistent with the previous discussion. The refinement stops at level 16, which corresponds to 425 points, while the conventional sparse grid requires 65537 points.
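Since the collocation approach is non-intrusive, each point of the adaptive grid above corresponds to one independent deterministic integration of the transformed K-O system, which is what makes the method embarrassingly parallel. A minimal sketch of such a solve (classical fixed-step RK4; the step size, final time, the mapping of the collocation coordinate from the unit interval to U(-1,1), and the function names are our own illustrative choices):

```cpp
#include <array>
#include <cstdio>

using State = std::array<double, 3>;

// Right-hand side of the transformed Kraichnan-Orszag system:
// dy1/dt = y1*y3, dy2/dt = -y2*y3, dy3/dt = -y1^2 + y2^2.
static State rhs(const State& y) {
    return { y[0] * y[2], -y[1] * y[2], -y[0] * y[0] + y[1] * y[1] };
}

// One classical fourth-order Runge-Kutta step of size dt.
static State rk4Step(const State& y, double dt) {
    auto shifted = [](const State& a, const State& b, double s) {
        return State{ a[0] + s * b[0], a[1] + s * b[1], a[2] + s * b[2] };
    };
    State k1 = rhs(y);
    State k2 = rhs(shifted(y, k1, 0.5 * dt));
    State k3 = rhs(shifted(y, k2, 0.5 * dt));
    State k4 = rhs(shifted(y, k3, dt));
    State out = y;
    for (int i = 0; i < 3; ++i)
        out[i] += dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    return out;
}

// Deterministic solve for one collocation point Y in [0,1]:
// the 1D random input is y2(0) = 0.1*(2*Y - 1), i.e. 0.1 * U(-1,1).
static State solveAtCollocationPoint(double Y, double tFinal = 30.0, double dt = 1e-3) {
    State y = { 1.0, 0.1 * (2.0 * Y - 1.0), 0.0 };
    for (double t = 0.0; t < tFinal; t += dt) y = rk4Step(y, dt);
    return y;
}

int main() {
    State y = solveAtCollocationPoint(0.5);   // collocation point at the centre of [0,1]
    std::printf("y(30) = (%g, %g, %g)\n", y[0], y[1], y[2]);
}
```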
CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 73 One-dimensional random input The 'exact' solution is taken as the result given by MC-SOBOL. The error level is defined as $\epsilon = \max_{i=1,2,3} |\mathrm{Var}(y_i) - \mathrm{Var}(y_{i,MC})|$ at $t = 30$. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 74 One-dimensional random input: long term behavior [Figures: mean of $y_1$ and variances of $y_1$, $y_2$, $y_3$ for $0 \le t \le 100$, MC-SOBOL (1,000,000 iterations) against ASGC with $\epsilon = 10^{-2}$ and $10^{-3}$.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 75 One-dimensional random input: Realizations These realizations are reconstructed using the hierarchical surpluses. It is seen that at earlier times the discontinuity has not yet developed, which explains why gPC is accurate at earlier times. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 76 Two-dimensional random input Next, we study the K-O problem with 2D input: $y_1(0;\omega) = 1$, $y_2(0;\omega) = 0.1\,Y_1(\omega)$, $y_3(0;\omega) = Y_2(\omega)$. Now, instead of a point, the discontinuity region becomes a line. [Figures: variances of $y_1$ and $y_2$ versus time for MC-SOBOL (1,000,000 iterations), ASGC with $\epsilon = 10^{-1}, 10^{-2}, 10^{-3}$, and gPC with p = 10.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 77 Two-dimensional random input [Figures: variance of $y_3$ versus time, and the adaptive sparse grid in the $(Y_1, Y_2)$ plane.] It can be seen that even though the gPC fails at a later time, the adaptive collocation method converges to the reference solution given by MC-SOBOL with $10^6$ iterations. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 78 Two-dimensional random input The error level is defined as $\epsilon = \max_{i=1,2,3} |\mathrm{Var}(y_i) - \mathrm{Var}(y_{i,MC})|$ at $t = 10$. The number of points of the conventional sparse grid with interpolation level 20 is 12582913. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 79 Two-dimensional random input: Long term behavior [Figures: error in the mean (left) and the variance (right) of $y_1$ with different thresholds for the 2D K-O problem in [0, 50].] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 80 Three-dimensional random input Next, we study the K-O problem with 3D input: $y_1(0;\omega) = Y_1(\omega)$, $y_2(0;\omega) = Y_2(\omega)$, $y_3(0;\omega) = Y_3(\omega)$. This problem is much more difficult than any of the other problems examined previously. This is due to the strong discontinuity (the discontinuity region now consists of the planes $Y_1 = 0$ and $Y_2 = 0$) and the moderately high dimension. It can be verified from comparison with the result obtained by MC-SOBOL that, unlike the previous results, here $2 \times 10^6$ iterations are needed to correctly resolve the discontinuity.
Due to the symmetry of y1 and y2 and the corresponding random input, the variances of y1 and y2 are the same. Finally, in order to show the accuracy of the current implementation of the h-adaptive ME-gPC, we provide the results for this problem. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 81 Three-dimensional random input:ASGC 0.36 0.39 0.34 MC-SOBOL: 2,000,000 ASGC: = 10-1 ASGC: = 10-2 ASGC: = 10-3 0.38 MC-SOBOL: 2,000,000 ASGC: = 10-1 ASGC: = 10-2 ASGC: = 10-3 0.32 Variance of y3 Variance of y1 0.37 0.36 0.35 0.28 0.26 0.34 0.33 0.3 0.24 0.22 0 1 2 3 4 5 Time 6 7 8 9 10 0 1 2 3 4 5 Time 6 7 8 9 Evolution of the variance of y1=y2 (left) and y3 (right) for the 3D K-O problem using ASGC CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 82 10 Three-dimensional random input: ME-gPC 0.39 0.36 MC-SOBOL: 2,000,000 ME-gPC: p = 3 ME-gPC: p = 4 ME-gPC: p = 5 ME-gPC: p = 6 0.32 Variance of y3 0.37 Variance of y1 0.34 MC-SOBOL: 2,000,000 ME-gPC: p = 3 ME-gPC: p = 4 ME-gPC: p = 5 ME-gPC: p = 6 0.38 0.36 0.3 0.28 0.35 0.26 0.34 0.33 0.24 0 1 2 3 4 5 Time 6 7 8 9 10 0.22 0 1 2 3 4 5 Time 6 7 8 9 Evolution of the variance of y1=y2 (left) and y3 (right) for the 3D K-O problem using ME-gPC with 1 104 and 2 103 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 83 10 Three-dimensional random input Due to the strong discontinuity in this problem, it took much longer time for both methods to arrive at the same accuracy. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 84 Stochastic elliptic problem In this example, we compare the convergence rate of the CSGC and ASGC methods through a stochastic elliptic problem in two spatial dimensions. As shown in previous examples, the ASGC method can accurately capture the non-smooth region in the stochastic space. Therefore, when the non-smooth region is along a particular dimension (i.e. one dimension is more important than others), the ASGC method is expected to identify it and resolve it. In this example, we demonstrate the ability of the ASGC method to detect important dimensions when each dimension weigh unequally. This is similar to the dimension-adaptive method, especially in high stochastic dimension problems. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 85 Stochastic elliptic problem Here, we adopt the model problem aN , u , f N , in D u , 0 on D with the physical domain D x x, y 0,1 . To avoid introducing errors of any significance from the domain discretization, we take a deterministic smooth load f N , x, y cos x sin y with homogeneous boundary conditions. 2 The deterministic problem is solved using the finite element method with 900 bilinear quadrilateral elements. Furthermore, in order to eliminate the errors associated with a numerical K-L expansion solver and to keep the random diffusivity strictly positive, we construct the random diffusion coefficient aN , x with 1D spatial dependence as L n 18 L log aN , x 0.5 1 cos 2 x n 1 Yn e 2 n 1 where Yn , n 1, N are independent uniformly distributed random variables in the interval 3, 3 N CORNELL U N I V E R S I T Y 1/ 2 2 2 2 Materials Process Design and Control Laboratory 86 Stochastic elliptic problem This expansion is similar to a K-L expansion of a 1D random field with stationary covariance x1 x2 2 log aN , x 0.5 x1 , x2 exp 2 L Small values of the correlation L correspond to a slow decay, i.e. each stochastic dimension weighs almost equally. 
On the other hand, large values of L results in fast decay rates, i.e., the first several stochastic dimensions corresponds to large eigenvalues weigh relatively more. By using this expansion, it is assumed that we are given an analytic stochastic input. Thus, there is no truncation error. This is different from the discretization of a random filed using the K-L expansion, where for different correlation lengths we keep different terms accordingly. In this example, we fix N and change L to adjust the importance of each stochastic dimension. In this way, we investigate the effects of L on the ability of the ASGC method to detect the important dimensions. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 87 Stochastic elliptic problem: N = 5 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 88 Stochastic elliptic problem: N = 5 We estimate the L2 D approximation error for the mean and variance. Specifically, to estimate the computation error in the q-th level, we fix N and compare the results at two consecutive levels, e.g. the error for the mean is E Aq , N u N Aq 1, N u N The previous figures shows results for different correlation lengths at N=5. For small L, the convergence rates for the CSGC and ASGC are nearly the same. On the other hand, for large L, the ASGC method requires much less number of collocation points than the CSGC for the same accuracy. This is because more points are placed only along the important dimensions which are associated with large eigenvalues. Next, we study some higher-dimensional cases. Due to the rapid increase in the number of collocation points, we focus on moderate to higher correlation lengths so that the ASGC is effective. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 89 Stochastic elliptic problem: higher-dimensions #points:4193 Comparison: CSGC: 3.5991x1011 CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 90 Stochastic elliptic problem: higher-dimensions In order to further verify our results, we compare the mean and the variance when N = 75 using the AGSC method with 108 with the ‘exact’ solution obtained by MC-SOBOL simulation with 106 iterations. E Aq , N u N E uMC E uMC Relative error of variance Relative error of mean 0.0015 0.0004 0.0003 0.001 0.0002 0.0005 0.0001 0 0 0 0.2 0.2 y 0.6 0.6 0.4 0.4 x y 0.8 0.8 0.2 0.2 0.4 0.4 0 0 0 0.6 0.6 1 1 Computational time: ASGC : 0.5 hour CORNELL U N I V E R S I T Y 0.8 0.8 1 1 x MC: 2 hour Materials Process Design and Control Laboratory 91 Application to Rayleigh-Bénard instability Cold wall = -0.5 Insulated 0.75 Y Insulated 1 0.5 0.25 0 0 0.25 0.5 0.75 hot wall 1 X Filled with air (Pr = 0.7). Ra = 2500, which is larger than the critical Rayleigh number, so that convection can be initialized by varying the hot wall temperature. Thus, the hot wall temperature is assumed to be a random variable. Prior stochastic simulation, several deterministic computations were performed in order to find out the range where the critical point lies in. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 92 Rayleigh-Bénard instability: deterministic case 0 10 Evolution of the average kinetic energy for different hot wall temperatures h = 0.30 h = 0.40 h = 0.50 h = 0.55 h = 0.60 h = 0.70 [K]1/2 10-1 -2 10 10-3 20 40 Time 60 80 100 As we know, below the critical temperature, the flow vanishes, which corresponds to a decaying curve. 
On the other hand, the growing curve represents the existence of heat convection. Therefore, the critical temperature lies in the range [0.5,0.55]. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 93 Rayleigh-Bénard instability: deterministic formulation Steady state Nusselt number which denotes the rate of heat transfer: 1 Nu h c 0 y 1 y 0 dx Nu 1 Conduction Nu 1 Convection This again verifies the critical hot wall temperature lies in the range [0.5,0.55]. However, the exact critical value is not known to us. So, now we try to capture this unstable equilibrium using the ASGC method. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 94 Rayleigh-Bénard instability: stochastic formulation We assume the following stochastic boundary condition for the hot wall temperature: h 0.4 0.3Y where Y is a uniform random variable in the interval [0,1]. Following the discussion before, both conductive and convective modes occur in this range. 12 0.15 10 0.1 Nu Level 8 0.05 6 4 0 2 0.4 0.45 0.5 0.55 h 0.6 Nu Nu 1 CORNELL U N I V E R S I T Y 0.65 0.7 0.4 0.45 0.5 0.55 h 0.6 0.65 0.7 0.541 Materials Process Design and Control Laboratory 95 Rayleigh-Bénard instability: stochastic formulation 3 0.08 0.06 v u 2 0.04 u v 1 0.02 0 0 0.4 0.45 0.5 0.55 h 0.6 0.65 0.7 0.4 0.25 0.45 0.5 0.55 h 0.6 0.65 0.7 0.541 0.2 0.15 0.1 0.05 0 -0.05 0.4 0.45 0.5 CORNELL U N I V E R S I T Y 0.55 h 0.6 0.65 0.7 nonlinear: convection Solution of the variables versus hotwall temperature at point (0.1,0.5) Linear: conduction Materials Process Design and Control Laboratory 96 Rayleigh-Bénard instability: stochastic formulation Max = 0.436984 Max = 0.436984 0.35 0.25 0.15 0.05 -0.05 -0.15 -0.25 -0.35 -0.45 0.35 0.25 0.15 0.05 -0.05 -0.15 -0.25 -0.35 -0.45 Prediction of the temperature when h 0.436984 using ASGC (left) and the solution of the deterministic problem using the same h (right). Fluid flow vanishes, and the contour distribution of the temperature is characterized by parallel horizontal lines which is a typical distribution for heat conduction. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 97 Rayleigh-Bénard instability: stochastic formulation Max = 4.33161 Max = 0.667891 Max = 4.31858 4 3 2 1 0 -1 -2 -3 -4 Max = 4.33384 Max = 4.3208 4 3 2 1 0 -1 -2 -3 -4 0.6 0.5 0.4 0.3 0.2 0.1 0 -0.1 -0.2 -0.3 -0.4 4 3 2 1 0 -1 -2 -3 -4 Max = 0.667891 4 3 2 1 0 -1 -2 -3 -4 0.6 0.5 0.4 0.3 0.2 0.1 0 -0.1 -0.2 -0.3 -0.4 Prediction of the temperature when h 0.66789 using ASGC (top) and the solution of the deterministic problem using the same h (bottom). 
CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 98 Rayleigh-Bénard instability: Mean [Figures: mean fields computed with ASGC (top) and MC-SOBOL (bottom); the reported maxima, e.g. 1.73709 and 1.73715, 0.55 and 0.549918, agree closely between the two methods.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 99 Rayleigh-Bénard instability: Variance [Figures: variance fields computed with ASGC (top) and MC-SOBOL (bottom); the reported maxima, e.g. 3.28535 and 3.27903, 0.0129218 and 0.0128783, again agree closely between the two methods.] CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 100 Conclusion An adaptive hierarchical sparse grid collocation method has been developed based on error control of the local hierarchical surpluses. Besides detecting important and unimportant dimensions, as dimension-adaptive methods do, it also reveals singularities and the local behavior of the solution. By employing adaptivity, large computational savings are achieved compared with the conventional sparse grid method and traditional gPC-related methods. Solution statistics are extracted easily and with high accuracy because the integrals of the linear basis functions are available analytically, so only the interpolation error remains. The current computational code is a highly efficient parallel program and is applicable to a wide range of uncertainty quantification problems. CORNELL U N I V E R S I T Y Materials Process Design and Control Laboratory 101