Sinc-Galerkin solution of second-order hyperbolic problems in multiple space dimensions

by Kelly Marie McArthur

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics

Montana State University
© Copyright by Kelly Marie McArthur (1987)

Abstract: A fully Galerkin method in space and time is developed for the second-order hyperbolic problem in one, two, and three space dimensions. Using sinc basis functions and the sinc quadrature rule, the discrete system arising from the orthogonalization of the residual is easily assembled. Two equivalent matrix formulations of the system are given. One lends itself to scalar computation while the other is more natural in a vector computing environment. In fact, it is shown that passing from one to the other is simply a notational change. In either setting the move from one to two or three space dimensions does not significantly affect the ease of implementation. Intermediate diagonalization of each matrix representing the discretization of a second partial derivative leads to the diagonalization of the overall system. The method was tested on an extensive class of problems. Numerical results indicate that the method has an exponential convergence rate for analytic and singular problems, and that this rate is independent of the spatial dimension.

SINC-GALERKIN SOLUTION OF SECOND-ORDER HYPERBOLIC PROBLEMS IN MULTIPLE SPACE DIMENSIONS

by Kelly Marie McArthur

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics

MONTANA STATE UNIVERSITY
Bozeman, Montana
May 1987

APPROVAL

of a thesis submitted by Kelly Marie McArthur

This thesis has been read by each member of the thesis committee and has been found to be satisfactory regarding content, English usage, format, citations, bibliographic style, and consistency, and is ready for submission to the College of Graduate Studies.
Chairperson, Graduate Committee    Date

Approved for the Major Department

Head, Mathematical Sciences Department    Date

Approved for the College of Graduate Studies

Graduate Dean    Date

STATEMENT OF PERMISSION TO USE

In presenting this thesis in partial fulfillment of the requirements for a doctoral degree at Montana State University, I agree that the Library shall make it available to borrowers under rules of the Library. I further agree that copying of this thesis is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for extensive copying or reproduction of this thesis should be referred to University Microfilms International, 300 North Zeeb Road, Ann Arbor, Michigan 48106, to whom I have granted "the exclusive right to reproduce and distribute copies of the dissertation in and from microfilm and the right to reproduce and distribute by abstract in any format."

Date

ACKNOWLEDGMENTS

The author thankfully acknowledges the superb typing of Ms. René Tritz and the computer assistance of Eric Greenwade. Thanks also go to Professor Norman Eggert for his careful reading of this work. In addition, the author recognizes Professor Frank Stenger for the revival and extension of sinc function theory. His clear exposés and willingness to discuss new problems have led to yet another generation of sinc function students. Finally, the author thanks Professor Kenneth L. Bowers and Professor John Lund, whose guidance is responsible for this work. They possess the rare ability to teach one to teach oneself.

TABLE OF CONTENTS

1. INTRODUCTION
2. THE SINC-GALERKIN SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS
   Interpolation on (-inf, inf)
   Quadrature on (-inf, inf)
   Interpolation and Quadrature on Alternative Arcs
   Sinc-Galerkin Method
3. THE SINC-GALERKIN METHOD FOR THE WAVE EQUATION
4. SOLUTION OF THE DISCRETE SINC-GALERKIN SYSTEM
   Classical Methods
   Solution of the Discrete System in One Spatial Variable
   Solution of the Systems in Two and Three Spatial Variables
5. NUMERICAL EXAMPLES OF THE SINC-GALERKIN METHOD
   Exponentially Damped Sine Waves
   Singular Problems
   Problems Dictating Noncentered Sums in All Indices
REFERENCES CITED

LIST OF TABLES

1. Machine Storage Information for the Matrices B(…) and B(…)
2. Numerical Results for Example 5.1
3. Numerical Results for Example 5.2
4. Numerical Results for Example 5.3
5. Numerical Results for the Damped Sine Wave
6. Numerical Results for Example 5.4
7. Numerical Results for Example 5.5
8. Numerical Results for Example 5.6
9. Numerical Results for the Singular Problems
10. Numerical Results for Example 5.7
11. Numerical Results for Example 5.8
12. Numerical Results for Example 5.9
13. Numerical Results for Problems Dictating Noncentered Sums in All Indices

LIST OF FIGURES

1. The Domain of Dependence
2. The Numerical Domain of Dependence for the Explicit, Centered Finite Difference Method
3. S(0,1)(x) = sinc(x), x in R
4. The Domain D_S
5. The Conformal Map X
6. The Regions D_S, D_W, and D_S
7. The Basis Functions S_0(x) and S_0(t)

CHAPTER 1

INTRODUCTION

The general second-order, linear partial differential equation

(1.1)  A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G,

where A, B, C, D, E, F, and G are functions of x and y only, is classified at the point (x,y) via the discriminant (B^2 - 4AC)(x,y). That is, the equation at (x,y) is hyperbolic, parabolic, or elliptic when (B^2 - 4AC)(x,y) is positive, zero, or negative, respectively. The present chapter reviews classical results for a purely hyperbolic, constant coefficient problem corresponding to A = -1, C = 1, and B = D = E = F = 0.
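The classification rule can be checked mechanically. A minimal sketch (the function name and test values are illustrative, not from the thesis):

```python
def classify(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + ... = G at a point
    by the sign of the discriminant B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# The wave operator of this chapter: A = -1, B = 0, C = 1,
# so B^2 - 4AC = 4 > 0.
print(classify(-1, 0, 1))   # hyperbolic
```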
Including boundary and initial conditions, the specific partial differential equation examined is

(1.2)  u_tt(x,t) - u_xx(x,t) = G(x,t),  (x,t) in (0,1) x (0,inf),
       u(0,t) = u(1,t) = 0,  t > 0,
       u(x,0) = f(x),  0 < x < 1,
       u_t(x,0) = g(x),  0 < x < 1.

This problem is referred to as a one-dimensional wave equation, since it models the displacement of a vibrating string with initial displacement f, initial velocity g, and subject to an external force G.

The equation in (1.2) is representative of one of the two hyperbolic canonical forms due to the absence of the mixed term u_xt. In contrast, the distinguishing feature of the alternative canonical form is that its only second-order term is the mixed partial u_xt. The change of variables xi = x + t and eta = x - t transforms the equation in (1.2) to

(1.3)  -4 u_{xi eta} = G((xi + eta)/2, (xi - eta)/2),  (xi,eta) in (0,inf) x (-inf,1).

It is well known that the correct change of variables is found by solving two ordinary differential equations which depend on the discriminant. For a complete discussion see Farlow [1]. By integrating (1.3), restoring the variables x and t, and applying the initial and boundary conditions, d'Alembert explicitly solved (1.2) [2]. His result of 1747 is

(1.4)  u(x,t) = (1/2) [ f(x+t) + f(x-t) + INT_{x-t}^{x+t} g(a) da ] + (1/2) INT_0^t INT_{x-(t-p)}^{x+(t-p)} G(a,p) da dp,

where f, g, and G are extended as odd periodic functions when necessary. The solution (1.4) is the sum of homogeneous and particular solutions, u_H(x,t) and u_P(x,t), respectively:

(1.5)  u_H(x,t) = (1/2) [ f(x+t) + f(x-t) + INT_{x-t}^{x+t} g(a) da ]

and

(1.6)  u_P(x,t) = (1/2) INT_0^t INT_{x-(t-p)}^{x+(t-p)} G(a,p) da dp.

From the form of the homogeneous solution it is apparent that u_H at a fixed point (x0,t0) depends only on the initial conditions over the interval [x0 - t0, x0 + t0] on the x-axis. As a result, [x0 - t0, x0 + t0] is called the interval of dependence. The particular solution may be used to find the domain of dependence, shown by the shaded region in Figure 1 below.
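The homogeneous formula (1.5) can be checked numerically. With g = 0 and f(x) = sin(pi x), which is already odd and 2-periodic, the exact standing-wave solution is u(x,t) = sin(pi x) cos(pi t); a minimal sketch (the test points are arbitrary):

```python
import math

def u_dalembert(f, x, t):
    # Homogeneous d'Alembert solution (1.5) with zero initial velocity:
    # u_H(x,t) = (f(x+t) + f(x-t)) / 2, where f is extended as an odd
    # periodic function so that u(0,t) = u(1,t) = 0 holds.
    return 0.5 * (f(x + t) + f(x - t))

f = lambda x: math.sin(math.pi * x)
for (x, t) in [(0.3, 0.7), (0.5, 1.2), (0.25, 2.0)]:
    exact = math.sin(math.pi * x) * math.cos(math.pi * t)
    assert abs(u_dalembert(f, x, t) - exact) < 1e-12
```

The agreement is exact up to rounding because sin(pi(x+t)) + sin(pi(x-t)) = 2 sin(pi x) cos(pi t).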
The curves representing constant xi and eta cut the interval of dependence out of the x-axis and define the domain of dependence. Notice that the curves of the change of variables are called the characteristics of (1.2).

Figure 1. The Domain of Dependence.

The significance of the domain of dependence is the constraint it imposes on the schemes used to numerically solve the one-dimensional wave equation (1.2). This constraint is called the Courant-Friedrichs-Lewy (or CFL) condition [3]. A scheme which provides a ready illustration of the condition is the centered finite difference method defined explicitly by the difference equation

(1.7)  U_{i,j} = m^2 (U_{i+1,j-1} + U_{i-1,j-1}) + 2(1 - m^2) U_{i,j-1} - U_{i,j-2} + (Dt)^2 G(x_i, t_{j-1}).

Here U_{i,j} approximates u at (x_i, t_j) = (i Dx, j Dt) and m = Dt/Dx is the ratio of the stepsizes. A detailed derivation of the time-marching scheme (1.7) is included in Ames [4]. Stepping back in time determines the gridpoints which influence U_{i,j} (see Figure 2 below). These gridpoints blanket the numerical domain of dependence outlined in Figure 2. The CFL criterion states that a necessary condition for the convergence of the finite difference scheme (1.7) (as Dt and Dx -> 0) is that the numerical domain of dependence contain the analytic domain of dependence. This requires m <= 1. The computational impact of restricting m is briefly discussed in Chapter 4.

Figure 2. The Numerical Domain of Dependence for the Explicit, Centered Finite Difference Method.

The method of the present work satisfies the CFL condition trivially. This technique, called the Sinc-Galerkin method, builds an approximate solution for (1.2) valid on the entire domain.
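For later comparison with the global method, the explicit scheme (1.7) is easy to exercise. A minimal sketch for (1.2) with G = 0, g = 0, and f(x) = sin(pi x); the grid sizes and the second-order Taylor starting step for U_{i,1} are illustrative choices, not from the thesis:

```python
import math

nx = 50                      # Dx = 1/nx
dx = 1.0 / nx
m = 1.0                      # CFL ratio m = Dt/Dx <= 1
dt = m * dx
f = [math.sin(math.pi * i * dx) for i in range(nx + 1)]

# Starting values: U_{i,0} = f(x_i); with g = 0, a Taylor step gives
# U_{i,1} = f_i + (m^2/2)(f_{i+1} - 2 f_i + f_{i-1}).
U0 = f[:]
U1 = [0.0] * (nx + 1)
for i in range(1, nx):
    U1[i] = f[i] + 0.5 * m * m * (f[i + 1] - 2 * f[i] + f[i - 1])

# March (1.7) with G = 0 to t = 0.5, keeping u = 0 at both ends.
steps = int(round(0.5 / dt))
for j in range(2, steps + 1):
    U2 = [0.0] * (nx + 1)
    for i in range(1, nx):
        U2[i] = (m * m * (U1[i + 1] + U1[i - 1])
                 + 2.0 * (1.0 - m * m) * U1[i] - U0[i])
    U0, U1 = U1, U2

t = steps * dt
err = max(abs(U1[i] - math.sin(math.pi * i * dx) * math.cos(math.pi * t))
          for i in range(nx + 1))
```

At the CFL limit m = 1 this scheme reproduces the d'Alembert solution on the grid, so err is at the level of rounding error; for m > 1 the iteration is unstable.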
The approximate is a truncated expansion of basis elements which are tensor products of sinc functions composed with suitable conformal maps. The support of each basis element is (0,1) x (0,inf); hence, the method may be termed a spectral method, and its numerical domain of dependence is identically the domain of the partial differential equation.

To ease the description of the Sinc-Galerkin method applied to (1.2), its counterpart for ordinary differential equations is derived in Chapter 2. Here the basis elements are single sinc functions composed with conformal maps. The pertinent sinc function properties needed to construct an approximate solution are reviewed. Further, it is shown that when 2N + 1 basis functions are used to define the approximate, the optimal exponential order of convergence, O(exp(-K sqrt(N))), K > 0, occurs even in the presence of singularities. Stenger [5] discusses a more general setting than that considered in Chapter 2. The chapter closes with the formulation of the discrete linear system whose solution specifies the approximate. Lund [6] has shown that the symmetry of this system depends on the correct choice of a weighted inner product.

Chapter 3 extends the Sinc-Galerkin method to (1.2) (with f = g = 0) and then further to the analogous wave equations in two and three space dimensions. At this writing, the most common procedure for solving partial differential equations with a semi-infinite time interval is a Galerkin discretization of the spatial domain with time-dependent coefficients. The result is a system of ordinary differential equations in time, usually solved via finite difference techniques. Botha and Pinder [3] consider only finite element basis functions. In contrast, Gottlieb and Orszag [7] use globally defined spatial basis elements and show that these spectral methods exhibit an exponential convergence rate in space.
However, because Gottlieb and Orszag develop finite difference techniques for the temporal domain, the solution has finite-order accuracy in time. They acknowledge the incompatibility of the error statement in time versus space with the following remark:

   No efficient, infinite-order accurate time-differencing methods for variable coefficient problems are yet known. The current state-of-the-art of time-integration techniques for spectral methods is far from satisfactory on both theoretical and practical grounds... [7].

The point of view taken here differs from the two sources just cited by carrying the Galerkin discretization into the time domain. Chapter 5 reports numerical results which attest to the success of this notion.

Besides developing the Sinc-Galerkin method, Chapter 3 introduces notation to facilitate the description of the resulting discrete systems. These systems are posed in two algebraically equivalent matrix forms. The choice of which form to use depends somewhat on available computing facilities. Chapter 4 discusses this topic along with algorithms for the solution of the linear systems in either form.

The nine examples included in Chapter 5 are broken into three groups. Each group highlights a feature of the Sinc-Galerkin method in one, two, and three space dimensions. For instance, the first three examples have analytic solutions while the second three have combinations of algebraic and logarithmic singularities. The numerical results show that the rate of convergence is not affected by this singular behavior. The last three examples show the dramatic reduction in the size of the discrete system solved when care is exercised in parameter selections. Finally, each group indicates that the attained asymptotic error O(exp(-K sqrt(N))), K > 0, is independent of the dimension of the wave equation.
CHAPTER 2

THE SINC-GALERKIN SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS

The goal of this chapter is to derive the discrete Sinc-Galerkin system necessary to build an approximate to the solution of

(2.1)  Lf(x) = f''(x) + v(x)f(x) = sigma(x),  a < x < b,
       f(a) = f(b) = 0,

valid on the interval (a,b). A symmetric matrix formulation for the system can be posed and is, in fact, easy to solve numerically. The resulting approximate solution converges to the true solution on (a,b) at the rate O(exp(-K sqrt(N))), where K is a positive constant and 2N + 1 basis functions are used to build the approximate. Further, the convergence rate is maintained in the presence of singularities (the solution has an unbounded derivative) on the boundary. To prove these statements a background in general sinc function theory is necessary. In particular, the foundation for the error analysis of the Sinc-Galerkin method is the error associated with the truncated sinc quadrature rule.

Interpolation on (-inf, inf)

Numerical methods based on the sinc function are rooted in E. T. Whittaker's [8] work concerning interpolation of a function at the integers. Rather than using well-known expansions based on polynomials, Whittaker sought an expansion whose properties are far more distinguished when applied in the proper setting. The foundation of the series is the sinc function

(2.2)  sinc(x) = sin(pi x)/(pi x),  x in R,

shown in Figure 3 below. The resulting formal cardinal series for a function f is

(2.3)  SUM_{k=-inf}^{inf} f(k) sinc(x - k).

To generalize (2.2) and (2.3) to handle interpolation on any evenly spaced grid, define for h > 0

(2.4)  S(k,h)(x) = sinc((x - kh)/h)

and denote the Whittaker cardinal function by

(2.5)  C(f,h)(x) = SUM_{k=-inf}^{inf} f(kh) S(k,h)(x)

whenever this series converges. In engineering literature (2.5) is often called a band-limited series.

Figure 3. S(0,1)(x) = sinc(x), x in R.

With regard to using (2.5) as an approximation tool, two important classes of functions must be identified. The first is the very restricted class for which (2.5) is exact. The second is the class of functions f for which the difference between f and C(f,h) is small. J. M. Whittaker [9] and McNamee et al. [10] accomplish this identification task by displaying a natural link between the Whittaker series and aspects of Fourier series and integrals.

The Fourier transform for a function g is

(2.6)  ghat(t) = INT_{-inf}^{inf} g(x) exp(-ixt) dx.

A fundamental result of Fourier analysis is that if g is in L^2(R) then ghat is in L^2(R) and g is recovered from ghat by the Fourier inversion integral

(2.7)  g(t) = (1/(2 pi)) INT_{-inf}^{inf} ghat(x) exp(ixt) dx.

For a select set of functions, the Paley-Wiener Theorem shows that the transform has compact support. Their result is

Theorem (2.8): If g is in L^2(R), g is entire, and there exist positive constants A and C so that |g(w)| <= C exp(A|w|), where w is complex, then

(2.9)  g(w) = (1/(2 pi)) INT_{-A}^{A} ghat(x) exp(ixw) dx.

Showing that the sinc function satisfies the hypotheses of Theorem (2.8) with A = pi and C = 1 is straightforward. An elementary calculation gives

(2.10)  sinchat(w) = INT_{-inf}^{inf} sinc(x) exp(-ixw) dx = chi_{(-pi,pi)}(w);

hence, an immediate consequence of Theorem (2.8) is

(2.11)  sinc(x) = (1/(2 pi)) INT_{-pi}^{pi} exp(ixt) dt.

To accommodate the translated sinc function S(k,h) appearing in (2.4), a change of variables in (2.10) gives

(2.12)  S(k,h)hat(w) = h exp(-ikhw) chi_{(-pi/h, pi/h)}(w).

The support of the characteristic function in (2.12) prompts the definition of a class of functions, called the Paley-Wiener class, which is naturally associated with the Whittaker series.

Definition (2.13): Let B(h) be the set of functions g such that g is in L^2(R), g is entire, and |g(w)| <= C exp(pi |w|/h), where w is complex.

Before the Whittaker cardinal function can be discussed, one result is vital.

Theorem (2.14): If g is in B(h) then

g(w) = (1/h) INT_{-inf}^{inf} g(t) sinc((t - w)/h) dt.

A proof of Theorem (2.14) is found in [10]. For completeness the converse of Theorem (2.14) is

Theorem (2.15): If g is in L^2(R) then

k(w) = (1/h) INT_{-inf}^{inf} g(t) sinc((t - w)/h) dt

is in B(h).

Proof: Using Parseval's Theorem with the inner product <f,g> = INT f(t) conj(g(t)) dt establishes the growth estimate for k. Application of Morera's Theorem proves entirety, and the Cauchy-Schwarz inequality yields k in L^2(R).

The significance of all the preceding work is in the subsequent elegant theorem.

Theorem (2.16): If g is in B(h) then

g(w) = SUM_{k=-inf}^{inf} a_k S(k,h)(w),

where

a_k = (1/h) INT_{-inf}^{inf} g(t) sinc((t - kh)/h) dt = g(kh).

Proof: The Paley-Wiener Theorem, the identity

(2.17)  exp(ixw) = (h/pi) sin(pi w/h) SUM_{k=-inf}^{inf} (-1)^k exp(ikhx) / (w - kh),

and the uniform convergence theorem justify the ensuing steps:

g(w) = (1/(2 pi)) INT_{-pi/h}^{pi/h} ghat(x) exp(ixw) dx
     = (h/pi) sin(pi w/h) SUM_{k=-inf}^{inf} [(-1)^k/(w - kh)] (1/(2 pi)) INT_{-pi/h}^{pi/h} ghat(x) exp(ikhx) dx
     = (h/pi) sin(pi w/h) SUM_{k=-inf}^{inf} (-1)^k g(kh)/(w - kh)
     = SUM_{k=-inf}^{inf} g(kh) sinc((w - kh)/h).

Hence for f in B(h), f(w) = C(f,h)(w) for all complex w. Note that this is a stronger result than originally sought. The initial quest was for a class of functions such that the Whittaker series was exact on R. An even stronger statement is derived from Theorem (2.16) and the identity

(2.18)  (1/h) INT_{-inf}^{inf} S(k,h)(t) S(l,h)(t) dt = 1 if l = k, 0 if l != k;

that is,

Theorem (2.19): The set { h^{-1/2} S(k,h) } is a complete orthonormal set in B(h) [10].

Unfortunately the set B(h) is extremely restrictive, and some relaxation is necessary if (2.5) is to be used as a practical interpolatory tool. McNamee et al. [10] identify a set, here called B^p(D_S), where C(f,h) is not exact but its approximation to f is very good. In particular, the domain of analyticity for B^p(D_S) is D_S.

Definition (2.20): D_S = { z : z = x + iy, x in R, |y| < d }, d > 0.

Figure 4. The Domain D_S.

The class B^p(D_S) is specified by Definition (2.21).

Definition (2.21): B^p(D_S) is the set of functions f such that

(2.22)  f is analytic in D_S;

(2.23)  INT_{-d}^{d} |f(t + iy)| dy = O(|t|^a) as t -> +-inf, where 0 <= a < 1;

and, for p = 1 or 2,

(2.24)  N_p(f,D_S) = lim_{y->d-} [ INT_{-inf}^{inf} |f(t + iy)|^p dt + INT_{-inf}^{inf} |f(t - iy)|^p dt ]^{1/p} < inf.

The exact form of the error is given by Theorem (2.25).

Theorem (2.25): If f is in B^p(D_S) then eps(f)(x) = f(x) - C(f,h)(x), where

(2.26)  eps(f)(x) = (sin(pi x/h)/(2 pi i)) INT_{-inf}^{inf} { f(t-id)/[(t-x-id) sin(pi(t-id)/h)] - f(t+id)/[(t-x+id) sin(pi(t+id)/h)] } dt.

Moreover, if f is in B^1(D_S) then

(2.27)  ||eps(f)||_inf <= N_1(f,D_S) / [2 pi d sinh(pi d/h)],

while if f is in B^2(D_S) then

(2.28)  ||eps(f)||_inf <= N_2(f,D_S) / [2 sqrt(pi d) sinh(pi d/h)]

and

(2.29)  ||eps(f)||_2 <= N_2(f,D_S) / sinh(pi d/h).

For a proof see Stenger [11]. Worth noting is that the error statement of Theorem (2.25) is valid only on the real line. Theorem (2.16), however, applies to the complex plane. The original goal was approximating on the real line, and Theorem (2.25) certainly satisfies that.

Of far greater interest is the order statement derived from (2.27), (2.28), and (2.29). As h -> 0, sinh(pi d/h) -> inf; hence, ||eps(f)||_inf -> 0 independent of whether f is in B^1(D_S) or B^2(D_S). Moreover, the rate of convergence is governed by sinh(pi d/h); i.e., 1/sinh(pi d/h) = O(exp(-pi d/h)) as h -> 0.

Although the exponential convergence rate is attractive, to be of practical importance it must be maintained when (2.5) is truncated. Denote the truncated Whittaker series for a function f by

(2.30)  C_{M,N}(f,h)(x) = SUM_{k=-M}^{N} f(kh) sinc((x - kh)/h),

where it is assumed C(f,h)(x) converges.

Theorem (2.31): If f is in B^p(D_S) for p = 1 or 2, d > 0, and there exist positive constants alpha and beta such that

(2.32)  |f(x)| <= L exp(alpha x) for x in (-inf,0) and |f(x)| <= L exp(-beta x) for x in [0,inf),

then choosing

(2.33)  N = [alpha M / beta + 1]

([.] denoting the greatest integer) and

(2.34)  h = (pi d / (alpha M))^{1/2}

gives

(2.35)  ||f - C_{M,N}(f,h)||_inf <= C M^{1/2} exp(-(pi d alpha M)^{1/2}),

where C is a constant dependent on f.

Proof: From Theorem (2.25) there exists a constant L_1 such that |f(x) - C(f,h)(x)| <= L_1 exp(-pi d/h) for all x in R. Using the triangle inequality and (2.32),

|f(x) - C_{M,N}(f,h)(x)| <= L_1 exp(-pi d/h) + SUM_{k=M+1}^{inf} |f(-kh)| + SUM_{k=N+1}^{inf} |f(kh)|
  <= L_1 exp(-pi d/h) + L [ SUM_{k=M+1}^{inf} exp(-alpha kh) + SUM_{k=N+1}^{inf} exp(-beta kh) ]
  <= L_1 exp(-pi d/h) + (L/h) [ (1/alpha) exp(-alpha Mh) + (1/beta) exp(-beta Nh) ].

Now if N and h are defined by (2.33) and (2.34), then

|f(x) - C_{M,N}(f,h)(x)| <= L_1 exp(-(pi d alpha M)^{1/2}) + K M^{1/2} exp(-(pi d alpha M)^{1/2}) = C M^{1/2} exp(-(pi d alpha M)^{1/2}).

The choice of N and h is dictated by balancing asymptotic errors. The truncation errors for the lower and upper sums are of orders O(exp(-alpha Mh)) and O(exp(-beta Nh)), respectively. Equating these order statements gives N = alpha M / beta; (2.33) is a convenience to guarantee N is an integer. The choice for h is deduced by equating the truncation error order O(exp(-alpha Mh)) and the order of the interpolation error O(exp(-pi d/h)). Hence, truncating sums for computational feasibility need not be at the expense of the exponential convergence rate. This rate carries over during the development of the sinc quadrature rule.

Quadrature on (-inf, inf)

The interpolation results just reviewed are the groundwork for the derivation of the sinc quadrature rule on the real line. As with interpolation, the quadrature formula involves infinite sums; hence, a primary task is deducing the error caused by truncation. This leads to a numerically practical means of estimating an integral over the real line. The use of conformal maps generalizes the sinc quadrature to alternative curves.

A few preliminaries are necessary before stating the quadrature theorem analogous to Theorem (2.25).

Lemma (2.36): INT_{-inf}^{inf} sinc((x - kh)/h) dx = h, h > 0, k in Z.

Lemma (2.37): For a > 0,

INT_{-inf}^{inf} exp(iax)/(x - z_0) dx = 2 pi i exp(ia z_0) if Im(z_0) > 0, and 0 if Im(z_0) < 0.

Lemmas (2.36) and (2.37) are proved using standard contour integration.

Lemma (2.38): If f is in L^1(R) then

INT_{-inf}^{inf} C(f,h)(x) dx = h SUM_{k=-inf}^{inf} f(kh),

where h > 0 and k in Z.

Proof: By the integral test SUM |f(kh)| converges. Since |f(kh) sinc((x - kh)/h)| <= |f(kh)|, the series SUM f(kh) sinc((x - kh)/h) converges absolutely and, therefore, by the Weierstrass M-test, uniformly. Thus

INT_{-inf}^{inf} C(f,h)(x) dx = SUM_{k=-inf}^{inf} f(kh) INT_{-inf}^{inf} sinc((x - kh)/h) dx = h SUM_{k=-inf}^{inf} f(kh).

These three lemmas, along with Theorem (2.25), provide the background to prove the following.

Theorem (2.39): If f is in B(D_S) = B^1(D_S) then

(2.40)  INT_{-inf}^{inf} f(x) dx = h SUM_{k=-inf}^{inf} f(kh) + eta(f),

where

(2.41)  eta(f) = INT_{-inf}^{inf} eps(f)(x) dx

and

(2.42)  |eta(f)| <= exp(-pi d/h) N(f,D_S) / [2 sinh(pi d/h)].

Here eps(f)(x) and N(f,D_S) are defined in (2.26) and (2.24) (with p = 1), respectively.

Proof: By Lemma (2.38),

INT eps(f)(x) dx = INT f(x) dx - INT C(f,h)(x) dx = INT f(x) dx - h SUM_{k=-inf}^{inf} f(kh).

From the left-hand side, using (2.26), Fubini's Theorem, and (2.37) with a = pi/h and z_0 = t +- id,

eta(f) = (exp(-pi d/h)/(2i)) INT_{-inf}^{inf} { f(t-id) exp(-i pi t/h) / sin(pi(t-id)/h) - f(t+id) exp(i pi t/h) / sin(pi(t+id)/h) } dt.

Bounding the last expression via |sin(pi(t +- id)/h)| >= sinh(pi d/h) leads to (2.42).

The result of Theorem (2.39) is that the sinc quadrature (2.40) is the familiar trapezoidal rule on the real line. However, the rate of convergence is O(exp(-2 pi d/h)) rather than the usual O(h^2) that is associated with the trapezoidal rule when f'' is bounded. The restriction f in B(D_S) accounts for the sinc function properties which, in turn, lead to the vastly accelerated convergence rate.

As with interpolation, numerical practicality calls for truncating the sinc quadrature, nee trapezoidal, series. Define the truncated trapezoidal series

(2.43)  T_{M,N}(f,h) = h SUM_{k=-M}^{N} f(kh),  h > 0.

Analogous to Theorem (2.31) is Theorem (2.44).

Theorem (2.44): If f is in B(D_S), d > 0, and f satisfies (2.32) for some positive constants alpha and beta, then choosing N as in (2.33) and h = (2 pi d / (alpha M))^{1/2} gives

(2.45)  | INT_{-inf}^{inf} f(x) dx - T_{M,N}(f,h) | <= K_1 exp(-2 pi d/h) + (K/alpha) exp(-alpha Mh) + (K/beta) exp(-beta Nh) <= C exp(-(2 pi d alpha M)^{1/2}),

where K_1, K, and C are constants.

The proof follows in a like fashion to that of Theorem (2.31); for greater detail see Stenger [12], [13] and Lund [6]. Once again the selections for N and h are motivated by asymptotically balancing errors. Finally, note the increased rate of convergence for quadrature, O(exp(-(2 pi d alpha M)^{1/2})), versus interpolation, O(exp(-(pi d alpha M)^{1/2})).

Interpolation and Quadrature on Alternative Arcs

The preceding results hold for x in (-inf, inf). To be useful in building a Sinc-Galerkin approximate for the solution of a differential equation, the results must be generalized to alternative intervals. Stenger [5] shows that conformal maps provide the means of extension. His discussion is briefly outlined here with the statement of the quadrature rule as the goal in mind.

Let D be a simply connected domain and D_S be as in Definition (2.20). Given a, b on the boundary of D such that a != b, there is a conformal map X : D -> D_S (see Figure 5) satisfying

(2.46)  X(z) -> -inf as z -> a and X(z) -> +inf as z -> b,

and the inverse map Psi = X^{-1} defines the arc

(2.47)  gamma = Psi(R) = X^{-1}(R).

Figure 5. The Conformal Map X.

With respect to D, the following definition is similar to (2.21).

Definition (2.48): Let B(D) be the class of functions f such that f is holomorphic on D;

(2.49)  INT_{Psi(t+L)} |f(w) dw| = O(|t|^a) as t -> +-inf, where a in [0,1) and L = { iy : |y| < d };

and

(2.50)  N(f,D) = lim inf_{C -> boundary of D} INT_C |f(w) dw| < inf,

where C is a simple closed curve in D.

Rather than developing interpolation and quadrature rules for D from scratch, two theorems provide a natural link to the past work.

Theorem (2.51): If Psi is a conformal map of D_S onto the simply connected domain D and if f is in B(D), then F is in B(D_S), where

(2.52)  F(w) = f(Psi(w)) Psi'(w).

Also,

Theorem (2.53): If X : D -> D_S is a conformal, one-to-one and onto mapping, then Psi = X^{-1} is conformal.

Stenger [13] remarks on Theorem (2.51), while Theorem (2.53) is a standard complex analysis result. Finally, a revised version of the exponential decay property (2.32) is needed.

Definition (2.54): Let X be a conformal map satisfying the conditions of Theorem (2.53), and let gamma = Psi(R) = X^{-1}(R). Then f in B(D) is said to decay exponentially with respect to X if there exist positive constants K, alpha, and beta such that

(2.55)  |f(tau)| <= K exp(-alpha |X(tau)|) for tau in gamma_L, and |f(tau)| <= K exp(-beta |X(tau)|) for tau in gamma_R,

where

(2.56)  gamma_L = { z : z in Psi((-inf,0)) } and gamma_R = { z : z in Psi([0,inf)) }.

Theorem (2.51) and Definition (2.54) suggest that the conformal map is incorporated into the previous theorems and definitions quite easily. The interpolation and quadrature rules that follow support this notion.

Theorem (2.57): Let f in B(D) and X : D -> D_S satisfy the hypotheses of Theorem (2.53). Further, suppose f/X' decays exponentially with respect to X. If z in gamma = Psi(R) = X^{-1}(R) and z_k = Psi(kh), k in Z, h > 0, then selecting N and h as in (2.33) and (2.34), respectively, gives

(2.58)  | f(z)/X'(z) - SUM_{k=-M}^{N} [f(z_k)/X'(z_k)] S(k,h)(X(z)) | <= K exp(-(pi d alpha M)^{1/2}),

where K is a constant independent of z.

Note that to interpolate f rather than f/X', simply let F = f X' and substitute F into (2.58), assuming F satisfies the hypotheses of Theorem (2.57). Equation (2.58) shows that the rate of convergence for sinc interpolation on a curve gamma remains O(exp(-(pi d alpha M)^{1/2})). Similarly, the rate of convergence for the sinc quadrature rule is unchanged.

Theorem (2.59): (a) If f is in B(D) and X, gamma, z, and z_k are as in Theorem (2.57), then for h sufficiently small

(2.60)  | INT_gamma f(z) dz - h SUM_{k=-inf}^{inf} f(z_k)/X'(z_k) | <= [N(f,D)/(1 - exp(-2 pi d/h))] exp(-2 pi d/h) = K_2 exp(-2 pi d/h).

(b) Further, if f/X' decays exponentially with respect to X, then

(2.61)  | INT_gamma f(z) dz - h SUM_{k=-M}^{N} f(z_k)/X'(z_k) | <= K_2 exp(-2 pi d/h) + (K/alpha) exp(-alpha Mh) + (K/beta) exp(-beta Nh).

(c) Finally, if N satisfies (2.33) and h = (2 pi d / (alpha M))^{1/2}, then for some constant C depending on f,

(2.62)  K_2 exp(-2 pi d/h) + (K/alpha) exp(-alpha Mh) + (K/beta) exp(-beta Nh) <= C exp(-(2 pi d alpha M)^{1/2}).

Stenger [12] provides proofs for centered sums, that is, M = N. Lund [6] expands on these to handle M != N. Both of their proofs illustrate the rationale for the use of conformal maps; namely, under suitable assumptions the maps are a means of transferring a standard set of results from D_S to various domains D. Most significant is that conformality preserves the error.

Sinc-Galerkin Method

The previous discussion is the background necessary to approach the numerical solution of differential equations via the Sinc-Galerkin method. For the purposes of this work, it is sufficient to examine the method applied to a specific class of differential equations. Hence, consider the second-order, self-adjoint boundary value problem

(2.63)  Lf(x) = f''(x) + v(x)f(x) = sigma(x),  a < x < b,
        f(a) = f(b) = 0.

Let D be a simply connected domain with

(2.64)  {(a,0), (b,0)} contained in the boundary of D

and

(2.65)  {(x,0) : a < x < b} contained in D.

With regard to D, the Sinc-Galerkin method to approximate the solution of (2.63) is summarized as follows. Define the set of basis functions {S_i} by

(2.66)  S_i(x) = S(i,h)(X(x)),  h > 0,

where X : D -> D_S is described in (2.46), (2.47), and Theorem (2.53); see Figure 5. Next define the approximate solution by

(2.67)  f_m(x) = SUM_{i=-M}^{N} f_i S_i(x),  m = M + N + 1.

To determine the unknown coefficients f_i, orthogonalize the residual with respect to the basis elements; that is,

(2.68)  0 = (Lf_m - sigma, S_p) = (f_m'', S_p) + (v f_m - sigma, S_p),  -M <= p <= N.

Here the inner product is

(2.69)  (u,v) = INT_a^b u(x) v(x) w(x) dx,

where w is a weight function yet to be specified, and the quadrature rule used to evaluate the inner product is (2.61). The resulting discrete linear system is solved for f_i, -M <= i <= N.

A general review of Galerkin methods is found in Botha and Pinder [3]. Their discussion includes several criteria by which to judge the quality of a method: (i) the ease of constructing the basis elements; (ii) the ease of evaluating the inner product; (iii) the ease of solving the system; and (iv) the accuracy of the method. With regard to the Sinc-Galerkin method, the first is demonstrated by the sinc basis elements (2.66), while the second is shown to depend on the sinc quadrature rule. For the third, it is shown that an adroit choice of weight function leads to a system which is easily solved. Lastly, the accuracy follows directly from Theorem (2.59).

Stenger [5] thoroughly discusses w = 1/X' and Lund [6] introduces w = 1/(X')^{1/2}. Stenger's choice of weight function handles regular singular problems but yields a nonsymmetric linear system. Lund's weight function results in a symmetric linear system while possibly restricting the class of functions to which the method applies. The development of the discrete system for a general weight motivates the choice w = 1/(X')^{1/2} as well as addresses the questions raised in the previous paragraph.

Continuing with (2.68), for -M <= p <= N integrate the inner product (f'', S_p) by parts twice to get

(2.70)  0 = INT_a^b f''(x) S_p(x) w(x) dx + INT_a^b [v(x) f(x) - sigma(x)] S_p(x) w(x) dx
        = BT + INT_a^b f(x) { [S_p(x) w(x)]'' + v(x) S_p(x) w(x) } dx - INT_a^b sigma(x) S_p(x) w(x) dx
        = BT + INT_a^b f(x) [ (d^2 S(p,h)/dX^2)(X(x)) (X'(x))^2 w(x)
              + (d S(p,h)/dX)(X(x)) (X''(x) w(x) + 2 X'(x) w'(x))
              + S(p,h)(X(x)) (w''(x) + v(x) w(x)) ] dx
          - INT_a^b sigma(x) S(p,h)(X(x)) w(x) dx,

where BT is the boundary term

(2.71)  BT = { f'(x) S_p(x) w(x) - f(x) [S_p(x) w(x)]' } evaluated from a to b.

For now, BT is assumed to vanish. The exact assumptions governing BT = 0 are discussed in Chapter 3, where specific conformal maps are used. To apply the sinc quadrature (2.61) to (2.70), several suppositions are necessary. First, f [S_p w]'' satisfies the hypotheses of Theorem (2.59). Second, f v S_p w and sigma S_p w are in B(D). It is unnecessary for f v S_p w / X' and sigma S_p w / X' to decay exponentially with respect to X. That is because, for g in B(D), the quadrature rule (2.60) applied to g integrated against a sinc basis element yields point evaluation; that is, with x_p = Psi(ph),

(2.72)  INT_a^b g(x) S_p(x) dx = h g(x_p)/X'(x_p) + O(exp(-pi d/h)).

To proceed, the following three identities are useful:

(2.73)  S(p,h)(X(x)) at x = x_i equals delta^(0)_pi = 1 if p = i, 0 if p != i;

(2.74)  h (d/dX) S(p,h)(X(x)) at x = x_i equals delta^(1)_pi = 0 if p = i, (-1)^{i-p}/(i-p) if p != i;

(2.75)  h^2 (d^2/dX^2) S(p,h)(X(x)) at x = x_i equals delta^(2)_pi = -pi^2/3 if p = i, -2(-1)^{i-p}/(i-p)^2 if p != i.

Applying the quadrature rule to (2.70) yields

(2.76)  SUM_{i=-M}^{N} f(x_i) { (1/h^2) delta^(2)_pi X'(x_i) w(x_i)
            + (1/h) delta^(1)_pi [ (X''(x_i)/X'(x_i)) w(x_i) + 2 w'(x_i) ]
            + delta^(0)_pi (w''(x_i) + v(x_i) w(x_i)) / X'(x_i) }
        = sigma(x_p) w(x_p)/X'(x_p) + O(exp(-(pi d alpha M)^{1/2})).

An equivalent matrix formulation is

(2.77)  [ (1/h^2) I^(2) D(X'w) + (1/h) I^(1) D(X''w/X' + 2w') + D((w'' + vw)/X') ] fvec = D(w/X') sigmavec,

where

(2.78)  fvec = (f_{-M}, f_{-M+1}, ..., f_{N-1}, f_N)^T,  sigmavec = (sigma(x_{-M}), ..., sigma(x_N))^T,

and, for a function g, D(g) is the m x m diagonal matrix

(2.79)  D(g) = diag[ g(x_{-M}), g(x_{-M+1}), ..., g(x_N) ].

The m x m matrices I^(2) = [delta^(2)_pi] and I^(1) = [delta^(1)_pi] are the symmetric and skew-symmetric Toeplitz matrices

(2.80)  I^(2) =
        [ -pi^2/3      2         -2/2^2    ...   -2(-1)^{m-1}/(m-1)^2 ]
        [    2       -pi^2/3       2       ...                        ]
        [  -2/2^2      2         -pi^2/3   ...                        ]
        [    :                              .            2            ]
        [ -2(-1)^{m-1}/(m-1)^2    ...        2        -pi^2/3         ]

and

(2.81)  I^(1) =
        [    0         -1          1/2     ...   (-1)^{m-1}/(m-1) ]
        [    1          0         -1       ...                    ]
        [  -1/2         1          0       ...                    ]
        [    :                              .           -1        ]
        [ -(-1)^{m-1}/(m-1)       ...        1           0        ]

For an arbitrary weight function it is unclear how to solve (2.77). In particular, the skew-symmetric nature of I^(1) causes difficulties. It can be argued that as h -> 0, I^(2) is the dominant matrix in the system.
Here, the inner product is

(2.69)
(u,v) = \int_a^b u(x)\, v(x)\, w(x)\, dx

where w is a weight function yet to be specified, and the quadrature rule used to evaluate the inner product is (2.61). The resulting discrete linear system is solved for f_i, −M ≤ i ≤ N.

A general review of Galerkin methods is found in Botha and Pinder [3]. Their discussion includes several criteria by which to judge the quality of a method. These include: (i) the ease of constructing the basis elements; (ii) the ease of evaluating the inner product; (iii) the ease of solving the system; and (iv) the accuracy of the method. With regard to the Sinc-Galerkin method, the first is demonstrated by (2.66), while the second is shown to depend on the sinc basis elements and the sinc quadrature rule. For the third, it is shown that an adroit choice of weight function leads to a system which is easily solved. Lastly, the accuracy follows directly from Theorem (2.59).

Stenger [5] thoroughly discusses w = 1/χ' and Lund [6] introduces w = 1/(χ')^{1/2}. Stenger's choice of weight function handles regular singular problems but yields a nonsymmetric linear system. Lund's weight function results in a symmetric linear system while possibly restricting the class of functions to which the method applies. The development of the discrete system for a general weight motivates the choice w = 1/(χ')^{1/2} as well as addresses the questions raised in the previous paragraph.

Continuing with (2.68), for −M ≤ p ≤ N integrate the inner product (f'', S_p) by parts twice to get

(2.70)
\begin{aligned}
0 &= \int_a^b f''(x)\, S_p(x)\, w(x)\, dx + \int_a^b [v(x) f(x) - \sigma(x)]\, S_p(x)\, w(x)\, dx \\
  &= BT + \int_a^b f(x) \{ [S_p(x) w(x)]'' + v(x) S_p(x) w(x) \}\, dx - \int_a^b \sigma(x)\, S_p(x)\, w(x)\, dx \\
  &= BT + \int_a^b f(x) \left\{ \left[ \frac{d^2}{d\chi^2} S(p,h)\circ\chi(x) \right] (\chi'(x))^2 w(x)
   + \left[ \frac{d}{d\chi} S(p,h)\circ\chi(x) \right] \big( \chi''(x) w(x) + 2\chi'(x) w'(x) \big) \right. \\
  &\qquad\qquad \left. + \ S(p,h)\circ\chi(x) \big( w''(x) + v(x) w(x) \big) \right\} dx
   - \int_a^b \sigma(x)\, S(p,h)\circ\chi(x)\, w(x)\, dx
\end{aligned}

where BT is the boundary term

(2.71)
BT = \left\{ f'(x)\, S_p(x)\, w(x) - f(x)\, [S_p(x) w(x)]' \right\} \Big|_a^b .

For now, BT is assumed to vanish.
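The coefficient χ''w + 2χ'w' of the first χ-derivative in the expansion above is what the weight w = 1/(χ')^{1/2} is designed to annihilate. A minimal numerical check, using the map χ(x) = ln(x/(1−x)) for (0,1) that appears later in Chapter 3, also confirms the companion identity w'' = −(1/4)(χ')^{3/2} that reappears in (3.20); the derivatives of w are formed by central differences so the check is not circular:

```python
import numpy as np

# phi(x) = ln(x/(1-x)) on (0,1), Lund's weight w = (phi')^{-1/2}.
# Verify numerically: phi''w/phi' + 2w' = 0 and w'' = -(1/4)(phi')^{3/2}.
x = np.array([0.2, 0.4, 0.6, 0.8])
e = 1e-5

phi1 = 1.0 / (x * (1.0 - x))              # phi'
phi2 = (2.0 * x - 1.0) * phi1**2          # phi'' = -1/x^2 + 1/(1-x)^2
w = lambda s: np.sqrt(s * (1.0 - s))      # (phi')^{-1/2}
w1 = (w(x + e) - w(x - e)) / (2.0 * e)            # central difference w'
w2 = (w(x + e) - 2.0 * w(x) + w(x - e)) / e**2    # central difference w''

c1 = np.max(np.abs(phi2 * w(x) / phi1 + 2.0 * w1))
c2 = np.max(np.abs(w2 + 0.25 * phi1**1.5))
print(c1, c2)
```

Both residuals vanish to within the finite-difference accuracy, which is why the skew-symmetric I^{(1)} term drops out of the discrete system below.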
The exact assumptions governing BT = 0 are discussed in Chapter 3, where specific conformal maps are used. To apply the sinc quadrature rule (2.61) to (2.70), several suppositions are necessary. First, f[S_p w]'' satisfies the hypotheses of Theorem (2.59). Second, fvS_pw and σS_pw are in B(D). It is unnecessary for fvS_pw/χ' and σS_pw/χ' to decay exponentially with respect to χ. This is because for g ∈ B(D), the quadrature rule (2.60) applied to g integrated against a sinc basis element yields point evaluation; that is,

(2.72)
\int_a^b g(x)\, S_p(x)\, dx = \frac{h\, g(x_p)}{\chi'(x_p)} + O(e^{-\pi d/h}) .

To proceed, the following three identities are useful:

(2.73)
\delta_{pi}^{(0)} \equiv \left. S(p,h)\circ\chi(x) \right|_{x = x_i} = \begin{cases} 1, & p = i \\ 0, & p \ne i \end{cases}

(2.74)
\delta_{pi}^{(1)} \equiv h \left. \frac{d}{d\chi} S(p,h)\circ\chi(x) \right|_{x = x_i} = \begin{cases} 0, & p = i \\ \dfrac{(-1)^{i-p}}{i-p}, & p \ne i \end{cases}

(2.75)
\delta_{pi}^{(2)} \equiv h^2 \left. \frac{d^2}{d\chi^2} S(p,h)\circ\chi(x) \right|_{x = x_i} = \begin{cases} -\dfrac{\pi^2}{3}, & p = i \\ \dfrac{-2(-1)^{i-p}}{(i-p)^2}, & p \ne i . \end{cases}

Applying the quadrature rule to (2.70) yields, for −M ≤ p ≤ N,

(2.76)
\sum_{i=-M}^{N} f(x_i) \left\{ \frac{1}{h^2}\, \delta_{pi}^{(2)}\, \chi'(x_i) w(x_i)
 + \frac{1}{h}\, \delta_{pi}^{(1)} \left[ \frac{\chi''(x_i)}{\chi'(x_i)}\, w(x_i) + 2 w'(x_i) \right]
 + \delta_{pi}^{(0)}\, \frac{w''(x_i) + v(x_i) w(x_i)}{\chi'(x_i)} \right\}
 = \frac{\sigma(x_p)\, w(x_p)}{\chi'(x_p)} + O(e^{-(\pi d \alpha M)^{1/2}}) .

An equivalent matrix formulation is

(2.77)
\left[ \frac{1}{h^2}\, I^{(2)} D(\chi' w) + \frac{1}{h}\, I^{(1)} D\!\left( \frac{\chi''}{\chi'}\, w + 2 w' \right) + D\!\left( \frac{w'' + v w}{\chi'} \right) \right] \mathbf{f} = D\!\left( \frac{\sigma w}{\chi'} \right) \mathbf{1}

where

(2.78)
\mathbf{f} = (f_{-M},\, f_{-M+1},\, \ldots,\, f_{N-1},\, f_N)^T ,

(2.79)
D(g) = \mathrm{diag}\,[\, g(x_{-M}),\, g(x_{-M+1}),\, \ldots,\, g(x_N)\, ]_{m \times m} ,

𝟙 is the m × 1 vector of ones,

(2.80)
I^{(2)} = [\delta_{pi}^{(2)}] =
\begin{bmatrix}
-\frac{\pi^2}{3} & 2 & -\frac{1}{2} & \cdots & \frac{-2(-1)^{m-1}}{(m-1)^2} \\
2 & -\frac{\pi^2}{3} & 2 & & \vdots \\
-\frac{1}{2} & 2 & \ddots & & \\
\vdots & & & \ddots & 2 \\
\frac{-2(-1)^{m-1}}{(m-1)^2} & \cdots & & 2 & -\frac{\pi^2}{3}
\end{bmatrix}_{m \times m}

and

(2.81)
I^{(1)} = [\delta_{pi}^{(1)}] =
\begin{bmatrix}
0 & -1 & \frac{1}{2} & \cdots & \frac{(-1)^{m-1}}{m-1} \\
1 & 0 & -1 & & \vdots \\
-\frac{1}{2} & 1 & \ddots & & \\
\vdots & & & \ddots & -1 \\
\frac{-(-1)^{m-1}}{m-1} & \cdots & & 1 & 0
\end{bmatrix}_{m \times m} .

For an arbitrary weight function it is unclear how to solve (2.77). In particular, the skew-symmetric nature of I^{(1)} causes difficulties. It can be argued that as h → 0, I^{(2)} is the dominant matrix in the system. This is somewhat satisfying in that I^{(2)} is a symmetric, negative definite Toeplitz matrix whose spectrum lies in the interval (−π², 0) [6]. Lund's notion is to consider

(2.82)
\frac{\chi''}{\chi'}\, w + 2 w' = 0

or, equivalently,

(2.83)
\frac{w'}{w} = -\frac{\chi''}{2\chi'} .

Integrating both sides of (2.83) yields w = 1/(χ')^{1/2}, and the system simplifies to

(2.84)
\left[ \frac{1}{h^2}\, I^{(2)} D((\chi')^{1/2}) + D\!\left( \frac{((\chi')^{-1/2})'' + v\, (\chi')^{-1/2}}{\chi'} \right) \right] \mathbf{f} = D\!\left( \frac{\sigma}{(\chi')^{3/2}} \right) \mathbf{1} .

Multiplying through by D(χ') and factoring gives

(2.85)
A\, \mathbf{t} = D\!\left( \frac{\sigma}{(\chi')^{1/2}} \right) \mathbf{1}

where

(2.86)
A = D(\chi') \left[ \frac{1}{h^2}\, I^{(2)} + D\!\left( \frac{((\chi')^{-1/2})'' + v\, (\chi')^{-1/2}}{(\chi')^{3/2}} \right) \right] D(\chi')

and

(2.87)
\mathbf{t} = D( (\chi')^{-1/2} )\, \mathbf{f} .

The matrix A is real, symmetric, and negative definite; hence there exists an orthogonal matrix Q such that

(2.88)
A = Q \Lambda Q^T

where Λ is the diagonal matrix of the eigenvalues of A. Solving for 𝐭,

(2.89)
\mathbf{t} = Q \Lambda^{-1} Q^T D\!\left( \frac{\sigma}{(\chi')^{1/2}} \right) \mathbf{1}

and, in turn, 𝐟 is recovered via

(2.90)
\mathbf{f} = D( (\chi')^{1/2} )\, \mathbf{t} .

In review, a symmetric coefficient matrix for the self-adjoint problem (2.63) is attained by selecting w = 1/(χ')^{1/2}. The class of singular problems that w = 1/(χ')^{1/2} handles appears to be somewhat more restricted than if w = 1/χ' [6]. However, this statement is dependent on the method of proof in [6]. When w = 1/(χ')^{1/2} is known to apply to a singular problem, the rate of convergence remains O(exp(−κ√N)), where 2N + 1 basis functions are used.

CHAPTER 3

THE SINC-GALERKIN METHOD FOR THE WAVE EQUATION

Various numerical methods for solving general second-order hyperbolic problems can be found in the literature [3]. Usually a scheme is first developed for the classic vibrating string problem

(3.1)
\begin{aligned}
Lu(x,t) &\equiv u_{tt}(x,t) - u_{xx}(x,t) = f(x,t), \quad (x,t) \in (0,1)\times(0,\infty) \\
u(0,t) &= u(1,t) = 0, \quad t \in [0,\infty) \\
u(x,0) &= u_t(x,0) = 0, \quad x \in [0,1] .
\end{aligned}

Once this model problem is investigated, several generalizing routes are possible. Among them are adaptations to handle nonlinearities, nonconstant coefficients, or higher space dimensions.
With regard to the Sinc-Galerkin method, the present work examines higher space dimensions. In this chapter the discrete linear Sinc-Galerkin system for (3.1) is derived in some detail, and from it a natural extension to the linear systems for two and three space dimensions is exhibited. The actual solution of the systems is postponed until Chapter 4.

A common procedure for solving (3.1) is a Galerkin discretization of the spatial domain with time-dependent coefficients. This gives a system of ordinary differential equations typically solved by difference techniques (O.D.E. solvers) [14]. Drawbacks of this technique include the necessity to artificially truncate the time domain and the fact that the numerical solution is valid only on a finite time grid. In contrast, the present work implements a Galerkin scheme in time as well as space. The success of the method is largely due to the choice of basis elements; these basis elements are tensor products of sinc functions composed with suitable conformal maps for the intervals (0,1) and (0,∞). The conformal map for (0,∞) is the means for building an approximate solution to (3.1) valid on the infinite time domain.

The structure of the discrete Sinc-Galerkin system is computationally efficient for at least two reasons. As a result of the identities (2.73)-(2.75) and the sinc quadrature rule (2.61), the coefficients of the unknowns and the constant terms are easily evaluated. Note that this property is independent of the weight function. A property which is intimately connected to the selection of the weight is the symmetry of the system. In the present chapter it is shown that a generalization of w = 1/(χ')^{1/2} leads to a symmetric linear system. Two matrix formulations for the discrete system are given. The more convenient formulation to implement is dependent on the computational environment, i.e., the available computer hardware.
To determine the Sinc-Galerkin approximate solution of (3.1), the fundamental steps remain unchanged. First, select a set of basis functions; then use these basis functions to define an approximate solution. The unknown coefficients are determined by orthogonalizing the residual with respect to the basis elements. The inner product used in the orthogonalization is defined with a weight function and evaluated by means of the quadrature rule (2.61). To construct the basis functions on the region {(x,t): 0 < x < 1, 0 < t < ∞}, basis functions are built on the intervals (0,1) and (0,∞), and then their tensor products are formed. So to begin, define the conformal maps

(3.2)
\phi(z) = \ln\!\left( \frac{z}{1-z} \right)

and

(3.3)
\Gamma(w) = \ln(w) .

The map φ carries the eye-shaped region

(3.4)
D_E = \left\{ z = x + iy : \left| \arg\!\left( \frac{z}{1-z} \right) \right| < d \le \frac{\pi}{2} \right\}

onto the infinite strip D_S given by Definition (2.20). Similarly, the map Γ carries the infinite wedge

(3.5)
D_W = \left\{ w = t + is : |\arg(w)| < d \le \frac{\pi}{2} \right\}

onto D_S. These regions are depicted in Figure 6.

Figure 6: The Regions D_E, D_W, and D_S.

The compositions

(3.6)
S_i(x) = S(i, h_x) \circ \phi(x)

and

(3.7)
S_j^*(t) = S(j, h_t) \circ \Gamma(t)

define the basis elements for (0,1) and (0,∞), respectively (see Figure 7 below). The "mesh sizes" h_x and h_t represent the mesh sizes in D_S for the uniform grids (kh_x), −∞ < k < ∞, and (ph_t), −∞ < p < ∞. The sinc gridpoints x_k ∈ (0,1) in D_E and t_p ∈ (0,∞) in D_W are the inverse images of the equispaced grids; that is,

(3.8)
x_k = \phi^{-1}(k h_x) = \frac{e^{k h_x}}{1 + e^{k h_x}}

and

(3.9)
t_p = \Gamma^{-1}(p h_t) = e^{p h_t} .

Figure 7: The Basis Functions S_0(x) and S_0^*(t).

The fully Galerkin approximate solution is now defined by

(3.10)
u_{m_x, m_t}(x,t) = \sum_{i=-M_x}^{N_x} \sum_{j=-M_t}^{N_t} u_{ij}\, S_i(x)\, S_j^*(t), \quad m_x = M_x + N_x + 1, \quad m_t = M_t + N_t + 1 .

The inner product used to orthogonalize the residual is

(3.11)
(u,v) = \int_0^\infty \!\! \int_0^1 \frac{u(x,t)\, v(x,t)}{[\phi'(x)\, \Gamma'(t)]^{1/2}}\, dx\, dt

and may be viewed as the double integral analogue of (2.69) with w = 1/(χ')^{1/2}. Direct development of the discrete Sinc-Galerkin system for (3.1) is obtained via

(3.12)
0 = (Lu - f,\ S_k S_p^*), \quad -M_x \le k \le N_x, \quad -M_t \le p \le N_t .

However, this procedure obscures the important parameter selections needed to implement the method. The development of the one-dimensional Sinc-Galerkin systems for the problems

(3.13)
u''(t) = r(t), \quad 0 < t < \infty, \qquad u(0) = u'(0) = 0

and

(3.14)
u''(x) = s(x), \quad 0 < x < 1, \qquad u(0) = u(1) = 0

yields all of the sinc matrices for the two-dimensional problem while clearly disclosing the parameter selections just mentioned.

Beginning with (3.13), perform two integrations by parts in the orthogonalization of the residual (see (2.70) for detail). Once this is done, apply the quadrature rule (2.60) in Theorem (2.59), with χ replaced by Γ, to get

(3.15)
\begin{aligned}
0 &= \int_0^\infty [u''(t) - r(t)]\, \{ S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2} \}\, dt \\
  &= \int_0^\infty u(t)\, \{ S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2} \}''\, dt
   - \int_0^\infty r(t)\, \{ S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2} \}\, dt \\
  &= h_t \sum_{j=-\infty}^{\infty} u(t_j)\, (\Gamma'(t_j))^{1/2} \left[ \frac{1}{h_t^2}\, \delta_{pj}^{(2)} - \frac{1}{4}\, \delta_{pj}^{(0)} \right]
   - \frac{h_t\, r(t_p)}{(\Gamma'(t_p))^{3/2}} + I_u + I_r .
\end{aligned}

The second equality in (3.15) assumes the boundary term

(3.16)
\left\{ S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2}\, u'(t) - [S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2}]'\, u(t) \right\} \Big|_0^\infty

vanishes. Using the definitions of Γ (3.3) and S(p,h_t), this is true as long as

(3.17)
\lim_{t\to\infty} \frac{u'(t)\, \sqrt{t}}{\ln(t)} = 0 ,

(3.18)
\lim_{t\to 0^+} \frac{u(t)}{\sqrt{t}} = 0 ,

and

(3.19)
\lim_{t\to\infty} \frac{u(t)}{\sqrt{t}\, \ln(t)} = 0 .

With regard to the last equality in (3.15), the nodes t_p are defined by (3.9), and the identity

(3.20)
[(\Gamma'(t))^{-1/2}]'' = -\tfrac{1}{4}\, (\Gamma'(t))^{3/2}

is utilized. The integrals I_u and I_r are explicitly defined in [15] and represent the exact error terms in (2.60).
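The building block in the last equality of (3.15) is the sinc quadrature rule (2.61) on (0,∞): with Γ(t) = ln t, it reads ∫₀^∞ F dt ≈ h Σ_j F(t_j)/Γ'(t_j) = h Σ_j t_j F(t_j). A minimal sketch, with the integrand t e^{-t} (an assumption chosen for illustration, exact integral 1):

```python
import numpy as np

# Sinc quadrature on (0, inf) via Gamma(t) = ln t, nodes t_j = e^{jh}:
#   int_0^inf F(t) dt  ~=  h * sum_j t_j F(t_j).
# Test integrand (illustrative): F(t) = t e^{-t}, exact value 1.
M = N = 32
d, alpha = np.pi / 2, 2.0             # F/Gamma' = t^2 e^{-t} ~ t^2 near 0
h = np.sqrt(np.pi * d / (alpha * M))  # step selection of (2.34) type
t = np.exp(np.arange(-M, N + 1) * h)

approx = h * np.sum(t * (t * np.exp(-t)))
print(abs(approx - 1.0))              # decays like exp(-(2 pi d alpha M)^(1/2))
```

Note that the nodes crowd geometrically toward t = 0 and spread geometrically toward infinity, which is exactly what the map Γ buys on the semi-infinite time domain.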
From Theorem (2.59), I_u vanishes with order O(e^{-πd/h_t}) if

(3.21)
u(z)\, [ S(p,h_t)\circ\Gamma(z)\, (\Gamma'(z))^{-1/2} ]'' \in B(D_W)

while the same statement holds for I_r if

(3.22)
r(t)\, ( S(p,h_t)\circ\Gamma(t)\, (\Gamma'(t))^{-1/2} ) \in B(D_W) .

To truncate the infinite sum, note that u(t){S(p,h_t)∘Γ(t)(Γ'(t))^{-1/2}}'' behaves like u(t)(Γ'(t))^{3/2} near t = 0 and t = ∞. Hence, condition (2.55) simplifies to

(3.23)
| u(t)\, (\Gamma'(t))^{1/2} | \le \begin{cases} K\, t^{\gamma}, & t \in (0,1) \\ K\, t^{-\delta}, & t \in [1,\infty) \end{cases}

for positive constants K, γ, and δ. Since (3.23) implies (3.18) and (3.19), the only additional assumption necessary for (3.16) to vanish is (3.17). Applying the truncated quadrature rule (2.61) to (3.15) yields

(3.24)
\sum_{j=-M_t}^{N_t} u(t_j)\, (\Gamma'(t_j))^{1/2} \left[ \frac{1}{h_t^2}\, \delta_{pj}^{(2)} - \frac{1}{4}\, \delta_{pj}^{(0)} \right]
 - \frac{r(t_p)}{(\Gamma'(t_p))^{3/2}}
 = O(\exp(-\pi d/h_t)) + O(\exp(-\gamma M_t h_t)) + O(\exp(-\delta N_t h_t)) .

If h_t and N_t are chosen by

(3.25)
h_t = \left( \frac{\pi d}{\gamma M_t} \right)^{1/2}

and

(3.26)
N_t = \left[\!\left[ \frac{\gamma}{\delta}\, M_t \right]\!\right] + 1

(where [[·]] denotes the greatest integer part), then the errors are asymptotically balanced and the final linear system is

(3.27)
\sum_{j=-M_t}^{N_t} \left[ \frac{1}{h_t^2}\, \delta_{pj}^{(2)} - \frac{1}{4}\, \delta_{pj}^{(0)} \right] (\Gamma'(t_j))^{1/2}\, u(t_j)
 - \frac{r(t_p)}{(\Gamma'(t_p))^{3/2}} = O(\exp(-(\pi d \gamma M_t)^{1/2})) .

Therefore, if

(3.28)
u_{m_t}(t) \equiv \sum_{j=-M_t}^{N_t} u_j\, S(j,h_t)\circ\Gamma(t), \quad m_t = M_t + N_t + 1

is an assumed approximate solution of (3.13), then, since u_{m_t}(t_p) = u_p, the discrete Sinc-Galerkin system for (3.13) is defined by the left-hand side of (3.27) with u(t_j) replaced by u_j. When p = −M_t, ..., N_t, the matrix formulation for (3.27) follows from (2.84) and reads

(3.29)
A_t\, D((\Gamma')^{1/2})\, \mathbf{u} = D((\Gamma')^{-3/2}\, r)\, \mathbf{1}

where

(3.30)
A_t = \frac{1}{h_t^2}\, I^{(2)} - \frac{1}{4}\, I_{m_t}

and the remaining terms are defined by (2.78) and (2.79). A_t is a real symmetric, negative definite matrix. To maintain these properties in the discrete system (3.29), the change of variables

(3.31)
\mathbf{v} = D((\Gamma')^{-1/2})\, \mathbf{u}

leads to

(3.32)
\tilde{A}_t\, \mathbf{v} = D((\Gamma')^{-1/2}\, r)\, \mathbf{1}

where the matrix

(3.33)
\tilde{A}_t = D(\Gamma')\, A_t\, D(\Gamma') .

The solution 𝐮 is found via the procedure outlined in (2.88) through (2.90).
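The entire pipeline (3.25) through (3.33), followed by the eigensolve of (2.88)-(2.90), fits in a few lines. The sketch below uses a manufactured test problem that is an assumption for illustration only: u(t) = t² e^{-t}, so that r = u'' = (2 − 4t + t²)e^{-t}, with γ = 3/2 and d = π/2:

```python
import numpy as np

# Sinc-Galerkin solve of u'' = r on (0, inf), u(0) = u'(0) = 0,
# following (3.27)-(3.33) with Gamma(t) = ln t.
# Manufactured test (assumption): u(t) = t^2 e^{-t}.
M = N = 16
d, gamma = np.pi / 2, 1.5
h = np.sqrt(np.pi * d / (gamma * M))        # (3.25)
j = np.arange(-M, N + 1)
t = np.exp(j * h)                           # nodes (3.9)
m = M + N + 1

# I^(2) of (2.80) and A_t of (3.30)
P, I = np.meshgrid(j, j, indexing="ij")
k = I - P
I2 = np.where(k == 0, -np.pi**2 / 3.0,
              -2.0 * (-1.0) ** np.abs(k) / np.where(k == 0, 1, k) ** 2)
At = I2 / h**2 - np.eye(m) / 4.0
Dg = np.diag(1.0 / t)                       # D(Gamma'), Gamma'(t) = 1/t
Att = Dg @ At @ Dg                          # (3.33): symmetric, neg. definite

r = (2.0 - 4.0 * t + t**2) * np.exp(-t)
b = np.sqrt(t) * r                          # D((Gamma')^{-1/2} r) 1

lam, Q = np.linalg.eigh(Att)                # (2.88)
v = Q @ ((Q.T @ b) / lam)                   # (2.89)
u = v / np.sqrt(t)                          # invert (3.31): u = D((Gamma')^{1/2}) v

err = np.max(np.abs(u - t**2 * np.exp(-t)))
print(err)
```

Doubling M roughly squares the nodal error, consistent with the O(exp(−(πdγM_t)^{1/2})) rate in (3.27); the same three-line eigensolve reappears, once per dimension, in Chapter 4.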
Before considering (3.14), note that the selection of h_t in (3.25) asymptotically balances the error terms exp(−πd/h_t) and exp(−γM_t h_t), while the selection of N_t in (3.26) balances exp(−δN_t h_t) against exp(−γM_t h_t). The error exp(−δN_t h_t) arises as a consequence of the inequality (3.23), which assumes u(t) decays algebraically at infinity. For many problems the solution decays exponentially, as in

(3.34)
| u(t)\, (\Gamma'(t))^{1/2} | \le K\, e^{-\delta t}, \quad t \in [1,\infty) .

In this case, Lund [6] shows that the choice

(3.35)
N_t = \left[\!\left[ \frac{1}{h_t} \ln\!\left( \frac{\gamma M_t h_t}{\delta} \right) \right]\!\right] + 1

significantly reduces the size of the discrete systems solved with no loss of accuracy.

The preceding discussion applied to (3.14) follows a parallel development. The map Γ is replaced by the map φ of (3.2) (since φ is compatible with the interval (0,1)), and h_x is substituted for h_t. Again orthogonalizing the residual and integrating by parts twice leads to an equation similar to (3.15) with a boundary term on (0,1) analogous to (3.16). To guarantee the boundary term vanishes, it is assumed that

(3.36)
\lim_{x\to 0^+} \frac{u'(x)\, \sqrt{x}}{\ln(x)} = \lim_{x\to 1^-} \frac{u'(x)\, \sqrt{1-x}}{\ln(1-x)} = 0 .

Example 5.4 in Chapter 5 illustrates the case where u' is unbounded as x → 0⁺ yet (3.36) is satisfied. Further discussion is included in that example. The exponential decay condition (2.55) simplifies to

(3.37)
| u(x)\, (\phi'(x))^{1/2} | \le \begin{cases} L\, x^{\alpha}, & x \in (0, \tfrac{1}{2}) \\ L\, (1-x)^{\beta}, & x \in [\tfrac{1}{2}, 1) \end{cases}

for positive constants L, α, and β, where the roles of f and χ in (2.55) are played by u(φ')^{3/2} and φ, respectively. Hence the selections

(3.38)
h_x = \left( \frac{\pi d}{\alpha M_x} \right)^{1/2}

and

(3.39)
N_x = \left[\!\left[ \frac{\alpha}{\beta}\, M_x \right]\!\right] + 1

result in the balanced asymptotic error rate O(exp(−(πdαM_x)^{1/2})). Therefore, if

(3.40)
u_{m_x}(x) \equiv \sum_{i=-M_x}^{N_x} u_i\, S(i,h_x)\circ\phi(x), \quad m_x = M_x + N_x + 1

is an assumed approximate solution of (3.14), then the discrete Galerkin system for the {u_i} is given by

(3.41)
\tilde{A}_x\, \mathbf{w} = D((\phi')^{-1/2}\, s)\, \mathbf{1}

where the matrix

(3.42)
\tilde{A}_x = D(\phi')\, A_x\, D(\phi')

and A_x is the same as A_t in (3.30) with h_t replaced by h_x. As with Ã_t, Ã_x is symmetric and negative definite. Further, when M_x = N_x, D(φ') is centrosymmetric. Thus, since A_x is Toeplitz, Ã_x is centrosymmetric. Finally, the vector 𝐰 is related to the vector of unknowns 𝐮 by

(3.43)
\mathbf{w} = D((\phi')^{-1/2})\, \mathbf{u} .

The separate one-dimensional problems serve to identify the matrices Ã_x and Ã_t as well as to disclose the parameter selections. Returning to the two-dimensional hyperbolic problem (3.1) and its approximate solution (3.10), the unknown coefficients {u_ij} are found by orthogonalizing the residual. One possible formulation for the resulting discrete Sinc-Galerkin system is

(3.44)
\tilde{A}_x V - V \tilde{A}_t = G

where the matrices Ã_t and Ã_x are identified in (3.33) and (3.42). The m_x × m_t matrices V and G are defined by

(3.45)
V = D((\phi')^{-1/2})\, U\, D((\Gamma')^{-1/2})

and

(3.46)
G = D((\phi')^{-1/2})\, F\, D((\Gamma')^{-1/2})

where U and F are the m_x × m_t matrices which consist of the unknown coefficients {u_ij} for (3.10) and the values f(x_i, t_j), respectively.

The form of (3.44) has two easily discerned motivations. One is that representing the unknowns as a matrix is naturally suited to the orthogonalization procedure and direct use of the sinc quadrature rule. The second is that transforming the unknown matrix U by (3.45) introduces the symmetric coefficient matrices Ã_x and Ã_t. As a direct consequence of this symmetry, the system (3.44) is easy to solve numerically. Chapter 4 discusses the solution technique at some length.

A second alternative for posing the discrete system occurs when the unknowns are arrayed as a vector. In some sense, representing {u_ij} as a vector versus a matrix is purely a notational matter; however, the computational aspects of storing the coefficient matrices and numerically solving the discrete system can vary greatly depending on whether {u_ij} is regarded as a vector or a matrix.

Two background terms provide the notational machinery to rewrite the discrete system posed as in (3.44) as a system defined with {u_ij} as a vector.

Definition (3.47): Let A be an m × n matrix and B be a p × q matrix. The Kronecker or tensor product of A and B is the mp × nq matrix

A \otimes B \equiv
\begin{bmatrix}
a_{11} B & a_{12} B & \cdots & a_{1n} B \\
a_{21} B & a_{22} B & \cdots & a_{2n} B \\
\vdots & & & \vdots \\
a_{m1} B & a_{m2} B & \cdots & a_{mn} B
\end{bmatrix} .

The second term, concatenation, loosely means representing a finite ordered array as a vector. For now, a precise definition for an array with one or two subscripts is sufficient. Eventually the definition refers to arrays with n subscripts. If b = (b_i), 1 ≤ i ≤ m, then the concatenation of b is denoted

(3.48)
co(b) \equiv [b_1,\, b_2,\, \ldots,\, b_m]^T .

Hence, for a column vector c, co(c) = c, while for a row vector r, co(r) = r^T. When B = (b_{ij}), 1 ≤ i ≤ m and 1 ≤ j ≤ n, then co(B) is the mn × 1 vector

(3.49)
co(B) = \begin{bmatrix} co(b_{i1}) \\ co(b_{i2}) \\ \vdots \\ co(b_{in}) \end{bmatrix}

which may be regarded as stacking the columns of the matrix. Davis [16] includes a more thorough but slightly different discussion of concatenation and the Kronecker product. In particular, Davis defines concatenation as stacking the rows of a matrix rather than its columns.

Besides the notational apparatus just introduced, the following theorems ease the transition to a system whose unknowns are represented as a vector.

Theorem (3.50): If A and B are matrices of identical dimension, and α and β are scalars, then

(3.51)
co(\alpha A + \beta B) = \alpha\, co(A) + \beta\, co(B) .

Theorem (3.52): Let A, X, and B be matrices whose dimensions are compatible with the product AXB. Then

(3.53)
co(AXB) = (B^T \otimes A)\, co(X) .

Proof: The ij-th element of AXB is

(AXB)_{ij} = \sum_{k} a_{ik} \sum_{\ell} x_{k\ell}\, b_{\ell j} = \sum_{\ell} b_{\ell j} \sum_{k} a_{ik}\, x_{k\ell} .

The last line is precisely the appropriate entry of (B^T ⊗ A) co(X).
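Theorem (3.52), often called the vec trick, is easy to confirm numerically; note that NumPy's Fortran-order flatten is exactly the column-stacking co(·) of (3.49):

```python
import numpy as np

# Numerical check of Theorem (3.52): co(AXB) = (B^T kron A) co(X),
# with co() stacking columns as in (3.49) (Fortran order).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

co = lambda M: M.flatten(order="F").reshape(-1, 1)  # column stacking

lhs = co(A @ X @ B)
rhs = np.kron(B.T, A) @ co(X)
print(np.allclose(lhs, rhs))   # True
```

This identity is what converts the matrix equation (3.44) into the Kronecker sum form (3.55) below.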
To apply Theorems (3.50) and (3.52), a more convenient form of (3.44) is

(3.54)
\tilde{A}_x V I_{m_t} - I_{m_x} V \tilde{A}_t = G

where I_q is the q × q identity matrix (q = m_t, m_x). Concatenating (3.54) and using the symmetry of Ã_t, the discrete system admits the equivalent representation

(3.55)
( I_{m_t} \otimes \tilde{A}_x - \tilde{A}_t \otimes I_{m_x} )\, co(V) = co(G)

which is referred to as the Kronecker sum form. The coefficient matrix

(3.56)
B^{(2)} = I_{m_t} \otimes \tilde{A}_x - \tilde{A}_t \otimes I_{m_x} \equiv (B_{\ell j}), \quad -M_t \le \ell, j \le N_t

has an easily discerned block form. The square blocks have dimension m_x × m_x and are given by

(3.57)
B_{\ell j} = \delta_{\ell j}^{(0)}\, \tilde{A}_x - (\tilde{A}_t)_{\ell j}\, I_{m_x}, \quad -M_t \le \ell, j \le N_t

where (Ã_t)_{ℓj} denotes the ℓj-th element of the matrix Ã_t. The vectors co(V) and co(G) are related to U and F by

(3.58)
co(V) = co( D((\phi')^{-1/2})\, U\, D((\Gamma')^{-1/2}) ) = ( D((\Gamma')^{-1/2}) \otimes D((\phi')^{-1/2}) )\, co(U)

and

(3.59)
co(G) = ( D((\Gamma')^{-1/2}) \otimes D((\phi')^{-1/2}) )\, co(F) .

From (3.58) it is evident that (3.55) is the matrix formulation which arises via a natural ordering of the sinc gridpoints from left to right and bottom to top, where (x_i, t_j) follows (x_k, t_p) if t_j > t_p, or if t_j = t_p and x_i > x_k.

The previous discussion may be generalized to second-order hyperbolic problems in two and three space dimensions. Explicitly, the problems considered are

(3.60)
\begin{aligned}
Lu(x,y,t) &\equiv u_{tt} - \nabla^2 u = f(x,y,t), \quad (x,y,t) \in (0,1)^2 \times (0,\infty) \\
u \big|_{\partial(0,1)^2} &= 0, \quad t \in [0,\infty) \\
u \big|_{t=0} &= u_t \big|_{t=0} = 0, \quad (x,y) \in [0,1]^2
\end{aligned}

and

(3.61)
\begin{aligned}
Lu(x,y,z,t) &\equiv u_{tt} - \nabla^2 u = f(x,y,z,t), \quad (x,y,z,t) \in (0,1)^3 \times (0,\infty) \\
u \big|_{\partial(0,1)^3} &= 0, \quad t \in [0,\infty) \\
u \big|_{t=0} &= u_t \big|_{t=0} = 0, \quad (x,y,z) \in [0,1]^3
\end{aligned}

where the spatial domains are the open and closed n-dimensional unit cubes. In particular, all spatial variables are on the same interval; this simplifies the construction of the basis elements because the map φ of (3.2) is used repetitively on each spatial interval. This is a matter of ease rather than necessity. Hence the Sinc-Galerkin approximate solutions to (3.60) and (3.61), respectively, are

(3.62)
u_{m_x, m_y, m_t}(x,y,t) = \sum_{i=-M_x}^{N_x} \sum_{j=-M_y}^{N_y} \sum_{\ell=-M_t}^{N_t} u_{ij\ell}\, S_{ij\ell}(x,y,t)

and

(3.63)
u_{m_x, m_y, m_z, m_t}(x,y,z,t) = \sum_{i=-M_x}^{N_x} \sum_{j=-M_y}^{N_y} \sum_{k=-M_z}^{N_z} \sum_{\ell=-M_t}^{N_t} u_{ijk\ell}\, S_{ijk\ell}(x,y,z,t)

where

(3.64)
S_{ij\ell}(x,y,t) = S_i(x)\, S_j(y)\, S_\ell^*(t) ,

(3.65)
S_{ijk\ell}(x,y,z,t) = S_i(x)\, S_j(y)\, S_k(z)\, S_\ell^*(t) ,

(3.66)
m_q = M_q + N_q + 1, \quad q = x, y, z, t ,

and S_p (p = i, j, k) and S_ℓ^* are given by (3.6) and (3.7). As usual, the unknown coefficients {u_{ijℓ}} or {u_{ijkℓ}} are found by orthogonalizing the residual using the weight function

(3.67)
w(x,y,t) = 1 / ( \phi'(x)\, \phi'(y)\, \Gamma'(t) )^{1/2}

for the three-dimensional problem and

(3.68)
w(x,y,z,t) = 1 / ( \phi'(x)\, \phi'(y)\, \phi'(z)\, \Gamma'(t) )^{1/2}

for the four-dimensional problem.

Analogous to the two-dimensional case, the parameter selections for problems (3.60) and (3.61) are deduced from one-dimensional problems. The form of the separate one-dimensional differential equations in y and z is assumed to be that of (3.14). The choices for the y parameters are

(3.69)
h_y = \left( \frac{\pi d}{\xi M_y} \right)^{1/2}

and

(3.70)
N_y = \left[\!\left[ \frac{\xi}{\eta}\, M_y \right]\!\right] + 1

based upon the assumption

(3.71)
| u(y)\, (\phi'(y))^{1/2} | \le \begin{cases} L\, y^{\xi}, & y \in (0, \tfrac{1}{2}) \\ L\, (1-y)^{\eta}, & y \in [\tfrac{1}{2}, 1) \end{cases}

where L, ξ, and η are positive constants. With respect to the z parameters, an assumption like (3.71), with ξ replaced by ρ and η replaced by ν, gives

(3.72)
h_z = \left( \frac{\pi d}{\rho M_z} \right)^{1/2}

and

(3.73)
N_z = \left[\!\left[ \frac{\rho}{\nu}\, M_z \right]\!\right] + 1 .

To list the linear systems for {u_{ijℓ}} and {u_{ijkℓ}}, additional notational devices are necessary. For U = (u_{ij}), 1 ≤ i ≤ m and 1 ≤ j ≤ n, U is represented as a matrix. To view U^{(3)} ≡ (u_{ijℓ}), 1 ≤ i ≤ m, 1 ≤ j ≤ n, and 1 ≤ ℓ ≤ p, as a matrix, define

(3.74)
mat(U^{(3)}) = mat((u_{ij\ell})) \equiv [\, co((u_{ij1})),\ co((u_{ij2})),\ \ldots,\ co((u_{ijp}))\, ] .

Conveniently, concatenating the array is equivalent to concatenating the matrix; that is,

(3.75)
co(U^{(3)}) \equiv co((u_{ij\ell})) =
\begin{bmatrix} co((u_{ij1})) \\ co((u_{ij2})) \\ \vdots \\ co((u_{ijp})) \end{bmatrix}
= co( mat(U^{(3)}) ) .

A recursive definition suffices to generalize concatenation to the n-subscripted array U^{(n)} = (u_{i_1 i_2 \cdots i_n}), 1 ≤ i_j ≤ m_j, 1 ≤ j ≤ n, as follows:

(3.76)
co(U^{(n)}) \equiv co((u_{i_1 i_2 \cdots i_n})) =
\begin{bmatrix} co((u_{i_1 \cdots i_{n-1} 1})) \\ co((u_{i_1 \cdots i_{n-1} 2})) \\ \vdots \\ co((u_{i_1 \cdots i_{n-1} m_n})) \end{bmatrix} .

In turn, using (3.76), a recursive formula for the matrix representation is easily given by

(3.77)
mat(U^{(n)}) = mat((u_{i_1 i_2 \cdots i_n})) \equiv [\, co((u_{i_1 \cdots i_{n-1} 1})),\ co((u_{i_1 \cdots i_{n-1} 2})),\ \ldots,\ co((u_{i_1 \cdots i_{n-1} m_n}))\, ] .

Hence, going from mat(U^{(n)}) to co(U^{(n)}) means unraveling the last index of U^{(n)}. These devices provide a convenient means to compactly write the systems for the unknown coefficients when the spatial dimension is greater than or equal to two.

In two spatial variables, the system analogous to (3.44) is

(3.78)
\{ I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} \}\, mat(V^{(3)}) - mat(V^{(3)})\, \tilde{A}_t = mat(G^{(3)})

where, for −M_x ≤ i ≤ N_x, −M_y ≤ j ≤ N_y, and −M_t ≤ ℓ ≤ N_t,

(3.79)
mat(V^{(3)}) = mat((v_{ij\ell})) = D_{yx}\, mat(U^{(3)})\, D((\Gamma')^{-1/2}) ,

(3.80)
D_{yx} = D((\phi'(y))^{-1/2}) \otimes D((\phi'(x))^{-1/2}) ,

and Ã_y is the same as Ã_x in (3.42) with h_x replaced by h_y. The matrix mat(G^{(3)}) is defined just like mat(V^{(3)}) with U^{(3)} replaced by F = (f(x_i, y_j, t_ℓ)), where (x_i, y_j, t_ℓ) is a sinc gridpoint. Adding a third space variable yields

(3.81)
\{ I_{m_z m_y} \otimes \tilde{A}_x + I_{m_z} \otimes \tilde{A}_y \otimes I_{m_x} + \tilde{A}_z \otimes I_{m_y m_x} \}\, mat(V^{(4)}) - mat(V^{(4)})\, \tilde{A}_t = mat(G^{(4)})

where

(3.82)
I_{pq} = I_p \otimes I_q

and, for −M_x ≤ i ≤ N_x, −M_y ≤ j ≤ N_y, −M_z ≤ k ≤ N_z, and −M_t ≤ ℓ ≤ N_t,

(3.83)
mat(V^{(4)}) = mat((v_{ijk\ell})) = D_{zyx}\, mat(U^{(4)})\, D((\Gamma')^{-1/2})

with

(3.84)
D_{zyx} = D((\phi'(z))^{-1/2}) \otimes D((\phi'(y))^{-1/2}) \otimes D((\phi'(x))^{-1/2}) .
The matrix Ã_z is Ã_x with h_x replaced by h_z, and

(3.85)
mat(G^{(4)}) = D_{zyx}\, mat(F)\, D((\Gamma')^{-1/2})

where F = (f(x_i, y_j, z_k, t_ℓ)). Note that in (3.78) and (3.81) the coefficient matrices which multiply mat(V^{(n)}) on the left represent the discretization of the Laplacian in n dimensions. Alternative definitions of mat(V^{(j)}), j = 3, 4, can break up the operators representing the discretized Laplacian in (3.60) and (3.61), respectively.

The Kronecker sum form for the discrete system in two spatial variables is derived by concatenating (3.78). That is,

\begin{aligned}
co(G^{(3)}) &= co( mat(G^{(3)}) ) \\
&= co[\, \{ I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} \}\, mat(V^{(3)})\, I_{m_t} - I_{m_y m_x}\, mat(V^{(3)})\, \tilde{A}_t\, ] \\
&= \{ I_{m_t} \otimes ( I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} ) \}\, co(mat(V^{(3)})) - \tilde{A}_t^T \otimes I_{m_y m_x}\, co(mat(V^{(3)})) \\
&= [\, I_{m_t} \otimes ( I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} ) - \tilde{A}_t \otimes I_{m_y m_x}\, ]\, co(V^{(3)})
\end{aligned}

so that

(3.86)
[\, I_{m_t} \otimes ( I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} ) - \tilde{A}_t \otimes I_{m_y m_x}\, ]\, co(V^{(3)}) = co(G^{(3)}) .

Similarly, the vectorized version of (3.81) is

(3.87)
[\, I_{m_t} \otimes ( I_{m_z m_y} \otimes \tilde{A}_x + I_{m_z} \otimes \tilde{A}_y \otimes I_{m_x} + \tilde{A}_z \otimes I_{m_y m_x} ) - \tilde{A}_t \otimes I_{m_z m_y m_x}\, ]\, co(V^{(4)}) = co(G^{(4)}) .

By definition of concatenation, the grid is swept in a natural ordering: first across the x-domain, then the y, then the z (for (3.87)), and lastly through the t-domain. A closer examination of this structure, as well as the solution, is developed in the next chapter.

CHAPTER 4

SOLUTION OF THE DISCRETE SINC-GALERKIN SYSTEM

Classical Methods

As mentioned in Chapters 1 and 3, typical Galerkin schemes for time-dependent partial differential equations use a trial function which is a truncated expansion of basis functions defined over the spatial domain and having time-dependent coefficients. For instance, a trial function for (3.1) has the form

(4.1)
u(x,t) = \sum_{i=1}^{N} c_i(t)\, \theta_i(x)

where \{\theta_i\}_{i=1}^{\infty} is complete in some function space containing the true solution u(x, ·). Orthogonalizing the residual with respect to θ_j, 1 ≤ j ≤ N, leads to a system of ordinary differential equations in time. The literature abounds with algorithms to solve systems of this type on a truncated time grid [14], [17].
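For contrast with the fully Galerkin approach, here is a minimal classical treatment of a (3.1)-type problem: centered differences in x and a leapfrog march on a truncated time grid. The manufactured data are assumptions for illustration only: exact solution u = sin(πx)(1 − cos(πt)), so f = π² sin(πx) and the initial data are homogeneous as in (3.1).

```python
import numpy as np

# Method of lines for u_tt - u_xx = f on (0,1) x (0,T], truncated at T = 1:
# centered differences in x, leapfrog in t (both second order).
m = 50
dx = 1.0 / m
x = np.linspace(0.0, 1.0, m + 1)
dt = 0.01                    # satisfies the CFL condition dt <= dx
steps = 100                  # march only to T = steps*dt = 1

f = np.pi**2 * np.sin(np.pi * x)
u_prev = np.zeros(m + 1)     # u(x,0) = 0
# Taylor start using u_t(x,0) = 0 and u_xx(x,0) = 0:
u_curr = 0.5 * dt**2 * f
u_curr[0] = u_curr[-1] = 0.0

def lap(u):
    # discrete u_xx with homogeneous Dirichlet ends
    out = np.zeros_like(u)
    out[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    return out

for _ in range(steps - 1):
    u_next = 2.0 * u_curr - u_prev + dt**2 * (lap(u_curr) + f)
    u_next[0] = u_next[-1] = 0.0
    u_prev, u_curr = u_curr, u_next

exact = np.sin(np.pi * x) * (1.0 - np.cos(np.pi))   # exact u at t = 1
err = np.max(np.abs(u_curr - exact))
print(err)
```

The scheme is O(dx² + dt²) and produces the solution only on the finite grid swept by the march, which is exactly the truncation drawback the space-time sinc method avoids.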
In contrast, the Sinc-Galerkin method defines the basis functions on the entire space-time domain. The coefficients of the trial function are now constants; hence a system of ordinary differential equations is exchanged for a system of linear equations. Linear systems also arise when classic schemes such as finite differences or finite elements are applied to (3.1) on a truncated time domain. Several well-established routines exist for the numerical solution of these systems. In light of the more common techniques, such as finite differences or the Galerkin scheme which solves a system of ordinary differential equations, it is essential to determine if the Sinc-Galerkin method represents a viable alternative for approximating solutions of (3.1), (3.60), and (3.61).

Conventional algorithms solve linear systems typically written in the form Bx = b, where x is a vector of unknowns and B is called the coefficient matrix. In the present setting the discrete Sinc-Galerkin systems (3.55), (3.86), and (3.87) (derived in Chapter 3) have the coefficient matrices

(4.2)
B^{(2)} = I_{m_t} \otimes \tilde{A}_x - \tilde{A}_t \otimes I_{m_x} ,

(4.3)
B^{(3)} = I_{m_t} \otimes ( I_{m_y} \otimes \tilde{A}_x + \tilde{A}_y \otimes I_{m_x} ) - \tilde{A}_t \otimes I_{m_y m_x} ,

and

(4.4)
B^{(4)} = I_{m_t} \otimes ( I_{m_z m_y} \otimes \tilde{A}_x + I_{m_z} \otimes \tilde{A}_y \otimes I_{m_x} + \tilde{A}_z \otimes I_{m_y m_x} ) - \tilde{A}_t \otimes I_{m_z m_y m_x} ,

respectively. Table 1 lists the dimensions of B^{(j)}, j = 2, 3, 4, and suggests a first consideration for a feasible scheme: machine storage. Methods which require full storage mode are often impractical in the setting of systems arising from the numerical solution of partial differential equations. This is certainly true here.

Table 1. Machine Storage Information for the Matrices B^{(j)} of Equations (3.55), (3.86), and (3.87).

  j    Dimension of B^{(j)}                     Nonzero elements of B^{(j)}                     Unknowns
  2    m_x m_t x m_x m_t                        m_x m_t (m_x + m_t - 1)                         m_x m_t
  3    m_x m_y m_t x m_x m_y m_t               m_x m_y m_t (m_x + m_y + m_t - 2)               m_x m_y m_t
  4    m_x m_y m_z m_t x m_x m_y m_z m_t       m_x m_y m_z m_t (m_x + m_y + m_z + m_t - 3)     m_x m_y m_z m_t

For example, if a coarse grid m_x = m_y = m_z = m_t = 10 is used, B^{(4)} is of order 10^4; i.e., 10^8 elements must be stored, of which only 370,000 are nonzero. The problem persists when B^{(2)} has a fine grid like m_x = m_t = 50. In this case B^{(2)} is of order 2500; however, of the more than six million elements represented, only 247,500 are nonzero. Moreover, with regard to position, the regularity of the nonzero elements suggests the possibility of structure-dependent algorithms with accompanying storage savings.

To put B^{(2)} into perspective, consider the coefficient matrix for the usual centered finite difference scheme. Assume the order of the Sinc-Galerkin method is O(e^{-κ√m_x}) and that the true solution of (3.1) decays in time like e^{-t}. When m_x = 50, finite differences in the spatial domain require a stepsize h = .02 for an expected error of O(h²). Further, to call the approximation valid on the time domain, the time domain should not be truncated short of t = 8. This translates to 400 steps in time and 50 gridpoints at each time step, that is, iterating a tridiagonal system with 148 nonzero entries at each step. For purposes of comparison to B^{(2)}, these systems can be combined into one large system with 99,100 nonzero elements. Therefore, based on a count of nonzero elements, finite differences is a more viable option than the present method. However, as demonstrated in Example 5.4, the O(e^{-κ√m_x}) convergence of the Sinc-Galerkin method is maintained when the solution u of (3.1) is singular. Finite differences have no direct analogue maintaining O(h²) convergence in the presence of singularities.

The matrices occurring in finite difference or finite element solutions of partial differential equations are often solved via iterative methods precisely because the methods can be coded to take advantage of well-structured, sparse coefficient matrices [4]. Given the structure of the discrete Sinc-Galerkin systems (3.55), (3.86), and (3.87), methods such as Jacobi, Gauss-Seidel, and SOR may, with some modification, be practical, but they remain unexplored. A foreseeable modification is troublesome: greater advantage must be taken of the structure of B^{(j)}, j = 2, 3, 4, than just the location of the nonzero elements, storing only those elements (see Table 1). Finally, with respect to convergence, the success of an iterative method depends on the spectrum of the iteration matrix, so a detailed analysis of the spectrum of B^{(j)}, j = 2, 3, 4, would be useful. Whereas the author has done extensive numerical work in this direction, an analytic proof of Conjecture (4.11) below remains elusive.

This chapter details two numerically viable methods which solve the discrete Sinc-Galerkin systems derived in Chapter 3. Both are direct methods that take advantage of symmetry and block structure to reduce machine storage as well as ease implementation. The choice of technique is somewhat dependent on available computing facilities. For example, one algorithm is a modified block Gauss-Jordan elimination routine. Although it is possible to implement this method on a scalar machine, it is inherently suited to vector machine architecture. The other algorithm is well-suited to a scalar machine, in part because it requires less machine storage than the block routine. Despite needing less storage, the scalar machine available to the author, a Honeywell Level 66, remains too small to implement the method in three space dimensions when m_x = m_y = m_z = m_t = 8.
In this instance a supercomputer is a necessity due to storage requirements.

Solution of the Discrete System in One Spatial Variable

The discrete Sinc-Galerkin system (3.44) may be solved as follows. Since A_x and A_t are symmetric, there are orthogonal matrices Q and P so that

(4.5) Q^T A_x Q = Λ_x

and

(4.6) P^T A_t P = Λ_t ,

where Λ_x and Λ_t are diagonal matrices containing the eigenvalues {(λ_x)_i}, -M_x ≤ i ≤ N_x, and {(λ_t)_j}, -M_t ≤ j ≤ N_t, of A_x and A_t, respectively. If the change of variables

(4.7) Z = Q^T V P

is made in (3.44), then the equation takes the form

(4.8) Λ_x Z - Z Λ_t = H ,

where

(4.9) H = Q^T G P .

The solution of (4.8), in component form, is given by

(4.10) z_ij = h_ij / [(λ_x)_i - (λ_t)_j] , -M_x ≤ i ≤ N_x , -M_t ≤ j ≤ N_t ,

where h_ij is the ij-th element of H. Once Z is determined from (4.10), the solution V of (3.44) is recovered via V = Q Z P^T and, in turn, using (3.45) gives U. If (λ_x)_i = (λ_t)_j for some indices i and j, the equation (4.10) is inconsistent. In the array of examples listed in Chapter 5 this matching of eigenvalues never occurs. The author believes the following:

Conjecture (4.11): σ(A_x) ∩ σ(A_t) = ∅ , where σ(A) denotes the spectrum of A.

This is connected, so the author believes, to the different nature of the spectrum of Lu = -u'' on compact versus semi-infinite domains (see deBoor and Swartz [18]). While none of this has been proven, continuing analytic and numerical work supports the validity of the conjecture.

Verifying the consistency of the system posed as in (4.10) is equivalent to showing the matrix in (3.44) is nonsingular which, in turn, implies the existence of a unique solution of the linear equations (3.55). To establish the connection, a property of tensor products that Davis [16] states is useful:

(4.12) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) ,

assuming each product is defined. Using (4.12) and transforming variables in (3.55) via

(4.13) co(Z) = (P^T ⊗ Q^T) co(V)

yields

(4.14) {I_mt ⊗ Λ_x - Λ_t ⊗ I_mx} co(Z) = co(H) ,

where

(4.15) co(H) = (P^T ⊗ Q^T) co(G) .

Equation (4.14) shows that the eigenvalues of B^(2) are given by all possible combinations of (λ_x)_i - (λ_t)_j, -M_x ≤ i ≤ N_x, -M_t ≤ j ≤ N_t. Equally significant is the fact that the diagonalization of B^(2) is accomplished by two intermediate diagonalizations of much smaller matrices. Indeed, B^(2) need never be stored. As a result, the largest array with respect to storage has m_x m_t elements. This product represents the number of unknown coefficients in the trial function (3.10) (see Table 1). Finally, note that the algorithms for solving (3.44) and (3.55) under the transformations (4.7) and (4.13), respectively, can be machine coded identically.

An alternative method of solution for (3.55) is described below. Consider the transformation

(4.16) co(Z_D) ≡ co(Q^T V) = (I_mt ⊗ Q^T) co(V) .

Rewriting (3.55) in terms of co(Z_D) and multiplying through by I_mt ⊗ Q^T yields

(4.17) B_D^(2) co(Z_D) = co(H_D) ,

where

(4.18) B_D^(2) = I_mt ⊗ Λ_x - A_t ⊗ I_mx

and

(4.19) co(H_D) = co(Q^T G) .

The matrix B_D^(2) is a block matrix all of whose blocks are diagonal matrices. Explicitly, if the blocks are called B_D(ij) then they are given by

(4.20) B_D(ij) = δ_ij Λ_x - (A_t)_ij I_mx , -M_t ≤ i, j ≤ N_t ,

where (A_t)_ij denotes the ij-th element of A_t. This improved structure has come at the minor expense of only one diagonalization, that of the m_x × m_x symmetric, negative definite matrix A_x. Moreover, considerably less machine storage is needed, as it is only necessary to save the nonzero elements of B_D^(2) (see Table 1). Block elimination techniques work extremely well on the system (4.17), as all matrix inversions and multiplications are performed on diagonal matrices. After the elimination procedure the solution is recovered from

co(V) = (I_mt ⊗ Q) co(Z_D) = co(Q Z_D) ,

from which U is found via U = D((φ')^(1/2)) V D((T')^(1/2)).
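The diagonalization solve (4.5)-(4.10) is compact enough to sketch directly. The snippet below is a minimal illustration, not the thesis code: small random symmetric matrices stand in for the sinc matrices A_x and A_t (shifted apart so their spectra are disjoint, which is exactly what Conjecture (4.11) asserts for the actual discretization), and the recovered V is checked against A_x V - V A_t = G.

```python
import numpy as np

def diagonalization_solve(Ax, At, G):
    """Solve Ax V - V At = G via (4.5)-(4.10), for symmetric
    Ax and At whose spectra are disjoint."""
    lam_x, Q = np.linalg.eigh(Ax)        # (4.5): Q^T Ax Q = diag(lam_x)
    lam_t, P = np.linalg.eigh(At)        # (4.6): P^T At P = diag(lam_t)
    H = Q.T @ G @ P                      # (4.9)
    Z = H / (lam_x[:, None] - lam_t[None, :])   # (4.10), componentwise
    return Q @ Z @ P.T                   # invert the change of variables (4.7)

# stand-in symmetric matrices; the +/-10 I shifts separate the spectra
rng = np.random.default_rng(0)
Ax = rng.standard_normal((5, 5)); Ax = Ax + Ax.T + 10.0 * np.eye(5)
At = rng.standard_normal((4, 4)); At = At + At.T - 10.0 * np.eye(4)
G = rng.standard_normal((5, 4))
V = diagonalization_solve(Ax, At, G)
print(np.allclose(Ax @ V - V @ At, G))   # True
```

For the sinc system the spectral separation is not imposed by a shift; it is the content of Conjecture (4.11).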
Although block elimination techniques for B_D^(2) may be performed on scalar machines, the structure of B_D^(2) is inherently suited to vectorized computation. A block Gauss-Jordan elimination routine solving (4.17) has been written in explicit vector FORTRAN and implemented on the CYBER 205 computer. Its performance was consistent with the results of the algorithm which diagonalizes both A_x and A_t.

Solution of the Systems in Two and Three Space Variables

The algorithms detailed in the previous section admit ready extensions for application to the discrete Sinc-Galerkin systems arising from (3.60) and (3.61). One extension is actually an assortment of methods utilizing block techniques. The second is based on diagonalizing A_x, A_y, A_t, and A_z (if it occurs in the system). The discussion begins with the second method.

A_y and A_z are symmetric, hence there exist orthogonal matrices R and S such that

(4.21) R^T A_y R = Λ_y

and

(4.22) S^T A_z S = Λ_z ,

where Λ_y and Λ_z are diagonal matrices containing the eigenvalues of A_y and A_z, respectively. Equations (3.86) and (3.87) may be transformed using changes of variables analogous to (4.13) as follows. Let Z^(3) = (z_ijl) and Z^(4) = (z_ijkl), -M_x ≤ i ≤ N_x, -M_y ≤ j ≤ N_y, -M_z ≤ k ≤ N_z, -M_t ≤ l ≤ N_t, be defined by

(4.23) co(Z^(3)) = (P^T ⊗ R^T ⊗ Q^T) co(V^(3))

and

(4.24) co(Z^(4)) = (P^T ⊗ S^T ⊗ R^T ⊗ Q^T) co(V^(4)) ,

where Q and P are given by (4.5) and (4.6), respectively. Rewriting (3.86) in terms of Z^(3) yields

(4.25) [I_mt ⊗ (I_my ⊗ Λ_x + Λ_y ⊗ I_mx) - Λ_t ⊗ I_mymx] co(Z^(3)) = (P^T ⊗ R^T ⊗ Q^T) co(G^(3)) ,

while (3.87) in terms of Z^(4) becomes

(4.26) [I_mt ⊗ (I_mzmy ⊗ Λ_x + I_mz ⊗ Λ_y ⊗ I_mx + Λ_z ⊗ I_mymx) - Λ_t ⊗ I_mzmymx] co(Z^(4)) = (P^T ⊗ S^T ⊗ R^T ⊗ Q^T) co(G^(4)) .

If the arrays H^(3) = (h_ijl) and H^(4) = (h_ijkl), -M_x ≤ i ≤ N_x, -M_y ≤ j ≤ N_y, -M_z ≤ k ≤ N_z, -M_t ≤ l ≤ N_t, are given by

(4.27) co(H^(3)) = (P^T ⊗ R^T ⊗ Q^T) co(G^(3))

and

(4.28) co(H^(4)) = (P^T ⊗ S^T ⊗ R^T ⊗ Q^T) co(G^(4)) ,

then the solutions of (4.25) and (4.26), respectively, are

(4.29) [(λ_x)_i + (λ_y)_j - (λ_t)_l] z_ijl = h_ijl

and

(4.30) [(λ_x)_i + (λ_y)_j + (λ_z)_k - (λ_t)_l] z_ijkl = h_ijkl .

As in (4.10), the possibility exists that (4.29) or (4.30) is inconsistent. Again, this never happens for the examples discussed in Chapter 5. The author believes in the validity of the analogue of Conjecture (4.11) for (4.29) and (4.30). That is, based on the array of examples tested, if the spatial domain is compact while the temporal domain is semi-infinite, then, with respect to the sinc discretization, the spectrum of the discretized Laplacian is disjoint from the spectrum of A_t. Assuming the conjecture is valid, the unknown coefficients appearing in the expansion (3.62) are recovered from (4.29) via (4.23) and (3.79). Similarly, (4.30) may be transformed using (4.24) followed by (3.83) to give the coefficients appearing in the trial function (3.63).

One item remains to completely specify the algorithm. When solving the system as above, the matrix-vector products which occur are highly structured. That structure allows the use of three and four subscripts while the products to be determined are evaluated. For instance, suppose X = (x_ijk), 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ k ≤ p; A = (a_ij), 1 ≤ i,j ≤ m; B = (b_ij), 1 ≤ i,j ≤ n; and C = (c_ij), 1 ≤ i,j ≤ p. The product (C ⊗ B ⊗ A) co(X) is defined by

(4.31) [(C ⊗ B ⊗ A) co(X)]_ijk = Σ_{t=1}^{p} c_kt Σ_{s=1}^{n} b_js Σ_{r=1}^{m} a_ir x_rst ,

where [ ]_ijk indicates the ijk-th element. Similarly, if X = (x_ijkl), 1 ≤ i ≤ m, 1 ≤ j ≤ n, 1 ≤ k ≤ p, 1 ≤ l ≤ q; A, B, and C are as before; and D = (d_ij), 1 ≤ i,j ≤ q; then (D ⊗ C ⊗ B ⊗ A) co(X) is given elementwise via

(4.32) [(D ⊗ C ⊗ B ⊗ A) co(X)]_ijkl = Σ_{v=1}^{q} d_lv Σ_{t=1}^{p} c_kt Σ_{s=1}^{n} b_js Σ_{r=1}^{m} a_ir x_rstv .

Hence, recovering the trial function coefficients from Z^(3) and Z^(4) is merely a set of nested loops. Likewise, the code needed to evaluate the vector co(H^(3)) or co(H^(4)) is equally simple. The algorithm, then, is just the characteristic diagonalization of A_x, A_y, A_t, and A_z (when it occurs), as described for the linear systems posed in (3.78) and (3.81), respectively. Indeed, when the diagonalizations are carried out first, the numerical solution of (3.86) and (3.87) is indistinguishable from the numerical solution of the discrete systems (3.78) and (3.81). The chief attributes of this solution method are direct implementation ease and minimal machine storage; the first of these is established by the preceding discussion. With respect to storage, the maximum array size required to solve (3.86) is m_x m_y m_t, while for (3.87) it is m_x m_y m_z m_t (see Table 1). These values are the number of unknown coefficients occurring in the trial function and, as such, also give the minimum array size needed. Hence, with regard to machine storage, it is not possible to do better than this method.

The only remaining class of algorithms is mentioned briefly, as it has yet to be implemented. The distinguishing trait of the class is that at least one of the matrices A_x, A_y, A_t, or A_z (if it occurs) will not be diagonalized. As an example, the change of variables

(4.33) co(Z_D^(3)) ≡ (I_mt ⊗ R^T ⊗ Q^T) co(V^(3))

transforms (3.86) to

(4.34) B_D^(3) co(Z_D^(3)) = (I_mt ⊗ R^T ⊗ Q^T) co(G^(3)) ,

where

(4.35) B_D^(3) = I_mt ⊗ (I_my ⊗ Λ_x + Λ_y ⊗ I_mx) - A_t ⊗ I_mymx .

B_D^(3) is a block matrix all of whose blocks are diagonal matrices. Therefore, one appropriate choice of solution is a block Gauss-Jordan elimination routine. However, a complication arises due to the added space dimension. That is, either the block routine must be written to handle unknowns with three subscripts or the array of unknowns must be adapted to two subscripts.
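A sketch may make the nested-loop structure of (4.31) concrete. The snippet below is illustrative only; co(·) is assumed to stack entries with the first subscript varying fastest, the ordering consistent with (4.12)-(4.14). It evaluates (C ⊗ B ⊗ A)co(X) without ever forming the Kronecker product, checks the loops against the explicit matrix, and then shows that the two-subscript adaptation of a three-subscript array is only a change of bookkeeping under the same stacking convention.

```python
import numpy as np

def tensor_apply(A, B, C, X):
    """Nested-loop evaluation of (C (x) B (x) A) co(X), as in (4.31);
    only the m*n*p entries of X and Y are ever stored."""
    m, n, p = X.shape
    Y = np.zeros((m, n, p))
    for i in range(m):
        for j in range(n):
            for k in range(p):
                Y[i, j, k] = sum(C[k, t] * B[j, s] * A[i, r] * X[r, s, t]
                                 for t in range(p)
                                 for s in range(n)
                                 for r in range(m))
    return Y

rng = np.random.default_rng(1)
m, n, p = 3, 4, 2
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((p, p))
X = rng.standard_normal((m, n, p))

co = lambda T: T.ravel(order="F")        # first subscript varies fastest
Y = tensor_apply(A, B, C, X)
print(np.allclose(co(Y), np.kron(C, np.kron(B, A)) @ co(X)))   # True

# Adapting three subscripts to two: collapse the leading indices while
# keeping the stacking order, so co(.) of the array is unchanged.
X2 = X.reshape(m * n, p, order="F")
print(np.array_equal(co(X), X2.ravel(order="F")))              # True
```

The same pattern with one more loop gives (4.32); the savings is that the q²p²n²m² entries of D ⊗ C ⊗ B ⊗ A are never assembled.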
In the latter case, the block routines used to solve (4.17) could be applied, although they may not take advantage of all the vectorization possible. The dilemma is compounded for the system

(4.36) B_D^(4) co(Z_D^(4)) = (I_mt ⊗ S^T ⊗ R^T ⊗ Q^T) co(G^(4)) ,

where

(4.37) B_D^(4) = I_mt ⊗ (I_mzmy ⊗ Λ_x + I_mz ⊗ Λ_y ⊗ I_mx + Λ_z ⊗ I_mymx) - A_t ⊗ I_mzmymx

and

(4.38) co(Z_D^(4)) = (I_mt ⊗ S^T ⊗ R^T ⊗ Q^T) co(V^(4)) .

Here the array of unknowns has four subscripts. To further cloud the picture, the manner in which the transformed coefficient matrices are stored may depend on the transformation. In particular, if only A_x is diagonalized, the resulting transformations of B^(3) and B^(4) require more storage in terms of nonzero elements than the matrices B_D^(3) and B_D^(4) (see Table 1).

Diagonalizing the matrices A_x, A_y, A_t, and possibly A_z appears the easier route in the setting of the problems (3.1), (3.60), and (3.61), but the author believes that block techniques are more widely applicable. For instance, the inclusion of lower order terms and/or nonconstant coefficients in the partial differential equation may discourage diagonalization of the entire coefficient matrix.

CHAPTER 5

NUMERICAL EXAMPLES OF THE SINC-GALERKIN METHOD

The space-time Sinc-Galerkin method was tested on a large class of problems (3.1), (3.60), and (3.61) whose known solutions exhibit differing behaviors. Choosing problems with known solutions allows a more complete error evaluation. With regard to error, considerable numerical success is evident for all problems; hence, the nine examples reported herein are selected to highlight characteristics of the scheme. In addition, the sample of nine breaks into three groups of three, where within a group a given characteristic is illustrated for problems in one, two, and three space dimensions.
Examples 5.1 - 5.3 and 5.7 - 5.9 display a first trait; that is, optimal parameter selections for an expected convergence rate result in a discrete system of minimum size for the Sinc-Galerkin method. Moreover, symmetry of the discrete system is maintained for any choice of parameters (see Examples 5.7 - 5.9). A characteristic common to each of the nine examples is that the numerical solution (3.10), (3.62), or (3.63) (depending on space dimension) is global. In particular, by employing the conformal map T(t) = ln(t), the approximate solution is valid on the infinite time interval. Another significant property is exhibited by Examples 5.4 - 5.6, whose analytic solutions are singular on their respective boundaries. With respect to implementation (that is, parameter selections), the Sinc-Galerkin method for singular problems proceeds in the same fashion as for analytic examples. Referring to the statements following (5.7), the order of convergence of the method is governed by the Lip_α class to which the solution belongs, not the degree of singularity of the solution.

Perhaps the most distinguished feature of the Sinc-Galerkin method, indeed of spectral methods in general, is the potential exponential convergence rate [7]. For problem (2.1) this convergence is established analytically. Specifically, a direct consequence of Theorem (2.59) is that if 2N + 1 basis functions are used to construct the approximate solution of (2.1), then the order of convergence O(exp(-K√N)), K > 0, holds, uniformly, for both analytic and singular problems. With respect to (3.1), (3.60), and (3.61) and their respective approximates (3.10), (3.62), and (3.63), it is equally easy to establish the exponential convergence rate on the sinc grid. An analytic extension to convergence, uniform in multiple dimensions, does not currently exist. Arguments based on the theory of Chapter 2 (which is based on the development for an analytic function of one variable) lead directly to problems in functions of several complex variables. Alternatively, arguments based on the development in Stenger [5] are complicated by the form of Green's functions in higher dimensions. The author believes that the exponential convergence rate of the Sinc-Galerkin method (indicated in the discussion following (5.12)) does hold uniformly in higher dimensions. Unfortunately, the development of analytic tools to verify this convergence has not kept pace with the development and numerical testing of the methods.

Since block techniques for the problems in two and three space dimensions have yet to be implemented, the results reported are for the systems solved using all possible diagonalizations. When feasible, the code was run on a Honeywell Level 66 computer using a double precision ANS FORTRAN-77 compiler. For the large systems arising from the discretization of (3.60) or (3.61) the array of unknowns exceeds the maximum array size allowed on the scalar machine mentioned. Hence, these systems were solved with the same code run on a CRAY XMP/48 using a single precision CFT.141 compiler. Single precision on the CRAY XMP/48 is equivalent to double precision on most scalar machines. Indeed, the CRAY XMP/48 reproduced the error results of smaller double precision Honeywell runs.

With regard to problems in one space dimension, the maximum absolute error between the numerical approximation, u_il, and the true solution, u(x_i, t_l), at the sinc gridpoints was determined and reported as

(5.1) ||E^(1)(h)|| = max_{i,l} |u_il - u(x_i, t_l)| ,

where h is the stepsize associated with x.
Similarly, for the problems in two and three space dimensions, the maximum errors are

(5.2) ||E^(2)(h)|| = max_{i,j,l} |u_ijl - u(x_i, y_j, t_l)|

and

(5.3) ||E^(3)(h)|| = max_{i,j,k,l} |u_ijkl - u(x_i, y_j, z_k, t_l)| ,

respectively, where u_ijl approximates the true solution u(x_i, y_j, t_l) of (3.60) and u_ijkl approximates u(x_i, y_j, z_k, t_l). The notation .ddd-v represents .ddd × 10^(-v).

To implement the method in one space dimension, assume the true solution u(x,t) of (3.1) satisfies

(5.4) |u(x,t)| ≤ M x^(α+1/2) (1-x)^(β+1/2) t^(γ+1/2) e^(-δt)

for some positive α, β, γ, and δ. As a consequence of (5.4) there exist constants K and L such that

(5.5) |u(x,t) (T'(t))^(1/2)| ≤ K { t^γ , t ∈ (0,1) ; e^(-δt) , t ∈ [1,∞) }

and

(5.6) |u(x,t) (φ'(x))^(1/2)| ≤ L { x^α , x ∈ (0,1/2) ; (1-x)^β , x ∈ [1/2,1) }

hold uniformly for (x,t) ∈ (0,1) × (0,∞). These conditions, taken in light of the one-dimensional problems (3.13) and (3.14), respectively, motivate the parameter selections for the approximate

(5.7) u_{m_x, m_t}(x,t) , m_x = M_x + N_x + 1 , m_t = M_t + N_t + 1 .

Recall that the asymptotic errors for the one-dimensional problem (3.13) are O(exp(-πd/h_t)), O(exp(-γM_t h_t)), and O(exp(-δN_t h_t)) (see (3.24)). These depend on the growth bound (3.23), which is analogous to (5.5). Similarly, the asymptotic errors associated with the spatial problem (3.14) are O(exp(-πd/h_x)), O(exp(-αM_x h_x)), and O(exp(-βN_x h_x)). Once M_x is chosen, balancing the asymptotic errors for the one-dimensional problems with respect to O(exp(-αM_x h_x)) determines the following stepsizes and summation limits:

(5.8) h_x = π / (2αM_x)^(1/2) ,

where the angle d in Figure 6 is taken to be π/2,

(5.9) N_x = [(α/β) M_x] + 1 ,

where [·] denotes the greatest integer function,

(5.10) h_t = h_x ,

and

(5.11) M_t = [(α/γ) M_x] + 1 .

Assuming the solution of (3.1) decays exponentially in time, the smaller value of N_t given by (5.12) may be chosen (see Lund [6]). Note that when (α/β)M_x or (α/γ)M_x is an integer these are the values used for N_x and M_t, respectively.
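The selections (5.8)-(5.11) are simple enough to tabulate by machine. In the sketch below the N_t formula is an assumption: it follows the exponential-decay idea of Lund [6] and is one reading that reproduces every tabulated N_t in this chapter, but it should not be taken as the exact form of (5.12). The check reproduces the M_x = 16 parameters of Table 2, together with the asymptotic error exp(-αM_x h_x).

```python
import math

def parameter_selections(Mx, alpha, beta, gamma, delta):
    """Stepsize and summation limits (5.8)-(5.11), with an assumed
    form of the exponential-decay selection for Nt (cf. (5.12))."""
    hx = math.pi / math.sqrt(2.0 * alpha * Mx)     # (5.8), with d = pi/2
    Nx = math.ceil(alpha / beta * Mx)              # (5.9): [(a/b)Mx] + 1
    ht = hx                                        # (5.10)
    Mt = math.ceil(alpha / gamma * Mx)             # (5.11)
    # assumed Nt: gives Nt = 1, 2, 3, 4 for Mx = 4, 8, 16, 32 in all examples
    Nt = math.floor(math.log(alpha * Mx * hx / delta) / ht) + 1
    return hx, Nx, ht, Mt, Nt

# Example 5.1: alpha = beta = 1/2, gamma = 3/2, delta = 1, Mx = 16
hx, Nx, ht, Mt, Nt = parameter_selections(16, 0.5, 0.5, 1.5, 1.0)
print(round(hx, 5), Nx, Mt, Nt)             # 0.7854 16 6 3, as in Table 2
print(round(math.exp(-0.5 * 16 * hx), 4))   # 0.0019, the .188-2 column
```

Note that `math.ceil` covers both cases of the remark above: it equals [(α/β)M_x] + 1 for a noninteger ratio and (α/β)M_x itself when the ratio is an integer.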
The additional +1 appearing in (5.9), (5.11), and (5.12) guarantees that all the appropriate errors are at least O(exp(-αM_x h_x)).

Parameter selections for the approximate in two or three space dimensions are deduced via the same notion of balancing asymptotic errors. Hence, if the solution u(x,y,t) of (3.60) satisfies

(5.13) |u(x,y,t)| ≤ K x^(α+1/2) (1-x)^(β+1/2) y^(ζ+1/2) (1-y)^(η+1/2) t^(γ+1/2) e^(-δt) ,

then the additional stepsize and summation limits M_y and N_y for the approximate (3.62) are given by

(5.14) h_y = h_x ,

(5.15) M_y = [(α/ζ) M_x] + 1 ,

and

(5.16) N_y = [(α/η) M_x] + 1 .

Similarly for (3.61), assuming the solution u(x,y,z,t) satisfies

(5.17) |u(x,y,z,t)| ≤ K w(x,y,t) z^(μ+1/2) (1-z)^(ν+1/2) ,

where

(5.18) w(x,y,t) = x^(α+1/2) (1-x)^(β+1/2) y^(ζ+1/2) (1-y)^(η+1/2) t^(γ+1/2) e^(-δt) ,

then the remaining parameters for the Sinc-Galerkin approximate (3.63) are

(5.19) h_z = h_x ,

(5.20) M_z = [(α/μ) M_x] + 1 ,

and

(5.21) N_z = [(α/ν) M_x] + 1 .

For all examples in one or two space dimensions a sequence of runs with M_x = 4, 8, 16, and 32 is reported. In three space dimensions a run corresponding to M_x = 32 is not currently possible, even on the CRAY XMP/48, due to the considerable machine storage needed for the large array of unknowns. With respect to all nine examples, whenever an error result is reported from a CRAY XMP/48 run a * appears to the left. This * also indicates that the run was too large to perform on the Honeywell machine available to the author. In all cases reported, M_t > N_t (see (5.12)), which yields a much smaller discrete system than the choice of N_t given by (3.26). The choice M_t = N_t, or that given by (3.26), results in larger matrices with no corresponding increase in accuracy.

Worth noting is that as a consequence of the map T(t) = ln(t), the sinc gridpoints in time become quite large. Recalling (3.9), t_k = e^(kh_t). In all of the following results the selection M_x = 16, for example, leads to a choice of N_t that yields t_{N_t} = e^(3(.7854)) = 10.55. A large number of iterations is typically required to approximate the solution for a t of similar magnitude when using time-marching schemes. Hence, in comparison, the Sinc-Galerkin method requires much smaller systems to attain the same accuracy as finite differences (see the discussion following Examples 5.7 - 5.9).

The parameter α = 1/2 is common to all nine examples; hence, the stepsize h = h_x = π/(2αM_x)^(1/2) and the asymptotic error O(exp(-αM_x h_x)) are the same throughout this chapter. As a result, these values are not included in every table. Computationally, the asymptotic rate O(exp[-αM_x h_x]) = O(exp[-(π/2)(2αM_x)^(1/2)]) (which takes d = π/2) is consistently attained at the sinc gridpoints.

Exponentially Damped Sine Waves

Example 5.1: Damped sine wave in one dimension.

Lv(x,t) ≡ v_tt(x,t) - v_xx(x,t) = {π² + (2 - 4t + (1 + π²)t²)e^(-t)} sin(πx)
v(0,t) = v(1,t) = 0
v(x,0) = sin(πx) , v_t(x,0) = 0

This problem is transformed to the form (3.1) via u(x,t) = v(x,t) - sin(πx), which yields

Lu(x,t) = (2 - 4t + (1 + π²)t²) e^(-t) sin(πx)
u(0,t) = u(1,t) = 0
u(x,0) = u_t(x,0) = 0 .

The analytic solution is u(x,t) = t² e^(-t) sin(πx). The parameters (see (5.4)) are α = β = 1/2, γ = 3/2, and δ = 1. Table 2 displays the maximum absolute error at the sinc gridpoints for the sequence M_x = 4, 8, 16, and 32. Additionally, a column with the asymptotic error is given. Since α = β, M_x = N_x and the sum on i in (5.7) is a centered sum. The sum on l in (5.7) is a noncentered sum, as commonly happens when using (5.12).

Table 2. Numerical Results for Example 5.1.

M_x  N_x  M_t  N_t     h       ||E^(1)(h)||   Asymptotic Error
 4    4    2    1   1.57080      .378-1          .432-1
 8    8    3    2   1.11072      .253-1          .118-1
16   16    6    3    .78540      .905-3          .188-2
32   32   11    4    .55536      .320-3          .138-3

Example 5.2: Damped sine wave in two dimensions.

Lu(x,y,t) = (2 - 4t + (1 + 2π²)t²) e^(-t) sin(πx) sin(πy)
u|_{∂(0,1)²} = 0
u|_{t=0} = u_t|_{t=0} = 0 ,

where, recall, (0,1)^n refers to the open n-dimensional unit cube. The analytic solution is u(x,y,t) = t² e^(-t) sin(πx) sin(πy); hence, the parameters are α = β = ζ = η = 1/2, γ = 3/2, and δ = 1. Table 3 lists ||E^(2)(h)||, the maximum absolute error at the sinc gridpoints. Note that the result corresponding to M_x = 32 was obtained on the CRAY XMP/48, as indicated by the *.

Table 3. Numerical Results for Example 5.2.

M_x  N_x  M_y  N_y  M_t  N_t   ||E^(2)(h)||
 4    4    4    4    2    1      .813-1
 8    8    8    8    3    2      .760-1
16   16   16   16    6    3      .387-3
32   32   32   32   11    4     *.451-3

Example 5.3: Damped sine wave in three dimensions.

Lu(x,y,z,t) = (2 - 4t + (1 + 3π²)t²) e^(-t) sin(πx) sin(πy) sin(πz)
u|_{∂(0,1)³} = 0
u|_{t=0} = u_t|_{t=0} = 0

The true solution u(x,y,z,t) = t² e^(-t) sin(πx) sin(πy) sin(πz) is the three-dimensional analogue of the previous two solutions. The results for this problem are given in Table 4.

Table 4. Numerical Results for Example 5.3.

M_x  N_x  M_y  N_y  M_z  N_z  M_t  N_t   ||E^(3)(h)||
 4    4    4    4    4    4    2    1      .325-0
 8    8    8    8    8    8    3    2     *.123-1
16   16   16   16   16   16    6    3     *.451-3

With respect to Examples 5.1, 5.2, and 5.3, the maximum absolute value of the true solution is the same. Table 5 indicates that there is no consistent difference in the error results for the various space dimensions.

Table 5. Numerical Results for the Damped Sine Wave.

M_x     h       ||E^(1)(h)||  ||E^(2)(h)||  ||E^(3)(h)||  Asymptotic Error
 4   1.57080     .378-1        .813-1        .325-0          .432-1
 8   1.11072     .253-1        .760-1       *.123-1          .118-1
16    .78540     .905-3        .387-3       *.451-3          .188-2
32    .55536     .320-3       *.451-3         ----           .138-3

Singular Problems

Example 5.4: A singular problem in one space dimension.

Lu(x,t) = [ (3 - 12t + 4t²)/(4√t) · x ln(x) - t^(3/2)/x ] e^(-t)
u(0,t) = u(1,t) = 0
u(x,0) = u_t(x,0) = 0

The true solution for this problem is u(x,t) = t^(3/2) e^(-t) x ln(x). The solution is algebraically singular at t = 0 and logarithmically singular at x = 0. Although u_x is unbounded at x = 0, condition (3.36) is again satisfied. Further, despite the different character of the singularities, the method requires no modification with regard to implementation. For α = β = 1/2 and γ = δ = 1, the stepsize is h = h_x = h_t = π/(M_x)^(1/2) and the asymptotic rate shown in Table 6 is achieved, despite the presence of singularities.

Table 6. Numerical Results for Example 5.4.

M_x  N_x  M_t  N_t   ||E^(1)(h)||   Asymptotic Error
 4    4    2    1      .321-2          .432-1
 8    8    4    2      .405-2          .118-1
16   16    8    3      .852-3          .188-2
32   32   16    4      .767-4          .138-3

In fact, comparing Tables 6 and 2, this problem has slightly smaller errors with a minor increase in the number of gridpoints near t = 0 due to the algebraic singularity. The logarithmic singularity affects neither the performance of the method nor the system size.

Example 5.5: A singular problem in two space dimensions.

Lu(x,y,t) = [ (3 - 12t + 4t²)/(4√t) · x ln(x) y ln(y) - t^(3/2) ( y ln(y)/x + x ln(x)/y ) ] e^(-t)
u|_{∂(0,1)²} = 0
u|_{t=0} = u_t|_{t=0} = 0

The true solution for this problem, u(x,y,t) = t^(3/2) e^(-t) x ln(x) y ln(y), exhibits the same algebraic singularity at t = 0 as the previous example. In addition, the solution is singular on the spatial boundary. If anything, the difficulties appear more severe here than in Example 5.4; however, the errors shown in Table 7 are slightly better than those in Table 6. The parameters used are α = β = ζ = η = 1/2 and γ = δ = 1.

Table 7. Numerical Results for Example 5.5.

M_x  N_x  M_y  N_y  M_t  N_t   ||E^(2)(h)||
 4    4    4    4    2    1      .221-2
 8    8    8    8    4    2      .370-2
16   16   16   16    8    3      .235-3
32   32   32   32   16    4     *.530-4

Example 5.6: A singular problem in three space dimensions.

Lu(x,y,z,t) = [ (3 - 12t + 4t²)/(4√t) · x ln(x) y ln(y) z ln(z)
              - t^(3/2) ( y ln(y) z ln(z)/x + x ln(x) z ln(z)/y + x ln(x) y ln(y)/z ) ] e^(-t)
u|_{∂(0,1)³} = 0
u|_{t=0} = u_t|_{t=0} = 0

The true solution is u(x,y,z,t) = t^(3/2) e^(-t) x ln(x) y ln(y) z ln(z) and, with the addition of μ = ν = 1/2, the parameters are identical to those in Example 5.5.
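The quoted size of the temporal grid is easy to check by hand: because t_k = e^(kh_t) under T(t) = ln(t), the grid reaches past t = 10 with only a handful of points. A minimal sketch for the M_x = 16 runs of Examples 5.1 - 5.3 (h_t = π/4, M_t = 6, N_t = 3):

```python
import math

# Temporal sinc gridpoints t_k = exp(k * ht) under the map T(t) = ln(t),
# for the Mx = 16 runs of Examples 5.1 - 5.3 (ht = pi/4, Mt = 6, Nt = 3).
ht, Mt, Nt = math.pi / 4.0, 6, 3
t = [math.exp(k * ht) for k in range(-Mt, Nt + 1)]

print(len(t))             # Mt + Nt + 1 = 10 gridpoints
print(round(t[-1], 2))    # t_{Nt} = e^{3(0.7854)} = 10.55
```

Ten gridpoints reach t ≈ 10.55, which is the point of the comparison with time-marching schemes made above.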
Table 8 lists the results for the present example, while Table 9 includes the errors for each of the singular problems.

Table 8. Numerical Results for Example 5.6.

M_x  N_x  M_y  N_y  M_z  N_z  M_t  N_t   ||E^(3)(h)||
 4    4    4    4    4    4    2    1      .354-2
 8    8    8    8    8    8    4    2     *.286-3
16   16   16   16   16   16    8    3     *.405-4

Table 9. Numerical Results for the Singular Problems.

M_x     h       ||E^(1)(h)||  ||E^(2)(h)||  ||E^(3)(h)||  Asymptotic Error
 4   1.57080     .321-2        .221-2        .354-2          .432-1
 8   1.11072     .405-2        .370-2       *.286-3          .118-1
16    .78540     .852-3        .235-3       *.405-4          .188-2
32    .55536     .767-4       *.530-4         ----           .138-3

For the runs corresponding to M_x = 8 and 16 the error associated with Example 5.6 is almost a full decimal place better than that of the singular examples in one and two space dimensions. Moreover, this improvement occurs despite the greater singularity of the problem. No such analogue exists when using finite differences.

Problems Dictating Noncentered Sums in All Indices

Example 5.7: Noncentered sums in one space dimension.

Lu(x,t) = {(2 - 4t + t²) x (1 - x)³ - 6t² (1 - x)(2x - 1)} e^(-t)
u(0,t) = u(1,t) = 0
u(x,0) = u_t(x,0) = 0

The analytic solution of this problem is u(x,t) = t² e^(-t) x (1 - x)³. The parameter selections α = 1/2, β = 5/2, γ = 3/2, and δ = 1 dictate noncentered sums in both space and time (M_x ≠ N_x and M_t ≠ N_t), as shown with the errors in Table 10. Again h ≡ h_x = h_t = π/(M_x)^(1/2).

Table 10. Numerical Results for Example 5.7.

M_x  N_x  M_t  N_t   ||E^(1)(h)||   Asymptotic Error
 4    1    2    1      .237-2          .432-1
 8    2    3    2      .208-2          .118-1
16    4    6    3      .707-4          .188-2
32    7   11    4      .237-4          .138-3

Even though there are three-fourths fewer gridpoints in the region 1/2 < x < 1, the same asymptotic rate exp(-M_x h_x / 2) is predicted. In comparison to Examples 5.1 and 5.4 the results are better even with the smaller system size. This is because the convergence rate of the Sinc-Galerkin method is governed by the asymptotic behavior of the solution.

Example 5.8: Noncentered sums in two space dimensions.
Lu(x,y,t) = 10{(2 - 4t + t²) x(1 - x)³ y²(1 - y)
            - 2t² [3(1 - x)(2x - 1) y²(1 - y) + x(1 - x)³ (1 - 3y)]} e^(-t)
u|_{∂(0,1)²} = 0
u|_{t=0} = u_t|_{t=0} = 0

The solution of this problem, u(x,y,t) = 10 t² e^(-t) x(1 - x)³ y²(1 - y), yields the parameter selections α = 1/2, β = 5/2, ζ = 3/2, η = 1/2, γ = 3/2, and δ = 1. As a result of these parameter selections, the sums appearing in the approximate (3.62) are all noncentered. A more complete discussion of how small this system is in comparison to a finite difference solution follows the next example. Note that the multiplicative factor of 10 appearing on the right-hand side of the problem was chosen so that the solution was of the same order of magnitude as in the previous example. Table 11 gives the results for the current example.

Table 11. Numerical Results for Example 5.8.

M_x  N_x  M_y  N_y  M_t  N_t   ||E^(2)(h)||
 4    1    2    4    2    1      .805-2
 8    2    3    8    3    2      .489-2
16    4    6   16    6    3      .867-4
32    7   11   32   11    4     *.682-3

Example 5.9: Noncentered sums in three space dimensions.

Lu(x,y,z,t) = 100{(2 - 4t + t²) x(1 - x)³ y²(1 - y) z³(1 - z)²
             - 2t² [3(1 - x)(2x - 1) y²(1 - y) z³(1 - z)²
                    + x(1 - x)³ (1 - 3y) z³(1 - z)²
                    + x(1 - x)³ y²(1 - y) (10z³ - 12z² + 3z)]} e^(-t)
u|_{∂(0,1)³} = 0
u|_{t=0} = u_t|_{t=0} = 0

The analytic solution of this problem is u(x,y,z,t) = 100 t² e^(-t) x(1 - x)³ y²(1 - y) z³(1 - z)². Analogous to the factor of 10 in Example 5.8, the factor of 100 is a magnitude adjustment. The parameters α = 1/2, β = 5/2, ζ = 3/2, η = 1/2, μ = 5/2, ν = 3/2, γ = 3/2, and δ = 1 yield the summation parameters appearing in Table 12. The composite error results for Examples 5.7 - 5.9 are given in Table 13.

Table 12. Numerical Results for Example 5.9.

M_x  N_x  M_y  N_y  M_z  N_z  M_t  N_t   ||E^(3)(h)||
 4    1    2    4    1    2    2    1      .395-2
 8    2    3    8    2    3    3    2     *.389-3
16    4    6   16    4    6    6    3     *.313-4

Table 13. Numerical Results for Problems Dictating Noncentered Sums in All Indices.

M_x     h       ||E^(1)(h)||  ||E^(2)(h)||  ||E^(3)(h)||  Asymptotic Error
 4   1.57080     .237-2        .805-2        .395-2          .432-1
 8   1.11072     .208-2        .489-2       *.389-3          .118-1
16    .78540     .707-4        .867-4       *.313-4          .188-2
32    .55536     .237-4       *.682-3         ----           .138-3

The last three examples illustrate how the parameter selections described by (5.8) - (5.12), (5.14) - (5.16), and (5.19) - (5.21) minimize the expenditure of computational effort for the Sinc-Galerkin method. This also increases the Sinc-Galerkin method's competitiveness with respect to alternatives like finite differences. As discussed in Chapter 4, one measure of competitiveness is the number of nonzero elements in the coefficient matrices. For instance, the asymptotic error corresponding to M_x = 16 is .188-2. Maintaining this error using finite differences requires a stepsize of at least h_FD = .04, that is, 25 gridpoints on any of the spatial intervals, and 158 steps in time to iterate the scheme to the time domain gridpoint t = 6.3. Solving the finite difference system translates to a matrix with 674,200 nonzero elements for Example 5.8 and 21,595,000 for Example 5.9. The corresponding Sinc-Galerkin runs have coefficient matrices with 251,160 and 3,294,060 nonzero entries, respectively. This evidence supports Stenger's [13] statement that the computational efficiency of the Sinc-Galerkin method becomes more apparent in higher dimensions. Further, the author notes that the computational savings exhibited herein are, in large part, due to including the time domain in the Galerkin procedure.

REFERENCES CITED

1. Farlow, S.J. Partial Differential Equations for Scientists and Engineers, John Wiley and Sons, New York, 1982.

2. Weinberger, H.F. A First Course in Partial Differential Equations, John Wiley and Sons, New York, 1965.

3. Botha, J.F. and Pinder, G.F. Fundamental Concepts in the Numerical Solution of Differential Equations, John Wiley and Sons, 1983.

4. Ames, W.F. Numerical Methods for Partial Differential Equations, 2nd ed., Academic Press, New York, 1977.

5. Stenger, F. "A Sinc-Galerkin Method of Solution of Boundary Value Problems." Mathematics of Computation 33 (January 1979): 85-109.

6. Lund, J. "Symmetrization of the Sinc-Galerkin Method for Boundary Value Problems." Mathematics of Computation 47 (October 1986): 571-588.

7. Gottlieb, D. and Orszag, S.A. Numerical Analysis of Spectral Methods: Theory and Application, SIAM, Philadelphia, 1977.

8. Whittaker, E.T. "On the Functions Which Are Represented by the Expansions of the Interpolation Theory." Proc. Roy. Soc. Edinburgh 35 (1915): 181-194.

9. Whittaker, J.M. Interpolatory Function Theory, Cambridge University Press, London, 1935.

10. McNamee, J., Stenger, F., and Whitney, J.L. "Whittaker's Cardinal Function in Retrospect." Mathematics of Computation 25 (January 1971): 141-154.

11. Stenger, F. "The Approximate Solution of Convolution-Type Integral Equations." SIAM J. Math. Anal. 4 (August 1973): 536-555.

12. Stenger, F. "Integration Formulas via the Trapezoidal Rule." J. Inst. Maths. Applics. 12 (1973): 103-114.

13. Stenger, F. "Numerical Methods Based on Whittaker Cardinal, or Sinc Functions." SIAM Review 23 (April 1981): 165-223.

14. Gear, W.C. Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1971.

15. McArthur, K.M., Bowers, K.L., and Lund, J. "Numerical Implementation of the Sinc-Galerkin Method for Second-Order Hyperbolic Equations." To appear in Numerical Methods for Partial Differential Equations, (1987).

16. Davis, P.J. Circulant Matrices, John Wiley and Sons, New York, 1979.

17. Isaacson, E. and Keller, H.B. Analysis of Numerical Methods, John Wiley and Sons, New York, 1966.

18. deBoor, C. and Swartz, B. "Collocation Approximation to Eigenvalues of an Ordinary Differential Equation: Numerical Illustrations." Mathematics of Computation 36 (January 1981): 1-19.