Why a minimizer of the Tikhonov functional is closer to the exact solution than the first guess

Michael V. Klibanov∗, Anatoly B. Bakushinsky⋄ and Larisa Beilina△

∗ Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
⋄ Institute for System Analysis of The Russian Academy of Science, Prospect 60 letya Oktyabrya 9, 117312, Moscow, Russia
△ Department of Mathematical Sciences, Chalmers University and Gothenburg University, SE-42196, Gothenburg, Sweden

E-mails: mklibanv@uncc.edu, bakush@isa.ru, larisa@chalmers.se

September 27, 2010

Abstract

Suppose that a uniqueness theorem is valid for an ill-posed problem. It is shown then that the distance between the exact solution and the terms of a minimizing sequence of the Tikhonov functional is less than the distance between the exact solution and the first guess. Unlike the classical case, in which the regularization parameter tends to zero, only a single value of this parameter is used. Indeed, the latter is always the case in computations. Next, this result is applied to a specific Coefficient Inverse Problem. A uniqueness theorem for this problem is based on the method of Carleman estimates. In particular, the importance of obtaining an accurate first approximation for the correct solution follows from one of our theorems. The latter points towards the importance of the development of globally convergent numerical methods, as opposed to conventional locally convergent ones. A numerical example is presented.

1 Introduction

The classical regularization theory for nonlinear ill-posed problems guarantees convergence of regularized solutions to the exact one only when both the level of the error δ and the regularization parameter α(δ) tend to zero, see, e.g., [10,18]. However, in computational practice one always works with a single value of δ and a single value of α(δ). Let x* be the ideal exact solution of an ill-posed problem for the case of ideal noiseless data. The existence of such a solution should be assumed by one of the Tikhonov principles [18]. Suppose that x* is unique. The Tikhonov regularization term usually has the form α(δ)‖x − x_0‖², where x_0 is regarded as the first guess for x* and ‖·‖ is the norm in a certain space. Let {x_n}_{n=1}^∞ be a minimizing sequence for the Tikhonov functional. In the case when the existence of a minimizer of this functional is guaranteed (e.g. the finite dimensional case), we replace this sequence with a minimizer x_α, which is traditionally called a regularized solution (there might be several such solutions). The goal of this paper is to address the following questions. Why is it usually observed computationally that, for a properly chosen single value of α(δ), one has ‖x_n − x*‖ < ‖x_0 − x*‖ for all sufficiently large n? In particular, if x_α exists, why is it observed computationally that ‖x_α − x*‖ < ‖x_0 − x*‖? In other words, why is the regularized solution usually more accurate than the first guess, even for a single value of α(δ)? Indeed, the regularization theory guarantees that this is true only in the limiting sense, when both δ and α(δ) tend to zero. These questions were not addressed in the regularization theory. We first prove a general Theorem 2, which addresses the above questions in an "abstract" form of Functional Analysis. In accordance with this theorem, the short answer to these questions is this: if a uniqueness theorem holds for an ill-posed problem, then the above accuracy improvement takes place.
Usually it is not easy to "project" an abstract theory onto a specific problem. Thus, the major effort of this paper is focused on the demonstration of how our Theorem 2 works for a specific Coefficient Inverse Problem for a hyperbolic PDE. Also, we present a numerical result for this problem. We show that two factors are important for the above observation: (1) the uniqueness theorem for the original problem, and (2) the availability of a good first guess. The latter means that x_0 should be chosen in such a way that the distance ‖x_0 − x*‖ is sufficiently small. In other words, one needs to obtain a good first approximation for the exact solution, from which subsequent iterations would start. These iterations would refine x_0. However, in the majority of applications it is unrealistic to assume that such an approximation is available. Hence, one needs to develop a globally convergent numerical method for the corresponding ill-posed problem. This means that one needs to work out a numerical method which would provide a guaranteed good approximation for x* without a priori knowledge of a small neighborhood of x*.

To apply our Theorem 2 to a Coefficient Inverse Problem (CIP) for a hyperbolic PDE, we use a uniqueness theorem for this CIP, which was proved via the method of Carleman estimates [7,11-15]. Note that the technique of [7,11-15] enables one to prove uniqueness theorems for a wide variety of CIPs. A similar conclusion about the importance of the uniqueness theorem and, therefore, of the modulus of continuity of the inverse operator on a certain compact set, was drawn in [9]. However, the norms ‖x_α − x*‖ and ‖x_0 − x*‖ were not compared in that work, since x_0 = 0 in [9]. Only the case when both δ and α(δ) tend to zero was considered in [9], whereas these parameters are fixed in our case.

We now explain our motivation for this paper. The crucial question of how to choose a good first approximation for the exact solution is left outside of the classical regularization theory. This is because such a choice depends on the specific problem at hand. On the other hand, since the Tikhonov regularization functional usually has many local minima and ravines, a successful first guess is the true key to obtaining an accurate solution. Recently the first and third authors have developed a globally convergent numerical method for a Coefficient Inverse Problem (CIP) for a hyperbolic PDE [2-6]. Furthermore, this method was confirmed in [16] via surprisingly accurate imaging results obtained from blind experimental data. It is well known that CIPs are very challenging ones due to their nonlinearity and ill-posedness combined. So, it was inevitable that an approximation was made in [2] via the truncation of high values of the positive parameter s of the Laplace transform. In fact, this approximation is similar to the truncation of high frequencies, which is routinely done in engineering. It was discovered later in [3-6] that although the globally convergent method of [2,3] provides a guaranteed good approximation for the exact solution, this approximation needs to be refined because of that truncation. On the other hand, since the main input for any locally convergent method is a good first approximation for the exact solution, a locally convergent technique was chosen for the refinement stage. The resulting two-stage numerical procedure has consistently demonstrated a very good performance [2-6], including the case of experimental data [5].
Since only a single value of the regularization parameter was used in these publications, this naturally motivated us to address the above questions. We also wish to point out that the assumption of an a priori known upper estimate of the level of the error δ is not necessarily true in applications. Indeed, a huge discrepancy between experimental and computationally simulated data was observed in [5] and [16], see Figure 2 in [5] and Figures 3, 4 in [16]. This discrepancy was successfully addressed via new data pre-processing procedures described in [5,16]. Therefore, it is unclear what kind of δ the experimental data of [5,16] actually have. Nevertheless, the notion of a priori knowledge of δ is quite useful for qualitative explanations of computational results for ill-posed problems.

In section 2 we present the general scheme. In section 3 we apply it to a CIP. In section 4 we present a numerical example of the above mentioned two-stage numerical procedure, solely for illustration purposes.

2 The General Scheme

For simplicity we consider only real valued Hilbert spaces, although our results can be extended to Banach spaces. Let H, H_1 and H_2 be real valued Hilbert spaces with norms ‖·‖, ‖·‖_1 and ‖·‖_2 respectively. We assume that H_1 ⊂ H, that ‖x‖ ≤ ‖x‖_1 for all x ∈ H_1, that the set H_1 is dense in H, and that any bounded set in H_1 is a compact set in H. Let W ⊂ H be an open set and G = W̄ be its closure. Let F : G → H_2 be a continuous operator, not necessarily linear. Consider the equation

F(x) = y,  x ∈ G.  (1)

The right hand side of (1) might be given with an error. The existence of a solution of this equation is not guaranteed. However, in accordance with one of the Tikhonov principles for ill-posed problems, we assume that there exists an ideal exact solution x* of (1) with ideal errorless data y*, and that this solution is unique,

F(x*) = y*.  (2)

We assume that

‖y − y*‖_2 ≤ δ,  (3)

where the sufficiently small number δ > 0 characterizes the error in the data. Since it is unlikely that one can get a better accuracy of the solution than O(δ) in the asymptotic sense as δ → 0, it is usually acceptable that all other parameters involved in the regularization process are much larger than δ. For example, let the number γ ∈ (0, 1). Since lim_{δ→0}(δ^γ/δ) = ∞, there exists a sufficiently small number δ_0(γ) ∈ (0, 1) such that

δ^γ > δ,  ∀δ ∈ (0, δ_0(γ)).  (4)

Hence, it would be acceptable to choose α(δ) = δ^γ, see (9) below. To solve the problem (1), consider the Tikhonov regularization functional with the regularization parameter α,

J_α(x) = ‖F(x) − y‖_2² + α‖x − x_0‖_1²,  x ∈ G ∩ H_1.  (5)

It is natural to treat x_0 here as the starting point for iterations, i.e. the first guess for the solution. We are interested in the question of the minimization of this functional for a fixed value of α. It is not straightforward to prove the existence of a minimizer, unless some additional conditions are imposed on the operator F, see subsection 3.3 for a possible one. Hence, all we can consider is a minimizing sequence. Let m = inf_{G∩H_1} J_α(x). Then

m ≤ J_α(x*).  (6)

Usually m < J_α(x*). However, we do not use the latter assumption here.
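As a purely illustrative numerical check of (4) and of the choice α(δ) = δ^γ made above (the numbers here are ours and are not related to the computations of section 4): for γ = 1/2 and δ = 10^{-4} one has δ^γ = 10^{-2}, so that δ^γ/δ = 10², i.e. the regularization parameter is two orders of magnitude larger than the noise level, while still α(δ) → 0 as δ → 0.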
2.1 The case dim H = ∞

Let dim H = ∞. In this case we cannot prove the existence of a minimizer of the functional J_α, i.e. we cannot prove the existence of a regularized solution. Hence, we work now with the minimizing sequence. It follows from (5), (6) that there exists a sequence {x_n}_{n=1}^∞ ⊂ G ∩ H_1 such that

m ≤ J_α(x_n) ≤ δ² + α‖x* − x_0‖_1²  and  lim_{n→∞} J_α(x_n) = m.  (7)

By (3)-(7),

‖x_n‖_1 ≤ ( δ²/α + ‖x* − x_0‖_1² )^{1/2} + ‖x_0‖_1.  (8)

It is more convenient to work with a smaller number of parameters. So, we assume that α = α(δ). To specify this dependence, we note that we want the right hand side of (8) to be bounded as δ → 0. So, using (4), we assume that

α(δ) = δ^{2µ},  for some µ ∈ (0, 1).  (9)

Hence, it follows from (8) and (9) that {x_n}_{n=1}^∞ ⊂ K(δ, x_0), where K(δ, x_0) is a precompact set in H defined as

K(δ, x_0) = { x ∈ H_1 : ‖x‖_1 ≤ √( δ^{2(1−µ)} + ‖x* − x_0‖_1² ) + ‖x_0‖_1 }.  (10)

Note that the sequence {x_n}_{n=1}^∞ depends on δ. The set K(δ, x_0) is not closed in H. Let K̄ = K̄(δ, x_0) be its closure. Hence, K̄ is a closed compact set in H.

Remark 1. We have introduced the specific dependence α := α(δ) = δ^{2µ} in (9) for the sake of definiteness only. In fact, in the theory below one can consider many other dependencies α := α(δ), as long as α(δ) ≫ δ for sufficiently small values of δ and lim_{δ→0} α(δ) = 0.

Let B_1 and B_2 be two Banach spaces, P ⊂ B_1 be a closed compact set and A : P → B_2 be a continuous operator. We recall that there exists a modulus of continuity of this operator on P. In other words, there exists a function ω(z) of the real variable z ≥ 0 such that

ω(z) ≥ 0,  ω(z_1) ≤ ω(z_2) if z_1 ≤ z_2,  lim_{z→0+} ω(z) = 0,  (11)

‖A(x_1) − A(x_2)‖_{B_2} ≤ ω( ‖x_1 − x_2‖_{B_1} ),  ∀x_1, x_2 ∈ P.  (12)

We also recall a theorem of A.N. Tikhonov, which is one of the backbones of the theory of ill-posed problems.

Theorem 1 (A.N. Tikhonov, [18]). Let B_1 and B_2 be two Banach spaces, P ⊂ B_1 be a closed compact set and A : P → B_2 be a continuous one-to-one operator. Let Q = A(P). Then the operator A^{−1} : Q → P is continuous.

We now prove

Theorem 2. Let the above operator F : G → H_2 be continuous and one-to-one. Suppose that (9) holds. Let {x_n}_{n=1}^∞ ⊂ K(δ, x_0) ⊆ K̄ be a minimizing sequence of the functional (5) satisfying (7). Let ξ ∈ (0, 1) be an arbitrary number. Then there exists a sufficiently small number δ_0 = δ_0(ξ) ∈ (0, 1) such that for all δ ∈ (0, δ_0) the following inequality holds

‖x_n − x*‖ ≤ ξ‖x_0 − x*‖_1,  ∀n.  (13)

In particular, if a regularized solution x_{α(δ)} exists for a certain δ ∈ (0, δ_0), i.e. J_{α(δ)}(x_{α(δ)}) = m(δ), then

‖x_{α(δ)} − x*‖ ≤ ξ‖x_0 − x*‖_1.  (14)

Proof. By (5), (7) and (9),

‖F(x_n) − y‖_2 ≤ √( δ² + α‖x_0 − x*‖_1² ) = √( δ² + δ^{2µ}‖x_0 − x*‖_1² ).

Hence,

‖F(x_n) − F(x*)‖_2 = ‖(F(x_n) − y) + (y − F(x*))‖_2 = ‖(F(x_n) − y) + (y − y*)‖_2 ≤ ‖F(x_n) − y‖_2 + ‖y − y*‖_2 ≤ √( δ² + δ^{2µ}‖x* − x_0‖_1² ) + δ,  (15)

where we have used (3). By Theorem 1 there exists a modulus of continuity ω_F(z) of the operator F^{−1} : F(K̄) → K̄. It follows from (15) that

‖x_n − x*‖ ≤ ω_F( √( δ² + δ^{2µ}‖x_0 − x*‖_1² ) + δ ).  (16)

Consider an arbitrary ξ ∈ (0, 1). It follows from (11) that one can choose a number δ_0 = δ_0(ξ) so small that

ω_F( √( δ² + δ^{2µ}‖x* − x_0‖_1² ) + δ ) ≤ ξ‖x_0 − x*‖_1,  ∀δ ∈ (0, δ_0).  (17)

The estimate (13) follows from (16) and (17). It is obvious that, in the case of the existence of the regularized solution, one can replace x_n with x_{α(δ)} in (16).

2.2 Discussion of Theorem 2

One can see from (16) and (17) that the two main factors defining the accuracy of the reconstruction of the points x_n, as well as of the regularized solution x_{α(δ)} (in the case when it exists), are: (a) the level of the error δ, and (b) the accuracy of the first approximation x_0 for the exact solution, i.e. the norm ‖x* − x_0‖_1.
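To make the interplay of δ, α(δ) and ‖x_0 − x*‖ in (16), (17) concrete, the following minimal numerical sketch (our own toy illustration; it is not taken from [2-6], and the operator, the noise level and the first guess in it are illustrative assumptions) applies the above construction to a linear, severely ill-conditioned operator F(x) = Ax in R^20. The single value α(δ) = δ^{2µ} from (9) is used, and the distance of the minimizer of (5) to x* is compared with the distance of the first guess x_0 to x*.

import numpy as np

np.random.seed(0)
n = 20
# An ill-conditioned operator: orthogonal factors and rapidly decaying singular values.
U, _ = np.linalg.qr(np.random.randn(n, n))
V, _ = np.linalg.qr(np.random.randn(n, n))
s = 10.0 ** (-np.arange(n))                  # singular values 1, 1e-1, ..., 1e-19
A = U @ np.diag(s) @ V.T

x_star = np.random.randn(n)                  # the "exact solution" x*
delta = 1e-4                                 # noise level, as in (3)
y = A @ x_star + delta * np.random.randn(n) / np.sqrt(n)

x0 = x_star + 0.5 * np.random.randn(n)       # first guess, reasonably close to x*
mu = 0.25
alpha = delta ** (2 * mu)                    # single value alpha(delta) = delta^(2*mu), as in (9)

# Minimizer of J_alpha(x) = ||A x - y||^2 + alpha*||x - x0||^2, via the normal equations.
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y + alpha * x0)

print("||x0 - x*||      =", np.linalg.norm(x0 - x_star))
print("||x_alpha - x*|| =", np.linalg.norm(x_alpha - x_star))

In runs of this kind the second printed number is typically smaller than the first, i.e. for a single value of α(δ) the regularized solution is closer to x* than the first guess, which is the behavior that Theorem 2 explains for problems possessing a uniqueness property.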
Since in applications one always works with a fixed level of error δ, the second of these two factors is more important than the first. In addition, regardless of the existence of the infimum value m(δ) of the functional J_{α(δ)}, it is usually impossible in practical computations to construct the above minimizing sequence {x_n}_{n=1}^∞. This is because the functional J_{α(δ)} usually suffers from the problem of multiple local minima and ravines. The only case when an effective construction of that sequence might be possible is when one can find a first guess x_0 for the exact solution x* which is located sufficiently close to x*, i.e. the norm ‖x_0 − x*‖_1 should be sufficiently small. Indeed, it was proven in [6] that, under some additional conditions, the functional J_{α(δ)} is strongly convex in a small neighborhood of x_{α(δ)} (it was assumed in [6] that the regularized solution exists). Furthermore, in the framework of [6], x* belongs to this neighborhood, provided that x_0 is sufficiently close to x*, see subsection 3.3 for some details. Hence, local minima and ravines do not occur in that neighborhood. In particular, the latter means that the two-stage procedure of [2-6] converges globally to the exact solution. Indeed, not only does the first stage provide a guaranteed good approximation for that solution, but also, by Theorems 7 and 8 (subsection 3.3), the second stage provides a guaranteed refinement, and the gradient method, which starts from that first approximation, converges to the refined solution. In summary, the discussion of the above paragraph points towards the fundamental importance of a proper choice of a good first guess for the exact solution, i.e. towards the importance of the development of globally convergent numerical methods, as opposed to conventional locally convergent ones.

Another inconvenience of Theorem 2 is that it is difficult to work with the stronger norm ‖·‖_1 in practical computations. Indeed, this norm is used only to ensure that K(δ, x_0) is a compact set in H. However, the most convenient norm in practical computations is the L_2-norm. Hence, when using the norm ‖·‖_1, one should set ‖·‖_1 := ‖·‖_{H^p}, p ≥ 1. At the same time, convergence of the sequence {x_n}_{n=1}^∞ will occur in the L_2-norm. This is clearly inconvenient in practical computations. Hence, finite dimensional spaces of standard piecewise linear finite elements were used in the computations of [2-6] for both forward and inverse problems. Note that all norms in such spaces are equivalent. Still, the number of these finite elements should not be exceedingly large. Indeed, if it were, then that finite dimensional space would effectively become an infinite dimensional one, and the L_2-norm would no longer be effectively equivalent to a stronger norm. This is why local mesh refinements were used in [2-6], as opposed to a uniformly fine mesh. In other words, the so-called Adaptive Finite Element Method (adaptivity below, for brevity) was used on the second stage of the two-stage numerical procedure of [2-6]. While adaptivity is well known for classical forward problems for PDEs, the first publication for the case of a CIP was [1].

3 Example: a Coefficient Inverse Problem for a Hyperbolic PDE

We now consider a CIP for which a globally convergent two-stage numerical procedure was developed in [2-6]. Recall that on the first stage the globally convergent numerical method of [2,3] provides a guaranteed good first approximation for the exact solution.
This approximation is the key input for any subsequent locally convergent method. Next, a locally convergent adaptivity technique refines the approximation obtained on the first stage. So, based on Theorem 2, we provide in this section an explanation of why this refinement became possible.

3.1 A Coefficient Inverse Problem and its uniqueness

Let Ω ⊂ R^k, k = 2, 3 be a bounded domain with boundary ∂Ω ∈ C³. Consider a function c(x) satisfying the following conditions:

c(x) ∈ [1, d] in Ω, d = const. > 1,  c(x) = 1 for x ∈ R^k \ Ω,  (18)

c(x) ∈ C²(R^k).  (19)

Let the point x_0 ∉ Ω. Consider the solution of the following Cauchy problem

c(x) u_tt = Δu in R^k × (0, ∞),  (20)

u(x, 0) = 0,  u_t(x, 0) = δ(x − x_0).  (21)

Equation (20) governs the propagation of both acoustic and electromagnetic waves. In the acoustical case 1/√c(x) is the sound speed. In the 2-D case of EM wave propagation, the dimensionless coefficient c(x) = ε_r(x), where ε_r(x) is the relative dielectric function of the medium, see [8], where this equation was derived from the Maxwell equations in the 2-D case. It should also be pointed out that although equation (20) cannot be derived from the Maxwell system in the 3-D case for ε_r(x) ≠ const., it was still successfully applied to imaging from experimental data, and a very good accuracy of reconstruction was consistently observed [5,16], including even the blind study of [16]. We consider the following

Inverse Problem 1. Suppose that the coefficient c(x) satisfies (18) and (19), where the number d > 1 is given. Assume that the function c(x) is unknown in the domain Ω. Determine the function c(x) for x ∈ Ω, assuming that the following function g(x, t) is known for a single source position x_0 ∉ Ω,

u(x, t) = g(x, t),  ∀(x, t) ∈ ∂Ω × (0, ∞).  (22)

Since the function c(x) = 1 outside of the domain Ω, we can consider (20)-(22) as the initial boundary value problem in the domain (R^k \ Ω) × (0, ∞). Hence, the function u(x, t) can be uniquely determined in this domain. Hence, the following function p(x, t) is known,

∂_n u(x, t) = p(x, t),  ∀(x, t) ∈ ∂Ω × (0, ∞),  (23)

where n is the unit outward normal vector at ∂Ω. This is an inverse problem with single measurement data, since the initial condition is not varied here: only a single source location is involved. The case of a single measurement is the most economical one, as opposed to the case of infinitely many measurements. It is yet unknown how to prove a uniqueness theorem for this inverse problem. There exists a general method of proving such theorems for a broad class of CIPs with single measurement data for many PDEs. This method is based on Carleman estimates; it was first introduced in [7,11] (also, see [12-15]) and has become quite popular since then, see, e.g., the recent survey [19] with many references. In fact, currently this is the single method enabling proofs of uniqueness theorems for CIPs with single measurement data. However, this technique cannot handle the case of, e.g., the δ-function in the initial condition, as in (21). Instead, it can handle only the case when at least one initial condition is not zero in the entire domain of interest. We use one of the results of [13,14] in this section. Fix a bounded domain Ω_1 ⊂ R^k with ∂Ω_1 ∈ C³ such that Ω ⊂⊂ Ω_1 and x_0 ∈ Ω_1 \ Ω. Consider a small neighborhood N(∂Ω_1) of the boundary ∂Ω_1 such that x_0 ∉ N(∂Ω_1). Let the function χ(x) ∈ C^∞(Ω̄_1) be such that

χ(x) ≥ 0,  χ(x) = 1 for x ∈ Ω_1 \ N(∂Ω_1),  χ(x) = 0 for x ∈ ∂Ω_1.  (24)

The existence of such functions is known from the Real Analysis course.
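A standard example of such a cutoff function, given here only for illustration (it is not the specific choice made in [2-6]): if Ω_1 is the ball {|x| < R} and N(∂Ω_1) = {x : R − h < |x| < R} for a small h > 0, one can take χ(x) = ψ(|x|), where

ψ(r) = η(R − r) / ( η(R − r) + η(r − (R − h)) ),  η(t) = e^{−1/t} for t > 0,  η(t) = 0 for t ≤ 0.

Then χ ∈ C^∞(Ω̄_1), χ ≥ 0, χ(x) = 1 for |x| ≤ R − h and χ(x) = 0 for |x| = R, so that the conditions in (24) hold for this geometry.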
Let ε > 0 be a sufficiently small number. Consider the following approximation of the function δ(x − x_0) in the sense of distributions,

f_ε(x − x_0) = C(ε, χ, Ω_1) exp( −|x − x_0|²/ε² ) χ(x),  (25)

where the constant C(ε, χ, Ω_1) > 0 is chosen such that

∫_{Ω_1} f_ε(x − x_0) dx = 1.  (26)

We approximate the function u by replacing the initial conditions in (21) with

u_ε(x, 0) = 0,  ∂_t u_ε(x, 0) = f_ε(x − x_0).  (27)

Let u_ε(x, t) be the solution of the Cauchy problem (20), (27) such that u_ε(x, t) ∈ H²(Φ × (0, T)) for any bounded domain Φ ⊂ R^k. It follows from the discussion in §2 of Chapter 4 of [17] that such a solution exists, is unique and coincides with the "target" solution of the Cauchy problem (20), (27). Consider a number T > 0, which we will define later. Then, due to the finite speed of wave propagation (Theorem 2.2 in §2 of Chapter 4 of [17]), there exists a domain Ω_2 = Ω_2(T) with ∂Ω_2 ∈ C³ such that Ω ⊂⊂ Ω_1 ⊂⊂ Ω_2,

u_ε |_{∂Ω_2 × (0, T)} = 0  (28)

and

u_ε(x, t) = 0 in (R^k \ Ω_2) × (0, T).  (29)

Therefore, we can consider the function u_ε(x, t) as the solution of the initial boundary value problem (20), (27), (28) in the domain Q_T = Ω_2 × (0, T). Hence, repeating the arguments given after the proof of Theorem 4.1 in §4 of Chapter 4 of [17], we obtain the following

Lemma 1. Let the function c(x) satisfy conditions (18), (19) and, in addition, let c ∈ C^5(R^k). Let the domains Ω, Ω_1 satisfy the above conditions. Let the function f_ε(x − x_0) satisfy (25), (26), where the function χ(x) satisfies (24). Let the number T > 0. Let the domain Ω_2 = Ω_2(T) with ∂Ω_2 ∈ C² be such that (29) is true for the solution u_ε(x, t) ∈ H²(Φ × (0, T)) (for any bounded domain Φ ⊂ R^k) of the Cauchy problem (20), (27). Then in fact the function u_ε ∈ H^6(Q_T). Hence, since k = 2, 3, then by the embedding theorem u_ε ∈ C³(Q̄_T).

We need the smoothness condition u_ε ∈ C³(Q̄_T) of this lemma because the uniqueness Theorem 3 requires it.

Theorem 3 ([13,14]). Assume that the function f_ε(x − x_0) satisfies (25), (26) and the function c(x) satisfies conditions (18), (19). In addition, assume that

1 + (x, ∇c(x))/2 ≥ c = const. > 0 in Ω.  (30)

Suppose that boundary functions h_0(x, t), h_1(x, t) are given for (x, t) ∈ ∂Ω × (0, T_1), where the number T_1 is defined in the next sentence. Then one can choose a sufficiently large number T_1 = T_1(Ω, c, d) > 0 such that for any T ≥ T_1 there exists at most one pair of functions (c, v) such that v ∈ C³(Ω̄ × [0, T]) and

c(x) v_tt = Δv in Ω × (0, T),
v(x, 0) = 0,  v_t(x, 0) = f_ε(x − x_0),
v |_{∂Ω×(0,T)} = h_0(x, t),  ∂_n v |_{∂Ω×(0,T)} = h_1(x, t).

To apply this theorem to our specific case, we assume that the following functions g_ε, p_ε are given instead of the functions g and p in (22), (23),

u_ε |_{∂Ω×(0,T)} = g_ε(x, t),  ∂_n u_ε |_{∂Ω×(0,T)} = p_ε(x, t).  (31)

Hence, using Theorem 3 and Lemma 1, we obtain the following uniqueness result.

Theorem 4. Suppose that the function c(x) satisfies conditions (18), (19), c ∈ C^5(R^k), and also let condition (30) be true. Assume that all other conditions of Lemma 1 are satisfied, except that now we assume that T is an arbitrary number such that T ≥ T_1 = T_1(Ω, c, d), where T_1 was defined in Theorem 3. Then there exists at most one pair of functions (c, u_ε) satisfying (20), (27), (31).

Since in Theorem 4 we have actually replaced δ(x − x_0) in (21) with the function f_ε(x − x_0), we cannot apply Theorem 2 directly to the above inverse problem.
Instead, we formulate

Inverse Problem 2. Suppose that the coefficient c(x) satisfies (18), (19), (30), where the numbers d, c > 1 are given. In addition, let the function c ∈ C^5(R^k). Let the function f_ε(x − x_0) satisfy the conditions of Lemma 1. Let T_1 = T_1(Ω, c, d) > 0 be the number chosen in Theorem 3, and let T ≥ T_1 be an arbitrary number. Consider the solution u_ε(x, t) ∈ H²(Φ × (0, T)) (for any bounded domain Φ ⊂ R^k) of the following Cauchy problem

c(x) ∂_t² u_ε = Δu_ε in R^k × (0, T),  (32)

u_ε(x, 0) = 0,  ∂_t u_ε(x, 0) = f_ε(x − x_0).  (33)

Assume that the coefficient c(x) is unknown in the domain Ω. Determine c(x) for x ∈ Ω, assuming that the function u_ε(x, t) is given for (x, t) ∈ ∂Ω × (0, T).

For the sake of completeness we now formulate

Theorem 5. Assume that the conditions of Inverse Problem 2 are satisfied. Then there exists at most one solution (c, u_ε) of this problem.

Proof. By Lemma 1 the solution of the problem (32), (33) satisfies u_ε ∈ C³(Ω̄ × [0, T]). Since the function c(x) = 1 in R^k \ Ω, then, as was shown above, one can uniquely determine the function p_ε(x, t) = ∂_n u_ε |_{∂Ω×(0,T)}. Hence, Theorem 4 leads to the desired result.

3.2 Using Theorems 2-5

Following (5), we now construct the regularization functional J_α(c). Choose a number T ≥ T_1(Ω, c, d) > 0, where T_1 was chosen in Theorems 3, 4. Let Ω_2 = Ω_2(T) be the domain chosen in Lemma 1. Because of (29), it is sufficient to work with the domains Ω, Ω_2 rather than with Ω and R^k. In doing so, we must make sure that all functions {c_n}_{n=1}^∞, which will replace the sequence {x_n}_{n=1}^∞ in (7), are such that c_n ∈ C^5(Ω̄_2), satisfy condition (30) and also

c_n ∈ [1, d],  c_n(x) = 1 in Ω_2 \ Ω.

To make sure that c_n ∈ C^5(Ω̄_2), we use the embedding theorem for k = 2, 3. Hence, in our case the Hilbert space H^(1) = H^7(Ω_2) ⊂ C^5(Ω̄_2). The Hilbert space H_1^(1) should be such that its norm is stronger than the norm in H^(1). Hence, we set H_1^(1) := H^8(Ω_2). Obviously H̄_1^(1) = H^(1) and any bounded set in H_1^(1) is a precompact set in H^(1). We define the set G^(1) ⊂ H^(1) as

G^(1) = { c ∈ H^(1) : c ∈ [1, d], c(x) = 1 in Ω_2 \ Ω, ‖∇c‖_{C(Ω̄)} ≤ A and (30) holds },  (34)

where A > 0 is a constant such that

A ≥ (c + 0.5) / max_{Ω̄} |x|.

Hence, the inequality ‖∇c‖_{C(Ω̄)} ≤ A does not contradict (30). Clearly, G^(1) is a closed set which can be obtained as the closure of an open set. To correctly introduce the operator F, we should carefully work with appropriate Hilbert spaces. This is because we should keep in mind the classical theory of hyperbolic initial boundary value problems, which guarantees smoothness of solutions of these problems [17]. Consider first the operator F̂ : G^(1) → H²(Q_T) defined as F̂(c) = u_ε(x, t, c), (x, t) ∈ Q_T, where u_ε(x, t, c) is the solution of the problem (32), (33) with this specific coefficient c.

Lemma 2. Let T ≥ T_1(Ω, c, d) > 0, where T_1 was chosen in Theorems 3 and 4. Let Ω_2 = Ω_2(T) be the domain chosen in Lemma 1 and Q_T = Ω_2 × (0, T). Then the operator F̂ is Lipschitz continuous on G^(1).

Proof. Consider two functions c_1, c_2 ∈ G^(1). Denote c̃ = c_1 − c_2, ũ(x, t) = u_ε(x, t, c_1) − u_ε(x, t, c_2). Then (28), (32) and (33) imply

c_1 ũ_tt − Δũ = −c̃ ∂_t² u_ε(x, t, c_2) in Q_T,  ũ(x, 0) = ũ_t(x, 0) = 0,  ũ |_{∂Ω_2×(0,T)} = 0.  (35)

Let f(x, t) = −c̃ ∂_t² u_ε(x, t, c_2) be the right hand side of equation (35). Since in fact u_ε ∈ H^6(Q_T) (Lemma 1), then

∂_t^k f ∈ L_2(Q_T),  k = 0, ..., 4.  (36)
Hence, using Theorem 4.1 of Chapter 4 of [17], we obtain

‖ũ‖_{H²(Q_T)} ≤ B_1 ‖c̃‖_{C(Ω̄)} ( ‖∂_t² u_ε(x, t, c_2)‖_{L_2(Q_T)} + ‖∂_t³ u_ε(x, t, c_2)‖_{L_2(Q_T)} ),  (37)

where the constant B_1 = B_1(A, d) > 0 depends only on the numbers A, d. By the embedding theorem H^7(Ω) ⊂ C^5(Ω̄) and ‖·‖_{C^5(Ω̄)} ≤ C‖·‖_{H^7(Ω)} with a constant C > 0 depending only on Ω. Since c̃ ∈ H^7(Ω) and the norm ‖c̃‖_{C(Ω̄)} is weaker than the norm ‖c̃‖_{C^5(Ω̄)}, it is now sufficient to estimate the norms ‖∂_t² u_ε(x, t, c_2)‖_{L_2(Q_T)}, ‖∂_t³ u_ε(x, t, c_2)‖_{L_2(Q_T)} from above uniformly for all functions c_2 ∈ G^(1), i.e. by a constant,

‖∂_t² u_ε(x, t, c_2)‖_{L_2(Q_T)} + ‖∂_t³ u_ε(x, t, c_2)‖_{L_2(Q_T)} ≤ B_2,

where B_2 = B_2(G^(1)) > 0 is a constant depending only on the set G^(1). The latter estimate follows from Theorem 4.1 of Chapter 4 of [17] and the discussion after it.

Remark 2. Using (37) as well as the discussion presented after Theorem 4.1 of Chapter 4 of [17], one might try to estimate ũ in a stronger norm. However, since the function c̃ ∂_t³ u_ε(x, 0, c_2) = c̃ Δf_ε(x − x_0) ≠ 0 in Ω, this attempt would require a higher smoothness of the functions c_1, c̃. In any case, the estimate (37) is sufficient for our goal.

We define the operator F^(1) : G^(1) → L_2(∂Ω × (0, T)) as

F^(1)(c)(x, t) := g_ε = u_ε |_{∂Ω×(0,T)} ∈ L_2(∂Ω × (0, T)).  (38)

Lemma 3. The operator F^(1) is Lipschitz continuous and one-to-one on the set G^(1).

Proof. The Lipschitz continuity follows immediately from the trace theorem and Lemma 2. We now prove that F^(1) is one-to-one. Suppose that F^(1)(c_1) = F^(1)(c_2). Hence, u_ε(x, t, c_1)|_{∂Ω×(0,T)} = u_ε(x, t, c_2)|_{∂Ω×(0,T)}. Let ũ(x, t) = u_ε(x, t, c_1) − u_ε(x, t, c_2). Since c_1 = c_2 = 1 in R^k \ Ω, we have

ũ_tt − Δũ = 0 in (R^k \ Ω) × (0, T),  ũ(x, 0) = ũ_t(x, 0) = 0 in R^k \ Ω,  ũ |_{∂Ω×(0,T)} = 0.

Hence, ũ(x, t) = 0 in (R^k \ Ω) × (0, T). Hence, we obtain

ũ |_{∂Ω×(0,T)} = ∂_n ũ |_{∂Ω×(0,T)} = 0.  (39)

Since by Lemma 1 both functions u_ε(x, t, c_1), u_ε(x, t, c_2) ∈ C³(Ω̄ × [0, T]), then (39) and Theorem 4 imply that c_1(x) ≡ c_2(x).

Suppose that there exists the exact solution c* ∈ H_1^(1) ∩ G^(1) of the equation

F^(1)(c*) = g*(x, t) := u_ε(x, t, c*) |_{∂Ω×(0,T)}.  (40)

Then for a given function g*(x, t) this solution is unique by Lemma 3. Let the function g̃(x, t) ∈ L_2(∂Ω × (0, T)) be such that

g̃(x, t) = u_ε(x, t, c*) |_{∂Ω×(0,T)} + g_δ(x, t),  (x, t) ∈ ∂Ω × (0, T),  (41)

where the function g_δ(x, t) represents the error in the data,

‖g_δ‖_{L_2(∂Ω×(0,T))} ≤ δ.  (42)

Let the function c_0 ∈ H_1^(1) ∩ G^(1) be a first guess for c*. We now introduce the regularization functional J_α^(1) as

J_α^(1)(c) = ∫_0^T ∫_{∂Ω} (u_ε(x, t, c) − g̃(x, t))² dS_x dt + α‖c − c_0‖²_{H^8(Ω)},  c, c_0 ∈ G^(1),  (43)

where the dependence α = α(δ) = δ^{2µ}, µ ∈ (0, 1) is the same as in (9), see also Remark 1. Let m = m(δ) = inf_{H_1^(1)∩G^(1)} J_α^(1)(c) and let {c_n}_{n=1}^∞ ⊂ H_1^(1) ∩ G^(1) be the corresponding minimizing sequence, lim_{n→∞} J_α^(1)(c_n) = m(δ). Similarly to (10) we introduce the set K^(1)(δ, c_0) ⊂ H_1^(1), which is a precompact set in H^(1),

K^(1)(δ, c_0) = { c ∈ G^(1) : ‖c‖_1 ≤ √( δ^{2(1−µ)} + ‖c* − c_0‖_1² ) + ‖c_0‖_1 }.  (44)

Let K̄^(1) = K̄^(1)(δ, c_0) ⊂ H^(1) be the closure of the set K^(1)(δ, c_0) in the norm of the space H^(1). Hence, K̄^(1)(δ, c_0) is a closed compact set in the space H^(1).

Theorem 6. Let the Hilbert spaces H^(1), H_1^(1) be the ones defined above and the set G^(1) be the one defined in (34). Let the number T_1 = T_1(Ω, c, d) > 0 be the one chosen in Theorems 3, 4 and T ≥ T_1.
Assume that there exists the exact solution c* ∈ H_1^(1) ∩ G^(1) of equation (40) and let c_0 ∈ H_1^(1) ∩ G^(1) be a first approximation for this solution. Suppose that (41) and (42) hold. Let m = m(δ) = inf_{H_1^(1)∩G^(1)} J_α^(1)(c) and let {c_n}_{n=1}^∞ ⊂ H_1^(1) ∩ G^(1) be the corresponding minimizing sequence, lim_{n→∞} J_{α(δ)}^(1)(c_n) = m(δ), where the functional J_{α(δ)}^(1) is defined in (43) and the dependence α = α(δ) is given in (9) (also, see Remark 1). Then for any number ξ ∈ (0, 1) there exists a sufficiently small number δ_0 = δ_0(ξ) ∈ (0, 1) such that for all δ ∈ (0, δ_0) the following inequality holds

‖c_n − c*‖_{H^(1)} ≤ ξ‖c_0 − c*‖_{H_1^(1)}.

In addition, if a regularized solution c_{α(δ)} exists for a certain δ ∈ (0, δ_0), i.e. J_{α(δ)}(c_{α(δ)}) = m(δ), then

‖c_{α(δ)} − c*‖_{H^(1)} ≤ ξ‖c* − c_0‖_{H_1^(1)}.

Proof. Let K̄^(1) be the above closed compact set. It follows from Theorem 1 and Lemma 3 that the operator (F^(1))^{−1} : F^(1)(K̄^(1)) → K̄^(1) exists and is continuous. Let ω_{F^(1)} be its modulus of continuity. Then, similarly to the proof of Theorem 2, we obtain

‖c_n − c*‖_{H^(1)} ≤ ω_{F^(1)}( √( δ² + δ^{2µ}‖c* − c_0‖²_{H_1^(1)} ) + δ ).

The rest of the proof repeats the corresponding part of the proof of Theorem 2.

Remarks 3.

1. When applying the technique of [7,11-15] for proofs of uniqueness theorems for CIPs, like, e.g., Theorem 3, it is often possible to obtain Hölder or even Lipschitz estimates for the modulus of continuity of the corresponding operator. The Hölder estimate means ω_F(z) ≤ Cz^ν, ν ∈ (0, 1), and the Lipschitz estimate means ω_F(z) ≤ Cz, with a constant C > 0. This is because Carleman estimates imply at least Hölder estimates, see Chapter 2 in [14]. This idea was used in [9] for a purpose which is different from the one of this paper (see some comments in the Introduction). In particular, Lipschitz stability for a similar inverse problem for the equation u_tt = div(a(x)∇u) was obtained in [15]. So, in the case of, e.g., Lipschitz stability and under the reasonable assumption that δ ≤ δ^µ‖c* − c_0‖²_{H_1^(1)} (see (4)), the last estimate of the proof of Theorem 6 would become

‖c_n − c*‖_{H^(1)} ≤ Cδ^µ‖c* − c_0‖²_{H_1^(1)}.

Since it is usually assumed that δ^µ ≪ 1, this is an additional indication of the fact that the accuracy of a regularized solution is significantly better than the accuracy of the first guess c_0, even for a single value of the regularization parameter, as opposed to the limiting case δ, α(δ) → 0 of the classical regularization theory. The only reason why we do not provide such detailed estimates here is that it would be quite space consuming to do so.

2. A conclusion which is very similar to the above first remark can be drawn for the less restrictive finite dimensional case considered in subsection 3.3. Since all norms in a finite dimensional space are equivalent, in this case the norms ‖c_n − c*‖_{H^(1)}, ‖c* − c_0‖_{H_1^(1)} can be replaced with ‖c_n − c*‖_{L_2(Ω)}, ‖c* − c_0‖_{L_2(Ω)}, which is more convenient, see Theorem 7 in subsection 3.3.

3. The smoothness requirement c_n ∈ H^8(Ω_2) is an inconvenient one. However, given the absence of proper uniqueness results for the original Inverse Problem 1, there is nothing that can be done about this at the moment. Also, this smoothness requirement is imposed only to make sure that Lemma 1 is valid, and we need the smoothness of Lemma 1 for Theorem 3. In the next subsection we present a scenario which is more realistic from the computational standpoint.
However, instead of relying on the rigorously established (in [13,14]) uniqueness Theorem 3, we only assume in subsection 3.3 that uniqueness holds.

3.3 The finite dimensional case

Because of a number of inconveniences of the infinite dimensional case mentioned in subsection 2.2 and in Remarks 3, we now consider the finite dimensional case, which is more realistic for computations. Fix a certain sufficiently large number T > 0 and let Ω_2 = Ω_2(T) be the domain chosen in Lemma 1. Since in the works [2-6,16] and in the numerical section 4 only standard triangular or tetrahedral finite elements are applied, and these elements form a finite dimensional subspace of piecewise linear functions in the space L_2(Ω), we assume in this subsection that

c ∈ C(Ω̄_2) ∩ H^1(Ω_2) and the derivatives c_{x_i} are bounded in Ω_2, i = 1, ..., k,  (45)

c(x) ∈ [1, d] in Ω,  c(x) = 1 in Ω_2 \ Ω.  (46)

Let H^(2) ⊂ L_2(Ω_2) be a finite dimensional subspace of the space L_2(Ω_2) in which all functions satisfy conditions (45). Define the set G^(2) as

G^(2) = { c ∈ H^(2) satisfying (46) }.  (47)

Since the norms ‖·‖_{C(Ω̄)} and ‖·‖_{L_2(Ω)} are equivalent in the finite dimensional space H^(2) and G^(2) is a closed bounded set in C(Ω̄), then G^(2) is also a closed set in H^(2). Let θ ∈ (0, 1) be a sufficiently small number. We follow [4,6] in replacing the function δ(x − x_0) in (21) with the function δ_θ(x − x_0),

δ_θ(x − x_0) = C_θ exp( 1/( |x − x_0|² − θ² ) ) for |x − x_0| < θ,  δ_θ(x − x_0) = 0 for |x − x_0| ≥ θ,

where the constant C_θ is such that

∫_{R^k} δ_θ(x − x_0) dx = 1.

We assume that θ is so small that

δ_θ(x − x_0) = 0 for x ∈ Ω ∪ (R^k \ Ω_1).  (48)

Let T > 0 be a certain number. For any function c ∈ G^(2) consider the solution u_θ(x, t) of the following Cauchy problem

c(x) ∂_t² u_θ = Δu_θ in R^k × (0, T),  (49)

u_θ(x, 0) = 0,  ∂_t u_θ(x, 0) = δ_θ(x − x_0).  (50)

Similarly to the Cauchy problem (20), (27), we conclude that, because of (48), there exists a unique solution of the problem (49), (50) such that u_θ ∈ H²(Φ × (0, T)) for any bounded domain Φ ⊂ R^k. In addition, u_θ(x, t) = 0 in (R^k \ Ω_2) × (0, T). Similarly to (38) we introduce the operator F^(2) : G^(2) → L_2(∂Ω × (0, T)) as

F^(2)(c)(x, t) := g_θ = u_θ |_{∂Ω×(0,T)} ∈ L_2(∂Ω × (0, T)).  (51)

Analogously to Lemma 3 one can prove

Lemma 4 (see Lemma 7.1 of [6] for an analogous result). The operator F^(2) is Lipschitz continuous.

Unlike the previous two subsections, there is no uniqueness result for the operator F^(2). The main reason for this is that δ_θ(x − x_0) = 0 in Ω, unlike the function f_ε(x − x_0); see the Introduction for the discussion of uniqueness results. In addition, the function c(x) is not sufficiently smooth now. Hence, we simply impose the following

Assumption. The operator F^(2) : G^(2) → L_2(∂Ω × (0, T)) is one-to-one.

Similarly to (40)-(42) we assume that there exists the exact solution c* ∈ G^(2) of the equation

F^(2)(c*) = g*(x, t) := u_θ(x, t, c*) |_{∂Ω×(0,T)}.  (52)

It follows from the Assumption that, for a given function g*(x, t), this solution is unique. Let the function g(x, t) ∈ L_2(∂Ω × (0, T)). We assume that

g(x, t) = u_θ(x, t, c*) |_{∂Ω×(0,T)} + g_δ(x, t),  (x, t) ∈ ∂Ω × (0, T),  (53)

where the function g_δ(x, t) represents the error in the data,

‖g_δ‖_{L_2(∂Ω×(0,T))} ≤ δ.  (54)

Let α = α(δ) = δ^{2µ}, µ ∈ (0, 1) be the same as in (9), see also Remark 1. We now introduce the regularization functional J_α^(2) as

J_α^(2)(c) = ∫_0^T ∫_{∂Ω} (u_θ(x, t, c) − g(x, t))² dS_x dt + α‖c − c_0‖²_{L_2(Ω)},  c, c_0 ∈ G^(2).  (55)
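A small computational aside about the mollified source δ_θ introduced above (our own sketch, not code from [2-6]): the constant C_θ is fixed by the normalization condition, and since the bump is radially symmetric its computation reduces to a one-dimensional quadrature. The values θ = 0.1 and k = 3 below are illustrative assumptions.

import numpy as np
from math import gamma, pi
from scipy.integrate import quad

theta = 0.1    # support radius of the bump delta_theta (illustrative value)
k = 3          # space dimension

def profile(r):
    # Radial profile of the un-normalized bump: exp(1/(r^2 - theta^2)) for r < theta, 0 otherwise.
    return np.exp(1.0 / (r ** 2 - theta ** 2)) if r < theta else 0.0

# The surface area of the unit sphere in R^k reduces the k-dimensional integral to an integral in r.
S_k = 2.0 * pi ** (k / 2) / gamma(k / 2)
integral, _ = quad(lambda r: S_k * r ** (k - 1) * profile(r), 0.0, theta)
C_theta = 1.0 / integral
print("C_theta =", C_theta)

Note that the height of the un-normalized bump is exp(−1/θ²), so C_θ grows extremely fast as θ → 0; for θ = 0.1 the constant is astronomically large, which is harmless here because only the product C_θ exp(1/(|x − x_0|² − θ²)) enters the forward problem.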
Obviously, this functional is more convenient to work with than the functional J_α^(1)(c) in (43). This is because the H^8(Ω) norm of (43) is replaced with the L_2(Ω) norm in (55). However, unlike the previous subsection, where the one-to-one property of the operator F^(1) was rigorously guaranteed, here we only assume this property for the operator F^(2).

Theorem 7. Let H^(2) ⊂ L_2(Ω_2) be the above finite dimensional subspace of the space L_2(Ω_2), in which all functions satisfy conditions (45). Let the set G^(2) ⊂ H^(2) be the one defined in (47). Let the function u_θ be the solution of the Cauchy problem (49), (50) such that u_θ ∈ H²(Φ × (0, T)) for any bounded domain Φ ⊂ R^k. Let F^(2) be the operator defined in (51) and let the Assumption be true. Assume that there exists the exact solution c* ∈ G^(2) of equation (52) and let c_0 ∈ G^(2) be a first guess for this solution. Suppose that (53) and (54) hold. Let m = m(δ) = min_{G^(2)} J_α^(2)(c). Then a regularized solution c_{α(δ)} ∈ G^(2) exists, i.e.

m = m(δ) = min_{G^(2)} J_α^(2)(c) = J_α^(2)(c_{α(δ)}).  (56)

Also, for any number ξ ∈ (0, 1) there exists a sufficiently small number δ_0 = δ_0(ξ) ∈ (0, 1) such that for all δ ∈ (0, δ_0) the following inequality holds

‖c_{α(δ)} − c*‖_{L_2(Ω)} ≤ ξ‖c* − c_0‖_{L_2(Ω)}.

Proof. Since G^(2) ⊂ H^(2) is a closed bounded set, the existence of a minimizer c_{α(δ)} of the functional J_α^(2)(c) is obvious, even for any value of α > 0. The set G^(2) ⊂ H^(2) is a closed compact set. Hence, it follows from Theorem 1 and the Assumption that the operator (F^(2))^{−1} : F^(2)(G^(2)) → G^(2) exists and is continuous. Let ω_{F^(2)} be its modulus of continuity. Then, similarly to the proof of Theorem 2, we obtain

‖c_{α(δ)} − c*‖_{L_2(Ω)} ≤ ω_{F^(2)}( √( δ² + δ^{2µ}‖c* − c_0‖²_{L_2(Ω)} ) + δ ).

The rest of the proof is the same as the corresponding part of the proof of Theorem 2.

Although the existence of a regularized solution is established in Theorem 7, it is still unclear how many such solutions exist and how to obtain them in practical computations. We now show that if the function c_0 is sufficiently close to the exact solution c*, then the regularized solution is unique and can be found by a version of the gradient method without the risk of being trapped in a local minimum or a ravine. In doing so we use results of [6]. Let µ_1 ∈ (0, 1) be a number such that

2µ ∈ (0, min(µ_1, 2(1 − µ_1))),  (57)

where the number µ was defined in (9). Assume that ‖c_0 − c*‖_{L_2(Ω)} ≤ δ^{µ_1}. Let the number β ∈ (0, 1) be independent of δ. Denote

V_{(1+√2)δ^{µ_1}}(c*) = { c ∈ G^(2) : ‖c − c*‖_{L_2(Ω)} < (1 + √2)δ^{µ_1} },
V_{βδ^{2µ}}(c_{α(δ)}) = { c ∈ G^(2) : ‖c − c_{α(δ)}‖_{L_2(Ω)} < βδ^{2µ} }.  (58)

Note that it follows from (57) that βδ^{2µ} ≫ δ^{µ_1} for sufficiently small δ, since lim_{δ→0} βδ^{2µ}/δ^{µ_1} = ∞. The following theorem can be derived from a simple reformulation of Theorem 7.2 of [6] for our case.

Theorem 8. Assume that the conditions of Theorem 7 hold. Also, let (57) and (58) be true. Let c_{α(δ)} be a regularized solution defined in (56). Then there exist a sufficiently small number δ_1 ∈ (0, 1) and a number β ∈ (0, 1) independent of δ such that for any δ ∈ (0, δ_1] the set V_{(1+√2)δ^{µ_1}}(c*) ⊂ V_{βδ^{2µ}}(c_{α(δ)}), the functional J_{α(δ)}^(2)(c) is strongly convex on the set V_{βδ^{2µ}}(c_{α(δ)}) and has a unique minimizer c̄_{α(δ)} on the set V_{(1+√2)δ^{µ_1}}(c*). Furthermore, c̄_{α(δ)} = c_{α(δ)}. Hence, since by (58) c_0 ∈ V_{(1+√2)δ^{µ_1}}(c*), any gradient-like method of minimization of the functional J_{α(δ)}^(2)(c) which starts from the first guess c_0 converges to the regularized solution c_{α(δ)}.
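The practical content of Theorem 8 is that, once the first guess c_0 is close enough to c*, the functional is strongly convex near the regularized solution, so a plain gradient iteration started at c_0 cannot be trapped by local minima or ravines. The following minimal sketch (our own finite dimensional toy illustration; the mildly nonlinear map F below, the noise level and the step size are assumptions and do not represent the actual operator F^(2)) shows such an iteration.

import numpy as np

np.random.seed(1)
n = 10
A = np.random.randn(n, n) / np.sqrt(n)

def F(c):
    # A mildly nonlinear forward map standing in for the finite dimensional operator.
    return A @ c + 0.1 * np.sin(c)

def F_jac(c):
    return A + 0.1 * np.diag(np.cos(c))

c_star = np.random.randn(n)                               # "exact" coefficient
delta = 1e-3
g = F(c_star) + delta * np.random.randn(n) / np.sqrt(n)   # noisy data, as in (53), (54)
alpha = delta ** 0.5                                      # alpha(delta) = delta^(2*mu) with mu = 0.25
c0 = c_star + 0.05 * np.random.randn(n)                   # good first guess, close to c_star

def J(c):
    return np.sum((F(c) - g) ** 2) + alpha * np.sum((c - c0) ** 2)

def grad_J(c):
    return 2.0 * F_jac(c).T @ (F(c) - g) + 2.0 * alpha * (c - c0)

c = c0.copy()
for it in range(5000):          # plain gradient descent with a small fixed step
    c = c - 0.05 * grad_J(c)

print("J(c0) =", J(c0), "  J(final) =", J(c))
print("||c0 - c*|| =", np.linalg.norm(c0 - c_star), "  ||c - c*|| =", np.linalg.norm(c - c_star))

In runs of this kind J decreases towards its minimum over the neighborhood of c_0, so the final iterate approximates the regularized solution; by Theorem 7 that solution is, for sufficiently small δ, at least as accurate as the first guess.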
Remarks 4.

1. Thus, Theorems 7 and 8 emphasize once again the importance of obtaining a good first approximation for the exact solution, i.e. the importance of globally convergent numerical methods. Indeed, if such a good approximation is available, Theorem 8 guarantees convergence of any gradient-like method to the regularized solution, and Theorem 7 guarantees that this solution is more accurate than the first approximation.

2. As was mentioned in the Introduction, the global convergence theorems of [2,3], in combination with Theorems 7 and 8, guarantee that the two-stage numerical procedure of [2-6] converges globally. Another confirmation of this can be found in the first and second items of Remarks 3.

4 Numerical Example

We now briefly describe a numerical example for Inverse Problem 1. The sole purpose of this section is to illustrate how Theorems 7 and 8 work. Figures 1-a), 1-b) and 1-c) of this example are published in [4], and we refer to this reference for details. First we apply the globally convergent numerical method of [2,3]. As a result, we obtain a good first approximation for the solution of Inverse Problem 1. Next, we apply the locally convergent adaptivity technique for refinement. In doing so, we use the solution obtained on the first stage as the starting point. In this example we use an incident plane wave rather than the point source. This is because the case of the plane wave works better numerically than the point source. In fact, we have used the point source in our theory, both here and in [2-6], only to obtain a certain asymptotic behavior of the Laplace transform of the function u(x, t), see Lemma 2.1 in [2]. We need this behavior for some theoretical results. However, in the case of the plane wave we verify this asymptotic behavior computationally, see subsection 7.2 in [2]. As stated above, numerical studies were conducted using standard triangular finite elements when solving Inverse Problem 1. The big square in Figure 1-a) depicts the domain Ω. Two small squares display the two inclusions to be imaged. In this case we have

c(x) = 4 in the small squares,  c(x) = 1 outside of the small squares.  (59)

The plane wave falls from the top. The scattered wave g(x, t) = u |_{∂Ω×(0,T)} in (22) is known at the boundary of the big square. Multiplicative random noise at the 5% level was introduced in the function g(x, t). Hence, in our case δ ≈ 0.05. We have used α = 0.01 in (55). This value of the regularization parameter was chosen by trial and error. Our algorithm does not use knowledge of the background values of the function c(x), i.e. it does not use knowledge of c(x) outside of the small squares. Figure 1-b) displays the result of the performance of the first stage of our two-stage numerical procedure. The computed function c_glob(x) := c_0(x) has the following values: the maximal value of c_0(x) within each of the two imaged inclusions is 3.8. Hence, we have only a 5% error (4/3.8) in the imaged inclusion/background contrast, see (59). Also, c_0(x) = 1 outside of these imaged inclusions. Figure 1-c) displays the final image. It is obvious that the image of Figure 1-b) is refined, just as predicted by Theorem 7. Indeed, the locations of both imaged inclusions are accurate. Let c_α(x) be the computed coefficient. Its maximal value is max c_α(x) = 4 and it is achieved within both imaged inclusions. Also, c_α(x) = 1 outside of the imaged inclusions.
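For the record, a typical way to impose multiplicative random noise of the 5% level on simulated boundary data is sketched below; the exact noise model used in [4] may differ in details, and the array g here is only a placeholder for samples of g(x, t) on a boundary-time grid.

import numpy as np

sigma = 0.05                                   # 5% noise level, so that delta is approximately 0.05
g = np.random.randn(400, 128)                  # placeholder for samples of g(x,t) on a boundary-time grid
g_noisy = g * (1.0 + sigma * (2.0 * np.random.rand(*g.shape) - 1.0))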
Figure 1: See details in the text of section 4. a) The big square depicts the domain Ω. Two small squares display the two inclusions to be imaged, see (59) for the values of the function c(x). The plane wave falls from the top. The scattered wave is known at the boundary of the big square. b) The result obtained by the globally convergent first stage of the two-stage numerical procedure of [2-6]. c) The result obtained after applying the second stage. A very good refinement is achieved, just as predicted by Theorem 7. Both the locations of the two inclusions and the values of the function c(x) inside and outside of them are imaged with a very good accuracy.

Acknowledgments

This work was supported by: (1) The US Army Research Laboratory and US Army Research Office grant W911NF-08-1-0470, (2) the grant RFBR 09-01-00273a from the Russian Foundation for Basic Research, (3) the Swedish Foundation for Strategic Research (SSF) at the Gothenburg Mathematical Modeling Center (GMMC), and (4) the Visby Program of the Swedish Institute.

References

[1] L. Beilina and C. Johnson, A hybrid FEM/FDM method for an inverse scattering problem. In Numerical Mathematics and Advanced Applications - ENUMATH 2001, Springer-Verlag, 2001.

[2] L. Beilina and M.V. Klibanov, A globally convergent numerical method for a coefficient inverse problem, SIAM J. Sci. Comp., 31, 478-509, 2008.

[3] L. Beilina and M.V. Klibanov, Synthesis of global convergence and adaptivity for a hyperbolic coefficient inverse problem in 3D, J. Inverse and Ill-Posed Problems, 18, 85-132, 2010.

[4] L. Beilina and M.V. Klibanov, A posteriori error estimates for the adaptivity technique for the Tikhonov functional and global convergence for a coefficient inverse problem, Inverse Problems, 26, 045012, 2010.

[5] L. Beilina and M.V. Klibanov, Reconstruction of dielectrics from experimental data via a hybrid globally convergent/adaptive inverse algorithm, submitted for publication; a preprint is available online at http://www.ma.utexas.edu/mp_arc/

[6] L. Beilina, M.V. Klibanov and M.Yu. Kokurin, Adaptivity with relaxation for ill-posed problems and global convergence for a coefficient inverse problem, Journal of Mathematical Sciences, 167, 279-325, 2010.

[7] A.L. Buhgeim and M.V. Klibanov, Uniqueness in the large of a class of multidimensional inverse problems, Soviet Math. Doklady, 17, 244-247, 1981.

[8] M. Cheney and D. Isaacson, Inverse problems for a perturbed dissipative half-space, Inverse Problems, 11, 865-888, 1995.

[9] J. Cheng and M. Yamamoto, One new strategy for a priori choice of regularizing parameters in Tikhonov's regularization, Inverse Problems, 16, L31-L38, 2000.

[10] H.W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Boston, 2000.

[11] M.V. Klibanov, Uniqueness of solutions in the 'large' of some multidimensional inverse problems. In Non-Classical Problems of Mathematical Physics, Proc. of Computing Center of the Siberian Branch of USSR Academy of Science, Novosibirsk, 101-114, 1981.

[12] M.V. Klibanov, Inverse problems in the 'large' and Carleman bounds, Differential Equations, 20, 755-760, 1984.

[13] M.V. Klibanov, Inverse problems and Carleman estimates, Inverse Problems, 8, 575-596, 1992.

[14] M.V. Klibanov and A. Timonov, Carleman Estimates for Coefficient Inverse Problems and Numerical Applications, VSP, Utrecht, The Netherlands, 2004.

[15] M.V. Klibanov and M. Yamamoto, Lipschitz stability of an inverse problem for an acoustic equation, Applicable Analysis, 85, 515-538, 2006.

[16] M.V. Klibanov, M.A. Fiddy, L. Beilina, N. Pantong and J.
Schenk, Picosecond scale experimental verification of a globally convergent numerical method for a coefficient inverse problem, Inverse Problems, 26, 045003, 2010.

[17] O.A. Ladyzhenskaya, Boundary Value Problems of Mathematical Physics, Springer, Berlin, 1985.

[18] A.N. Tikhonov, A.V. Goncharsky, V.V. Stepanov and A.G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems, Kluwer, London, 1995.

[19] M. Yamamoto, Carleman estimates for parabolic equations and applications, Inverse Problems, 25, 123013, 2009.