IJMMS 2004:37, 1973–1996
PII. S0161171204306095
http://ijmms.hindawi.com
© Hindawi Publishing Corp.
OPTIMAL ORDER YIELDING DISCREPANCY PRINCIPLE FOR
SIMPLIFIED REGULARIZATION IN HILBERT SCALES:
FINITE-DIMENSIONAL REALIZATIONS
SANTHOSH GEORGE and M. THAMBAN NAIR
Received 13 June 2003
Simplified regularization using finite-dimensional approximations in the setting of Hilbert
scales has been considered for obtaining stable approximate solutions to ill-posed operator equations. The derived error estimates using an a priori and a posteriori choice of
parameters in relation to the noise level are shown to be of optimal order with respect to
certain natural assumptions on the ill-posedness of the equation. The results are shown to
be applicable to a wide class of spline approximations in the setting of Sobolev scales.
2000 Mathematics Subject Classification: 65R10, 65J10, 46E35, 47A50.
1. Introduction. Many of the inverse problems that occur in science and engineering
are ill posed, in the sense that a unique solution that depends continuously on the data
does not exist. A typical example of an ill-posed equation that often occurs in practical
problems, such as in geological prospecting, computer tomography, steel industry, and
so forth, is the Fredholm integral equation of the first kind (cf. [2, 6, 8]). Many such
problems can be put in the form of an operator equation Ax = y, where A : X → Y is a
bounded linear operator between Hilbert spaces X and Y with its range R(A) not closed
in Y .
Regularization methods are to be employed for obtaining a stable approximate solution for an ill-posed problem. Tikhonov regularization is a simple and widely used procedure to obtain stable approximate solutions to an ill-posed operator equation (2.1).
In order to improve the error estimates available in Tikhonov regularization, Natterer
[17] carried out error analysis in the framework of Hilbert scales. Subsequently, many
authors extended, modified, and generalized Natterer’s work to obtain error bounds
under various contexts (cf. Neubauer [18], Hegland [7], Schröter and Tautenhahn [20],
Mair [10], Nair et al. [16], and Nair [13, 15]). Finite-dimensional realizations of the Hilbert
scales approach have been considered by Engl and Neubauer [3].
If Y = X and A itself is a positive selfadjoint operator, then the simplified regularization introduced by Lavrentiev is better suited than Tikhonov regularization in terms of
speed of convergence and condition numbers of the resulting equations in the case of
finite-dimensional approximations (cf. Schock [19]).
In [4], the authors introduced the Hilbert scales variant of the simplified regularization and obtained error estimates under a priori and a posteriori parameter choice
strategies which are optimal in the sense of the “best possible worst error” with respect to a certain source set. Recently (cf. [5]), the authors considered a new discrepancy
principle yielding optimal rates which does not involve certain restrictive assumptions
as in [4]. The purpose of this paper is to obtain a finite-dimensional realization of the
results in [5].
2. Preliminaries. Let H be a Hilbert space and A : H → H a positive, bounded selfadjoint operator on H. The inner product and the corresponding norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Recall that A is said to be a positive operator if $\langle Ax, x\rangle \ge 0$ for every x ∈ H. For y ∈ R(A), the range of A, consider the operator equation
\[
Ax = y. \tag{2.1}
\]
Let x̂ be the minimal norm solution of (2.1). It is well known that if R(A) is not closed
in H, then the problem of solving (2.1) for x̂ is ill posed in the sense that small perturbations in the data y can cause large deviations in the solution. A prototype of an
ill-posed equation (2.1) is an integral equation of the first kind,
\[
\int_0^1 k(\xi,t)\,x(t)\,dt = y(\xi), \qquad 0 \le \xi \le 1, \tag{2.2}
\]
where k(·, ·) is a nondegenerate kernel which is square integrable, that is,
\[
\int_0^1\!\!\int_0^1 k(\xi,t)^2\,dt\,d\xi < \infty, \tag{2.3}
\]
satisfying k(ξ, t) = k(t, ξ) for all ξ, t in [0, 1], and such that the eigenvalues of the
corresponding integral operator $A : L^2[0,1] \to L^2[0,1]$,
\[
(Ax)(\xi) = \int_0^1 k(\xi,t)\,x(t)\,dt, \qquad 0 \le \xi \le 1, \tag{2.4}
\]
are all nonnegative (cf. [14]). For example, one of the important ill-posed problems
which arise in applications is the backward heat equation problem: the problem is to
determine the initial temperature ϕ0 := u(·, 0) from the measurements of the final
temperature ϕT := u(·, T ), where u(ξ, t) satisfies
\[
u_t - u_{\xi\xi} = 0, \qquad (\xi,t) \in (0,1)\times(0,T), \qquad
u(0,t) = u(1,t) = 0, \quad t \in [0,T]. \tag{2.5}
\]
We recall from elementary theory of partial differential equations that the solution
u(ξ, t) of the above heat equation is given by (cf. Weinberger [23])
\[
u(\xi,t) = \sum_{n=1}^{\infty} \hat{\varphi}_0(n)\, e^{-n^2\pi^2 t}\sin(n\pi\xi), \tag{2.6}
\]
where ϕ̂0 (n) for n ∈ N are the Fourier coefficients of the initial temperature ϕ0 (ξ) :=
u(ξ, 0). Hence,
\[
u(\xi,T) = \sum_{n=1}^{\infty} \hat{\varphi}_0(n)\, e^{-n^2\pi^2 T}\sin(n\pi\xi). \tag{2.7}
\]
The above equation can be written as
\[
\varphi_T(\xi) = \sum_{n=1}^{\infty} e^{-n^2\pi^2 T} \langle \varphi_0, u_n\rangle\, u_n(\xi)
\quad\text{with } u_n(\xi) = \sqrt{2}\,\sin(n\pi\xi). \tag{2.8}
\]
Thus the problem is to solve the operator equation
\[
A\varphi_0 = \varphi_T, \tag{2.9}
\]
where A : L2 [0, 1] → L2 [0, 1] is the operator defined by
\[
(A\varphi)(\xi) = \sum_{n=1}^{\infty} e^{-n^2\pi^2 T}\langle\varphi, u_n\rangle\, u_n(\xi)
= \int_0^1 k(\xi,t)\,\varphi(t)\,dt, \qquad 0 \le \xi \le 1, \tag{2.10}
\]
where
\[
k(\xi,t) := \sum_{n=1}^{\infty} e^{-n^2\pi^2 T}\, u_n(\xi)\,u_n(t). \tag{2.11}
\]
Note that the above integral operator is compact, positive, and selfadjoint with positive eigenvalues $e^{-n^2\pi^2 T}$ and corresponding eigenvectors $u_n(\cdot)$ for n ∈ N.
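Numerically, the ill-posedness of this example is drastic: the eigenvalues $e^{-n^2\pi^2 T}$ decay super-exponentially, so naive inversion of A amplifies the noise in the nth data coefficient by $e^{n^2\pi^2 T}$. The following minimal NumPy sketch illustrates this; the values T = 0.1, N = 10, the initial state, and the noise level are our illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (T, N, phi0, noise level are assumptions, not from the paper).
# In the basis u_n(xi) = sqrt(2) sin(n pi xi), the operator A of (2.10) is diagonal
# with eigenvalues exp(-n^2 pi^2 T); recovering phi0 from phi_T divides the n-th
# Fourier coefficient by that eigenvalue, so noise is amplified by its reciprocal.
T, N = 0.1, 10
n = np.arange(1, N + 1)
eig = np.exp(-(n * np.pi) ** 2 * T)          # eigenvalues of A

phi0 = 1.0 / n ** 2                          # a smooth "true" initial temperature
phiT = eig * phi0                            # exact data: A phi0 = phi_T
rng = np.random.default_rng(0)
phiT_noisy = phiT + 1e-8 * rng.standard_normal(N)

naive = phiT_noisy / eig                     # naive inversion of the diagonal system
print("noise amplification at n = 10:", 1.0 / eig[-1])   # about 7e42
print("error of naive inversion:", np.linalg.norm(naive - phi0))
```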
For considering the regularization of (2.1) in the setting of Hilbert scales, we consider
a Hilbert scale {Ht }t∈R generated by a strictly positive operator L : D(L) → H with its
domain D(L) dense in H satisfying
\[
\|Lx\| \ge \|x\|, \qquad x \in D(L). \tag{2.12}
\]
By the operator L being strictly positive, we mean that $\langle Lx, x\rangle > 0$ for all nonzero x ∈ D(L).
Recall (cf. [9]) that the space $H_t$ is the completion of $D := \bigcap_{k=0}^{\infty} D(L^k)$ with respect to the norm $\|x\|_t$ induced by the inner product
\[
\langle u, v\rangle_t = \langle L^t u, L^t v\rangle, \qquad u, v \in D. \tag{2.13}
\]
Moreover, if β ≤ γ, then the embedding $H_\gamma \hookrightarrow H_\beta$ is continuous, and therefore the norm $\|\cdot\|_\beta$ is also defined in $H_\gamma$ and there is a constant $c_{\beta,\gamma}$ such that
\[
\|x\|_\beta \le c_{\beta,\gamma}\,\|x\|_\gamma \qquad \forall x \in H_\gamma. \tag{2.14}
\]
An important inequality that we require in the analysis is the interpolation inequality
\[
\|x\|_\lambda \le \|x\|_r^{\theta}\,\|x\|_t^{1-\theta}, \qquad x \in H_t, \tag{2.15}
\]
where
\[
r \le \lambda \le t, \qquad \theta = \frac{t-\lambda}{t-r}, \tag{2.16}
\]
and the moment inequality
\[
\|B^u x\| \le \|B^v x\|^{u/v}\,\|x\|^{1-u/v}, \qquad 0 \le u \le v, \tag{2.17}
\]
where B is a positive selfadjoint operator (cf. [2]).
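Both inequalities are easy to sanity-check numerically. The sketch below verifies the moment inequality (2.17) for a randomly generated diagonal positive operator B; the spectrum, the vector, and the exponents u, v are arbitrary test choices of ours.

```python
import numpy as np

# Numerical check of the moment inequality (2.17) for a diagonal positive B.
# All data below are arbitrary test choices; the inequality holds for any
# positive selfadjoint B and 0 <= u <= v (it follows from Hoelder's inequality).
rng = np.random.default_rng(1)
spec = rng.uniform(0.1, 5.0, size=50)        # positive spectrum of B
x = rng.standard_normal(50)
u, v = 0.3, 1.1

lhs = np.linalg.norm(spec ** u * x)          # ||B^u x||
rhs = np.linalg.norm(spec ** v * x) ** (u / v) * np.linalg.norm(x) ** (1 - u / v)
assert lhs <= rhs + 1e-12
print(f"||B^u x|| = {lhs:.6f} <= {rhs:.6f}")
```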
We assume that the ill-posed nature of the operator A is related to the Hilbert scale $\{H_t\}_{t\in\mathbb{R}}$ according to the relation
\[
c_1\|x\|_{-a} \le \|Ax\| \le c_2\|x\|_{-a}, \qquad x \in H, \tag{2.18}
\]
for some positive reals a, c₁, and c₂.
For the example of the integral operator considered in (2.4), one may take L to be defined by
\[
Lx := \sum_{j=1}^{\infty} j^2 \langle x, u_j\rangle u_j, \tag{2.19}
\]
where $u_j(t) := \sqrt{2}\,\sin(j\pi t)$, j ∈ N, with domain
\[
D(L) := \Big\{ x \in L^2[0,1] : \sum_{j=1}^{\infty} j^4 \langle x, u_j\rangle^2 < \infty \Big\}. \tag{2.20}
\]
In this case, it can be seen that
\[
H_t = \Big\{ x \in L^2[0,1] : \sum_{j=1}^{\infty} j^{4t} \langle x, u_j\rangle^2 < \infty \Big\} \tag{2.21}
\]
and the constants a, c₁, and c₂ in (2.18) are given by a = 1, c₁ = c₂ = 1/π² (see Schröter and Tautenhahn [20, Section 4]).
The regularized approximation of x̂ considered in [4] is the solution of the well-posed equation
\[
(A + \alpha L^s)\, x_\alpha = y, \qquad \alpha > 0, \tag{2.22}
\]
where s is a fixed nonnegative real number. Note that if D(L) = H and L = I, then the above procedure is the simplified or Lavrentiev regularization.
Suppose the data y is known only approximately, say ỹ in place of y with ‖y − ỹ‖ ≤ δ for a known error level δ > 0. Then, in place of (2.22), we have
\[
(A + \alpha L^s)\, \tilde{x}_\alpha = \tilde{y}. \tag{2.23}
\]
It can be seen that the solution x̃α of the above equation is the unique minimizer of the function
\[
x \mapsto \langle Ax, x\rangle - 2\langle \tilde{y}, x\rangle + \alpha\langle L^s x, x\rangle, \qquad x \in D(L). \tag{2.24}
\]
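For the backward-heat example, the operator A of (2.10) and the operator L of (2.19) are simultaneously diagonalized by the basis $u_n$, so (2.23) decouples into scalar equations: the nth coefficient of x̃α is $\tilde{y}_n/(e^{-n^2\pi^2 T} + \alpha n^{2s})$. A minimal sketch, where T, N, s, α, and the noise level are again our illustrative choices:

```python
import numpy as np

# Sketch of simplified regularization in Hilbert scales, (A + alpha L^s) x = y~,
# in the common eigenbasis u_n, where A has eigenvalues exp(-n^2 pi^2 T) and
# L^s has eigenvalues n^(2s).  All numerical values are illustrative assumptions.
T, N, s = 0.1, 200, 2
n = np.arange(1, N + 1)
A_eig = np.exp(-(n * np.pi) ** 2 * T)
Ls_eig = n.astype(float) ** (2 * s)

phi0 = 1.0 / n ** 2                              # "true" solution coefficients
y = A_eig * phi0                                 # exact data
rng = np.random.default_rng(0)
y_noisy = y + 1e-8 * rng.standard_normal(N) / np.sqrt(N)

alpha = 1e-6
x_alpha = y_noisy / (A_eig + alpha * Ls_eig)     # solves (2.23) coefficientwise
print("regularized error:", np.linalg.norm(x_alpha - phi0))
```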
One of the crucial results for proving the results in [4, 5] as well as the results in this paper is the following proposition, where the functions f and g are defined by
\[
f(t) = \min\{c_1^t, c_2^t\}, \qquad g(t) = \max\{c_1^t, c_2^t\}, \qquad t \in \mathbb{R},\ |t| \le 1, \tag{2.25}
\]
respectively, with c₁, c₂ as in (2.18).
Proposition 2.1 (cf. [4, Proposition 3.1]). For s > 0 and |ν| ≤ 1,
\[
f\Big(\frac{\nu}{2}\Big)\|x\|_{-\nu(s+a)/2} \le \big\|A_s^{\nu/2} x\big\| \le g\Big(\frac{\nu}{2}\Big)\|x\|_{-\nu(s+a)/2}, \qquad x \in H, \tag{2.26}
\]
where $A_s = L^{-s/2} A L^{-s/2}$.
Using the above proposition, the following result has been proved by George and
Nair [4].
Theorem 2.2 (cf. [4, Theorem 3.2]). Suppose x̂ ∈ H_t, 0 < t ≤ s + a, α > 0, and x̃α is as in (2.23). Then
\[
\|\hat{x} - \tilde{x}_\alpha\| \le \varphi(s,t)\,\alpha^{t/(s+a)}\|\hat{x}\|_t + \psi(s)\,\alpha^{-a/(s+a)}\delta, \tag{2.27}
\]
where
\[
\varphi(s,t) = \frac{g\big((s-2t)/(2s+2a)\big)}{f\big(s/(2s+2a)\big)}, \qquad
\psi(s) = \frac{g\big(-s/(2s+2a)\big)}{f\big(s/(2s+2a)\big)}. \tag{2.28}
\]
In particular, if $\alpha = c_0\,\delta^{(s+a)/(t+a)}$ for some constant c₀ > 0, then
\[
\|\hat{x} - \tilde{x}_\alpha\| \le \eta(s,t)\,\delta^{t/(t+a)}, \tag{2.29}
\]
where
\[
\eta(s,t) = \max\Big\{\varphi(s,t)\|\hat{x}\|_t\, c_0^{t/(t+a)},\ \psi(s)\, c_0^{-a/(s+a)}\Big\}. \tag{2.30}
\]
For proposing a finite-dimensional realization, we consider a family {S_h : h > 0} of finite-dimensional subspaces of H_k for some k ≥ s, and consider the minimizer x̃_{α,h} of the map defined in (2.24) when x varies over S_h. Equivalently, x̃_{α,h} is the unique element in S_h satisfying the equation
\[
\big\langle (A + \alpha L^s)\tilde{x}_{\alpha,h}, \varphi\big\rangle = \langle \tilde{y}, \varphi\rangle \qquad \forall\varphi \in S_h. \tag{2.31}
\]
As in Engl and Neubauer [3], we assume the following approximation property for S_h: there exists a constant κ > 0 such that for every u ∈ H_r with r > k ≥ s,
\[
\inf\big\{\|u - \varphi\|_k : \varphi \in S_h\big\} \le \kappa\, h^{r-k}\|u\|_r, \qquad h > 0. \tag{2.32}
\]
As already exemplified in [3], the above assumption is general enough to include a
wide variety of approximation spaces, such as spline spaces and finite element spaces.
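In coordinates, (2.31) is just a linear system: with a basis $\varphi_1,\ldots,\varphi_m$ of S_h, the coefficient vector $\mathbf c$ of x̃_{α,h} solves $(\mathbf A + \alpha\mathbf L)\mathbf c = \mathbf b$ with $\mathbf A_{ij} = \langle A\varphi_j, \varphi_i\rangle$, $\mathbf L_{ij} = \langle L^s\varphi_j, \varphi_i\rangle$, and $\mathbf b_i = \langle\tilde y, \varphi_i\rangle$. A hedged sketch, in which the assembled matrices are placeholders to be supplied for a concrete choice of S_h:

```python
import numpy as np

def galerkin_solve(A_mat, Ls_mat, y_vec, alpha):
    """Solve the Galerkin system (2.31) for the coefficients of x~_{alpha,h}.

    A_mat[i, j]  = <A phi_j, phi_i>     (Gram matrix of A on the basis of S_h)
    Ls_mat[i, j] = <L^s phi_j, phi_i>   (Gram matrix of L^s)
    y_vec[i]     = <y~, phi_i>
    The inputs are assumed to be assembled elsewhere for a concrete basis;
    for symmetric positive definite systems a Cholesky solve would also do.
    """
    return np.linalg.solve(A_mat + alpha * Ls_mat, y_vec)
```

For instance, truncating the sine basis of the heat example to $u_1,\ldots,u_m$ makes both Gram matrices diagonal, and the call reproduces the coefficientwise formula sketched above.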
We will also make use of the following result from Engl and Neubauer [3, Lemma 2.2].
Lemma 2.3. Under assumption (2.32), there exists a constant c > 0 such that for every u ∈ H_s and h > 0,
\[
\inf_{\varphi\in S_h}\Big( h^{-a/2}\|u-\varphi\|_{-a/2} + h^{s/2}\|u-\varphi\|_{s/2}\Big) \le c\,h^s\|u\|_s. \tag{2.33}
\]
3. General error estimates. For a fixed s > 0, let x̃α and x̃_{α,h} be as in (2.23) and (2.31), respectively. We will obtain an estimate for ‖x̃α − x̃_{α,h}‖ so that we get an estimate for ‖x̂ − x̃_{α,h}‖ using Theorem 2.2 and the relation
\[
\|\hat{x} - \tilde{x}_{\alpha,h}\| \le \|\hat{x} - \tilde{x}_\alpha\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|. \tag{3.1}
\]
Taking r = −a/2, t = s/2, and λ = 0 in the interpolation inequality (2.15), we get
\[
\|x\| \le \|x\|_{-a/2}^{s/(s+a)}\,\|x\|_{s/2}^{a/(s+a)}, \qquad x \in H_{s/2}. \tag{3.2}
\]
Thus, we can deduce an estimate for ‖x̃α − x̃_{α,h}‖ once we have estimates for ‖x̃α − x̃_{α,h}‖₋ₐ/₂ and ‖x̃α − x̃_{α,h}‖ₛ/₂. For this purpose, we first prove the following.
Lemma 3.1. Let x̃α and x̃_{α,h} be as in (2.23) and (2.31), respectively. Then
\[
\big\|A^{1/2}(\tilde{x}_\alpha - \tilde{x}_{\alpha,h})\big\|^2 + \alpha\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2}^2
= \inf_{\varphi\in S_h}\Big( \big\|A^{1/2}(\tilde{x}_\alpha - \varphi)\big\|^2 + \alpha\|\tilde{x}_\alpha - \varphi\|_{s/2}^2 \Big). \tag{3.3}
\]
Proof. It can be seen (cf. [16]) that
\[
\langle u, v\rangle_* := \langle Au, v\rangle + \alpha\langle L^s u, v\rangle, \qquad u, v \in D(L), \tag{3.4}
\]
defines a complete inner product on D(L). Let ‖·‖_* be the norm induced by ⟨·,·⟩_*, that is,
\[
\|u\|_* = \big(\langle Au, u\rangle + \alpha\langle L^s u, u\rangle\big)^{1/2}
= \big(\|A^{1/2}u\|^2 + \alpha\|u\|_{s/2}^2\big)^{1/2}. \tag{3.5}
\]
Let X be the space D(L) with the inner product ⟨·,·⟩_* and let P_h be the orthogonal projection of X onto the space S_h. Then from (2.23) and (2.31) we have
\[
\big\langle (A + \alpha L^s)(\tilde{x}_\alpha - \tilde{x}_{\alpha,h}), \varphi\big\rangle = 0 \qquad \forall\varphi\in S_h, \tag{3.6}
\]
that is,
\[
\langle \tilde{x}_\alpha - \tilde{x}_{\alpha,h}, \varphi\rangle_* = 0 \qquad \forall\varphi\in S_h. \tag{3.7}
\]
Hence
\[
P_h(\tilde{x}_\alpha - \tilde{x}_{\alpha,h}) = 0, \tag{3.8}
\]
so that
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_* = \inf_{\varphi\in S_h}\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h} - \varphi\|_* = \inf_{\varphi\in S_h}\|\tilde{x}_\alpha - \varphi\|_*. \tag{3.9}
\]
Now the result follows using the definition of ‖·‖_*.
Next we obtain an estimate for ‖x̃α − x̃_{α,h}‖ using the estimates for ‖x̃α − x̃_{α,h}‖₋ₐ/₂ and ‖x̃α − x̃_{α,h}‖ₛ/₂. We will use the notation
\[
A_s := L^{-s/2} A L^{-s/2} \tag{3.10}
\]
and observe that for α > 0,
\[
(A + \alpha L^s)x = L^{s/2}(A_s + \alpha I)L^{s/2}x \qquad \forall x \in H_s. \tag{3.11}
\]
Theorem 3.2. Suppose x̂ ∈ H_t and assumption (2.32) holds, and let x̃α and x̃_{α,h} be as in (2.23) and (2.31), respectively. Then
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \le
f\Big(\frac12\Big)^{-s/(s+a)} \max\big\{\mathcal{F}(s,a),\, \mathcal{G}(s,t,a)\big\}\,
\Phi(s,h,\alpha)\,\alpha^{-a/(2s+2a)}
\Big(\frac{\delta}{\alpha} + \alpha^{(t-s)/(s+a)}\Big) h^s, \tag{3.12}
\]
where f and g are as in (2.25), and
\[
\mathcal{F}(s,a) = \frac{g\big(-s/(2s+2a)\big)}{f\big(-s/(2s+2a)\big)}, \qquad
\mathcal{G}(s,t,a) = \frac{g\big((s-2t)/(2s+2a)\big)}{f\big(-s/(2s+2a)\big)}\,\|\hat{x}\|_t, \qquad
\Phi(s,h,\alpha) = c\,\max\Big\{g\Big(\frac12\Big)h^{a/2},\ \alpha^{1/2}h^{-s/2}\Big\}. \tag{3.13}
\]
Proof. First we prove
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{-a/2} \le \frac{1}{f(1/2)}\,\Phi(s,h,\alpha)\,h^s\|\tilde{x}_\alpha\|_s, \tag{3.14}
\]
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2} \le \Phi(s,h,\alpha)\,\alpha^{-1/2}h^s\|\tilde{x}_\alpha\|_s, \tag{3.15}
\]
\[
\|\tilde{x}_\alpha\|_s \le \mathcal{F}(s,a)\,\alpha^{-1}\delta + \mathcal{G}(s,t,a)\,\alpha^{(t-s)/(s+a)} \tag{3.16}
\]
with $\mathcal{F}(s,a)$, $\mathcal{G}(s,t,a)$, and $\Phi(s,h,\alpha)$ as in the statement of the theorem.
By Lemma 3.1 and Proposition 2.1, it follows that
\[
f\Big(\frac12\Big)^2\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{-a/2}^2 + \alpha\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2}^2
\le \inf_{\varphi\in S_h}\Big( g\Big(\frac12\Big)^2\|\tilde{x}_\alpha - \varphi\|_{-a/2}^2 + \alpha\|\tilde{x}_\alpha - \varphi\|_{s/2}^2 \Big). \tag{3.17}
\]
Note that
\[
g\Big(\frac12\Big)^2\|\tilde{x}_\alpha - \varphi\|_{-a/2}^2 + \alpha\|\tilde{x}_\alpha - \varphi\|_{s/2}^2
\le \Big( g\Big(\frac12\Big)\|\tilde{x}_\alpha - \varphi\|_{-a/2} + \alpha^{1/2}\|\tilde{x}_\alpha - \varphi\|_{s/2} \Big)^2. \tag{3.18}
\]
But
\[
g\Big(\frac12\Big)\|\tilde{x}_\alpha - \varphi\|_{-a/2} + \alpha^{1/2}\|\tilde{x}_\alpha - \varphi\|_{s/2}
\le \omega(h,\alpha,s)\Big( h^{-a/2}\|\tilde{x}_\alpha - \varphi\|_{-a/2} + h^{s/2}\|\tilde{x}_\alpha - \varphi\|_{s/2}\Big), \tag{3.19}
\]
where
\[
\omega(h,\alpha,s) := \max\Big\{ g\Big(\frac12\Big)h^{a/2},\ \alpha^{1/2}h^{-s/2} \Big\}. \tag{3.20}
\]
Hence, by Lemma 2.3, we have
\[
f\Big(\frac12\Big)^2\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{-a/2}^2 + \alpha\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2}^2
\le \big(\omega(h,\alpha,s)\,c\,h^s\|\tilde{x}_\alpha\|_s\big)^2. \tag{3.21}
\]
In particular,
\[
f\Big(\frac12\Big)\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{-a/2} \le \omega(h,\alpha,s)\,c\,h^s\|\tilde{x}_\alpha\|_s, \qquad
\alpha^{1/2}\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2} \le \omega(h,\alpha,s)\,c\,h^s\|\tilde{x}_\alpha\|_s. \tag{3.22}
\]
From these, we obtain (3.14) and (3.15).
Now, to prove (3.16), observe from (2.23) and (3.11) that
\[
\tilde{x}_\alpha = L^{-s/2}(A_s + \alpha I)^{-1}L^{-s/2}\tilde{y}. \tag{3.23}
\]
By Proposition 2.1, taking ν = −s/(s+a), we have
\[
\begin{aligned}
\big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}(\tilde{y}-y) \big\|
&\le \frac{1}{f\big(-s/(2s+2a)\big)}\,\big\| A_s^{-s/(2s+2a)}(A_s+\alpha I)^{-1}L^{-s/2}(\tilde{y}-y) \big\| \\
&\le \frac{1}{f\big(-s/(2s+2a)\big)}\,\big\|(A_s+\alpha I)^{-1}\big\|\,\big\| A_s^{-s/(2s+2a)}L^{-s/2}(\tilde{y}-y) \big\| \\
&\le \frac{\alpha^{-1}\, g\big(-s/(2s+2a)\big)\,\big\| L^{-s/2}(\tilde{y}-y)\big\|_{s/2}}{f\big(-s/(2s+2a)\big)}
\end{aligned} \tag{3.24}
\]
so that, since $\|L^{-s/2}(\tilde{y}-y)\|_{s/2} = \|\tilde{y}-y\| \le \delta$,
\[
\big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}(\tilde{y}-y) \big\| \le \mathcal{F}(s,a)\,\alpha^{-1}\delta. \tag{3.25}
\]
Since $L^{-s/2}y = A_s L^{s/2}\hat{x}$, we have
\[
\begin{aligned}
\big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}y \big\|
&\le \frac{1}{f\big(-s/(2s+2a)\big)}\,\big\| (A_s+\alpha I)^{-1}A_s^{-s/(2s+2a)}A_s L^{s/2}\hat{x} \big\| \\
&\le \frac{1}{f\big(-s/(2s+2a)\big)}\,\big\|(A_s+\alpha I)^{-1}A_s^{(a+t)/(a+s)}\big\|\,\big\| A_s^{(s-2t)/(2a+2s)}L^{s/2}\hat{x} \big\|,
\end{aligned} \tag{3.26}
\]
where
\[
\big\| A_s^{(s-2t)/(2a+2s)} L^{s/2}\hat{x} \big\| \le g\Big(\frac{s-2t}{2s+2a}\Big)\|\hat{x}\|_t. \tag{3.27}
\]
Since
\[
\big\| (A_s+\alpha I)^{-1} A_s^{\tau} \big\| \le \alpha^{\tau-1}, \qquad 0 < \tau \le 1, \tag{3.28}
\]
it follows from the above relations that
\[
\big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}y \big\|
\le \frac{g\big((s-2t)/(2s+2a)\big)\|\hat{x}\|_t}{f\big(-s/(2s+2a)\big)}\,\alpha^{(t-s)/(s+a)}
= \mathcal{G}(s,t,a)\,\alpha^{(t-s)/(s+a)}. \tag{3.29}
\]
Thus, (3.25) and (3.29) give
\[
\begin{aligned}
\|\tilde{x}_\alpha\|_s &\le \big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}\tilde{y} \big\| \\
&\le \big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}(\tilde{y}-y) \big\| + \big\| L^{s/2}(A_s+\alpha I)^{-1}L^{-s/2}y \big\| \\
&\le \mathcal{F}(s,a)\,\alpha^{-1}\delta + \mathcal{G}(s,t,a)\,\alpha^{(t-s)/(s+a)}.
\end{aligned} \tag{3.30}
\]
Now, the estimates (3.14) and (3.15) together with the interpolation inequality (3.2) give
\[
\begin{aligned}
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|
&\le \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{-a/2}^{s/(s+a)}\,\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|_{s/2}^{a/(s+a)} \\
&\le f\Big(\frac12\Big)^{-s/(s+a)}\,\alpha^{-a/(2(s+a))}\,\Phi(s,h,\alpha)\,h^s\|\tilde{x}_\alpha\|_s.
\end{aligned} \tag{3.31}
\]
From this, the result follows by making use of the estimate (3.16) for ‖x̃α‖ₛ.
4. A priori error estimates. We now choose the regularization parameter α and the discretization parameter h a priori, depending on the noise level δ, such that the optimal order O(δ^{t/(t+a)}) is obtained whenever x̂ ∈ H_t.
Theorem 4.1. Suppose x̂ ∈ H_t with 0 < t ≤ s + a and assumption (2.32) holds. Suppose, in addition, that
\[
\alpha = c_0\,\delta^{(s+a)/(t+a)}, \qquad h = d_0\,\delta^{1/(t+a)} \tag{4.1}
\]
for some constants c₀, d₀ > 0. Then, using the notations in Theorems 2.2 and 3.2,
\[
\|\hat{x} - \tilde{x}_{\alpha,h}\| \le \big(\eta(s,t) + \xi(s,t)\big)\,\delta^{t/(t+a)}, \tag{4.2}
\]
where
\[
\begin{aligned}
\eta(s,t) &= \max\Big\{\varphi(s,t)\|\hat{x}\|_t\,c_0^{t/(t+a)},\ \psi(s)\,c_0^{-a/(s+a)}\Big\}, \\
\xi(s,t) &= c\,f\Big(\frac12\Big)^{-s/(s+a)}\big( c_0^{-1} + c_0^{(t-s)/(t+a)} \big) d_0^s\,
\max\big\{\mathcal{F}(s,a),\, \mathcal{G}(s,t,a)\big\}\,
\max\Big\{ g\Big(\frac12\Big)d_0^{a/2},\ c_0^{1/2}d_0^{-s/2} \Big\}.
\end{aligned} \tag{4.3}
\]
Proof. Using the choice (4.1), it is seen that
\[
\begin{aligned}
\Phi(s,h,\alpha)\,\alpha^{-a/(2(s+a))} &= c\,c_0^{-a/(2(s+a))}\max\Big\{ g\Big(\frac12\Big)d_0^{a/2},\ c_0^{1/2}d_0^{-s/2} \Big\}, \\
\delta\,\alpha^{-1}h^s &= c_0^{-1}d_0^s\,\delta^{t/(t+a)}, \\
\alpha^{(t-s)/(s+a)}h^s &= c_0^{(t-s)/(t+a)}d_0^s\,\delta^{t/(t+a)}.
\end{aligned} \tag{4.4}
\]
Therefore, by Theorem 3.2, we have
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \le \xi(s,t)\,\delta^{t/(t+a)}. \tag{4.5}
\]
Also, from Theorem 2.2, we have
\[
\|\hat{x} - \tilde{x}_\alpha\| \le \eta(s,t)\,\delta^{t/(t+a)}. \tag{4.6}
\]
Thus the result follows from the inequality
\[
\|\hat{x} - \tilde{x}_{\alpha,h}\| \le \|\hat{x} - \tilde{x}_\alpha\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|. \tag{4.7}
\]
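Applying the a priori rule (4.1) is mechanical once δ, the indices s, t, a, and the constants c₀, d₀ are fixed; a small helper, where the defaults c₀ = d₀ = 1 are placeholders rather than values suggested by the paper:

```python
def a_priori_parameters(delta, s, t, a, c0=1.0, d0=1.0):
    """The choice (4.1): alpha = c0 * delta^((s+a)/(t+a)),  h = d0 * delta^(1/(t+a)).

    Under the assumptions of Theorem 4.1 this yields the optimal-order bound
    ||x^ - x~_{alpha,h}|| = O(delta^(t/(t+a))).  c0, d0 are placeholders.
    """
    alpha = c0 * delta ** ((s + a) / (t + a))
    h = d0 * delta ** (1.0 / (t + a))
    return alpha, h
```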
Remark 4.2. We observe that the error bound obtained is of the same order as that of Theorem 2.2, and this order is optimal with respect to the source set
\[
M_{\rho,t} = \big\{ x \in H_t : \|x\|_t \le \rho \big\} \tag{4.8}
\]
in the sense of the best possible worst error (cf. [4]).
5. Discrepancy principle. In this section, we consider a discrepancy principle to
choose the regularization parameter α depending on the noise level δ and the discretization parameter h. This is a finite-dimensional variant of the discrepancy principle considered in [5].
We assume throughout that y ≠ 0. Suppose that ỹ ∈ H is such that
\[
\|y - \tilde{y}\| \le \delta \tag{5.1}
\]
for a known error level δ > 0 and P_h ỹ ≠ 0, where P_h is the orthogonal projection of H onto S_h. We assume, throughout this section, that
\[
\big\|A(P_h - I)\big\| \le c_3 h, \qquad h > 0, \tag{5.2}
\]
for some c₃ > 0 independent of h. Let
\[
R_\alpha := (A_s + \alpha I)^{-1}. \tag{5.3}
\]
We will make use of the relation
\[
\big\|R_\alpha A_s^{\tau}\big\| \le \alpha^{\tau-1}, \qquad \alpha > 0,\ 0 < \tau \le 1, \tag{5.4}
\]
which follows from the spectral properties of the selfadjoint operator A_s, s > 0.
Let s, a be fixed positive real numbers. For α > 0 and x ∈ H, consider the function
\[
F(\alpha, x) = \frac{\alpha\,\big\| R_\alpha^{3/2} A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|^2}{\big\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|}. \tag{5.5}
\]
Note that, by assumption (2.18), $\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2} P_h x \|$ is nonzero for every x ∈ H with P_h x ≠ 0, so that the function F(α, x) is well defined for all such x. We observe that the assumption P_h x ≠ 0 is satisfied for x ≠ 0 and h small enough, if P_h x → x as h → 0 for every x ∈ H.
In the following, we assume that h is such that P_h ỹ ≠ 0. In order to choose the regularization parameter α, we consider the discrepancy principle
\[
F(\alpha, \tilde{y}) = b\delta + dh \tag{5.6}
\]
for some b, d > 0. In due course, we will make use of the relation
\[
f\Big(\frac{-s}{2s+2a}\Big)\|x\| \le \big\| A_s^{-s/(2s+2a)} L^{-s/2} x \big\| \le g\Big(\frac{-s}{2s+2a}\Big)\|x\|, \tag{5.7}
\]
which can easily be derived from Proposition 2.1.
First we prove the monotonicity of the function F(α, x) defined in (5.5).
Theorem 5.1. Let x ∈ H be such that the function α ↦ F(α, x), α > 0, in (5.5) is well defined. Then F(·, x) is increasing and continuously differentiable with $\frac{\partial}{\partial\alpha}F(\alpha, x) \ge 0$ for all α > 0. In addition,
\[
\lim_{\alpha\to 0} F(\alpha, x) = 0, \qquad
\lim_{\alpha\to\infty} F(\alpha, x) = \big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|. \tag{5.8}
\]
Proof. For brevity, write $z := A_s^{-s/(2s+2a)} L^{-s/2} P_h x$, so that $F(\alpha,x) = \alpha\|R_\alpha^{3/2}z\|^2 / \|R_\alpha^2 z\|$. Using the definition (5.5) of F(α, ·), we have
\[
\begin{aligned}
\frac{\partial}{\partial\alpha}F(\alpha, x)
&= \frac{(\partial/\partial\alpha)\big(F^2(\alpha,x)\big)}{2F(\alpha,x)} \\
&= \frac{\|R_\alpha^2 z\|^2\, (\partial/\partial\alpha)\big(\alpha\|R_\alpha^{3/2}z\|^2\big)}{\|R_\alpha^2 z\|^3}
 - \frac{\alpha\|R_\alpha^{3/2}z\|^2\, (\partial/\partial\alpha)\big(\|R_\alpha^2 z\|^2\big)}{2\|R_\alpha^2 z\|^3}.
\end{aligned} \tag{5.9}
\]
Let $\{E_\lambda : 0 \le \lambda \le \bar{a}\}$ be the spectral family of $A_s$, where $\bar{a} \ge \|A_s\|$. Then
\[
\begin{aligned}
\frac{\partial}{\partial\alpha}\big(\alpha\|R_\alpha^{3/2}z\|^2\big)
&= \frac{\partial}{\partial\alpha}\int_0^{\bar{a}} \frac{\alpha}{\lambda^{s/(s+a)}(\lambda+\alpha)^3}\, d\big\langle E_\lambda L^{-s/2}P_h x,\, L^{-s/2}P_h x\big\rangle \\
&= \int_0^{\bar{a}} \Big( \frac{1}{\lambda^{s/(s+a)}(\lambda+\alpha)^3} - \frac{3\alpha}{\lambda^{s/(s+a)}(\lambda+\alpha)^4} \Big)\, d\big\langle E_\lambda L^{-s/2}P_h x,\, L^{-s/2}P_h x\big\rangle \\
&= \|R_\alpha^{3/2}z\|^2 - 3\alpha\|R_\alpha^{2}z\|^2.
\end{aligned} \tag{5.10}
\]
Similarly, we obtain
\[
\frac{\partial}{\partial\alpha}\big(\|R_\alpha^{2}z\|^2\big) = -4\|R_\alpha^{5/2}z\|^2. \tag{5.11}
\]
Therefore, from (5.9), by using (5.10) and (5.11), we get
\[
\frac{\partial}{\partial\alpha}F(\alpha, x)
= \frac{\|R_\alpha^2 z\|^2\big(\|R_\alpha^{3/2}z\|^2 - 3\alpha\|R_\alpha^{2}z\|^2\big)}{\|R_\alpha^2 z\|^3}
+ \frac{2\alpha\,\|R_\alpha^{3/2}z\|^2\,\|R_\alpha^{5/2}z\|^2}{\|R_\alpha^2 z\|^3}. \tag{5.12}
\]
The above equation can be rewritten as
\[
\frac{\partial}{\partial\alpha}F(\alpha, x)
= \frac{\|R_\alpha^2 z\|^2\big(\|R_\alpha^{3/2}z\|^2 - \alpha\|R_\alpha^{2}z\|^2\big)}{\|R_\alpha^2 z\|^3}
+ \frac{2\alpha}{\|R_\alpha^2 z\|^3}\Big( \|R_\alpha^{5/2}z\|^2\,\|R_\alpha^{3/2}z\|^2 - \|R_\alpha^{2}z\|^4 \Big). \tag{5.13}
\]
Since
\[
\|R_\alpha^{3/2}z\|^2 = \big\langle (A_s+\alpha I)^{-3}z,\, z\big\rangle, \qquad
\|R_\alpha^{2}z\|^2 = \big\langle (A_s+\alpha I)^{-3}z,\, (A_s+\alpha I)^{-1}z\big\rangle, \tag{5.14}
\]
we see that
\[
\begin{aligned}
\|R_\alpha^{3/2}z\|^2
&= \alpha\|R_\alpha^{2}z\|^2 + \big\langle (A_s+\alpha I)^{-3}z,\, A_s(A_s+\alpha I)^{-1}z\big\rangle \\
&= \alpha\|R_\alpha^{2}z\|^2 + \big\| A_s^{a/(2s+2a)} R_\alpha^2 L^{-s/2} P_h x \big\|^2.
\end{aligned} \tag{5.15}
\]
Also, we have
\[
\|R_\alpha^{2}z\|^4
= \big\langle R_\alpha^{2}z,\, R_\alpha^{2}z\big\rangle^2
= \big\langle R_\alpha^{3/2}z,\, R_\alpha^{5/2}z\big\rangle^2
\le \|R_\alpha^{3/2}z\|^2\, \|R_\alpha^{5/2}z\|^2. \tag{5.16}
\]
Hence, by (5.15) and (5.16), both terms on the right-hand side of (5.13) are nonnegative, so that
\[
\frac{\partial}{\partial\alpha}F(\alpha, x) \ge 0. \tag{5.17}
\]
To prove the last part of the theorem, we observe that
\[
\alpha^2\|R_\alpha^{2}z\| - F(\alpha, x)
= \frac{\alpha^2\|R_\alpha^{2}z\|^2 - \alpha\|R_\alpha^{3/2}z\|^2}{\|R_\alpha^{2}z\|}. \tag{5.18}
\]
We note that
\[
\alpha^2\|R_\alpha^{2}z\|^2 = \alpha\big\langle R_\alpha^{3}z,\, \alpha R_\alpha z\big\rangle, \qquad
\alpha\|R_\alpha^{3/2}z\|^2 = \alpha\big\langle R_\alpha^{3}z,\, z\big\rangle. \tag{5.19}
\]
Since
\[
\alpha R_\alpha - I = -A_s R_\alpha = -R_\alpha A_s, \tag{5.20}
\]
it follows that
\[
\alpha^2\|R_\alpha^{2}z\| - F(\alpha, x)
= \frac{-\alpha\big\langle R_\alpha^{3}z,\, A_s R_\alpha z\big\rangle}{\|R_\alpha^{2}z\|}
= \frac{-\alpha\big\| A_s^{a/(2s+2a)} R_\alpha^2 L^{-s/2} P_h x \big\|^2}{\|R_\alpha^{2}z\|} \le 0, \tag{5.21}
\]
so that
\[
F(\alpha, x) \ge \alpha^2\|R_\alpha^{2}z\|
\ge \frac{\alpha^2}{\big(\|A_s\| + \alpha\big)^2}\,\big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|. \tag{5.22}
\]
Also, we have
\[
F(\alpha, x) = \frac{\alpha\big\langle R_\alpha z,\, R_\alpha^2 z\big\rangle}{\|R_\alpha^{2}z\|}
\le \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|. \tag{5.23}
\]
Hence
\[
\frac{\alpha^2}{\big(\|A_s\| + \alpha\big)^2}\,\big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|
\le F(\alpha, x) \le \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|. \tag{5.24}
\]
From this we can conclude that
\[
\lim_{\alpha\to 0} F(\alpha, x) = 0, \qquad
\lim_{\alpha\to\infty} F(\alpha, x) = \big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h x \big\|. \tag{5.25}
\]
This completes the proof.
For the next theorem, in addition to (5.1), we assume that the inexact data ỹ satisfies the relation
\[
\big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h \tilde{y} \big\| \ge b\delta + dh. \tag{5.26}
\]
This assumption is satisfied for small enough h and δ if, for example,
\[
\big(b + \tilde{f}(s)\big)\delta + \big(d + c_3\tilde{f}(s)\|\hat{x}\|\big)h \le \tilde{f}(s)\|y\|, \tag{5.27}
\]
where $\tilde{f}(s) = f(-s/(2s+2a))$, because
\[
\|P_h\tilde{y}\| \ge \|y\| - \big\|(I-P_h)A\hat{x}\big\| - \delta, \tag{5.28}
\]
and by (5.7),
\[
\big\| A_s^{-s/(2s+2a)} L^{-s/2} P_h \tilde{y} \big\| \ge \tilde{f}(s)\,\|P_h\tilde{y}\|. \tag{5.29}
\]
Now the following theorem is a consequence of Theorem 5.1.

Theorem 5.2. Assume that (5.1) and (5.26) are satisfied. Then there exists a unique α := α(δ, h) satisfying
\[
F(\alpha, \tilde{y}) = b\delta + dh. \tag{5.30}
\]
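Since F(·, ỹ) is continuous and increasing (Theorem 5.1), the parameter α(δ, h) of Theorem 5.2 can be located by a simple bracketing search. The sketch below evaluates F of (5.5) spectrally, assuming the eigenvalues of $A_s$ and the coefficients of $A_s^{-s/(2s+2a)}L^{-s/2}P_h\tilde y$ in the corresponding eigenbasis have been precomputed; the bracket endpoints and iteration count are ad hoc assumptions.

```python
import numpy as np

def F(alpha, lam, z):
    """Evaluate (5.5) spectrally:  F = alpha * ||R^(3/2) z||^2 / ||R^2 z||,
    where R = (A_s + alpha I)^(-1) acts as 1/(lam + alpha) on each coefficient
    of z = A_s^(-s/(2s+2a)) L^(-s/2) P_h y~  (lam, z assumed precomputed)."""
    r32 = np.linalg.norm(z / (lam + alpha) ** 1.5) ** 2
    r2 = np.linalg.norm(z / (lam + alpha) ** 2)
    return alpha * r32 / r2

def discrepancy_alpha(lam, z, b, d, delta, h, lo=1e-14, hi=1e8, iters=200):
    """Bisection (on a log scale) for F(alpha, y~) = b*delta + d*h, justified
    by the monotonicity of F in alpha (Theorem 5.1).  lo/hi must bracket the
    root; they and iters are our ad hoc choices."""
    target = b * delta + d * h
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if F(mid, lam, z) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```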
In order to obtain an estimate for the error ‖x̂ − x̃_{α,h}‖ with the parameter choice strategy (5.30), we will make use of (3.31). The next lemma gives an estimate for ‖x̃α‖ₛ in terms of α = α(δ, h), δ, and h.

Lemma 5.3. Let α := α(δ, h) be the unique solution of (5.30). Then for any fixed τ > 0,
\[
\|\tilde{x}_\alpha\|_s \le c_4\,(\delta + h)^{\tau/(\tau+1)}\,\alpha^{-1}, \tag{5.31}
\]
where
\[
c_4 \ge \tilde{f}(s)^{-1}\Big( \max\big\{ b + \tilde{g}(s),\ d + c_3\|\hat{x}\|\tilde{g}(s) \big\} \Big)^{\tau/(\tau+1)}\big( \tilde{g}(s)\|\tilde{y}\| \big)^{1/(\tau+1)} \tag{5.32}
\]
with $\tilde{g}(s) := g(-s/(2s+2a))$.
Proof. By (3.30), we have
\[
\|\tilde{x}_\alpha\|_s \le \big\| L^{s/2} R_\alpha L^{-s/2}\tilde{y} \big\|
\le \tilde{f}(s)^{-1}\alpha^{-1}\,\big\| \alpha R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\|. \tag{5.33}
\]
To obtain an estimate for $\|\alpha R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y}\|$, we will make use of the moment inequality (2.17). Precisely, we use (2.17) with
\[
u = \tau, \qquad v = 1 + \tau, \qquad B = \alpha R_\alpha, \qquad
x = \alpha^{1-\tau} R_\alpha^{1-\tau} A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y}. \tag{5.34}
\]
Then, since
\[
\|x\| \le \big\| A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\| \le g\Big(\frac{-s}{2s+2a}\Big)\|\tilde{y}\|, \tag{5.35}
\]
we have
\[
\begin{aligned}
\big\| \alpha R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\|
&= \|B^{\tau}x\| \le \|B^{\tau+1}x\|^{\tau/(\tau+1)}\,\|x\|^{1/(\tau+1)} \\
&= \big\| \alpha^2 R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\|^{\tau/(\tau+1)}
\Big( g\Big(\frac{-s}{2s+2a}\Big)\|\tilde{y}\| \Big)^{1/(\tau+1)}.
\end{aligned} \tag{5.36}
\]
Further, by (5.21),
\[
\begin{aligned}
\alpha^2\big\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\|
&\le \alpha^2\big\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2}(I-P_h)\tilde{y} \big\|
+ \alpha^2\big\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2}P_h\tilde{y} \big\| \\
&\le \tilde{g}(s)\big\|(I-P_h)\tilde{y}\big\| + F(\alpha,\tilde{y}) \\
&\le \tilde{g}(s)\Big( \big\|(I-P_h)(\tilde{y}-y)\big\| + \big\|(I-P_h)A\hat{x}\big\| \Big) + F(\alpha,\tilde{y}) \\
&\le \tilde{g}(s)\big( \delta + c_3\|\hat{x}\|h \big) + F(\alpha,\tilde{y}).
\end{aligned} \tag{5.37}
\]
Therefore, if α := α(δ, h) is the unique solution of (5.30), then we have
\[
\alpha^2\big\| R_\alpha^{2} A_s^{-s/(2s+2a)} L^{-s/2}\tilde{y} \big\|
\le \big( b + \tilde{g}(s) \big)\delta + \big( d + \tilde{g}(s)c_3\|\hat{x}\| \big)h. \tag{5.38}
\]
Now the result follows from (5.33), (5.36), (5.37), and (5.38).
Lemma 5.4. Suppose that x̂ belongs to H_t with ‖x̂‖_t ≤ ρ for some t ≤ s, and α := α(δ, h) > 0 is the unique solution of (5.30), where b > g̃(s) and d > c₃‖x̂‖g̃(s) with $\tilde{g}(s) := g(-s/(2s+2a))$. Then
\[
\alpha \ge c_0\,(\delta + h)^{(s+a)/(t+a)}, \qquad
c_0 = \Bigg( \frac{\min\big\{ b - \tilde{g}(s),\ d - c_3\|\hat{x}\|\tilde{g}(s) \big\}}{g\big((s-2t)/(2s+2a)\big)\,\rho} \Bigg)^{(s+a)/(t+a)}. \tag{5.39}
\]
Proof. Note that by (5.23), Proposition 2.1, and (2.18), we have
\[
\begin{aligned}
F(\alpha,\tilde{y})
&\le \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}P_h\tilde{y} \big\| \\
&\le \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}P_h(\tilde{y}-y) \big\|
+ \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}(P_h - I)y \big\|
+ \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}y \big\| \\
&\le \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}P_h(\tilde{y}-y) \big\|
+ \alpha\big\| R_\alpha A_s^{-s/(2s+2a)} L^{-s/2}(P_h - I)y \big\|
+ \alpha\big\| R_\alpha A_s^{(s+2a)/(2s+2a)} L^{s/2}\hat{x} \big\| \\
&\le \tilde{g}(s)\big( \delta + c_3\|\hat{x}\|h \big)
+ \alpha\big\| R_\alpha A_s^{(t+a)/(s+a)} \big\|\,\big\| A_s^{(s-2t)/(2s+2a)} L^{s/2}\hat{x} \big\| \\
&\le \tilde{g}(s)\big( \delta + c_3\|\hat{x}\|h \big)
+ g\Big(\frac{s-2t}{2s+2a}\Big)\rho\,\alpha^{(t+a)/(s+a)}.
\end{aligned} \tag{5.40}
\]
Thus
\[
\min\big\{ b - \tilde{g}(s),\ d - c_3\|\hat{x}\|\tilde{g}(s) \big\}\,(\delta + h)
\le g\Big(\frac{s-2t}{2s+2a}\Big)\rho\,\alpha^{(t+a)/(s+a)}, \tag{5.41}
\]
which implies
\[
\alpha \ge c_0\,(\delta + h)^{(s+a)/(t+a)}, \qquad
c_0 = \Bigg( \frac{\min\big\{ b - \tilde{g}(s),\ d - c_3\|\hat{x}\|\tilde{g}(s) \big\}}{g\big((s-2t)/(2s+2a)\big)\,\rho} \Bigg)^{(s+a)/(t+a)}. \tag{5.42}
\]
This completes the proof.
Theorem 5.5. Under the assumptions in Lemma 5.4, for any fixed τ > 0,
\[
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \le c_5\,(\delta + h)^{\zeta}, \tag{5.43}
\]
where
\[
\zeta := \frac{\tau}{\tau+1} + \frac{s}{2} - \frac{s+2a}{2t+2a} + \gamma, \qquad
c_5 \ge c\,c_4\,f\Big(\frac12\Big)^{-s/(s+a)} \max\Big\{ g\Big(\frac12\Big),\ 1 \Big\} \tag{5.44}
\]
with
\[
\gamma := \begin{cases}
0 & \text{if } t \ge 1 - a, \\[4pt]
\dfrac{s+a}{2}\Big(1 - \dfrac{1}{t+a}\Big) & \text{if } t < 1 - a.
\end{cases} \tag{5.45}
\]
Proof. Note that by (3.31) and Lemmas 5.3 and 5.4,
\[
\begin{aligned}
\|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|
&\le f\Big(\frac12\Big)^{-s/(s+a)} \Phi(s,h,\alpha)\,\alpha^{-a/(2s+2a)}\,h^s\|\tilde{x}_\alpha\|_s \\
&\le c\,f\Big(\frac12\Big)^{-s/(s+a)} \max\Big\{ g\Big(\frac12\Big)h^{a/2},\ \alpha^{1/2}h^{-s/2} \Big\}\,
c_4\,\alpha^{-a/(2s+2a)}\,h^s\,\alpha^{-1}(\delta+h)^{\tau/(\tau+1)} \\
&\le c\,f\Big(\frac12\Big)^{-s/(s+a)} \max\Big\{ g\Big(\frac12\Big)h^{(s+a)/2}\alpha^{-1/2},\ 1 \Big\}\,
c_4\,\alpha^{-a/(2s+2a)-1/2}\,h^{s/2}(\delta+h)^{\tau/(\tau+1)} \\
&\le c\,f\Big(\frac12\Big)^{-s/(s+a)} \max\Big\{ g\Big(\frac12\Big)(\delta+h)^{(s+a)/2-(s+a)/(2t+2a)},\ 1 \Big\}\,
c_4\,(\delta+h)^{\tau/(\tau+1)+s/2-a/(2t+2a)-(s+a)/(2t+2a)} \\
&\le c\,f\Big(\frac12\Big)^{-s/(s+a)} \max\Big\{ g\Big(\frac12\Big),\ 1 \Big\}\,
c_4\,(\delta+h)^{\tau/(\tau+1)+s/2-a/(2t+2a)-(s+a)/(2t+2a)+\gamma}.
\end{aligned} \tag{5.46}
\]
This completes the proof.
Theorem 5.6. Under the assumptions in Lemma 5.4,
\[
\|\hat{x} - x_\alpha\| = O\big( (\delta + h)^{t/(t+a)} \big). \tag{5.47}
\]
Proof. Since x_α is the solution of (2.22), we have
\[
\hat{x} - x_\alpha = \hat{x} - (A + \alpha L^s)^{-1}y
= \alpha L^{-s/2}(A_s + \alpha I)^{-1}L^{s/2}\hat{x}
= \alpha L^{-s/2} R_\alpha L^{s/2}\hat{x}. \tag{5.48}
\]
Therefore, by (5.7), we have
\[
f\Big(\frac{s}{2s+2a}\Big)\|\hat{x} - x_\alpha\| \le \alpha\big\| A_s^{s/(2s+2a)} R_\alpha L^{s/2}\hat{x} \big\|. \tag{5.49}
\]
To obtain an estimate for $\alpha\| A_s^{s/(2s+2a)} R_\alpha L^{s/2}\hat{x}\|$, we first make use of the moment inequality (2.17) with
\[
u = \frac{t}{a}, \qquad v = 1 + \frac{t}{a}, \qquad B = \alpha R_\alpha A_s^{a/(s+a)}, \qquad
x = \alpha^{1-t/a} R_\alpha^{1-t/a} A_s^{(s-2t)/(2s+2a)} L^{s/2}\hat{x}. \tag{5.50}
\]
Then, since
\[
\|x\| \le \big\| A_s^{(s-2t)/(2s+2a)} L^{s/2}\hat{x} \big\|
\le g\Big(\frac{s-2t}{2s+2a}\Big)\big\| L^{s/2}\hat{x} \big\|_{t-s/2}
\le g\Big(\frac{s-2t}{2s+2a}\Big)\rho, \tag{5.51}
\]
we have
\[
\begin{aligned}
\alpha\big\| A_s^{s/(2s+2a)} R_\alpha L^{s/2}\hat{x} \big\|
&= \|B^{t/a}x\| \le \|B^{1+t/a}x\|^{t/(t+a)}\,\|x\|^{a/(t+a)} \\
&\le \big\| \alpha^2 R_\alpha^2 A_s^{(2a+s)/(2s+2a)} L^{s/2}\hat{x} \big\|^{t/(t+a)}\,\|x\|^{a/(t+a)} \\
&\le \big\| \alpha^2 R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}y \big\|^{t/(t+a)}\,\|x\|^{a/(t+a)} \\
&\le g\Big(\frac{s-2t}{2s+2a}\Big)^{a/(t+a)}\rho^{a/(t+a)}\,
\big\| \alpha^2 R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}y \big\|^{t/(t+a)}.
\end{aligned} \tag{5.52}
\]
Further, by (5.2), (5.7), and (5.21),
\[
\begin{aligned}
\alpha^2\big\| R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}y \big\|
&\le \alpha^2\big\| R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}(y - \tilde{y}) \big\|
+ \alpha^2\big\| R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}(I - P_h)\tilde{y} \big\| \\
&\quad + \alpha^2\big\| R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}P_h\tilde{y} \big\| \\
&\le g\Big(\frac{-s}{2s+2a}\Big)\big( \delta + c_3\|\hat{x}\|h \big) + F(\alpha, \tilde{y}).
\end{aligned} \tag{5.53}
\]
Therefore, if α := α(δ, h) is the unique solution of (5.30), then we have
\[
\alpha^2\big\| R_\alpha^2 A_s^{-s/(2s+2a)} L^{-s/2}y \big\|
\le \big( \tilde{g}(s) + b \big)\delta + \big( \tilde{g}(s)c_3\|\hat{x}\| + d \big)h. \tag{5.54}
\]
Now the result follows from (5.49), (5.52), (5.53), and (5.54).
Theorem 5.7. Under the assumptions in Lemma 5.4, for any fixed τ > 0,
\[
\|\hat{x} - \tilde{x}_{\alpha,h}\| \le c_6\,(\delta + h)^{\mu}, \qquad
\mu := \min\Big\{ \frac{t}{t+a},\ \zeta \Big\}, \tag{5.55}
\]
for some c₆ > 0, with ζ as in Theorem 5.5.
Proof. Let x_α and x̃_α be the solutions of (2.22) and (2.23), respectively. Then by the triangle inequality, (5.4), and Proposition 2.1,
\[
\begin{aligned}
\|\hat{x} - \tilde{x}_{\alpha,h}\|
&\le \|\hat{x} - x_\alpha\| + \|x_\alpha - \tilde{x}_\alpha\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \\
&= \|\hat{x} - x_\alpha\| + \big\| L^{-s/2} R_\alpha L^{-s/2}(y - \tilde{y}) \big\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \\
&\le \|\hat{x} - x_\alpha\| + \frac{1}{f\big(s/(2s+2a)\big)}\big\| A_s^{s/(2s+2a)} R_\alpha L^{-s/2}(y - \tilde{y}) \big\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \\
&\le \|\hat{x} - x_\alpha\| + \frac{1}{f\big(s/(2s+2a)\big)}\big\| A_s^{s/(s+a)} R_\alpha \big\|\,\big\| A_s^{-s/(2s+2a)} L^{-s/2}(y - \tilde{y}) \big\| + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \\
&\le \|\hat{x} - x_\alpha\| + \frac{g\big(-s/(2s+2a)\big)}{f\big(s/(2s+2a)\big)}\,\delta\,\alpha^{-a/(s+a)} + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\| \\
&\le \|\hat{x} - x_\alpha\| + \frac{g\big(-s/(2s+2a)\big)}{f\big(s/(2s+2a)\big)}\,(\delta + h)\,\alpha^{-a/(s+a)} + \|\tilde{x}_\alpha - \tilde{x}_{\alpha,h}\|.
\end{aligned} \tag{5.56}
\]
The proof now follows from Lemma 5.4 and Theorems 5.5 and 5.6.
Corollary 5.8. If t, s, a satisfy max{0, 1 − a} < t ≤ s and τ is large enough such that
\[
\gamma + \frac{s}{2}\Big( 1 - \frac{1}{t+a} \Big) \ge \frac{1}{\tau+1}, \tag{5.57}
\]
then
\[
\|\hat{x} - \tilde{x}_{\alpha,h}\| \le c_6\,(\delta + h)^{t/(t+a)} \tag{5.58}
\]
with c₆ as in Theorem 5.7.
Proof. Let ζ, μ be as in Theorems 5.5 and 5.7, respectively. Then we observe that
\[
\mu = \frac{t}{t+a} \quad\text{if}\quad
\gamma + \frac{s}{2}\Big( 1 - \frac{1}{t+a} \Big) \ge \frac{1}{\tau+1}. \tag{5.59}
\]
Hence the result follows from Theorem 5.7.
6. Order optimality of the error estimates. In order to measure the quality of an algorithm to solve an equation of the form (2.1), Micchelli and Rivlin [12] considered the quantity
\[
e(M, \delta) := \sup\big\{ \|x\| : x \in M,\ \|Ax\| \le \delta \big\} \tag{6.1}
\]
and showed that
\[
e(M, \delta) \le E(M, \delta) \le 2e(M, \delta), \tag{6.2}
\]
where
\[
E(M, \delta) = \inf_{R}\,\sup\big\{ \|x - Rv\| : x \in M,\ v \in H,\ \|Ax - v\| \le \delta \big\} \tag{6.3}
\]
is the best possible worst error. Here the stabilizing set M is assumed to be convex with M = −M and 0 ∈ M (see also Vaĭnikko and Veretennikov [22]), and the infimum is taken over all algorithms R : Y → X. Since H is a Hilbert space and A is assumed to be selfadjoint and positive, we, in fact, have (cf. Melkman and Micchelli [11])
\[
e(M, \delta) = E(M, \delta). \tag{6.4}
\]
Now using the assumption (2.18), and taking r = −a, λ = 0 in the interpolation inequality (2.15), we obtain
\[
\|x\| \le \|x\|_{-a}^{t/(t+a)}\,\|x\|_t^{a/(t+a)}
\le \Big( \frac{\|Ax\|}{c_1} \Big)^{t/(t+a)}\,\|x\|_t^{a/(t+a)}, \qquad x \in H_t. \tag{6.5}
\]
Therefore, for the set
\[
M_{t,\rho} = \big\{ x : \|x\|_t \le \rho \big\} \tag{6.6}
\]
with a fixed t > 0, ρ > 0,
\[
e\big( M_{t,\rho}, \delta \big) \le \Big( \frac{\delta}{c_1} \Big)^{t/(t+a)}\,\rho^{a/(t+a)}. \tag{6.7}
\]
It is known that the above estimate for e(M_{t,ρ}, δ) is sharp (cf. Vainikko [21]). In view of the above observations, an algorithm is called an optimal order yielding algorithm with respect to M_{t,ρ} and the assumption (2.18) if it yields an approximation x̃ corresponding to the data ỹ with ‖y − ỹ‖ ≤ δ satisfying
\[
\|\hat{x} - \tilde{x}\| = O\big( \delta^{t/(t+a)} \big), \qquad \hat{x} \in H_t. \tag{6.8}
\]
Clearly, Corollary 5.8 shows that if h = O(δ), if max{0, 1 − a} < t ≤ s, and if τ is large enough such that
\[
\gamma + \frac{s}{2}\Big( 1 - \frac{1}{t+a} \Big) \ge \frac{1}{\tau+1}, \tag{6.9}
\]
then we obtain the optimal order.
7. Applications. For r ≥ 2, denote by S_h the space of rth-order splines on the uniform mesh of width h = 1/n, that is, S_h consists of functions in C^{r−2}[0,1] which are piecewise polynomials of degree r − 1. For positive integers s, let H^s denote the Sobolev space of functions u ∈ C^{s−1}[0,1] with u^{(s−1)} absolutely continuous and the norm ‖u‖_{H^s} defined by
\[
\|u\|_{H^s} = \Big( \sum_{i=0}^{s} \big\| u^{(i)} \big\|^2 \Big)^{1/2}, \qquad u \in H^s. \tag{7.1}
\]
Then S_h is a finite-dimensional subspace of H^{r−1} which has the following well-known approximation property (cf. [1]): for u ∈ H^s, s ∈ N, there is a constant κ (independent of h) such that
\[
\inf_{\varphi\in S_h}\|u - \varphi\|_{H^j} \le \kappa\,h^{\min\{s,r\}-j}\,\|u\|_{H^s}, \qquad u \in H^s,\ j \in \{0, 1\}, \tag{7.2}
\]
so that assumption (2.32) is satisfied. We take L as in (2.19), that is,
\[
Lx := \sum_{j=1}^{\infty} j^2 \langle x, u_j\rangle u_j, \tag{7.3}
\]
where $u_j(t) := \sqrt{2}\,\sin(j\pi t)$, j ∈ N, with domain
\[
D(L) := \Big\{ x \in L^2[0,1] : \sum_{j=1}^{\infty} j^4 \langle x, u_j\rangle^2 < \infty \Big\}. \tag{7.4}
\]
In this case, $(H_t)_{t\in\mathbb{R}}$ is given as in (2.21). It can be seen that
\[
\begin{aligned}
H_t &= \Big\{ x \in L^2[0,1] : \sum_{j=1}^{\infty} j^{4t} \langle x, u_j\rangle^2 < \infty \Big\} \\
&= \Big\{ x \in H^{2t}(0,1) : x^{(2l)}(0) = x^{(2l)}(1) = 0,\ l = 0, 1, \ldots, \big\lfloor t - \tfrac14 \big\rfloor \Big\},
\end{aligned} \tag{7.5}
\]
where ⌊p⌋ denotes the greatest integer less than or equal to p. We observe that H₀ = L²[0,1] and, for s ∈ N, H_s ⊂ H^s.
Now, let A : L²[0,1] → L²[0,1] be a positive selfadjoint operator. Then we have
\[
\big\| A(I - P_h) \big\| = \big\| (I - P_h)A \big\| = \sup_{\|u\|\le 1}\,\inf_{\varphi\in S_h}\|Au - \varphi\|. \tag{7.6}
\]
Hence, by (7.2),
\[
\big\| A(I - P_h) \big\| \le \kappa\,h^{\min(s,r)}\,\sup_{\|u\|\le 1}\|Au\|_{H^s}. \tag{7.7}
\]
From the above inequality it is clear that if Au ∈ H^s for every u ∈ L²[0,1] and if A : L²[0,1] → H^s is a bounded operator, then there exists a constant ĉ such that
\[
\big\| A(I - P_h) \big\| \le \kappa\,\hat{c}\,h^{\min(s,r)}, \tag{7.8}
\]
so that (5.2) is satisfied.
Now, we consider the case of the integral operator (2.4) with all its eigenvalues nonnegative and with k(ξ, t) = k(t, ξ) for all (ξ, t) ∈ [0,1] × [0,1], such that k is differentiable s times with respect to the variable ξ with its sth derivative lying in L²[0,1]. For example, the integral operator may be the one associated with the backward heat equation problem considered in Section 2. Now,
\[
\frac{d^i}{d\xi^i}(Au)(\xi) = \int_0^1 \frac{\partial^i k(\xi,t)}{\partial\xi^i}\,u(t)\,dt, \tag{7.9}
\]
so that
\[
\|Au\|_{H^s} \le k_{0,s}\|u\| \quad\text{with}\quad
k_{0,s} = \Bigg( \sum_{i=0}^{s} \int_0^1\!\!\int_0^1 \Big| \frac{\partial^i k(\xi,t)}{\partial\xi^i} \Big|^2\,dt\,d\xi \Bigg)^{1/2}. \tag{7.10}
\]
Thus we get (7.8) with ĉ = k_{0,s}.
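For the backward-heat kernel (2.11), the constant $k_{0,s}$ can even be summed explicitly: differentiating (2.11) i times in ξ and using the $L^2$-orthogonality of the resulting trigonometric families gives $\int_0^1\!\int_0^1 |\partial^i k/\partial\xi^i|^2\,dt\,d\xi = \sum_n e^{-2n^2\pi^2 T}(n\pi)^{2i}$. A short sketch, where T, s, and the truncation N are our illustrative choices:

```python
import numpy as np

# Sketch: k_{0,s} of (7.10) for the backward-heat kernel (2.11).  By
# orthogonality of u_n and of its derivatives, the double integral of
# |d^i k / d xi^i|^2 collapses to sum_n exp(-2 n^2 pi^2 T) (n pi)^(2i),
# a rapidly convergent series.  T, s, N below are illustrative assumptions.
T, s, N = 0.1, 2, 500
n = np.arange(1, N + 1)
w = np.exp(-2.0 * (n * np.pi) ** 2 * T)
k0s = np.sqrt(sum((w * (n * np.pi) ** (2 * i)).sum() for i in range(s + 1)))
print("k_{0,s} ~", k0s)
```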
References
[1] I. Babuška and A. K. Aziz, Survey lectures on the mathematical foundations of the finite element method, The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations (Proc. Sympos., Univ. Maryland, Baltimore, Md, 1972) (A. K. Aziz, ed.), Academic Press, New York, 1972, pp. 1–359.
[2] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Mathematics and Its Applications, vol. 375, Kluwer Academic Publishers, Dordrecht, 1996.
[3] H. W. Engl and A. Neubauer, Convergence rates for Tikhonov regularization in finite-dimensional subspaces of Hilbert scales, Proc. Amer. Math. Soc. 102 (1988), no. 3, 587–592.
[4] S. George and M. T. Nair, Error bounds and parameter choice strategies for simplified regularization in Hilbert scales, Integral Equations Operator Theory 29 (1997), no. 2, 231–242.
[5] S. George and M. T. Nair, An optimal order yielding discrepancy principle for simplified regularization of ill-posed problems in Hilbert scales, Int. J. Math. Math. Sci. (2003), no. 39, 2487–2499.
[6] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Research Notes in Mathematics, vol. 105, Pitman, Massachusetts, 1984.
[7] M. Hegland, An optimal order regularization method which does not use additional smoothness assumptions, SIAM J. Numer. Anal. 29 (1992), no. 5, 1446–1461.
[8] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, Applied Mathematical Sciences, vol. 120, Springer-Verlag, New York, 1996.
[9] S. G. Krein and J. I. Petunin, Scales of Banach spaces, Russian Math. Surveys 21 (1966), 85–160.
[10] B. A. Mair, Tikhonov regularization for finitely and infinitely smoothing operators, SIAM J. Math. Anal. 25 (1994), no. 1, 135–147.
[11] A. A. Melkman and C. A. Micchelli, Optimal estimation of linear operators in Hilbert spaces from inaccurate data, SIAM J. Numer. Anal. 16 (1979), no. 1, 87–105.
[12] C. A. Micchelli and T. J. Rivlin, A survey of optimal recovery, Optimal Estimation in Approximation Theory (C. A. Micchelli and T. J. Rivlin, eds.), Plenum Press, New York, 1977, pp. 1–54.
[13] M. T. Nair, On Morozov's method for Tikhonov regularization as an optimal order yielding algorithm, Z. Anal. Anwendungen 18 (1999), no. 1, 37–46.
[14] M. T. Nair, Functional Analysis: A First Course, Prentice Hall of India Private Limited, New Delhi, 2002.
[15] M. T. Nair, Optimal order results for a class of regularization methods using unbounded operators, Integral Equations Operator Theory 44 (2002), no. 1, 79–92.
[16] M. T. Nair, M. Hegland, and R. S. Anderssen, The trade-off between regularity and stability in Tikhonov regularization, Math. Comp. 66 (1997), no. 217, 193–206.
[17] F. Natterer, Error bounds for Tikhonov regularization in Hilbert scales, Applicable Anal. 18 (1984), no. 1-2, 29–37.
[18] A. Neubauer, An a posteriori parameter choice for Tikhonov regularization in Hilbert scales leading to optimal convergence rates, SIAM J. Numer. Anal. 25 (1988), no. 6, 1313–1326.
[19] E. Schock, Ritz-regularization versus least-square-regularization. Solution methods for integral equations of the first kind, Z. Anal. Anwendungen 4 (1985), no. 3, 277–284.
[20] T. Schröter and U. Tautenhahn, Error estimates for Tikhonov regularization in Hilbert scales, Numer. Funct. Anal. Optim. 15 (1994), no. 1-2, 155–168.
[21] G. Vainikko, On the optimality of methods for ill-posed problems, Z. Anal. Anwendungen 6 (1987), no. 4, 351–362.
[22] G. M. Vaĭnikko and A. Y. Veretennikov, Iteration Procedures in Ill-Posed Problems, Nauka, Moscow, 1986.
[23] H. F. Weinberger, A First Course in Partial Differential Equations with Complex Variables and Transform Methods, Blaisdell Publishing, New York, 1965.
Santhosh George: Department of Mathematics, Government College of Arts, Science and Commerce, Sanquelim, Goa 403505, India
E-mail address: santhoshsq1729@yahoo.co.in
M. Thamban Nair: Department of Mathematics, Indian Institute of Technology Madras, Chennai
600036, India
E-mail address: mtnair@iitm.ac.in