Numerical methods for solution of Volterra and Fredholm integral
equations for functions with values in L-spaces
Vira Babenko*
Department of Mathematics, The University of Utah, Salt Lake City, UT 84112, USA
Abstract
We consider Volterra and Fredholm integral equations for functions with values in L-spaces. This includes
corresponding problems for set-valued functions, fuzzy-valued functions and many others. We prove
theorems of existence and uniqueness of the solution for such equations and suggest some algorithms
for finding approximate solutions. We obtain initial results on the approximation of functions with values
in L-spaces by piecewise linear functions, and we derive error estimates for trapezoidal quadrature
formulas. We apply these results to the convergence analysis of the suggested algorithms.
Keywords: Volterra and Fredholm integral equations, L-space, set-valued and fuzzy-valued functions,
algorithms, approximation
1. Introduction
Fredholm and Volterra integral equations for single valued functions form a classic subject in pure
and applied mathematics. Such integral equations have many important applications in biology, physics,
and engineering (see, for example, [5], [14], [9], and the references therein). A wide variety of questions
lead to integral equations for functions with values that are compact and convex sets in finite or infinite
dimensional spaces, or that are fuzzy sets (see [18] and [13]). In this paper we consider a generalized
concept, that of an L-space, that encompasses all of these as special cases. In particular, we investigate the
existence and uniqueness of solutions of linear Fredholm integral equations of the second kind and linear
Volterra integral equations, for functions with values in L-spaces and suggest algorithms for approximate
solution of such equations.
In this article we obtain initial results on the approximation of functions with values in L-spaces by
piecewise linear functions. In addition, we derive error estimates for trapezoidal quadrature formulas. For
known results on approximation and quadrature formulas for set-valued and fuzzy-valued functions we refer
the reader to [8], [1], [4] and the references therein. We use the results on piecewise linear approximation
and the error estimates of quadrature formulas to analyze the convergence, and to estimate the rate of
convergence, of the proposed numerical algorithms for the solution of integral equations.
Despite the large number of papers devoted to numerical methods for the solution of integral equations,
we do not know of any work connected with integral equations for functions with values in the space of sets
or fuzzy sets of dimensions that are greater than one. One of the purposes of this paper is to fill this gap.
We show that some existing methods for the solution of real-valued integral equations can be adapted
to the solution of integral equations in L-spaces. These are methods that have relatively low order of
accuracy (such as a collocation method based on "piecewise linear" interpolation and a quadrature formula
method based on the trapezoidal rule). Attempts to adapt methods of higher accuracy to solutions of
integral equations in L-spaces meet difficulties, because approximation theory and quadrature formulas
theory for functions with values in L-spaces are not sufficiently developed yet.
* Corresponding author. Email address: vera.babenko@gmail.com (Vira Babenko)
The paper is organized as follows. In Section 2 we list some preliminary results that are used in the
remainder of the paper. In particular we develop a calculus of functions with values in L-spaces. In
Section 3 we discuss some problems of approximation theory for functions with values in L-spaces and
obtain error estimates for quadrature formulas. We show existence and uniqueness of the solutions of these integral
equations in Section 4, and in Section 5 we describe two algorithms for their approximation. Analysis of
convergence of such methods is presented in Section 6. In Section 7 we give illustrative examples. We
end the paper with some conclusions in Section 8.
2. Preliminary results. Calculus of functions with values in L-spaces
2.1. L-spaces
The following definition was introduced by Vahrameev in [20]:
Definition 1. A complete separable metric space X with metric δ is said to be an L-space if operations
of addition of elements and multiplication of elements by real numbers are defined in X, and the following
axioms are satisfied:
A1. ∀x, y ∈ X: x + y = y + x;
A2. ∀x, y, z ∈ X: x + (y + z) = (x + y) + z;
A3. ∃θ ∈ X ∀x ∈ X: x + θ = x (θ is called the zero element of X);
A4. ∀x, y ∈ X, ∀λ ∈ R: λ(x + y) = λx + λy;
A5. ∀x ∈ X, ∀λ, µ ∈ R: λ(µx) = (λµ)x;
A6. ∀x ∈ X: 1 · x = x, 0 · x = θ;
A7. ∀x, y ∈ X, ∀λ ∈ R: δ(λx, λy) = |λ|δ(x, y);
A8. ∀x, y, u, v ∈ X: δ(x + y, u + v) ≤ δ(x, u) + δ(y, v).
2.2. Examples of L-spaces
1. Any real Banach space (Y, ‖ · ‖_Y) endowed with the metric δ(x, y) = ‖x − y‖_Y is an L-space.
2. Let K(Rn ) be the set of all nonempty and compact subsets of Rn and let Kc (Rn ) ⊂ K(Rn ) be the
subset of convex sets.
Definition 2. Let | · | be the Euclidean norm in R^n. For A, B ∈ K(R^n) and α ∈ R:
A + B := {x + y : x ∈ A, y ∈ B}, αA := {αx : x ∈ A},
δ(A, B) := max{ sup_{x∈A} inf_{y∈B} |x − y|, sup_{x∈B} inf_{y∈A} |x − y| } (the Hausdorff distance; see the illustrative sketch at the end of this subsection).
With these operations and this metric, K(R^n) and its subspace Kc(R^n) are complete, separable metric
spaces (see [8]), and since the axioms A1–A8 hold, these spaces are L-spaces.
3. The set of all closed bounded subsets of a given Banach space, endowed with the Hausdorff metric
is an L-space.
4. Any quasilinear normed space Y (see [2] for the definition) is an L-space.
5. Consider (see, e.g., [6]) the class E^n of fuzzy sets consisting of functions u : R^n → [0, 1] such that
(a) u is normal, i.e. there exists an x_0 ∈ R^n such that u(x_0) = 1;
(b) u is fuzzy convex, i.e. for any x, y ∈ R^n and 0 ≤ λ ≤ 1, u(λx + (1 − λ)y) ≥ min{u(x), u(y)};
(c) u is upper semicontinuous;
(d) the closure of {x ∈ R^n : u(x) > 0}, denoted by [u]^0, is compact.
For each 0 < α ≤ 1, the α-level set [u]^α of a fuzzy set u is defined as [u]^α = {x ∈ R^n : u(x) ≥ α}.
The addition u + v and scalar multiplication cu, c ∈ R \ {0}, on E^n are defined, in terms of α-level
sets, by
[u + v]^α = [u]^α + [v]^α, [cu]^α = c[u]^α for each 0 < α ≤ 1.
Define also 0 · u by the equality [0 · u]^α = {θ} (here θ = (0, ..., 0) ∈ R^n). One of the possible metrics
on E^n is defined as
d_p(u, v) = ( ∫_0^1 δ([u]^α, [v]^α)^p dα )^{1/p}, 1 ≤ p < ∞.
Then the space (E^n, d_p) is (see [6, Theorem 3]) a complete separable metric space and therefore an
L-space.
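As an illustration of the Hausdorff distance of Definition 2 above, here is a minimal Python sketch (not the MATLAB implementation mentioned in Section 7; the representation of compact sets by finite point lists is an assumption made only for concreteness):

import numpy as np

def hausdorff_distance(A, B):
    # A, B: arrays of shape (m, n) and (k, n) listing the points of two compact subsets of R^n
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise Euclidean distances
    d_AB = D.min(axis=1).max()   # sup over x in A of inf over y in B of |x - y|
    d_BA = D.min(axis=0).max()   # sup over x in B of inf over y in A of |x - y|
    return max(d_AB, d_BA)

# Example: A = {(0,0), (1,0)}, B = {(0,1)} gives max(sqrt(2), 1) = sqrt(2).
print(hausdorff_distance(np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([[0.0, 1.0]])))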
2.3. Integrals of functions with values in L-spaces
We need the following notion of convex elements in L-spaces:
Definition 3. An element x ∈ X is convex if λx + µx = (λ + µ)x ∀ λ, µ ≥ 0.
Remark 1. Note that if the element x is convex, then it follows from A7–A8 that for all λ, µ ∈ R
δ(λx, µx) ≤ |λ − µ|δ(x, θ). (2.1)
Let X^c denote the set of all convex elements of a given L-space X.
Remark 2. X^c is a closed subset of X.
We need the definition of a convexifying operator (see [20]) which we give in a somewhat modified form.
Definition 4. Let X be an L-space. The operator P : X → X^c is called a convexifying operator if
1. ∀x, y ∈ X δ(P (x), P (y)) ≤ δ(x, y);
2. P ◦ P = P ;
3. P (αx + βy) = αP (x) + βP (y), ∀x, y ∈ X, α, β ∈ R.
Examples of convexifying operators:
1. The identity operator in the space Kc (Rn ) is a convexifying operator.
2. The operator defined on the space K(R^n) by the formula P(A) = co(A), where co(A) denotes the convex hull of a set A, is a convexifying operator.
3. The identity operator in the space (E^n, d_p) is a convexifying operator.
Below we consider an L-space X with some fixed convexifying operator P. For any x ∈ X we will use the
notation x̃ = Px. If all elements of an L-space X are convex in the sense of Definition 3, then we choose
the identity operator as the convexifying operator.
Next we define the Riemannian integral for a function f : [a, b] → X, where X is an L-space. We
again follow Vahrameev [20] for this purpose. Let f̃(t) := P(f(t)).
Definition 5. The mapping f : [a, b] → X is called weakly bounded if δ(θ, f̃(t)) ≤ const, and weakly
continuous if f̃ : [a, b] → X is continuous.
Remark 3. Note that if a function f : [a, b] → X is continuous then f is weakly continuous. If all
elements x ∈ X are convex (i.e. P = Id), then the concepts of continuity and weak continuity coincide.
We need the notion of a stepwise mapping from [a, b] to an L-space X.
Definition 6. The mapping f : [a, b] → X is called stepwise if there exist a set {x_k}_{k=0}^n ⊂ X and a
partition a = t_0 < t_1 < ... < t_n = b of the interval [a, b] such that f̃(t) = x̃_k for t_{k−1} < t < t_k.
Definition 7. The Riemannian integral of a stepwise mapping f : [a, b] → X is defined as
∫_a^b f(t)dt = Σ_{k=0}^{n−1} (t_{k+1} − t_k) x̃_k.
Definition 8. We say that a weakly bounded mapping f : [a, b] → X is integrable in the Riemannian
sense if there exists a sequence {f_k} of stepwise mappings from [a, b] to X such that
∫* δ(f̃(t), f̃_k(t)) dt → 0, as k → ∞, (2.2)
where ∫* is the usual Riemann integral for real-valued functions.
It follows from (2.2) that the sequence { ∫_a^b f_k(t)dt } is a Cauchy sequence, and thus we can use the
following definition.
Definition 9. Let f : [a, b] → X be integrable in the Riemannian sense and let {f_k} be a sequence of
stepwise mappings such that (2.2) holds. Then the Riemannian integral of f is the limit
∫_a^b f(t)dt = lim_{k→∞} ∫_a^b f_k(t)dt.
As described in [20] the Riemannian integral for a function f : [a, b] → X has the following properties:
1. If f and g are integrable, then for all α, β ∈ R the linear combination αf + βg is integrable, and moreover
∫_a^b (αf(t) + βg(t))dt = α ∫_a^b f(t)dt + β ∫_a^b g(t)dt.
2. If f and g are integrable, then the function t ↦ δ(f̃(t), g̃(t)) is integrable and
δ( ∫_a^b f(t)dt, ∫_a^b g(t)dt ) ≤ ∫_a^b δ(f̃(t), g̃(t))dt.
3. If f is integrable, then f̃ = Pf is also integrable and
∫_a^b f(t)dt = P( ∫_a^b f(t)dt ) = ∫_a^b f̃(t)dt.
4. If f is integrable on [a, b] and a ≤ c ≤ b, then f is integrable on [a, c] and [c, b] and
∫_a^b f(t)dt = ∫_a^c f(t)dt + ∫_c^b f(t)dt.
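For intuition about Definitions 7–9 and the properties above, the following Python sketch approximates the integral of an interval-valued function f : [a, b] → Kc(R) by the integral of a stepwise mapping on a uniform partition (the endpoint-pair representation of intervals and the test function are assumptions made only for illustration; the convexifying operator is the identity):

def integrate_interval_valued(f, a, b, n):
    # f(t) returns an interval (lo, hi) in Kc(R); the stepwise approximation uses the
    # left endpoint of each subinterval, as in Definition 7: sum of (t_{k+1} - t_k) * x_k.
    h = (b - a) / n
    lo_sum, hi_sum = 0.0, 0.0
    for k in range(n):
        lo, hi = f(a + k * h)
        lo_sum += h * lo
        hi_sum += h * hi
    return lo_sum, hi_sum

# Example: f(t) = [0, t] on [0, 1]; the integral is [0, 1/2] up to an O(1/n) error.
print(integrate_interval_valued(lambda t: (0.0, t), 0.0, 1.0, 1000))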
The following theorem (see [20], [2]) guarantees that we can consider the integrals which arise below
as Riemannian integrals.
Theorem 1. A weakly bounded mapping f : [a, b] → X is integrable in the Riemannian sense if and
only if it is weakly continuous almost everywhere on [a, b].
2.4. Hukuhara type derivative
In this subsection we define the Hukuhara type derivative (see definition of Hukuhara derivative for
set-valued functions in [10]) of a function with values in an L-space, and prove the properties of this
notion that we will need. Throughout this subsection we suppose that X consists only of convex elements.
Definition 10. We say that an element z ∈ X is the Hukuhara type difference of elements x, y ∈ X
if x = y + z. We denote this difference by z = x −_h y.
For f : [a, b] → X we define the Hukuhara type derivative.
Definition 11. If t ∈ (a, b) and for all small enough h > 0 the differences
f(t + h) −_h f(t) and f(t) −_h f(t − h)
exist, and both limits exist and are equal to each other,
lim_{h→0+} ( f(t + h) −_h f(t) )/h = lim_{h→0+} ( f(t) −_h f(t − h) )/h,
then the function f has the Hukuhara type derivative D_H f(t) at the point t (if t = a or t = b only one of the limits is required) and
D_H f(t) = lim_{h→0+} ( f(t + h) −_h f(t) )/h.
The following analog of the fundamental theorem of calculus holds.
Theorem 2. For any function F : [a, b] → X^c that has a continuous Hukuhara type derivative on [a, b]
the following equality holds:
F(t) = F(a) + ∫_a^t D_H F(s)ds, t ∈ [a, b].
Proof. For any continuous function f : [a, b] → X^c and any m ∈ X^c we can define
F(t) = ∫_a^t f(s)ds + m, m ∈ X^c. (2.3)
Next we find D_H F(t). It follows from (2.3) and Property 4 of the integral that for t ∈ [a, b) and h > 0 small
enough the difference F(t + h) −_h F(t) is defined and
F(t + h) −_h F(t) = ∫_t^{t+h} f(s)ds.
We consider
δ( (1/h) ∫_t^{t+h} f(s)ds, f(t) ) = δ( (1/h) ∫_t^{t+h} f(s)ds, (1/h) ∫_t^{t+h} f(t)ds ) ≤ (1/h) ∫_t^{t+h} δ(f(s), f(t))ds.
Let ε > 0. Since f is continuous at the point t, there exists σ > 0 such that δ(f(s), f(t)) < ε for s ∈ (t, t + σ). This implies that for h < σ we have
δ( (1/h) ∫_t^{t+h} f(s)ds, f(t) ) ≤ (1/h) ∫_t^{t+h} ε ds = ε,
which proves the equality
lim_{h→0+} (1/h) ∫_t^{t+h} f(s)ds = f(t). (2.4)
Similarly,
lim_{h→0+} ( F(t) −_h F(t − h) )/h = lim_{h→0+} (1/h) ∫_{t−h}^t f(s)ds = f(t).
Thus, D_H F(t) = f(t).
From (2.3) we now have
F(t) = m + ∫_a^t D_H F(s)ds = F(a) + ∫_a^t D_H F(s)ds. □
3. Approximation by piecewise linear functions and errors of quadrature formulas
As usual, denote by C[a, b] the space of continuous functions f : [a, b] → R with the norm ‖f‖_{C[a,b]} =
max{|f(t)| : t ∈ [a, b]}. Let X be an L-space. Denote by C([a, b], X) the set of all continuous functions
ϕ : [a, b] → X. This set, endowed with the metric
ρ(ϕ, ψ) = max_{t∈[a,b]} δ(ϕ(t), ψ(t)) = ‖δ(ϕ(·), ψ(·))‖_{C[a,b]},
is a complete metric space (see, e.g., [2]).
For a function f ∈ C([a, b], X) we define the modulus of continuity by
ω(f, t) = sup{ δ(f(t'), f(t'')) : t', t'' ∈ [a, b], |t' − t''| ≤ t }, t ∈ [0, b − a].
Note that ω(f, t) → 0 as t → 0. If
ω(f, t) ≤ M t, (3.1)
then we say that the function f satisfies the Lipschitz condition with constant M.
Denote by ω*(f, t) the least concave majorant of the function ω(f, t). It is well known (see [12]) that the following inequalities hold:
ω(f, t) ≤ ω*(f, t) ≤ 2ω(f, t).
It will be convenient for us to state some of the approximation estimates in terms of the function ω*(f, t).
3.1. Piecewise-linear interpolation
Define an operator P_N that assigns to a function f ∈ C([a, b], X) the function
P_N[f](t) = Σ_{k=0}^N f(t_k) l_k(t), (3.2)
where t_k = a + k(b − a)/N, k = 0, ..., N, and
l_k(t) = (t − t_{k−1})/(t_k − t_{k−1}) if t ∈ [t_{k−1}, t_k], l_k(t) = (t_{k+1} − t)/(t_{k+1} − t_k) if t ∈ [t_k, t_{k+1}], and l_k(t) = 0 otherwise. (3.3)
Theorem 3. If f ∈ C([a, b], X^c), then
ρ(f, P_N[f]) ≤ ω(f, (b − a)/N). (3.4)
Moreover, for t ∈ [t_{k−1}, t_k], k = 1, ..., N,
δ(f(t), P_N[f](t)) ≤ ω*( f, 2(t − t_{k−1})(t_k − t)/(t_k − t_{k−1}) ), (3.5)
and therefore
ρ(f, P_N[f]) ≤ ω*(f, (b − a)/(2N)). (3.6)
If ω(f, t) ≤ M t, t ≥ 0, then
ρ(f, P_N[f]) ≤ M(b − a)/(2N). (3.7)
Proof. For t ∈ [t_{k−1}, t_k],
P_N[f](t) = (t_k − t)/(t_k − t_{k−1}) f(t_{k−1}) + (t − t_{k−1})/(t_k − t_{k−1}) f(t_k).
Consequently, using A8 and A7 we have
δ(f(t), P_N[f](t)) = δ( (t_k − t)/(t_k − t_{k−1}) f(t) + (t − t_{k−1})/(t_k − t_{k−1}) f(t), (t_k − t)/(t_k − t_{k−1}) f(t_{k−1}) + (t − t_{k−1})/(t_k − t_{k−1}) f(t_k) )
≤ (t_k − t)/(t_k − t_{k−1}) δ(f(t), f(t_{k−1})) + (t − t_{k−1})/(t_k − t_{k−1}) δ(f(t), f(t_k))
≤ (t_k − t)/(t_k − t_{k−1}) ω(f, t − t_{k−1}) + (t − t_{k−1})/(t_k − t_{k−1}) ω(f, t_k − t) ≤ ω(f, (b − a)/N). (3.8)
Therefore δ(f(t), P_N[f](t)) ≤ ω(f, (b − a)/N) and ρ(f, P_N[f]) ≤ ω(f, (b − a)/N); the inequality (3.4) is proved. Using (3.8) and applying Jensen's inequality we have
δ(f(t), P_N[f](t)) ≤ (t_k − t)/(t_k − t_{k−1}) ω*(f, t − t_{k−1}) + (t − t_{k−1})/(t_k − t_{k−1}) ω*(f, t_k − t)
≤ ω*( f, 2(t_k − t)(t − t_{k−1})/(t_k − t_{k−1}) ).
The inequality (3.5) is proved. Inequalities (3.6) and (3.7) are now obvious. □
The following theorem gives an estimate of the error of approximation by piecewise-linear functions for those f whose Hukuhara type derivative belongs to C([a, b], X).
Theorem 4. Suppose the function f : [a, b] → X^c has the Hukuhara type derivative D_H f(t) on the
interval [a, b]. If D_H f ∈ C([a, b], X^c), then for any k = 1, ..., N and any t ∈ [t_{k−1}, t_k]
δ(f(t), P_N[f](t)) ≤ 2 (t_k − t)(t − t_{k−1})/(t_k − t_{k−1})^2 ∫_0^{(b−a)/(2N)} ω(D_H f, 2u)du, (3.9)
and
ρ(f, P_N[f]) ≤ (1/2) ∫_0^{(b−a)/(2N)} ω(D_H f, 2u)du. (3.10)
In particular, if D_H f satisfies the Lipschitz condition with constant M, then
ρ(f, P_N[f]) ≤ M(b − a)^2/(8N^2). (3.11)
Proof. For t ∈ [t_{k−1}, t_k], using Theorem 2 we have
δ(f(t), P_N[f](t)) = δ( (t_k − t)/(t_k − t_{k−1}) ( f(t_{k−1}) + ∫_{t_{k−1}}^t D_H f(u)du ) + (t − t_{k−1})/(t_k − t_{k−1}) f(t),
(t_k − t)/(t_k − t_{k−1}) f(t_{k−1}) + (t − t_{k−1})/(t_k − t_{k−1}) ( f(t) + ∫_t^{t_k} D_H f(u)du ) )
≤ δ( (t_k − t)/(t_k − t_{k−1}) ∫_{t_{k−1}}^t D_H f(u)du, (t − t_{k−1})/(t_k − t_{k−1}) ∫_t^{t_k} D_H f(u)du )
= δ( (t_k − t)/(t_k − t_{k−1}) ∫_{t_{k−1}}^t D_H f(u)du, (t_k − t)/(t_k − t_{k−1}) ∫_{t_{k−1}}^t D_H f( (t − t_k)/(t − t_{k−1}) u + (t_k − t_{k−1})/(t − t_{k−1}) t )du )
≤ (t_k − t)/(t_k − t_{k−1}) ∫_{t_{k−1}}^t δ( D_H f(u), D_H f( (t − t_k)/(t − t_{k−1}) u + (t_k − t_{k−1})/(t − t_{k−1}) t ) )du
≤ (t_k − t)/(t_k − t_{k−1}) ∫_{t_{k−1}}^t ω( D_H f, (t_k − t_{k−1})/(t − t_{k−1}) (t − u) )du
= 2 (t_k − t)(t − t_{k−1})/(t_k − t_{k−1})^2 ∫_0^{(b−a)/(2N)} ω(D_H f, 2u)du.
Inequality (3.9) is proved. Inequality (3.10) holds since (t_k − t)(t − t_{k−1})/(t_k − t_{k−1})^2 ≤ 1/4. Inequality (3.11) now is obvious. □
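The operator P_N and the bounds of Theorems 3 and 4 can be checked numerically in the simplest L-space Kc(R). The following Python sketch (the helper names, the interval representation and the test function are assumptions made only for illustration) evaluates P_N[f] endpoint-wise via the functions l_k of (3.3) and reports max_t δ(f(t), P_N[f](t)); for a Lipschitz f the bound (3.7) guarantees a decay of order 1/N.

import numpy as np

def hat(k, t, knots):
    # l_k(t) from (3.3) on the given knots
    N = len(knots) - 1
    if k > 0 and knots[k - 1] <= t <= knots[k]:
        return (t - knots[k - 1]) / (knots[k] - knots[k - 1])
    if k < N and knots[k] <= t <= knots[k + 1]:
        return (knots[k + 1] - t) / (knots[k + 1] - knots[k])
    return 0.0

def interpolate(f, a, b, N, t):
    # P_N[f](t) for interval-valued f(t) = (lo, hi); the hat functions are nonnegative,
    # so Minkowski sums and scalar multiples act endpoint-wise
    knots = np.linspace(a, b, N + 1)
    lo = sum(f(knots[k])[0] * hat(k, t, knots) for k in range(N + 1))
    hi = sum(f(knots[k])[1] * hat(k, t, knots) for k in range(N + 1))
    return lo, hi

def interval_dist(I, J):
    # Hausdorff distance between two intervals of the real line
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

f = lambda t: (0.0, abs(t - 0.37))     # interval-valued, Lipschitz with constant 1
for N in (10, 20, 40):
    ts = np.linspace(0.0, 1.0, 501)
    print(N, max(interval_dist(f(t), interpolate(f, 0.0, 1.0, N, t)) for t in ts))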
3.2. Estimation of the remainder of a Trapezoidal Quadrature Formula
Next we obtain an estimate of the remainder of the trapezoidal quadrature formula
∫_a^b f(t)dt ≈ (b − a)/N ( (1/2) f̃(t_0) + Σ_{k=1}^{N−1} f̃(t_k) + (1/2) f̃(t_N) ), t_k = a + k(b − a)/N, k = 0, ..., N.
Such estimates are well known for real-valued functions. For f ∈ C([a, b], X) set
R_N(f) = δ( ∫_a^b f(t)dt, (b − a)/N ( (1/2) f̃(t_0) + Σ_{k=1}^{N−1} f̃(t_k) + (1/2) f̃(t_N) ) ). (3.12)
Note that
∫_a^b P_N[f](t)dt = ∫_a^b P_N[f̃](t)dt = (b − a)/N ( (1/2) f̃(t_0) + Σ_{k=1}^{N−1} f̃(t_k) + (1/2) f̃(t_N) ).
Therefore
R_N(f) = δ( ∫_a^b f̃(t)dt, ∫_a^b P_N[f̃](t)dt ) ≤ ∫_a^b δ(f̃(t), P_N[f̃](t))dt ≤ (b − a) max_{a≤t≤b} δ(f̃(t), P_N[f̃](t)) = (b − a) ρ(f̃, P_N[f̃]).
Note also that, due to Property 1 of the convexifying operator, for any function f ∈ C([a, b], X) we have
ω(f̃, t) ≤ ω(f, t), t ∈ [0, b − a].
Therefore, from Theorems 3 and 4 we obtain the following.
Theorem 5. Let f ∈ C([a, b], X). Then
R_N(f) ≤ (b − a) ω(f, (b − a)/N).
If f satisfies the Lipschitz condition (3.1) with constant M, then R_N(f) ≤ M(b − a)^2/N.
If f̃ has a continuous derivative D_H f̃ on [a, b], then
R_N(f) ≤ (b − a)/2 · ∫_0^{(b−a)/(2N)} ω(D_H f̃, 2u)du.
In particular, if D_H f̃ satisfies the condition (3.1), then R_N(f) ≤ M(b − a)^3/(8N^2).
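A short Python sketch of the trapezoidal quadrature formula above for an interval-valued f (the endpoint-pair representation and test function are illustrative assumptions; the convexifying operator is the identity); comparing with the exact integral shows the decay of R_N(f) described in Theorem 5.

import numpy as np

def trapezoid_interval(f, a, b, N):
    # trapezoidal rule applied endpoint-wise to interval-valued f(t) = (lo, hi)
    knots = np.linspace(a, b, N + 1)
    w = np.full(N + 1, (b - a) / N)
    w[0] = w[-1] = (b - a) / (2 * N)            # weights (b-a)/N * (1/2, 1, ..., 1, 1/2)
    vals = np.array([f(t) for t in knots])      # shape (N+1, 2), rows (lo, hi)
    lo, hi = w @ vals
    return lo, hi

f = lambda t: (0.0, np.exp(t))                  # f(t) = [0, e^t] on [0, 1]; exact integral [0, e - 1]
for N in (4, 8, 16):
    lo, hi = trapezoid_interval(f, 0.0, 1.0, N)
    print(N, max(abs(lo - 0.0), abs(hi - (np.e - 1.0))))   # R_N(f) shrinks roughly by 4 as N doubles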
4. Existence and Uniqueness of solution of integral equations
In this section we prove theorems on existence and uniqueness of the solution of Fredholm and Volterra
integral equations for functions with values in L-spaces. There are many works on such integral
equations for set-valued and fuzzy-valued functions (see, for example, [19], [16], [17], [7] and the references
therein).
4.1. Fredholm equation
We consider the Fredholm equation of the second kind
ϕ(t) = λ ∫_a^b K(t, s)ϕ(s)ds + f(t), (4.1)
where ϕ : [a, b] → X is the unknown function, f : [a, b] → X is a known continuous function, the kernel
K(t, s) (t, s ∈ [a, b]) is a known real-valued function, and λ is a fixed parameter.
Theorem 6. Suppose K(t, s) satisfies the following conditions:
1. K(t, s) is bounded, i.e. |K(t, s)| ≤ M for all t, s ∈ [a, b];
2. K(t, s) is continuous at every point (t, s) ∈ [a, b] × [a, b] with t ≠ s.
Let f ∈ C([a, b], X). Then for any λ such that |λ| < 1/(M(b − a)) the equation (4.1) has a unique solution
ϕ ∈ C([a, b], X).
Proof. Consider the operator A : C([a, b], X) → C([a, b], X) defined by
Aϕ(t) = λ ∫_a^b K(t, s)ϕ(s)ds + f(t), ϕ ∈ C([a, b], X).
We first prove that under the conditions on K(t, s) this operator maps C([a, b], X) into C([a, b], X). For
this it is enough to show that the operator
Bϕ(t) := ∫_a^b K(t, s)ϕ(s)ds
maps C([a, b], X) into C([a, b], X). To do this, consider δ(Bϕ(t'), Bϕ(t'')), a ≤ t' < t'' ≤ b. We have
δ(Bϕ(t'), Bϕ(t'')) = δ( ∫_a^b K(t', s)ϕ(s)ds, ∫_a^b K(t'', s)ϕ(s)ds )
= δ( ∫_a^b K(t', s)ϕ̃(s)ds, ∫_a^b K(t'', s)ϕ̃(s)ds )
≤ ∫_a^b δ( K(t', s)ϕ̃(s), K(t'', s)ϕ̃(s) )ds.
From this, using property (2.1), we obtain
δ(Bϕ(t'), Bϕ(t'')) ≤ ∫_a^b |K(t', s) − K(t'', s)| δ(ϕ̃(s), θ)ds
≤ max_{s∈[a,b]} δ(ϕ̃(s), θ) ∫_a^b |K(t', s) − K(t'', s)|ds
= ρ(ϕ̃(·), θ) ∫_a^b |K(t', s) − K(t'', s)|ds.
If K(t, s) satisfies conditions 1 and 2, then for any ε > 0 there exists σ > 0 such that
∀t', t'' ∈ [a, b]: |t' − t''| < σ ⇒ ∫_a^b |K(t', s) − K(t'', s)|ds < ε.
From this and the estimate above it follows that the function Bϕ(t) is uniformly continuous, and therefore
Bϕ ∈ C([a, b], X).
Next we show that the operator A is a contractive operator. We have
δ(Aϕ(t), Aψ(t)) = δ( λ ∫_a^b K(t, s)ϕ(s)ds + f(t), λ ∫_a^b K(t, s)ψ(s)ds + f(t) )
≤ |λ| δ( ∫_a^b K(t, s)ϕ(s)ds, ∫_a^b K(t, s)ψ(s)ds )
≤ |λ| ∫_a^b |K(t, s)| δ(ϕ(s), ψ(s))ds
≤ |λ| ∫_a^b |K(t, s)|ds max_{s∈[a,b]} δ(ϕ(s), ψ(s))
≤ |λ| M(b − a) ρ(ϕ(·), ψ(·)).
Thus ρ(Aϕ(·), Aψ(·)) ≤ |λ| M(b − a) ρ(ϕ(·), ψ(·)). This means that the mapping A is contractive if |λ| <
1/(M(b − a)). This contractive mapping has a unique fixed point, which implies that the equation (4.1) has a
unique solution in the space C([a, b], X). □
4.2. Volterra equation
We now consider the Volterra integral equation
ϕ(t) = ∫_a^t K(t, s)ϕ(s)ds + f(t), (4.2)
where ϕ : [a, b] → X is again an unknown function, and f : [a, b] → X is a known continuous function.
The kernel K(t, s) is a known real-valued function defined for (t, s) ∈ [a, b] × [a, b] such that s ≤ t. Below
we may assume that K(t, s) is defined on the whole square [a, b] × [a, b], with K(t, s) = 0 if s > t. Therefore a
Volterra integral equation is a particular case of a Fredholm integral equation. Thus we can use Theorem
6 to guarantee existence and uniqueness of the solution of the equation (4.2) for kernels K(t, s) such
that M < 1/(b − a). However, for Volterra equations we can prove the following more general theorem without
the restriction on M.
Theorem 7. Let K(t, s) be continuous in the domain {(t, s) ∈ [a, b] × [a, b] : s ≤ t} and let f ∈
C([a, b], X). Then the equation (4.2) has a unique solution ϕ ∈ C([a, b], X).
Proof. To prove this theorem we adapt the method known for Volterra equations for real-valued functions
(see, e.g., [11]). Consider the operator A : C([a, b], X) → C([a, b], X) defined by
Aϕ(t) = ∫_a^t K(t, s)ϕ(s)ds + f(t).
Since equation (4.2) is a particular case of equation (4.1), it is clear that this operator maps the space
C([a, b], X) into C([a, b], X). We prove that some integer power of this operator is a contractive operator.
We need to introduce some additional notation. Let
K_1(t, s) = K(t, s), K_N(t, s) = ∫_a^b K_{N−1}(t, u)K(u, s)du, N > 1.
Note that if |K(t, s)| ≤ M then (see [11, p. 449])
|K_N(t, s)| ≤ M^N (b − a)^{N−1}/(N − 1)!. (4.3)
It is easily seen that for any N ∈ N
A^N ϕ(t) = ∫_a^b K_N(t, s)ϕ(s)ds + Σ_{k=1}^{N−1} ∫_a^b K_{N−k}(t, s)f(s)ds + f(t). (4.4)
Using (4.4) and (4.3) we obtain
δ(A^N ϕ(t), A^N ψ(t)) ≤ δ( ∫_a^t K_N(t, s)ϕ(s)ds, ∫_a^t K_N(t, s)ψ(s)ds )
≤ ∫_a^t δ( K_N(t, s)ϕ(s), K_N(t, s)ψ(s) )ds
≤ ∫_a^t |K_N(t, s)| δ(ϕ(s), ψ(s))ds ≤ M^N (b − a)^N/(N − 1)! · max_{a≤s≤b} δ(ϕ(s), ψ(s)).
Therefore,
ρ(A^N ϕ, A^N ψ) ≤ M^N (b − a)^N/(N − 1)! · ρ(ϕ, ψ).
If N is sufficiently large, then M^N (b − a)^N/(N − 1)! < 1. Fix such an N. We have shown that the operator A^N is
contractive. Using the generalized contractive mapping principle (see [11]) we obtain that A has a unique
fixed point, and therefore the equation (4.2) has a unique solution ϕ ∈ C([a, b], X). □
5. Algorithms for approximate solution
In this section we describe algorithms for the approximate solution of Fredholm and Volterra integral
equations for functions with values in L-spaces.
We adapt well-known methods for integral equations for real-valued functions, specifically the collocation
method (see, for example, [3, ch. 3]) in Subsections 5.1 and 5.2, and quadrature formula methods (see [3],
[14]) in Subsections 5.3 and 5.4.
5.1. Fredholm equation
We start with the Fredholm integral equation (4.1). Let n ∈ N. Choose a set of knots a = t_0 < t_1 <
... < t_n = b and a set of continuous real-valued functions γ_0, γ_1, ..., γ_n defined on [a, b]. Suppose that these
functions satisfy the following interpolation conditions:
γ_k(t_j) = δ_{k,j}, j, k = 0, 1, ..., n.
Note that we can use cardinal Lagrange interpolation polynomials, as well as cardinal interpolation splines,
as such functions. For example, we can take γ_k(t) = l_k(t), where l_k(t) is defined by (3.3) with t_k = a + k(b − a)/n.
We look for a solution of (4.1) in the form
ϕ^n(t) = Σ_{k=0}^n ϕ_k γ_k(t), ϕ_k ∈ X, k = 0, ..., n. (5.1)
Substituting into the equation (4.1) and setting t = t_j gives
ϕ_j = λ Σ_{k=0}^n ϕ̃_k ∫_a^b K(t_j, s)γ_k(s)ds + f_j, j = 0, ..., n, (5.2)
where f_j = f(t_j). Set a_{j,k} = ∫_a^b K(t_j, s)γ_k(s)ds. The system (5.2) can be rewritten in the form
ϕ_j = f_j + λ Σ_{k=0}^n ϕ̃_k a_{j,k}, j = 0, ..., n. (5.3)
Under some additional assumptions we can solve this system by the method of successive approximations.
We illustrate this in detail for γ_k(t) = l_k(t), k = 0, 1, ..., n.
Choose an initial approximation {x_0, ..., x_n} to the solution of the system (5.3). For j = 0, ..., n set
x_j^0 = x_j, x_j^{m+1} = f_j + λ Σ_{k=0}^n x̃_k^m a_{j,k}, m = 0, 1, ... .
Let X^{n+1} = X × ... × X (n + 1 times). Define a metric on the space X^{n+1} by
ρ((x_0, ..., x_n), (y_0, ..., y_n)) := Σ_{j=0}^n δ(x_j, y_j).
With this metric the space X^{n+1} is complete.
Consider the operator B : X^{n+1} → X^{n+1} defined by the rule (x_0, ..., x_n) ↦ (y_0, ..., y_n), where
y_j = f_j + λ Σ_{k=0}^n x̃_k a_{j,k}, j = 0, ..., n.
We have
ρ(B(x_0, ..., x_n), B(y_0, ..., y_n)) = Σ_{j=0}^n δ( f_j + λ Σ_{k=0}^n x̃_k a_{j,k}, f_j + λ Σ_{k=0}^n ỹ_k a_{j,k} )
≤ Σ_{j=0}^n δ( λ Σ_{k=0}^n x̃_k a_{j,k}, λ Σ_{k=0}^n ỹ_k a_{j,k} )
≤ |λ| Σ_{j=0}^n Σ_{k=0}^n δ(x̃_j, ỹ_j)|a_{j,k}| ≤ |λ| Σ_{j=0}^n δ(x_j, y_j) Σ_{k=0}^n |a_{j,k}|
≤ |λ| M(b − a) ρ((x_0, ..., x_n), (y_0, ..., y_n)).
(The last inequality holds since
Σ_{k=0}^n |a_{j,k}| ≤ Σ_{k=0}^n ∫_a^b |K(t_j, s)| l_k(s)ds ≤ M ∫_a^b Σ_{k=0}^n l_k(s)ds = M(b − a).)
Therefore, if |λ| < 1/(M(b − a)), the operator B is contractive, and consequently the sequence {(x_0^m, ..., x_n^m)}_{m=0}^∞
converges to the solution (ϕ_0, ..., ϕ_n) of the system (5.3) as m → ∞.
The function of the form (5.1), where {ϕ_0, ..., ϕ_n} is the solution of the system (5.3), is our approximation of the solution of the equation (4.1).
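The scheme (5.1)–(5.3) together with the successive approximations for B can be sketched in a few lines of Python for interval-valued data (X = Kc(R), P the identity). The rectangle rule used to evaluate a_{j,k}, the function names and the sample data below are assumptions of this sketch, not the author's MATLAB code; a nonnegative kernel and λ ≥ 0 are assumed so that the interval arithmetic acts endpoint-wise.

import numpy as np

def collocation_fredholm(K, f, a, b, n, lam, sweeps=200):
    # phi_j = f_j + lam * sum_k a_{j,k} phi_k, with a_{j,k} = int_a^b K(t_j, s) l_k(s) ds
    t = np.linspace(a, b, n + 1)
    s = np.linspace(a, b, 2001)
    ds = s[1] - s[0]
    hats = np.maximum(0.0, 1.0 - np.abs(s[None, :] - t[:, None]) * n / (b - a))   # l_k(s)
    A = np.array([[np.sum(K(t[j], s) * hats[k]) * ds for k in range(n + 1)]
                  for j in range(n + 1)])                                         # a_{j,k} (rectangle rule)
    F = np.array([f(tj) for tj in t], dtype=float)     # interval values f_j as rows (lo, hi)
    X = F.copy()                                       # initial approximation x_j^0 = f_j
    for _ in range(sweeps):
        X = F + lam * A @ X                            # x_j^{m+1} = f_j + lam * sum_k a_{j,k} x_k^m
    return t, X

# Example: K(t, s) = exp(-(t + s)), f(t) = [0, 1], lam = 1/2 < 1/(M(b - a)) with M = 1.
t, X = collocation_fredholm(lambda tt, ss: np.exp(-(tt + ss)),
                            lambda tt: (0.0, 1.0), 0.0, 1.0, 16, 0.5)
print(X[0], X[-1])

The endpoint-wise arithmetic above is valid only because λ and all a_{j,k} are nonnegative; for sign-changing kernels the products would have to be handled with full interval arithmetic.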
5.2. Volterra equation
Because Volterra equations can be considered as a special case of Fredholm equations, the method
described in the preceding subsection can be applied for their solution. However, in the case X = X^c and
supp γ_k(t) ⊂ [t_{k−1}, t_{k+1}], the resulting linear system becomes triangular and can be solved explicitly and
rather simply (here supp γ(t) denotes the support of a function γ(t)).
Consider the Volterra equation (4.2) under the assumption that K(t, s) is nonnegative. As before, we
look for a solution of the form (5.1).
As above, let a_{j,k} = ∫_a^{t_j} K(t_j, s)γ_k(s)ds. Assume that γ_k(t) ≥ 0, k = 0, ..., n. Then a_{j,k} ≥ 0 if k ≤ j
and a_{j,k} = 0 if k > j. Substituting (5.1) into (4.2) and evaluating at t_j gives the triangular system
ϕ_j = Σ_{k=0}^j a_{j,k} ϕ_k + f_j, j = 0, ..., n,
that defines ϕ_0, ..., ϕ_n. This system can be rewritten as
ϕ_j −_h a_{j,j} ϕ_j = Σ_{k=0}^{j−1} ϕ_k a_{j,k} + f_j.
We assume that the functions γ_k(t) are uniformly bounded (in n) and that n is sufficiently large, so that
0 ≤ a_{j,j} < 1 for j = 0, ..., n.
With this assumption, and using the fact that the ϕ_j are convex, we obtain
ϕ_j −_h a_{j,j} ϕ_j = (1 − a_{j,j})ϕ_j.
Thus we obtain the explicit recursion
ϕ_0 = f_0, ϕ_j = 1/(1 − a_{j,j}) ( Σ_{k=0}^{j−1} ϕ_k a_{j,k} + f_j ), j = 1, ..., n. (5.4)
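A hedged Python sketch of the recursion (5.4) for interval-valued unknowns (the interval representation, the rectangle-rule evaluation of a_{j,k} and the sample data are assumptions of this sketch); the nonnegativity of K and of the hat functions makes all operations act endpoint-wise.

import numpy as np

def collocation_volterra(K, f, a, b, n):
    # phi_0 = f_0;  phi_j = (sum_{k<j} a_{j,k} phi_k + f_j) / (1 - a_{j,j}),  a_{j,j} in [0, 1)
    t = np.linspace(a, b, n + 1)
    h = (b - a) / n
    s = np.linspace(a, b, 4001)
    ds = s[1] - s[0]
    hats = np.maximum(0.0, 1.0 - np.abs(s[None, :] - t[:, None]) / h)    # l_k(s) >= 0
    phi = np.zeros((n + 1, 2))
    phi[0] = f(t[0])
    for j in range(1, n + 1):
        mask = s <= t[j]                                                  # a_{j,k} = int_a^{t_j} K(t_j, s) l_k(s) ds
        ajk = [np.sum(K(t[j], s[mask]) * hats[k, mask]) * ds for k in range(j + 1)]
        acc = np.array(f(t[j]), dtype=float)
        for k in range(j):
            acc += ajk[k] * phi[k]                                        # nonnegative coefficients
        phi[j] = acc / (1.0 - ajk[j])
    return t, phi

# Hypothetical data for illustration: K(t, s) = t*s, f(t) = [0, 1 + t] on [0, 1].
t, phi = collocation_volterra(lambda tt, ss: tt * ss, lambda tt: (0.0, 1.0 + tt), 0.0, 1.0, 14)
print(phi[-1])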
5.3. Nyström Method for Fredholm Equations
Consider again the Fredholm equation (4.1). For real-valued functions the following approach to
finding approximate solutions of such equations is well known (see, for example, [3, ch. 4]). Let a quadrature
formula be given:
∫_a^b g(s)ds ≈ Σ_{j=1}^n p_j g̃(t_j), g ∈ C([a, b], X),
where a ≤ t_1 < t_2 < ... < t_n ≤ b and p_j ∈ R.
Using this quadrature formula we approximate the integral in (4.1) and obtain a new equation:
ϕ(t) = λ Σ_{j=1}^n p_j K(t, t_j)ϕ̃(t_j) + f(t). (5.5)
Evaluating (5.5) at t_k, k = 1, ..., n, gives
ϕ(t_k) = λ Σ_{j=1}^n p_j K(t_k, t_j)ϕ̃(t_j) + f(t_k).
This is a system of equations for the unknowns ϕ(t_k) = ϕ_k, which we rewrite in the form
ϕ_k = λ Σ_{j=1}^n b_{kj} ϕ̃_j + f_k, k = 1, ..., n, (5.6)
where b_{kj} = p_j K(t_k, t_j) and f_k = f(t_k). Under some additional assumptions we can solve (5.6) using the method of successive approximations, analogously to the solution of the system (5.3).
The solution ϕ_1, ..., ϕ_n of the system (5.6) can be used as approximate values of the solution of the
equation (4.1) at the points t_1, ..., t_n.
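A brief Python sketch of the Nyström system (5.6) with trapezoidal weights, solved by successive approximations for interval-valued data (the weights, names and sample data are assumptions of this sketch; λ ≥ 0 and a nonnegative kernel are assumed so that the arithmetic acts endpoint-wise).

import numpy as np

def nystrom_fredholm(K, f, a, b, n, lam, sweeps=200):
    # phi_k = lam * sum_j b_{kj} phi_j + f_k,  b_{kj} = p_j K(t_k, t_j)
    t = np.linspace(a, b, n)                       # nodes a = t_1 < ... < t_n = b
    p = np.full(n, (b - a) / (n - 1))              # trapezoidal weights
    p[0] = p[-1] = (b - a) / (2 * (n - 1))
    B = p[None, :] * K(t[:, None], t[None, :])     # b_{kj}
    F = np.array([f(tk) for tk in t], dtype=float) # interval values f_k as rows (lo, hi)
    X = F.copy()
    for _ in range(sweeps):
        X = F + lam * B @ X                        # successive approximations
    return t, X

t, X = nystrom_fredholm(lambda tt, ss: np.exp(-(tt + ss)),
                        lambda tt: (0.0, 1.0), 0.0, 1.0, 17, 0.5)
print(X[0], X[-1])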
5.4. Nyström Method for Volterra Equations
Consider now the Volterra equation (4.2). We describe a quadrature formula method for the approximate solution of the linear Volterra equation (4.2) based on the trapezoidal rule. We assume that the
kernel K(t, s) is nonnegative. Since we can consider Volterra equations as a particular case of Fredholm
equations, the method described in Subsection 5.3 can be applied for their solution too. However, if we
assume that all elements of the space X are convex, we can obtain an explicit solution of the resulting system.
For simplicity let a = 0, b = 1, t_i = i/n, i = 0, 1, ..., n. Applying the trapezoidal rule to the integral
∫_0^{t_k} K(t_k, s)ϕ(s)ds in (4.2) and evaluating the resulting relation at t = t_k, k = 1, ..., n, we obtain
ϕ_k = f_k + (1/n) ( (1/2) K(t_k, t_0)ϕ_0 + Σ_{i=1}^{k−1} K(t_k, t_i)ϕ_i + (1/2) K(t_k, t_k)ϕ_k ), k = 1, 2, ..., n
(here Σ_{i=1}^0 := 0), with ϕ_0 = f_0. Set c_{ki} = (1/n) K(t_k, t_i) and note that for n large enough we have 0 ≤ c_{ki} < 1.
We obtain an explicit recursive formula for the approximate solution of (4.2) at the points t_k, analogous to (5.4):
ϕ_0 = f_0, ϕ_j = 1/(1 − (1/2)c_{jj}) ( (1/2) c_{j0} ϕ_0 + Σ_{i=1}^{j−1} c_{ji} ϕ_i + f_j ), j = 1, ..., n.
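The explicit recursion above translates directly into code. The following Python sketch (names and sample data are assumptions of this sketch) computes the approximate values ϕ_k at t_k = k/n for an interval-valued right-hand side, under the stated convexity and nonnegativity assumptions.

import numpy as np

def nystrom_volterra(K, f, n):
    # phi_0 = f_0;  phi_j = ( (1/2) c_{j0} phi_0 + sum_{0<i<j} c_{ji} phi_i + f_j ) / (1 - (1/2) c_{jj})
    t = np.arange(n + 1) / n
    c = K(t[:, None], t[None, :]) / n              # c_{ki} = K(t_k, t_i) / n, assumed in [0, 1)
    phi = np.zeros((n + 1, 2))
    phi[0] = f(t[0])
    for j in range(1, n + 1):
        acc = np.array(f(t[j]), dtype=float) + 0.5 * c[j, 0] * phi[0]
        for i in range(1, j):
            acc += c[j, i] * phi[i]                # nonnegative coefficients: endpoint-wise
        phi[j] = acc / (1.0 - 0.5 * c[j, j])
    return t, phi

# Hypothetical data for illustration: K(t, s) = t*s, f(t) = [0, 1 + t].
t, phi = nystrom_volterra(lambda tt, ss: tt * ss, lambda tt: (0.0, 1.0 + tt), 14)
print(phi[-1])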
It seems possible to use quadrature formulas of higher accuracy (see, for example, [14, ch. 7]) for the
approximate solution of Volterra type equations, as well as of Fredholm type equations, for functions with
values in L-spaces. However, an analysis of such methods requires additional tools that have not yet been
developed for functions with values in L-spaces, or even for set-valued functions.
6. Convergence Analysis
In Subsections 6.1 and 6.2 we assume that all elements of the space X are convex.
6.1. Convergence of the algorithm for the approximate solution of Fredholm equations
Recall that in Subsection 5.1 the elements ϕ_0, ..., ϕ_n were defined as the solution of the system (5.3). As
an approximate solution of the Fredholm equation (4.1) we consider the function (5.1) with γ_k = l_k.
Theorem 8. Let K(t, s) and f(t) satisfy the conditions of Theorem 6, and let |λ| < 1/(M(b − a)). Let ϕ(t) be the
solution of the equation (4.1) and let ϕ^n(t), n ∈ N, be defined by (5.1). Then
ρ(ϕ, ϕ^n) = max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) → 0, as n → ∞. (6.1)
If for any s ∈ [a, b] the kernel K(t, s) satisfies the Lipschitz condition (in t) with a constant M_1 that
does not depend on s, and f(t) satisfies on [a, b] the Lipschitz condition with constant M_2, then there exists
a constant C_1 such that for any n
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ C_1/n. (6.2)
Moreover, if for any s ∈ [a, b] the kernel K(t, s) is continuously differentiable with respect to t, ∂K(t, s)/∂t
satisfies the Lipschitz condition with a constant M_3 that does not depend on s, f(t) has a Hukuhara type
derivative D_H f(t) on [a, b], and D_H f(t) satisfies the Lipschitz condition (3.1) with constant M_4, then there
exists a constant C_2 such that for any n
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ C_2/n^2. (6.3)
Proof. Recall that the operator P_N[f](t) is defined by (3.2). Using (5.1) and (5.2) we have
P_n[ϕ^n](t) = λ P_n[ Σ_{k=0}^n ϕ_k ∫_a^b K(·, s)l_k(s)ds ](t) + P_n[f](t)
= λ Σ_{j=0}^n ( Σ_{k=0}^n ϕ_k ∫_a^b K(t_j, s)l_k(s)ds ) l_j(t) + P_n[f](t)
= λ ∫_a^b ( Σ_{j=0}^n K(t_j, s)l_j(t) ) Σ_{k=0}^n ϕ_k l_k(s)ds + P_n[f](t)
= λ ∫_a^b P_{n,t}[K](t, s)ϕ^n(s)ds + P_n[f](t),
where P_{n,t}[K](t, s) = Σ_{j=0}^n K(t_j, s)l_j(t).
Since P_n[ϕ^n](t) = ϕ^n(t), we obtain that the function ϕ^n(t) solves the following equation:
ϕ^n(t) = λ ∫_a^b P_{n,t}[K](t, s)ϕ^n(s)ds + P_n[f](t). (6.4)
Let ϕ(t) be the solution of the equation (4.1) and let ϕ^n(t) solve equation (6.4). We estimate the
distance between ϕ^n and ϕ. We have
δ(ϕ(t), ϕ^n(t)) ≤ |λ| δ( ∫_a^b K(t, s)ϕ(s)ds, ∫_a^b P_{n,t}[K](t, s)ϕ^n(s)ds ) + δ(f(t), P_n[f](t))
≤ |λ| δ( ∫_a^b K(t, s)ϕ(s)ds, ∫_a^b P_{n,t}[K](t, s)ϕ(s)ds )
+ |λ| δ( ∫_a^b P_{n,t}[K](t, s)ϕ(s)ds, ∫_a^b P_{n,t}[K](t, s)ϕ^n(s)ds ) + δ(f(t), P_n[f](t))
≤ |λ| ∫_a^b |P_{n,t}[K](t, s)| δ(ϕ(s), ϕ^n(s))ds
+ |λ| ∫_a^b |K(t, s) − P_{n,t}[K](t, s)| δ(ϕ(s), θ)ds + δ(f(t), P_n[f](t)).
Note that if |K(t, s)| ≤ M, then |P_{n,t}[K](t, s)| ≤ M as well. Therefore
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ |λ| M(b − a) max_{a≤t≤b} δ(ϕ(t), ϕ^n(t))
+ |λ| max_{a≤t≤b} ∫_a^b |K(t, s) − P_{n,t}[K](t, s)|ds · max_{a≤t≤b} δ(ϕ(t), θ) + max_{a≤t≤b} δ(f(t), P_n[f](t)).
From the estimate above we obtain that if |λ| < 1/(M(b − a)) then
(1 − |λ| M(b − a)) max_{a≤t≤b} δ(ϕ(t), ϕ^n(t))
≤ |λ| max_{a≤t≤b} ∫_a^b |K(t, s) − P_{n,t}[K](t, s)|ds · max_{a≤s≤b} δ(ϕ(s), θ) + max_{a≤t≤b} δ(f(t), P_n[f](t)). (6.5)
It is easily seen that
max_{a≤t≤b} ∫_a^b |K(t, s) − P_{n,t}[K](t, s)|ds → 0, as n → ∞. (6.6)
Besides that (see Theorem 3),
max_{a≤t≤b} δ(f(t), P_n[f](t)) → 0, as n → ∞.
From the last two relations and from (6.5) we see that
ρ(ϕ, ϕ^n) = max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) → 0, as n → ∞.
We have proved the relation (6.1). Let us now prove the relation (6.2).
Suppose that for any s ∈ [a, b] the kernel K(t, s) satisfies the Lipschitz condition (in t) with a constant
M_1 that does not depend on s. It follows from the real-valued analog of Theorem 3 that
max_{a≤t≤b} ∫_a^b |K(t, s) − P_{n,t}[K](t, s)|ds ≤ M_1 (b − a)^2/(2n). (6.7)
If f(t) satisfies on [a, b] the Lipschitz condition with constant M_2, then it follows from Theorem 3 that
max_{a≤t≤b} δ(f(t), P_n[f](t)) ≤ M_2 (b − a)/(2n). (6.8)
Using (6.7) and (6.8) we obtain from (6.5) that there exists a constant C_1 such that for any n
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ C_1/n.
We have proved the relation (6.2).
Suppose now that for any s ∈ [a, b] the kernel K(t, s) is continuously differentiable with respect to t,
and ∂K(t, s)/∂t satisfies the Lipschitz condition with a constant M_3 that does not depend on s. Then (see,
e.g., [3, p. 60])
max_{a≤t≤b} ∫_a^b |K(t, s) − P_{n,t}[K](t, s)|ds ≤ M_3 (b − a)^3/(8n^2). (6.9)
If the Hukuhara type derivative of f(t) satisfies the Lipschitz condition (3.1) with constant M_4, then, due
to Theorem 4,
max_{a≤t≤b} δ(f(t), P_n[f](t)) ≤ M_4 (b − a)^2/(8n^2). (6.10)
Using (6.9) and (6.10) we obtain from (6.5) that there exists a constant C_2 such that for any n
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ C_2/n^2.
We have proved the relation (6.3). □
6.2. Volterra equation
Since a Volterra equation is a particular case of a Fredholm equation, we can apply the first statement
of Theorem 8 to obtain the convergence of the method presented in Subsection 5.2 under the assumption
M < 1/(b − a). However, in the case of a Volterra equation with a nonnegative kernel we can eliminate
this restriction. We obtain
Theorem 9. Let the kernel K(t, s) of the equation (4.2) be nonnegative and continuous, with K(t, s) ≤ M
in the domain a ≤ s ≤ t ≤ b. Let ϕ(t) be the solution of the equation (4.2) and let ϕ^n(t), n ∈ N, be the
approximate solution (5.1). Then
ρ(ϕ, ϕ^n) = max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) → 0, as n → ∞.
If in addition the kernel K(t, s) satisfies the Lipschitz condition in t with constant G_1 for any s in the
domain a ≤ s ≤ t ≤ b, and f(t) satisfies the Lipschitz condition with constant G_2 on [a, b], then there
exists a constant G_3 such that for any n ∈ N
(1/2) max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ G_3/n.
Proof. We use the notation of the previous subsection, taking into account that K(t, s) = 0 if s > t and
λ = 1. As in Subsection 6.1, ϕ(t) and ϕ^n(t) satisfy (4.1) and (6.4), respectively.
We would like to estimate the distance between ϕ^n and ϕ. Set
P_{n,t;N}[K](t, s) := (P_{n,t}[K])_N(t, s).
Consider the operators
Aϕ(t) = ∫_a^b K(t, s)ϕ(s)ds + f(t), ϕ ∈ C([a, b], X),
A_n ϕ(t) = ∫_a^b P_{n,t}[K](t, s)ϕ(s)ds + P_n[f](t), ϕ ∈ C([a, b], X).
Let A^N ϕ(t) be defined by (4.4) and let
A_n^N ϕ(t) = ∫_a^b P_{n,t;N}[K](t, s)ϕ(s)ds + Σ_{k=1}^{N−1} ∫_a^b P_{n,t;N−k}[K](t, s)P_n[f](s)ds + P_n[f](t).
Since ϕ is a fixed point of the operator A, ϕ is a fixed point of the operator A^N for all N. Therefore
ϕ satisfies the equation
ϕ(t) = ∫_a^b K_N(t, s)ϕ(s)ds + Σ_{k=1}^{N−1} ∫_a^b K_{N−k}(t, s)f(s)ds + f(t). (6.11)
Similarly (since ϕ^n is a fixed point of the operator A_n),
ϕ^n(t) = ∫_a^b P_{n,t;N}[K](t, s)ϕ^n(s)ds + Σ_{k=1}^{N−1} ∫_a^b P_{n,t;N−k}[K](t, s)P_n[f](s)ds + P_n[f](t). (6.12)
Next we estimate δ(ϕ(t), ϕ^n(t)). Using (6.11) and (6.12) we have
max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ max_{a≤t≤b} ∫_a^b δ( K_N(t, s)ϕ(s), P_{n,t;N}[K](t, s)ϕ^n(s) )ds
+ Σ_{k=1}^{N−1} max_{a≤t≤b} ∫_a^b δ( K_{N−k}(t, s)f(s), P_{n,t;N−k}[K](t, s)P_n[f](s) )ds
+ max_{a≤t≤b} δ(f(t), P_n[f](t)) (6.13)
=: Δ_N + Σ_{k=1}^{N−1} Δ_{N−k} + Δ_0.
For Δ_N we have
Δ_N ≤ max_{a≤t≤b} ∫_a^b δ( K_N(t, s)ϕ(s), K_N(t, s)ϕ^n(s) )ds
+ max_{a≤t≤b} ∫_a^b δ( K_N(t, s)ϕ^n(s), P_{n,t;N}[K](t, s)ϕ^n(s) )ds
≤ max_{a≤t≤b} ∫_a^b K_N(t, s)δ(ϕ(s), ϕ^n(s))ds
+ max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)| δ(ϕ^n(s), θ)ds.
Using (4.3) we obtain
max_{a≤t≤b} ∫_a^b K_N(t, s)δ(ϕ(s), ϕ^n(s))ds ≤ M^N(b − a)^N/(N − 1)! · max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)) ≤ (1/2) max_{a≤t≤b} δ(ϕ(t), ϕ^n(t)),
if N is sufficiently large. We fix such an N below. Therefore we obtain
Δ_N ≤ (1/2) ρ(ϕ, ϕ^n) + max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)|ds · ρ(ϕ^n, θ). (6.14)
Let us estimate Δ_{N−k}. Using (4.3) we have, for k = 1, ..., N − 1,
Δ_{N−k} ≤ max_{a≤t≤b} ∫_a^b δ( K_{N−k}(t, s)f(s), K_{N−k}(t, s)P_n[f](s) )ds
+ max_{a≤t≤b} ∫_a^b δ( K_{N−k}(t, s)P_n[f](s), P_{n,t;N−k}[K](t, s)P_n[f](s) )ds
≤ max_{a≤t≤b} ∫_a^b K_{N−k}(t, s)δ(f(s), P_n[f](s))ds
+ max_{a≤t≤b} ∫_a^b |K_{N−k}(t, s) − P_{n,t;N−k}[K](t, s)| δ(P_n[f](s), θ)ds
≤ M^{N−k}(b − a)^{N−k}/(N − 1 − k)! · max_{a≤s≤b} δ(f(s), P_n[f](s))
+ max_{a≤t≤b} ∫_a^b |K_{N−k}(t, s) − P_{n,t;N−k}[K](t, s)|ds · max_{a≤s≤b} δ(P_n[f](s), θ).
Thus for k = 1, ..., N − 1
Δ_{N−k} ≤ M^{N−k}(b − a)^{N−k}/(N − 1 − k)! · Δ_0 + max_{a≤t≤b} ∫_a^b |K_{N−k}(t, s) − P_{n,t;N−k}[K](t, s)|ds · ρ(P_n[f], θ). (6.15)
The following statement is easy to prove by induction: for any N there exists a constant C_N > 0 (independent of n) such that
max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)|ds ≤ C_N max_{a≤u≤b} ∫_a^b |K(u, s) − P_{n,t}[K](u, s)|ds. (6.16)
Besides that, the sequences {ρ(ϕ^n, θ)} and {ρ(P_n[f], θ)} are bounded. From this and from the relations (6.13),
(6.14), (6.15) and (6.16) it follows that there exist constants C', C'' > 0 such that
(1/2) ρ(ϕ, ϕ^n) ≤ C'Δ_0 + C'' max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)|ds, Δ_0 = ρ(P_n[f], f). (6.17)
If the kernel K(t, s) of the equation (4.2) is nonnegative and continuous, with K(t, s) ≤ M in the domain
a ≤ s ≤ t ≤ b, then
max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)|ds → 0, as n → ∞. (6.18)
If the kernel K(t, s) satisfies the Lipschitz condition in t with constant G_1 for any s in the domain
a ≤ s ≤ t ≤ b, then there exists a constant G_4 such that for any n ∈ N
max_{a≤t≤b} ∫_a^b |K_N(t, s) − P_{n,t;N}[K](t, s)|ds ≤ G_4/n. (6.19)
Now the statement of Theorem 9 follows from Theorem 3 and relations (6.17), (6.18), and (6.19). □
6.3. Error analysis for quadrature methods
We present here only a theorem that gives an error analysis of the method based on the trapezoidal
quadrature formula for the approximate solution of the Fredholm equation.
Let ϕ(t) be an exact solution of the equation (4.1) and let ϕ_j^n, j = 0, 1, ..., n, be such that
ϕ_j^n = λ (b − a)/n ( (1/2) K(t_j, t_0)ϕ̃_0^n + Σ_{i=1}^{n−1} K(t_j, t_i)ϕ̃_i^n + (1/2) K(t_j, t_n)ϕ̃_n^n ) + f(t_j).
Set ε_n = max_{0≤j≤n} δ(ϕ(t_j), ϕ_j^n).
Theorem 10.
1. Let K(t, s) and f(t) be continuous, |K(t, s)| ≤ M in the domain [a, b] × [a, b], and |λ| < 1/(M(b − a)). Then
ε_n → 0, as n → ∞. (6.20)
2. If for any s ∈ [a, b] the kernel K(t, s) satisfies the Lipschitz condition (in t) with a constant M_1 that
does not depend on s, and f(t) satisfies on [a, b] the Lipschitz condition with constant M_2, then there
exists a constant C_1 such that for any n
ε_n ≤ C_1/n. (6.21)
Proof. It is obvious that there exists a constant C_2 > 0 such that
ε_n ≤ C_2 max_t R_n(K(t, ·)ϕ(·))
(for the definition of R_n(K(t, ·)ϕ(·)) see (3.12)). If K(t, s) and f(t) satisfy the conditions of statement 1 of
the theorem, then ϕ(t), as well as K(t, ·)ϕ(·), is continuous.
If K(t, s) and f(t) satisfy the conditions of statement 2 of the theorem, then K(t, ·)ϕ(·) satisfies on
[a, b] the Lipschitz condition with some constant M_3.
Now the statements of Theorem 10 follow from Theorem 5. □
7. Numerical Examples
In this section we discuss numerical examples for set-valued functions, i.e., functions with values in the
L-space Kc(R^n) of convex compact subsets of R^n. We used MATLAB to implement the algorithms for
the approximate solution of integral equations presented in Section 5.
Example 1. We first consider an initial value problem
D_H X = λ(t)X + A(t), X(a) = X_0,
where D_H X is the Hukuhara derivative (see [10]), λ(·) : [a, b] → R_+ and A(·) : [a, b] → Kc(R^n) are continuous
functions, and X_0 ∈ Kc(R^n).
This initial value problem can be rewritten in the form of the Volterra integral equation
X(t) = ∫_a^t λ(s)X(s)ds + F(t), where F(t) = X_0 + ∫_a^t A(s)ds. (7.1)
It is well known (see, for example, [15]) that the solution of this equation has the form
X(t) = e^{∫_a^t λ(s)ds} ( X_0 + ∫_a^t A(s)e^{−∫_a^s λ(τ)dτ} ds ).
We consider the equation (7.1) with λ(s) = s and
F(t) = [−1, t] × [0, 1] = [−1, 0] × [0, 1] + [0, t] × {0}, t ∈ [0, 1].
The exact solution of the equation (7.1) is
X(t) = [ −e^{t^2/2}, ∫_0^t e^{(t^2−s^2)/2} ds ] × [ 0, e^{t^2/2} ].
We plot (see Figure 1) the Hausdorff distance between the exact solution and the approximate solution.
The time-step is 1/50.
Example 2. Here we consider the Volterra integral equation (4.2) with a = 0, b = 1. Let K(t, s) = ts
and
f(t) = [ 0, 1/(2 + t) + 2t ln((2 + t)/2) − t^2 ] × [ 0, 1/(3 + t) + 3t ln((3 + t)/3) − t^2 ].
The exact solution is ϕ(t) = [0, 1/(2 + t)] × [0, 1/(3 + t)], t ∈ [0, 1]. We plot the Hausdorff distance (the error)
between the exact solution and the solutions obtained by both the collocation and the quadrature algorithms that
were described in Subsections 5.2 and 5.4. The time-step is 1/14. As one can see from the corresponding
picture (see Fig. 2a), the collocation method gives a better approximation than the quadrature method
in this case.
Example 3. Let in the equation (4.2) K(t, s) = e^{−s}, t ∈ [0, 1], and
f(t) = [ 0, e^t(1 + α cos t) − t − α sin t ] × [ 0, e^t(1 + α sin t) − t + α cos t − α ].
The exact solution has the following form: ϕ(t) = [0, e^t(1 + α cos t)] × [0, e^t(1 + α sin t)]. This is an
example of a problem for which the quadrature method gives a better approximation than the collocation method
(see Fig. 2b).
Example 4. This is an example of the approximate solution of the Fredholm equation (4.1) with a = 0, b =
1, λ = 1 by the collocation algorithm. We use the kernel K(t, s) = e^{−(t+s)} and
f(t) = [ 0, e^t + 1 − (2 − 1/e)e^{−t} ] × [ 0, 1 − e^{−t}(1 − 1/e) ], t ∈ [0, 1].
The exact solution of the problem in this case is ϕ(t) = [0, e^t + 1] × [0, 1], and, as in all previous
examples, we plot the Hausdorff distance (the error) between the exact solution and the solution obtained
by the algorithm described in Subsection 5.1, though here we use a logarithmic scale for the y-axis
(see Fig. 3a). We plot the error for 8, 16, 32 and 64 knots.
Example 5. This is another example of the collocation algorithm for a Fredholm equation. This time
we use the kernel K(t, s) = s sin^4(3t) and
f(t) = [ 0, 1 − (1/2) sin^4(3t) ] × [ 0, 1 + t − (5/6) sin^4(3t) ], t ∈ [0, 1].
The exact solution has the form ϕ(t) = [0, 1] × [0, 1 + t]. Results for 8, 16 and 32 knots are shown in Fig. 3b.
8. Discussion
We have shown that many principles and concepts governing single valued integral equations transfer
to the more general case of functions with values in L-spaces, particularly for set-valued functions, and
functions whose values are fuzzy sets. The algorithms we discussed adopt the collocation method for
the approximate solution of integral equations and use "piecewise-linear" functions. These algorithms
converge at the rate of O(1/n) if the functions defining the problem have smoothness of order 1, and
converge at the rate of O(1/n2 ) if functions have smoothness of order 2. In future work we hope to
obtain methods of the approximate solution of integral equations that will use alternative methods of
approximation and will converge faster for functions of greater smoothness. We also plan to investigate
the solution of more general, nonlinear integral equations, and integral and differential equation problems
involving functions of more than one independent variable, with values in L-spaces.
Acknowledgments
The author would like to thank Peter Alfeld and Elena Cherkaev for many helpful discussions, advice,
and comments that greatly improved the manuscript.
[1] Anastassiou G.A. Fuzzy Mathematics: Approximation Theory. Studies in Fuzziness and Soft Computing, 251 Springer, 2010.
[2] Aseev S.M. Quasilinear operators and their application in the theory of multivalued mappings. Proceedings of the Steklov Institute of Mathematics 167, 25-52 (Russian) (1985; Zbl 0582.46048).
[3] Atkinson, K. E. The Numerical Solution of Integral Equations of the Second Kind. Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 1997.
19
[4] Babenko V.F., Babenko V., Polischuk M.V. On Optimal Recovery of Integrals of Set-Valued Functions.
(2014) arXiv:1403.0840v1.
[5] Corduneanu C. Integral equations and applications. Cambridge university press, Cambridge, 1991.
[6] Diamond, P., Kloeden, P. Metric spaces of fuzzy sets. Fuzzy sets and systems 35 (1990) 241-249.
[7] Diamond, P. Theory and applications of fuzzy Volterra integral equations. IEEE Transactions on
Fuzzy Systems. V.10 I.1 (2002) 97-102.
[8] Dyn, N., Farkhi, E., Mokhov, A. Approximation of set-valued functions: Adaptation of classical
approximation operators. Hackensack: Imperial College Press, 2014.
[9] Guo, D., Lakshmikantham, V., Liu, X. Nonlinear Integral Equations in Abstract Spaces. Kluwer
Academic Publishers. In Mathematics and Its Applications 373 1996.
[10] Hukuhara, M. Intégration des applications mesurables dont la valeur est un compact convexe.
Funkcialaj Ekvacioj, 10 (1967) 205-223.
[11] Kolmogorov A.N., Fomin S.V. Elements of the Theory of Functions and Functional Analysis. Dover
Books on Mathematics, Dover Publications, 1999.
[12] Korneichuk N.P. Exact Constants in Approximation Theory. Moskva: Nauka, 1987; English transl.
in Encyclopedia Math. Appl. 38, Cambridge Univ. Press, Cambridge 1991.
[13] Lakshmikantham, V., Mohapatra R.N. Theory of fuzzy differential equations and inclusions. Series
in Mathematical Analysis and Applications. Taylor and Francis Inc., 2003.
[14] Linz, P. Analytical and Numerical methods for Volterra equations. SIAM, Philadelphia, 1985.
[15] Plotnikov, V. A., Plotnikov, A. V., Vityuk, A. N. Differential equations with multivalued right-hand
side. Asymptotic methods. Astroprint, 1999.
[16] Plotnikov, A. V., Skripnik, N. V. Existence and Uniqueness Theorem for Set-Valued Volterra Integral
Equations. American J. of Applied Math. and Stat., Vol. 1, No. 3 (2013), 41-45.
[17] Plotnikov, A. V., Skripnik, N. V. Existence and Uniqueness Theorem for Set Volterra Integral Equations. JARDCS. Vol. 6, 3 (2014), 1-7.
[18] Da Prato, G., Iannelli, M. Volterra integrodifferential equations in Banach spaces and applications.
Pitman research notes in mathematics series. Longman Scientific and Technical, 1989.
[19] Tişe, I. Set integral equations in metric spaces. Math. Morav. 13 (1) (2009), 95-102.
[20] Vahrameev, S.A. Integration in L-spaces. Book: Applied Mathematics and Mathematical Software
of Computers, M.: MSU Publisher, (1980) 45-47 (Russian).
Figure 1: The Hausdorff distance between the exact solution and the approximate solution.
Figure 2: Examples of the quadrature and collocation algorithms for Volterra equations. (a) Example 2, kernel K(t, s) = ts. (b) Example 3, kernel K(t, s) = e^{−s}.
Figure 3: Examples of the collocation algorithm for Fredholm equations (logarithmic scale for the y-axis). (a) Example 4, kernel K(t, s) = e^{−(t+s)}; error for 8, 16, 32 and 64 knots. (b) Example 5, kernel K(t, s) = s sin^4(3t); error for 8, 16 and 32 knots.