AN ABSTRACT OF THE THESIS OF

WALTER ARTHUR YUNGEN for the Master of Science in Mathematics
        (Name)                    (Degree)          (Major)
presented on (Date)

Title: RIGOROUS COMPUTER INVERSION OF SOME LINEAR OPERATORS

Abstract approved: Redacted for privacy (Dr. Joel Davis)

In this thesis we consider computer techniques for inverting n × n matrices and linear Fredholm integral operators of the second kind. We develop techniques which allow us to prove the existence of and find approximations to inverses for the above types of operators. In addition, we are able to bound rigorously the error in the approximations. These techniques were implemented in the form of computer programs and some numerical results are given.
Rigorous Computer Inversion of Some Linear Operators

by
Walter Arthur Yungen

A THESIS
submitted to
Oregon State University

in partial fulfillment of
the requirements for the
degree of
Master of Science

June 1968
APPROVED:

Redacted for privacy
Assistant Professor of Mathematics
In Charge of Major

Redacted for privacy
Chairman of Department of Mathematics

Redacted for privacy
Dean of Graduate School

Date thesis is presented

Typed by Carol Baker for Walter Arthur Yungen
ACKNOWLEDGEMENT

I am indebted to my major professor, Dr. Joel Davis, for the considerable interest and encouragement given to me during the preparation of this thesis. Also, I wish to thank Dr. A. T. Lonseth, chairman of the Department of Mathematics, for his interest and support. This work was supported in part by the A.E.C. under contract No. A.T.(45-1)-1947. Finally, I express my appreciation to my wife, whose patience, encouragement, and support have been invaluable.
TABLE OF CONTENTS

                                                              Page
I.    INTRODUCTION                                               1
II.   NOTATION AND BACKGROUND                                    3
III.  ROUNDOFF ERROR                                            11
IV.   FINITE DIMENSIONAL LINEAR OPERATOR INVERSION              18
V.    FREDHOLM INTEGRAL OPERATOR INVERSION - PART I             28
VI.   FREDHOLM INTEGRAL OPERATOR INVERSION - PART II            40
VII.  SUMMARY                                                   48
      BIBLIOGRAPHY                                              51
RIGOROUS COMPUTER INVERSION OF SOME LINEAR OPERATORS

I. INTRODUCTION

In this thesis we consider rigorous computer inversion of two families of linear operators, finite dimensional linear operators (i.e. matrices) and linear Fredholm integral operators of the second kind. Our goal is to determine the existence of an inverse and to find an approximation to the inverse with rigorous error bounds on the results.

For each family, we discuss a method of inversion, including a test for existence of the inverse. Also, we find rigorous error bounds on the results, including both the truncation error of the method and the roundoff error occurring in the computations. Rounding error is treated in detail, beginning with bounds for the error occurring in each arithmetic operation performed by the computer. In considering the truncation error for each method, we give only the needed results, drawing heavily from Faddeeva [6] and the work of Anselone and Moore [2].
In Chapter II, we introduce much of the basic notation and give some needed results from numerical analysis. We develop in Chapter III bounds for the roundoff error in each arithmetic operation to be used and apply the results to bounding the error in such compound operations as finding inner products. In Chapter IV we give a method for inversion of n × n matrices which includes a test for existence of the inverse and also yields a bound on the truncation error. We also apply the results of Chapter III for bounding the roundoff error in order to give rigorous error bounds on the results. In Chapters V and VI we obtain approximate solutions with rigorous error bounds for linear Fredholm integral equations of the second kind. We also discuss some of the practical problems of implementing this method.

Using the methods discussed in this thesis, we have written computer programs that determine the existence of inverses and find numerical solutions, including rigorous error bounds. Some results from these programs are given. Finally, possible improvements and extensions will be discussed briefly in Chapter VII.
II. NOTATION AND BACKGROUND

The purpose of this chapter is to provide a brief sketch of the notation, definitions, and results needed in later chapters. The two major topics here will be linear operators and numerical quadrature.

The following material on spaces and operators is essentially as presented in Kolmogorov and Fomin [9]. We will be concerned with complete normed linear spaces over the real number field, in particular, with the following two spaces:

(i) the space C[0,1] of continuous functions on [0,1] with norm

    ||x|| = Max {|x(t)| : 0 ≤ t ≤ 1},        (2.1)

and

(ii) Euclidean n-space, R^n, with norm

    ||x|| = Max {|x_i| : 1 ≤ i ≤ n}.        (2.2)

Using the metric ρ(x,y) = ||x - y||, the above spaces can be considered metric spaces with the resultant concepts of convergence and completeness. Now we can make the following definitions.
Definition 2.1
A Banach (or B-) space is a complete normed linear space.

It can be shown that the above examples of spaces are complete and hence Banach spaces (see [9]). From this point on, let S, T denote Banach spaces.

Definition 2.2
A linear operator (linear function) is a function K : S → T having the following property:

    K(s + as') = K(s) + aK(s')  for all  s, s' ∈ S  and  a ∈ R.        (2.3)

Henceforth, we let K and K_n (n = 1, 2, 3, ...) denote linear operators mapping S into T.

Definition 2.3
A bounded linear operator K is a linear operator with the property that there exists a constant M such that

    ||Ks|| ≤ M ||s||  for all  s ∈ S.        (2.4)

It should be noted that for linear operators, continuity and boundedness are equivalent (see [9]).
Definition 2.4
The norm ||K|| of an operator K is the greatest lower bound of the numbers M satisfying (2.4), or equivalently,

    ||K|| = Sup {||Ks|| / ||s|| : s ∈ S, s ≠ 0}.        (2.5)

It is easily seen that norms of linear operators have the following properties:

(i)   ||Ks|| ≤ ||K|| ||s||,  s ∈ S,

(ii)  ||K1 K2|| ≤ ||K1|| ||K2||,  and

(iii) ||K1 + K2|| ≤ ||K1|| + ||K2||.
Finally we consider inverses.

Definition 2.5
The linear operator K is said to have an inverse if, for every t ∈ T, the equation

    Ks = t        (2.6)

has a unique solution s ∈ S, i.e. if K is a 1-1, onto operator. The operator which takes each t ∈ T into the respective solution s of (2.6) is called the inverse of K, or K^{-1}.

In Banach spaces, the operator K^{-1}, if it exists, has the following properties [9]:

(i)  K^{-1} is linear, and

(ii) if K is bounded, then K^{-1} is bounded.

We conclude our discussion of operators with the following theorem, which is a special case of a more general theorem proven in [9].
Theorem 2.1
Let I be the identity operator and K1 be any bounded operator such that

    ||K1|| < 1.        (2.7)

Then the operator K = I + K1 has an inverse.

In Chapters IV and V we will consider particular linear operators on R^n and C[0,1].

We now want to consider the approximation of a definite integral by a weighted sum over the interval, i.e.

    ∫_0^1 x(t) dt ≈ Σ_{i=1}^n w_ni x(t_ni),  n ≥ 1,        (2.8)

where the t_ni are abscissas belonging to the interval [0,1] and the w_ni are real weights associated with the t_ni. The notation indicates that the w_ni and t_ni depend upon the number n of abscissas used. The formula used in a particular case may depend upon considerations such as the form of the function x, the accuracy required, and the computational tool available (i.e. calculator, computer, etc.).

In later work we will require that the error in (2.8) converge to zero as n becomes large. This brings up the question: under what conditions will this be true? The following theorem from Berezin and Zhidkov [3] will help answer that.

Theorem 2.2
The necessary and sufficient conditions for

    Σ_{i=1}^n w_ni x(t_ni) → ∫_0^1 x(t) dt  as  n → ∞,  x ∈ C[0,1],        (2.9)

are that this occurs for any polynomial and that

    Σ_{i=1}^n |w_ni| ≤ M < +∞,  n ≥ 1.        (2.10)

For a proof of this theorem see [3]. The second condition is automatically satisfied when the w_ni are positive.
Now let us examine some particular formulas that have the simplifying feature of equal weights for all abscissas. This feature is helpful in the analysis of roundoff error in applying the formula.

The first to be considered is the repeated midpoint rule on the interval [0,1]. This formula has abscissas t_ni = (2i-1)/2n and weights w_ni = 1/n, i = 1, 2, ..., n. For functions having a continuous first derivative satisfying |x'(t)| ≤ L1 for 0 ≤ t ≤ 1, it can be shown that

    | ∫_0^1 x(t) dt - (1/n) Σ_{i=1}^n x(t_ni) | ≤ L1/4n.        (2.11)

Also, it can be shown that L1 can be replaced by a Lipschitz constant for x, i.e. a number L such that

    |x(s) - x(t)| ≤ L|s - t|,  s, t ∈ S.        (2.12)

For functions having a continuous second derivative satisfying |x''(t)| ≤ L2 for 0 ≤ t ≤ 1, we can find that

    | ∫_0^1 x(t) dt - (1/n) Σ_{i=1}^n x(t_ni) | ≤ L2/24n².        (2.13)

As in the previous case, we can replace L2 by a Lipschitz constant for x'. Krylov [11] shows that (2.11) is the minimum error bound attainable under the conditions specified with any abscissas and weights.
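As a quick numerical illustration (ours, not part of the thesis programs), the repeated midpoint rule and the bounds (2.11) and (2.13) can be checked in a few lines; the integrand x(t) = e^t is a hypothetical choice with L1 = L2 = e on [0,1]:

```python
import math

def repeated_midpoint(x, n):
    """Repeated midpoint rule on [0, 1]: abscissas t_ni = (2i - 1)/2n,
    equal weights w_ni = 1/n."""
    return sum(x((2 * i - 1) / (2 * n)) for i in range(1, n + 1)) / n

n = 100
approx = repeated_midpoint(math.exp, n)
exact = math.e - 1            # integral of e^t over [0, 1]

# Bound (2.11) with L1 = e, and the sharper bound (2.13) with L2 = e.
assert abs(exact - approx) <= math.e / (4 * n)
assert abs(exact - approx) <= math.e / (24 * n * n)
```

The second assertion shows how much sharper (2.13) is when a second derivative is available.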
The other formula to be considered is due to Chebychev. Again, this formula has equal weights. The formula exists only for n = 1, 2, ..., 7, 9. It takes the form

    ∫_{-1}^1 x(t) dt = (2/n) Σ_{i=1}^n x(t_ni) + E_n,        (2.14)

where

    E_n = C_n x^(n+1)(ξ)/(n+1)!  for n odd, and
    E_n = C_n x^(n+2)(ξ)/(n+2)!  for n even,  ξ ∈ [-1,1].

The abscissas t_ni and constants C_n, n ≤ 6, along with a thorough discussion of this formula, are given in Hildebrand [8]. Although this formula lacks the convenience of existing for all n, it can be generalized by subdividing the interval and applying the formula for n = 1, 2, ..., 7, or 9 to each subdivision. In later work we use the Chebychev 5 point rule repeated r times on the interval [0,1]. This has the form

    ∫_0^1 f(s) ds = (1/m) Σ_{j=1}^r Σ_{i=1}^5 f(t_ij) + E_m,        (2.15)
where

    E_m = 40,625 f^(6)(ξ) / (13,934,592 m^6),  ξ ∈ (0,1),        (2.16)

    t_ij = (t_5i + 2j - 1)/2r,  1 ≤ i ≤ 5,  1 ≤ j ≤ r,  and  m = 5r.

In some cases we do not have a sufficient number of derivatives to use the above error term. If we assume that the function f to be integrated has a Lipschitz constant L on [0,1], we can show that the error term E_m in (2.15) satisfies

    |E_m| ≤ .254 L/m.        (2.17)

We note that this is only slightly different from the bound in (2.11). From the given error terms and the fact that derivatives of polynomials are bounded, it can be seen that the conditions of Theorem 2.2 are satisfied by the above formulas. This concludes our discussion of quadrature formulas.
III. ROUNDOFF ERROR

In this chapter we consider the rounding errors that occur in the arithmetic operations performed by the computer (CDC 3300) and methods of accounting for them in the solution of problems. Much additional general information on rounding errors can be found in Wilkinson [16]. Two general methods will be given for producing answers and their associated error bounds. The first method is that of bounding the error in each operation and keeping a tabulation of the accumulated error at each stage in the computation. The second method involves the use of automatic interval arithmetic. In the first method we will develop bounds for each operation used and then show how these errors propagate.

We begin with a brief discussion of the floating point hardware characteristics for the CDC 3300, which henceforth will be called the computer or the machine. The following information on hardware, and more, can be found in [4] and [5]; however, no discussion of rounding errors is given. This computer is a binary machine having 24 bit words. The internal representation of a floating point number requires two words, with a 36 bit coefficient, an 11 bit exponent, and a sign bit. Several facts are pertinent to our later analysis: (i) the coefficient is normalized, 1/2 ≤ coef. < 1, (ii) rounding takes place before normalization, and (iii) truncation occurs during normalization. We denote the rounding error by E1, the truncation error by E2, the total error by E, and the machine version of an exact quantity Q by Q̃.
Let us now consider the operation of addition performed on the numbers A = a 2^p and B = b 2^q, under the assumptions that A, B ≠ 0, 1/2 ≤ |a|, |b| < 1, and d = q - p ≥ 0. We examine the error by cases determined by the quantity d.

Case d = 0: Since there is no need for shifting to equalize exponents, there will be no rounding error (i.e. E1 = 0). If the signs of A and B differ, there will be no normalization error (i.e. E2 = 0). If the signs agree, depending on whether the bit shifted off in normalization is 0 or 1, E2 = 0 or E2 = -(sign A) 2^{-36+q}. Thus the error bound for this case is

    |E| = |E2| ≤ 2^{-36+q} ≤ 2^{-35}|B|.

Case d = 1: Since A must be shifted 1 bit to equalize exponents, E1 = 0 or E1 = (sign A) 2^{-37+q}. Again, if AB < 0, E2 = 0; but if AB > 0, E2 = 0 or E2 = -(sign A) 2^{-36+q}. Thus the error bound for this case is

    |E| = |E1 + E2| ≤ 2^{-35}|B|.

Case d ≥ 2: Since A must be shifted d bits, E1 = (sign A) e 2^{-37+q} with |e| ≤ 1. Again, if AB < 0, E2 = 0; but if AB > 0, |E2| ≤ 2^{-36+q}. Thus for this case it can be shown that

    |E| = |E1 + E2| ≤ 2^{-35}|B|.

In general, we can prove that the error in the addition of A and B satisfies

    |E| ≤ 2^{-35} Max {|A|, |B|}.        (3.1)
The same result holds true for subtraction.
For the operation of multiplication on the above A and B, we have that AB = ab 2^{p+q}, with 1/4 ≤ |ab| < 1. It can be seen that |E1| ≤ 2^{-37+p+q} and E2 = 0. Thus the error bound for multiplication satisfies

    |E| ≤ 2^{-35}|AB|.        (3.2)

Similarly, it can be shown that the error bound for division satisfies

    |E| ≤ 2^{-35}|A/B|.        (3.3)
In certain key places in calculations it is sometimes advantageous to use a higher precision arithmetic. Hence we comment also on the CDC 3300 triple precision floating point, DFP(3). The internal representation requires three words, with 47 bits for the coefficient, 24 bits for the exponents, and a sign bit. DFP(3) uses a normalized coefficient, 1/2 ≤ coef. < 1, but has no rounding feature. For additional information see [4]. By an analysis similar to that for floating point, without rounding, one can find the following bounds for the operations:

    Addition (Subtraction)      |E| ≤ 2^{-45} Max {|A|, |B|},        (3.4)

    Multiplication (Division)   |E| ≤ 2^{-45} |AB| (or |A/B|).        (3.5)

In conversion from DFP(3) to floating point there is no rounding, hence the error bound for conversion of the quantity A satisfies

    |E| ≤ 2^{-35}|A|.

Now let us apply these results to the problem of bounding the error in a compound operation. Let A and B be numbers which are subject to error and let e(Q) denote the absolute value of the error in the quantity Q. In the following analysis we will use the following rules, which are easily found using the methods of Wilkinson [16] and our previous work:

(i)   e(A ± B) ≤ e(A) + e(B) + 2^{-35} Max {|A|, |B|},

(ii)  e(AB) ≤ |A| e(B) + |B| e(A) + e(A) e(B) + 2^{-35}|AB|,        (3.6)

(iii) e(A/B) ≤ {e(A) + |A| e(B)/|B|} / {|B| - e(B)} + 2^{-35}|A/B|,

assuming that |B| > e(B).

We consider first the problem of bounding the error in an inner product calculation. Let H = {h_i} and R = {r_i} be n-dimensional vectors stored in the computer. Let Q_i be the machine value of h_i r_i, S_i be the machine value of the partial sum, and e(S_i) be an error bound on the partial sum. Then we find

    e(S_1) = 2^{-35}|Q_1|,

    e(S_i) ≤ e(S_{i-1}) + 2^{-35}|Q_i| + 2^{-35} Max {|S_{i-1}|, |Q_i|},  1 < i ≤ n,

and hence

    e(S_n) ≤ 2^{-35} Σ_{i=1}^n |Q_i| + 2^{-35} Σ_{i=1}^{n-1} Max {|S_i|, |Q_{i+1}|}.        (3.7)
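The propagation rules (3.6) and the inner product bound (3.7) are mechanical enough to sketch directly. The following is a minimal illustration of that bookkeeping (the function names are ours, and 2^-35 plays the role of the CDC 3300 unit roundoff; the sketch runs in ordinary double precision, so it only mimics the accounting, not the machine itself):

```python
U = 2.0 ** -35   # stands in for the CDC 3300 single precision unit roundoff

def e_add(A, B, eA, eB):
    """Rule (i) of (3.6): error bound for a machine addition/subtraction
    of quantities A and B that carry error bounds eA and eB."""
    return eA + eB + U * max(abs(A), abs(B))

def e_mul(A, B, eA, eB):
    """Rule (ii) of (3.6)."""
    return abs(A) * eB + abs(B) * eA + eA * eB + U * abs(A * B)

def e_div(A, B, eA, eB):
    """Rule (iii) of (3.6); requires |B| > e(B)."""
    assert abs(B) > eB
    return (eA + abs(A) * eB / abs(B)) / (abs(B) - eB) + U * abs(A / B)

def inner_product_bound(h, r):
    """Bound (3.7) for the inner product of exactly stored vectors."""
    q = [hi * ri for hi, ri in zip(h, r)]
    s, bound = q[0], U * abs(q[0])                 # e(S_1) = 2^-35 |Q_1|
    for qi in q[1:]:
        bound += U * abs(qi) + U * max(abs(s), abs(qi))
        s += qi
    return bound
```

For example, `inner_product_bound([1.0, 1.0], [1.0, 1.0])` returns 3·2^-35: one term for each product and one for the single addition.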
A quantity needed in Chapter IV is a bound on the maximum row sum of an n × n matrix P = {p_ij}, where the p_ij are subject to error e(p_ij). For row i, 1 ≤ i ≤ n, let S_k^(i) = Σ_{j=1}^k p_ij be the machine value of the partial sum and e(S_k^(i)) be the associated error bound. Then we have, similar to above,

    e(S_1^(i)) = e(p_i1),

    e(S_k^(i)) ≤ e(S_{k-1}^(i)) + e(p_ik) + 2^{-35} Max {|S_{k-1}^(i)|, |p_ik|},  1 < k ≤ n,

and hence

    e(S_n^(i)) ≤ Σ_{j=1}^n e(p_ij) + 2^{-35} Σ_{k=1}^{n-1} Max {|S_k^(i)|, |p_i,k+1|}.        (3.8)

Hence we can then write for the maximum row sum Q,

    Q ≤ Max {S_n^(i) : 1 ≤ i ≤ n} + Max {e(S_n^(i)) : 1 ≤ i ≤ n}.        (3.9)

We note that the error in calculating error bounds must be accounted for in order to keep the bounds rigorous. Conveniently, multiplication by powers of 2 on positive quantities results in answers that are at least as large as the exact product. Also, addition of positive quantities may be adjusted to preserve rigorous bounds by forcing a round up at each step of the error calculation.
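A sketch of the maximum row sum bound (3.8)-(3.9), under the same assumptions as the previous sketch (hypothetical helper names, with 2^-35 again standing in for the machine's unit roundoff):

```python
U = 2.0 ** -35

def max_row_sum_bound(P, eP):
    """Rigorous upper bound Q of (3.9) on the maximum row sum of a
    matrix P = {p_ij} whose elements carry error bounds eP = {e(p_ij)}."""
    sums, errs = [], []
    for row, erow in zip(P, eP):
        s, e = row[0], erow[0]                   # e(S_1) = e(p_i1)
        for p, ep in zip(row[1:], erow[1:]):
            e += ep + U * max(abs(s), abs(p))    # recurrence before (3.8)
            s += p
        sums.append(s)
        errs.append(e)
    return max(sums) + max(errs)                 # (3.9)

Q = max_row_sum_bound([[1.0, 1.0], [0.5, 0.25]], [[0.0, 0.0], [0.0, 0.0]])
assert Q >= 2.0     # the true maximum row sum is 2
```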
The second method of bounding the error in a series of operations involves the use of interval arithmetic, as discussed in the work of R. E. Moore [12]. This method uses a pair of numbers A = [a,b] to represent the lower and upper bounds, respectively, of a compact interval containing an exact but unknown number c. For any arithmetic operation o and intervals A, B we define the compact interval A o B by

    A o B = {c o d : c ∈ A, d ∈ B}.

At each stage of a computation the partial result is an interval guaranteed to contain the answer at that stage. A set of interval arithmetic operations has been implemented for the CDC 3300 at the OSU Computer Center. For details see Computer Center Report 67-1 [14]. These are easy to use and allow production of rigorous interval answers without significant complication of a program. Although the interval method is much easier to implement, both of the methods discussed here have been found to be practical ways of producing rigorous error bounds on computed quantities.
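The interval idea can be sketched in a few lines. This toy version (ours, not the OSU routines of [14]) uses exact rational endpoints in place of outwardly rounded machine endpoints, which sidesteps the rounding-direction issue entirely:

```python
from fractions import Fraction

class Interval:
    """A pair [lo, hi] guaranteed to contain the exact result; each
    operation implements A o B = {c o d : c in A, d in B}."""
    def __init__(self, lo, hi=None):
        self.lo = Fraction(lo)
        self.hi = Fraction(hi if hi is not None else lo)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= Fraction(x) <= self.hi

a, b = Interval(1, 2), Interval(-1, 3)
assert 5 in a * b and -2 in a * b    # a * b = [-2, 6]
```

A real implementation would round each lower endpoint down and each upper endpoint up in machine arithmetic so the enclosure survives roundoff.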
IV. FINITE DIMENSIONAL LINEAR OPERATOR INVERSION

In this chapter we consider linear operators over finite dimensional normed linear spaces. Suppose we have such a space S with scalar field R and basis e_1, e_2, ..., e_n. As noted before, we will primarily be interested in the space R^n with norm (2.2) and the usual basis

    {e_j = (δ_1j, δ_2j, ..., δ_nj) : 1 ≤ j ≤ n}.

Consider the linear operator K on S and the elements Ke_i, 1 ≤ i ≤ n, of S. Let us write these in terms of the original basis,

    Ke_1 = a_11 e_1 + a_21 e_2 + ... + a_n1 e_n,
    Ke_2 = a_12 e_1 + a_22 e_2 + ... + a_n2 e_n,        (4.1)
    ...
    Ke_n = a_1n e_1 + a_2n e_2 + ... + a_nn e_n,

with a_ij ∈ R.
Now consider the n × n matrix

    A = | a_11  a_12  ...  a_1n |
        | a_21  a_22  ...  a_2n |        (4.2)
        | ...                   |
        | a_n1  a_n2  ...  a_nn |

It is shown in Faddeeva [6] that this matrix uniquely defines the linear operator K, and each such n × n matrix defines a linear operator. Thus we will consider the problem of inversion of finite dimensional linear operators synonymous with the problem of inversion of n × n matrices.

Recalling Definition 2.4, it can be seen that the choice of vector norm induces a corresponding choice of norm for matrices. In Faddeeva [6] it is shown that the norm (2.2) for vectors induces the following norm on matrices:

    ||A|| = Max { Σ_{j=1}^n |a_ij| : 1 ≤ i ≤ n }.        (4.3)
Definition 2.5 yields a definition for the inverse of a matrix. We shall now discuss finding the inverse of a matrix by the Hotelling Method. This method is discussed in Faddeeva [6] under the heading "Correction of the Elements of an Inverse." This is an iterative method which easily gives a measure of the accuracy of the result.

Let us consider a matrix A for which we want the inverse A^{-1}, and let B_0 be an approximation to A^{-1}. The first step is to form the matrix

    S_0 = I - A B_0.        (4.4)

Since B_0 ≈ A^{-1}, it is reasonable to assume ||S_0|| < 1. Now if we write (4.4) as

    A B_0 = I - S_0        (4.5)

and apply Theorem 2.1, we see that ||S_0|| < 1 implies that A B_0 has an inverse. For matrices this is sufficient to guarantee the existence of A^{-1}. Then from

    A (B_0 (A B_0)^{-1}) = (A B_0)(A B_0)^{-1} = I        (4.6)

we see that A has a right inverse, B_0 (A B_0)^{-1} = A^{-1}. The following method will result in a series of approximations to A^{-1}. Form the sequence of matrices

    B_i = B_{i-1}(I + S_{i-1}),  S_i = I - A B_i,  1 ≤ i ≤ j.        (4.7)

It is shown in Faddeeva [6] that

    S_j = S_0^{2^j},        (4.8)

and hence, by the assumption ||S_0|| < 1, B_j → A^{-1}. From this Faddeeva [6] derives

    ||B_j - A^{-1}|| ≤ ||B_0|| ||S_0||^{2^j} / (1 - ||S_0||).        (4.9)

Hence, theoretically, we can approximate A^{-1} to any degree of accuracy desired.
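A minimal sketch of the Hotelling iteration (4.7) with the norm (4.3), in ordinary double precision (the test matrix is a hypothetical example of ours; note that (4.9) holds in exact arithmetic, so a small roundoff allowance is added below, and closing exactly that gap rigorously is the business of the rest of this chapter):

```python
import numpy as np

def matrix_norm(M):
    # maximum absolute row sum, the norm (4.3)
    return np.abs(M).sum(axis=1).max()

def hotelling(A, B0, j=6):
    """Iteration (4.7): B_i = B_{i-1}(I + S_{i-1}), S_i = I - A B_i.
    ||S_0|| < 1 is the existence test; returns B_j and the bound (4.9)."""
    I = np.eye(len(A))
    B = B0
    S = I - A @ B
    s0 = matrix_norm(S)
    assert s0 < 1, "existence test ||S_0|| < 1 fails"
    for _ in range(j):
        B = B @ (I + S)
        S = I - A @ B
    return B, matrix_norm(B0) * s0 ** (2 ** j) / (1 - s0)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B0 = np.array([[0.25, 0.0], [0.0, 1.0 / 3.0]])   # crude guess from the diagonal
B, bound = hotelling(A, B0)
# (4.9) plus a small allowance for double precision roundoff:
assert matrix_norm(B - np.linalg.inv(A)) <= bound + 1e-12
```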
Using the method described, it is possible, beginning with only a reasonable approximation, to conclude the existence of the inverse for a given matrix, get a better approximation to the inverse, and bound the discrepancy between the inverse and its approximation. In order to do these things we need rigorous bounds for ||B_0|| and ||S_0||. We consider B_0 to be exact, and from Chapter III we have a method for getting a rigorous bound for ||B_0||. It remains then to find a rigorous bound for ||S_0||, where S_0 is as defined in (4.4).

Observe that the calculation of each element of A B_0 is just an inner product calculation as discussed in Chapter III. Hence finding ||S_0|| is a simple application of techniques from a previous discussion. Recall that for I_ij = Σ_{ℓ=1}^n a_iℓ b_ℓj, we have an expression for the error

    e(I_ij) ≤ 2^{-35} { Σ_{ℓ=1}^n |a_iℓ b_ℓj| + Σ_{ℓ=1}^{n-1} Max { |Σ_{m=1}^ℓ a_im b_mj|, |a_i,ℓ+1 b_ℓ+1,j| } },        (4.10)

where A = {a_ij} and B_0 = {b_ij}. The elements of S_0 on the diagonal have additional error due to the modification by the identity matrix, hence

    e(1 - I_ii) ≤ e(I_ii) + 2^{-35} Max {|I_ii|, 1}.        (4.11)

We now have a bound for the error in each element and can now apply the previously mentioned method to get a bound for ||S_0||. It remains only to combine these results rigorously using (4.9) to get a bound for ||B_j - A^{-1}||.
As mentioned in Chapter III, there is another approach to the problem of rigorous bounds, interval arithmetic. A technique for finding matrix inverses rigorously using both interval arithmetic and the Hotelling method is discussed by Hansen [7]. We will outline the method from that paper, giving little justification. The following definitions are needed:

Definition 4.1
An interval matrix, indicated by a superscript I, is a matrix of compact intervals.
We note that with each real matrix M = {m_ij} we can associate, in a natural way, an interval matrix M^I = {[m_ij, m_ij]}.

Definition 4.2
Consider N = {n_ij}, M^I = {[p_ij, q_ij]}, and T^I = {[u_ij, v_ij]}. We define the following relations:

(i)  N ∈ M^I  iff  n_ij ∈ [p_ij, q_ij]  for  1 ≤ i, j ≤ n,  and

(ii) M^I ⊂ T^I  iff  [p_ij, q_ij] ⊂ [u_ij, v_ij]  for  1 ≤ i, j ≤ n.
Definition 4.3
We define the inverse of A^I = {a_ij^I} by

    (A^I)^{-1} = {α_ij},        (4.12)

where

    α_ij = {b_ij : A^{-1} = {b_ij}, A ∈ A^I},        (4.13)

if A^{-1} is defined for all A ∈ A^I. It can be shown that the α_ij are compact intervals.
Definition 4.4
Consider M^I = {[p_ij, q_ij]}. We define the norm of M^I by

    ||M^I|| = Max {||M|| : M ∈ M^I},        (4.14)

or, equivalently,

    ||M^I|| = Max { Σ_{j=1}^n Max {|p_ij|, |q_ij|} : 1 ≤ i ≤ n }.        (4.15)
Definition 4.5
With M^I as above, the width W(M^I) of an interval matrix M^I is

    W(M^I) = Max {q_ij - p_ij : 1 ≤ i, j ≤ n}.

Definition 4.6
With M^I as above, we define the center M_c of M^I by

    M_c = {m_ij : m_ij = (p_ij + q_ij)/2}.        (4.16)
We are now ready to discuss the method given by Hansen [7] for approximating the inverse (A^I)^{-1}. Actually, it is impossible to find (A^I)^{-1} exactly because of roundoff error, but we can find a matrix C^I such that (A^I)^{-1} ⊂ C^I. First, find an approximation B to A^{-1}. Let I^I and B^I be the interval matrices associated with I and B. Calculate

    E^I = I^I - A^I B^I        (4.17)

and find a bound for ||E^I||, using (4.15). If ||E^I|| ≤ E_0 < 1, we are assured that the inverse (A^I)^{-1} exists and may proceed to find C^I. Let

    S^I = I^I + E^I + (E^I)² + ... + (E^I)^ℓ.        (4.18)
It can be shown that

    ||(I^I - E^I)^{-1} - S^I|| ≤ ||E^I||^{ℓ+1} / (1 - ||E^I||) = b(ℓ).        (4.19)

Let E^I = {[d_ij, e_ij]}, 1 ≤ i, j ≤ n. Choose ℓ so that

    b(ℓ) ≤ Min {e_ij - d_ij}        (4.20)

and calculate S^I using (4.18). Now define P^I to be the matrix having all elements equal to [-b(ℓ), b(ℓ)] and calculate

    C^I = B^I (S^I + P^I),        (4.21)

which satisfies (A^I)^{-1} ⊂ C^I.

As mentioned in [7], for actual computation it is best to rewrite (4.18) and (4.21) in the form

    G^I = E^I(I^I + E^I(I^I + ... + E^I) ... ) = S^I - I^I        (4.22)

and

    C^I = B^I + B^I(G^I + P^I).        (4.23)

Notice also that P^I need not be formed as a matrix but instead we may form G^I + P^I by direct modification of G^I. These techniques allow us to save space and get narrow, still rigorous, intervals.
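A compact sketch of (4.17)-(4.23) follows (ours, with exact rational endpoints standing in for the outwardly rounded machine intervals of [14], and small helper names of our own invention):

```python
from fractions import Fraction as F

def iv(x):      return (F(x), F(x))            # degenerate interval [x, x]
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def ineg(a):    return (-a[1], -a[0])
def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def mat(op, X, Y):     # elementwise matrix operation
    return [[op(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_mul(X, Y):
    n = len(X)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = iv(0)
            for k in range(n):
                acc = iadd(acc, imul(X[i][k], Y[k][j]))
            row.append(acc)
        out.append(row)
    return out

def norm(X):   # the interval matrix norm (4.15)
    return max(sum(max(abs(lo), abs(hi)) for lo, hi in row) for row in X)

def hansen_inverse(AI, B, l=10):
    """Returns C^I with (A^I)^{-1} contained in C^I, via (4.17)-(4.23)."""
    n = len(AI)
    II = [[iv(int(i == j)) for j in range(n)] for i in range(n)]
    BI = [[iv(x) for x in row] for row in B]
    EI = mat(iadd, II, [[ineg(e) for e in row] for row in mat_mul(AI, BI)])  # (4.17)
    e0 = norm(EI)
    assert e0 < 1, "existence test fails"
    GI = EI                                  # G^I = E^I + ... + (E^I)^l, as in (4.22)
    for _ in range(l - 1):
        GI = mat_mul(EI, mat(iadd, II, GI))
    b = e0 ** (l + 1) / (1 - e0)             # b(l) of (4.19)
    PI = [[(-b, b)] * n for _ in range(n)]
    return mat(iadd, BI, mat_mul(BI, mat(iadd, GI, PI)))  # (4.23)

AI = [[iv(2), iv(0)], [iv(0), iv(4)]]                    # hypothetical test matrix
CI = hansen_inverse(AI, [[F(1, 2), F(0)], [F(0), F('0.3')]])
lo, hi = CI[1][1]
assert lo <= F(1, 4) <= hi    # the true entry 1/4 of A^{-1} is enclosed
```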
We conclude this chapter with some comments on the precision of the methods outlined above. If we perform the calculation of (4.7) in the form

    B_i = B_{i-1} + B_{i-1} S_{i-1},        (4.24)

it is apparent that we are just making a correction to our current approximation. This correction should become smaller with further iteration. Experience indicates that eventually the order of magnitude of the corrections approaches that of roundoff error and further refinement is impossible. Obviously, it is advantageous for us to calculate S_i as accurately as possible, using higher precision arithmetic if available. In light of the above observation, we make the rule that iteration should be stopped if

    ||S_i|| ≥ ||S_{i-1}||.        (4.25)

We briefly consider the width W(C^I) of C^I in (4.23). Part of W(C^I) is caused by W(G^I) and W(P^I). Also, the operations of addition and multiplication contribute to W(C^I). Our experience indicates that W(G^I) and W(P^I) are the primary contributors to W(C^I). With the proper choice of ℓ in (4.20) we can minimize W(P^I). Also, both W(G^I) and W(P^I) are affected by the width of E^I. Hence, we find that it is advantageous to calculate E^I in a manner that introduces as little interval width as possible and still preserves rigor. Here it would be beneficial to use higher precision interval arithmetic. However, at the present time this is unavailable.
V. FREDHOLM INTEGRAL OPERATOR INVERSION - PART I

In this chapter we will consider inversion of the Fredholm integral operator of the second kind. Much of the notation and results in this chapter are from [1] and [2]. Let V be the interval [0,1]. We consider the Fredholm integral equation of the second kind

    x(s) - ∫_0^1 k(s,t) x(t) dt = y(s),  s ∈ V,        (5.1)

where x, y, and k are continuous on V, V, and V × V respectively and Riemann integration is used.

Definition 5.1
Define the integral operator K : C(V) → C(V) by

    (Kf)(s) = ∫_0^1 k(s,t) f(t) dt,  s ∈ V,  f ∈ C(V).        (5.2)

We will require that k be continuous on V × V as above and, in addition, that there exists a real Q satisfying

    Max {|k(s,t)| : s, t ∈ V} ≤ Q.        (5.3)

Then, using the norm

    ||f|| = Max {|f(s)| : s ∈ V}        (5.4)

on C(V), the operator K is bounded,

    ||K|| = Sup {||Kf|| / ||f|| : f ≠ 0} ≤ Max { ∫_0^1 |k(s,t)| dt : s ∈ V } ≤ Q.        (5.5)

We can now write (5.1) in operator form

    (I - K)x = y.        (5.6)

On the computer, we will use the following approximation for (5.1):

    x_n(s) - Σ_{i=1}^n w_ni k(s, t_ni) x_n(t_ni) = y(s),  n ≥ 1,  s ∈ V,        (5.7)

where the w_ni and t_ni are weights and abscissas for one of the quadrature formulas on V discussed in Chapter II.

Definition 5.2
Define the operator K_n : C(V) → C(V) by

    (K_n f)(s) = Σ_{i=1}^n w_ni k(s, t_ni) f(t_ni),  n ≥ 1,  s ∈ V,  f ∈ C(V).        (5.8)
We will require the w_ni and t_ni to satisfy

(i)  Σ_{i=1}^n w_ni f(t_ni) → ∫_0^1 f(t) dt  as  n → ∞,  f ∈ C(V),  and        (5.9)

(ii) Σ_{i=1}^n |w_ni| ≤ B < +∞,  n ≥ 1.

Then the operators {K_n} are uniformly bounded,

    ||K_n|| = Max { Σ_{i=1}^n |w_ni| |k(s, t_ni)| : s ∈ V } ≤ QB.        (5.10)

We can now write (5.7) as

    (I - K_n) x_n = y.        (5.11)

We are now prepared to discuss a method for inverting the operator (I - K_n) and getting particular solutions to (5.7). Anselone and Moore [2] discuss a method of solving for particular solutions of (5.7). They also give a test for the existence of a solution for (5.1) based on existence of a solution for (5.7), and they derive a bound for ||x - x_n||. We will discuss their justification of the method and their results. Later we will show how partial results of this method can be used to create in the computer an operator which approximates (I - K)^{-1}.

To solve (5.7), the first step is to solve the linear system

    x_n(t_nj) - Σ_{i=1}^n w_ni k(t_nj, t_ni) x_n(t_ni) = y(t_nj),  n ≥ 1,  1 ≤ j ≤ n.        (5.12)

If the solution {x_n(t_ni), 1 ≤ i ≤ n} exists, we can then get the solution x_n of (5.7) by

    x_n(s) = y(s) + Σ_{i=1}^n w_ni k(s, t_ni) x_n(t_ni),  n ≥ 1,  s ∈ V.        (5.13)

This solution x_n may or may not be a good approximation to the solution x of (5.1). Anselone and Moore [2] find the following results which justify the use of the method just presented. These results will be presented without proof.
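The two steps (5.12)-(5.13) are easy to exercise numerically. In this sketch (ours; the kernel k(s,t) = st/2 and y = 1 are hypothetical test data, chosen because (5.1) then has the closed-form solution x(s) = 1 + 3s/10), the repeated midpoint rule supplies the w_ni and t_ni:

```python
import numpy as np

def k(s, t):
    return 0.5 * s * t

n = 20
t = (2.0 * np.arange(1, n + 1) - 1) / (2 * n)   # midpoint abscissas
w = np.full(n, 1.0 / n)                          # equal weights w_ni = 1/n

# Step 1, the linear system (5.12): coefficient matrix I - {w_nj k(t_ni, t_nj)}.
M = np.eye(n) - w * k(t[:, None], t[None, :])
x_nodes = np.linalg.solve(M, np.ones(n))         # y(s) = 1 at the abscissas

# Step 2, formula (5.13) extends the node values to every s in V.
def x_n(s):
    return 1.0 + np.sum(w * k(s, t) * x_nodes)

# Exact solution of (5.1) for this kernel and y: x(s) = 1 + 0.3 s.
for s in (0.0, 0.5, 1.0):
    assert abs(x_n(s) - (1.0 + 0.3 * s)) < 1e-3
```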
Lemma 5.1
Let K, K_n, n ≥ 1, be the bounded linear operators given in Definitions 5.1 and 5.2. Then

    ||K_n f - Kf|| → 0  as  n → ∞,  f ∈ C(V);

however,

    ||K_n - K|| does not converge to 0 as n → ∞, if k(s,t) is not identically 0.        (5.14)

Theorem 5.2
Again, let K, K_n, n ≥ 1, be as above. Then

    ||K_n K - K²|| → 0  as  n → ∞.        (5.15)
The next theorem is very important as it provides a test for the existence of a solution for (5.1) given the results of solving (5.7). It is stated here without proof and in less generality than in [2].

Theorem 5.3
Let K, K_n, n ≥ 1, be as above. Suppose (I - K_n)^{-1} exists and satisfies

    ||K_n K - K²|| < 1 / ||(I - K_n)^{-1}||.        (5.16)

Then (I - K)^{-1} exists and

    ||(I - K)^{-1}|| ≤ {1 + ||(I - K_n)^{-1}|| ||K||} / {1 - ||(I - K_n)^{-1}|| ||K_n K - K²||}.        (5.17)

The question now arises, how close is the solution of (5.7) to the solution of (5.1)? The next theorem from [2] will help answer that question. Again, it is stated without proof and in less generality than in [2].
Theorem 5.4
Let K, K_n, n ≥ 1, be as above. Assume that (I - K_n)^{-1} and (I - K)^{-1} exist and that (5.6) and (5.11) hold. Then

    ||x - x_n|| ≤ ||(I - K_n)^{-1}|| { ||K_n y - Ky|| + ||K_n K - K²|| ||x|| },        (5.18)

and if (5.16) holds,

    ||x - x_n|| ≤ ||(I - K_n)^{-1}|| { ||K_n y - Ky|| + ||K_n K - K²|| ||x_n|| } / { 1 - ||K_n K - K²|| ||(I - K_n)^{-1}|| }        (5.19)

and

    ||x - x_n|| → 0  as  n → ∞.        (5.20)

From the results just presented we can, using information from solving (5.7), test for the existence of a solution to (5.1). If such a solution exists, we can also bound the quantity ||x - x_n||. We now consider a way of creating an operator in the computer which approximates (I - K)^{-1}. In the process of solving the linear system (5.12), let us define the matrix K_n = {m_ij}, where

    m_ij = w_nj k(t_ni, t_nj).        (5.21)
We can then write the solution of (5.12) as

    x_n = (I - K_n)^{-1} y_n,        (5.22)

where x_n = (x_n(t_n1), x_n(t_n2), ..., x_n(t_nn)) and y_n = (y(t_n1), y(t_n2), ..., y(t_nn)). Now let us give the following definitions from Anselone [1]:

Definition 5.3
Define the operator ψ_n : C(V) → R^n by

    ψ_n(f) = (f(t_n1), f(t_n2), ..., f(t_nn))  for  f ∈ C(V).        (5.23)

It is easily seen that ||ψ_n|| = 1.

Definition 5.4
Define the operator φ_n : R^n → C(V) by

    (φ_n v_n)(s) = Σ_{i=1}^n w_ni k(s, t_ni) v_n(t_ni)        (5.24)

for v_n = (v_n(t_n1), v_n(t_n2), ..., v_n(t_nn)) ∈ R^n. It can be shown that ||φ_n|| = ||K_n||. Also, it can be shown that

    (I - K_n)^{-1} = I + φ_n (I - K_n)^{-1} ψ_n,        (5.25)

where the inverse appearing in the middle term is the matrix inverse of (5.22).
From this it is obvious that

    ||(I - K_n)^{-1}|| ≤ 1 + ||K_n|| ||(I - K_n)^{-1}||,        (5.26)

where again the inverse on the right is the matrix inverse. Since the operators φ_n and ψ_n can be simulated in the computer, and we have the matrix (I - K_n)^{-1} during the solution of (5.7), we have in the computer the necessary information for forming an operator that approximates (I - K)^{-1}.
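A sketch of how (5.25) turns the stored matrix inverse into a computable approximation of (I - K)^{-1} (ours; the kernel is the same hypothetical test kernel as before, and `solve` stands for whatever routine applies the matrix inverse of (5.22)):

```python
import numpy as np

def make_resolvent(k, t, w, solve):
    """Operator of (5.25): I + phi_n (I - K_n)^{-1} psi_n, where the
    middle factor is the n x n matrix inverse of (5.22), applied by
    `solve`; psi_n samples y at the abscissas, and phi_n is (5.24)."""
    def apply(y, s):
        yn = np.array([y(tj) for tj in t])      # psi_n y
        vn = solve(yn)                          # (I - K_n)^{-1} on R^n
        return y(s) + np.dot(w * k(s, t), vn)   # the I + phi_n part, at s
    return apply

k_ = lambda s, t: 0.5 * s * t                   # hypothetical kernel again
n = 20
t = (2.0 * np.arange(1, n + 1) - 1) / (2 * n)   # midpoint abscissas
w = np.full(n, 1.0 / n)
M = np.eye(n) - w * k_(t[:, None], t[None, :])

R = make_resolvent(k_, t, w, lambda v: np.linalg.solve(M, v))
# For y = 1 the exact solution of (5.1) is x(s) = 1 + 0.3 s.
assert abs(R(lambda s: 1.0, 0.5) - 1.15) < 1e-3
```

The point of (5.25) is visible here: once M is factored, the same resolvent can be applied to any continuous y without solving a new system from scratch.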
In the preceding discussion we have ignored the roundoff error in the solution of (5.7). We now consider the problem of bounding it. The groundwork for this has been done in Chapters III and IV. From Chapter IV we get a bound for the roundoff and truncation error in inverting the matrix (I - K_n). This, combined with the analysis of the inner product calculation given in Chapter III, will yield a bound on the error in each x_n(t_ni), 1 ≤ i ≤ n.

We are ready to complete the calculation of an error bound on the solution to (5.7). As before we denote the machine version of an exact quantity Q by Q̃, and assume that the error in y(s), denoted by e(y(s)), and the error in k(s, t̃_ni), denoted by e(k(s, t̃_ni)), 1 ≤ i ≤ n, are bounded. Let

T_i = k̃(s, t̃_ni) x̃_n(t̃_ni)

for all 1 ≤ i ≤ n, recalling that w_ni = 1/n for the equal-weight formulas used. It follows from (3.6) that the error in x_n(s), denoted by e(x_n(s)), satisfies

e(x_n(s)) ≤ e(y(s)) + e((1/n) Σ_{i=1}^n T_i) + 2^{-35} Max{ |y(s)|, |(1/n) Σ_{i=1}^n T_i| },   (5.27)

where

e((1/n) Σ_{i=1}^n T_i) ≤ e(Σ_{i=1}^n T_i)/n + 2^{-35} |(1/n) Σ_{i=1}^n T_i|,   (5.28)

and e(Σ_{i=1}^n T_i) is evaluated as in the inner product analysis of Chapter III.
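In the spirit of that analysis, the running accumulation of a sum's rounding error can be sketched as follows. This is a hypothetical Python fragment: the rounding unit 2^-35 is the one used throughout the thesis, but the exact form of the thesis's inner-product bound may differ from this simplified version.

```python
# Hypothetical sketch of the rounding-error accumulation behind (5.27)-(5.28).
# Assumption: a rounding unit of 2^-35, as used throughout the thesis.
U = 2.0 ** -35

def sum_error_bound(terms, term_errors):
    """Bound e(sum T_i): combine the given per-term error bounds with a
    rounding contribution U*|partial sum| charged at each addition."""
    s, err = 0.0, 0.0
    for t, e in zip(terms, term_errors):
        s += t
        err += e + U * abs(s)     # rounding of this addition
    return err
```

For exact terms 1.0 summed ten times, the bound is U(1 + 2 + ... + 10) = 55·2^-35, which illustrates how the bound grows with the partial sums rather than with n alone.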
The result (5.27) and the result (5.19) can be combined to give a bound for the total discrepancy D(s) between the computed solution for (5.7) and the solution for (5.1). Thus if we have ||K_n y - Ky|| and ||K_n K - K^2||, we can test for the existence of a solution for (5.1) during the solution of (5.7) and find

D(s) ≤ ||x - x_n|| + e(x_n(s)).   (5.29)

If, in addition, we have ||y||, ||K_n||, and ||(I - K_n)^{-1}||, we can get e(x_n(s)) and put an interval around x̃_n(s) in which x(s) is guaranteed to lie. It is usually possible to obtain the quantities ||K_n y - Ky||, ||K_n K - K^2||, ||y||, and ||K_n||, and in the next chapter we will discuss problems related to obtaining them and implementing the method described.
In the method described above, the operator (I - K_n)^{-1} was used as an approximation for (I - K)^{-1}. Anselone and Moore [2] suggest that a more appropriate approximation to (I - K)^{-1} might be (I + (I - K_n)^{-1}K). In the following theorem we find a result analogous to Theorem 5.4.
Theorem 5.5. Let K and K_n, n ≥ 1, be the bounded linear operators of Definitions 5.1 and 5.2. Assume that (I - K)^{-1} and (I - K_n)^{-1} exist and that x = (I - K)^{-1}y and x_n = (I + (I - K_n)^{-1}K)y. Then

||x - x_n|| ≤ ||(I - K_n)^{-1}|| ||K_n K - K^2|| ||x||,   (5.30)

from which we get

||x - x_n|| → 0 as n → ∞.   (5.31)

Also, if (5.16) holds,

||x - x_n|| ≤ ||(I - K_n)^{-1}|| ||K_n K - K^2|| ||x_n|| / {1 - ||(I - K_n)^{-1}|| ||K_n K - K^2||},   (5.32)

and

||x - x_n|| → 0 as n → ∞.   (5.33)

Proof: This proof is quite similar to the proof given in [2] for the generalization of Theorem 5.4. Observe that

(I - K_n + K)(I - K) = I - K_n + K_n K - K^2.

Operating on this by (I - K_n)^{-1} on the left and then by (I - K)^{-1} on the right we get

(I + (I - K_n)^{-1}K) = (I - K)^{-1} + (I - K_n)^{-1}(K_n K - K^2)(I - K)^{-1}.

Hence

(I + (I - K_n)^{-1}K)y - (I - K)^{-1}y = (I - K_n)^{-1}(K_n K - K^2)(I - K)^{-1}y,

and

x_n - x = (I - K_n)^{-1}(K_n K - K^2)x,

from which we easily get (5.30). Using ||x|| ≤ ||x_n - x|| + ||x_n||, we also get (5.32). (5.31) and (5.33) can be seen from (5.30), (5.32), and (5.15).
If the quantity Ky can be found in some way, we can use the original approach of Anselone and Moore [2] to solve the equation

(I - K_n)x*_n = Ky   (5.34)

and get x_n by

x_n = y + x*_n.   (5.35)
In solving (5.34) the roundoff error considerations are exactly the same as given for the original method. Then we need only bound the additional error in (5.35) to get a complete error bound. It has been found experimentally that, if one can find Ky analytically, this alternate method gives better results. For cases where the exact solution was known, the computed solution by this method was closer to the exact solution than the computed solution by the original method. Also, the error bound D(s) in (5.29) was correspondingly smaller for this method.
VI.
FREDHOLM INTEGRAL OPERATOR INVERSION - PART II
In the previous chapter we considered, in a theoretical manner, several related methods for solving an approximation of (5.1) for particular solutions and bounding the truncation and rounding error. In addition, we found that we could construct an approximation to the operator (I - K)^{-1} in the computer. Here we consider some of the more practical aspects of converting the methods presented into workable programs. Also included will be some numerical results on two examples.
The ideal program would take the given functions y and k of (5.1) and produce numerical results, including rigorous error bounds, with no additional information. This, however, is very difficult or impossible to do because of problems arising in the bounding of the error. Bounding the roundoff error is easily accomplished using the results of Chapters III and IV. In Chapter V we have bounds on the truncation error in terms of the quantities ||y||, ||x_n||, ||K_n K - K^2||, ||K_n y - Ky||, and ||(I - K_n)^{-1}||. However, the calculation of these quantities requires some additional information.

Let us consider the problem of calculating these needed quantities. First, it is easily seen from (5.11) that

||x_n|| ≤ ||(I - K_n)^{-1}|| ||y||.   (6.1)
Using this and (5.26), the list of needed quantities becomes ||y||, ||K_n||, ||(I - K_n)^{-1}||, ||K_n y - Ky||, and ||K_n K - K^2||. The results of Chapter IV allow us to bound ||(I - K_n)^{-1}||. If we are given L1, L2 satisfying

|y(s) - y(t)| ≤ L1|s - t|,   0 ≤ s, t ≤ 1,   (6.2)

and

|k(u,s) - k(u,t)| ≤ L2|s - t|,   0 ≤ s, t, u ≤ 1,   (6.3)

we can determine

||y|| = Max{ |y(s)| : 0 ≤ s ≤ 1 }   (6.4)

and

||K_n|| = Max{ Σ_{i=1}^n |w_ni| |k(s, t_ni)| : 0 ≤ s ≤ 1 }.   (6.5)
This is done by evaluating the quantities to be maximized on an equally spaced grid {g_i : 1 ≤ i ≤ m}, with spacing δ, and computing

||y|| ≤ Max{ |y(g_i)| : 1 ≤ i ≤ m } + L1 δ/2   (6.6)

and

||K_n|| ≤ Max{ Σ_{i=1}^n |w_ni| |k(g_j, t_ni)| : 1 ≤ j ≤ m } + L2 δ/2.   (6.7)
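A minimal sketch of the bound (6.6), assuming the Lipschitz constant L1 has been supplied by prior analysis; the grid size and the test function below are ours, and roundoff in evaluating y is ignored here.

```python
import numpy as np

def sup_norm_bound(y, L1, m):
    """Rigorous bound for ||y|| in the manner of (6.6): every point of [0,1]
    lies within delta/2 of the equally spaced grid, so the Lipschitz term
    L1*delta/2 covers the intervals between grid points."""
    g = np.linspace(0.0, 1.0, m)   # equally spaced grid g_1, ..., g_m
    delta = g[1] - g[0]            # grid spacing
    return np.max(np.abs(y(g))) + L1 * delta / 2

# For y(s) = sin s on [0,1] (Lipschitz constant 1) the bound is slightly
# above the true maximum sin(1) = 0.84147...
bound = sup_norm_bound(np.sin, 1.0, 1001)
```

The same pattern bounds ||K_n|| via (6.7), with the inner sum over the abscissas evaluated at each grid point.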
The quantities

||K_n y - Ky|| = Max{ |∫_0^1 k(s,t)y(t)dt - Σ_{i=1}^n w_ni k(s,t_ni)y(t_ni)| : 0 ≤ s ≤ 1 }   (6.8)

and

||K_n K - K^2|| = Max{ ∫_0^1 |∫_0^1 k(s,t)k(t,u)dt - Σ_{i=1}^n w_ni k(s,t_ni)k(t_ni,u)| du : 0 ≤ s ≤ 1 }   (6.9)

pose a bigger problem.
Quantities of the form

|∫_0^1 g(s,t)dt - Σ_{i=1}^n w_ni g(s,t_ni)|

can be bounded using an error term for the quadrature formula. This usually involves bounds on an appropriate derivative of g with respect to t. Although we might increase the bound, we can replace the outside integral in (6.9) with a maximum with respect to u, 0 ≤ u ≤ 1. We have now reduced the problem of finding ||K_n y - Ky|| and ||K_n K - K^2|| to that of finding maxima, which was treated above for ||y|| and ||K_n||. We note that with the amount of outside information needed, it may be easier to find the quantities analytically and treat them as input to the program.
From the above discussion it is clear that before rigorous bounds can be obtained there must be considerable prior analysis. Hence, for our program we require that ||y||, ||K_n||, ||K_n y - Ky||, and ||K_n K - K^2|| be supplied at the outset. If automation is more important than rigor, there is an alternative. While rigorous bounding of the above quantities may be a somewhat formidable task, it is easy to get estimates for them.
To do this for ||y|| and ||K_n|| we use the above method with a fine grid and ignore the intervals between grid points. For ||K_n y - Ky|| and ||K_n K - K^2|| we use (6.8) and (6.9), replacing the outside integral in (6.9) by a maximum with respect to u, 0 ≤ u ≤ 1, and approximate the remaining integrals by a high order quadrature formula. Here we can use a much higher order formula because we need not solve an n × n algebraic system as we do in the original approximation of (5.1). This procedure will yield reasonable estimates to the quantities in question, ||K_n y - Ky|| and ||K_n K - K^2||.

In an attempt to develop a truly useful tool, our program has been designed to give the choice of preparing rigorous bounds for the needed quantities and getting rigorous error bounds, or letting the program automatically estimate these and give reasonable estimates of the error. In addition, a choice of either of the two quadrature formulas mentioned in Chapter II is given for use in the solution of the problem.
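The estimation option can be sketched as follows. This hypothetical Python fragment stands in for the program's high order formula with a very fine midpoint rule as the reference, so it returns an estimate of (6.8), not a rigorous bound.

```python
import numpy as np

def estimate_trunc(k, y, n, m=201, nfine=4000):
    """Estimate ||K_n y - K y|| of (6.8) on a grid of s values, ignoring
    the intervals between grid points (so this is NOT a rigorous bound).
    Assumes the repeated midpoint rule for K_n; a very fine midpoint rule
    stands in for the exact integral."""
    t = (np.arange(n) + 0.5) / n              # abscissas of K_n
    tf = (np.arange(nfine) + 0.5) / nfine     # fine reference abscissas
    s = np.linspace(0.0, 1.0, m)[:, None]     # grid for the outside maximum
    Kn_y = (k(s, t[None, :]) * y(t)).sum(axis=1) / n
    K_y = (k(s, tf[None, :]) * y(tf)).sum(axis=1) / nfine
    return np.max(np.abs(Kn_y - K_y))

# For the kernel of (6.10) with n = 15 this gives an estimate on the
# order of 10^-4, well below the Lipschitz-based rigorous bound.
est = estimate_trunc(lambda s, t: 0.5 * np.exp(s - t),
                     lambda t: np.ones_like(t), 15)
```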
One final comment should be made. To retain rigor one must also consider errors which may take place in the input of information to the program. Very little information about such errors has been found. However, in the program an attempt has been made to bound any such errors in order to complete the analysis.
From Tricomi [15] we take the following example,

x(s) - .5 ∫_0^1 e^(s-t) x(t)dt = 1,   (6.10)

for which we know the solution x(s) = 1 + e^s - e^(s-1).
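Since the solution is stated without derivation, a quick numerical spot-check may be useful; this hypothetical Python fragment substitutes a very fine midpoint rule for the exact integral in (6.10).

```python
import numpy as np

# Spot-check that x(s) = 1 + e^s - e^(s-1) satisfies (6.10) at a few points,
# using a very fine midpoint rule in place of the exact integral.
x = lambda s: 1.0 + np.exp(s) - np.exp(s - 1.0)
tf = (np.arange(20000) + 0.5) / 20000
for s in (0.0, 0.3, 0.7, 1.0):
    lhs = x(s) - 0.5 * np.mean(np.exp(s - tf) * x(tf))
    assert abs(lhs - 1.0) < 1e-6
```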
Using the methods from the previous discussion or direct computation with the given definitions, we find the following values for the key quantities (using 15 abscissas in [0, 1]):

Quantity               Repeated Midpoint    Repeated 5-Point Chebychev
||K_n K - K^2||   ≤         0.0                      0.0
||K_n||           ≤         0.8592                   0.8592
||K_n y - Ky||    ≤         0.0230                   0.0002
||y||             ≤         1.0                      1.0
Using these values and also estimating them using the methods given above, our program gives us the following error bounds:

||D||                  Repeated Midpoint    Repeated Chebychev
rigorous          ≤       7.6 × 10^-2           6.6 × 10^-4
estimated         ≤       5.3 × 10^-4           3.1 × 10^-8
The roundoff error, in both cases, was on the order of 2.0 × 10^-8. This is negligible as compared with ||x - x_n||. The largest observed error, based on comparison with the known solution, was about 3.0 × 10^-4 using the repeated midpoint rule. For the repeated Chebychev rule, the answers were correctly rounded to the 5 significant digits printed. For the alternate method, using (Ky)(s) = .5 e^(s-1)(e - 1), ||Ky|| ≤ 0.8592, and the repeated midpoint rule, the program finds ||D|| ≤ 1.0 × 10^-8, and the answers were correctly rounded to the 5 significant digits printed.
As a second example, from Kopal [10], we take

x(s) - ∫_0^1 k(s,t)x(t)dt = (1/2) s(1-s),   (6.11)

where

k(s,t) = { t(1-s),  0 ≤ t ≤ s,
         { s(1-t),  s ≤ t ≤ 1.

The solution is x(s) = (tan 1/2) sin s + cos s - 1. In this example we find the key quantities to be as follows (again using 15 abscissas in [0, 1]):
Quantity               Repeated Midpoint    Repeated Chebychev
||K_n K - K^2||   ≤         0.0042                   0.00425
||K_n||           ≤         0.25                     0.25
||K_n y - Ky||    ≤         0.0105                   0.01063
||y||             ≤         0.1250                   0.1250
Our program gives the following error bounds:

||D||                  Repeated Midpoint    Repeated Chebychev
rigorous          ≤       1.45 × 10^-2          1.47 × 10^-2
estimated         ≤       8.4 × 10^-5           1.0 × 10^-4
The roundoff error, in both cases, was on the order of 1.0 × 10^-10. This is negligible as compared with ||x - x_n||. The largest observed error was about 2.0 × 10^-4 for the repeated midpoint rule and about 1.5 × 10^-4 for the repeated Chebychev rule. For the alternate method, using (Ky)(s) = (s^4 - 2s^3 + s)/24, ||Ky|| ≤ 0.0131, and the repeated midpoint rule, the program finds ||D|| ≤ 7.7 × 10^-4, and the largest observed error is about 1.0 × 10^-5.
The results from the program using interval arithmetic were similar to those from the program using the other method given in this paper. This is to be expected since the basic method is quite similar and the roundoff error is usually negligible. To give an idea of the machine time required for the above examples, we give the following times on the first example:

(i)  using interval arithmetic - 11 sec.,
(ii) using ordinary arithmetic - 17 sec.

When the program was asked to estimate the key quantities, the ordinary arithmetic solution required 34 sec.
Our examples illustrate the fact, noted in Chapter II, that for functions having only a bounded first derivative or a Lipschitz constant, the repeated midpoint formula is as good as the more sophisticated formulas. Also, we can see that for functions having higher derivatives the repeated midpoint formula is not as satisfactory. Finally, our examples show that being rigorous in bounding the error does not lead to unnecessarily large bounds. We feel that the bounds found by the methods of this thesis are realistic enough to be useful.
VII. SUMMARY

We have discussed methods for inverting n × n matrices and linear Fredholm integral operators of the second kind.
We have
developed techniques which allow us to prove the existence of and
find approximations to inverses for the above types of operators
using the computer. Also, we were able to bound rigorously the
error
in the approximations.
The above techniques were implemented in the form of computer programs, and some numerical results were given. It was found that the error bounds resulting from these programs were sufficiently realistic to be of interest and of use.

It was noted that the interval arithmetic technique is much easier to implement than the step by step accumulation of error. Why then do we consider the latter technique? Interval arithmetic became available here only recently and is not widely available. A disadvantage of using interval arithmetic is that higher precision interval arithmetic is not presently available.
In the inversion of Fredholm operators we used quadrature formulas having equal weights at the abscissas because this simplified the error analysis. However, the techniques used in this thesis can be applied to other formulas, such as Gauss quadrature, which are more precise for smooth functions. In interval arithmetic, the use of more sophisticated formulas is especially appealing since there is essentially no increase in complexity involved.
We note that the methods and techniques of this thesis might be applied to the solution of nonlinear integral equations using Newton's method. For a theoretical discussion of the use of Newton's method for nonlinear integral equations see [13]. Consider the equation

x - K(F(x)) = y,   (7.1)

where (F(x))(s) = f(x(s)), s ∈ [0, 1], and f is a continuous, real valued function of a real variable. Let

G(x) = x - K(F(x)) - y.   (7.2)

By Newton's method we want to solve G(x) = 0. Considering

x_{i+1} = x_i - (G'(x_i))^{-1} G(x_i),   (7.3)

using the prime to indicate a Fréchet derivative, we need (G'(x_i))^{-1}. Now from (7.2) we see that

G'(x) = I - K(F'(x)),   (7.4)

which is an operator of the form that we discussed in Chapter V. Hence we see that we might use our previous work in getting an approximation to (G'(x))^{-1} and bounding the error in the approximation. Then using the Newton-Kantorovich Theorem and techniques discussed previously we should be able to prove the existence of a solution to (7.1) and bound the error in the approximate solution. This method has not been explored in detail or implemented; however, it does illustrate a possible extension of the present work.
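As a sketch of how such an iteration might look, the following hypothetical Python fragment discretizes (7.1)-(7.4) at midpoint nodes. The test problem (kernel s·t, f(u) = u^2, and y chosen so that x(s) = s is a solution) is ours, and the rigorous error bounding described above is not attempted here.

```python
import numpy as np

def newton_fredholm(k, f, fprime, y, n, iters=8):
    """Newton's method (7.3) for x - K(F(x)) = y, discretized at midpoint
    nodes. Each step inverts the n x n discretization of the linear
    Fredholm operator G'(x) = I - K(F'(x)) of (7.4), as in Chapter V."""
    t = (np.arange(n) + 0.5) / n
    Kmat = k(t[:, None], t[None, :]) / n           # quadrature matrix for K
    x = y(t).copy()                                # initial guess x_0 = y
    for _ in range(iters):
        G = x - Kmat @ f(x) - y(t)                 # G(x_i) at the nodes
        J = np.eye(n) - Kmat * fprime(x)[None, :]  # discretized (7.4)
        x -= np.linalg.solve(J, G)                 # Newton step (7.3)
    return t, x

# Test problem: k(s,t) = s*t, f(u) = u^2, y(s) = 0.75*s. Then x(s) = s
# solves (7.1), since the integral of t*t^2 over [0,1] is 1/4.
t, x = newton_fredholm(lambda s, t: s * t, lambda u: u * u,
                       lambda u: 2.0 * u, lambda s: 0.75 * s, 100)
```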
Thus we see that rigorous results may be obtained from the
computer for many operator equations.
BIBLIOGRAPHY
1.
Anselone, P. M. Convergence and error bounds for approximate solutions of integral and operator equations. In:
Error in digital computation: Proceedings of an advanced
seminar conducted by the Mathematics Research Center,
United States Army, at the University of Wisconsin,
Madison, October 5 -7, 1964, ed. by Louis B. Rall. New
York, Wiley, 1965. p. 231-252. (U.S. Army. Mathematics Research Center. Publication no. 15)
2.
Anselone, P. M. and R. H. Moore. Approximate solutions of integral and operator equations. Journal of Mathematical Analysis and Applications 9: 268-277. 1964.
3.
Berezin, I. S. and N. P. Zhidkov. Computing methods, tr. by O. M. Blunn and ed. by A. D. Booth. Vol. 1. Reading, Massachusetts, Addison-Wesley, 1965. 464 p.
4.
Control Data Corporation. 3300 computer maintenance training
manual. Preliminary ed. Vol. 3. St. Paul, Minnesota,
1965. Various paging. (Publication no. 60158800)
5.
Control Data Corporation. 3300 computer system reference manual. Preliminary ed. St. Paul, Minnesota, 1965. Various paging. (Publication no. 60157000)
6.
Faddeeva, V. N. Computational methods of linear algebra,
tr. by Curtis D. Benster. New York, Dover, 1959. 252 p.
7.
Hansen, Eldon. Interval arithmetic in matrix computations. Journal of SIAM, ser. B, 2: 308-320. 1965.
8.
Hildebrand, F. B. Introduction to numerical analysis. New
York, McGraw -Hill, 1956. 511 p.
9.
Kolmogorov, A. N. and S. V. Fomin. Elements of the theory
of functions and functional analysis, tr. by Leo F. Boron.
Vol. 1. Rochester, New York, Graylock Press, 1957. 129 p.
10.
Kopal, Z. Numerical analysis. New York, Wiley, 1961. 594 p.

11.

Krylov, V. I. Approximate calculation of integrals, tr. by Arthur H. Stroud. New York, Macmillan, 1962. 357 p.
12.
Moore, Ramon E. The automatic analysis and control of error in digital computing based on the use of interval numbers. In: Error in digital computation: Proceedings of an advanced seminar conducted by the Mathematics Research Center, United States Army, at the University of Wisconsin, Madison, October 5-7, 1964. Vol. 1. New York, Wiley, 1965. p. 61-130. (U.S. Army. Mathematics Research Center. Publication no. 14)
13.
Moore, R.H. Newton's method and variations. In: Nonlinear
integral equations: Proceedings of an advanced seminar conducted by the Mathematics Research Center, United States
Army, at the University of Wisconsin, Madison, April 22 -24,
1963, ed. by P. M. Anselone. Madison, University of
Wisconsin Press, 1964. p. 65 -98. (U.S. Army. Mathematics Research Center. Publication no. 11)
14.
Oregon State University Computer Center. Interval arithmetic.
Corvallis, Oregon, 1967. 8 numb. leaves. (Report no. 67 -1)
15.
Tricomi, F. G. Integral equations. New York, Interscience, 1957. 238 p.

16.
Wilkinson, J. H. Rounding errors in algebraic processes.
Englewood Cliffs, New Jersey, Prentice Hall, 1963. 161 p.