Uniform approximate solutions of differential systems with boundary conditions

by Ronald Max Jeppson
A thesis submitted in partial fulfillment of the requirements for the degree of DOCTOR OF
PHILOSOPHY in Mathematics
Montana State University
© Copyright by Ronald Max Jeppson (1981)
Abstract:
Consider the boundary value problem (*) y' = F(t,y), t ∈ [0,τ], My(0) + Ny(τ) = b, where M and N are constant real n x n matrices such that the n x 2n matrix (M,N) has rank n, F(t,y) is continuous on [0,τ] x R^n with values in R^n, and b is a constant real n x 1 vector. Let W_k be the set of n x 1 vectors such that each component is a polynomial of degree k or less. Define P_k to be the set of vectors p such that p ∈ W_k and Mp(0) + Np(τ) = b. Find a p^k ∈ P_k, for each k ≥ n + 1, such that inf_{p∈P_k} ||p' - F(·,p)|| = ||(p^k)' - F(·,p^k)||. The norm ||·|| is defined by ||g|| = max_{t∈[0,τ]} max_{1≤i≤n} |g_i(t)|, where g = (g_1,...,g_n)^T and g_i ∈ C[0,τ], 1 ≤ i ≤ n. If F is linear in y, such a p^k will exist; then, given that y is the unique solution to (*), p^k converges uniformly to y on [0,τ] as k → ∞. If F is nonlinear, let f(t,y) = F(t,y) - Ey, where E is a real constant n x n matrix such that E^n = 0, and find a q^k for each k ≥ n + 1 such that inf_q ||q' - Eq - f(·,q^k)|| = ||(q^k)' - Eq^k - f(·,q^k)||. There exists a subsequence {q^{k(j)}} of {q^k} that converges uniformly to y on [0,τ] as j → ∞, where y is a solution of (*). In some cases q^k, for a given k, satisfies inf_{p∈P_k} ||p' - F(·,p)|| = ||(q^k)' - F(·,q^k)||. The above procedure extends the work of M.S. Henry, D. Schmidt and K.L. Wiggins.

UNIFORM APPROXIMATE SOLUTIONS
OF DIFFERENTIAL SYSTEMS WITH
BOUNDARY CONDITIONS
by
RONALD MAX JEPPSON
A thesis submitted in partial fulfillment
of the requirements for the degree
DOCTOR OF PHILOSOPHY
in
Mathematics
Approved:
Chairman, Examining Committee
Graduate Dean
MONTANA STATE UNIVERSITY
Bozeman, Montana
May, 1981
ACKNOWLEDGEMENT
I would like to thank my advisor, Dr. Gary Bogar, for his guidance and encouragement throughout my association with him. His advice has been invaluable.

I would also like to thank NORCUS for their financial support during the summer of 1980, which enabled me to make great progress in completing my research.

Thanks are due to Kim Hafner for her able interpretation and efficient typing of this manuscript.

Finally, I would like to thank my wife, Joyce, whose patience and encouragement have provided the primary motivating force throughout my college experience.
TABLE OF CONTENTS

INTRODUCTION
I. PRELIMINARY RESULTS
   1.1 Introduction
   1.2 Approximation Theory
   1.3 The Theory of Ordinary Differential Equations
II. MINIMAX APPROXIMATE SOLUTIONS OF LINEAR DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS
   2.1 Introduction
   2.2 Homogeneous Boundary Conditions
   2.3 Nonhomogeneous Boundary Conditions
   2.4 Discretization
   2.5 Examples
III. APPROXIMATE SOLUTIONS OF NONLINEAR DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS
   3.1 Introduction
   3.2 Existence of Fixed Points
   3.3 Convergence of Fixed Points
   3.4 Rate of Convergence
   3.5 Comparison of SAS and MAS
   3.6 Computation of Fixed Points
   3.7 Scalar Examples
   3.8 Examples
IV. RESTRICTED RANGE APPROXIMATE SOLUTIONS OF NONLINEAR DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS
   4.1 Introduction
   4.2 Preliminary Results
   4.3 Existence of Fixed Points
   4.4 Convergence of Fixed Points
   4.5 Rate of Convergence
   4.6 Comparison of RSAS to MAS
   4.7 Computation of Fixed Points
   4.8 Scalar Equations
   4.9 Examples
V. CONCLUSIONS
BIBLIOGRAPHY
INTRODUCTION
In order to mathematically model many physical problems, differential equations with initial values or boundary values arise naturally. Since only a small percentage of these problems can be solved in closed form, we will concentrate on finding polynomial approximations to boundary value problems. The general form of the boundary value problem to be considered is

(*)   y' = F(t,y),  t ∈ [0,τ],  My(0) + Ny(τ) = b,

where M and N are constant real n x n matrices such that the n x 2n matrix (M,N) has rank n, F(t,y) is continuous on [0,τ] x R^n with values in R^n, and b is a constant real n x 1 vector. If

P_k = {p: p = (p_1,...,p_n)^T, p_i is a polynomial of degree k or less for 1 ≤ i ≤ n, and Mp(0) + Np(τ) = b},

then we will be interested in finding polynomials p^k ∈ P_k which approximate solutions of (*), if any exist. We will be using the uniform type norm given by

||f|| = max_{t∈[0,τ]} max_{1≤i≤n} |f_i(t)|,

where f = (f_1,...,f_n)^T and f_i ∈ C[0,τ], 1 ≤ i ≤ n.
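As a quick illustration, the discrete analogue of this vector uniform norm on a sample grid takes only a few lines of Python; the components f_i, the interval [0,1], and the grid are illustrative assumptions, not data from the text:

```python
def uniform_norm(components, pts):
    """Discrete analogue of ||f|| = max_t max_i |f_i(t)|:
    maximize |f_i(t)| over the sample points and over the components."""
    return max(abs(fi(t)) for t in pts for fi in components)

# hypothetical f = (f_1, f_2) on [0, 1]
f = [lambda t: t, lambda t: 1.0 - t]
pts = [k / 100 for k in range(101)]
print(uniform_norm(f, pts))   # -> 1.0, attained at t = 0 and at t = 1
```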
The uniform approximation by polynomials is only one of several possible approaches. In the past, discrete methods, in which values at discrete points are approximated, were used extensively. In recent years other methods such as Chebyshev series [18] and splines [6, 14] have gained in popularity. Like uniform polynomial approximations, they approximate the function over the entire interval under consideration. We have settled on uniform polynomial approximation because of the simplicity of the resulting approximating function.
In Chapter I, we will present definitions and theorems that are standard in Approximation Theory and the Theory of Ordinary Differential Equations. Therefore, the proofs of the theorems will be omitted.
In Chapter II, we will consider the case when F(t,y) = A(t)y + f(t), where A(t) is an n x n matrix with components in C[0,τ] and f(t) is an n x 1 vector with components in C[0,τ]. The results obtained here generalize the work of Schmidt and Wiggins [15], and a polynomial p^k ∈ P_k such that

inf_{p∈P_k} ||p' - Ap - f|| = ||(p^k)' - Ap^k - f||

is shown to exist. We will use the terminology in [15] and call such a p^k ∈ P_k a minimax approximate solution (MAS) of (*) from P_k. Also, it is shown that if y is the unique solution to (*) and p^k ∈ P_k is a MAS of (*) for each k ≥ n + 1, then

lim_{k→∞} ||(p^k)^(i) - y^(i)|| = 0,  i = 0,1.
We will determine a rate of convergence similar to that given in [15]. The discretization result presented is basically the same as that given in [15] but is included for completeness.

In Chapter III we will deal with cases in which F(t,y) may not be linear in y. For convenience we will let

f(t,y) = F(t,y) - Ey,

where E is an n x n real matrix such that E^n = 0. Now consider the boundary value problem

(**)   y' = Ey + f(t,y),  t ∈ [0,τ],  My(0) + Ny(τ) = b.

If f is nonlinear in y then an MAS of (**) from P_k is very often difficult or impossible to find. We therefore turn to another type of uniform approximation which was used by Henry and Wiggins [9] in attempting to approximate second order initial value problems. Let W_k = {p: p = (p_1,...,p_n)^T where p_i is a polynomial of degree k or less, 1 ≤ i ≤ n}. Using the terminology in [9], we will call a vector polynomial p^k ∈ P_k a simultaneous approximation substitute (SAS) of degree k if

inf_{p∈W_k} ||p' - Ep - f(·,p^k)|| = ||(p^k)' - Ep^k - f(·,p^k)||.

It can be shown that under certain conditions on f, and for each k ≥ n + 1, there exists such an SAS. Also, there exists a subsequence {p^{k(j)}} of {p^k} such that

lim_{j→∞} ||(p^{k(j)})^(i) - y^(i)|| = 0,  i = 0,1,

where y is a solution of (**). In some cases we can show that an SAS of degree k is also a MAS of degree k. Even if it is not an MAS, it will be shown that it is a "good" approximation to the solution y of (**).
In Chapter IV, we will generalize the results of Chapter III by relaxing the conditions on f. In doing so, we must use polynomials which have a restricted range for our approximating set. To do this we must use some results, stated at the beginning of Chapter IV, due to Taylor [16] and to Taylor and Winter [17]. We can then get the same basic results, except that the rate of convergence is decreased as compared to that obtained in Chapter III.
CHAPTER I
PRELIMINARY RESULTS
1.1 Introduction.

In this chapter we shall discuss some preliminary results from Approximation Theory and the Theory of Ordinary Differential Equations. The results in the first section, on Approximation Theory, may be found in [3]. In the second section, on Ordinary Differential Equations, the material is taken from [2, 5, 8, 12, 13].

1.2 Approximation Theory.

Throughout this section X will be a compact metric space. C[X] will denote the Banach space of all real valued continuous functions on X with norm given by ||f|| = sup_{t∈X} |f(t)|.
One of the early theorems involved with the approximation of functions is due to K. Weierstrass, and is called the Weierstrass Approximation Theorem.

Theorem 1.1. Let f ∈ C[a,b]. To each ε > 0 there corresponds a polynomial p such that ||f - p|| < ε.

One proof of the theorem, due to Bernstein, produces a class of polynomials, called Bernstein polynomials, which satisfy the conclusion of the above theorem. These polynomials, however, converge very slowly to the function f and are not of much practical use.
In trying to improve our approximating set we might as well consider the "best" possible approximating set of polynomials. Let Q_k be the set of all polynomials of degree k or less. The best possible case would be to find a polynomial p_0 ∈ Q_k such that

(1.2.1)   inf_{p∈Q_k} ||p - f|| = ||p_0 - f||.

This brings up the question of the existence of such a polynomial. First we will state a rather general existence theorem.

Theorem 1.2. A finite-dimensional linear subspace of a normed linear space contains at least one point of minimum distance from a fixed point.

Since C[a,b] is a normed linear space and Q_k is a finite dimensional subspace of C[a,b], given any f ∈ C[a,b] there exists a p_0 ∈ Q_k that satisfies (1.2.1).
From theorem 1.2 we need not be restricted to polynomials for our approximating set. Consider a set of functions {g_1,...,g_n}, each of which is contained in C[X]. Let G = span{g_1,...,g_n}; then G is a finite dimensional subspace of C[X]. Given any f ∈ C[X], there is a point p_0 ∈ G which satisfies

(1.2.2)   inf_{p∈G} ||p - f|| = ||p_0 - f||.

Given any p ∈ G, there exists a vector c ∈ R^n such that p = Σ_{i=1}^n c_i g_i, where c = (c_1,...,c_n)^T. Therefore, p_0 = Σ_{i=1}^n c_{0i} g_i for some vector c_0 = (c_{01},...,c_{0n})^T. So (1.2.2) is equivalent to

(1.2.3)   inf_c ||Σ_{i=1}^n c_i g_i - f|| = ||Σ_{i=1}^n c_{0i} g_i - f||.

We will call p = Σ_{i=1}^n c_i g_i a generalized polynomial. For the remainder of this section we will consider generalized polynomials where possible.
We now define a condition on a set of functions {g_1,...,g_n} which is stronger than linear independence. The condition plays a fundamental role in approximation theory.

Definition 1.1. A system of functions {g_1,...,g_n} is said to satisfy the Haar condition if each g_i ∈ C[X] and if every set of n vectors of the form t̂ = (g_1(t),...,g_n(t))^T is independent. This means that each determinant

(1.2.4)   D(t_1,...,t_n) = det(g_j(t_i)),  i,j = 1,...,n,

made up of n distinct points t_1,...,t_n, is nonzero.

The following useful theorem concerning the Haar condition follows directly from the definition.

Theorem 1.3. The system {g_1,...,g_n} satisfies the Haar condition if and only if no nontrivial generalized polynomial Σ c_i g_i has more than n - 1 roots.
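For the monomial system {1, t, ..., t^{n-1}}, which satisfies the Haar condition on any interval, the determinant (1.2.4) is a Vandermonde determinant, and its nonvanishing at distinct points can be checked numerically; the four sample points below are an arbitrary illustrative choice:

```python
import numpy as np

# Monomials {1, t, t^2, t^3} at four distinct points: the determinant
# (1.2.4) becomes the Vandermonde determinant prod_{i<j} (t_j - t_i).
pts = np.array([0.0, 0.3, 0.7, 1.0])
V = np.vander(pts, increasing=True)        # row i is (1, t_i, t_i^2, t_i^3)
det = np.linalg.det(V)

expected = 1.0
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        expected *= pts[j] - pts[i]
# det agrees with the closed form and is nonzero, as the Haar condition requires
```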
We can now state a characterization theorem for systems that satisfy the Haar condition, called the Alternation Theorem.

Theorem 1.4. Let {g_1,...,g_n} be a system of C[a,b] satisfying the Haar condition, and let Y be any closed subset of [a,b]. In order that a certain generalized polynomial p shall be a best approximation on Y to a given function f ∈ C[Y] it is necessary and sufficient that the error function r = f - p exhibit on Y at least n + 1 "alternations": r(t_i) = -r(t_{i-1}) = ±||r||, with t_0 < ... < t_n and t_i ∈ Y.

Another useful theorem which requires our system to satisfy the Haar condition is known as the Theorem of de La Vallée Poussin. By E(f) we will mean the infimum of ||p - f|| as p ranges over all generalized polynomials p = Σ c_i g_i.

Theorem 1.5. If p is a generalized polynomial such that f - p assumes alternately positive and negative values at n + 1 consecutive points t_i of [a,b], then

(1.2.5)   E(f) ≥ min_i |f(t_i) - p(t_i)|.
Thus far we have discussed existence and characterization of the best approximation. Now the unicity aspect of the best approximation will be examined.

Theorem 1.6. If the functions g_1,...,g_n are continuous on [a,b] and satisfy the Haar condition, then the best approximation of each continuous function by a generalized polynomial Σ c_i g_i is unique.

In order to compare the unique best approximation to other generalized polynomials, we need the Strong Unicity Theorem.

Theorem 1.7. Let the set of functions {g_1,...,g_n} satisfy the Haar condition. Let p_0 be the generalized polynomial of best approximation to a given continuous function f. Then there exists a constant γ > 0 depending on f such that for any generalized polynomial p,

(1.2.6)   ||f - p|| ≥ ||f - p_0|| + γ||p_0 - p||.
Suppose we have a system of continuous functions {g_1,...,g_n} which satisfies the Haar condition. For each f ∈ C[X], let Ff be the unique generalized polynomial of best approximation to f. From theorem 1.7, F is a continuous operator. In fact, from the following theorem, F satisfies a Lipschitz condition at each point.

Theorem 1.8. To each f_0 there corresponds a number λ > 0 such that for all f,

(1.2.7)   ||Ff_0 - Ff|| ≤ λ||f_0 - f||.
Although best approximations are rarely computable analytically, the algorithm of Remes provides a powerful method for their numerical computation.

Second Algorithm of Remes: We seek a coefficient vector c* which renders the uniform norm of the function

(1.2.8)   r(t) = f(t) - Σ_{j=1}^n c_j g_j(t)

a minimum on the interval [a,b]. The set of functions {g_1,...,g_n} is assumed to satisfy the Haar condition. In each cycle of this algorithm, we are given an ordered set of n + 1 points from the preceding cycle: a ≤ t_0 < t_1 < ... < t_n ≤ b. (In the beginning, this set may be arbitrary.) We now compute a coefficient vector for which max_i |r(t_i)| is a minimum. From theorem 1.4, this can be accomplished by solving the system

(1.2.9)   Σ_{j=1}^n c_j g_j(t_i) + (-1)^i λ = f(t_i),  i = 0,1,...,n.

Since {g_1,...,g_n} satisfies the Haar condition, the solution will be unique. It then follows that the numbers r(t_i) are of equal magnitude but of alternating sign. Hence r(t) possesses a root in each interval (t_{i-1},t_i); denote these roots by z_1,...,z_n, and in addition let z_0 = a and z_{n+1} = b. Let σ_i = sgn r(t_i). For each i = 0,1,...,n, select a point y_i in [z_i,z_{i+1}] where σ_i r(y) is a maximum. This determines a "trial" set {y_0,...,y_n}. If ||r|| > max_i |r(y_i)|, then the definition of {y_0,...,y_n} must be altered as follows. Let y be a point where |r(y)| is a maximum. Now insert y in its correct position in the set {y_0,...,y_n}, and then remove a y_i in such a way that the resulting ordered set of values of r still alternates in sign. The next cycle then begins with {y_0,...,y_n} in place of {t_0,...,t_n}.

Theorem 1.9. The successive generalized polynomials p^k = Σ c_i^k g_i generated in the second algorithm converge uniformly to the best approximation p* according to an inequality of the form

(1.2.10)   ||p^k - p*|| ≤ Aθ^k,

where 0 < θ < 1.
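As a concrete sketch, one cycle of the algorithm can be coded for the smallest interesting Haar system {1, t}; the target function e^t, the interval [0,1], and the grid resolution are illustrative choices, and the exchange step picks the extremum of r on each subinterval [z_i, z_{i+1}] exactly as described above (this is a sketch of the exchange idea, not production minimax code):

```python
import numpy as np

def remes_line(f, a, b, cycles=10, m=2000):
    """Second algorithm of Remes for the Haar system {1, t}: best uniform
    approximation c0 + c1*t to f on [a, b], using n + 1 = 3 reference points."""
    grid = np.linspace(a, b, m + 1)
    t = np.array([a, 0.5 * (a + b), b])           # initial reference (arbitrary)
    for _ in range(cycles):
        # solve c0 + c1*t_i + (-1)^i lam = f(t_i), i = 0,1,2   (eq. 1.2.9)
        A = np.array([[1.0, ti, (-1.0) ** i] for i, ti in enumerate(t)])
        c0, c1, lam = np.linalg.solve(A, f(t))
        r = f(grid) - (c0 + c1 * grid)
        # r alternates on the reference, so it has roots z_1, z_2 between them
        roots = np.where(np.diff(np.sign(r)) != 0)[0] + 1
        cuts = np.concatenate(([0], roots, [len(grid)]))
        if len(cuts) != 4:                        # degenerate cycle: keep reference
            continue
        # on each [z_i, z_{i+1}] take the extremum of r as the new trial point
        t = np.array([grid[cuts[j] + np.argmax(np.abs(r[cuts[j]:cuts[j + 1]]))]
                      for j in range(3)])
    return c0, c1, abs(lam)
```

For f(t) = e^t on [0,1] this converges in a couple of cycles to the known best linear approximation, with slope c1 = e - 1 and minimax error λ ≈ 0.1059.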
In most cases, we want to find the best approximation on the closed interval [a,b], but for practical applications we must settle for the best approximation on a discrete subset Y of [a,b]. We therefore need some results comparing the best approximation on a compact metric space X to the best approximation on a subset Y of X. First, we need the following definition.

Definition 1.2. Let X be a compact metric space with metric d. Let Y be any subset of X. Then the density of Y in X is given by

(1.2.11)   |Y| = max_{x∈X} inf_{y∈Y} d(x,y).

We also need a seminorm on C[X] defined by

(1.2.12)   ||f||_Y = sup_{y∈Y} |f(y)|.

Theorem 1.10. Let {g_1,...,g_n} be any set of continuous functions on the compact metric space X. To each α > 1 there corresponds a positive δ such that ||p|| ≤ α||p||_Y for all generalized polynomials p = Σ c_i g_i and for all sets Y such that |Y| < δ.

The following theorem now establishes the relationship between the best approximation on a compact metric space X and best approximations on subsets Y.

Theorem 1.11. Let X be a compact metric space and g_1,...,g_n elements of C[X]. If f possesses a unique generalized polynomial p_X = Σ c_i g_i of best approximation on X, then its best approximations on subsets Y converge to p_X as |Y| → 0.

From theorem 1.11 we find that a function f ∈ C[X] can be approximated on sufficiently dense subsets of X to get an approximation close to the best approximation of f on X.
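The density (1.2.11) is easy to compute for finite sets; the following sketch, with an illustrative fine grid standing in for X = [0,1], shows that an equally spaced subset with spacing h has density h/2:

```python
def density(X, Y):
    """Density |Y| of Y in X, eq. (1.2.11): max over x in X of dist(x, Y)."""
    return max(min(abs(x - y) for y in Y) for x in X)

X = [i / 1000 for i in range(1001)]   # fine grid standing in for [0, 1]
Y = [i / 10 for i in range(11)]       # equally spaced subset, spacing 0.1
# the points of X farthest from Y are the midpoints of Y's gaps,
# so the density here is 0.05
```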
Finally, we turn to finding the rate of convergence of a best approximation to a given function f ∈ C[X]. We will restrict ourselves to the Haar set {1,t,...,t^n} on the closed interval [-1,1]. First, we will need the following definition.

Definition 1.3. Let X be a compact metric space with metric d. For all δ ≥ 0 the modulus of continuity of f is given by

(1.2.13)   w(δ) = sup_{d(t,s) ≤ δ} |f(t) - f(s)|.

It should be noted that if f is uniformly continuous then w(δ) ↓ 0 as δ ↓ 0. Let

(1.2.14)   E_n(f) = inf_c sup_{-1≤t≤1} |f(t) - Σ_{i=0}^n c_i t^i|.

Theorem 1.12. If f ∈ C[-1,1] then

(i)   E_n(f) ≤ w(π/(n + 1)),
(ii)  E_n(f) ≤ (π/2)^k ||f^(k)||/[(n + 1)(n)···(n - k + 2)] if f^(k) ∈ C[-1,1] and n > k.
Since in later chapters we will be interested in the interval [0,τ] instead of [-1,1], we now develop a variation of this theorem. Let f ∈ C[-1,1] and define

(1.2.15)   g(t) = f((2/τ)t - 1)

for all t ∈ [0,τ]. This gives us a function g ∈ C[0,τ]. Let

(1.2.16)   E_n(g) = inf_c sup_{0≤t≤τ} |g(t) - Σ_{i=0}^n c_i t^i|.

Let p be any polynomial of degree n or less and let

(1.2.17)   q(t) = p((2/τ)t - 1).

Then q is also a polynomial of degree n or less and

sup_{[-1,1]} |f(t) - p(t)| = sup_{[0,τ]} |f((2/τ)t - 1) - p((2/τ)t - 1)| = sup_{[0,τ]} |g(t) - q(t)|.

This implies that E_n(f) = E_n(g); therefore, by theorem 1.12(i),

E_n(g) ≤ w_f(π/(n + 1)).

Also,

w_g(δ) = max_{|t_1 - t_2| ≤ δ} |g(t_1) - g(t_2)| = max_{|s_1 - s_2| ≤ (2/τ)δ} |f(s_1) - f(s_2)| = w_f((2/τ)δ),

where s_1 = (2/τ)t_1 - 1 and s_2 = (2/τ)t_2 - 1. Therefore,

(1.2.18)   w_f(δ) = w_g((τ/2)δ).

Combining this with the bound E_n(g) ≤ w_f(π/(n + 1)) above, we get

(1.2.19)   E_n(g) ≤ w_g(τπ/(2(n + 1))).

Using the same argument we can also get that, if g^(k) ∈ C[0,τ] and n > k,

(1.2.20)   E_n(g) ≤ (τ/4)^k ||g^(k)||/[(n + 1)(n)···(n - k + 2)].

Summarizing, we get the following.

Theorem 1.13. If f ∈ C[0,τ] then

(i)   E_n(f) ≤ w(τπ/(2(n + 1))),
(ii)  E_n(f) ≤ (τ/4)^k ||f^(k)||/[(n + 1)(n)···(n - k + 2)] if f^(k) ∈ C[0,τ] and n > k.
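The change of variables behind (1.2.18) can be checked numerically with a discrete estimate of the modulus of continuity; the test function sin 2t, the value τ = 3, the grid size, and δ = 0.2 are all illustrative assumptions:

```python
import math

def modulus(vals, h, delta):
    """Discrete estimate of w(delta) (def. 1.3) on a uniform grid with step h."""
    k = int(round(delta / h))
    m = len(vals) - 1
    return max(abs(vals[i] - vals[j])
               for i in range(m + 1)
               for j in range(i, min(i + k, m) + 1))

tau = 3.0
f = lambda t: math.sin(2.0 * t)              # sample f in C[-1, 1]
g = lambda t: f(2.0 * t / tau - 1.0)         # g in C[0, tau], eq. (1.2.15)
m = 600
fv = [f(-1.0 + 2.0 * i / m) for i in range(m + 1)]   # f sampled, step 2/m
gv = [g(tau * i / m) for i in range(m + 1)]          # g sampled, step tau/m
delta = 0.2
w_f = modulus(fv, 2.0 / m, delta)
w_g = modulus(gv, tau / m, (tau / 2.0) * delta)
# (1.2.18): w_f(delta) = w_g((tau/2) delta); the two discrete estimates agree
```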
1.3 The Theory of Ordinary Differential Equations.

We will be considering the systems

(1.3.1)   y' = A(t)y

and

(1.3.2)   y' = A(t)y + f(t),

where A(t) is a continuous n x n matrix on [0,τ] and f(t) is a continuous vector on [0,τ].

Theorem 1.14. The initial value problem consisting of (1.3.2) and

(1.3.3)   y(t_0) = y_0,

0 ≤ t_0 ≤ τ, has a unique solution y = φ(t), and φ(t) exists on 0 ≤ t ≤ τ.

The vector system (1.3.1) can be replaced by a matrix equation,

(1.3.4)   Y' = A(t)Y,

where Y is a matrix with n rows and k columns. The matrix Y = Y(t) is a solution of (1.3.4) if and only if each column of Y(t), when considered as a column vector, is a solution of (1.3.1).
In the remainder of this section Y(t) will be an n x n matrix in which each column is a linearly independent solution of (1.3.1). This implies that if c is a constant vector, then

(1.3.5)   y(t) = Y(t)c

is a solution of (1.3.1). In fact, any solution of (1.3.1) can be written in the form (1.3.5) for some constant c. We will call Y(t) a fundamental matrix for (1.3.1) or (1.3.4).

The fundamental matrix is not unique. However, if we require that Y(0) = I, where I is the n x n identity matrix, then we get a unique matrix, and it is called the principal matrix for (1.3.1) or (1.3.4).

If A is a constant matrix then the principal matrix is given by

(1.3.6)   Y(t) = e^{At} = I + Σ_{k=1}^∞ A^k t^k / k!

for all t ∈ [0,τ].
We also get that

(1.3.7)   Y(t + s) = Y(t)Y(s)

for all t, s ∈ [0,τ], and

(1.3.8)   Y^{-1}(t) = Y(-t)

for all t ∈ [0,τ].

If A is a constant matrix such that A^n = 0 for some positive integer n, then the principal matrix is given by

(1.3.9)   Y(t) = I + Σ_{k=1}^{n-1} A^k t^k / k!.

This means that the components of Y(t) are polynomials of degree n - 1 or less.
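A minimal numerical sketch (the 3 x 3 nilpotent shift matrix below is an illustrative choice) confirms that the truncated series (1.3.9) is polynomial in t and still satisfies the group property (1.3.7) and the inverse property (1.3.8):

```python
import numpy as np
from math import factorial

def principal_matrix(A, t):
    """Y(t) = I + sum_{k=1}^{n-1} A^k t^k / k!  (eq. 1.3.9); valid when A^n = 0."""
    n = A.shape[0]
    Y = np.eye(n)
    term = np.eye(n)
    for k in range(1, n):
        term = term @ A          # A^k
        Y = Y + term * t ** k / factorial(k)
    return Y

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # A @ A @ A = 0, so three terms suffice
```

One can then check Y(0.3)Y(0.4) = Y(0.7) and Y(0.5)Y(-0.5) = I directly.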
We will also need to consider the system

(1.3.10)   (z^T)' = -z^T A(t),

where A(t) is the same matrix as that in (1.3.1). Again, this vector system can be replaced by a matrix equation,

(1.3.11)   Z' = -ZA(t).

The matrix Z = Z(t) is a solution of (1.3.11) if and only if each row of Z(t) is a solution of (1.3.10). If we require that Z(0) = I, then

(1.3.12)   Z(t) = Y^{-1}(t)

for all t ∈ [0,τ], where Y(t) is the principal matrix for (1.3.1). If A is a constant matrix then, from (1.3.8), we have that

(1.3.13)   Z(t) = Y(-t)

for all t ∈ [0,τ].
Let us now characterize solutions of (1.3.2).

Theorem 1.14. Let X(t) be any fundamental matrix for (1.3.1); then the general solution of (1.3.2) is given by

(1.3.14)   x(t) = X(t)c + X(t) ∫_0^t X^{-1}(s) f(s) ds,

where c is an arbitrary constant vector.

If Y(t) is the principal matrix solution for (1.3.1) and A is a constant matrix, then the general solution to (1.3.2) becomes

(1.3.15)   y(t) = Y(t)c + ∫_0^t Y(t - s) f(s) ds.
0
We now turn to considering the systems (1.3.1) and (1.3.2) with
boundary conditions
(1.3.16)
My(O) + Njf(T) = b
or
(1.3.17)
My(O) + Ny(T) = 0 ,
where M and N are n x n constant matrices such that the n x 2n matrix
(1.3.18)
.
W = (M,N)
has rank n.
Definition 1.4.
The dimension of the solution space of a boundary
problem is the index of compatibility.of the problem.
A boundary prob­
lem is incompatible if its index of compatibility is zero.
Definition 1.5. If Y is any fundamental matrix for the vector equation (1.3.1), the matrix defined by

(1.3.19)   D = MY(0) + NY(τ)

is a characteristic matrix for the boundary problem.

With the above definitions we can state the following theorem.

Theorem 1.15. If the boundary problem (1.3.1) and (1.3.17) has a characteristic matrix of rank r, then its index of compatibility is n - r.

Another boundary value problem closely related to (1.3.1) and (1.3.17) is given by system (1.3.10) with boundary condition

(1.3.20)   Pz(0) - Qz(τ) = 0.

Here P and Q are defined as follows: let W_1 be any n x 2n matrix whose rows form a basis for the orthogonal complement of the row space of the matrix W, which implies that WW_1^T = 0. Let

(1.3.21)   W_1 = (P,Q),

where P and Q are each n x n matrices. The problem (1.3.10) and (1.3.20) is called adjoint to (1.3.1) and (1.3.17).

Theorem 1.16. The boundary problem (1.3.1) and (1.3.17) and its adjoint (1.3.10) and (1.3.20) have the same index of compatibility.
In order to have an integral representation of a solution to (1.3.2) and (1.3.16) similar to (1.3.14), we will need the concept of the Green's matrix.

Definition 1.6. The n x n matrix G(t,s) is said to be a Green's matrix for the system (1.3.1) and (1.3.17) if it has the following properties.

(i) The components of G(t,s), regarded as functions of t with s fixed, have continuous first derivatives on [0,s) and (s,τ]. At the point t = s, G has an upward jump-discontinuity of "unit" magnitude; that is,

(1.3.22)   G(s+,s) - G(s-,s) = I.

(ii) G is a formal solution of the homogeneous boundary problem (1.3.1) and (1.3.17). G fails to be a true solution only because of the discontinuity at t = s.

(iii) G is the only n x n matrix with properties (i) and (ii).
Theorem 1.17. If the system (1.3.1) and (1.3.17) is incompatible, then there exists a unique Green's matrix for the system, given by

(1.3.23)   G(t,s) = (1/2) Y(t)[(|t - s|/(t - s))I + D^{-1}Δ] Z(s),

where Y(t) is a fundamental matrix solution for (1.3.1), Z(t) is a matrix solution for (1.3.10), D is the characteristic matrix, and

(1.3.24)   Δ = MY(0) - NY(τ).

It should be noted that if we choose the principal matrix solution for (1.3.1), then our Green's matrix becomes

(1.3.25)   G(t,s) = (1/2) Y(t)[(|t - s|/(t - s))I + D^{-1}Δ] Y^{-1}(s).

We can now state the desired theorem.

Theorem 1.18. If the system (1.3.1) and (1.3.17) is incompatible, then the unique solution to (1.3.2) and (1.3.16) is given by

(1.3.26)   y(t) = Y(t)D^{-1}b + ∫_0^τ G(t,s) f(s) ds.
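For a scalar problem (n = 1) everything in (1.3.23) through (1.3.26) can be written down and checked numerically. The following sketch, with the illustrative choices a = 1, f ≡ 1, M = N = 1, b = 0, τ = 1, and a trapezoid rule split at the jump s = t, reproduces the closed-form solution y(t) = 2e^t/(1 + e) - 1 of this particular boundary value problem:

```python
import math

a, tau, M, N, b = 1.0, 1.0, 1.0, 1.0, 0.0        # illustrative scalar data
f = lambda s: 1.0
D = M + N * math.exp(a * tau)                    # characteristic matrix (1.3.19), scalar here
rho = (M - N * math.exp(a * tau)) / D            # D^{-1} Delta, with Delta from (1.3.24)

def trap(lo, hi, g, m=2000):
    """Composite trapezoid rule for a smooth integrand on [lo, hi]."""
    if hi <= lo:
        return 0.0
    h = (hi - lo) / m
    return h * (0.5 * g(lo) + 0.5 * g(hi) + sum(g(lo + i * h) for i in range(1, m)))

def y(t):
    """Solution (1.3.26), with the integral split at the jump of G at s = t:
    G(t,s) = (1/2) e^{a(t-s)} (sgn(t - s) + rho), the scalar case of (1.3.25)."""
    left = trap(0.0, t, lambda s: 0.5 * math.exp(a * (t - s)) * (1.0 + rho) * f(s))
    right = trap(t, tau, lambda s: 0.5 * math.exp(a * (t - s)) * (rho - 1.0) * f(s))
    return math.exp(a * t) * b / D + left + right
```

One can verify directly that M·y(0) + N·y(τ) vanishes and that a centered difference of y reproduces y' = ay + f.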
Let us now examine the problem of finding solutions to (1.3.2) and (1.3.17), if there are any, when the system (1.3.1) and (1.3.17) is compatible of index r. Let rY(t) be an n x n matrix whose first r columns are linearly independent solutions ry_i, i = 1,2,...,r, of (1.3.1) and (1.3.17); the remaining columns are zero. Let rZ(t) be an n x n matrix whose first r rows are linearly independent solutions rz_i, i = 1,2,...,r, of (1.3.10) and (1.3.20); the remaining rows are all zero. The general solution of (1.3.1) and (1.3.17) is given by

(1.3.27)   y(t) = Σ_{i=1}^r ry_i(t) c_i,

where the c_i's are arbitrary constants.

Theorem 1.19. Whenever (1.3.1) and (1.3.17) is compatible with index of compatibility r, the system (1.3.2) and (1.3.17) has a solution if and only if the vector equation

(1.3.28)   ∫_0^τ rZ(s) f(s) ds = 0

is satisfied by the vector f(t).
In finding a solution of (1.3.2) and (1.3.17), we will also need the concept of a Green's matrix. We cannot use the one defined by (1.3.23), however, since D is now singular. We will, therefore, use a generalized Green's matrix.

Definition 1.7. Whenever (1.3.28) is satisfied we will call a matrix G(t,s) a generalized Green's matrix for the compatible system (1.3.1) and (1.3.17) if it satisfies the following properties.

(i) The components of G(t,s), regarded as functions of t with s fixed, are continuous on [0,s) and (s,τ]. At the point t = s, G has an upward jump-discontinuity of "unit" magnitude; that is,

(1.3.29)   G(s+,s) - G(s-,s) = I.

(ii) Every solution of (1.3.2) and (1.3.17) may be written in the form

(1.3.30)   y(t) = Σ_{i=1}^r ry_i(t) c_i + ∫_0^τ G(t,s) f(s) ds.
Let Y(t) be the principal matrix solution for the system (1.3.1) and (1.3.17), and let D be the corresponding characteristic matrix.

Definition 1.8. The Moore-Penrose generalized inverse φ of the real constant n x n matrix D is the unique matrix which satisfies the following properties.

(i)   DφD = D,
(ii)  φDφ = φ,
(iii) (Dφ)^T = Dφ,
(iv)  (φD)^T = φD.
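These four conditions are exactly what numpy's `pinv` computes, so they can be checked directly on a singular characteristic matrix; the rank-one D below is an illustrative example:

```python
import numpy as np

D = np.array([[1.0, 2.0],
              [2.0, 4.0]])               # singular (rank 1), so D^{-1} does not exist
phi = np.linalg.pinv(D)                  # Moore-Penrose generalized inverse

assert np.allclose(D @ phi @ D, D)          # (i)   D phi D = D
assert np.allclose(phi @ D @ phi, phi)      # (ii)  phi D phi = phi
assert np.allclose((D @ phi).T, D @ phi)    # (iii) (D phi)^T = D phi
assert np.allclose((phi @ D).T, phi @ D)    # (iv)  (phi D)^T = phi D
```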
Theorem 1.20. A generalized Green's matrix for the compatible system (1.3.1) and (1.3.17) exists and may be written as

(1.3.31)   G(t,s) = (1/2) Y(t)[(|t - s|/(t - s))I + φΔ] Y^{-1}(s),

where φ is the Moore-Penrose generalized inverse of D and

(1.3.32)   Δ = MY(0) - NY(τ).

We now have that every solution to (1.3.2) and (1.3.17) can be written in the form (1.3.30), where G(t,s) is the generalized Green's matrix given by (1.3.31).

The generalized Green's matrix is not unique. In fact, we have the following theorem.
Theorem 1.21. If G_1(t,s) is one generalized Green's matrix, then every generalized Green's matrix is of the form

(1.3.33)   G(t,s) = G_1(t,s) + rY(t)U(s) + V(t)rZ(s),

where V(t) and U(t) are n x n real valued matrices, each element of which is in C[0,τ]. Furthermore, every matrix of the form (1.3.33) is a generalized Green's matrix for the compatible system (1.3.1) and (1.3.17).

Let B(t) and C(t) be n x n real valued matrices whose components in the first r columns are in C[0,τ] and whose remaining columns are zero, with the properties that

∫_0^τ B^T(s) rY(s) ds = I_r   and   ∫_0^τ rZ(s) C(s) ds = I_r,

where I_r is the n x n matrix

I_r = [ I  0 ]
      [ 0  0 ]

with I an r x r identity matrix.
Let

(1.3.34)   V(t) = -∫_0^τ G_0(t,s) C(s) ds

and

(1.3.35)   U(t) = [∫_0^τ ∫_0^τ B^T(r) G_0(r,s) C(s) dr ds] rZ(t) - ∫_0^τ B^T(s) G_0(s,t) ds,

where G_0(t,s) is the generalized Green's matrix defined by (1.3.31). Then we have the following theorem.

Theorem 1.22. The matrix

G(t,s) = G_0(t,s) + rY(t)U(s) + V(t)rZ(s)

is the unique generalized Green's matrix for (1.3.1) and (1.3.17) satisfying the conditions

(1.3.36)   ∫_0^τ G(t,s) C(s) ds = 0,   t ∈ [0,τ],

(1.3.37)   ∫_0^τ B^T(t) G(t,s) dt = 0,   s ∈ [0,τ],

where G_0(t,s) is given by (1.3.31), U(t) is given by (1.3.35) and V(t) is given by (1.3.34).

The above matrix is called the principal generalized Green's matrix for (1.3.1) and (1.3.17). It has many of the same properties as the Green's matrix given in definition 1.6.

Theorem 1.23. If G(t,s) is the principal generalized Green's matrix for (1.3.1) and (1.3.17), then the following properties are satisfied.

(i) The components of G(t,s), regarded as functions of t with s fixed, have continuous first derivatives on [0,s) and (s,τ]. At the point t = s, G has an upward jump-discontinuity of "unit" magnitude; that is, G(s+,s) - G(s-,s) = I.

(ii) For fixed s ∈ (0,τ), G(t,s) satisfies (1.3.17).

(iii) For fixed s ∈ [0,τ], G(t,s) satisfies

(1.3.38)   G_t(t,s) = A(t)G(t,s) - C(t)rZ(s).
CHAPTER II
MIHIMAX APPROXIMATE SOLUTIONS OF
LINEAR DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS
2.1. Introduction.
Let X be any compact metric space. C[X] will denote the Banach space of all real valued continuous functions on X with norm given by

(2.1.1) $\|f\| = \sup_{t \in X} |f(t)|$.

Unless otherwise specified, X = [0,T], where T is some positive real number. The norm then becomes

(2.1.2) $\|f\| = \max_{t \in [0,T]} |f(t)|$.

In this chapter we will also be using vectors $f = (f_1, \ldots, f_n)^T$, where $f_i \in C[0,T]$, i = 1,2,...,n. Define

(2.1.3) $|f(t)| = \max_{1 \le i \le n} |f_i(t)|$

and

(2.1.4) $\|f\| = \max_{t \in [0,T]} \max_{1 \le i \le n} |f_i(t)|$.

There should be no confusion between (2.1.2) and (2.1.4) since in the latter case we are dealing with vectors.
We also must define a norm on matrices. Let $B(t) = (b_{ij}(t))$, where $b_{ij}(t) \in C[0,T]$, i = 1,2,...,n, j = 1,2,...,n. Define

(2.1.5) $\|B\| = \max_{t \in [0,T]} \max_{1 \le i \le n} \sum_{j=1}^n |b_{ij}(t)|$.
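The norms (2.1.3)-(2.1.5) are straightforward to evaluate numerically. A minimal sketch in Python, where a uniform grid stands in for the true maximum over [0,T] (an assumption made only for illustration):

```python
import numpy as np

def vec_uniform_norm(f, T=1.0, num=1001):
    """Approximate (2.1.4): max over t in [0,T] of max_i |f_i(t)|."""
    ts = np.linspace(0.0, T, num)
    return max(np.max(np.abs(f(t))) for t in ts)

def mat_uniform_norm(B, T=1.0, num=1001):
    """Approximate (2.1.5): max over t of the largest row sum of |b_ij(t)|."""
    ts = np.linspace(0.0, T, num)
    return max(np.max(np.sum(np.abs(B(t)), axis=1)) for t in ts)

f = lambda t: np.array([t, 1.0 - t])            # sample vector function
B = lambda t: np.array([[t, -t], [0.0, 1.0]])   # sample matrix function
print(vec_uniform_norm(f))   # 1.0
print(mat_uniform_norm(B))   # 2.0
```

With this pairing of norms, the inequality $\|B(t)f(t)\| \le \|B\|\,\|f\|$ used repeatedly below holds.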
In this chapter we will consider finding vector polynomial approximations to solutions of the systems:

(2.1.6) $y' = A(t)y + f(t)$, $My(0) + Ny(T) = 0$,

and

(2.1.7) $y' = A(t)y + f(t)$, $My(0) + Ny(T) = b$,

where A(t) is a continuous n × n matrix on [0,T], f(t) is a continuous vector on [0,T], b is a constant vector, and M and N are n × n constant matrices such that the n × 2n matrix W = (M,N) has rank n.
2.2 Homogeneous Boundary Conditions.
In this section we will examine vector polynomial approximations to solutions of (2.1.6). We will assume throughout this section that there exists a unique solution to (2.1.6). Let

$Q_k = \{p(t):\ p(t)$ is a polynomial of degree k or less$\}$

and

$P_k = \{p(t):\ p(t) = (p_1(t), \ldots, p_n(t))^T$ such that $p_i(t) \in Q_k$ for i = 1,2,...,n and $Mp(0) + Np(T) = 0\}$.

We will call $p^k \in P_k$ a minimax approximate solution (MAS) of (2.1.6) from $P_k$ if

(2.2.1) $\inf_{p \in P_k} \|p' - Ap - f\| = \|(p^k)' - Ap^k - f\|$.
Theorem 2.1. If y is the unique solution of (2.1.6), then there exists a MAS $p^k$ of (2.1.6) from $P_k$.
Proof. Let $S = \{u(t):\ u(t) = (u_1(t), \ldots, u_n(t))^T$ such that $u_i(t) \in C[0,T]$, i = 1,2,...,n, and $Mu(0) + Nu(T) = 0\}$. Define the linear operator L by

(2.2.2) $Lu = u' - Au$.

Also, define the uniform norm $\|\cdot\|_L$ by

(2.2.3) $\|u\|_L = \|Lu\|$.

With these definitions, we get

(2.2.4) $\|p' - Ap - f\| = \|p' - Ap - y' + Ay\| = \|L(p - y)\| = \|p - y\|_L$

for all $p \in P_k$. $P_k$ is a finite dimensional subspace of S. By theorem 1.2 there exists a $p^k \in P_k$ such that

$\inf_{p \in P_k} \|p - y\|_L = \|p^k - y\|_L$.

We, therefore, get the final result.
We will now show that if $p^k$ is a MAS of (2.1.6) from $P_k$ for each k ≥ n + 1, then $p^k$ converges uniformly to the unique solution y of (2.1.6). It also follows that $(p^k)'$ converges uniformly to $y'$ on [0,T]. In order to establish these results, we prove the following lemma.
Lemma 2.1. If y is the unique solution to (2.1.6) and $p^k$ is a MAS of (2.1.6) from $P_k$ for each k ≥ n + 1, then

$\lim_{k\to\infty} \|(p^k)' - Ap^k - f\| = 0$.

Proof. Let E be a constant n × n matrix such that $E^n = 0$. Since y is the unique solution to (2.1.6), it satisfies the system

(2.2.5) $u'(t) = Eu(t) + g(t)$, $Mu(0) + Nu(T) = 0$,

where

(2.2.6) $g(t) = (A(t) - E)y(t) + f(t)$.

We have that $g = (g_1, \ldots, g_n)^T$, where $g_i \in C[0,T]$ for i = 1,2,...,n. From theorem 1.1, there exist polynomials $r_i^{k-n} \in Q_{k-n}$ such that

$\lim_{k\to\infty} \|r_i^{k-n} - g_i\| = 0$

for i = 1,2,...,n. Let $r^{k-n} = (r_1^{k-n}, \ldots, r_n^{k-n})^T$. Then

(2.2.7) $\lim_{k\to\infty} \|r^{k-n} - g\| = 0$.
29
At this point we will need to break up the proof into fwo cases
as to whether the system
(2.2.8)
u'(t) = Eu(t)
Mut0) + Nu(T).= 0
is compatible or incompatible.
Case I.
Assume system (2.2.8) is incompatible.
Then ^ is the
unique, solution to (2.2.5) and there exists a unique Green's matrix
G(t,s) such that
T.
(2.2.9)
y(t) =
Let,
/ G(t,s)g(s).ds.
0
.
T
/ G(t,s)rk~n (s)ds.
0
By (1.3.9) and (1.3.15), this implies that the components
(2.2.10)
wk =
polynomials of degree k or less.
of w
From theorem 1.18,. we get that
Mwk (O) + .Nwk (T) = O . Therefore, w
e
P
for k > n + I.
We have that
w k (t) - y(t) = / G(t,s)["rk n (s) - g(s)jds
'■
0
and
( ^ ( t ) ) ’ - y ’(t) = / G (t,s)i"rk n (s) - g(s)]ds.
0
This implies that
(2.2.11)
IIwk - ylI < IIrk'n - glI / l|G(.,s)|Ids
0
are
30
and
(2.2.12)
II(Wk)' - y ’|I < I|rk_n
T
glI / IIg ( *,s)IIds.
o
z
It now follows from (2.2.7) that
(2.2.13)
lim| |v$k - ^l I
o
k-*»
and
(2.2.14)
Iiml |(vSk)' - y'| I = 0.
k^o°
In order to establish that IK w k )' - Awk
fI I approach zero.
we need the following relation.
(2.2.15)
II(Wk )'
- Awk - f I I
= I I (vSk) '
< I K#")'
- Awk - y ' + Ayl I
- i 'W + IlAlI Il^k - f l l
From (2.2.13) and (2.2.14) we then get that
I i m I I(Wk ) ' - Awk - "f 1.1 = 0.
k-x”
Finally from (2.2.1) we know that
(2.2.16)
0 _<
I I Cpk ) '
- Apk -
fI I
_< I I (wk)* - AvSk - f II
w h i c h implies, from (2.2.16), that
Iim I I Cpk ) ' - Apk - fI
k-*»
Case I I .
I =
0.
Case II. Assume system (2.2.8) is compatible. Since y is the unique solution to (2.1.6), there is at least one solution to (2.2.5), namely y. This implies, using theorem 1.19, that

(2.2.17) $\int_0^T {}_rZ(s)\,g(s)\,ds = 0$,

where Z(t) is the n × n matrix defined on page 20. Therefore, by theorem 1.20, there exists a generalized Green's matrix G(t,s) such that

(2.2.18) $y(t) = \sum_{i=1}^r y_i(t)\,c_i + \int_0^T G(t,s)\,g(s)\,ds$,

where $y_i$, i = 1,2,...,r, are independent solutions of (2.2.8) and $c_i$, i = 1,2,...,r, are constants.
The first r rows of ${}_rZ(t)$ are independent solutions of (1.3.10) and (1.3.20). Denote them by ${}_rz^i$, i = 1,2,...,r. Without loss of generality, we can assume that

(2.2.19) $\int_0^T {}_rz^i(s)\,({}_rz^j(s))^T\,ds = \delta_{ij}$.

Let

(2.2.20) $\alpha_{i,k-n} = \int_0^T {}_rz^i(s)\,r^{k-n}(s)\,ds$.

Then, define

(2.2.21) $q^{k-n}(t) = r^{k-n}(t) - \sum_{i=1}^r \alpha_{i,k-n}\,({}_rz^i(t))^T$.

For k ≥ n + 1, from (1.3.9) and (1.3.13) we get that the components of $q^{k-n}$ are of degree k − n or less. Using (2.2.19), (2.2.20) and (2.2.21), we get

$\int_0^T {}_rz^j(s)\,q^{k-n}(s)\,ds = \int_0^T {}_rz^j(s)\Big[r^{k-n}(s) - \sum_{i=1}^r \alpha_{i,k-n}({}_rz^i(s))^T\Big]ds = \alpha_{j,k-n} - \alpha_{j,k-n} = 0$.

This implies that

(2.2.22) $\int_0^T {}_rZ(s)\,q^{k-n}(s)\,ds = 0$.

Let $\beta_i = T\|{}_rz^i\|$ for i = 1,2,...,r, and $\gamma = 1 + \sum_{i=1}^r \beta_i\|{}_rz^i\|$. From (2.2.17), (2.2.20) and (2.2.21) we have

$\alpha_{i,k-n} = \int_0^T {}_rz^i(s)\,[r^{k-n}(s) - g(s)]\,ds$.

This implies that

(2.2.23) $|\alpha_{i,k-n}| \le T\|{}_rz^i\|\,\|r^{k-n} - g\| = \beta_i\,\|r^{k-n} - g\|$

for i = 1,2,...,r. Therefore,

(2.2.24) $\|q^{k-n} - g\| = \Big\|r^{k-n} - g - \sum_{i=1}^r \alpha_{i,k-n}({}_rz^i)^T\Big\| \le \|r^{k-n} - g\| + \sum_{i=1}^r |\alpha_{i,k-n}|\,\|{}_rz^i\| \le \|r^{k-n} - g\|\Big[1 + \sum_{i=1}^r \beta_i\|{}_rz^i\|\Big] = \gamma\,\|r^{k-n} - g\|$

for k ≥ n + 1, where γ does not depend on k. From (2.2.7) and (2.2.24) we have that

(2.2.25) $\lim_{k\to\infty} \|q^{k-n} - g\| = 0$.

We can now define

(2.2.26) $w^k(t) = \sum_{i=1}^r y_i(t)\,c_i + \int_0^T G(t,s)\,q^{k-n}(s)\,ds$.

From (1.3.9) we know that the components of $y_i$ are polynomials of degree n − 1 or less; therefore the components of $w^k(t)$ are polynomials of degree k or less. Also, by definition 1.7 and theorem 1.20, since $q^{k-n}$ satisfies (2.2.22), we have that $Mw^k(0) + Nw^k(T) = 0$. Therefore $w^k \in P_k$, and we proceed exactly as in Case I. This completes the proof.
We can now state and prove our main result.
Theorem 2.2. If y is the unique solution to (2.1.6) and $p^k$ is a MAS of (2.1.6) from $P_k$ for each k ≥ n + 1, then

$\lim_{k\to\infty} \|(p^k)^{(i)} - y^{(i)}\| = 0$, i = 0,1.

Proof. Here B(t,s) will be the unique Green's matrix associated with the system (2.1.6). Let $p^k$ be a MAS of degree k or less for each k ≥ n + 1. Next, set

(2.2.27) $\Delta^k = (p^k)' - Ap^k$.

Then

$y(t) = \int_0^T B(t,s)\,f(s)\,ds$, $\quad y'(t) = \int_0^T B_t(t,s)\,f(s)\,ds$,

$p^k(t) = \int_0^T B(t,s)\,\Delta^k(s)\,ds$, and $\quad (p^k(t))' = \int_0^T B_t(t,s)\,\Delta^k(s)\,ds$.

This implies that

(2.2.28) $\|p^k - y\| \le \int_0^T \|B(\cdot,s)\|\,\|\Delta^k - f\|\,ds \le \|(p^k)' - Ap^k - f\| \int_0^T \|B(\cdot,s)\|\,ds$

and

(2.2.29) $\|(p^k)' - y'\| \le \int_0^T \|B_t(\cdot,s)\|\,\|\Delta^k - f\|\,ds \le \|(p^k)' - Ap^k - f\| \int_0^T \|B_t(\cdot,s)\|\,ds$.

From lemma 2.1 we obtain the final result. This completes the proof.
From (2.2.11), (2.2.12), (2.2.15), (2.2.24), and (2.2.28) we have
(2.2.30) $\|(p^k)^{(i)} - y^{(i)}\| \le M_0\,\|r^{k-n} - g\|$, i = 0,1,

for all k ≥ n + 1 and some constant $M_0$. In lemma 2.1, we choose the $r_i^{k-n}$'s, i = 1,2,...,n, k ≥ n + 1, such that

(2.2.31) $\|r_i^{k-n} - g_i\| = \inf_{p \in Q_{k-n}} \|p - g_i\|$.

Then from theorem 1.13,

$E_{k-n}(g_i) \le (\pi T/4)^m\,\|g_i^{(m)}\|\,/\,[(k-n+1)(k-n)\cdots(k-n-m+2)]$

for i = 1,2,...,n and all k ≥ n + m, provided $g_i \in C^m[0,T]$ for each i. This would then give us

(2.2.32) $\|r^{k-n} - g\| \le (\pi T/4)^m\,\|g^{(m)}\|\,/\,[(k-n+1)(k-n)\cdots(k-n-m+2)]$

for all k ≥ n + m, provided $g_i \in C^m[0,T]$, i = 1,2,...,n. Using (2.2.6), (2.2.30) and (2.2.32) we then get the following corollary.
Corollary 2.1. If the components of A(t) and f(t) are elements of $C^m[0,T]$, there is a constant a independent of k such that

(2.2.33) $\|(p^k)^{(i)} - y^{(i)}\| \le \frac{a}{k^m}$, i = 0,1.

2.3 Nonhomogeneous Boundary Conditions.
In this section we will turn to vector polynomial approximations to solutions of (2.1.7). Again, we will assume that there exists a unique solution to (2.1.7).
For i = 1,2,...,n, let $h_i(t)$ be a polynomial of degree $k_0$ or less such that

(2.3.1) $Mh(0) + Nh(T) = b$,

where $h(t) = (h_1(t), \ldots, h_n(t))^T$. We have assumed (2.1.7) has a unique solution, so there exists such a vector h(t) for some $k_0$. ($k_0$ could be taken as one, since we can always interpolate the solution to (2.1.7) at the points t = 0 and t = T.)
Let y be the unique solution to (2.1.7) and define $v(t) = y(t) - h(t)$.
Then

$v'(t) = A(t)v(t) + [-h'(t) + A(t)h(t) + f(t)]$

and

$Mv(0) + Nv(T) = 0$.

We then have the system

(2.3.2) $v'(t) = A(t)v(t) + g(t)$, $Mv(0) + Nv(T) = 0$,

where

(2.3.3) $g(t) = -h'(t) + A(t)h(t) + f(t)$.

For k ≥ $k_0$, let $\hat P_k = \{p(t):\ p(t) = (p_1(t), \ldots, p_n(t))^T$, $p_i(t) \in Q_k$ for i = 1,2,...,n, and $Mp(0) + Np(T) = b\}$. We will call $q^k \in \hat P_k$ a minimax approximate solution (MAS) of (2.1.7) from $\hat P_k$ if

(2.3.4) $\inf_{q \in \hat P_k} \|q' - Aq - f\| = \|(q^k)' - Aq^k - f\|$.
Theorem 2.3. For k ≥ max{$k_0$, n + 1}, let $p^k$ be a MAS of (2.3.2) from $P_k$ and let $q^k(t) = p^k(t) + h(t)$. Then $q^k$ is a MAS of (2.1.7) from $\hat P_k$ and

$\|(q^k)' - Aq^k - f\| = \|(p^k)' - Ap^k - g\|$.

Proof. Let $p \in P_k$ and let $q(t) = p(t) + h(t)$. Since k ≥ $k_0$, we have that $q \in \hat P_k$. We also have that

(2.3.5) $\|q' - Aq - f\| = \|p' + h' - Ap - Ah - f\| = \|p' - Ap - g\|$.

Since $p \in P_k$ was arbitrary, we have the last statement in the theorem.
Now let $q \in \hat P_k$ and let $p(t) = q(t) - h(t)$. We have that $p \in P_k$. We then have

$\|(q^k)' - Aq^k - f\| = \|(p^k)' - Ap^k - g\| \le \|p' - Ap - g\| = \|q' - Aq - f\|$.

Since q was arbitrary, the proof is complete.
Corollary 2.2. For each k ≥ max{$k_0$, n + 1}, let $p^k$ be a MAS of (2.3.2) from $P_k$ and let $q^k(t) = p^k(t) + h(t)$. Then if y is the unique solution to (2.1.7),

$\lim_{k\to\infty} \|(q^k)^{(i)} - y^{(i)}\| = 0$, i = 0,1.

Proof. We have

(2.3.6) $\|q^k - y\| = \|p^k - v\|$

and

(2.3.7) $\|(q^k)' - y'\| = \|(p^k)' - v'\|$.

From section 2.2 we get our result.
Also from section 2.2 and equations (2.3.3), (2.3.6) and (2.3.7) we get the following corollary.
Corollary 2.3. If the components of A(t) and f(t) are elements of $C^m[0,T]$, then there is a constant a independent of k such that

(2.3.8) $\|(q^k)^{(i)} - y^{(i)}\| \le \frac{a}{k^m}$, i = 0,1.
2.4 Discretization
In practice, instead of finding a MAS of (2.1.7) in $\hat P_k$ satisfying (2.3.4), we find a $p_X^k \in \hat P_k$ satisfying

(2.4.1) $\inf_{p \in \hat P_k} \max_{t \in X} |p'(t) - A(t)p(t) - f(t)| = \max_{t \in X} |(p_X^k(t))' - A(t)p_X^k(t) - f(t)|$,

where X is a closed subset (usually finite) of [0,T]. The polynomial $p_X^k$ is called a discrete minimax approximate solution of (2.1.7) from $\hat P_k$. We will show that any discrete MAS is nearly a MAS for X "sufficiently dense" in [0,T].
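Because (2.4.1) is linear in the coefficients of p once X is finite, a discrete MAS can be computed by linear programming: minimize a bound ε on the residual at the grid points. The sketch below is my own illustration (not the thesis's Remes-based computation) on the scalar toy problem y' = y, y(0) = 1 over [0,1], whose exact solution is $e^t$:

```python
import numpy as np
from numpy.polynomial import Polynomial
from scipy.optimize import linprog

k = 5                                   # polynomial degree
ts = np.linspace(0.0, 1.0, 41)          # the closed set X

def row(t):
    # Residual r(t) = p'(t) - p(t) as a linear form in the coefficients a_j.
    return [(j * t**(j - 1) if j > 0 else 0.0) - t**j for j in range(k + 1)]

R = np.array([row(t) for t in ts])
ones = np.ones((len(ts), 1))
A_ub = np.vstack([np.hstack([R, -ones]),       #  r(t_i) - eps <= 0
                  np.hstack([-R, -ones])])     # -r(t_i) - eps <= 0
b_ub = np.zeros(2 * len(ts))
A_eq = np.zeros((1, k + 2)); A_eq[0, 0] = 1.0  # boundary condition p(0) = 1
b_eq = [1.0]
c = np.zeros(k + 2); c[-1] = 1.0               # minimize eps
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (k + 1) + [(0, None)])
p = Polynomial(res.x[:k + 1])
err = max(abs(p(t) - np.exp(t)) for t in ts)
print(res.x[-1], err)   # discrete minimax residual and uniform error, both small
```

The variables are the k + 1 coefficients plus ε; the two inequality blocks encode $|r(t_i)| \le \varepsilon$, and the theorem below explains why a dense enough X makes this nearly optimal in the continuous norm.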
Let $\hat p^k$ be a MAS of (2.1.7) from $\hat P_k$. Define the operator L by

$Lu = u' - Au - f$,

where $Lu = ((Lu)_1, \ldots, (Lu)_n)^T$. Also, let

(2.4.2) $\delta_k = \|L\hat p^k\|$

and

(2.4.3) $\delta_{k,X} = \max_{t \in X} |(L\hat p^k)(t)|$.

Theorem 2.4. Given ε > 0, there is a δ > 0 such that if |X| < δ, then $\delta_k \le \|Lp_X^k\| \le \delta_{k,X} + \varepsilon$.
Proof. Let ε > 0 and let $0 < \varepsilon_1 < \varepsilon/\delta_k$. For each i = 1,2,...,n, apply theorem 1.10 to the finite dimensional linear span of $\{(Lp)_i:\ p \in \hat P_k\}$. There are $\beta_i$'s, i = 1,2,...,n, independent of X, such that if $|X| < \beta_i$, then

(2.4.4) $\|(Lp)_i\| \le (1 + \varepsilon_1) \max_{t \in X} |(Lp)_i(t)|$, i = 1,2,...,n,

for all $p \in \hat P_k$. Set $\delta = \min_{1 \le i \le n} \beta_i$. Then if |X| < δ,

(2.4.5) $\|Lp\| \le (1 + \varepsilon_1) \max_{t \in X} \max_{1 \le i \le n} |(Lp)_i(t)|$

for all $p \in \hat P_k$. Let X be any closed subset of [0,T] such that |X| < δ. Then, using the optimality of $p_X^k$ on X,

(2.4.6) $\delta_k \le \|Lp_X^k\| \le (1 + \varepsilon_1) \max_{t \in X} |(Lp_X^k)(t)| \le (1 + \varepsilon_1)\delta_{k,X} \le (1 + \varepsilon/\delta_k)\delta_{k,X} = \delta_{k,X} + (\delta_{k,X}/\delta_k)\varepsilon \le \delta_{k,X} + \varepsilon$,

since it is clear that $\delta_{k,X} \le \delta_k$. This completes the proof.
Corollary 2.4. Given ε > 0 there is a δ > 0 such that if |X| < δ, then

$\delta_k \le \|Lp_X^k\| \le \delta_k + \varepsilon$,

where $p_X^k$ satisfies (2.4.1).
Proof. From theorem 2.4,

$\delta_k \le \|Lp_X^k\| \le \delta_{k,X} + \varepsilon \le \delta_k + \varepsilon$.
Let

(2.4.7) $\Delta_{k,X} = (p_X^k)' - Ap_X^k - f$,

where $p_X^k \in \hat P_k$ satisfies (2.4.1). If y is the unique solution to (2.1.7), then there exists a unique Green's matrix B(t,s) associated with the system (2.1.6) and a vector function h which satisfies

(2.4.8) $u'(t) = A(t)u(t)$, $Mu(0) + Nu(T) = b$.

Then

(2.4.9) $p_X^k(t) = h(t) + \int_0^T B(t,s)\,(\Delta_{k,X}(s) + f(s))\,ds$

for all t ∈ [0,T]. For X sufficiently dense in [0,T], corollary 2.4 implies that

(2.4.10) $\|(p_X^k)' - Ap_X^k - f\| \le \delta_k + 1$.

This gives us

(2.4.11) $\|p_X^k\| \le \|h\| + \big[\|\Delta_{k,X}\| + \|f\|\big]\int_0^T \|B(\cdot,s)\|\,ds \le \|h\| + \big[\delta_k + 1 + \|f\|\big]\int_0^T \|B(\cdot,s)\|\,ds$.

Now consider a sequence $\{X_m\}_{m=1}^\infty$ of closed subsets of [0,T], where $|X_m| \to 0$ as $m \to \infty$. By (2.4.11) the sequence $\{p_{X_m}^k\}_{m=1}^\infty$ is uniformly
bounded over [0,T] and, therefore, has a cluster point $\hat q^k \in \hat P_k$.
Theorem 2.5. The vector polynomial $\hat q^k$ is a MAS of (2.1.7) from $\hat P_k$.
Proof. Suppose $p_{X_{m(\ell)}}^k \to \hat q^k$ as $\ell \to \infty$. By corollary 2.4,

$\|Lp_{X_{m(\ell)}}^k\| \to \delta_k$ as $\ell \to \infty$.

Since $(p_{X_{m(\ell)}}^k)^{(i)} \to (\hat q^k)^{(i)}$, i = 0,1, as $\ell \to \infty$, we have that $\|L\hat q^k\| = \delta_k$. This completes the proof.

2.5 Examples
In this section we will look at examples of computing a MAS for (2.1.6) or (2.1.7). The first two are examples using the generalized Green's matrix, while the last uses the standard Green's matrix. In the case of the generalized Green's matrix we will use the principal generalized Green's matrix defined in section 1.3.
In all of the examples of this section we used the second algorithm of Remes to compute the best approximation to a vector function f, whose components are in C[0,T], by means of generalized polynomials $q = \sum_i a_i p_i$, where the $p_i$ are basis elements. It should be noted that this may not be a Haar set. However, the algorithm of Remes still seems to work.
Example 2.1. Consider

(2.5.1) $y'' + \pi^2 y = 1$, $t \in [0,1]$,
$y(0) - y'(0) = 0$, $\quad y(1) - 2y'(1) = 0$.

Writing this in the form of (2.1.6) we have

(2.5.2) $\begin{pmatrix} y \\ y' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -\pi^2 & 0 \end{pmatrix}\begin{pmatrix} y \\ y' \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}$,

$\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} y(0) \\ y'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} y(1) \\ y'(1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.
The system corresponding to (2.2.8) is

(2.5.3) $\begin{pmatrix} u \\ u' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u \\ u' \end{pmatrix}$,

$\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u(0) \\ u'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & -2 \end{pmatrix}\begin{pmatrix} u(1) \\ u'(1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

The principal matrix for (2.5.3) is given by

$Y(t) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$.

Then the characteristic matrix is given by

$D = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}$,

which is singular.
It can be easily shown, in the case that D is a 2 × 2 real singular matrix, that the Moore-Penrose generalized inverse for

(2.5.4) $D = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{pmatrix}$

is given by

(2.5.5) $D^+ = \frac{1}{\alpha}\begin{pmatrix} D_{11} & D_{21} \\ D_{12} & D_{22} \end{pmatrix}$,

where

(2.5.6) $\alpha = D_{11}^2 + D_{12}^2 + D_{21}^2 + D_{22}^2$.

In this case, then, the Moore-Penrose generalized inverse is

$D^+ = \frac{1}{4}\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}$.
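Formula (2.5.5), which says that for a singular 2 × 2 matrix $D^+ = D^T/\alpha$ with α the sum of squared entries, can be checked against a general-purpose pseudoinverse routine. A quick sketch (my own check, not part of the thesis):

```python
import numpy as np

D = np.array([[1.0, -1.0],
              [1.0, -1.0]])             # the singular characteristic matrix above
alpha = np.sum(D**2)                     # (2.5.6): alpha = 4
D_plus = D.T / alpha                     # (2.5.5)
assert np.allclose(D_plus, np.linalg.pinv(D))
print(D_plus)                            # [[ 0.25  0.25] [-0.25 -0.25]]
```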
The principal generalized Green's matrix is

$G(t,s) = \begin{pmatrix} 1-t & 2ts+t-1 \\ 0 & s \end{pmatrix}$, $s < t$; $\qquad G(t,s) = \begin{pmatrix} -t & 2ts+s-1 \\ 0 & s-1 \end{pmatrix}$, $s > t$.
Referring to (2.2.26), we can obtain a class of polynomials $p = (p_1, p_2)^T$ in $P_k$ given by

$p_1(t) = a_1(1+t) + a_3(2 + 2t - 5t^2 + 3t^3) + \cdots + a_k\big(2(k-2) + 2(k-2)t - (2k-1)t^2 + 3t^k\big)$

and

$p_2(t) = (p_1(t))'$.

Now apply L, given by (2.2.2), to p in order to obtain a basis for approximation, given by

$g_m(t) = -2(2m+1) + 3m(m+1)t^{m-1} + \pi^2\big(2(m-1) + 2(m-1)t - (2m+1)t^2 + 3t^{m+1}\big)$, $m = 2, \ldots, k-1$.

The MAS of degree 6 is given by $p^6 = (p_1^6, p_2^6)^T$ where

$p_1^6(t) = -0.203361 - 0.203361t + 1.497168t^2 + 0.438488t^3 - 1.636595t^4 + 0.439024t^5 + 0.074637t^6$

and

$p_2^6(t) = (p_1^6(t))'$.

The actual solution to (2.5.2) is given by $y = (y_1, y_2)^T$ where

$y_1(t) = \frac{1}{\pi^3}\big[\pi - 3\pi\cos\pi t - 2\sin\pi t\big]$

and

$y_2(t) = y_1'(t)$.

The uniform error is given by

$\|p^6 - y\| = 0.000718$.
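The reported error can be spot-checked by evaluating the degree-6 MAS against the closed-form solution on a fine grid (first component only, assuming the printed coefficients are exact):

```python
import numpy as np

coef = [-0.203361, -0.203361, 1.497168, 0.438488, -1.636595, 0.439024, 0.074637]
p1 = np.polynomial.Polynomial(coef)
y1 = lambda t: (np.pi - 3*np.pi*np.cos(np.pi*t) - 2*np.sin(np.pi*t)) / np.pi**3

ts = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(p1(ts) - y1(ts)))
print(err)   # about 7.2e-4, matching the reported uniform error
```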
Example 2.2. Consider

(2.5.7) $y'' = -\frac{1}{1+t}\,y' + ty - t\ln(1+t)$, $t \in [0,1]$,
$y(0) + y'(0) = 1$,
$y(1) = \ln 2 = 0.693147$.

Writing this in the form of (2.1.7), we have

(2.5.8) $\begin{pmatrix} y \\ y' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ t & -\frac{1}{1+t} \end{pmatrix}\begin{pmatrix} y \\ y' \end{pmatrix} + \begin{pmatrix} 0 \\ -t\ln(1+t) \end{pmatrix}$,

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} y(0) \\ y'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} y(1) \\ y'(1) \end{pmatrix} = \begin{pmatrix} 1 \\ 0.693147 \end{pmatrix}$.
The system corresponding to (2.2.8) is

(2.5.9) $\begin{pmatrix} u \\ u' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u \\ u' \end{pmatrix}$,

$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u(0) \\ u'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} u(1) \\ u'(1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

The characteristic matrix is given by

$D = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$,

which is singular. The principal generalized Green's matrix is given by

$G(t,s) = \begin{pmatrix} 1-t & 0 \\ 0 & 1-s \end{pmatrix}$, $s < t$; $\qquad G(t,s) = \begin{pmatrix} -t & s-t \\ 0 & -s \end{pmatrix}$, $s > t$.
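The two branches above can be checked against property (i) of Theorem 1.23, the unit jump $G(s^+,s) - G(s^-,s) = I$ at t = s. A short numerical sketch (my own check):

```python
import numpy as np

def G(t, s):
    """Principal generalized Green's matrix of (2.5.9), as displayed above."""
    if s < t:
        return np.array([[1.0 - t, 0.0], [0.0, 1.0 - s]])
    return np.array([[-t, s - t], [0.0, -s]])

for s in (0.25, 0.5, 0.9):
    jump = G(s + 1e-12, s) - G(s - 1e-12, s)   # G(s+, s) - G(s-, s)
    assert np.allclose(jump, np.eye(2), atol=1e-9)
print("unit jump verified")
```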
The class of polynomials $p = (p_1, p_2)^T$ in $P_k$ is given by

$p_1(t) = a_3(t^3 - t^2 - t + 1) + a_4(t^4 - t^2 - 2t + 2) + \cdots + a_k\big(t^k - t^2 - (k-2)t + (k-2)\big)$

and

$p_2(t) = (p_1(t))'$.

The MAS of degree 6 is given by $p^6 = (p_1^6, p_2^6)^T$ where

$p_1^6(t) = 0.000061 + 0.999939t - 0.499329t^2 + 0.322210t^3 - 0.198946t^4 + 0.087425t^5 - 0.0182124t^6$

and

$p_2^6(t) = (p_1^6(t))'$.

The actual solution to (2.5.8) is given by $y = (y_1, y_2)^T$ where

$y_1(t) = \ln(1+t)$

and

$y_2(t) = (y_1(t))'$.

The uniform error is given by

$\|p^6 - y\| = 0.000061$.
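As with Example 2.1, the reported error can be spot-checked numerically against ln(1 + t) (first component only, assuming the printed coefficients are exact):

```python
import numpy as np

coef = [0.000061, 0.999939, -0.499329, 0.322210, -0.198946, 0.087425, -0.0182124]
p1 = np.polynomial.Polynomial(coef)
ts = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(p1(ts) - np.log(1.0 + ts)))
print(err)   # about 6.1e-5, matching the reported uniform error
```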
Example 2.3. Consider

(2.5.10) $y'' = 2ty' + 2y$, $t \in [0,1]$,
$y(0) - y'(0) = 1$, $\quad 2y(1) - y'(1) = 0$.

Writing this in the form of (2.1.7) we have

(2.5.11) $\begin{pmatrix} y \\ y' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ 2 & 2t \end{pmatrix}\begin{pmatrix} y \\ y' \end{pmatrix}$,

$\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} y(0) \\ y'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} y(1) \\ y'(1) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

The system corresponding to (2.2.8) is

$\begin{pmatrix} u \\ u' \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u \\ u' \end{pmatrix}$,

$\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} u(0) \\ u'(0) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} u(1) \\ u'(1) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$.

The characteristic matrix is given by

$D = \begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}$,

which is nonsingular. The unique Green's matrix is given by

$G(t,s) = \frac{1}{3}\begin{pmatrix} 1-2t & (2t-1)(s+1) \\ -2 & 2(s+1) \end{pmatrix}$, $s < t$; $\qquad G(t,s) = \frac{1}{3}\begin{pmatrix} -2(t+1) & (t+1)(2s-1) \\ -2 & 2s-1 \end{pmatrix}$, $s > t$.
48
Using (2.2.10),
the class
of polynomials p = (P11P2 )1 in Pfc is given by
P 1 (t) = a2 t 2 + a3 (t3 + -j (t + I)) + ...
k
I
+ ak (tK + j ( k - 2 ) (t + I)),
and
.
P2 Cfc) = (P1 Cfc))'•
The M S
of degree 6 is given by "p6 = (P1 , p2 )T where
P1(I) = 1.0001327 + 0.0001327t + 1.0176969t2
- 0.2567417t3 + 1.3785357t4 - 1.1914509t5
+ 0.76857213t6 ,
and
P2 Cfc) = (P1 Cfc))' •
. The actual solution to (2.5.11) is given by y = (y1 >y2 )T > where
y 1(t) = exp(t2 )
and
y2(fc) = Y1Cfc)The uniform error is given by
I I p 6 - y l I = 0.001404.
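The printed coefficients can also be checked against both the boundary conditions of (2.5.10) and the exact solution (first component only; small discrepancies reflect the seven-digit rounding of the coefficients):

```python
import numpy as np

coef = [1.0001327, 0.0001327, 1.0176969, -0.2567417, 1.3785357, -1.1914509, 0.76857213]
p1 = np.polynomial.Polynomial(coef)
dp1 = p1.deriv()

# Boundary conditions of (2.5.10), up to coefficient rounding:
print(p1(0) - dp1(0))        # y(0) - y'(0) = 1
print(2 * p1(1) - dp1(1))    # 2y(1) - y'(1) = 0, approximately

ts = np.linspace(0.0, 1.0, 2001)
err = np.max(np.abs(p1(ts) - np.exp(ts**2)))
print(err)   # about 1.4e-3, matching the reported uniform error
```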
CHAPTER III
APPROXIMATE SOLUTIONS OF NONLINEAR
DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS
3.1 Introduction
In this chapter we will examine vector polynomial approximations to a solution of the system

(3.1.1) $y' = Ey + f(t,y)$, $t \in [0,T]$,
$My(0) + Ny(T) = b$,

where E, M, and N are constant real n × n matrices such that $E^n = 0$ and the n × 2n matrix (M,N) has rank n. f(t,y) is continuous on $[0,T] \times \mathbb{R}^n$ with values in $\mathbb{R}^n$, and b is a constant real n × 1 vector.
All the norms used in this chapter will be the same as those given in Chapter II, Section 1.
Throughout this chapter, unlike Chapter II, we will assume that the system

(3.1.2) $y' = Ey$, $My(0) + Ny(T) = 0$

is incompatible. Then there exists a unique Green's matrix G(t,s) for the system (3.1.2). Let α be a number such that

(3.1.3) $\int_0^T \|G(\cdot,s)\|\,ds \le \alpha$.

Y(t) will represent the principal matrix for the equation

(3.1.4) $y' = Ey$.

Since $E^n = 0$, the components of Y(t) will be polynomials of degree n − 1 or less. Let D be the corresponding characteristic matrix and define

(3.1.5) $h(t) = Y(t)D^{-1}b$.

Then the components of h(t) are polynomials of degree n − 1 or less, and h(t) satisfies

(3.1.6) $h' = Eh$, $Mh(0) + Nh(T) = b$.
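For a nilpotent E, $Y(t) = e^{Et}$ is the finite sum $\sum_{j<n} (Et)^j/j!$, so h in (3.1.5) is easy to build explicitly. A sketch, reusing the 2 × 2 data of Example 2.3 (taking E, M, N, b from that example is my own choice for illustration):

```python
import numpy as np

E = np.array([[0.0, 1.0], [0.0, 0.0]])   # E^2 = 0
M = np.array([[1.0, -1.0], [0.0, 0.0]])
N = np.array([[0.0, 0.0], [2.0, -1.0]])
b = np.array([1.0, 0.0])
T, n = 1.0, 2

def Y(t):
    """Principal matrix Y(t) = exp(Et) = sum_{j<n} (Et)^j / j! (E nilpotent)."""
    out, P = np.eye(n), np.eye(n)
    for j in range(1, n):
        P = P @ (E * t) / j
        out = out + P
    return out

D = M @ Y(0.0) + N @ Y(T)                # characteristic matrix
c = np.linalg.solve(D, b)
h = lambda t: Y(t) @ c                   # h(t) = Y(t) D^{-1} b

# h satisfies (3.1.6): Mh(0) + Nh(T) = b
assert np.allclose(M @ h(0.0) + N @ h(T), b)
print(h(0.0), h(T))
```

Here $D = \begin{pmatrix} 1 & -1 \\ 2 & 1 \end{pmatrix}$ is nonsingular, consistent with the incompatibility assumption on (3.1.2).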
We will need to define the following sets:

$Q_k = \{p:\ p$ is a polynomial of degree k or less$\}$,
$W_k = \{p:\ p = (p_1, \ldots, p_n)^T$ and $p_i \in Q_k$, i = 1,2,...,n$\}$,

and

$P_k = \{p:\ p \in W_k$ and $Mp(0) + Np(T) = b\}$.

It should be noted that $P_k$ is not empty for k ≥ n − 1, since $h \in P_k$.
Suppose we have numbers m and R such that $|f(t,y)| \le m$ for all $t \in [0,T]$ and all y with $\|y - h\| \le R$. Now define the set

$S_k = \{p \in P_k:\ \|p - h\| \le 2m\alpha\}$.

Again, $S_k$ is not empty for k ≥ n − 1, since $h \in S_k$.
Throughout the remainder of the chapter, for convenience of notation, let

(3.1.7) $F[y](t) = f(t, y(t))$,

where $F[y] = (F_1[y], \ldots, F_n[y])^T$.
51
Let p e S^.
For k
• inf
(3,1.8)
n + I
||v- F [^J II = I|v - F [p]II
'raW
■
for some V^ £ Q^_n , i = I ,2,...,n .
i ='1,2,...,n, is unique.
operator
Fk :
Let V q = (v^,...,v^)
and define the
F ^ =
I I^ -
F [3?] is uniformly continuous for
F
T
wk_n •by
(3.19)
1.8
From Theorem 1.6 each v ^ ,
i?| I < 2m(*.
Therefore by Theorem
is a continuous operator for each k > n + I.
For v e W.
k-n’
k > n + I , let
Define the operator
■q(t) = h(t) + / G(t,s)v(s)ds.
0
by
(3.1.10)
B kv = q-
We know that
q(t) = Y(tXc + / Y(t - s)v(s)ds
0
for some constant c. Then q is in W^.
It also follows that "q satisfies
cf' = Eq + v
M^q(O) + Wq(T) = b .
Therefore, Iq £ J^. We then have that B ^ is a continuous operator
mapping W^_n into
V
for k ^ n +
I.
Finally define the operator
W *
5
(3.1.11)
v
-
V fV-
52
Since
is the composition of continuous operators,
is a continuous
operator.
3.2 Existence of Fixed Points
We will be interested in a polynomial $p \in S_k$ such that

(3.2.1) $T_k p = p$.

Such a p is called a fixed point of $T_k$.
Suppose $p^k \in S_k$ is a fixed point of $T_k$. Then

(3.2.2) $F_k p^k = v^{k-n} = (v_1^{k-n}, \ldots, v_n^{k-n})^T$,

where $v_i^{k-n}$ satisfies

(3.2.3) $\inf_{v \in Q_{k-n}} \|v - F_i[p^k]\| = \|v_i^{k-n} - F_i[p^k]\|$

for i = 1,2,...,n. Then

(3.2.4) $\inf_{v \in W_{k-n}} \|v - F[p^k]\| = \|v^{k-n} - F[p^k]\|$

for k ≥ n + 1. We also have that

$p^k(t) = h(t) + \int_0^T G(t,s)\,v^{k-n}(s)\,ds$.

This implies that

(3.2.5) $v^{k-n}(t) = (p^k(t))' - Ep^k(t)$.

Therefore, if $p^k$ is a fixed point of $T_k$, for k ≥ n + 1, then

(3.2.6) $\inf_{v \in W_{k-n}} \|v - F[p^k]\| = \|(p^k)' - Ep^k - F[p^k]\|$.

We will use the terminology of Henry and Wiggins [9] and call a fixed point $p^k$ of $T_k$, for k ≥ n + 1, a simultaneous approximation substitute of degree k, SAS.
Before determining the conditions on $T_k$ that assure the existence of a fixed point, we state the Schauder Fixed Point Theorem.
Theorem 3.1. Let X be a Banach space, let S be a compact, convex subset of X, and let T be a continuous map of S into itself. Then T has a fixed point $x \in S$, i.e., $Tx = x$.
Theorem 3.2. For fixed k ≥ n + 1, let $T_k$ be defined by (3.1.11). If $2m\alpha \le R$, then $T_k$ has a fixed point $p^k \in S_k$.
Proof. $S_k$ is a compact convex subset of the Banach space $W_k$, and $T_k$ is a continuous map from $S_k$ into $P_k \subset W_k$. In order to apply theorem 3.1 it must be shown that $T_k(S_k) \subset S_k$.
Let $p \in S_k$. If we let $v_p = F_k p$, then $v_p = (v_1^p, \ldots, v_n^p)^T$ where each $v_i^p$, i = 1,2,...,n, satisfies (3.1.8). Let $q = T_k p$. Set

(3.2.7) $e_i(t) = v_i^p(t) - F_i[p](t)$, i = 1,2,...,n,

and

(3.2.8) $e(t) = (e_1(t), \ldots, e_n(t))^T$.

Since $p \in S_k$ we have that $\|p - h\| \le 2m\alpha \le R$. Therefore,

(3.2.9) $\|e_i\| = \|v_i^p - F_i[p]\| \le \|F_i[p]\| \le m$, i = 1,2,...,n.

This implies that

(3.2.10) $\|e\| \le m$.

Then

(3.2.11) $q(t) = h(t) + \int_0^T G(t,s)\,v_p(s)\,ds = h(t) + \int_0^T G(t,s)\,[e(s) + F[p](s)]\,ds$

and

(3.2.12) $\|q - h\| \le \big[\|e\| + \|F[p]\|\big]\int_0^T \|G(\cdot,s)\|\,ds \le 2m\alpha$.

Therefore $q \in S_k$, and this completes the proof.
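To see the fixed-point structure in action, take the simplest setting n = 1, E = 0, with the initial condition y(0) = 1 as the boundary condition (so G(t,s) = 1 for s < t and 0 for s > t, and h ≡ 1), and f(t,y) = y. The sketch below replaces the best-approximation step of $F_k$ by plain degree truncation, a simplification made for illustration only, so this is not the thesis's operator; the iteration $p \leftarrow h + \int G v$ then converges to the degree-k truncated exponential series:

```python
import numpy as np
from numpy.polynomial import Polynomial

k = 8
p = Polynomial([1.0])                 # initial guess; satisfies p(0) = 1
for _ in range(k + 2):
    v = p.cutdeg(k - 1)               # "project" f(t,p) = p onto degree k-1 (truncation)
    p = 1.0 + v.integ()               # p(t) = 1 + integral_0^t v(s) ds
print(p(1.0))   # close to e = 2.71828...
```

After k + 1 iterations the map reaches its exact fixed point $p(t) = \sum_{j=0}^{k} t^j/j!$, mirroring (3.2.1).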
3.3 Convergence of Fixed Points
For each k ≥ n + 1, let $p^k \in S_k$ be a fixed point of $T_k$. We will prove that there is a subsequence of $\{p^k\}_{k=n+1}^\infty$ that converges to a function y, where y is a solution of (3.1.1). In fact, it will be shown that the first derivatives of the subsequence of polynomials converge to $y'$. In this direction we first prove the following lemma.
Lemma 3.1. For each k ≥ n + 1, let $p^k \in S_k$ be a fixed point of $T_k$. Let

(3.3.1) $\varepsilon_k(t) = (p^k(t))' - Ep^k(t) - F[p^k](t)$, $t \in [0,T]$.

Then

$\lim_{k\to\infty} \|\varepsilon_k\| = 0$.

Proof. Let

(3.3.2) $v^{k-n} = (p^k)' - Ep^k$,

where $v^{k-n} = (v_1^{k-n}, \ldots, v_n^{k-n})^T$. Since $p^k$ is a fixed point of $T_k$ we have

(3.3.3) $\inf_{v \in Q_{k-n}} \|v - F_i[p^k]\| = \|v_i^{k-n} - F_i[p^k]\|$, i = 1,2,...,n.

Now $\varepsilon_k = (e_{1,k}, \ldots, e_{n,k})^T$, which gives us, from (3.3.1) and (3.3.2), that

(3.3.4) $e_{i,k} = v_i^{k-n} - F_i[p^k]$, i = 1,2,...,n.
From theorem 1.13,

(3.3.5) $\|e_{i,k}\| \le \omega_{i,k}\!\left(\frac{T\pi}{k-n}\right)$,

where

(3.3.6) $\omega_{i,k}$ is the modulus of continuity of $F_i[p^k]$,

for i = 1,2,...,n and k ≥ n + 1. We have that

$|v_i^{k-n}(t)| \le |F_i[p^k](t)| + |v_i^{k-n}(t) - F_i[p^k](t)| \le m + \|v_i^{k-n} - F_i[p^k]\| \le m + \|F_i[p^k]\| \le 2m$, i = 1,2,...,n,

for all $t \in [0,T]$ and k ≥ n + 1. Since $p^k \in S_k$ it follows that

(3.3.7) $|p^k(t)| \le 2m\alpha + \|h\|$

for all $t \in [0,T]$ and k ≥ n + 1. We also have that

$(p^k)' = v^{k-n} + Ep^k$.

So

(3.3.8) $|(p^k(t))'| \le |v^{k-n}(t)| + |Ep^k(t)| \le 2m + \|E\|(2m\alpha + \|h\|)$
for all $t \in [0,T]$ and k ≥ n + 1. Therefore, from (3.3.7) and (3.3.8), the sequences $\{p^k\}_{k=n+1}^\infty$ and $\{(p^k)'\}_{k=n+1}^\infty$ are uniformly bounded on [0,T].
Using the mean value theorem we get that, for $t,s \in [0,T]$,

$\frac{p_i^k(t) - p_i^k(s)}{t - s} = (p_i^k(t_{i,k}))'$, i = 1,2,...,n, k ≥ n + 1,

where $t_{i,k}$ is between s and t for each i and k. Therefore,

$|p^k(t) - p^k(s)| \le \max\{|(p_1^k(t_{1,k}))'|, \ldots, |(p_n^k(t_{n,k}))'|\}\,|t - s|$

for $t,s \in [0,T]$ and k ≥ n + 1. This then gives us

(3.3.9) $|p^k(t) - p^k(s)| \le \big[2m + \|E\|(2m\alpha + \|h\|)\big]\,|t - s|$

for all $t,s \in [0,T]$ and k ≥ n + 1. Therefore the sequence $\{p^k\}_{k=n+1}^\infty$ is equicontinuous on [0,T].
Let ε > 0. Since f(t,y) is uniformly continuous on compact sets, we have, for $t,s \in [0,T]$,

$|F_i[p^k](t) - F_i[p^k](s)| < \varepsilon$, i = 1,2,...,n,

whenever

$\max\{|t - s|,\ |p^k(t) - p^k(s)|\} < \delta_i$

for some $\delta_i$, i = 1,2,...,n, independent of k. From (3.3.9) it follows that $|p^k(t) - p^k(s)| < \delta_i$ whenever $|t - s| < \delta_i'$, for some $\delta_i'$, i = 1,2,...,n, and k ≥ n + 1. Let $\hat\delta_i = \min\{\delta_i, \delta_i'\}$ for i = 1,2,...,n. Then

(3.3.10) $|F_i[p^k](t) - F_i[p^k](s)| < \varepsilon$, i = 1,2,...,n,

whenever $|t - s| < \hat\delta_i$ and k ≥ n + 1. We then have that $\omega_{i,k}(\hat\delta_i) \le \varepsilon$,
57
i = I ,2,...,n, independent of k.
Let K be large enough so that
T TT < .min
k - n - K K n
for k > K.
r
i
idi j
This implies that
(3.3.11)
for
i
=
1,2,.. ., n and; a ll
for
i
= 1 ,2,...
,n and
k
k > K.
K.
From (3.3.5)
Therefore
we get that Ne.
|| <e
II e' I| < e for a ll k > K which
completes the proof.
Theorem 3.3. If $p^k \in S_k$ is a fixed point of $T_k$ for each k ≥ n + 1 and $2m\alpha \le R$, then there exists a function y, whose components are in $C^1[0,T]$, and a subsequence $\{p^{k(j)}\}_{j=1}^\infty$ of $\{p^k\}_{k=n+1}^\infty$ such that

(3.3.12) $\lim_{j\to\infty} \|(p^{k(j)})^{(i)} - y^{(i)}\| = 0$, i = 0,1.

Moreover, y is a solution to system (3.1.1).
Proof. From (3.3.7) and (3.3.9) the sequence $\{p^k\}$ is equicontinuous and uniformly bounded on [0,T]. By Ascoli's theorem there is a subsequence $\{p^{k(j)}\}_{j=1}^\infty$ such that $p^{k(j)}(t) \to y(t)$ uniformly on [0,T] for some y. Using (3.3.1) we have that

$(p^{k(j)})' = \varepsilon_{k(j)} + Ep^{k(j)} + F[p^{k(j)}]$

for all j. From lemma 3.1 it follows that

(3.3.13) $(p^{k(j)}(t))' \to Ey(t) + F[y](t)$

uniformly on [0,T] as $j \to \infty$. Since $p^{k(j)}$ is a fixed point of $T_{k(j)}$ for each j, we have

$p^{k(j)}(t) = h(t) + \int_0^T G(t,s)\,[(p^{k(j)}(s))' - Ep^{k(j)}(s)]\,ds$.

Then

$\lim_{j\to\infty} p^{k(j)}(t) = h(t) + \lim_{j\to\infty} \int_0^T G(t,s)\,[(p^{k(j)}(s))' - Ep^{k(j)}(s)]\,ds$,

which implies that

$y(t) = h(t) + \int_0^T G(t,s)\,F[y](s)\,ds$.

Then $y'$ exists and y is a solution to (3.1.1). From (3.3.13) we get that $(p^{k(j)}(t))' \to y'(t)$ uniformly on [0,T] as $j \to \infty$. This completes the proof of the theorem.
3.4 Rate of Convergence
We will now investigate the rate of convergence of a sequence of fixed points defined in section 3.2. Here it is assumed that we have a sequence of fixed points $\{p^k\}_{k=n+1}^\infty$ such that $p^k \to y$ uniformly on [0,T], where y is a solution to (3.1.1). Also, f must satisfy the conditions in section 3.1, with the additional condition that

(3.4.1) $|f(t,y_1) - f(t,y_2)| \le K\|y_1 - y_2\|$

for some constant K, whenever $\|h - y_i\| \le 2m\alpha$, i = 1,2, and $t \in [0,T]$. We will also assume that $2m\alpha \le R$.
Theorem 3.4. For $f = (f_1, \ldots, f_n)^T$, if $f_i(t,p^k) \in C^m[0,T]$, i = 1,2,...,n, k ≥ n + 1, and $K\alpha < 1$, then there is a constant β, independent of k, such that

(3.4.2) $\|(p^k)^{(i)} - y^{(i)}\| \le \frac{\beta}{k^m}$, i = 0,1.
Proof. Since y is a solution to (3.1.1), we have

$y(t) = h(t) + \int_0^T G(t,s)\,F[y](s)\,ds$,

where, again, F[y](t) = f(t,y). Also, since $p^k$ is a fixed point of $T_k$, we have

$p^k(t) = h(t) + \int_0^T G(t,s)\,[(p^k(s))' - Ep^k(s)]\,ds$

for all k ≥ n + 1. Therefore,

$p^k(t) - y(t) = \int_0^T G(t,s)\,[(p^k(s))' - Ep^k(s) - F[y](s)]\,ds$.

Then

(3.4.3) $\|p^k - y\| \le \alpha\|(p^k)' - Ep^k - F[y]\| \le \alpha\|(p^k)' - Ep^k - F[p^k]\| + \alpha\|F[p^k] - F[y]\| \le \alpha\|(p^k)' - Ep^k - F[p^k]\| + \alpha K\|p^k - y\|$

for all k ≥ n + 1. Hence,

(3.4.4) $(1 - \alpha K)\|p^k - y\| \le \alpha\|\varepsilon_k\|$,

where

$\varepsilon_k = (p^k)' - Ep^k - F[p^k]$.

Since $K\alpha < 1$, we have that

(3.4.5) $\|p^k - y\| \le \frac{\alpha}{1 - \alpha K}\,\|\varepsilon_k\|$

for all k ≥ n + 1. Since

$y' = Ey + F[y]$

and

$(p^k)' = \varepsilon_k + Ep^k + F[p^k]$,

we have that

(3.4.6) $\|(p^k)' - y'\| \le \|\varepsilon_k\| + \|E\|\,\|p^k - y\| + K\|p^k - y\| \le \|\varepsilon_k\| + [\|E\| + K]\,\frac{\alpha}{1 - \alpha K}\,\|\varepsilon_k\|$.

Let

$v^{k-n} = (p^k)' - Ep^k$.

Then $v^{k-n} = (v_1^{k-n}, \ldots, v_n^{k-n})^T$, where

$\inf_{v \in Q_{k-n}} \|v - F_i[p^k]\| = \|v_i^{k-n} - F_i[p^k]\|$, i = 1,2,...,n.

From theorem 1.13, since $f_i(t,p^k) \in C^m[0,T]$ for i = 1,2,...,n and k ≥ n + 1, we have that

(3.4.7) $\|v_i^{k-n} - F_i[p^k]\| \le \frac{\theta_i}{k^m}$, i = 1,2,...,n,

for some constants $\theta_i$. Let $\theta = \max_i \theta_i$. Then

(3.4.8) $\|\varepsilon_k\| \le \frac{\theta}{k^m}$.

Let $\beta = \max\Big\{\frac{\theta\alpha}{1 - \alpha K},\ \frac{\theta(1 + \alpha\|E\|)}{1 - \alpha K}\Big\}$, and this completes the proof.
3.5 Comparison of SAS and MAS
In order to accomplish a comparison of the SAS of degree k to the MAS of degree k, we can only consider a special case of (1.3.1), of the type

(3.5.1) $y^{(n)}(t) = f(t, y, \ldots, y^{(n-1)})$, $t \in [0,T]$,
$\sum_{j=1}^n \big[c_{ij}\,y^{(j-1)}(0) + d_{ij}\,y^{(j-1)}(T)\big] = b_i$, i = 1,2,...,n,

where f is a continuous real valued scalar function on $[0,T] \times \mathbb{R}^n$ and $c_{ij}$, $d_{ij}$, and $b_i$ are real constants for i = 1,2,...,n and j = 1,2,...,n.
Let

$Q_k' = \{p:\ p \in Q_k$ and $\sum_{j=1}^n [c_{ij}\,p^{(j-1)}(0) + d_{ij}\,p^{(j-1)}(T)] = b_i$, i = 1,2,...,n$\}$.

Then a MAS of degree k, for this problem, would be a polynomial $q \in Q_k'$ such that

(3.5.2) $\inf_{v \in Q_k'} \|v^{(n)} - f(\cdot, v, \ldots, v^{(n-1)})\| = \|q^{(n)} - f(\cdot, q, \ldots, q^{(n-1)})\|$.

An SAS of degree k, for this problem, would be a polynomial $p \in Q_k'$ such that

(3.5.3) $\inf_{v \in Q_{k-n}} \|v - f(\cdot, p, \ldots, p^{(n-1)})\| = \|p^{(n)} - f(\cdot, p, \ldots, p^{(n-1)})\|$.
In order to use the theory of the previous sections, (3.5.1) will be rewritten as

(3.5.4) $\begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix}' = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}\begin{pmatrix} y \\ y' \\ \vdots \\ y^{(n-1)} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ f(t, y, \ldots, y^{(n-1)}) \end{pmatrix}$,

$\begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & & & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}\begin{pmatrix} y(0) \\ y'(0) \\ \vdots \\ y^{(n-1)}(0) \end{pmatrix} + \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & & & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{pmatrix}\begin{pmatrix} y(T) \\ y'(T) \\ \vdots \\ y^{(n-1)}(T) \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$,

which is in the form of (3.1.1), with $M = (c_{ij})$ and $N = (d_{ij})$. We will make all the assumptions set forth in sections 3.1 to 3.4, including (3.4.1) and $2m\alpha \le R$.
Let $P_k' = \{p:\ p = (p, p', \ldots, p^{(n-1)})^T$, $p \in Q_k$ and $Mp(0) + Np(T) = b\}$.
Then (3.5.2) becomes equivalent to finding a vector polynomial $q^k \in P_k'$ such that

(3.5.5) $\inf_{v \in P_k'} \|v' - Ev - F[v]\| = \|(q^k)' - Eq^k - F[q^k]\|$,

where F[y](t) = f(t,y). If f is nonlinear we no longer have a guarantee, as in chapter II, that such a polynomial exists.
Let $W_k' = \{p:\ p = (p, p', \ldots, p^{(n-1)})^T$, $p \in Q_k\}$. Then (3.5.3) becomes equivalent to finding a $p^k \in P_k'$ such that

(3.5.6) $\inf_{v \in W_{k-n}'} \|v - F[p^k]\| = \|(p^k)' - Ep^k - F[p^k]\|$.

Let $S_k' = \{p \in P_k':\ \|p - h\| \le 2m\alpha\}$ and define $T_k'$ in the same manner as $T_k$ in section 3.2. Then for each k ≥ n + 1 we are guaranteed a fixed point $p^k \in S_k'$ of $T_k'$, and this point satisfies (3.5.6).
Let $p^k$ be an SAS of (3.5.4) and let

$v^{k-n} = (p^k)' - Ep^k$.

Then $v^{k-n} = (v_1^{k-n}, \ldots, v_n^{k-n})^T$, where

$\inf_{v \in Q_{k-n}} \|v - F_i[p^k]\| = \|v_i^{k-n} - F_i[p^k]\|$, i = 1,2,...,n.

But $F_i[p^k] = 0$ for i = 1,2,...,n − 1, and $v_i^{k-n} = 0$ for i = 1,2,...,n − 1. From theorem 1.7 there is a constant γ, dependent on $F_n[p^k]$, such that for any $v \in Q_{k-n}$,

(3.5.7) $\|v - F_n[p^k]\| \ge \|v_n^{k-n} - F_n[p^k]\| + \gamma\|v - v_n^{k-n}\|$.

Theorem 3.5. For fixed k ≥ n + 1, let $p^k$ be an SAS for (3.5.4). If $K\alpha \le \gamma$, then $p^k$ is a MAS of (3.5.4) from $S_k'$.
Proof. Let $p \in S_k'$. Then $p = (p, p', \ldots, p^{(n-1)})^T$, where $p \in Q_k$.
Let

$v = p' - Ep$.

Then $v = (0, 0, \ldots, 0, v_n)^T$, where $v_n = p^{(n)}$. This implies that

$\|v^{k-n} - v\| = \|v_n^{k-n} - v_n\|$,
$\|v^{k-n} - F[p^k]\| = \|v_n^{k-n} - F_n[p^k]\|$,

and

$\|v - F[p^k]\| = \|v_n - F_n[p^k]\|$.

Therefore, from (3.5.7),

$\|v - F[p^k]\| \ge \|v^{k-n} - F[p^k]\| + \gamma\|v^{k-n} - v\|$.

We have that

$p^k(t) = h(t) + \int_0^T G(t,s)\,v^{k-n}(s)\,ds$

and

$p(t) = h(t) + \int_0^T G(t,s)\,v(s)\,ds$.

Then

(3.5.8) $\|p^k - p\| \le \alpha\|v^{k-n} - v\|$, so that $\|v^{k-n} - v\| \ge \frac{1}{\alpha}\|p^k - p\|$.

From (3.5.7) and (3.5.8) we get

(3.5.9) $\|v - F[p^k]\| \ge \|v^{k-n} - F[p^k]\| + \gamma\|v^{k-n} - v\| \ge \|v^{k-n} - F[p^k]\| + \frac{\gamma}{\alpha}\|p^k - p\|$.

We also have that

(3.5.10) $\|v - F[p^k]\| \le \|v - F[p]\| + \|F[p^k] - F[p]\| \le \|v - F[p]\| + K\|p^k - p\|$.

Then from (3.5.9) and (3.5.10) it follows that

$\|v - F[p]\| + K\|p^k - p\| \ge \|v^{k-n} - F[p^k]\| + \frac{\gamma}{\alpha}\|p^k - p\|$,

or

$\Big(\frac{\gamma}{\alpha} - K\Big)\|p^k - p\| \le \|v - F[p]\| - \|v^{k-n} - F[p^k]\|$.

Since $\frac{\gamma}{\alpha} - K \ge 0$, we have

$0 \le \|v - F[p]\| - \|v^{k-n} - F[p^k]\|$,

or

(3.5.11) $\|(p^k)' - Ep^k - F[p^k]\| \le \|p' - Ep - F[p]\|$.

Since p was an arbitrary element from $S_k'$, our theorem is proved.
Theorem 3.5 tells us that p^k is the "best" approximation out of the set S_k¹, not necessarily the "best" out of P_k. The only thing that holds us back in the proof of Theorem 3.5 is inequality (3.5.10). In order for

||F[p^k] − F[p̄]|| ≤ K||p^k − p̄||

we must have ||p^k − h|| ≤ 2mα and ||p̄ − h|| ≤ 2mα. The latter only holds if p̄ ∈ S_k¹.
Corollary 3.1.  For fixed k ≥ n + 1 let p^k be an SAS for (3.5.4). If

(3.5.12)  |f(t,y_1) − f(t,y_2)| ≤ K||y_1 − y_2||

for some constant K, for all y_1, y_2 and t ∈ [0,T], and Kα ≤ γ, then p^k is a MAS of (3.5.4) from P_k.

Instead of imposing the uniform Lipschitz condition of Corollary 3.1, we could also place extra conditions on f(t,y). If it is assumed that |f(t,y)| ≤ m for all y and t ∈ [0,T], then the condition that 2mα ≤ R is no longer needed, and we have the following corollary to Theorem 3.5.
Corollary 3.2.  For fixed k ≥ n + 1 let p^k be an SAS for (3.5.4). If |f(t,y)| ≤ m for all y and t ∈ [0,T] and Kα ≤ γ, then p^k is a MAS of (3.5.4) from P_k.

Proof.  Suppose for some p̄ ∈ P_k that

(3.5.13)  ||p̄' − Ep̄ − F[p̄]|| ≤ ||(p^k)' − Ep^k − F[p^k]||.

By (3.5.6), taking v = 0, it follows that

||(p^k)' − Ep^k − F[p^k]|| ≤ ||F[p^k]|| ≤ m.

Then from (3.5.13)

(3.5.14)  ||p̄' − Ep̄|| ≤ m + ||F[p̄]|| ≤ 2m.

Since p̄ ∈ P_k we have that

p̄(t) = h(t) + ∫_0^T G(t,s)[p̄'(s) − Ep̄(s)] ds,

which implies that

||p̄ − h|| ≤ 2mα.

Therefore p̄ ∈ S_k¹, and by Theorem 3.5

(3.5.15)  ||p̄' − Ep̄ − F[p̄]|| ≥ ||(p^k)' − Ep^k − F[p^k]||.

From (3.5.13) and (3.5.15) we get equality, and this completes the proof.
3.6  Computation of Fixed Points

We now turn to the task of computing a fixed point of T_k. Throughout this section k ≥ n + 1 will be fixed, and all of the conditions stated in sections 3.1 and 3.2 will be assumed. Let p_0 be any polynomial in S_k (this may be taken as h), and define

(3.6.1)   p_{m+1} = T_k p_m,  m = 0,1,2,... .

From the proof of Theorem 3.2, p_m ∈ S_k for each m. Since S_k is compact, the sequence {p_m}_{m=0}^∞ has a cluster point p̄ ∈ S_k. Therefore there exists a subsequence {p_{m(j)}}_{j=1}^∞ such that p_{m(j)} → p̄ as j → ∞, with respect to the norm ||·||. The remainder of this section will be devoted to proving that p̄ is a fixed point of T_k. Let

(3.6.2)   v_m = p'_{m+1} − Ep_{m+1},  m = 0,1,2,... .

Then v_m = (v_{1,m}, ..., v_{n,m})^T, where

(3.6.3)   inf_{v ∈ Q_{k−n}} ||v − F_i[p_m]|| = ||v_{i,m} − F_i[p_m]||,  i = 1,2,...,n,  m = 0,1,2,... .
Here, again, f(t,y) = F[y](t) = (F_1[y](t), ..., F_n[y](t))^T. Theorem 1.4 guarantees the existence of extremal sets X_{i,m} = {t_{1,i,m}, ..., t_{k−n+2,i,m}} for i = 1,2,...,n and m = 0,1,2,... . We know that the sequences {X_{i,m}}_{m=0}^∞ are contained in the compact set [0,T]^{k−n+2} for i = 1,2,...,n, and therefore have cluster points X_i = {t_{1,i}, ..., t_{k−n+2,i}}, i = 1,2,...,n. Without loss of generality it will be assumed that all subsequences from {p_m}_{m=0}^∞ and {X_{i,m}}_{m=0}^∞ that converge to p̄ and X_i, i = 1,2,...,n, involve the same indices. These subsequences will be denoted by {p_{m(j)}}_{j=1}^∞ and {X_{i,m(j)}}_{j=1}^∞, i = 1,2,...,n. Let

(3.6.4)   e_m = v_m − F[p_m] = p'_{m+1} − Ep_{m+1} − F[p_m],  m = 0,1,2,... .

We have that e_m = (e_{1,m}, ..., e_{n,m})^T. Let

(3.6.5)   u = p̄' − Ep̄

and define

(3.6.6)   e = u − F[p̄] = p̄' − Ep̄ − F[p̄],

where e = (e_1, ..., e_n)^T.

Theorem 3.6.  For t_{ℓ,i}, t_{ℓ+1,i} ∈ X_i, if

(3.6.7)   e_i(t_{ℓ,i}) = −e_i(t_{ℓ+1,i}),  ℓ = 1,2,...,k − n + 1,  i = 1,2,...,n,

then p̄ is a fixed point of T_k.
Proof.  Let

(3.6.8)   q = T_k p̄.

Let

(3.6.9)   v = q' − Eq.

Then v = (v_1, ..., v_n)^T, where

inf_{w ∈ Q_{k−n}} ||w − F_i[p̄]|| = ||v_i − F_i[p̄]||,  i = 1,2,...,n.

Let

(3.6.10)  ê = v − F[p̄] = q' − Eq − F[p̄],

where ê = (ê_1, ..., ê_n)^T. We have subsequences {p_{m(j)}}_{j=1}^∞ and {X_{i,m(j)}}_{j=1}^∞, i = 1,2,...,n, such that p_{m(j)} → p̄ and X_{i,m(j)} → X_i, i = 1,2,...,n, as j → ∞. From Theorem 1.4,

(3.6.11)  e_{i,m(j)}(t_{ℓ,i,m(j)}) = −e_{i,m(j)}(t_{ℓ+1,i,m(j)}) = ±||e_{i,m(j)}||

for ℓ = 1,...,k − n + 1, i = 1,2,...,n and j = 1,2,... . From Theorem 1.8 we have that

(3.6.12)  ||v_i − v_{i,m(j)}|| ≤ λ_i(p̄)||F_i[p̄] − F_i[p_{m(j)}]||

for i = 1,2,...,n and j = 1,2,... . Therefore, for i = 1,2,...,n, ℓ = 1,2,...,k − n + 2 and j = 1,2,..., we have

(3.6.13)  |e_{i,m(j)}(t_{ℓ,i,m(j)}) − ê_i(t_{ℓ,i})|
     ≤ |v_{i,m(j)}(t_{ℓ,i,m(j)}) − v_i(t_{ℓ,i,m(j)})| + |v_i(t_{ℓ,i,m(j)}) − v_i(t_{ℓ,i})|
       + |F_i[p̄](t_{ℓ,i}) − F_i[p_{m(j)}](t_{ℓ,i})| + |F_i[p_{m(j)}](t_{ℓ,i}) − F_i[p_{m(j)}](t_{ℓ,i,m(j)})|
     ≤ (1 + λ_i(p̄))||F_i[p̄] − F_i[p_{m(j)}]|| + |v_i(t_{ℓ,i,m(j)}) − v_i(t_{ℓ,i})|
       + |F_i[p_{m(j)}](t_{ℓ,i}) − F_i[p_{m(j)}](t_{ℓ,i,m(j)})|.

Since each F_i and v_i is continuous and the family {F[p_{m(j)}]}_{j=1}^∞ is equicontinuous, we have that

lim_{j→∞} |e_{i,m(j)}(t_{ℓ,i,m(j)}) − ê_i(t_{ℓ,i})| = 0

for i = 1,2,...,n and ℓ = 1,2,...,k − n + 2. Using the same type of inequalities as (3.6.13), it follows that

(3.6.14)  lim_{j→∞} ||e_{i,m(j)}|| = ||ê_i||,  i = 1,2,...,n.

Therefore

(3.6.15)  ê_i(t_{ℓ,i}) = −ê_i(t_{ℓ+1,i}) = ±||ê_i||

for ℓ = 1,2,...,k − n + 1 and i = 1,2,...,n. We have that

ê_i(t_{ℓ,i}) − e_i(t_{ℓ,i}) = v_i(t_{ℓ,i}) − F_i[p̄](t_{ℓ,i}) − u_i(t_{ℓ,i}) + F_i[p̄](t_{ℓ,i}),

so that

(3.6.16)  v_i(t_{ℓ,i}) − u_i(t_{ℓ,i}) = ê_i(t_{ℓ,i}) − e_i(t_{ℓ,i})

for i = 1,2,...,n and ℓ = 1,2,...,k − n + 2. By (3.6.7) and Theorem 1.5 it follows that

(3.6.17)  ||ê_i|| ≥ min_ℓ |e_i(t_{ℓ,i})|,  i = 1,2,...,n.

Since, by (3.6.7), the values |e_i(t_{ℓ,i})| are equal for ℓ = 1,2,...,k − n + 2, and ê_i(t_{ℓ,i}) = ±||ê_i||, this means

(3.6.18)  |ê_i(t_{ℓ,i})| ≥ |e_i(t_{ℓ,i})|,  ℓ = 1,2,...,k − n + 2,  i = 1,2,...,n.

This implies that

(3.6.19)  sgn(v_i(t_{ℓ,i}) − u_i(t_{ℓ,i})) = sgn(ê_i(t_{ℓ,i}))

for ℓ = 1,2,...,k − n + 2 and i = 1,2,...,n. Therefore v_i(t) − u_i(t) has k − n + 1 zeros on [0,T] for i = 1,2,...,n. We know that v_i is a polynomial of degree k − n or less for each i. u_i will also be a polynomial of degree k − n or less since, by (3.6.2), v_{i,m(j)} → u_i uniformly as j → ∞ for each i. Therefore v_i(t) − u_i(t) ≡ 0 for i = 1,2,...,n. Then u(t) ≡ v(t), and since p̄(t) = h(t) + ∫_0^T G(t,s)u(s) ds while q(t) = h(t) + ∫_0^T G(t,s)v(s) ds, this gives us that p̄ = q, so p̄ is a fixed point of T_k, and thus completes the proof.
It should be noted that if the sequence {p_m}_{m=0}^∞ is such that p_m → p̄ uniformly as m → ∞, then (3.6.7) is always satisfied and the theorem holds.
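The iteration (3.6.1) can be visualized with a much simpler stand-in. The sketch below is an illustration only, not the thesis algorithm: the best-approximation step inside T_k is replaced by the integral operator itself, giving a Picard-type iteration y_{m+1}(t) = ∫_0^1 G(t,s) f(s, y_m(s)) ds for the model problem y'' = f(t,y), y(0) = y(1) = 0, whose Green's function is G(t,s) = s(t − 1) for s ≤ t and t(s − 1) otherwise. The right-hand side f(t,y) = sin y + t is an assumption chosen so that K = 1, α = 1/8 and Kα < 1, making the iteration contract.

```python
import numpy as np

# Illustrative sketch only (not the thesis algorithm): a Picard-type fixed
# point iteration for y'' = f(t,y), y(0) = y(1) = 0, with Green's function
# G(t,s) = s(t-1) for s <= t and t(s-1) for s > t.  f(t,y) = sin(y) + t is
# an assumed right-hand side; K = 1 and alpha = 1/8, so K*alpha < 1.
N = 201
t = np.linspace(0.0, 1.0, N)
S, T = np.meshgrid(t, t)                       # S[i,j] = s_j, T[i,j] = t_i
G = np.where(S <= T, S * (T - 1.0), T * (S - 1.0))

dt = t[1] - t[0]
w = np.full(N, dt)
w[0] = w[-1] = dt / 2.0                        # trapezoid weights

def apply_T(y):
    """One sweep y -> integral of G(t,s) f(s, y(s)) ds over the grid."""
    return G @ (w * (np.sin(y) + t))

y = np.zeros(N)
for _ in range(40):
    y_new = apply_T(y)
    diff = float(np.max(np.abs(y_new - y)))    # successive-iterate gap
    y = y_new
```

Because the contraction factor is roughly Kα = 1/8, the successive-iterate gap collapses to machine level well within 40 sweeps.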
3.7  Scalar Equations

In this section we will consider second order differential equations with boundary conditions, of the type

(3.7.1)   y''(t) = f(t,y,y'),  t ∈ [0,τ],

(3.7.2)   c_11 y(0) + c_12 y'(0) + d_11 y(τ) + d_12 y'(τ) = b_1,
          c_21 y(0) + c_22 y'(0) + d_21 y(τ) + d_22 y'(τ) = b_2,

and

(3.7.3)   y''(t) = g(t,y),  t ∈ [0,τ],

with boundary conditions (3.7.2). We could write (3.7.1), (3.7.2) and (3.7.3) in system form.
(3.7.4)   ( y(t)  )'   ( 0  1 ) ( y(t)  )   (     0     )
          ( y'(t) )  = ( 0  0 ) ( y'(t) ) + ( f(t,y,y') ),

          ( c_11  c_12 ) ( y(0)  )   ( d_11  d_12 ) ( y(τ)  )   ( b_1 )
          ( c_21  c_22 ) ( y'(0) ) + ( d_21  d_22 ) ( y'(τ) ) = ( b_2 ),

and

(3.7.5)   ( y(t)  )'   ( 0  1 ) ( y(t)  )   (    0    )
          ( y'(t) )  = ( 0  0 ) ( y'(t) ) + ( g(t,y) ),

with the same boundary conditions, and apply the theorems of the previous sections.
Part of the hypothesis of these theorems involves computing the Green's matrix G(t,s) and finding a number α satisfying (3.1.3), where

(3.7.7)   ||G(·,s)|| = max_{t ∈ [0,τ]} max_i Σ_{j=1}^n |G_{ij}(t,s)|.

Using this matrix norm, some relatively simple examples may be eliminated when the requirements 2mα ≤ R, Kα < 1 or Kα ≤ γ are checked.
Instead, we can deal directly with (3.7.1), (3.7.2) or (3.7.3), (3.7.2). Let d̂_1 = (1,0,...,0)^T and d̂_2 = (0,...,0,1)^T be n × 1 vectors. Then define a Green's function H(t,s) for the problem

(3.7.8)   y''(t) = 0,
          c_11 y(0) + c_12 y'(0) + d_11 y(τ) + d_12 y'(τ) = 0,
          c_21 y(0) + c_22 y'(0) + d_21 y(τ) + d_22 y'(τ) = 0,

by letting

(3.7.9)   H(t,s) = d̂_1^T G(t,s) d̂_2.

We can use the same conditions and the same proofs that were used in previous theorems, except that in computing α our norm becomes

(3.7.10)  ||H(·,s)|| = max_{t ∈ [0,τ]} |H(t,s)|.

In the case of (3.7.1), (3.7.2) we require α to be a constant such that

(3.7.11)  max{ ∫_0^τ ||H(·,s)|| ds,  ∫_0^τ ||H_t(·,s)|| ds } ≤ α,

where H_t = ∂H/∂t. On the other hand, if we are interested in (3.7.3), (3.7.2), then the constant α would be chosen such that

(3.7.12)  ∫_0^τ ||H(·,s)|| ds ≤ α.
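As a concrete check (an illustration, not part of the thesis): for the Dirichlet problem y'' = 0, y(0) = y(1) = 0 on [0,1], the Green's function is H(t,s) = s(t − 1) for s ≤ t and t(s − 1) for s > t, so ∫_0^1 |H(t,s)| ds = t(1 − t)/2 and the smallest constant in (3.7.12) is α = 1/8, the value that reappears in the examples of the next section:

```python
import numpy as np

# Numerical evaluation of alpha in (3.7.12) for y'' = 0, y(0) = y(1) = 0.
# H(t,s) = s(t-1) if s <= t else t(s-1); the exact value of
# max_t int_0^1 |H(t,s)| ds is t(1-t)/2 at t = 1/2, i.e. 1/8.
t = np.linspace(0.0, 1.0, 401)
S, T = np.meshgrid(t, t)
H = np.where(S <= T, S * (T - 1.0), T * (S - 1.0))

ds = t[1] - t[0]
w = np.full_like(t, ds)
w[0] = w[-1] = ds / 2.0            # trapezoid weights
alpha = float(np.max(np.abs(H) @ w))
```

Since |H(t,·)| is piecewise linear with its kink on a grid point, the trapezoid rule here is exact.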
3.8  Examples

In all of the examples of this section we used the algorithm presented in section 3.6. In using that algorithm we also need the second algorithm of Remes in order to compute a best approximation. In order to determine that a given SAS of degree k is also a MAS of degree k, a strong unicity constant γ must be calculated. The following theorem, due to A. K. Cline [4], allows us to calculate a suitable γ rather easily.

Theorem 3.7.  Let G = span{1, t, ..., t^{n−1}}, I = [0,τ], and suppose that E = {t_j}_{j=1}^{n+1} is an extremal set for f − p_0, where f ∈ C[0,τ] and p_0 is the best approximation from G to f. For i = 1,2,...,n + 1, define q_i ∈ G by q_i(t_j) = sgn[f(t_j) − p_0(t_j)], j = 1,...,n + 1, j ≠ i. Then γ of the strong unicity theorem may be chosen to be

γ = [ max_{1≤i≤n+1} ||q_i|| ]^{−1}.
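Theorem 3.7 is easy to carry out numerically. The following sketch uses assumed demo data, not an extremal set from the thesis: G = span{1, t} on [0,1] with extremal points {0, 1/2, 1} and alternating signs (+1, −1, +1).

```python
import numpy as np

# gamma = 1 / max_i ||q_i||, where q_i in G = span{1, t} interpolates the
# alternating signs at the extremal points other than t_i.  The extremal
# set {0, 1/2, 1} and signs (+1, -1, +1) are assumptions for this demo.
pts = np.array([0.0, 0.5, 1.0])
sgn = np.array([1.0, -1.0, 1.0])
grid = np.linspace(0.0, 1.0, 1001)

norms = []
for i in range(3):
    keep = [j for j in range(3) if j != i]
    coef = np.polyfit(pts[keep], sgn[keep], 1)   # line through the 2 points
    norms.append(float(np.max(np.abs(np.polyval(coef, grid)))))
gamma = 1.0 / max(norms)
```

For this assumed configuration the computation gives γ = 1/3; the γ's quoted in the examples below come from the extremal sets actually produced by the Remes algorithm for each problem.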
All of the following examples satisfy the conditions given in Theorem 3.2 except for example 3.4. The function f(t,y) in example 3.4 is not continuous for all y.

Example 3.1.

y'' + cos y = 0,  t ∈ [0,1],  y(0) = y(1) = 0.

We have that |f(t,y)| ≤ 1 for all y and t ∈ [0,1]. Therefore the conditions of Theorem 3.2 are satisfied. Also, K = 1 and α = 1/8, so that Kα < 1 and the conditions of Theorem 3.4 are satisfied. An SAS of degree 4 is

p_4(t) = 0.4979086t − 0.5004810t² + 0.0051449t³ − 0.0025724t⁴.

In this case γ = 1/3, which implies that Kα < γ. Therefore p_4 is also a MAS of degree 4. We cannot find a solution in closed form for comparison purposes; however, using Picard's iteration it can be shown that the error is no larger than 0.018.
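As a quick numerical sanity check (not part of the thesis), the residual p_4'' + cos p_4 of the SAS above should be uniformly small on [0,1], and the boundary values should nearly vanish:

```python
import numpy as np

# Residual check for example 3.1: evaluate p4'' + cos(p4) on a fine grid,
# using the coefficients quoted in the text.
c = [0.0, 0.4979086, -0.5004810, 0.0051449, -0.0025724]
p4 = np.polynomial.Polynomial(c)
t = np.linspace(0.0, 1.0, 1001)
residual = float(np.max(np.abs(p4.deriv(2)(t) + np.cos(p4(t)))))
boundary = max(abs(float(p4(0.0))), abs(float(p4(1.0))))
```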
Example 3.2.

y'' = y + 1,  t ∈ [0,1],  y(0) = y(1) = 0.

Let R = 1/3; then |f(t,y)| ≤ 4/3 for all ||y|| ≤ 1/3 and t ∈ [0,1]. We have that K = 1 and α = 1/8, so again Kα < 1. An SAS of degree 6 is given by

p_6(t) = −0.4621172t + 0.4999997t² − 0.0770121t³ + 0.0416238t⁴ − 0.0037412t⁵ + 0.0012471t⁶.

Again we have that γ = 1/3, so that Kα < γ < 1. Therefore p_6 is a MAS of degree 6. The actual solution is given by

y(t) = (e^t + e^{1−t})/(e + 1) − 1,

and the error is given by

||p_6 − y|| = 0.00000010.
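The comparison with the closed-form solution is easy to reproduce (a numerical sketch, not part of the thesis):

```python
import numpy as np

# Example 3.2: compare the degree-6 SAS with the exact solution
# y(t) = (e^t + e^(1-t))/(e + 1) - 1 of y'' = y + 1, y(0) = y(1) = 0.
c = [0.0, -0.4621172, 0.4999997, -0.0770121, 0.0416238, -0.0037412, 0.0012471]
p6 = np.polynomial.Polynomial(c)
t = np.linspace(0.0, 1.0, 2001)
y = (np.exp(t) + np.exp(1.0 - t)) / (np.e + 1.0) - 1.0
err = float(np.max(np.abs(p6(t) - y)))
```

The grid maximum of |p_6 − y| lands at the order of 10^{-7}, consistent with the uniform error quoted above.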
Example 3.3.

y'' = 2y³,  t ∈ [0,1],  y(0) = 1/3,  y(1) = 1/4.

We have that h(t) = 1/3 − (1/12)t. If R = 1/10, then |f(t,y)| ≤ 2(13/30)³ for all ||y − h|| ≤ 1/10 and t ∈ [0,1]. Also, K = 1.13 and α = 1/8, which implies that Kα < 1. An SAS of degree 6 is

p_6(t) = 0.33333330 − 0.11111118t + 0.03703509t² − 0.01231023t³ + 0.00395949t⁴ − 0.00107261t⁵ + 0.00016611t⁶.

In this case we are not guaranteed that p_6 is a MAS. However, the actual solution is

y(t) = 1/(t + 3),

which gives us a uniform error of

||p_6 − y|| = 0.00000003.

This indicates that it is a very good approximation anyway.
Example 3.4.

y''(t) = −2e^{−t²}(1 + ln y²(t)),  t ∈ [0,1],  y(0) = 1,  y(1) = e^{−1} = 0.3678794.

Even though the continuity condition is not satisfied, the algorithm still converges, and an SAS of degree 7 is given by

p_7(t) = 1 + 0.0000087t − 0.9998033t² − 0.0035580t³ + 0.5141117t⁴ − 0.0149510t⁵ − 0.1883162t⁶ + 0.0603874t⁷.

The actual solution is

y(t) = e^{−t²},

and the uniform error is

||p_7 − y|| = 0.00000304.
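Again the check is straightforward (a numerical sketch, not part of the thesis). Note that with y = e^{−t²} we have ln y² = −2t², so the right-hand side reduces to (4t² − 2)e^{−t²}, which is exactly y'':

```python
import numpy as np

# Example 3.4: compare the degree-7 SAS with the exact solution y = e^(-t^2),
# and confirm that y satisfies y'' = -2 e^(-t^2) (1 + ln y^2).
c = [1.0, 0.0000087, -0.9998033, -0.0035580, 0.5141117,
     -0.0149510, -0.1883162, 0.0603874]
p7 = np.polynomial.Polynomial(c)
t = np.linspace(0.0, 1.0, 2001)
y = np.exp(-t**2)
err = float(np.max(np.abs(p7(t) - y)))
ode_gap = float(np.max(np.abs(
    (4*t**2 - 2)*np.exp(-t**2) - (-2*np.exp(-t**2)*(1 + np.log(y**2))))))
```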
Example 3.4 indicates that the results of Chapter III are not sharp. It may be that, in general, f(t,y) need not be continuous for all y in ℝⁿ. However, at this point such a relaxation of the hypothesis in that direction has not been achieved.
CHAPTER IV

RESTRICTED RANGE APPROXIMATE SOLUTIONS OF NONLINEAR DIFFERENTIAL SYSTEMS WITH BOUNDARY CONDITIONS

4.1  Introduction

In this chapter we will generalize the results of Chapter III. In order to accomplish this we will have to consider approximating polynomials with restricted ranges. Before we get into the main results of the chapter we will need some results on restricted range approximations due to G. D. Taylor [16]. These preliminary results were not stated in Chapter I since they are only used in Chapter IV. We will also be using all of the material in Chapter I.
4.2  Preliminary Results

Let X be any compact subset of [0,τ] containing at least n + 1 points. Let C[X] denote the Banach space of all real-valued continuous functions defined on X with norm ||f|| = max_{t∈X} |f(t)|. Let G be an n-dimensional Haar subspace of C[0,τ], and let {g_1, ..., g_n} be a basis for G.

Fix two extended real-valued functions ℓ and u defined on X, subject to the restrictions:

(i)    ℓ may take on the value −∞, but never +∞.
(ii)   u may take on the value +∞, but never −∞.
(iii)  X_{−∞} = {t: ℓ(t) = −∞} and X_{+∞} = {t: u(t) = +∞} are open subsets of X.
(iv)   ℓ is continuous on X ∼ X_{−∞} and u is continuous on X ∼ X_{+∞}.
(v)    ℓ < u for all t ∈ X.

Now define the set V by

V = {p ∈ G:  ℓ(t) ≤ p(t) ≤ u(t) for all t ∈ X}.

It will be assumed that V has more than one element, which may put additional conditions on ℓ and u.

Let f ∈ C[X]. Then p ∈ V is said to be the best approximation to f from V if

(4.2.1)   inf_{q∈V} ||q − f|| = ||p − f||.

Theorem 4.1.  Given V as above and f ∈ C[X], there exists p ∈ V satisfying (4.2.1).
Fix f ∈ C[X] and let p ∈ V. Then define

X_{+1} = {t ∈ X:  f(t) − p(t) = ||f − p||},
X_{−1} = {t ∈ X:  f(t) − p(t) = −||f − p||},
X_{+2} = {t ∈ X:  p(t) = ℓ(t)},
X_{−2} = {t ∈ X:  p(t) = u(t)},
X_p = X_{+1} ∪ X_{+2} ∪ X_{−1} ∪ X_{−2}.

These will be called "critical" points. We will now state two characterization theorems.

Theorem 4.2.  If (X_{+1} ∪ X_{+2}) ∩ (X_{−1} ∪ X_{−2}) ≠ ∅, then p is a best approximation to f.

This condition does not occur in the most interesting case, namely when ℓ(t) < f(t) < u(t) for all t ∈ X. A more important theorem is the following.

Theorem 4.3.  Let f ∈ C[X], p ∈ V and suppose

(4.2.2)   (X_{+1} ∪ X_{+2}) ∩ (X_{−1} ∪ X_{−2}) = ∅.

Then p is a best approximation to f if and only if there are n + 1 consecutive points t_1 < t_2 < ... < t_{n+1} in X_p satisfying σ(t_i) = (−1)^{i+1} σ(t_1), where σ(t) = −1 if t ∈ X_{−1} ∪ X_{−2} and σ(t) = +1 if t ∈ X_{+1} ∪ X_{+2}.

In this case, if f ∈ C[X], f ∉ V and ℓ(t) ≤ f(t) ≤ u(t) for all t ∈ X, then condition (4.2.2) is satisfied. It should also be mentioned that if X_{+1} ∩ X_{−1} ≠ ∅ then f = p, and that

(X_{+1} ∪ X_{+2}) ∩ (X_{−1} ∪ X_{−2}) = (X_{+1} ∩ X_{−1}) ∪ (X_{+1} ∩ X_{−2}) ∪ (X_{+2} ∩ X_{−1}).

So if we use the requirement that

(4.2.3)   (X_{+1} ∩ X_{−2}) ∪ (X_{+2} ∩ X_{−1}) = ∅,

we will be including all f ∈ C[X] such that ℓ(t) ≤ f(t) ≤ u(t) for all t ∈ [0,τ], or f = p, p ∈ V.
Several important theorems similar to those in section 1.2 will now be listed.

Theorem 4.4.  Let f ∈ C[X], let p be a best approximation to f, and suppose (X_{+1} ∩ X_{−2}) ∪ (X_{−1} ∩ X_{+2}) = ∅; then p is unique.

Define E(f) = inf_{p∈V} ||f − p|| for f ∈ C[X]; then

Theorem 4.5.  If q ∈ V and f − q assumes alternately positive and negative values at n + 1 consecutive points t_i of X, then

(4.2.4)   E(f) ≥ min_i |f(t_i) − q(t_i)|.

Theorem 4.6.  Let f ∈ C[X] and p be a best approximation to f, and suppose (4.2.3) is satisfied. Then there exists a constant γ > 0, depending on f, such that for any q ∈ V

(4.2.5)   ||f − q|| ≥ ||f − p|| + γ||p − q||.

Let f ∈ C[X] and p be a best approximation to f from V. If (4.2.3) is satisfied, then we call f admissible. We can then define the operator Ff ∈ V to be the unique best approximation to f from V.

Theorem 4.7.  To each admissible f_0 ∈ C[X] there corresponds a number γ > 0 such that for all admissible f,

(4.2.6)   ||Ff_0 − Ff|| ≤ γ||f_0 − f||.

G. D. Taylor and M. J. Winter [17] developed the following algorithm and theorem for calculating best restricted range approximations.
We will assume that f ∈ C[X], f ∉ V and ℓ(t) ≤ f(t) ≤ u(t) for all t ∈ X. Let g_1, ..., g_n be a basis for G. Choose points t_1¹ < t_2¹ < ... < t_{n+1}¹ in X so that f cannot be interpolated by a linear combination of the g_i's on t¹ = {t_1¹, ..., t_{n+1}¹}. Solve

(4.2.7)   Σ_{i=1}^n C_i g_i(t_j¹) + (−1)^j C_{n+1} = f(t_j¹),  j = 1,2,...,n + 1.

Since {g_1, ..., g_n} is a Haar set we get a unique solution, which we denote by C_1¹, ..., C_{n+1}¹. Let

p¹ = Σ_{i=1}^n C_i¹ g_i   and   e¹ = |C_{n+1}¹|.

If ||f − p¹|| = e¹ and ℓ(t) ≤ p¹(t) ≤ u(t) for all t ∈ X, then p¹ = p*, the best restricted approximation, and we stop the iteration. If this is not the case, then we proceed as follows. Let

E¹ = ||f − p¹|| − e¹,
M¹ = max_{t∈X} (p¹(t) − u(t)),
m¹ = max_{t∈X} (ℓ(t) − p¹(t)).

Let γ¹ be the largest of {E¹, M¹, m¹}. In case of equality, we let γ¹ be the first largest member of the triple. If γ¹ = E¹, choose s ∈ X such that |f(s) − p¹(s)| = ||f − p¹||; if γ¹ = M¹, choose s such that p¹(s) − u(s) = γ¹; if γ¹ = m¹, choose s such that ℓ(s) − p¹(s) = γ¹. Replace {t_1¹, ..., t_{n+1}¹} by {t_1², ..., t_{n+1}²}, where t_i² = t_i¹ for all i except one, namely i_0, and at that one t_{i_0}² = s. Define

(4.2.8)   sgn*(f(t) − p(t)) = +1 if p(t) = f(t) = ℓ(t);
                             −1 if p(t) = f(t) = u(t);
                             sgn(f(t) − p(t)) otherwise,

for p ∈ V and t ∈ X. Then one replaces t_{i_0} by s so that

sgn*(f(t_i²) − p¹(t_i²)) = (−1)^{i+1} sgn*(f(t_1²) − p¹(t_1²)),  i = 1,2,...,n + 1.

We now partition {1, ..., n + 1} into at most three disjoint subsets. We will let i ∈ Y_2 if i ≠ i_0. For i_0, we will let i_0 ∈ Y_2 if γ¹ = E¹, i_0 ∈ U_2 if γ¹ = M¹, or i_0 ∈ L_2 if γ¹ = m¹. The iteration is continued by solving the system

(4.2.9)   Σ_{j=1}^n C_j g_j(t_i²) + (−1)^i C_{n+1} = f(t_i²),  i ∈ Y_2,
          Σ_{j=1}^n C_j g_j(t_i²) = u(t_i²),  i ∈ U_2,
          Σ_{j=1}^n C_j g_j(t_i²) = ℓ(t_i²),  i ∈ L_2,

for (C_1, ..., C_{n+1}), where the latter equations are retained only if U_2 ≠ ∅ or L_2 ≠ ∅. Let C_1², ..., C_{n+1}² be the desired solutions, which exist by the Haar condition, and let

p² = Σ_{j=1}^n C_j² g_j   and   e² = |C_{n+1}²|.

If ||f − p²|| = e² and ℓ(t) ≤ p²(t) ≤ u(t) for all t ∈ X, then p² is the desired best restricted approximation. If not, the iteration is continued.
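The leveled system (4.2.7) is a plain linear solve. As an illustration (the data below are assumptions, not from the thesis), take G = span{1, t}, f(t) = t², and the points {0, 1/2, 1}; the solve returns p¹(t) = t − 1/8 with leveled error e¹ = 1/8, which for this unconstrained configuration is already the best approximation of t² by a line on [0,1]:

```python
import numpy as np

# Solve sum_i C_i g_i(t_j) + (-1)^j C_{n+1} = f(t_j), j = 1..n+1  (4.2.7),
# for G = span{1, t}, f(t) = t^2, points {0, 1/2, 1}  (assumed demo data).
pts = np.array([0.0, 0.5, 1.0])
fvals = pts**2
n = 2

A = np.zeros((n + 1, n + 1))
A[:, 0] = 1.0                                     # g_1(t) = 1
A[:, 1] = pts                                     # g_2(t) = t
A[:, 2] = [(-1.0)**j for j in range(1, n + 2)]    # leveled-error column
C = np.linalg.solve(A, fvals)
p1 = lambda t: C[0] + C[1]*t
e1 = abs(C[2])
```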
Then, continuing by induction, we have at the k-th step a given set of points {t_1^k, ..., t_{n+1}^k} with t_1^k < t_2^k < ... < t_{n+1}^k and t_i^k ∈ X for i = 1,2,...,n + 1, three pairwise disjoint (some possibly empty) sets Y_k, U_k and L_k whose union is {1, ..., n + 1}, a polynomial

p^k = Σ_{j=1}^n C_j^k g_j,

and a real number C_{n+1}^k such that

(4.2.10)  Σ_{j=1}^n C_j^k g_j(t_i^k) + (−1)^i C_{n+1}^k = f(t_i^k),  i ∈ Y_k,
          Σ_{j=1}^n C_j^k g_j(t_i^k) = u(t_i^k),  i ∈ U_k,
          Σ_{j=1}^n C_j^k g_j(t_i^k) = ℓ(t_i^k),  i ∈ L_k.

By the Haar condition this system has a solution. Also, we have that

(4.2.11)  sgn*(f(t_i^k) − p^k(t_i^k)) = (−1)^{i−1} sgn*(f(t_1^k) − p^k(t_1^k)),  i = 1,2,...,n + 1.

If ||f − p^k|| = |C_{n+1}^k| and ℓ(t) ≤ p^k(t) ≤ u(t) for all t ∈ X, then p^k is the desired best restricted approximation p*, and we terminate the iteration. If p^k ≠ p*, we proceed as follows. Let

E^k = ||f − p^k|| − e^k,
M^k = max_{t∈X} (p^k(t) − u(t)),
m^k = max_{t∈X} (ℓ(t) − p^k(t)).

Let γ^k be the largest of {E^k, M^k, m^k}. In case of equality, we let γ^k be the first largest member of the triple. If γ^k = E^k, choose s ∈ X such that ||f − p^k|| = |f(s) − p^k(s)|; if γ^k = M^k, choose s such that p^k(s) − u(s) = γ^k; if γ^k = m^k, choose s such that ℓ(s) − p^k(s) = γ^k. Replace {t_1^k, ..., t_{n+1}^k} by {t_1^{k+1}, ..., t_{n+1}^{k+1}}, where t_i^{k+1} = t_i^k for all i except one, and that one t_{i_0}^{k+1} = s. Make this replacement so that

sgn*(f(t_i^{k+1}) − p^k(t_i^{k+1})) = (−1)^{i+1} sgn*(f(t_1^{k+1}) − p^k(t_1^{k+1})),  i = 1,2,...,n + 1.

Define pairwise disjoint sets Y_{k+1}, U_{k+1} and L_{k+1} (whose union is {1, ..., n + 1}) as follows: i ∈ {1, ..., n + 1} is to be in Y_{k+1}, U_{k+1} or L_{k+1} according as i ∈ Y_k, U_k or L_k, respectively, for any i such that t_i^{k+1} ∈ {t_1^k, ..., t_{n+1}^k}; for the new point t_{i_0}^{k+1} we say i_0 ∈ Y_{k+1}, U_{k+1} or L_{k+1} according as γ^k = E^k, M^k or m^k, respectively. Then solve

(4.2.12)  Σ_{j=1}^n C_j^{k+1} g_j(t_i^{k+1}) + (−1)^i C_{n+1}^{k+1} = f(t_i^{k+1}),  i ∈ Y_{k+1},
          Σ_{j=1}^n C_j^{k+1} g_j(t_i^{k+1}) = u(t_i^{k+1}),  i ∈ U_{k+1},
          Σ_{j=1}^n C_j^{k+1} g_j(t_i^{k+1}) = ℓ(t_i^{k+1}),  i ∈ L_{k+1}.

By the Haar condition there is a unique solution if Y_{k+1} ≠ ∅, and the assumption that V ≠ ∅ implies that Y_{k+1} ≠ ∅, since if Y_{k+1} = ∅ then p^k would meet every p ∈ V at least n times, which is impossible. Let

p^{k+1} = Σ_{j=1}^n C_j^{k+1} g_j   and   e^{k+1} = |C_{n+1}^{k+1}|.

Theorem 4.8.  Assume that the iteration does not terminate after a finite number of steps. Then the sequence of constructed polynomials {p^k}_{k=1}^∞ converges uniformly to the best restricted approximation p* to f, and e^k ↑ e* = ||f − p*||.
4.3  Existence of Fixed Points

We will, again, consider the system

(4.3.1)   y' = Ey + f(t,y),  t ∈ [0,τ],
          My(0) + Ny(τ) = b,

where E, M and N are constant n × n matrices such that Eⁿ = 0 and the n × 2n matrix (M,N) has rank n; b is a constant n × 1 vector; and f(t,y) is continuous on [0,τ] × ℝⁿ with values in ℝⁿ. We will assume, throughout this chapter, that the system

(4.3.2)   y' = Ey,
          My(0) + Ny(τ) = 0,

is incompatible. As before, this implies the existence of a unique Green's matrix G(t,s). Let Y(t) be the principal matrix solution for

(4.3.3)   y' = Ey,

and D the characteristic matrix for (4.3.2). Define

(4.3.4)   h(t) = Y(t)D^{−1}b.

The components of h are polynomials of degree n − 1 or less, and h satisfies

(4.3.5)   h' = Eh,
          Mh(0) + Nh(τ) = b.
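For a concrete picture of (4.3.4) and (4.3.5) (an illustration with assumed data, not from the thesis), take n = 2, τ = 1, the nilpotent E of the scalar second-order case, and Dirichlet-type M, N; then Y(t) = e^{tE} = I + tE and D = MY(0) + NY(1):

```python
import numpy as np

# h(t) = Y(t) D^{-1} b for n = 2, tau = 1; E, M, N, b are assumed demo
# data (E^2 = 0, and (M,N) has rank 2, so (4.3.2) is incompatible).
E = np.array([[0.0, 1.0], [0.0, 0.0]])
M = np.array([[1.0, 0.0], [0.0, 0.0]])
N = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([1.0, 2.0])

def Y(t):
    return np.eye(2) + t * E       # exp(tE) = I + tE since E^2 = 0

D = M @ Y(0.0) + N @ Y(1.0)        # characteristic matrix of (4.3.2)
c = np.linalg.solve(D, b)
h = lambda t: Y(t) @ c             # components are polynomials of degree <= 1
bc = M @ h(0.0) + N @ h(1.0)       # boundary condition in (4.3.5)
```

Here h(t) = (1 + t, 1)^T, the line through the boundary data, and the computed boundary value reproduces b.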
We now develop notation for various sets which will be used throughout the remainder of the chapter. The first three are as follows:

Q_k = {p:  p is a polynomial of degree k or less},
W_k = {p:  p = (p_1, ..., p_n)^T,  p_i ∈ Q_k,  1 ≤ i ≤ n},
P_k = {p:  p ∈ W_k and Mp(0) + Np(τ) = b}.

For k ≥ n − 1, P_k is not empty since h ∈ P_k. Let φ(t) be a scalar real-valued continuous function on [0,τ] such that φ(t) > 0 on [0,τ] and

(4.3.6)   ∫_0^τ ||G(·,s)|| φ(s) ds ≤ 1.

Also, there exists a number R > 0 such that

(4.3.7)   |f(t,y)| ≤ φ(t)R  for  ||y − h|| ≤ R and 0 ≤ t ≤ τ.

Then define three additional sets, as follows:

V_k = {p:  p ∈ Q_k and |p(t)| ≤ φ(t)R for all t ∈ [0,τ]},
U_k = {p:  p = (p_1, ..., p_n)^T,  p_i ∈ V_k,  1 ≤ i ≤ n},
S_k = {p:  p ∈ P_k and ||p − h|| ≤ R}.

Again, it should be observed that, for k ≥ n − 1, S_k is not empty since h ∈ S_k.
For p ∈ S_k we have that |f(t,p)| ≤ φ(t)R for all t ∈ [0,τ]. Therefore, if we use V_{k−n} as our approximating set for f(t,p), then all of the theorems of section 4.2 apply. Also, throughout the remainder of the chapter, let

(4.3.8)   F[y](t) = f(t,y).

Let p ∈ S_k. Then for k ≥ n + 1

(4.3.9)   inf_{v ∈ V_{k−n}} ||v − F_i[p]|| = ||v_i − F_i[p]||

for some v_i ∈ V_{k−n}, 1 ≤ i ≤ n, where F[y] = (F_1[y], ..., F_n[y])^T. From Theorem 4.4 each v_i, 1 ≤ i ≤ n, is unique. Let v_0 = (v_1, ..., v_n)^T and define the operator F_k: S_k → U_{k−n} by F_k p = v_0. Since F[y] is uniformly continuous on compact sets, Theorem 4.7 implies that F_k is a continuous operator. For v ∈ U_{k−n} let

q(t) = h(t) + ∫_0^τ G(t,s) v(s) ds

and define the operator Φ_k by Φ_k v = q. We have that q ∈ P_k, so that Φ_k is a continuous linear operator from U_{k−n} into P_k. Finally, define the operator T_k: S_k → P_k by T_k p = Φ_k(F_k p). Since T_k is the composition of continuous operators, T_k is continuous. We will call any fixed point of T_k a restricted simultaneous approximation substitute of degree k, RSAS.
Theorem 4.9.  For fixed k ≥ n + 1 the operator T_k has a fixed point p^k.

Proof.  We will, again, use Theorem 3.1. We already have that S_k is a compact convex set of the Banach space W_k and T_k is a continuous operator from S_k into W_k. We therefore only need to show that T_k(S_k) ⊂ S_k. Let p ∈ S_k and let F_k p = v_0 = (v_1, ..., v_n)^T. We have that |v_i(t)| ≤ φ(t)R for 1 ≤ i ≤ n and all t ∈ [0,τ], which implies that |v_0(t)| ≤ φ(t)R for all t ∈ [0,τ]. Let q = T_k p. Then

q(t) = h(t) + ∫_0^τ G(t,s) v_0(s) ds.

Therefore

(4.3.10)  |q(t) − h(t)| ≤ ∫_0^τ ||G(·,s)|| |v_0(s)| ds
                       ≤ ∫_0^τ ||G(·,s)|| φ(s)R ds
                       ≤ R,  t ∈ [0,τ].

Then ||q − h|| ≤ R, which gives us that q ∈ S_k and thus completes our proof.

If p^k is a fixed point of T_k we therefore have that T_k p^k = p^k and

(4.3.11)  inf_{v ∈ U_{k−n}} ||v − F[p^k]|| = ||(p^k)' − Ep^k − F[p^k]||.
4.4  Convergence of Fixed Points

For each k ≥ n + 1 let p^k ∈ S_k be a fixed point of T_k. We will show that there is a subsequence of {p^k}_{k=n+1}^∞ that converges to a function y which is a solution of (4.3.1). Also, it will be shown that the first derivatives of the subsequence converge to y'. Before establishing these results we prove the following lemma.

Lemma 4.1.  For each k ≥ n + 1 let p^k ∈ S_k be a fixed point of T_k. Let

(4.4.1)   e^k(t) = (p^k(t))' − Ep^k(t) − F[p^k](t),  t ∈ [0,τ];

then

lim_{k→∞} ||e^k|| = 0.
Proof.  Let

(4.4.2)   v^{k−n} = (p^k)' − Ep^k,  k ≥ n + 1,

where v^{k−n} = (v_1^{k−n}, ..., v_n^{k−n})^T. Since p^k is a fixed point of T_k we have that

(4.4.3)   inf_{v ∈ V_{k−n}} ||v − F_i[p^k]|| = ||v_i^{k−n} − F_i[p^k]||,  1 ≤ i ≤ n.

Now e^k = (e_{1,k}, ..., e_{n,k})^T, where

(4.4.4)   e_{i,k} = v_i^{k−n} − F_i[p^k],  1 ≤ i ≤ n.

Let q_i^{k−n} ∈ Q_{k−n}, 1 ≤ i ≤ n, be such that

(4.4.5)   inf_{v ∈ Q_{k−n}} ||v − F_i[p^k]|| = ||q_i^{k−n} − F_i[p^k]||,  1 ≤ i ≤ n.

Let

(4.4.6)   ε_{i,k} = q_i^{k−n} − F_i[p^k],  1 ≤ i ≤ n.

From Theorem 1.13 we have

(4.4.7)   ||ε_{i,k}|| ≤ w_{i,k}(τπ/(k − n)),  1 ≤ i ≤ n,  k ≥ n + 1,

where w_{i,k}(δ) is the modulus of continuity of F_i[p^k] for 1 ≤ i ≤ n and k ≥ n + 1. We have for each 1 ≤ i ≤ n and k ≥ n + 1 that

(4.4.8)   |v_i^{k−n}(t)| ≤ |F_i[p^k](t)| + |v_i^{k−n}(t) − F_i[p^k](t)|
                       ≤ φ(t)R + ||v_i^{k−n} − F_i[p^k]||
                       ≤ ||φ||R + ||F_i[p^k]||
                       ≤ 2R||φ||

for all t ∈ [0,τ], which gives us from (4.4.2) that

(4.4.9)   |v^{k−n}(t)| ≤ 2||φ||R,  t ∈ [0,τ],  k ≥ n + 1.

Since p^k ∈ S_k, we also have that

(4.4.10)  |p^k(t)| ≤ R + ||h||,  t ∈ [0,τ],  k ≥ n + 1.

Then from (4.4.2), (4.4.9) and (4.4.10) we get that

(4.4.11)  |(p^k(t))'| ≤ |Ep^k(t)| + |v^{k−n}(t)|
                     ≤ ||E||[R + ||h||] + 2||φ||R,  t ∈ [0,τ],  k ≥ n + 1.

From the mean value theorem and (4.4.11) we have

(4.4.12)  |p^k(t) − p^k(s)| ≤ [||E||(R + ||h||) + 2||φ||R] |t − s|

for all t, s ∈ [0,τ] and k ≥ n + 1. Equations (4.4.10) and (4.4.12) establish that the family {p^k}_{k=n+1}^∞ is uniformly bounded and equicontinuous on [0,τ].
Given ε > 0 such that φ(t)R − ε ≥ 0 for t ∈ [0,τ], let

(4.4.13)  λ = ε / (2(ε + R||φ||)).

Since φ(t) > 0 for t ∈ [0,τ], we know that 0 < λ < 1/2. F[y](t) is uniformly continuous on compact sets. Therefore, for each 1 ≤ i ≤ n,

|F_i[p^k](t) − F_i[p^k](s)| < (λ/(1 − λ)) ε/2

whenever max{|t − s|, |p^k(t) − p^k(s)|} < δ_i¹, for some δ_i¹, 1 ≤ i ≤ n, k ≥ n + 1 and t, s ∈ [0,τ]. Also, from (4.4.12) we know that |p^k(t) − p^k(s)| < δ_i¹ whenever |t − s| < δ_i², for some δ_i², 1 ≤ i ≤ n, k ≥ n + 1 and t, s ∈ [0,τ]. Let δ_i = min{δ_i¹, δ_i²} for each 1 ≤ i ≤ n; then

(4.4.14)  |F_i[p^k](t) − F_i[p^k](s)| < (λ/(1 − λ)) ε/2

whenever |t − s| < δ_i, for 1 ≤ i ≤ n, k ≥ n + 1 and t, s ∈ [0,τ]. This implies that

(4.4.15)  w_{i,k}(δ_i) ≤ (λ/(1 − λ)) ε/2,  1 ≤ i ≤ n,

independent of k. Let K_1 be large enough so that τπ/(k − n) ≤ min_i {δ_i} for k ≥ K_1. From (4.4.7) and (4.4.15) we then have

(4.4.16)  ||ε_{i,k}|| ≤ w_{i,k}(τπ/(k − n)) ≤ (λ/(1 − λ)) ε/2

for each 1 ≤ i ≤ n and k ≥ max{K_1, n + 1}. Let g_ε(t) = φ(t)R − ε ≥ 0. For each k ≥ n + 1 let s_ε^{k−n}(t) be a polynomial of degree k − n or less such that

(4.4.17)  inf_{v ∈ Q_{k−n}} ||v − g_ε|| = ||s_ε^{k−n} − g_ε||.

There exists a number K_2 such that

(4.4.18)  ||s_ε^{k−n} − g_ε|| < ε/2

for all k ≥ max{K_2, n + 1}. Let K = max{K_1, K_2, n + 1}, and for the remainder of the proof assume k ≥ K.
For each k ≥ K and 1 ≤ i ≤ n let

(4.4.19)  r_i^{k−n}(t) = λ s_ε^{k−n}(t) + (1 − λ) q_i^{k−n}(t).

From (4.4.18) we have, for t ∈ [0,τ], that

g_ε(t) − ε/2 < s_ε^{k−n}(t) < g_ε(t) + ε/2,

which leads to

(4.4.20)  φ(t)R − (3/2)ε < s_ε^{k−n}(t) < φ(t)R − ε/2.

Also, from (4.4.6) and (4.4.16) we have, for t ∈ [0,τ] and 1 ≤ i ≤ n, that

F_i[p^k](t) − (λ/(1 − λ)) ε/2 < q_i^{k−n}(t) < F_i[p^k](t) + (λ/(1 − λ)) ε/2,

which implies that

(4.4.21)  −φ(t)R − (λ/(1 − λ)) ε/2 < q_i^{k−n}(t) < φ(t)R + (λ/(1 − λ)) ε/2.

Then, using (4.4.19), (4.4.20) and (4.4.21), we have that

λ(φ(t)R − (3/2)ε) + (1 − λ)[−φ(t)R − (λ/(1 − λ))ε/2] < r_i^{k−n}(t)

and

λ(φ(t)R − ε/2) + (1 − λ)[φ(t)R + (λ/(1 − λ))ε/2] > r_i^{k−n}(t),

for all t ∈ [0,τ], 1 ≤ i ≤ n and k ≥ K. This gives us the following bounds for r_i^{k−n}:

(4.4.22)  2λ(φ(t)R − ε) − φ(t)R < r_i^{k−n}(t) < φ(t)R

for all t ∈ [0,τ], 1 ≤ i ≤ n and k ≥ K. Since φ(t)R − ε ≥ 0 and λ > 0, (4.4.22) implies that

(4.4.23)  |r_i^{k−n}(t)| < φ(t)R.

Therefore r_i^{k−n} ∈ V_{k−n} for each 1 ≤ i ≤ n and k ≥ K. We also have, for t ∈ [0,τ], 1 ≤ i ≤ n and k ≥ K, that

(4.4.24)  |r_i^{k−n}(t) − F_i[p^k](t)|
     = |λ s_ε^{k−n}(t) + (1 − λ) q_i^{k−n}(t) − F_i[p^k](t)|
     ≤ λ|s_ε^{k−n}(t) − F_i[p^k](t)| + (1 − λ)|q_i^{k−n}(t) − F_i[p^k](t)|
     ≤ λ||s_ε^{k−n} − g_ε|| + λ|g_ε(t) − F_i[p^k](t)| + (1 − λ)||ε_{i,k}||
     ≤ λ ε/2 + λ[|φ(t)R − F_i[p^k](t)| + ε] + λ ε/2
     ≤ λε + λ[2R||φ|| + ε]
     = 2λ(ε + R||φ||)
     = ε.

Then we have that

(4.4.25)  ||r_i^{k−n} − F_i[p^k]|| ≤ ε

for each 1 ≤ i ≤ n and k ≥ K. Since r_i^{k−n} ∈ V_{k−n} for each 1 ≤ i ≤ n and k ≥ K, it follows from (4.4.3) that

(4.4.26)  0 ≤ ||v_i^{k−n} − F_i[p^k]|| ≤ ||r_i^{k−n} − F_i[p^k]|| ≤ ε

for each 1 ≤ i ≤ n and k ≥ K. Therefore

||e^k|| = ||v^{k−n} − F[p^k]|| ≤ ε

for all k ≥ K, and this completes our proof.
We can now establish the main result of this chapter.

Theorem 4.10.  If p^k ∈ S_k is a fixed point of T_k for each k ≥ n + 1, then there exists a function y, whose components are in C[0,τ], and a subsequence {p^{k(j)}}_{j=1}^∞ of {p^k}_{k=n+1}^∞ such that

(4.4.27)  lim_{j→∞} ||(p^{k(j)})^{(i)} − y^{(i)}|| = 0,  i = 0, 1.

Moreover, y is a solution to (4.3.1).

Proof.  In light of Lemma 4.1, the proof follows exactly as that of Theorem 3.3.
4.5  Rate of Convergence

We will now investigate the rate of convergence of a sequence of fixed points defined in section 4.3. Here it will be assumed that we have a sequence of fixed points {p^k}_{k=n+1}^∞ such that p^k → y uniformly on [0,τ], where y is a solution to (4.3.1). Also, we assume that f satisfies the conditions in section 4.3 as well as the condition

(4.5.1)   |f(t,y_1) − f(t,y_2)| ≤ K||y_1 − y_2||

for some constant K, whenever ||h − y_i|| ≤ R, i = 1, 2, and t ∈ [0,τ]. Let α be a number such that

(4.5.2)   ∫_0^τ ||G(·,s)|| ds ≤ α.
Theorem 4.11.  For f = (f_1, ..., f_n)^T, if f_i(t,p^k) ∈ C^m[0,τ], 1 ≤ i ≤ n, k ≥ n + 1, φ(t) ∈ C^m[0,τ] for m ≥ 2, and Kα < 1, then there is a constant β, independent of k, such that

(4.5.3)   ||(p^k)^{(i)} − y^{(i)}|| ≤ β / k^{m−1},  i = 0, 1.

Proof.
Since y is a solution to (4.3.1) we have

y(t) = h(t) + ∫_0^τ G(t,s) F[y](s) ds,

where F[y](t) = f(t,y). Also, since p^k is a fixed point of T_k, we have

p^k(t) = h(t) + ∫_0^τ G(t,s)[(p^k(s))' − Ep^k(s)] ds

for all k ≥ n + 1. Therefore

(4.5.4)   ||p^k − y|| ≤ α||(p^k)' − Ep^k − F[y]||
                    ≤ α||(p^k)' − Ep^k − F[p^k]|| + αK||p^k − y||

for all k ≥ n + 1. Then

(4.5.5)   (1 − αK)||p^k − y|| ≤ α||e^k||,

where e^k = (p^k)' − Ep^k − F[p^k]. Since Kα < 1, we have that

(4.5.6)   ||p^k − y|| ≤ (α/(1 − αK)) ||e^k||.

We also have that y' = Ey + F[y] and (p^k)' = e^k + Ep^k + F[p^k]. Therefore

||(p^k)' − y'|| ≤ ||e^k|| + ||E|| ||p^k − y|| + K||p^k − y||,

which implies that

(4.5.7)   ||(p^k)' − y'|| ≤ [1 + α(||E|| + K)/(1 − αK)] ||e^k||.

For k ≥ n + 1 let

v^{k−n} = (p^k)' − Ep^k.
Then v^{k−n} = (v_1^{k−n}, ..., v_n^{k−n})^T, where each v_i^{k−n}, 1 ≤ i ≤ n, satisfies (4.4.3). Given any ε > 0 such that φ(t)R − ε ≥ 0, let s_ε^{k−n}(t) be the polynomial of degree k − n or less which satisfies (4.4.17), where g_ε(t) = φ(t)R − ε, and let q_i^{k−n}(t) be a polynomial of degree k − n or less which satisfies (4.4.5), 1 ≤ i ≤ n and k ≥ n + 1. Suppose that f(t,p^k) = F[p^k](t) ∈ C^m[0,τ] for each k ≥ n + 1, and that φ(t) ∈ C^m[0,τ]. From Theorem 1.13, there exist constants B_i, 1 ≤ i ≤ n + 1, such that for all k

(4.5.8)   ||q_i^{k−n} − F_i[p^k]|| ≤ B_i / k^{m−1},  1 ≤ i ≤ n,

and

(4.5.9)   ||s_ε^{k−n} − g_ε|| ≤ B_{n+1} / k^{m−1}.

Let B_0 = max_{1≤i≤n+1} {B_i}. Then for all k

(4.5.10)  ||q_i^{k−n} − F_i[p^k]|| ≤ B_0 / k^{m−1},  1 ≤ i ≤ n,

and

(4.5.11)  ||s_ε^{k−n} − g_ε|| ≤ B_0 / k^{m−1}.

Let λ_k = 1/k^{m−1} for m ≥ 2 and k ≥ 2. Then

(4.5.12)  λ_k / (1 − λ_k) = 1 / (k^{m−1} − 1).

For k ≥ 2B_0/ε we have B_0 ≤ kε/2, which implies B_0 k^{m−1} ≤ (ε/2)k^m; thus B_0(k^{m−1} − 1) ≤ (ε/2)k^m, and

(4.5.13)  ||q_i^{k−n} − F_i[p^k]|| ≤ B_0/k^{m−1} ≤ (λ_k/(1 − λ_k)) (ε/2) k

for k ≥ max{2B_0/ε, 2}.
For each k ≥ n + 1 and 1 ≤ i ≤ n let

(4.5.14)  r_i^{k−n}(t) = λ_k s_ε^{k−n}(t) + (1 − λ_k) q_i^{k−n}(t).

From the proof of Lemma 4.1 we know that r_i^{k−n} ∈ V_{k−n} for each 1 ≤ i ≤ n and k ≥ max{2B_0/ε, 2, n + 1}. We also have, for 1 ≤ i ≤ n and k ≥ max{2B_0/ε, 2, n + 1}, that

(4.5.15)  |r_i^{k−n}(t) − F_i[p^k](t)|
     = |λ_k s_ε^{k−n}(t) + (1 − λ_k) q_i^{k−n}(t) − F_i[p^k](t)|
     ≤ λ_k|s_ε^{k−n}(t) − F_i[p^k](t)| + (1 − λ_k)|q_i^{k−n}(t) − F_i[p^k](t)|
     ≤ λ_k||s_ε^{k−n} − g_ε|| + λ_k|g_ε(t) − F_i[p^k](t)| + (1 − λ_k)||q_i^{k−n} − F_i[p^k]||
     ≤ λ_k (B_0/k^{m−1}) + λ_k[2||φ||R + ε] + (1 − λ_k)(B_0/k^{m−1})
     ≤ (1/k^{m−1}) [2B_0 + 2||φ||R + ε].

Therefore,

(4.5.16)  ||r_i^{k−n} − F_i[p^k]|| ≤ (1/k^{m−1}) [2B_0 + 2||φ||R + ε]

for 1 ≤ i ≤ n and k ≥ max{2B_0/ε, 2, n + 1}. Since r_i^{k−n} ∈ V_{k−n} for 1 ≤ i ≤ n and k ≥ max{2B_0/ε, 2, n + 1}, we have that

(4.5.17)  ||v_i^{k−n} − F_i[p^k]|| ≤ (1/k^{m−1}) [2B_0 + 2||φ||R + ε],

and hence

(4.5.18)  ||e^k|| ≤ (1/k^{m−1}) [2B_0 + 2||φ||R + ε]

for all k ≥ max{2B_0/ε, 2, n + 1}. Then from (4.5.6) and (4.5.7) we get that

(4.5.19)  ||p^k − y|| ≤ (1/k^{m−1}) [α/(1 − αK)] [2B_0 + 2||φ||R + ε]

and

(4.5.20)  ||(p^k)' − y'|| ≤ (1/k^{m−1}) [1 + α(||E|| + K)/(1 − αK)] [2B_0 + 2||φ||R + ε]

for all k ≥ max{2B_0/ε, 2, n + 1}. Let

θ = max{ k^{m−1} ||(p^k)^{(i)} − y^{(i)}||:  i = 0, 1,  1 ≤ k ≤ max{2B_0/ε, 2, n + 1} },

and then let

β = max{ [α/(1 − αK)][2B_0 + 2||φ||R + ε],  [1 + α(||E|| + K)/(1 − αK)][2B_0 + 2||φ||R + ε],  θ }.

Then

||(p^k)^{(i)} − y^{(i)}|| ≤ β/k^{m−1},  i = 0, 1,

for all k ≥ 1, and this completes our proof.
102
4.6  Comparison of RSAS to MAS

The RSAS of degree k can also be compared with the MAS of degree k. In fact, the entire discussion follows in the exact same manner as for the SAS. In theorem 3.5 we simply make the assumption (4.5.1), replace SAS with RSAS and change the definition of S_k to

    S_k = {p ∈ P_k : ||p − h|| ≤ R}.

Likewise, corollaries similar to corollaries 3.1 and 3.2 hold under the appropriate changes.
4.7  Computation of Fixed Points

We now turn to the task of computing a fixed point for T_k. Let p_0 be any element of S_k for fixed k ≥ n + 1 (this may be taken to be h). Define

(4.7.1)    p_{m+1} = T_k p_m,    m = 0, 1, ... .

We know that p_m ∈ S_k for each m ≥ 0. Since S_k is compact, the sequence {p_m}_{m=0}^∞ has a cluster point p̄ ∈ S_k. Therefore, there exists a subsequence {p_{m(j)}}_{j=1}^∞ such that p_{m(j)} → p̄ uniformly on [0,T] as j → ∞. We now proceed to show that p̄ is a fixed point of T_k.
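The iteration (4.7.1) is, in essence, successive substitution: each step solves a linear boundary value problem through the Green's function and feeds the result back in. A minimal numerical sketch of this idea, applied to the scalar problem of Example 4.1 below (y'' = 2y²/(1+t), y(0) = 1, y(1) = 1/2), with the simplifying assumption that the restricted best-approximation step F_k is replaced by pointwise evaluation of f(t, p_m(t)) on a grid:

```python
import numpy as np

# Picard-type sketch of p_{m+1} = T_k p_m from (4.7.1).  Simplification (not
# the thesis's operator): F_k p_m is replaced by the grid values f(t, p_m(t)).
T = 1.0
t = np.linspace(0.0, T, 201)
dt = t[1] - t[0]
w = np.full(t.size, dt)
w[0] = w[-1] = dt / 2.0                  # trapezoid-rule quadrature weights

# Green's function of y'' = g, y(0) = y(T) = 0 (cf. (4.8.2)); note H <= 0.
s, ti = t[None, :], t[:, None]
Hmat = np.where(s <= ti, s * (ti - T) / T, ti * (s - T) / T)

h = 1.0 - 0.5 * t                        # line through the boundary values
f = lambda x, y: 2.0 * y**2 / (1.0 + x)  # right-hand side of Example 4.1

p = h.copy()                             # p_0 = h, as suggested in the text
for m in range(100):
    # p_{m+1}(t) = h(t) + integral_0^T H(t,s) f(s, p_m(s)) ds
    p_next = h + (Hmat * f(t, p)) @ w
    if np.max(np.abs(p_next - p)) < 1e-12:
        p = p_next
        break
    p = p_next

err = np.max(np.abs(p - 1.0 / (1.0 + t)))   # distance to the true solution 1/(1+t)
```

On this example the map is contractive (the quantities K = 6 and α = 1/8 of Example 4.1 give Kα < 1), so the iterates converge and `err` is dominated by the quadrature error of the grid.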
Let

(4.7.2)    v_m = p'_{m+1} − E p_{m+1}

for each m ≥ 0. Then v_m = (v_{1,m}, ..., v_{n,m})^T, where

(4.7.3)    inf_{v ∈ V_{k−n}} ||v − F_i[p_m]|| = ||v_{i,m} − F_i[p_m]||,    1 ≤ i ≤ n, m ≥ 0.

Define the following sets for each 1 ≤ i ≤ n and m ≥ 0:

    Y_{i,m}^{+1} = {t ∈ [0,T] : F_i[p_m](t) − v_{i,m}(t) = ||F_i[p_m] − v_{i,m}||},
    Y_{i,m}^{−1} = {t ∈ [0,T] : F_i[p_m](t) − v_{i,m}(t) = −||F_i[p_m] − v_{i,m}||},
    Y_{i,m}^{+2} = {t ∈ [0,T] : v_{i,m}(t) = −φ(t)R},
    Y_{i,m}^{−2} = {t ∈ [0,T] : v_{i,m}(t) = φ(t)R},

and

    Y_{i,m} = Y_{i,m}^{+1} ∪ Y_{i,m}^{+2} ∪ Y_{i,m}^{−1} ∪ Y_{i,m}^{−2}.
Also, since

(4.7.4)    |F_i[p_m](t)| ≤ φ(t)R

for all t ∈ [0,T], 1 ≤ i ≤ n and m ≥ 0, we have that

(4.7.5)    (Y_{i,m}^{+1} ∪ Y_{i,m}^{+2}) ∩ (Y_{i,m}^{−1} ∪ Y_{i,m}^{−2}) = ∅

for 1 ≤ i ≤ n and m ≥ 0. Then define σ_{i,m}(t) = −1 if t ∈ Y_{i,m}^{−1} ∪ Y_{i,m}^{−2} and σ_{i,m}(t) = +1 if t ∈ Y_{i,m}^{+1} ∪ Y_{i,m}^{+2}, for each 1 ≤ i ≤ n and m ≥ 0.

By theorem 4.3, there exist k − n + 2 consecutive points t_{1,i,m} < t_{2,i,m} < ... < t_{k−n+2,i,m} in Y_{i,m} satisfying

(4.7.6)    σ_{i,m}(t_{ℓ+1,i,m}) = −σ_{i,m}(t_{ℓ,i,m})

for each 1 ≤ i ≤ n, m ≥ 0 and 1 ≤ ℓ ≤ k − n + 1. For each 1 ≤ i ≤ n and m ≥ 0 let X_{i,m} = {t_{1,i,m}, ..., t_{k−n+2,i,m}}. The sequence {X_{i,m}}_{m=0}^∞ is contained in the compact set [0,T]^{k−n+2} for each 1 ≤ i ≤ n. Therefore they have cluster points X_i = {t_{1,i}, ..., t_{k−n+2,i}}, 1 ≤ i ≤ n. Without loss of generality we can assume that all subsequences from {p_m}_{m=0}^∞ and {X_{i,m}}_{m=0}^∞ that converge to p̄ and X_i, 1 ≤ i ≤ n, involve the same indices. In that case let {p_{m(j)}}_{j=1}^∞ and {X_{i,m(j)}}_{j=1}^∞ be the subsequences such that p_{m(j)} → p̄ and X_{i,m(j)} → X_i, 1 ≤ i ≤ n, as j → ∞.
Let

(4.7.7)    ε_m(t) = v_m(t) − F[p_m](t) = p'_{m+1}(t) − E p_{m+1}(t) − F[p_m](t),

where ε_m = (ε_{1,m}, ..., ε_{n,m})^T. Also let

(4.7.8)    u(t) = p̄'(t) − E p̄(t)

and

(4.7.9)    ε̄(t) = u(t) − F[p̄](t) = p̄'(t) − E p̄(t) − F[p̄](t),

where ε̄ = (ε̄_1, ..., ε̄_n)^T.
Theorem 4.12. If for each t_{ℓ,i} ∈ X_i, 1 ≤ i ≤ n, 1 ≤ ℓ ≤ k − n + 1, we have

(4.7.10)    ε̄_i(t_{ℓ,i}) · ε̄_i(t_{ℓ+1,i}) ≤ 0,

then p̄ is a fixed point of T_k.

Proof. Let

(4.7.11)    q = T_k p̄

and

(4.7.12)    v = q' − E q.

Also, let

(4.7.13)    e = v − F[p̄] = q' − E q − F[p̄],

where e = (e_1, ..., e_n)^T. For each 1 ≤ i ≤ n define the following sets:

    Y_i^{+1} = {t ∈ [0,T] : F_i[p̄](t) − v_i(t) = ||F_i[p̄] − v_i||},
    Y_i^{−1} = {t ∈ [0,T] : F_i[p̄](t) − v_i(t) = −||F_i[p̄] − v_i||},
    Y_i^{+2} = {t ∈ [0,T] : v_i(t) = −φ(t)R},
    Y_i^{−2} = {t ∈ [0,T] : v_i(t) = φ(t)R},

and

    Y_i = Y_i^{+1} ∪ Y_i^{+2} ∪ Y_i^{−1} ∪ Y_i^{−2}.

Again, we have that

(4.7.14)    (Y_i^{+1} ∪ Y_i^{+2}) ∩ (Y_i^{−1} ∪ Y_i^{−2}) = ∅

for each 1 ≤ i ≤ n. Then define σ_i(t) = −1 if t ∈ Y_i^{−1} ∪ Y_i^{−2} and σ_i(t) = +1 if t ∈ Y_i^{+1} ∪ Y_i^{+2}, 1 ≤ i ≤ n.
If we then proceed in the same manner as the proof of theorem 3.6, we find that

    F_i[p_{m(j)}](t_{ℓ,i,m(j)}) − v_{i,m(j)}(t_{ℓ,i,m(j)}) → F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i})

as j → ∞, for each 1 ≤ ℓ ≤ k − n + 2 and 1 ≤ i ≤ n. It can also be established that ||ε_{i,m(j)}|| → ||e_i|| as j → ∞ for each 1 ≤ i ≤ n. Therefore, we have the following convergence relationships for each 1 ≤ i ≤ n, as j → ∞:

    Y_{i,m(j)}^{+1} → Y_i^{+1},  Y_{i,m(j)}^{−1} → Y_i^{−1},  Y_{i,m(j)}^{+2} → Y_i^{+2}  and  Y_{i,m(j)}^{−2} → Y_i^{−2}.

This implies

(4.7.15)    σ_i(t_{ℓ+1,i}) = −σ_i(t_{ℓ,i}),    1 ≤ ℓ ≤ k − n + 1,

for each 1 ≤ i ≤ n. We have that for each 1 ≤ i ≤ n and 1 ≤ ℓ ≤ k − n + 2,

(4.7.16)    u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) = [F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i})] − [F_i[p̄](t_{ℓ,i}) − u_i(t_{ℓ,i})]
                                        = ε̄_i(t_{ℓ,i}) − e_i(t_{ℓ,i}).

Since we have assumed that (4.7.10) holds, theorem 4.6 gives us that

(4.7.17)    ||F_i[p̄] − v_i|| ≥ |F_i[p̄](t_{ℓ,i}) − u_i(t_{ℓ,i})|

for each 1 ≤ i ≤ n and 1 ≤ ℓ ≤ k − n + 2. We know that t_{ℓ,i} ∈ Y_i, 1 ≤ ℓ ≤ k − n + 2, 1 ≤ i ≤ n, so t_{ℓ,i} ∈ Y_i^{+1} ∪ Y_i^{+2} or t_{ℓ,i} ∈ Y_i^{−1} ∪ Y_i^{−2} for any given ℓ and i.

Case I. Assume t_{ℓ,i} ∈ Y_i^{+1} ∪ Y_i^{+2}. This implies that

(4.7.18)    F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i}) = ||F_i[p̄] − v_i||

or

(4.7.19)    v_i(t_{ℓ,i}) = −φ(t_{ℓ,i})R.

If the former condition holds, then from (4.7.17),

    F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i}) ≥ |F_i[p̄](t_{ℓ,i}) − u_i(t_{ℓ,i})|,

which implies that u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) ≥ 0. If the latter condition holds, then

    u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) = u_i(t_{ℓ,i}) + φ(t_{ℓ,i})R.

Since u_i is the uniform limit of polynomials in V_{k−n}, we know that |u_i(t)| ≤ φ(t)R for all t ∈ [0,T] and 1 ≤ i ≤ n. We also know that for each 1 ≤ i ≤ n, u_i is a polynomial of degree k − n or less. Therefore, we again have that u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) ≥ 0.

Case II. Assume t_{ℓ,i} ∈ Y_i^{−1} ∪ Y_i^{−2}. This implies that

(4.7.20)    F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i}) = −||F_i[p̄] − v_i||

or

(4.7.21)    v_i(t_{ℓ,i}) = φ(t_{ℓ,i})R.

If (4.7.20) holds, then from (4.7.17) we have

    |F_i[p̄](t_{ℓ,i}) − v_i(t_{ℓ,i})| ≥ |F_i[p̄](t_{ℓ,i}) − u_i(t_{ℓ,i})|,

which implies that u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) ≤ 0. If (4.7.21) holds, then

    u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) = u_i(t_{ℓ,i}) − φ(t_{ℓ,i})R ≤ 0.

Therefore, for all 1 ≤ ℓ ≤ k − n + 2 and 1 ≤ i ≤ n such that t_{ℓ,i} ∈ Y_i^{+1} ∪ Y_i^{+2} we have that

(4.7.22)    u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) ≥ 0,

and if t_{ℓ,i} ∈ Y_i^{−1} ∪ Y_i^{−2}, then

(4.7.23)    u_i(t_{ℓ,i}) − v_i(t_{ℓ,i}) ≤ 0.

From (4.7.15) we then have that for each 1 ≤ i ≤ n, u_i(t) − v_i(t) has k − n + 1 zeros in [0,T]. Since u_i and v_i are polynomials of degree k − n or less for 1 ≤ i ≤ n, we have that u_i(t) ≡ v_i(t), 1 ≤ i ≤ n. Therefore u(t) = v(t), which implies that p̄ is a fixed point of T_k, and our proof is complete.
It should be mentioned, as it was in chapter III, that if p_m → p̄ as m → ∞, then condition (4.7.10) is always satisfied.

4.8  Scalar Equations
As in Chapter III, we can simplify the hypotheses of the preceding theorems by considering second order differential equations with boundary conditions. Here a Green's function H(t,s), defined by (3.7.9), will be used instead of a Green's matrix. This allows a relatively simple calculation for

    ∫_0^T |H(t,s)| φ(s) ds    or    ∫_0^T |H_t(t,s)| φ(s) ds.

In many cases, when dealing with second order scalar equations, the hypotheses needed to include vector systems are too stringent. There are theorems that can be proven which hold only for second order equations with various restrictions on the boundary conditions. In the process of finding an RSAS, one such case will be included and the remainder of this section will be devoted to that case.
We will be considering the problem

(4.8.1)    y''(t) = f(t,y),    t ∈ [0,T],
           y(0) = a,  y(T) = b,

where f(t,y) is a continuous real-valued function on [0,T] × ℝ and a, b are constants. The Green's function H(t,s) associated with the problem

(4.8.2)    y''(t) = g(t),    t ∈ [0,T],
           y(0) = 0 = y(T),

is given by

    H(t,s) = { (1/T) s (t − T),    s ≤ t,
             { (1/T) t (s − T),    s > t.

The scalar version of (4.3.5) takes the form

(4.8.3)    h(t) = ((b − a)/T) t + a.

Then h'' = 0, h(0) = a and h(T) = b.
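As a concrete check (ours, not the thesis's), the representation q(t) = h(t) + ∫_0^T H(t,s) v(s) ds can be verified numerically: for the constant choice v ≡ 2 the integral is known in closed form, ∫_0^T H(t,s)·2 ds = t(t − T), so q must equal h(t) + t(t − T), which satisfies q'' = 2 with the boundary values of h.

```python
import numpy as np

# Verify q(t) = h(t) + ∫_0^T H(t,s) v(s) ds against a closed form, using the
# Green's function of (4.8.2) and, for concreteness, the boundary data a = 4,
# b = 1 of Example 4.3 with the test load v ≡ 2.
T, a, b = 1.0, 4.0, 1.0
t = np.linspace(0.0, T, 101)
dt = t[1] - t[0]
w = np.full(t.size, dt)
w[0] = w[-1] = dt / 2.0    # trapezoid weights (exact for piecewise-linear integrands)

def H(ti, s):
    # (1/T) s (t - T) for s <= t,  (1/T) t (s - T) for s > t;  H <= 0
    return np.where(s <= ti, s * (ti - T) / T, ti * (s - T) / T)

h = (b - a) / T * t + a    # (4.8.3): h'' = 0, h(0) = a, h(T) = b
v = np.full(t.size, 2.0)
q = h + np.array([(H(ti, t) * v) @ w for ti in t])

q_exact = h + t * (t - T)  # since ∫_0^T H(t,s)·2 ds = t(t - T)
gap = np.max(np.abs(q - q_exact))
```

The trapezoid rule is exact here because H(t,·) is piecewise linear with its kink at a grid node, so `gap` is at rounding level.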
We will also assume that there exists a positive continuous real-valued function φ(t) on [0,T] such that

(4.8.4)    −∫_0^T H(t,s) φ(s) ds ≤ 1

for all t ∈ [0,T]. Also, there exists an R > 0 such that 0 ≤ f(t,y) ≤ φ(t)R when ||y − h|| ≤ R and y(t) ≤ h(t) for all t ∈ [0,T]. As before, we will need to define sets of polynomials as follows:

    Q_k = {p : p is a polynomial of degree k or less},
    P_k = {p : p ∈ Q_k and p(0) = a, p(T) = b},
    V_k = {p : p ∈ Q_k and 0 ≤ p(t) ≤ φ(t)R for t ∈ [0,T]},

and

    S_k = {p : p ∈ P_k, ||p − h|| ≤ R and p(t) ≤ h(t) for t ∈ [0,T]}.

We have that h ∈ S_k for all k ≥ 1. If p ∈ S_k, then ||p − h|| ≤ R and p(t) ≤ h(t) for all t ∈ [0,T]. Therefore, 0 ≤ f(t,p) ≤ φ(t)R for t ∈ [0,T] and we can apply the theorems of section 4.2, provided V_{k−2} is our approximating set.
Let p ∈ S_k. Then for k ≥ 3,

(4.8.5)    inf_{v ∈ V_{k−2}} ||v − F[p]|| = ||v̄ − F[p]||

for some v̄ ∈ V_{k−2}. From theorem 4.4, v̄ is unique. Define the operator F_k, for k ≥ 3, by F_k p = v̄. Since F[y](t) = f(t,y) is uniformly continuous on compact sets, theorem 4.7 implies that F_k is a continuous operator. For v ∈ V_{k−2} let

    q(t) = h(t) + ∫_0^T H(t,s) v(s) ds

and define the operator Φ_k by Φ_k v = q. We have that q ∈ P_k, so Φ_k is a continuous linear operator from V_{k−2} to P_k. Finally define T_k : S_k → P_k by T_k p = Φ_k(F_k p). Since T_k is the composition of continuous operators, T_k is continuous.
Theorem 4.13. For fixed k ≥ 3 the operator T_k has a fixed point p_k.

Proof. We will, again, use theorem 3.1. We already have that S_k is a compact convex subset of the Banach space Q_k and T_k is a continuous operator from S_k into Q_k. We must show that T_k(S_k) ⊂ S_k. Let p ∈ S_k. Then F_k p = v̄, where 0 ≤ v̄(t) ≤ φ(t)R for all t ∈ [0,T] and

    inf_{v ∈ V_{k−2}} ||v − F[p]|| = ||v̄ − F[p]||.

Let q = T_k p. Then q ∈ P_k and

    q(t) = h(t) + ∫_0^T H(t,s) v̄(s) ds.

Also, q''(t) = v̄(t) ≥ 0 for t ∈ [0,T]. Since q(0) = a and q(T) = b, this implies that q(t) ≤ h(t) for all t ∈ [0,T]. We also get that

(4.8.6)    |q(t) − h(t)| ≤ ∫_0^T |H(t,s)| |v̄(s)| ds
                         = −∫_0^T H(t,s) v̄(s) ds
                         ≤ −∫_0^T H(t,s) φ(s)R ds
                         ≤ R

for all t ∈ [0,T]. Therefore q ∈ S_k and our proof is complete.
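The key bound in (4.8.6) rests on assumption (4.8.4), and for the data used later in Example 4.3 (T = 1, φ ≡ 8, R = 3) it can be checked in closed form: −∫_0^1 H(t,s)·8 ds = 4t(1 − t), whose maximum is 1 at t = 1/2, so the deviation of q from h is indeed at most R. A small sketch (our own check, not from the thesis):

```python
import numpy as np

# Check -∫_0^T H(t,s) φ(s) ds <= 1 (assumption (4.8.4)) for T = 1 and φ ≡ 8,
# the data of Example 4.3; the closed form of the integral is 4t(1 - t).
T = 1.0
t = np.linspace(0.0, T, 201)
dt = t[1] - t[0]
w = np.full(t.size, dt)
w[0] = w[-1] = dt / 2.0                 # trapezoid weights

def H(ti, s):
    return np.where(s <= ti, s * (ti - T) / T, ti * (s - T) / T)

phi = np.full(t.size, 8.0)
g = np.array([-(H(ti, t) * phi) @ w for ti in t])   # -∫ H(t,s) φ(s) ds

gap = np.max(np.abs(g - 4.0 * t * (1.0 - t)))       # closed form 4t(1 - t)
bound = np.max(g)                                   # maximum 1, attained at t = 1/2
```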
We have that, if p_k is a fixed point of T_k, then

    inf_{v ∈ V_{k−2}} ||v − F[p_k]|| = ||p_k'' − F[p_k]||.

We can also prove the lemma and theorems corresponding to each of the lemmas and theorems given previously in chapter IV. However, the statements and proofs follow in the same manner and therefore will not be included.
4.9  Examples

This section will be devoted to examples that illustrate the theory developed in the current chapter. Examples 4.1 and 4.2 satisfy the requirements of section 4.3 but not those of Chapter III. Example 4.3 is one which does not satisfy the conditions of section 4.3, and therefore does not satisfy those of Chapter III, but is an example of a second order equation that meets the requirements of section 4.8. Section 4.7 was used to compute the following examples.
Example 4.1.

    y''(t) = (2/(1 + t)) y²(t),    t ∈ [0,1],
    y(0) = 1,  y(1) = 1/2.

Here we will let φ(t) = (3 − t)²/(1 + t), which satisfies (4.3.6) and (4.3.7) with R = 1/2. From (4.5.1) and (4.5.2) we see that K = 6 and α = 1/8, which ensures that Kα < 1. An RSAS of degree 6 is

    p_6(t) = 1 − 1.0000996t + 0.9975879t² − 0.9501808t³ + 0.7441602t⁴
             − 0.3774842t⁵ + 0.08601653t⁶.

The actual solution is

    y(t) = 1/(1 + t)

and the uniform error is ||p_6 − y|| = 0.00006400.
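The polynomial can be checked directly against the exact solution; a short evaluation (using the coefficients as recovered from the text, whose endpoint values reproduce the boundary data) gives an error of the reported size:

```python
import numpy as np

# Evaluate the degree-6 RSAS of Example 4.1 against y(t) = 1/(1+t).
c = [1.0, -1.0000996, 0.9975879, -0.9501808,
     0.7441602, -0.3774842, 0.08601653]
t = np.linspace(0.0, 1.0, 1001)
p6 = sum(ck * t**k for k, ck in enumerate(c))
err = np.max(np.abs(p6 - 1.0 / (1.0 + t)))   # should be near the reported 6.4e-5
```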
Example 4.2.

    y''(t) = e^{−t} y²(t),    t ∈ [0,1],
    y(0) = 1,  y(1) = e = 2.7182818.

We will let φ(t) = (4/5)(5/4 + e)² e^{−t} and R = 5/4; then K = 5/2 + 2e and α = 1/8, so that Kα < 1. An RSAS of degree 7 is given by

    p_7(t) = 1 + t + 0.4999994t² + 0.1666799t³ + 0.0415913t⁴
             + 0.0085200t⁵ + 0.0011602t⁶ + 0.0003310t⁷.

The actual solution is

    y(t) = e^t

and the uniform error is ||p_7 − y|| = 0.00000003.
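The same direct check can be made here (coefficients as recovered from the text; their sum at t = 1 reproduces e to the printed precision):

```python
import numpy as np

# Evaluate the degree-7 RSAS of Example 4.2 against y(t) = e^t.
c = [1.0, 1.0, 0.4999994, 0.1666799,
     0.0415913, 0.0085200, 0.0011602, 0.0003310]
t = np.linspace(0.0, 1.0, 1001)
p7 = sum(ck * t**k for k, ck in enumerate(c))
err = np.max(np.abs(p7 - np.exp(t)))
```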
Example 4.3.

    y'' = (3/2) y²,    t ∈ [0,1],
    y(0) = 4,  y(1) = 1.

In this case h(t) = 4 − 3t. Let φ(t) = 8 and R = 3. If ||y − h|| ≤ 3 and y(t) ≤ h(t) for all t ∈ [0,1], then

    −2 ≤ y(t) ≤ 4

for t ∈ [0,1]. This implies that

    f(t,y) = (3/2) y² ≤ 24 = φ(t)R.

Therefore

    0 ≤ f(t,y) ≤ φ(t)R

whenever ||y − h|| ≤ R and y(t) ≤ h(t) for all t ∈ [0,1]. Also, if φ(t) = 8 we have that

    −∫_0^1 H(t,s) φ(s) ds ≤ 1.

An RSAS of degree 7 is given by

    p_7(t) = 4 − 8.000416t + 11.985416t² − 15.574462t³ + 16.923827t⁴
             − 13.440560t⁵ + 6.482342t⁶ − 1.376148t⁷.

The actual solution is

    y(t) = 16/(2 + 2t)²

and the uniform error is ||p_7 − y|| = 0.00020819.
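As with the other examples, the printed polynomial can be evaluated against the exact solution (coefficients as recovered from the text; the text reports ||p_7 − y|| = 0.00020819):

```python
import numpy as np

# Evaluate the degree-7 RSAS of Example 4.3 against y(t) = 16/(2+2t)^2.
c = [4.0, -8.000416, 11.985416, -15.574462,
     16.923827, -13.440560, 6.482342, -1.376148]
t = np.linspace(0.0, 1.0, 1001)
p7 = sum(ck * t**k for k, ck in enumerate(c))
err = np.max(np.abs(p7 - 16.0 / (2.0 + 2.0 * t)**2))
```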
CHAPTER V

CONCLUSIONS

In dealing with linear systems of differential equations with boundary conditions, we have shown that if a unique solution y exists, then there is a sequence of MAS's which converges uniformly to y. Also, the sequence of derivatives of the MAS's converges uniformly to y'. When considering a system of nonlinear differential equations with boundary conditions, assuming certain conditions on the system are satisfied, it has been shown that a subsequence of SAS's or RSAS's converges to a solution y. Again, the derivatives of the subsequence converge to y'. The examples indicate that SAS's and RSAS's are very good approximations in themselves and in some cases are in fact MAS's. More work, however, needs to be done on determining when an SAS or RSAS is an MAS and under what conditions we actually get convergence of the sequence of SAS's or RSAS's instead of a subsequence. It would also be interesting to use other known theorems on the existence of solutions of boundary value problems in attempting to prove theorems concerning the existence of an SAS or an RSAS. Of particular interest to the author are those theorems found in [7]. Finally, as was mentioned in section 4.8, there may be many theorems concerning second order scalar equations which are not covered by theorems dealing with systems. These types of problems would be very worthwhile pursuing, since they are important in many areas of application.
BIBLIOGRAPHY

1. Allinger, G. and M. Henry. "Approximate Solutions of Differential Equations with Deviating Arguments". SIAM J. Numer. Anal. 13 (1976): 412-426.

2. Bradley, John S. "Generalized Green's Matrices for Compatible Differential Systems". Michigan Math. J. 13 (1966): 97-108.

3. Cheney, E. W. Introduction to Approximation Theory. McGraw-Hill, New York. 1966.

4. Cline, A. K. "Lipschitz Conditions on Uniform Approximation Operators". J. Approximation Theory 8 (1973): 160-172.

5. Cole, R. Theory of Ordinary Differential Equations. Appleton-Century-Crofts, New York. 1968.

6. DeBoor, C. and B. Swartz. "Collocation at Gaussian Points". SIAM J. Numer. Anal. 10 (1973): 582-606.

7. Gustafson, G. B. and K. Schmitt. "Nonzero Solutions of Boundary Value Problems for Second Order Ordinary and Delay-Differential Equations". J. Diff. Equations 12 (1972): 129-147.

8. Hartman, Philip. Ordinary Differential Equations. John Wiley & Sons, Inc., New York. 1964.

9. Henry, M. S. and K. L. Wiggins. "Applications of Approximation Theory to the Initial Value Problems". J. Approximation Theory 17 (1976): 66-85.

10. Krasnosel'skii, M. A. Topological Methods in the Theory of Nonlinear Integral Equations. Pergamon, New York. 1964.

11. Kreyszig, Erwin. Introductory Functional Analysis with Applications. John Wiley & Sons, Inc. 1978.

12. Penrose, R. "A Generalized Inverse for Matrices". Proc. Cambridge Philos. Soc. 51 (1955): 406-413.

13. Reid, W. T. "Generalized Green's Matrices for Compatible Systems of Differential Equations". Amer. J. Math. 53 (1931): 443-459.

14. Russell, R. D. and L. F. Shampine. "A Collocation Method for Boundary Value Problems". Numer. Math. 19 (1972): 1-28.

15. Schmidt, D. and K. L. Wiggins. "Minimax Approximate Solutions of Linear Boundary Value Problems". Math. Comp. 33 (1979): 139-148.

16. Taylor, G. D. "Approximation by Functions Having Restricted Ranges, III". J. Math. Anal. Appl. 27 (1969): 241-248.

17. Taylor, G. D. and M. J. Winter. "Calculation of Best Restricted Approximations". SIAM J. Numer. Anal. 7 (1970): 248-255.

18. Urabe, M. "Numerical Solution of Boundary Value Problems in Chebyshev Series - A Method of Computation and Error Estimation". Lecture Notes in Mathematics, Springer-Verlag. 109 (1969).