Numer. Math. 27, 271-281 (1977)
© by Springer-Verlag 1977
Convergence of Newton-Like Methods for Solving Systems of Nonlinear Equations
J. C. P. Bus
Received February 25, 1976
Summary. An analysis is given of the convergence of Newton-like methods for
solving systems of nonlinear equations. Special attention is paid to the computational
aspects of this problem.
1. Introduction
A well-known iterative method for solving a system of equations
$$F(x) = 0, \qquad (1.1)$$
where $F: D \subset \mathbb{R}^n \to \mathbb{R}^n$ is some continuous function on some region $D$, is Newton's method, which can be defined by
$$x_{k+1} = \phi(x_k) = x_k - [J(x_k)]^{-1} F(x_k), \qquad (1.2)$$
where $J(x)$ denotes the Jacobian matrix of partial derivatives of $F$. However, usually $J(x)$ is not available, so that in practice an approximation to $J(x)$ is used. In fact, calculating on a computer with finite wordlength, $J(x)$ cannot be obtained exactly. We will therefore consider a class of methods, including Newton's method, which we will call Newton-like methods. These methods are defined by
$$x_{k+1} = \psi(x_k) = x_k - M_k^{-1} F(x_k), \qquad (1.3)$$
where $M_k$ is some approximation to $J(x_k)$.
We mention the following examples:
1. The approximation to $J(x)$ obtained by using forward difference formulas (a code sketch of this choice is given after this list). Define the $(i,j)$-th element of a matrix $B(x,h)$ by
$$(B(x,h))_{ij} = \begin{cases} \dfrac{1}{h_{ij}}\left[f_i(x + h_{ij} e^j) - f_i(x)\right], & \text{if } h_{ij} \neq 0, \\[1ex] \dfrac{\partial}{\partial x_j} f_i(x), & \text{if } h_{ij} = 0, \end{cases} \qquad (1.4)$$
where $h = (h_{11}, h_{12}, \ldots, h_{1n}, h_{21}, \ldots, h_{nn})^T \in \mathbb{R}^{n^2}$, $F(x) = (f_1(x), \ldots, f_n(x))^T$, and $e^j$ denotes the $j$-th unit vector in $\mathbb{R}^n$. Then $M_k$ is obtained by
$$M_k = B(x_k, h_k). \qquad (1.5)$$
2. The approximation to $J(x)$ obtained by evaluating the analytic expressions for the partial derivatives on a computer with finite wordlength.
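As a concrete illustration of iteration (1.3) with the forward-difference choice (1.4)-(1.5), the following Python sketch may help. It is not part of the original paper; the function names, the single stepsize $h$ used for all elements $h_{ij}$, and the stopping rule are illustrative assumptions only.

```python
import numpy as np

def forward_difference_jacobian(F, x, h):
    """Forward-difference approximation B(x, h) of (1.4), with one
    stepsize h used for every element (h_ij = h != 0)."""
    n = x.size
    Fx = F(x)
    B = np.empty((n, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        B[:, j] = (F(x + h * e_j) - Fx) / h
    return B

def newton_like(F, x0, h=1e-4, tol=1e-10, max_iter=50):
    """Newton-like iteration (1.3): x_{k+1} = x_k - M_k^{-1} F(x_k),
    with M_k = B(x_k, h) chosen as in (1.5)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        M = forward_difference_jacobian(F, x, h)
        step = np.linalg.solve(M, F(x))   # solve M_k d = F(x_k)
        x = x - step
        if np.linalg.norm(step) <= tol * (1.0 + np.linalg.norm(x)):
            break
    return x
```

For example, `newton_like(F, x0)` with a callable `F` returning an `np.ndarray` performs the iteration until the correction becomes small or the iteration budget is exhausted.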
The analysis of Newton-like methods given in this paper is essentially based on the Newton-Kantorovich Theorem (Kantorovich [5]) and its extension given by Ortega and Rheinboldt [6]. In our approach the dependence of the asymptotic order of convergence on the error in $M_k$ as an approximation to $J(x_k)$ is clearly expressed. This approach enables us to incorporate the influence of rounding errors in the computation and leads to conditions for convergence of a Newton-like method when all computation is done in finite precision. Moreover, we introduce a quantity, the solvability number, which can be used as a measure of the degree of difficulty of a problem when solving it with a Newton-like method on a computer. To calculate this number one needs information about the first as well as the second derivative of the problem function, which is not available for many practical problems. Therefore, this quantity is of limited value for solving systems of nonlinear equations. However, this notion may be extremely useful for testing programs implementing Newton-like methods. Test functions can be created such that all information required for calculating the solvability number is available. We can therefore create a representative set of test functions of various degrees of difficulty (see Bus [1]).
2. Analysis of Newton-Like Methods
Let $F$ be given as in Section 1 and let $H(x)$ denote the tensor of second partial derivatives of $F$ at $x$. Suppose $x_0 \in D$ is given and $\{x_k\}_{k=0}^{\infty}$ is defined by (1.3). Moreover, let $z \in D$ be a solution of the system of nonlinear equations defined by $F$. Then the aim of this section is to derive sufficient conditions such that $\{x_k\}_{k=0}^{\infty}$ converges to $z$. We assume that exact arithmetic is used. To simplify the notation we omit, whenever possible, the subscripts denoting the iteration index, and we denote the current iterate by $x$ and the new one by $\psi(x)$.

Furthermore, except for some cases in which it is stated explicitly, we do not specify the norms used in this paper. When $\|\cdot\|$ is used, the reader may think of any norm, provided it is used throughout and provided that the norm of $L(L(\mathbb{R}^n))$ is subordinate to the norm of $L(\mathbb{R}^n)$, which in turn is subordinate to the norm of $\mathbb{R}^n$. Here $L(A)$ denotes the linear space of linear operators from $A$ to $A$, for some space $A$, and a norm $\|\cdot\|_L$ in $L(A)$ is called subordinate to some given norm $\|\cdot\|$ in $A$ if it is defined, for $G \in L(A)$ and $x \in A$, by
$$\|G\|_L = \sup_{x \neq 0} \frac{\|Gx\|}{\|x\|}.$$
The following definition appears to be useful (cf. Ortega and Rheinboldt [6]).

Definition 2.1. Let $F$ be differentiable on $D \subset \mathbb{R}^n$ and let, for some real number $r > 0$ and integer $m > 0$, an operator $M$ be defined by
$$M: D_0 \times U_r^m \subset \mathbb{R}^n \times \mathbb{R}^m \to L(\mathbb{R}^n), \qquad (2.1)$$
where $D_0 \subset D$ and $U_r^m = \{y \in \mathbb{R}^m \mid \|y\| < r\}$. Then $M(x,h)$ is called a strongly consistent approximation to the Jacobian matrix $J(x)$ on $D_0$ if a constant $c$, called the consistency factor, exists such that
$$\|J(x) - M(x,h)\| \leq c\,\|h\|, \qquad x \in D_0,\ h \in U_r^m. \qquad (2.2)$$
An example of a strongly consistent approximation to the Jacobian matrix of $F$ is given by the forward difference approximation $B(x,h)$ defined by (1.4). The following result for $B(x,h)$ can be proved.

Theorem 2.2. Assume that $F$ is continuously differentiable on some open set $\hat{D} \subset D$. Then for any compact set $D_0 \subset \hat{D}$ there exists a $\varrho > 0$ such that $B(x,h)$, given by (1.4), is well defined for $h \in U_\varrho^{n^2}$ and $x \in D_0$. Moreover, if
$$\|J(x) - J(y)\| \leq \gamma\,\|x - y\|, \qquad \text{for all } x, y \in \hat{D}, \qquad (2.3)$$
and some constant $\gamma > 0$, then $B(x,h)$ is a strongly consistent approximation to $J(x)$ on $D_0$.

Proof. See Ortega and Rheinboldt [6], Section 11.2.5.
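To make strong consistency concrete, the following hedged Python sketch (not part of the paper) estimates the ratio $\|J(x) - B(x,h)\|/\|h\|$ for a sequence of stepsizes; by (2.2) and Theorem 2.2 this ratio should remain bounded by the consistency factor as $h \to 0$ in exact arithmetic. The helper name and the choice $h_{ij} = h$ for all entries are assumptions for illustration.

```python
import numpy as np

def consistency_ratios(F, J, x, steps):
    """For the forward-difference matrix B(x, h) of (1.4) with h_ij = h,
    return ||J(x) - B(x, h)|| / ||h|| for each h in `steps`.
    With all n^2 entries equal to h, ||h|| = n*h in the Euclidean norm.
    np.linalg.norm of a matrix is the Frobenius norm; any fixed norm
    serves to check boundedness of the ratio (the consistency factor)."""
    n = x.size
    Fx = F(x)
    ratios = []
    for h in steps:
        B = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = 1.0
            B[:, j] = (F(x + h * e) - Fx) / h
        ratios.append(np.linalg.norm(J(x) - B) / (n * h))
    return ratios
```

For instance, with `steps = [1e-2, 1e-3, 1e-4]` the returned ratios should level off rather than grow when $J$ is Lipschitz continuous near $x$.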
We are now ready to define precisely the class of methods that we are going to analyze.

Definition 2.3. We call a method as given by (1.3), for solving (1.1), a proper Newton-like method if there exist an operator $M$, as given by (2.1), for some integer $m$ and some real $r$, and $h_k \in U_r^m$ ($k = 0, 1, 2, \ldots$), such that
$$M_k = M(x_k, h_k), \qquad k = 0, 1, \ldots, \qquad (2.4)$$
and $M(x,h)$ is a strongly consistent approximation to $J(x)$ on $D_0$. We will denote such a method by $N(M)$.
To study the convergence behavior of proper Newton-like methods we compare them with Newton's method. Define, in a manner similar to (1.2) and (1.3),
$$\phi(x) = x - [J(x)]^{-1} F(x) \qquad (2.5)$$
and
$$\psi(x) = x - [M(x,h)]^{-1} F(x), \qquad (2.6)$$
where $\psi(x)$ defines a proper Newton-like method. From now on we assume that $D$ is convex, $z, x_0 \in D$, $F$ is twice differentiable, and $J(x)$ is nonsingular on $D$. Then, using the mean value theorem we obtain the following expression for the error in $\phi(x)$ as an approximation to the solution vector $z$:
$$\|\phi(x) - z\| = \big\|[J(x)]^{-1}\big(J(x)(x - z) - F(x)\big)\big\| \leq S(x,z)\,\|x - z\|^2, \qquad (2.7)$$
where
$$S(x,z) = \tfrac{1}{2}\Big(\sup_{y \in L[z,x]} \|H(y)\|\Big)\,\|[J(x)]^{-1}\| \qquad (2.8)$$
and
$$L[z,x] = \{u \in \mathbb{R}^n \mid u = \theta x + (1 - \theta) z,\ 0 \leq \theta \leq 1\}. \qquad (2.9)$$
(2.7) expresses the well-known result that the asymptotic order of convergence of Newton's method is quadratic. Using the well-known perturbation lemma (e.g. Rall [7], Section 10) we obtain for the difference between $\phi(x)$ and $\psi(x)$:
$$\phi(x) - \psi(x) = \big([J(x)]^{-1} - [M(x,h)]^{-1}\big) F(x) = \Big[I - \sum_{i=0}^{\infty}\big(I - [J(x)]^{-1} M(x,h)\big)^i\Big][J(x)]^{-1} F(x).$$
Hence, if
$$\|h\| < \frac{1}{2\,c\,\|[J(x)]^{-1}\|},$$
where $c$ is the consistency factor of $M$, then
$$\|\phi(x) - \psi(x)\| \leq C(x,h)\,\|[J(x)]^{-1} F(x)\|, \qquad (2.10)$$
where
$$C(x,h) = 2\,c\,\|h\|\,\|[J(x)]^{-1}\|. \qquad (2.11)$$
Furthermore,
$$\|[J(x)]^{-1} F(x)\| = \|x - \phi(x)\| \leq \|x - z\| + \|\phi(x) - z\|. \qquad (2.12)$$
So, combining (2.7), (2.10) and (2.12) we obtain the following upper bound for the error in $\psi(x)$ as an approximation to $z$:
$$\|\psi(x) - z\| \leq \|\psi(x) - \phi(x)\| + \|\phi(x) - z\| \leq C(x,h)\,\|x - z\| + \big(1 + C(x,h)\big)\,S(x,z)\,\|x - z\|^2. \qquad (2.13)$$
Since $C(x,h) = O(\|h\|)$, we can only expect the asymptotic order of convergence of a proper Newton-like method to be quadratic if $\|h\| = O(\|x - z\|)$.
The above results are summarized in the following definition.
Definition 2.4. Let a nonlinear system be defined by (1.1) and let $x_0 \in D$ be an approximation to the solution $z$ of (1.1). Then we say that this problem is solvable by a proper Newton-like method $N(M)$ if the following conditions are satisfied:

a) $J(x)$ and $H(x)$ exist on $D$ and $J(x_0)$ is nonsingular.

b) If $C(x,h)$ is defined by (2.11), then $h_0$ satisfies
$$C(x_0, h_0) < 1 \qquad (2.14)$$
and
$$r_0 = C(x_0, h_0)\,\|\phi(x_0) - x_0\| + \|\phi(x_0) - z\| < \|x_0 - z\|. \qquad (2.15)$$

c) $U_0 = \{y \in \mathbb{R}^n \mid \|y - z\| \leq r_0\} \subset D$ and $J(x)$ is nonsingular on $U_0$.

d) If $K$ is defined by
$$K = \sup_{x \in U_0,\ k = 1, 2, \ldots} C(x, h_k),$$
then $h_k$ satisfies $K < 1$.

e)
$$\sigma(F, z, x_0, M) = K + (K + 1)\,S\,r_0 < 1, \qquad (2.16)$$
where
$$S = \sup_{x \in U_0} S(x, z).$$

If a) to d) are satisfied, then $\sigma(F, z, x_0, M)$ is called the solvability number of the Newton-like method $N(M)$ for solving the nonlinear system $F(x) = 0$ with $x_0$ as initial guess and $z$ as solution. If a) to d) are not all satisfied, then the solvability number is defined to be infinite.
The following theorem is now easily proved.
Theorem 2.5. If a nonlinear system defined by (1.1) with initial approximation $x_0$ and solution $z$ is solvable by a proper Newton-like method, then the sequence of points generated by this method converges to $z$. If, moreover, the method is such that $\|h_k\| = O(\|x_k - z\|)$ for $k \to \infty$, then the asymptotic order of convergence is quadratic.

Proof. Since (2.14) is satisfied we obtain from (2.10)
$$\|\psi(x_0) - z\| \leq \|\psi(x_0) - \phi(x_0)\| + \|\phi(x_0) - z\| \leq C(x_0, h_0)\,\|\phi(x_0) - x_0\| + \|\phi(x_0) - z\|.$$
Because of (2.15) we know that
$$\|\psi(x_0) - z\| < \|x_0 - z\|.$$
Because of c), d) and e) we can use (2.13), so that with condition (2.16) the result follows immediately.
Although in practice condition e) is a rather strong condition, it gives us a clear insight into the behavior of a certain Newton-like method, provided one can derive results about the consistency factor of the method. In fact, $\sigma$ gives us a possibility of measuring the degree of difficulty of solving the problem with the method. Furthermore, condition d) shows that the larger $\sup_{x \in U_0}\|[J(x)]^{-1}\|$ is, the smaller $h_k$ should be chosen. Note that conditions c), d) and e), with $U_0$ replaced by
$$U = \{y \in \mathbb{R}^n \mid \|y - z\| \leq \|x_0 - z\|\} \subset D,$$
together with the condition that $J(x)$ and $H(x)$ exist on $U$, are sufficient to prove convergence. However, the relevance of Definition 2.4 lies in its use for calculating the solvability number of problems used for testing programs, and using condition b) we may considerably reduce this solvability number. We expect the given definition to be more realistic, which is illustrated by the examples given in Section 4.
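The solvability number of Definition 2.4 can be evaluated mechanically once bounds on the quantities involved are available, as they are for the test problems of Section 4. The following Python sketch is not from the paper; it assumes the caller supplies $C(x_0,h_0)$, the supremum $K$ of condition d), and bounds on $\sup\|H\|$ and $\sup\|[J(x)]^{-1}\|$ over $U_0$, and it leaves the existence and nonsingularity requirements of conditions a) and c) to the caller.

```python
import numpy as np

def solvability_number(x0, z, phi_x0, C0, K, sup_H, sup_Jinv):
    """Hedged sketch of Definition 2.4: returns sigma(F, z, x0, M), or
    infinity when one of the checked conditions fails.
    C0       : C(x0, h0) as in (2.11)
    K        : sup over U0 and k >= 1 of C(x, h_k), condition d)
    sup_H    : bound on sup ||H(y)|| over U0 (Hessian tensor norm)
    sup_Jinv : bound on sup ||[J(x)]^{-1}|| over U0
    Conditions a) and c) (existence, nonsingularity, U0 contained in D)
    are assumed to hold and are not checked here."""
    if C0 >= 1.0:                                    # condition b), (2.14)
        return np.inf
    r0 = C0 * np.linalg.norm(phi_x0 - x0) + np.linalg.norm(phi_x0 - z)
    if r0 >= np.linalg.norm(x0 - z):                 # condition b), (2.15)
        return np.inf
    if K >= 1.0:                                     # condition d)
        return np.inf
    S = 0.5 * sup_H * sup_Jinv                       # bound on S(x, z), (2.8)
    return K + (K + 1.0) * S * r0                    # (2.16)
```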
3. The Effect of Rounding Errors

In this section we consider the effect of round-off errors on the convergence behavior of Newton-like methods. We use the following notation:

$\varepsilon$: the precision of computation used;
$fl_\varepsilon(\cdot)$: the expression inside the parentheses calculated with the precision of computation $\varepsilon$.

If we wish to apply the theory given in Section 2 to a Newton-like method where all computation is done in finite precision (such a method is called a numerical Newton-like method in this section) we are immediately confronted with the problem that a numerical Newton-like method will, in general, not be a proper Newton-like method. Even when we choose
$$M_k = fl_\varepsilon(J(x_k)),$$
which is the best we can do anyhow, we can, in general, only guarantee that
$$\|M_k - J(x_k)\| \leq \vartheta\,\|J(x_k)\|, \qquad (3.1)$$
where $\vartheta \geq \varepsilon$ is some value depending on $\varepsilon$ and the way in which $M_k$ is calculated.
Therefore, the notion "strongly consistent approximation" (cf. Def. 2.1) is not a useful concept when dealing with numerical Newton-like methods. We give an extension of the theory given in Section 2 which is applicable to numerical Newton-like methods. First we introduce a more general concept for measuring the consistency of $M_k$ as an approximation to $J(x_k)$.
Definition 3.1 (see Def. 2.1). Let $F$ be differentiable on $D \subset \mathbb{R}^n$ and let the operator $M$ be defined by (2.1) for some real number $r > 0$ and integer $m > 0$. Then $M(x,h)$ is called a numerically consistent approximation to $J(x)$ on $D_0 \subset D$ if there exist a constant $c_1$ and a function $c_0(\varepsilon, h)$ which is continuous in $\varepsilon$ for fixed $h \neq 0$ and $\varepsilon \geq 0$, such that the following conditions are satisfied:
$$\|J(x) - fl_\varepsilon(M(x,h))\| \leq c_0(\varepsilon, h) + c_1\,\|h\|, \qquad \text{for all } x \in D_0,\ h \in U_r^m \setminus \{0\}, \qquad (3.2)$$
$$\lim_{\varepsilon \to 0} c_0(\varepsilon, h) = 0, \qquad \text{for } h \neq 0. \qquad (3.3)$$
We call
$$c(\varepsilon, h) = c_0(\varepsilon, h) + c_1\,\|h\| \qquad (3.4)$$
the consistency function of $M$ on $D_0$.
As an example of a numerically consistent approximation we again consider the forward difference approximation $B(x,h)$, defined by (1.4). We prove the following theorem.

Theorem 3.2. Assume that $F$ is continuously differentiable on some open set $\hat{D} \subset D$. Then for any compact set $D_0 \subset \hat{D}$ there exists a $\varrho > 0$ such that $B(x,h)$, given by (1.4), is well defined for $h \in U_\varrho^{n^2}$ and $x \in D_0$. Moreover, if (2.3) is satisfied, then $B(x,h)$ is a numerically consistent approximation to $J(x)$ on $D_0$.

Proof. We use the following relations (Wilkinson [8]; Dekker [3]):
$$|fl_\varepsilon(a + b) - (a + b)| \leq (|a| + |b|)\,\varepsilon, \qquad |fl_\varepsilon(a/b) - (a/b)| \leq |a/b|\,\varepsilon.$$
We assume that for some $\delta = \delta(\varepsilon) \geq \varepsilon$
$$|fl_\varepsilon(f_i(x)) - f_i(x)| \leq \delta\,|f_i(x)|, \qquad \text{for all } x \in D,\ i = 1, 2, \ldots, n,$$
where $F(x) = (f_1(x), \ldots, f_n(x))^T$. Now suppose $h_{ij} \neq 0$. Then some simple algebra shows that the error in the forward difference approximation to an element of the Jacobian matrix can be bounded by
$$\big|fl_\varepsilon\big((B(x,h))_{ij}\big) - (B(x,h))_{ij}\big| \leq \varepsilon\,|(B(x,h))_{ij}| + \frac{3\delta}{|h_{ij}|}\big(|f_i(x)| + |f_i(x + h_{ij} e^j)|\big),$$
where we assumed that $\delta < \tfrac{1}{4}$, which seems reasonable. Hence, using the $l_1$-norm, we obtain
$$\|fl_\varepsilon(B(x,h)) - J(x)\| \leq (1 + \varepsilon)\,\|B(x,h) - J(x)\| + \varepsilon\,\|J(x)\| + \frac{3(n+1)\,\delta}{h_{\min}} \sup_{\|y - x\| \leq \|h\|}\big(\|F(y)\|\big),$$
where $h_{\min} = \min\{|h_{ij}| : i, j = 1, \ldots, n,\ h_{ij} \neq 0\}$.

From Theorem 2.2 and the fact that $D_0$ is compact and $\hat{D}$ open we know that there exist a $\varrho > 0$ and a $\bar{c}_1$ such that
$$\|B(x,h) - J(x)\| \leq \bar{c}_1\,\|h\|, \qquad \text{for } h \in U_\varrho^{n^2}.$$
Choose
$$c_0(\varepsilon, h) = \frac{3(n+1)\,\delta}{h_{\min}}\,\sup_{x \in \hat{D}}\big(\|F(x)\|\big) + \varepsilon\,\sup_{x \in D_0}\big(\|J(x)\|\big), \qquad (3.5)$$
$$c_1 = (1 + \varepsilon)\,\bar{c}_1.$$
Then the theorem is proved, since
$$\lim_{\varepsilon \to 0} \delta(\varepsilon) = 0. \qquad \square$$
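The choices made in this proof can be turned directly into a formula for the consistency function. The following sketch follows (3.4)-(3.5) as reconstructed above; it is not from the paper, and all argument names are illustrative assumptions.

```python
def consistency_function(eps, h_norm, h_min, delta, c1_bar, sup_F, sup_J, n):
    """Hedged sketch of the consistency function (3.4) for the forward
    difference approximation: c(eps, h) = c0(eps, h) + c1 * ||h||, with
    c0(eps, h) as in (3.5) and c1 = (1 + eps) * c1_bar.
    delta  : bound on the relative error of the computed f_i(x)
    c1_bar : strong-consistency factor of B(x, h) from Theorem 2.2
    sup_F  : bound on sup ||F(x)||, sup_J : bound on sup ||J(x)||."""
    c0 = 3.0 * (n + 1) * delta / h_min * sup_F + eps * sup_J   # (3.5)
    c1 = (1.0 + eps) * c1_bar
    return c0 + c1 * h_norm                                    # (3.4)
```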
We are now ready to define whether we may expect a numerical Newton-like method to behave like Newton's method.

Definition 3.3 (see Def. 2.3). We call a numerical Newton-like method for solving (1.1) a proper numerical Newton-like method if there exist an operator $M$ as given by (2.1), for some integer $m$ and real $r$, and $h_k \in U_r^m \setminus \{0\}$ ($k = 0, 1, 2, \ldots$), such that
$$M_k = fl_\varepsilon(M(x_k, h_k))$$
and $M(x,h)$ is a numerically consistent approximation to $J(x)$ on $D_0$. Such a method is denoted by $N(M, \varepsilon)$.
We give an analysis of proper numerical Newton-like methods which is analogous to the analysis of a proper Newton-like method. Denote by $\bar\psi(x)$ the vector which exactly satisfies the equation
$$fl_\varepsilon(M(x,h))\,(\bar\psi(x) - x) = -F(x). \qquad (3.6)$$
With the same assumptions as in Section 2, we obtain (cf. (2.10)):
$$\|\phi(x) - \bar\psi(x)\| \leq C(x, h, \varepsilon)\,\|[J(x)]^{-1} F(x)\|, \qquad (3.7)$$
where it is assumed that
$$C(x, h, \varepsilon) = 2\,c(\varepsilon, h)\,\|[J(x)]^{-1}\| < 1 \qquad (3.8)$$
and $c(\varepsilon, h)$ is given by (3.4).

Let $fl_\varepsilon(\psi(x))$ be the numerical approximation to $\psi(x)$. Then
$$fl_\varepsilon(\psi(x)) = fl_\varepsilon\big(fl_\varepsilon(\bar\psi(x) - x) + x\big), \qquad (3.9)$$
where $fl_\varepsilon(\bar\psi(x) - x)$ denotes the numerical solution of the system (3.6), where $F(x)$ is replaced by $fl_\varepsilon(F(x))$.
Now, suppose we want to solve with Gaussian elimination, on a computer with precision of arithmetic $\varepsilon$, the linear system
$$A x = b,$$
where $A$ is given exactly but $b$ is not. Let the error in $b$ be bounded by $\|\delta b\|$. Then the error in the numerical solution $\bar{x}$ as an approximation to the exact solution $x^*$ is bounded by
$$\frac{\|\bar{x} - x^*\|}{\|x^*\|} \leq \kappa(A)\left[\frac{\varepsilon\,g(n)}{1 - \kappa(A)\,\varepsilon\,g(n)} + \frac{\|\delta b\|}{\|b\|}\right], \qquad (3.10)$$
where $\kappa(A) = \|A\|\,\|A^{-1}\|$ and $g(n)$ is some function depending on the order $n$, the norm used and the pivoting strategy used (Wilkinson [8]), and where it is assumed that $\kappa(A)\,\varepsilon\,g(n) < 1$.

Using the perturbation lemma it is easily shown that $\kappa\big(fl_\varepsilon(M(x,h))\big) \leq 3\,\kappa(J(x))$.
Applying these results to $fl_\varepsilon(\bar\psi(x) - x)$ we obtain
$$\|fl_\varepsilon(\bar\psi(x) - x) - (\bar\psi(x) - x)\| \leq \alpha(x, \varepsilon, n)\,\|\bar\psi(x) - x\|, \qquad (3.11)$$
where
$$\alpha(x, \varepsilon, n) = 3\,\kappa(J(x))\left[\frac{\varepsilon\,g(n)}{1 - 3\,\kappa(J(x))\,\varepsilon\,g(n)} + \bar\delta\right] \qquad (3.12)$$
and $\bar\delta$ satisfies
$$\|fl_\varepsilon(F(x)) - F(x)\| \leq \bar\delta\,\|F(x)\|. \qquad (3.13)$$
We assumed that $3\,\kappa(J(x))\,\varepsilon\,g(n) < 1$. Combining (3.9) and (3.11) we obtain
$$\|fl_\varepsilon(\psi(x)) - \bar\psi(x)\| \leq \varepsilon\big(\|fl_\varepsilon(\bar\psi(x) - x)\| + \|x\|\big) + \|fl_\varepsilon(\bar\psi(x) - x) - (\bar\psi(x) - x)\| \leq \varepsilon\,\|x\| + \beta(x, \varepsilon, n)\,\|\bar\psi(x) - x\|, \qquad (3.14)$$
where
$$\beta(x, \varepsilon, n) = (1 + \varepsilon)\,\alpha(x, \varepsilon, n) + \varepsilon. \qquad (3.15)$$
Finally, combining (2.7), (3.7) and (3.14) we obtain for the error in $fl_\varepsilon(\psi(x))$ as an approximation to the solution $z$:
$$\|fl_\varepsilon(\psi(x)) - z\| \leq \|fl_\varepsilon(\psi(x)) - \bar\psi(x)\| + \|\bar\psi(x) - z\| \leq \varepsilon\,\|x\| + \beta(x, \varepsilon, n)\,\|x - z\| + \big(1 + \beta(x, \varepsilon, n)\big)\,\|\bar\psi(x) - z\|. \qquad (3.16)$$
With
$$\|\bar\psi(x) - z\| \leq \|\bar\psi(x) - \phi(x)\| + \|\phi(x) - z\|$$
we obtain as the final result:
$$\|fl_\varepsilon(\psi(x)) - z\| \leq \varepsilon\,\|x\| + L(x, \varepsilon, h, n)\,\|x - z\| + Q(x, \varepsilon, h, n, z)\,\|x - z\|^2, \qquad (3.17)$$
where
$$L(x, \varepsilon, h, n) = \beta(x, \varepsilon, n) + \big(1 + \beta(x, \varepsilon, n)\big)\,C(x, h, \varepsilon) \qquad (3.18)$$
and
$$Q(x, \varepsilon, h, n, z) = \big(1 + \beta(x, \varepsilon, n)\big)\,\big(1 + C(x, h, \varepsilon)\big)\,S(x, z). \qquad (3.19)$$
From the first term on the right-hand side of (3.17) we see that one cannot expect to find a solution of a nonlinear system with a proper numerical Newton-like method within a relative precision which is higher than the precision of computation. Furthermore, whether there is convergence at all depends on the quantities:

$S(x,z)$, the convergence factor of the exact Newton method;
$C(x,h,\varepsilon)$, which is a measure for the error in $fl_\varepsilon(M(x,h))$ as a numerical approximation to $J(x)$; this quantity depends on the method;
$\beta(x,\varepsilon,n)$, which reflects the condition number of the linear subproblem; the condition number $\kappa(J(x))$ should be small relative to $1/\varepsilon$.

In any case, $L(x,\varepsilon,h,n) + Q(x,\varepsilon,h,n,z)\,\|x - z\|$ has to be less than 1 in order to be able to guarantee convergence.
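A sketch of this check in Python, following (3.12), (3.15), (3.18) and (3.19) as reconstructed above, is given below; it is not part of the paper, and the argument names are illustrative. It evaluates the quoted sufficient condition $L + Q\,\|x - z\| < 1$ from user-supplied bounds.

```python
def convergence_condition(kappa_J, eps, g_n, delta_bar, C, S, err):
    """Hedged sketch: evaluate L + Q*||x - z|| < 1.
    kappa_J   : bound on the condition number of J(x)
    eps, g_n  : machine precision and growth factor g(n) of the solver
    delta_bar : relative error bound (3.13) on fl_eps(F(x))
    C         : C(x, h, eps) from (3.8), S : S(x, z), err : ||x - z||."""
    if 3.0 * kappa_J * eps * g_n >= 1.0:
        raise ValueError("assumption 3*kappa(J)*eps*g(n) < 1 violated")
    alpha = 3.0 * kappa_J * (eps * g_n / (1.0 - 3.0 * kappa_J * eps * g_n)
                             + delta_bar)          # (3.12)
    beta = (1.0 + eps) * alpha + eps               # (3.15)
    L = beta + (1.0 + beta) * C                    # (3.18)
    Q = (1.0 + beta) * (1.0 + C) * S               # (3.19)
    return L + Q * err < 1.0
```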
We summarize these results in the following definition.

Definition 3.4 (see Def. 2.4). Let a nonlinear system be defined by (1.1) and let $x_0 \in D$ be an approximation to the solution $z$ of (1.1). Then we call this problem solvable by a proper numerical Newton-like method $N(M, \varepsilon)$ if the following conditions are satisfied:

a) $J(x)$ and $H(x)$ exist on $D$ and
$$\kappa(J(x_0)) < 1/(3\,\varepsilon\,g(n)),$$
where $g(n)$ depends on the method used for solving the linear system (cf. (3.10)).

b) $h_0$ satisfies $C(x_0, h_0, \varepsilon) < 1$, and if
$$r_0 = \varepsilon\,\|x_0\| + \|\phi(x_0) - z\| + \big[\beta(x_0, \varepsilon, n) + \big(1 + \beta(x_0, \varepsilon, n)\big)\,C(x_0, h_0, \varepsilon)\big]\,\|\phi(x_0) - x_0\|,$$
then
$$r_0 < \|x_0 - z\|.$$

c)
$$U_0 = \{y \in \mathbb{R}^n \mid \|y - z\| \leq r_0\} \subset D$$
and
$$\sup_{x \in U_0} \kappa(J(x)) < 1/(3\,\varepsilon\,g(n)).$$

d) If $K$ is defined by
$$K = \sup_{x \in U_0,\ k = 1, 2, \ldots} C(x, h_k, \varepsilon),$$
then $h_k$ satisfies $K < 1$.

e)
$$\sigma(F, z, x_0, M, \varepsilon) = \beta + (1 + \beta)\,K + (1 + \beta)(1 + K)\,S\,r_0 < 1,$$
where
$$S = \sup_{x \in U_0} S(x, z), \qquad \beta = \sup_{x \in U_0} \beta(x, \varepsilon, n).$$

If a) to d) are satisfied, then $\sigma(F, z, x_0, M, \varepsilon)$ is called the solvability number of the proper numerical Newton-like method $N(M, \varepsilon)$ for solving the nonlinear system $F(x) = 0$ with $x_0$ as initial guess and $z$ as solution. If a), b), c) or d) are not satisfied, then the solvability number is defined to be infinite.
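For completeness, a minimal sketch of condition e) of Definition 3.4 follows; it assumes that the quantities $\beta$, $K$, $S$ and $r_0$ have already been bounded as in conditions a) to d), and the function name is an illustrative assumption, not notation from the paper.

```python
def numerical_solvability_number(beta, K, S, r0):
    """Hedged sketch of condition e) of Definition 3.4:
    sigma = beta + (1 + beta)*K + (1 + beta)*(1 + K)*S*r0.
    Conditions a)-d) must be verified separately; if any of them fails,
    the solvability number is defined to be infinite."""
    return beta + (1.0 + beta) * K + (1.0 + beta) * (1.0 + K) * S * r0
```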
The following theorem is now easily proved.
Theorem 3.5 (see Theorem 2.5). If a system of nonlinear equations defined by (1.1) with initial approximation $x_0$ and solution $z$ is solvable by a proper numerical Newton-like method $N(M, \varepsilon)$, then the sequence of points generated by this method converges to a point $x^*$ with $\|x^* - z\| < \varepsilon\,\|x^*\|$.

Proof. The proof is similar to the proof of Theorem 2.5. $\square$
4. Some Examples
Consider the problem given by Gheri and Mancino [4]:
$$f_i(x) = \beta n x_i + (i - n/2)^\gamma + \sum_{\substack{j=1 \\ j \neq i}}^{n} z_{ij}\big(\sin^\alpha(\log z_{ij}) + \cos^\alpha(\log z_{ij})\big), \qquad (4.1)$$
where
$$F(x) = (f_1(x), \ldots, f_n(x))^T$$
and
$$z_{ij} = \sqrt{x_j^2 + i/j}.$$
Elementary computation leads to the following inequalities for the elements $J_{ij}(x)$ of the Jacobian matrix and $H_{ijk}(x)$ of the Hessian tensor:
$$J_{ii}(x) = \beta n, \qquad |J_{ij}(x)| \leq \alpha + 1 \quad \text{for } i \neq j,$$
$$H_{ijk}(x) = 0 \quad \text{if } i = j \text{ or } j \neq k, \qquad |H_{ijk}(x)| \leq (\alpha + 1)^2 \quad \text{for } i \neq j,\ j = k.$$
Use of Gershgorin's Theorem for bounding the eigenvalues of a matrix leads to
$$\|J(x)\| \leq \big[\beta^2 n^2 + (2\beta n + (n-1)(\alpha+1))(\alpha+1)(n-1)\big]^{1/2},$$
$$\|[J(x)]^{-1}\| \leq \big[\beta^2 n^2 - (2\beta n + (n-2)(\alpha+1))(\alpha+1)(n-1)\big]^{-1/2}. \qquad (4.2)$$
Furthermore
$$\|H(x)\| \leq \sqrt{n-1}\,(\alpha+1)^2. \qquad (4.3)$$
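A Python sketch of this test problem may be useful for experimentation. It is not from the paper; it assumes the reading of (4.1) given above, with $z_{ij} = \sqrt{x_j^2 + i/j}$ and indices counted from 1, and it evaluates the bounds (4.2) as reconstructed here.

```python
import numpy as np

def mancino_F(x, alpha=5, beta=14, gamma=3):
    """Sketch of the Gheri-Mancino test function (4.1)."""
    n = x.size
    F = np.empty(n)
    for i in range(1, n + 1):
        s = 0.0
        for j in range(1, n + 1):
            if j == i:
                continue
            z = np.sqrt(x[j - 1] ** 2 + i / j)
            s += z * (np.sin(np.log(z)) ** alpha + np.cos(np.log(z)) ** alpha)
        F[i - 1] = beta * n * x[i - 1] + (i - n / 2) ** gamma + s
    return F

def jacobian_bounds(alpha=5, beta=14, gamma=3, n=10):
    """Bounds (4.2) on ||J(x)|| and ||[J(x)]^{-1}||; the second bound is
    only meaningful when the bracketed expression is positive."""
    a = alpha + 1
    upper_J = np.sqrt(beta**2 * n**2 + (2 * beta * n + (n - 1) * a) * a * (n - 1))
    upper_Jinv = (beta**2 * n**2 - (2 * beta * n + (n - 2) * a) * a * (n - 1)) ** -0.5
    return upper_J, upper_Jinv
```

With the parameter values (4.4) below, `jacobian_bounds()` returns finite bounds, so the problem is well conditioned in the sense required by Definitions 2.4 and 3.4.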
Now let methods A and B be Newton-like methods with
$$M_k^A = fl_\varepsilon(J(x_k)), \qquad M_k^B = fl_\varepsilon\big(B(x_k, 0.0001)\big),$$
where $B(x, 0.0001)$ is defined by (1.4) with $h_{ij} = h = 0.0001$ ($i, j = 1, \ldots, n$). The precision of arithmetic is chosen to be
$$\varepsilon = 10^{-14},$$
and we assume that in both methods Gaussian elimination with complete pivoting is used (see Wilkinson [8]), so that
$$g(n) \approx 20\,n^3.$$
We use (3.1) and (3.5) to obtain the consistency functions $c^A(\varepsilon, h)$ and $c^B(\varepsilon, h)$ of methods A and B respectively, where $\vartheta$ in (3.1) is assumed to be $10^{-13}$, which is very reasonable.
We consider the problem for which
$$\alpha = 5, \qquad \beta = 14, \qquad \gamma = 3, \qquad n = 10. \qquad (4.4)$$
Our goal is not to solve this problem (in fact we assume that it is already solved), but we want to know whether it is solvable by the given methods A and B, and what the values of the solvability numbers are.

Let $z$ denote the solution and suppose $x_0$ is chosen such that
$$x_0^{(i)} = z^{(i)} + 1, \qquad (4.5)$$
where $x^{(i)}$ denotes the $i$-th component of the vector $x$. Then computation of $\|x_0 - z\|$, $\|\phi(x_0) - z\|$ and $\|\phi(x_0) - x_0\|$ with a computer delivers, in particular,
$$\|x_0 - z\| \approx 3.2.$$
Using the above results it is easy to verify the conditions of Definition 3.4. We obtain
$$r_0^A < 0.023, \qquad r_0^B < 0.025,$$
and finally for the solvability numbers
$$\sigma(F, z, x_0, M^A, \varepsilon) \approx 1.24 \times r_0^A < 0.029,$$
$$\sigma(F, z, x_0, M^B, \varepsilon) \approx 1.24 \times r_0^B < 0.031.$$
Therefore, we may conclude that Problem (4.1) with $\alpha$, $\beta$, $\gamma$ and $n$ given by (4.4) and the starting point given by (4.5) is an excellent test problem which should be solved easily by each program implementing algorithm $N(M^A, 10^{-14})$ or $N(M^B, 10^{-14})$.

Note that if we had simplified Definition 3.4 such that $U_0$ was chosen to be $\{y \in \mathbb{R}^n \mid \|y - z\| \leq \|x_0 - z\|\}$ and condition b) deleted, then the solvability numbers would have been approximately 4.0 and convergence would not have been proved. Hence, the given definition is preferable (see also the end of Section 2).
References
1. Bus, J. C. P.: A comparative study of programs for solving nonlinear equations. Report NW 25/75, Mathematisch Centrum, Amsterdam (1975)
2. Collatz, L.: Funktionalanalysis und numerische Mathematik. Berlin-Göttingen-Heidelberg-New York: Springer 1964. English edition: New York: Academic Press 1966
3. Dekker, T. J.: Numerieke Algebra. MC Syllabus 12, Mathematisch Centrum, Amsterdam (1971)
4. Gheri, G., Mancino, O. G.: A significant example to test methods for solving systems of nonlinear equations. Calcolo 8, 107-113 (1971)
5. Kantorovich, L.: On Newton's method for functional equations [Russian]. Dokl. Akad. Nauk SSSR 59, 1237-1240 (1948)
6. Ortega, J. M., Rheinboldt, W. C.: Iterative solution of nonlinear equations in several variables. New York: Academic Press 1970
7. Rall, L. B.: Computational solution of nonlinear operator equations. New York: Wiley 1969
8. Wilkinson, J. H.: Rounding errors in algebraic processes. Notes on Applied Science no. 32. Englewood Cliffs, N.J.: Prentice-Hall 1963
9. Wilkinson, J. H.: The algebraic eigenvalue problem. Oxford: Clarendon Press 1965
J. C. P. Bus
Mathematical Centre
Tweede Boerhaavestraat 49
Amsterdam, The Netherlands