MATH 175: Numerical Analysis II
Lecturer: Jomar Fajardo Rabajante
IMSP, UPLB
2nd Sem AY 2012-2013
5th Method: Newton’s
Method/Newton-Raphson Iteration
• http://www.youtube.com/watch?v=hhT25CO6wDI
____________________________________________
Theorem: Assume that r is a zero of the differentiable
function f. Then if
$$0 = f(r) = f'(r) = f''(r) = \cdots = f^{(q-1)}(r)$$
but
$$f^{(q)}(r) \neq 0,$$
then f has a zero of multiplicity q at r.
The root is called simple if the multiplicity is one.
5th Method: Newton’s
Method/Newton-Raphson Iteration
x i 1  x i 
f ( xi )
f ' ( xi )
f ' ( xk )  0
y=x^2, has root of
multiplicity 2.
Newton’s Method
converges, but is
slowing down near
x=0.
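A minimal Python sketch of the update above, run on y = x² from the note (the starting point and number of steps are arbitrary illustrative choices):

```python
# Newton's method: x_{i+1} = x_i - f(x_i)/f'(x_i).
def newton(f, fprime, x0, steps=10):
    x = x0
    for k in range(steps):
        fp = fprime(x)
        if fp == 0:                       # the update requires f'(x_i) != 0
            raise ZeroDivisionError("f'(x_i) = 0: Newton step undefined")
        x = x - f(x) / fp
        print(k + 1, x)
    return x

# f(x) = x^2 has a root of multiplicity 2 at x = 0: the iterates are only
# halved each step (1.0, 0.5, 0.25, ...), so convergence slows to linear.
newton(lambda x: x**2, lambda x: 2 * x, x0=1.0)
```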
5th Method: Newton’s
Method/Newton-Raphson Iteration
Assume that Newton’s method converges to the root r.
Usually, Newton’s method is quadratically (fast) convergent
if the root r is simple. If the root is NOT simple, Newton’s
method converges only linearly.
Newton’s Method Convergence Theorem: Assume that the
(q+1)-times continuously differentiable function f on [a,b]
has a multiplicity q root at r є [a,b]. Then Newton’s method
is locally convergent to r, and the error error_k at iteration k
satisfies
$$\lim_{k \to \infty} \frac{\text{error}_{k+1}}{\text{error}_k} = \lambda, \qquad \text{where } \lambda = \frac{q-1}{q}.$$
5th Method: Newton’s
Method/Newton-Raphson Iteration
Example: If q = 3,
$$\lambda = \frac{3-1}{3} = \frac{2}{3}.$$
Hence,
$$\text{error}_{k+1} \approx \frac{2}{3}\,\text{error}_k.$$
Near convergence, this means the error shrinks by a factor of 2/3 on each iteration, i.e. the error is O((2/3)^k).
For the previous methods, lambda can be interpreted in the same way. This interpretation is not applicable when lambda = 0.
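A quick numerical check of this ratio, using f(x) = x³ (an illustrative choice with a root of multiplicity q = 3 at 0):

```python
# error_{k+1}/error_k should approach (q-1)/q = 2/3 for f(x) = x^3, r = 0.
x = 1.0
for k in range(8):
    x_new = x - x**3 / (3 * x**2)     # Newton step; here it equals (2/3) x
    print(abs(x_new) / abs(x))        # ratio of successive errors |x_k - 0|
    x = x_new
```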
5th Method: Newton’s
Method/Newton-Raphson Iteration
If f is (q+1)-times continuously differentiable on [a,b],
which contains a root r of multiplicity q>1, then
Modified Newton’s method
x i 1  x i  q
f ( xi )
f ' ( xi )
converges locally and quadratically to r.
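A small sketch comparing this update with the standard one on f(x) = x² (so q = 2); the starting point is an arbitrary illustrative choice:

```python
# Modified Newton's method: x_{i+1} = x_i - q * f(x_i)/f'(x_i).
# For f(x) = x^2 with q = 2, the step x - 2*x^2/(2x) = 0 lands on the root
# immediately, while the standard update only halves the iterate each step.
def modified_newton(f, fprime, q, x0, steps=10):
    x = x0
    for _ in range(steps):
        if f(x) == 0:              # already at a root (exact in this toy case)
            break
        x = x - q * f(x) / fprime(x)
        print(x)
    return x

modified_newton(lambda x: x**2, lambda x: 2 * x, q=2, x0=1.0)
```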
5th Method: Newton’s
Method/Newton-Raphson Iteration
Assume Newton’s method is applied to a
function with a zero of multiplicity q > 1. The
multiplicity of the zero can be estimated as
the integer nearest to
$$\frac{1}{1 - \dfrac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}}}.$$
Apply this only if we do not know q.
An iterative method is
called locally convergent
to r if the method
converges to r for initial
guesses sufficiently close
to r.
Example: Newton’s
Convergence Theorem
guarantees convergence
to a simple zero if f is
twice differentiable and if
we start the iteration
sufficiently close to the
zero.
Another Failure of Newton’s Method
5th Method: Newton’s
Method/Newton-Raphson Iteration
Stopping Criterion:
If Newton’s method is linearly (i.e. not
quadratically) convergent use (same as
Regula Falsi)
$$\lambda \approx \frac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}}, \qquad \frac{\lambda}{1 - \lambda}\,\left| x_k - x_{k-1} \right| \le \text{tol}$$
5th Method: Newton’s
Method/Newton-Raphson Iteration
Stopping Criterion:
If Newton’s method is superlinearly (e.g.
quadratically) convergent or if we do not
know the order of convergence use
$$\left| x_k - x_{k-1} \right| \le \text{tol}$$
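A sketch of how both stopping tests might be coded inside a Newton loop; the tolerance, iteration cap, and sample function f(x) = x² − 2 are illustrative assumptions:

```python
# Stopping tests for Newton's method:
#   linear convergence (multiple root): lambda/(1-lambda) * |x_k - x_{k-1}| <= tol
#   superlinear / unknown order:        |x_k - x_{k-1}| <= tol
def newton_with_stop(f, fprime, x0, tol=1e-8, linear=False, max_steps=100):
    xs = [x0]
    for _ in range(max_steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
        step = abs(xs[-1] - xs[-2])
        if linear and len(xs) >= 3:
            lam = (xs[-1] - xs[-2]) / (xs[-2] - xs[-3])   # estimate of lambda
            if lam != 1 and abs(lam / (1 - lam)) * step <= tol:
                break
        elif not linear and step <= tol:
            break
    return xs[-1]

print(newton_with_stop(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))  # ~ sqrt(2)
```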
A GENERAL METHOD: FIXED POINT
ITERATION
Definition:
An iteration of the form:
$$x_k = g(x_{k-1})$$
is called a fixed point (or functional) iteration.
And any x* such that
$$x^* = g(x^*)$$
is called a fixed point of g.
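A generic sketch of the iteration x_k = g(x_{k-1}); the tolerance and iteration cap are illustrative choices:

```python
import math

# Fixed point iteration: repeatedly apply x_k = g(x_{k-1}).
def fixed_point_iteration(g, x0, tol=1e-10, max_steps=1000):
    x = x0
    for _ in range(max_steps):
        x_new = g(x)
        if abs(x_new - x) <= tol:      # successive iterates close enough
            return x_new
        x = x_new
    return x                           # may not have converged within max_steps

# Illustrative use: x_k = sqrt(x_{k-1}) from x_0 = 0.3 approaches the fixed
# point x* = 1, since sqrt(1) = 1.
print(fixed_point_iteration(math.sqrt, 0.3))
```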
A GENERAL METHOD: FIXED POINT
ITERATION
Questions:
Is the secant method a fixed point iteration?
Is Newton’s method a fixed point iteration?
We are going to study the generalization of Secant and
Newton’s methods and look at the origin of the
sufficient conditions for the methods to converge to a
zero of a function (e.g. Newton’s Method Convergence
Theorem was proved using the Fixed Point Theorem)…
Actually, everything that is true for Fixed Point
Iteration is applicable to Secant and Newton’s
Method.
A GENERAL METHOD: FIXED POINT
ITERATION
How to generate a fixed point iteration for solving
roots?
Example: $x^2 - 2x + 1 = 0$
1st Method: Use Secant or Newton’s Transform
Newton’s Transform:
$$x_k = x_{k-1} - \frac{x_{k-1}^2 - 2x_{k-1} + 1}{2x_{k-1} - 2}$$
A GENERAL METHOD: FIXED POINT
ITERATION
2nd Method:
Separate x in the equation. Examples:
x 1
2
x
x
2x 1
2
2
xk 
x k 1  1
2
xk 
2 x k 1  1
A GENERAL METHOD: FIXED POINT
ITERATION
2nd Method:
What is the idea of this method?
$$x^2 - 2x + 1 = 0 \quad\Longleftrightarrow\quad x = \frac{x^2 + 1}{2}$$
Solving for the zero of $f(x) = x^2 - 2x + 1$ is the same as solving for the intersection of $y = x$ and $y = g(x) = \dfrac{x^2 + 1}{2}$.
[Figure: graphs of y = x, y = g(x) = (x² + 1)/2, and f(x) = x² − 2x + 1. At a fixed point the y-value equals the x-value, and the x-value of the fixed point is the root of the original function.]
A GENERAL METHOD: FIXED POINT
ITERATION
Exercise:
Let us go back to our original equation:
From $x^2 - 2x + 1 = 0$,
derive this fixed point formula:
$$x_k = \frac{5x_{k-1}^2 - 1}{6x_{k-1} - 2}$$
A GENERAL METHOD: FIXED POINT
ITERATION
Question: Can every equation f(x)=0 be
turned into a fixed point problem?
YES, and in many different ways…
However, not all of them converge!!! And
actually, not all of them even have fixed points (no
intersection with y=x)!!!
A GENERAL METHOD: FIXED POINT
ITERATION
• How do the iterations work?
THE COBWEB DIAGRAM (Geometric
representation of FPI)
1. Draw a vertical line segment to the function.
2. Then draw a horizontal line segment to the
diagonal y=x.
3. Repeat.
[Figure: cobweb diagram of a diverging iteration. DIVERGING!!! FPI fails! What if we start here?]
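A matplotlib sketch of such a cobweb diagram, using g(x) = (x² + 1)/2 from the earlier example; the starting point, window, and number of steps are arbitrary illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt

def cobweb(g, x0, steps=20, lo=0.0, hi=1.6):
    xs = np.linspace(lo, hi, 400)
    plt.plot(xs, g(xs), label="y = g(x)")
    plt.plot(xs, xs, label="y = x")
    x = x0
    for _ in range(steps):
        y = g(x)
        plt.plot([x, x], [x, y], "k", lw=0.8)   # 1. vertical segment to the curve
        plt.plot([x, y], [y, y], "k", lw=0.8)   # 2. horizontal segment to y = x
        x = y                                   # 3. repeat
    plt.legend()
    plt.show()

cobweb(lambda x: (x**2 + 1) / 2, x0=0.2)
```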
A GENERAL METHOD: FIXED POINT
ITERATION
• Intuitively, what do you think is the condition
for convergence? (Hint: look at the slopes)
[Figure: cobweb diagrams. CONVERGING where |slope of g| < 1 near the fixed point; DIVERGING where |slope| > 1; slope = 1 is the borderline case.]
A GENERAL METHOD: FIXED POINT
ITERATION
FIXED POINT THEOREM: Let the iteration function g be
continuous on the closed interval [a,b] with g:[a,b]→[a,b].
Furthermore, suppose that g is differentiable on the open
interval (a,b) and there exists a positive constant K<1
(capital letter K) such that |g’(x)|<K<1 for all xє(a,b). Then
• The fixed point x* in [a,b] exists and is unique.
• The sequence {xk} generated by xk=g(xk-1) converges to x*
for any x0є[a,b].
$$\left| x_k - x^* \right| \le K^k \max\{\, x_0 - a,\; b - x_0 \,\}$$
$$\left| x_k - x^* \right| \le \frac{K^k}{1 - K}\,\left| x_1 - x_0 \right|$$
Assignment:
PROVE!!!
Example: xk=sqrt(xk-1) on [0.3,2]. We should be
sure that there is a fixed point in [0.3,2]
1. g(x)=sqrt(x), continuous on [0.3,2]? YES!
2. g:[0.3,2]→[0.3,2]? YES! Why? This square root function is
monotonically increasing. sqrt(0.3)=0.5477… and
sqrt(2)=1.4142... So [sqrt(0.3),sqrt(2)] is a subset of [0.3,2].
3. |g’(x)|=0.5/sqrt(x)<1 for all x in [0.3,2]? YES! Why? |g’(x)|
is monotonically decreasing so
|0.5/sqrt(0.3)|=0.91287…<1. Actually, K=0.5/sqrt(0.3)
Hence, the fixed point in [0.3,2] exists and is unique. And if
you use any starting value in [0.3,2], the iteration will
converge to the fixed point.
Try [0.1,2]: would the theorem hold?
Iterates x_k = sqrt(x_{k-1}) starting from x_0 = 0.3:
0.547723, 0.740083, 0.860281, 0.927513, 0.963075, 0.981364, 0.990638, 0.995308,
0.997651, 0.998825, 0.999412, 0.999706, 0.999853, 0.999927, 0.999963, 0.999982,
0.999991, 0.999995, 0.999998, 0.999999, 0.999999, 1, 1
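A short numerical check of this example against the theorem's second error bound (K = 0.5/sqrt(0.3) and x_0 = 0.3 as above; the number of steps is an arbitrary choice):

```python
import math

K = 0.5 / math.sqrt(0.3)                # bound on |g'(x)| over [0.3, 2]
xs = [0.3]
for _ in range(23):
    xs.append(math.sqrt(xs[-1]))        # x_k = sqrt(x_{k-1})

for k in range(1, len(xs)):
    bound = K**k / (1 - K) * abs(xs[1] - xs[0])   # K^k/(1-K) * |x_1 - x_0|
    print(k, xs[k], abs(xs[k] - 1.0) <= bound)    # the true error obeys the bound
```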
A GENERAL METHOD: FIXED POINT
ITERATION
Note that the hypotheses of the fixed point theorem are
sufficient conditions for convergence of the iteration
scheme, but not necessary.
However, we may add: if |g’(x)| > 1 for all x є (a,b), then the
iteration diverges. If |g’(x)| = 1, no conclusion can be
made.
If |g’(x*)| < 1, then the fixed point is said to be attracting.
If |g’(x*)| > 1, then the fixed point is said to be repelling.
A GENERAL METHOD: FIXED POINT
ITERATION
Assume that the hypotheses of the fixed point
theorem are met, also assume g’ is continuous
on (a,b). If g’(x*)≠0, then for any starting
value in [a,b], the iteration will converge only
linearly to the fixed point.
Example: try to get g’(x). Consider [a,b] = [−1,1] and
$$g(x) = x - \frac{f(x)}{f'(x)}, \quad \text{where } f(x) = x^2.$$
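Working the example out (a brief sketch of the computation):
$$g(x) = x - \frac{x^2}{2x} = \frac{x}{2}, \qquad g'(x) = \frac{1}{2}, \qquad g'(x^*) = g'(0) = \frac{1}{2} \neq 0,$$
so from any starting value in [−1,1] the iteration converges to the fixed point x* = 0, but only linearly: each step halves the error, matching λ = (q − 1)/q = 1/2 for the double root of f(x) = x².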
A GENERAL METHOD: FIXED POINT
ITERATION
To obtain a higher-order convergence, the
iteration function must have a zero derivative
at the fixed point. The more derivatives of the
iteration function which are zero at the fixed
point, the higher will be the order of
convergence.
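This is, for instance, why Newton’s method, viewed as a fixed point iteration, is quadratically convergent at a simple root; a brief sketch:
$$g(x) = x - \frac{f(x)}{f'(x)} \quad\Longrightarrow\quad g'(x) = \frac{f(x)\,f''(x)}{\bigl(f'(x)\bigr)^2},$$
so at a simple root r (where f(r) = 0 and f'(r) ≠ 0) we get g'(r) = 0, the zero-derivative condition above.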
A GENERAL METHOD: FIXED POINT
ITERATION
Stopping Criterion:
If fixed point iteration is linearly convergent use
$$g'(x^*) \approx \lambda \approx \frac{x_k - x_{k-1}}{x_{k-1} - x_{k-2}}, \qquad \frac{\lambda}{1 - \lambda}\,\left| x_k - x_{k-1} \right| \le \text{tol}$$
A GENERAL METHOD: FIXED POINT
ITERATION
Stopping Criterion:
If fixed point iteration is superlinearly
convergent or if we do not know the order of
convergence use
$$\left| x_k - x_{k-1} \right| \le \text{tol}$$
A GENERAL METHOD: FIXED POINT
ITERATION
ANOTHER FIXED-POINT TRANSFORM:
HALLEY’S METHOD (has cubic order of
convergence for simple zeros)
$$x_k = x_{k-1} - \frac{2\,f(x_{k-1})\,f'(x_{k-1})}{2\bigl(f'(x_{k-1})\bigr)^2 - f(x_{k-1})\,f''(x_{k-1})}$$
Halley’s Method belongs to the family of
Householder’s Methods for root-finding.
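A minimal Python sketch of this update; the function f(x) = x³ − 2 and the starting point are illustrative choices:

```python
# Halley's method: x_k = x_{k-1} - 2 f f' / (2 (f')^2 - f f'').
def halley(f, fp, fpp, x0, steps=6):
    x = x0
    for _ in range(steps):
        num = 2 * f(x) * fp(x)
        den = 2 * fp(x)**2 - f(x) * fpp(x)
        x = x - num / den
        print(x)
    return x

# Example: the cube root of 2 via f(x) = x^3 - 2 (a simple zero).
halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.0)
```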
A GENERAL METHOD: FIXED POINT
ITERATION
Fixed Point Iteration has many other
applications other than root-finding.
It is also used in the analysis of Discrete
Dynamical Systems leading to the study of
Chaos.