# MATH 175: Numerical Analysis II

Lecturer: Jomar Fajardo Rabajante
IMSP, UPLB
2nd Semester AY 2012–2013

## Chapter 1: Numerical Methods in Solving Nonlinear Equations
This chapter involves numerically solving for the roots of an equation or the zeros of a function.

e.g. Solve for $x$ in $2x^2 + 3x = 7x - 2$.

This problem boils down to solving for the roots of $2x^2 - 4x + 2 = 0$, or equivalently to solving for the zeros of $f(x) = 2x^2 - 4x + 2$.
Go back to HS Math or Math 17:
• The whole of Elementary Algebra (including College Algebra) generally focuses on ways and means of easily solving equations (or systems of equations)…

Why do we factor? Why do we simplify? Do these methods have real-life applications? Probably none?

Ahhh… solving equations has…
Recall
MATH MODELING IN HIGH SCHOOL AND EARLY MATH SERIES…
Example: an age problem!!!
We need a working EQUATION to solve this problem… and how can we solve it?
Recall
Let us say we have a quadratic equation:
- we can use factoring
- we can use completing the square
- we can use the quadratic formula
But what if we have a cubic equation?
- we can use Cardano's formula
What if we have a polynomial of degree 8? Note that, from a theorem in Algebra (by Abel, the Abel–Ruffini theorem), there is no general closed-form formula in radicals for the roots of polynomials of degree higher than 4.
Or an equation involving transcendentals?

e.g. $\cos(x) + 5\sin(x) + e^{x^2} = 4$

A Naïve way…
GRAPH IT!!!

$\cos(x) + 5\sin(x) + e^{x^2} = 4 \;\Rightarrow\; f(x) = \cos(x) + 5\sin(x) + e^{x^2} - 4$

Then get the zeros of $f$!!! You can use MS Excel, GraphCalc, Grapes, etc…
A Naïve way…
Ah, eh… but what is the exact value of the zero of $f(x) = \cos(x) + 5\sin(x) + e^{x^2} - 4$? Or if we cannot get the exact value, what is the nearest solution? (say, with error at most $10^{-4}$)
First Method: Exhaustive Search or Incremental Search
Just use MS Excel… Let's say we want to get this zero. Fill one column with $x$ values in small increments, and a second column with the corresponding $f(x)$ values:

Column A: `=A122+0.0001`
Column B: `=COS(A123)+5*SIN(A123)+EXP(A123^2)-4`

We want $f(x) = 0$, so if $f(0.3919) = -0.000082 \approx 0$, then our APPROXIMATE solution is 0.3919. Smaller step sizes increase our accuracy. However, we cannot measure the order of convergence.
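The spreadsheet scan above can be reproduced in a few lines of code. This is a minimal sketch of incremental search in Python; the search interval [0, 1] and the function/variable names are my own illustrative choices, while the step size 0.0001 and the function follow the slides:

```python
import math

def f(x):
    return math.cos(x) + 5 * math.sin(x) + math.exp(x**2) - 4

def incremental_search(f, a, b, step):
    """Scan [a, b] with the given step; return the left endpoint of the
    first subinterval where f changes sign, or None if no change is found."""
    n = int((b - a) / step)
    for i in range(n):
        x0 = a + i * step
        x1 = a + (i + 1) * step
        if f(x0) * f(x1) <= 0:  # sign change => a zero lies in [x0, x1]
            return x0
    return None

root = incremental_search(f, 0.0, 1.0, 0.0001)
print(round(root, 4))  # 0.3919, matching the slide's approximate solution
```

As the slides note, halving the step roughly halves the bracketing error, but the method gives no convergence-order information.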
First Method: Exhaustive Search or Incremental Search
Exhaustive search may miss a blip: if the step size is too large, the scan can step right over a narrow dip where the function briefly crosses zero and back, so the sign change is never detected.
Second Method: Bisection (a bracketing
method)
Recall IVT (specifically the Intermediate Zero Theorem)
and Math 174 discussion about Bisection Method
Second Method: Bisection
1. Given $f(x)$, choose the initial interval $[x_{1,1}, x_{2,1}]$ such that $f(x_{1,1}) \cdot f(x_{2,1}) < 0$. Input tol, the tolerance error level. Input kmax, the maximum number of iterations. Define a counter, say $k$, to keep track of the number of bisections performed.
2. For $k < k_{max} + 2$ (starting at $k = 2$), compute $x_{3,k} := (x_{1,k-1} + x_{2,k-1})/2$. Set $x_{3,1} := x_{1,1}$. If $|x_{3,k} - x_{3,k-1}| < tol$, then print the results and exit the program (the root is found in $k - 1$ iterations). Otherwise, if $f(x_{1,k-1}) \cdot f(x_{3,k}) < 0$, set $x_{2,k} := x_{3,k}$ and $x_{1,k} := x_{1,k-1}$; else set $x_{1,k} := x_{3,k}$ and $x_{2,k} := x_{2,k-1}$.
3. If convergence is not obtained in $k_{max} + 1$ iterations, inform the user that the tolerance criterion was not satisfied, and exit the program.
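The three steps above can be sketched directly in code. This is a minimal Python rendering of the slides' algorithm (variable names and the return convention are mine):

```python
def bisection(f, x1, x2, tol=1e-6, kmax=100):
    """Bisection following the slides: bracket [x1, x2] with f(x1)*f(x2) < 0,
    then halve the bracket until successive midpoints differ by less than tol."""
    if f(x1) * f(x2) >= 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    x3_prev = x1  # the slides set x3,1 := x1,1
    for k in range(2, kmax + 2):
        x3 = (x1 + x2) / 2
        if abs(x3 - x3_prev) < tol:
            return x3, k - 1  # root found in k-1 iterations
        if f(x1) * f(x3) < 0:
            x2 = x3  # sign change on the left half: root in [x1, x3]
        else:
            x1 = x3  # otherwise: root in [x3, x2]
        x3_prev = x3
    raise RuntimeError("tolerance criterion not satisfied within kmax iterations")

# Example: approximate sqrt(2) as the positive root of x^2 - 2 on [1, 2]
root, iters = bisection(lambda x: x * x - 2, 1.0, 2.0)
```

Note that the `ValueError` guard enforces the bracketing precondition from step 1, and the `RuntimeError` corresponds to step 3's failure report.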
DIFFERENT STOPPING CRITERIA
Recall Math 174. Terminate when:

$|f(x_{3,k})| \le f_{tol}$

$|x_{3,k} - x_{3,k-1}| \le tol$

$\dfrac{|x_{3,k} - x_{3,k-1}|}{|x_{3,k}|} \le tol$

If $tol = 10^{-n}$, then $x_{3,k}$ should approximate the true value to $n$ decimal places or $n$ significant digits.
DIFFERENT STOPPING CRITERIA
Recall Math 174. Also terminate (reporting failure) when:

$k \ge k_{max}$ (iterations have gone on "long enough")

$|f(x_{3,k})| \ge f_{big}$ or $|x_{3,k}| \ge x_{big}$ (the iterates appear to blow up)
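These criteria are often combined into a single check inside the iteration loop. A hedged sketch of how that might look in code (the threshold names follow the slides; the structure and defaults are my own assumptions):

```python
def should_stop(k, x_curr, x_prev, f_curr, tol=1e-6, f_tol=1e-8,
                kmax=100, f_big=1e12, x_big=1e12):
    """Return (stop, reason), combining the convergence tests and the
    divergence/runaway safeguards listed in the slides."""
    if abs(f_curr) <= f_tol:
        return True, "residual |f| small enough"
    if abs(x_curr - x_prev) <= tol:
        return True, "absolute change small enough"
    if x_curr != 0 and abs(x_curr - x_prev) / abs(x_curr) <= tol:
        return True, "relative change small enough"
    if k >= kmax:
        return True, "iterations have gone on long enough"
    if abs(f_curr) >= f_big or abs(x_curr) >= x_big:
        return True, "iterates appear to diverge"
    return False, "keep iterating"
```

Returning a reason string makes it easy to tell the user whether the run converged or merely hit a safeguard.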
Second Method: Bisection
Notice that this algorithm locates only one root of the equation at a time. When an equation has multiple roots, it is the choice of the initial interval provided by the user which determines which root is located.
The contrary condition, $f(a) \cdot f(b) > 0$, does NOT imply that there are no real roots in the interval $[a, b]$.
Convergence of the bisection method is guaranteed provided that the hypothesis of the IVT (IZT) is met in every iteration.
Second Method: Bisection
The bisection algorithm will fail in this case: $y = x^2$. The function touches the $x$-axis at its root $x = 0$ without changing sign, so no bracketing interval with $f(a) \cdot f(b) < 0$ exists.
Second Method: Bisection
The bisection algorithm may also fail in these cases:
- the function is not continuous on the initial interval
- the interval contains multiple roots (the method may fail to converge to the desired root). It is better to bracket only one root.
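The $y = x^2$ failure is easy to see numerically: the bracketing precondition can never be satisfied, no matter which interval around the root we try. A small sketch (function names are mine):

```python
def has_valid_bracket(f, a, b):
    """Bisection's precondition: the endpoints must straddle a sign change."""
    return f(a) * f(b) < 0

# x^2 has a root at 0, but it never changes sign, so no interval brackets it:
print(has_valid_bracket(lambda x: x * x, -1.0, 1.0))  # False
# A simple sign-changing function on the same interval works fine:
print(has_valid_bracket(lambda x: x - 0.5, -1.0, 1.0))  # True
```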
Second Method: Bisection
How many iterations do we need to have an answer accurate to at least $m$ decimal places? Let $n$ be the number of iterations, $n \ge 1$:

$n \ge \log_2\!\left((b - a) \cdot 10^{m}\right)$

Note that the computed $n$ should be rounded up to the nearest integer.
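The worst-case bound is easy to evaluate; here is a quick check in Python (the function name is my own):

```python
import math

def bisection_iterations(a, b, m):
    """Worst-case number of bisection iterations so that the midpoint is
    within 10^(-m) of the root: n = ceil(log2((b - a) * 10^m))."""
    return math.ceil(math.log2((b - a) * 10**m))

n = bisection_iterations(0.0, 1.0, 4)  # accuracy 10^-4 on [0, 1]
print(n)  # 14, since (1 - 0)/2**14 ~ 6.1e-5 <= 1e-4
```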
Second Method: Bisection
DERIVATION
Length of the 1st interval: $b - a$
Length of the 2nd interval: $(b - a)/2$
Length of the $n$-th interval: $(b - a)/2^{n-1}$
At the $n$-th iteration we take the midpoint, so the length of the interval (the error bound) becomes $(b - a)/2^{n}$.
Second Method: Bisection
DERIVATION
Let $x^*$ be the exact root.

$|x_n - x^*| \le \dfrac{b - a}{2^{n}} \le 10^{-m}$

$2^{n} \ge (b - a) \cdot 10^{m}$

$n \ge \log_2\!\left((b - a) \cdot 10^{m}\right)$

HENCE, THE (worst-case) NUMBER OF ITERATIONS NEEDED IS:

$n = \left\lceil \log_2\!\left((b - a) \cdot 10^{m}\right) \right\rceil$
ORDER OF CONVERGENCE (Q-convergence)
Given an asymptotic error constant $\lambda > 0$, the sequence $\{x_k\}$ converges to $x^*$ with order $p > 0$ iff

$\lim_{k \to \infty} \dfrac{|x_k - x^*|}{|x_{k-1} - x^*|^{p}} = \lambda$

Order of convergence: $p$. If $p = 1$, rate of convergence: $O(\lambda^{k})$.
ORDER OF CONVERGENCE
When p=1
If 0&lt;λ&lt;1, then the sequence {x3,k} converges to x* linearly
If λ=1, then the sequence {x3,k} converges to x*
sublinearly
If λ=0, then the sequence {x3,k} converges to x*
superlinearly, meaning it is possibly also of quadratic
order, or possibly of higher order (but definitely faster
than mere linear, e.g. p=1.6, p=2, p=3, etc…)
As λ becomes nearer to 0, the convergence gets faster.
ORDER OF CONVERGENCE
When p=r&gt;1
If 0&lt;λ&lt;∞, then the sequence {x3,k} converges to x* with
order r
If λ=∞, then the sequence {x3,k} converges to x* with
order less than r
If λ=0, then the sequence {x3,k} converges to x* possibly
also with order r+1 or possibly of higher order (but
definitely faster than mere order r)
As λ becomes nearer to 0, the convergence gets faster.
Second Method: Bisection
ORDER OF CONVERGENCE

Recall: $\text{error}_k \approx \dfrac{\text{error}_{k-1}}{2}$, meaning the error is reduced by half in every iteration.

$\lim_{k \to \infty} \dfrac{|x_{3,k} - x^*|}{|x_{3,k-1} - x^*|^{1}} = \lim_{k \to \infty} \dfrac{\text{error}_k}{\text{error}_{k-1}} = \dfrac{1}{2}$

Hence, the bisection method converges to the root linearly ($p = 1$… slow!). Rate of convergence: $O(1/2^{k})$.
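The factor-of-1/2 behavior can be observed directly: the bracket width, which is the guaranteed error bound, shrinks by exactly half every iteration. A small sketch (tracking interval widths rather than midpoint errors, since individual midpoint errors can fluctuate while the bound halves; the function and names are my own):

```python
def bisection_widths(f, x1, x2, iterations):
    """Run bisection for a fixed number of steps and record bracket widths."""
    widths = [x2 - x1]
    for _ in range(iterations):
        x3 = (x1 + x2) / 2
        if f(x1) * f(x3) < 0:
            x2 = x3  # keep the left half
        else:
            x1 = x3  # keep the right half
        widths.append(x2 - x1)
    return widths

w = bisection_widths(lambda x: x - 0.3, 0.0, 1.0, 10)
ratios = [w[k] / w[k - 1] for k in range(1, len(w))]
print(ratios)  # every ratio is 0.5: linear convergence with lambda = 1/2
```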