
Example
Solve x⁴ − 11x + 8 = 0 by the modified Newton-Raphson method.
Solution:
f(x) = x⁴ − 11x + 8
f′(x) = 4x³ − 11
f″(x) = 12x²
The modified Newton-Raphson formula is
x_{n+1} = x_n − f(x_n) f′(x_n) / {[f′(x_n)]² − f(x_n) f″(x_n)}
The calculations are given as
Hence, the root is 1.891876.
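As a quick cross-check, the same iteration can be carried out in a few lines of code. The following is a minimal Python sketch, assuming the modified Newton-Raphson formula quoted above; the starting guess x0 = 2 and the tolerance are illustrative assumptions, not values taken from the text.

```python
def f(x):   return x**4 - 11*x + 8
def df(x):  return 4*x**3 - 11
def d2f(x): return 12*x**2

# Modified Newton-Raphson: x_{n+1} = x_n - f f' / ((f')^2 - f f'')
x = 2.0                     # assumed starting guess near the root
for _ in range(10):
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    x_new = x - fx * dfx / (dfx**2 - fx * d2fx)
    converged = abs(x_new - x) < 1e-7
    x = x_new
    if converged:
        break
print(round(x, 6))          # ~1.891876, agreeing with the result above
```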
MULLER’S METHOD:
Muller's method is an iterative method that is free from the evaluation of derivatives.
It requires three starting points (x_{n−2}, f_{n−2}), (x_{n−1}, f_{n−1}) and (x_n, f_n). A parabola
is constructed through these points, and the quadratic formula is then employed to
find the root used as the next approximation. We assume that x_n is the best
approximation to the root and consider the parabola through the three starting
values, as shown in the figure.
Let the quadratic polynomial be f(x) = ax² + bx + c
[1]
Since the parabola passes through the points (x_{n−2}, f_{n−2}), (x_{n−1}, f_{n−1}) and (x_n, f_n), we have
[2]
Eliminating a, b, c from Eq. 2, we obtain the following determinant
[3]
By expanding this determinant in Eq. 3, the function f (x) can be written as
[4]
Equation 4 is a quadratic polynomial passing through the three given points.
Let
Now, Eq. 4 becomes
[5]
Setting f(x) = 0,
Let
Eq. 5 will take the form
[6]
[7]
where the first part of Eq. 7 can be written as
[8]
Solving for 1/λ, we obtain
[9]
The sign in the denominator of Eq. 9 is taken as + or − according as g_n > 0 or g_n < 0.
Hence
[10]
Now, replacing x on the left-hand side of Eq. 10 by x_{n+1}, we obtain
Example
Find a root of the equation x³ − 3x − 7 = 0 using Muller's method, given that the
root lies between 2 and 3.
Solution:
Let x_0 = 2, x_1 = 2.5 and x_2 = 3. The calculations are shown in Tables 1 and 2.
Table 1
Table 2
Hence one root is 2.42599 correct up to five decimal places.
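A compact Python sketch of the iteration is given below. It uses one common formulation of Muller's step (fit a parabola through the three latest points, then divide by whichever of b ± √(b² − 4ac) has the larger magnitude so that the correction stays small); the function name, tolerance and iteration limit are illustrative assumptions. Because cmath is used, the same code can also locate complex roots, as mentioned later in the summary.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-6, max_iter=50):
    """One common formulation of Muller's method (a sketch, not the
    exact intermediate notation of the derivation above)."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)        # leading coefficient of the fitted parabola
        b = a * h2 + d2
        c = f(x2)
        root = cmath.sqrt(b * b - 4 * a * c)
        denom = b + root if abs(b + root) > abs(b - root) else b - root
        x3 = x2 - 2 * c / denom          # next approximation
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

print(muller(lambda x: x**3 - 3*x - 7, 2, 2.5, 3))   # ~2.42599 (imaginary part ~0)
```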
AITKEN'S Δ² METHOD:
Suppose we have an equation f (x) = 0 whose roots are to be determined.
Let I be an interval containing the point x = 𝛼.
Now, f(x) = 0 can be written as x = φ(x) such that φ(x) and φ′(x) are continuous in I and
|φ′(x)| < 1 for all x in I.
Denoting by x_{i−1}, x_i and x_{i+1} three successive approximations to the desired
root α, we can write
α − x_i = λ(α − x_{i−1})
(1)
α − x_{i+1} = λ(α − x_i)
(2)
where λ is a constant such that |φ′(x)| ≤ λ < 1 for all x in I.
Dividing Eq. 1 by Eq. 2, we obtain
(3)
(4)
Now,
And
(5)
Using Eqs. 5 and 4, we get
α ≈ x_{i+1} − (x_{i+1} − x_i)² / (x_{i+1} − 2x_i + x_{i−1})
(6)
Equation (6) gives the successive approximations to the root α, and the method is
known as Aitken's Δ² method.
Example:
Find the root of the equation x = (1 + cos x)/3 correct to four decimal places using
Aitken's iteration method.
Solution:
f (x) = cos x – 3x + 1
f(0) = cos 0 − 3(0) + 1 = 2
f(π/2) = cos(π/2) − 3(π/2) + 1 = 1 − 3π/2 ≈ −3.71239
Hence
f(0) > 0 and f(π/2) < 0
Also,
f(0) f(π/2) = 2(−3.71239) = −7.42478 < 0
Therefore, a root exists between 0 and 𝝅 /2.
The given equation can be written as
x = (1 + cos x)/3
Now, φ(x) = (1 + cos x)/3, so φ′(x) = −(sin x)/3 and |φ′(x)| ≤ 1/3 < 1. This shows that
the iteration converges and Aitken's method can be employed.
Let x_0 = 0 be an initial approximation to the root.
Now, the calculations are arranged in the following table.
Therefore,
Hence, the root is 0.6071, correct to four decimal places.
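The same computation can be sketched in Python. Two ordinary fixed point steps x ↦ φ(x) are taken and then Eq. (6) is applied, restarting each time from the accelerated value (a Steffensen-type use of the Δ² formula); the tolerance and iteration limit are illustrative assumptions.

```python
import math

phi = lambda x: (1 + math.cos(x)) / 3     # x = phi(x), as in the example

x0 = 0.0                                  # initial approximation, as in the text
for _ in range(10):
    x1, x2 = phi(x0), phi(phi(x0))        # two ordinary fixed point steps
    denom = x2 - 2 * x1 + x0              # second difference Delta^2 x0
    if abs(denom) < 1e-14:
        break
    x_acc = x2 - (x2 - x1) ** 2 / denom   # Aitken's Delta^2 acceleration, Eq. (6)
    converged = abs(x_acc - x0) < 1e-6
    x0 = x_acc
    if converged:
        break
print(round(x0, 4))                       # ~0.6071
```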
SECANT METHOD:
The secant method is very similar to the Newton-Raphson method. The main
disadvantage of the Newton-Raphson method is that the method requires the
determination of the derivatives of the function at several points. Often, the
calculation of these derivatives takes too much time. In some cases, a closed-form
expression for f′(x) may be difficult to obtain or may not be available.
Now, from the Newton-Raphson method,
x_{i+1} = x_i − f(x_i) / f′(x_i).
Replacing the derivative by the difference quotient through the two latest iterates,
f′(x_i) ≈ [f(x_i) − f(x_{i−1})] / (x_i − x_{i−1}),
we obtain the secant iteration
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1}) / [f(x_i) − f(x_{i−1})].
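A minimal Python sketch of the secant iteration follows; the function name, the test equation (reused from the Muller example) and the stopping tolerance are illustrative assumptions.

```python
def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant iteration: Newton-Raphson with the derivative replaced
    by the difference quotient through the two latest iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                      # flat secant line: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Same equation as the Muller example: x^3 - 3x - 7 = 0
print(secant(lambda x: x**3 - 3*x - 7, 2.0, 3.0))    # ~2.42599
```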
Convergence of the Iteration Methods:
We now study the rate at which the iteration methods converge to the exact root, if
the initial approximation is sufficiently close to the desired root.
Defining the error of approximation at the kth iterate as ε_k = x_k − α, k = 0, 1, 2, ...
Definition: An iterative method is said to be of order p, or to have the rate of
convergence p, if p is the largest positive real number for which there exists a finite
constant C ≠ 0 such that
|ε_{k+1}| ≤ C |ε_k|^p.
The constant C, which is independent of k, is called the asymptotic error constant
and it depends on the derivatives of f(x) at x = α.
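The order p can also be estimated numerically: since |ε_{k+1}| ≈ C |ε_k|^p, three successive errors give p ≈ log(|ε_{k+1}|/|ε_k|) / log(|ε_k|/|ε_{k−1}|). The sketch below applies this estimator to Newton-Raphson iterates for x³ − 3x − 7 = 0; the test equation, starting value and reference root (obtained by simply iterating to machine precision) are illustrative assumptions.

```python
import math

def f(x):  return x**3 - 3*x - 7
def df(x): return 3*x**2 - 3

# High-accuracy reference root, obtained by running Newton to machine precision
alpha = 3.0
for _ in range(50):
    alpha -= f(alpha) / df(alpha)

# A few Newton iterates and their errors eps_k = x_k - alpha
xs = [3.0]
for _ in range(5):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
eps = [abs(x - alpha) for x in xs]

# p ~ log(eps_{k+1}/eps_k) / log(eps_k/eps_{k-1})
for k in range(1, 4):
    p = math.log(eps[k + 1] / eps[k]) / math.log(eps[k] / eps[k - 1])
    print(k, round(p, 2))                 # estimates approach p = 2 for Newton's method
```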
Method of false position
If the root lies initially in the interval (x_0, x_1), then one of the end points is fixed
for all iterations. If the left end point x_0 is fixed and the right end point moves
towards the required root, the method behaves like
x_{k+1} = [x_0 f(x_k) − x_k f(x_0)] / [f(x_k) − f(x_0)], k = 1, 2, ...
Substituting x_k = α + ε_k, x_{k+1} = α + ε_{k+1} and x_0 = α + ε_0, we expand each term in
Taylor's series and simplify using the fact that f(α) = 0. We obtain the error
equation as
ε_{k+1} = C ε_0 ε_k, where C = f″(α) / [2 f′(α)].
Since ε_0 is finite and fixed, the error equation becomes
|ε_{k+1}| ≤ |C*| |ε_k|
where C* = C ε_0.
Hence, the method of false position has order 1 or has linear rate of convergence.
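This linear behaviour is easy to observe numerically. The sketch below runs the method of false position on x³ − 3x − 7 = 0 over the bracket [2, 3] used earlier and prints successive error ratios; in this particular example f(3) > 0 and f″ > 0, so it is the right end point that stays fixed (the mirror image of the left-fixed case described above). The reference root is simply the value the iteration settles on after many steps.

```python
def f(x): return x**3 - 3*x - 7

a, b = 2.0, 3.0                           # starting bracket
iterates = []
for _ in range(60):
    x = (a * f(b) - b * f(a)) / (f(b) - f(a))   # false position point
    iterates.append(x)
    if f(a) * f(x) < 0:
        b = x                             # root in [a, x]
    else:
        a = x                             # root in [x, b]; here b = 3 stays fixed

alpha = iterates[-1]                      # converged value used as reference
errs = [abs(x - alpha) for x in iterates[:8]]
for e0, e1 in zip(errs, errs[1:]):
    print(round(e1 / e0, 3))              # ratios settle near a constant < 1 (order 1)
```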
Method of successive approximations or fixed point iteration method
We have x_{k+1} = φ(x_k) and α = φ(α).
Subtracting and using the mean value theorem, we get
α − x_{k+1} = φ(α) − φ(x_k) = φ′(ξ_k)(α − x_k), where ξ_k lies between x_k and α.
Therefore,
|ε_{k+1}| ≤ C |ε_k|, where C = max |φ′(x)| < 1 on the interval.
Hence, the fixed point iteration method has order 1 or has linear rate of
convergence.
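The same check can be made for fixed point iteration. The sketch below iterates x_{k+1} = φ(x_k) with φ(x) = (1 + cos x)/3 (the Aitken example, but without acceleration); the successive error ratios approach |φ′(α)| = sin(α)/3 ≈ 0.19, confirming linear convergence. The reference root is taken from a long run of the same iteration.

```python
import math

phi = lambda x: (1 + math.cos(x)) / 3

xs = [0.0]                                # plain fixed point iteration from x0 = 0
for _ in range(25):
    xs.append(phi(xs[-1]))

alpha = xs[-1]                            # converged value used as reference
errs = [abs(x - alpha) for x in xs[:8]]
for e0, e1 in zip(errs, errs[1:]):
    print(round(e1 / e0, 3))              # ratios approach |phi'(alpha)| = sin(alpha)/3 ~ 0.19
```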
Newton-Raphson method
The method is given by
x_{k+1} = x_k − f(x_k) / f′(x_k).
Substituting x_k = α + ε_k and x_{k+1} = α + ε_{k+1}, we obtain
Expanding each term in Taylor's series, simplifying using the fact that f(α) = 0, and
neglecting the terms containing the third and higher powers of ε_k, we get
|ε_{k+1}| ≤ |C| |ε_k|²
where C = f″(α) / [2 f′(α)].
Therefore, Newton’s method is of order 2 or has quadratic rate of convergence.
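Numerically, the ratio |ε_{k+1}|/|ε_k|² should therefore approach the constant C = |f″(α)/(2f′(α))|. A small Python check, again using x³ − 3x − 7 = 0 as an illustrative test equation (the starting value and number of iterations are assumptions):

```python
def f(x):   return x**3 - 3*x - 7
def df(x):  return 3*x**2 - 3
def d2f(x): return 6*x

xs = [3.0]                                # Newton iterates from x0 = 3
for _ in range(6):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

alpha = xs[-1]                            # converged reference root
C = abs(d2f(alpha) / (2 * df(alpha)))     # predicted asymptotic error constant
print("C =", round(C, 4))
for k in range(4):
    e0, e1 = abs(xs[k] - alpha), abs(xs[k + 1] - alpha)
    print(round(e1 / e0**2, 4))           # ratios tend toward C (quadratic convergence)
```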
Convergence of Newton-Raphson Method
The Newton-Raphson iteration formula is given by
x_{n+1} = x_n − f(x_n) / f′(x_n)
(1)
The general form of Eq. (1) is
x = φ (x)
(2)
The Newton-Raphson iteration method, given by Eq. (1) in the form of Eq. (2) with
φ(x) = x − f(x)/f′(x), converges if |φ′(x)| < 1.
Here φ′(x) = f(x) f″(x) / [f′(x)]², hence the Newton-Raphson method converges if
|f(x) f″(x)| < [f′(x)]².
If α denotes the actual root of f (x) = 0, then we can select a small interval in which
f (x), f ′(x) and f ″(x) are all continuous and the condition given is satisfied.
Therefore, Newton-Raphson method always converges provided the initial
approximation 𝑥0 is taken very close to the actual root α.
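In practice the condition can be checked at a proposed starting value before iterating. A small sketch, assuming the criterion |f(x) f″(x)| < [f′(x)]² stated above; the test equation and sample points are illustrative (x = 1.1 lies near the stationary point f′(x) = 0 at x = 1, where the condition fails):

```python
def f(x):   return x**3 - 3*x - 7
def df(x):  return 3*x**2 - 3
def d2f(x): return 6*x

# |f(x) f''(x)| < [f'(x)]^2 guarantees |phi'(x)| < 1 at that point
for x0 in (2.0, 2.5, 3.0, 1.1):
    ok = abs(f(x0) * d2f(x0)) < df(x0) ** 2
    print(x0, ok)                         # True for 2.0, 2.5, 3.0; False for 1.1
```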
Summing up all the methods
The Bisection method and the method of False Position always converge to an
answer, provided a root is bracketed in the interval (a, b) to start with. Since the
root lies in the interval (a, b), on every iteration the width of the interval is reduced
until the solution is obtained. The Newton-Raphson method and the method of
Successive Approximations require only one initial guess, and on every iteration they
approach the true solution or the exact root. The Bisection method is
guaranteed to converge; it may fail, however, when the function is
tangent to the x-axis and does not cross it at f(x) = 0.
The Bisection method, the method of False Position, and the method of Successive
Approximations converge linearly while the Newton-Raphson method converges
quadratically. The Newton-Raphson method requires fewer iterations than the
other three methods. One disadvantage of the Newton-Raphson method is that when
the derivative f′(x_i) is zero, a new starting or initial value of x must be selected to
continue with the iterative procedure. The Successive Approximation method
converges only when the condition |φ′(x)| < 1 is satisfied.
Furthermore, Muller's method and Aitken's method can find real or complex roots of
a polynomial or an arbitrary function.
A question arises: what is the importance of defining the order or rate of
convergence of a method?
Suppose that we are using Newton's method for computing a root of f(x) = 0. Let
us assume that at a particular stage of the iteration, the error in magnitude in
computing the root is 10⁻¹ = 0.1. We observe from the error relation |ε_{k+1}| ≤ C |ε_k|² that
in the next iteration the error behaves like C(0.1)² = C × 10⁻².
That is, we may possibly get an accuracy of two decimal places. Because of the
quadratic convergence of the method, we may possibly get an accuracy of four
decimal places in the next iteration. However, it also depends on the value of C.
From this discussion, we conclude that both fixed point iteration and regula-falsi
methods converge slowly as they have only linear rate of convergence. Further,
Newton’s method converges at least twice as fast as the fixed point iteration and
regula-falsi methods.