COMPARATIVE STUDY OF A NEW ITERATIVE METHOD WITH THAT OF NEWTON'S METHOD FOR SOLVING ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

AZIZUL HASAN(1), NAJMUDDIN AHMAD(2)
1 Department of Mathematics, Jazan University, Jazan, KSA.
2 Department of Mathematics, Integral University, Lucknow, India.

Abstract: The aim of this paper is to construct an efficient iterative method for solving nonlinear equations. A new iterative method for solving algebraic and transcendental equations is presented using a Taylor series formula. Newton's method and the improved iterative method are applied to a set of test equations and the results are compared. It is observed that Newton's method requires more iterations than the improved iterative method. Numerical experiments show that the new method is more efficient than the Newton-Raphson method.

Keywords: Newton's method, improved iterative method, algebraic and transcendental equations, numerical examples

Introduction: Solving nonlinear equations is one of the most important and challenging problems in science and engineering applications. For solving nonlinear equations, Newton's method is one of the most predominant tools in numerical analysis [1]. Some historical notes on this method can be found in [17, 18, 19, 20, 21, 22]. Recently, several methods have been proposed and analyzed for solving nonlinear equations [2, 3, 6]. These methods have been derived using quadrature formulas, decomposition, and Taylor series [4, 9, 14]. As is well known, quadrature rules play an important and significant role in the evaluation of integrals. One of the best-known iterative methods is Newton's classical method, which has a quadratic convergence rate. Some authors have derived new iterative methods which are more efficient than Newton's [10, 11, 13, 15].

This paper is organized as follows. Part 1 provides some preliminaries that are needed later. Part 2 is devoted to suggesting a new iterative method using a Taylor series expansion up to four terms. This is an implicit-type method: to implement it, we use Newton's method as a predictor and the new method as a corrector, so the resulting scheme can be considered a two-step iterative method. In Part 3, the new method is compared with Newton's method, and several examples are given to illustrate its efficiency and advantages.

Definition and Notation: Let $\alpha \in \mathbb{R}$ and $x_n \in \mathbb{R}$, $n = 0, 1, 2, 3, \ldots$ The sequence $\{x_n\}$ is said to converge to $\alpha$ if $\lim_{n \to \infty} |x_n - \alpha| = 0$. If there exist a constant $c > 0$, an integer $n_0 \ge 0$ and $p \ge 0$ such that for all $n > n_0$ we have
\[
|x_{n+1} - \alpha| \le c\,|x_n - \alpha|^{p},
\]
then $\{x_n\}$ is said to converge to $\alpha$ with convergence order at least $p$. If $p = 2$ the convergence is quadratic, and if $p = 3$ it is cubic.

Notation: The quantity $e_n = x_n - \alpha$ is the error in the $n$th iteration. The equation
\[
e_{n+1} = C e_n^{p} + O\!\left(e_n^{p+1}\right)
\]
is called the error equation. By substituting $e_n = x_n - \alpha$ for all $n$ in an iterative method and simplifying, we obtain the error equation for that method; the value of $p$ so obtained is called the order of the method.
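As a standard illustration of these definitions (a classical fact, not derived in this paper), Newton's method applied to a simple root $\alpha$ with $f'(\alpha) \neq 0$ has the error equation
\[
e_{n+1} = \frac{f''(\alpha)}{2 f'(\alpha)}\, e_n^{2} + O\!\left(e_n^{3}\right),
\]
so $p = 2$ and the convergence is quadratic, in agreement with the discussion of the Newton-Raphson method below.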
NEWTON-RAPHSON METHOD: We consider the problem of numerically determining a real root $\alpha$ of the nonlinear equation
\[
f(x) = 0, \qquad f: D \subset \mathbb{R} \to \mathbb{R}. \tag{1.1}
\]
The Newton-Raphson method finds the slope (tangent line) of the function at the current point and uses the zero of the tangent line as the next reference point. The process is repeated until the root is found [5-7].

The method is probably the most popular technique for solving nonlinear equations because of its quadratic convergence rate, but it can break down if bad initial guesses are used [8, 9]. It has been suggested, however, that Newton's method should sometimes be started with Picard iteration to improve the initial guess [9, 16]. The Newton-Raphson method is much more efficient than the bisection method. However, it requires the calculation of the derivative of the function at each reference point, which is not always easy: the derivative may not exist at all, or it may not be expressible in terms of elementary functions [6, 7, 8]. Furthermore, the tangent line often shoots wildly and the iteration might occasionally be trapped in a loop [6].

The function f can be expanded in the neighbourhood of the current approximation $x_n$ through the Taylor expansion
\[
f(x) = f(x_n) + (x - x_n) f'(x_n) + \frac{(x - x_n)^2}{2!} f''(x_n) + \cdots,
\]
where $x_n$ can be seen as a trial value for the root at the $n$th step. The approximate value at the next step, $x_{n+1}$, is derived from
\[
f(x_{n+1}) \approx f(x_n) + (x_{n+1} - x_n) f'(x_n) = 0.
\]
This yields the well-known Newton iteration
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, 3, \ldots,
\]
where $x_0$ is an initial approximation sufficiently near to $\alpha$. The convergence order of Newton's method is quadratic for simple roots [4]; by quadratic convergence we mean that the accuracy (the number of correct digits) roughly doubles at each iteration.

Algorithm of the Newton-Raphson Method:
Inputs: f(x), the given function; x_0, the initial approximation; epsilon, the error tolerance; N, the maximum number of iterations.
Output: An approximation to the root x = gamma, or a message of failure.
Assumption: x = gamma is a simple root of f(x) = 0.
- Compute $f(x_n)$ and $f'(x_n)$.
- Compute $x_{n+1} = x_n - f(x_n)/f'(x_n)$, $n = 0, 1, 2, 3, \ldots$, until convergence or failure.
- Test for convergence or failure: if $|f(x_{n+1})| < \varepsilon$, $|x_{n+1} - x_n|/|x_n| < \varepsilon$, or $n > N$, stop.
End.

It was remarked in [1] that if none of the above criteria is satisfied within a predetermined number of iterations, say N, then the method has failed. In this case one could try the method again with a different x_0. A judicious choice of x_0 can sometimes be obtained by drawing the graph of f(x), if possible. However, there does not seem to exist a clear-cut guideline on how to choose a starting point x_0 that guarantees convergence of the Newton-Raphson method to the desired root.
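For concreteness, the algorithm above can be realised as the following minimal Python sketch. The paper's own programs are written in MATLAB and are not reproduced here, so the function names, the use of an absolute (rather than relative) step test, and the default tolerance are illustrative assumptions only.

```python
def newton_raphson(f, df, x0, eps=1e-14, max_iter=100):
    """Newton-Raphson iteration following the algorithm above (illustrative sketch)."""
    x = x0
    for k in range(1, max_iter + 1):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x) vanished; Newton step is undefined")
        x_new = x - f(x) / dfx                       # Newton step
        if abs(f(x_new)) < eps or abs(x_new - x) < eps:
            return x_new, k                          # approximate root, iterations used
        x = x_new
    raise RuntimeError("Newton-Raphson did not converge within max_iter iterations")

# Example 2 of the paper: f(x) = x^3 + 4x^2 - 10 with x_0 = 1.
root, iters = newton_raphson(lambda x: x**3 + 4*x**2 - 10,
                             lambda x: 3*x**2 + 8*x, 1.0)
print(root, iters)   # root is approximately 1.36523001341410
```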
Description of the Improved Iterative Method: Consider the algebraic or transcendental equation
\[
f(x) = 0. \tag{1.2}
\]
Let $\gamma$ be the exact root of this equation in an open interval I on which the function f is defined, continuous, and sufficiently differentiable. Following the basic assumptions of Abbasbandy [1] and Maheshwari [11] (see also [7, 13]), we take the Taylor series expansion of f(x) about $x_k$ up to four terms:
\[
f(x) = f(x_k) + (x - x_k) f'(x_k) + \frac{(x - x_k)^2}{2!} f''(x_k) + \frac{(x - x_k)^3}{3!} f'''(x_k), \tag{1.3}
\]
where $x_k$ is the $k$th approximation to the root of equation (1.2). Since $\gamma$ is the exact root of equation (1.2),
\[
f(\gamma) = f(x_k) + (\gamma - x_k) f'(x_k) + \frac{(\gamma - x_k)^2}{2!} f''(x_k) + \frac{(\gamma - x_k)^3}{3!} f'''(x_k), \tag{1.4}
\]
and $f(\gamma) = 0$, so that
\[
0 = f(x_k) + (\gamma - x_k) f'(x_k) + \frac{(\gamma - x_k)^2}{2!} f''(x_k) + \frac{(\gamma - x_k)^3}{3!} f'''(x_k). \tag{1.5}
\]
Keeping the four terms of equation (1.5) and writing $\gamma = x_{k+1}$, the next approximation to the root of equation (1.2) is obtained from
\[
0 = f(x_k) + (x_{k+1} - x_k) f'(x_k) + \frac{(x_{k+1} - x_k)^2}{2!} f''(x_k) + \frac{(x_{k+1} - x_k)^3}{3!} f'''(x_k), \tag{1.6}
\]
which is a cubic equation in the unknown $x_{k+1}$. Solving equation (1.6) for $x_{k+1}$ at each step defines the implicit iterative scheme
\[
f(x_k) + (x_{k+1} - x_k) f'(x_k) + \frac{(x_{k+1} - x_k)^2}{2!} f''(x_k) + \frac{(x_{k+1} - x_k)^3}{3!} f'''(x_k) = 0. \tag{1.7}
\]
In practice, Newton's method is used as a predictor to supply a starting value for solving (1.7), and the solution $x_{k+1}$ is taken as the corrected approximation. Repeating this process gives the desired root of equation (1.2) to a numerical accuracy of fourteen decimal places.

Algorithm: For a given $x_0$, compute the approximate solution $x_{k+1}$ by solving, at each step, the scheme
\[
f(x_k) + (x_{k+1} - x_k) f'(x_k) + \frac{(x_{k+1} - x_k)^2}{2!} f''(x_k) + \frac{(x_{k+1} - x_k)^3}{3!} f'''(x_k) = 0, \qquad k = 0, 1, 2, \ldots
\]

Numerical Experiments: In this section we employ the method obtained in this paper to solve some nonlinear equations and compare it with Newton's method (NM). We use the stopping criteria $|x_{k+1} - x_k| < \varepsilon$ and $|f(x_{k+1})| < \varepsilon$, where $\varepsilon = 10^{-14}$, for the computer programs. All programs are written in Matlab. The tables below compare the new iterative method with Newton's method, for various functions, in terms of the number of iterations, the computed root x_n and the residual f(x_n).

EXAMPLE 1. Function f(x) = x^3 - x + 3, initial approximation x_0 = -1
Method            Number of iterations   x_n                  f(x_n)
Newton method     7                      -1.67169988165716    7.105427357601002E-015
Present method    1                      -1.67169988165716    7.105427357601002E-015

EXAMPLE 2. Function f(x) = x^3 + 4x^2 - 10, initial approximation x_0 = 1
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      1.36523001341410     5.151434834260726E-014
Present method    1                      1.36523001341410     5.151434834260726E-014

EXAMPLE 3. Function f(x) = x^3 - 2x + 5, initial approximation x_0 = 2
Method            Number of iterations   x_n                  f(x_n)
Newton method     8                      -2.09455148154233    -3.819167204710539E-014
Present method    2                      -2.09455148154232    7.460698725481052E-014

EXAMPLE 4. Function f(x) = x^4 - 24, initial approximation x_0 = 2
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      2.21336383940064     -1.421085471520200E-013
Present method    3                      2.21336383940064     -1.421085471520200E-013

EXAMPLE 5. Function f(x) = x^3 - e^(-x), initial approximation x_0 = 1
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      0.77288295914921     -5.773159728050814E-015
Present method    3                      0.77288295914921     -5.773159728050814E-015

EXAMPLE 6. Function f(x) = sin x - 0.5x, initial approximation x_0 = 1.6
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      1.89549426703398     -2.775557561562891E-016
Present method    4                      1.89549426703398     -2.775557561562891E-016

EXAMPLE 7. Function f(x) = x log x - 1.2, initial approximation x_0 = 2
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      1.88808675302834     7.771561172376096E-016
Present method    4                      1.88808675302834     7.771561172376096E-016

EXAMPLE 8. Function f(x) = x^2 - 5, initial approximation x_0 = 1
Method            Number of iterations   x_n                  f(x_n)
Newton method     6                      2.23606797749979     8.881784197001252E-016
Present method    2                      2.23606797749978     -4.352074256530614E-014

EXAMPLE 9. Function f(x) = x^3 - 3x^2 + 2.5, initial approximation x_0 = 2.5
Method            Number of iterations   x_n                  f(x_n)
Newton method     5                      2.64178352745293     2.131628207280301E-014
Present method    1                      2.64178352745293     2.131628207280301E-014
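To make the comparison above concrete, the following is a minimal Python sketch of one possible realisation of the scheme (1.7). The paper's own MATLAB programs are not reproduced here; the rule for selecting the cubic's root (smallest real correction), the fallback to a plain Newton step, and the helper names below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def improved_method(f, df, d2f, d3f, x0, eps=1e-14, max_iter=50):
    """Iterate the implicit scheme (1.7): at each step, solve the cubic (1.6)
    in the correction h = x_{k+1} - x_k and update x_{k+1} = x_k + h."""
    x = x0
    for k in range(1, max_iter + 1):
        # Coefficients of (1.6) as a polynomial in h, highest degree first:
        # f'''(x_k)/3! h^3 + f''(x_k)/2! h^2 + f'(x_k) h + f(x_k) = 0
        coeffs = [d3f(x) / 6.0, d2f(x) / 2.0, df(x), f(x)]
        roots = np.roots(coeffs)
        real_h = roots[np.abs(roots.imag) < 1e-12].real
        if real_h.size == 0:
            h = -f(x) / df(x)                        # no real correction: Newton step
        else:
            h = real_h[np.argmin(np.abs(real_h))]    # smallest real correction
        x_new = x + h
        if abs(f(x_new)) < eps or abs(x_new - x) < eps:
            return x_new, k
        x = x_new
    raise RuntimeError("improved method did not converge")

# Example 1 of the paper: f(x) = x^3 - x + 3 with x_0 = -1.
f, df, d2f, d3f = (lambda x: x**3 - x + 3, lambda x: 3*x**2 - 1,
                   lambda x: 6*x, lambda x: 6.0)
print(improved_method(f, df, d2f, d3f, -1.0))   # root approximately -1.67169988165716
```

For Example 1, f is itself a cubic, so the expansion (1.3) is exact and the very first correction already lands on the root -1.67169988165716 reported above; this is consistent with the one-iteration counts observed for the cubic test functions.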
Conclusion: Based on our results and discussions, we conclude that the new method is, formally, the most effective of the Newton-type methods considered in this study, and it requires only a single function evaluation per iteration. We derived an improved iterative method, based on a Taylor series expansion up to four terms, for solving nonlinear equations. Analysis of efficiency from the numerical computations shows that this method is preferable to the well-known Newton's method, and the numerical examples show that it has great practical utility.

References:
1. S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003), 887-893.
2. S. Amat, S. Busquier, J. M. Gutierrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157 (2003), 197-205.
3. K. E. Atkinson, An Introduction to Numerical Analysis, 2nd ed., John Wiley & Sons, New York, 1987.
4. E. Babolian, J. Biazar, Solution of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 132 (2002), 167-172.
5. H. Bateman, Halley's method for solving equations, The American Mathematical Monthly 45 (1) (1938), 11-17.
6. F. Costabile, M. I. Gualtieri, S. S. Capizzano, An iterative method for the solutions of nonlinear equations, Calcolo 30 (1999), 17-34.
7. M. Frontini, Hermite interpolation and a new iterative method for the computation of the roots of non-linear equations, Calcolo 40 (2003), 109-119.
8. M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004), 771-782.
9. A. Y. Ozban, Some new variants of Newton's method, Appl. Math. Lett. 17 (2004), 677-682.
10. J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, 2nd ed., Springer-Verlag, 1993.
11. A. K. Maheshwari, A fourth order iterative method for solving nonlinear equations, Appl. Math. Comput. 211 (2009), 383-391.
12. J. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
13. B. N. Datta, Lecture Notes on Numerical Solution of Root Finding Problems, 2012. www.math.niu.edu/dattab
14. C. N. Iwetan, I. A. Fuwape, M. S. Olajide, R. A. Adenodi, Comparative study of the bisection and Newton methods in solving for zero and extremes of a single-variable function, J. of NAMP 21 (2012), 173-176.
15. R. B. Srivastava, S. Srivastava, Comparison of numerical rate of convergence of bisection, Newton and secant methods, Journal of Chemical, Biological and Physical Sciences 2 (1) (2011), 472-479.
16. J. C. Ehiwario, Lecture Notes on MTH 213: Introduction to Numerical Analysis, College of Education Agbor, an affiliate of Delta State University, Abraka, 2013.
17. http://www.efunda.com/math/num_rootfinding-cfm, retrieved February 2014.
18. Wikipedia, the free encyclopedia.
19. K. K. Autar, E. Egwu, http://www.numericalmethods.eng.usf.edu (2008), retrieved 20 February 2014.
20. J. M. McDonough, Lectures in Computational Numerical Analysis, Dept. of Mechanical Engineering, University of Kentucky, 2001.
21. M. B. Allen, E. L. Isaacson, Numerical Analysis for Applied Science, John Wiley and Sons, 1998, pp. 188-195.
22. T. Yamamoto, Historical development in convergence analysis for Newton's and Newton-like methods, J. Comput. Appl. Math. 124 (2000), 1-23.

Corresponding Author: Najmuddin Ahmad, Department of Mathematics, Integral University, Lucknow, India.