Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulation) for the problems of mathematical analysis (as distinguished from discrete mathematics). Numerical analysis naturally finds application in all fields of engineering and the physical sciences, but in the 21st century the life sciences, social sciences, medicine, business and even the arts have also adopted elements of scientific computation. The growth in computing power has revolutionized the use of realistic mathematical models in science and engineering, and subtle numerical analysis is required to implement these detailed models of the world. For example, ordinary differential equations appear in celestial mechanics (predicting the motions of planets, stars and galaxies); numerical linear algebra is important for data analysis;[2][3][4] stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Before the advent of modern computers, numerical methods often depended on hand interpolation formulas applied to data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas nevertheless continue to be used as part of software algorithms.[5] The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal of a unit square. Numerical analysis continues this long tradition: rather than exact symbolic answers, which can only be applied to real-world measurements by translation into digits, it gives approximate solutions within specified error bounds.

1. Root Finding

Objectives:
o find solutions of quadratic and cubic equations
o derive the formulas and follow the algorithms for the solution of non-linear equations using the following methods:
   o Bisection
   o Newton-Raphson
   o Secant
   o False-Position

Newton's Method (a.k.a. the Newton-Raphson Method) is an open method for solving non-linear equations. Unlike a bracketing method (e.g. the bisection method), Newton's method needs only one initial guess, but it is not guaranteed to converge. A sketch of both approaches is given after this section.

2. System of Linear Algebraic Equations

Within numerical analysis, numerical linear algebra is the study of methods for solving problems of linear algebra by numerical computation. The following problems will be considered in this area:

1. Numerically solving a system of linear equations

In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. Mathematicians express the relationships between different quantities in the form of equations. "Linear equations" means that each variable appears only to the first power, never raised to a higher power. A "system" of linear equations means that all of the equations must hold at the same time, so the person solving the system is looking for values of the variables that make every equation true simultaneously. If no values can satisfy all of the equations in the system, the equations are called "inconsistent." A Gaussian-elimination sketch is also given after this section.
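As an illustration of the two families of root-finding methods above, the following is a minimal sketch in Python. The test equation f(x) = x^3 - x - 2 = 0, the tolerances, and the function names are the author's own choices for illustration, not part of the module text.

def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Bracketing method: requires f(a) and f(b) to have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if abs(b - a) < tol:
            break
        if f(a) * f(m) <= 0:
            b = m          # the root lies in [a, m]
        else:
            a = m          # the root lies in [m, b]
    return (a + b) / 2.0

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Open method: one initial guess plus the derivative; may fail to converge."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

f  = lambda x: x**3 - x - 2        # invented test equation
df = lambda x: 3 * x**2 - 1
print(bisection(f, 1.0, 2.0))      # ~1.5213797, guaranteed once the root is bracketed
print(newton(f, df, 1.5))          # same root, typically far fewer iterations

The example shows the trade-off described above: bisection always converges once a sign change is bracketed but only halves the interval each step, while Newton's method converges quickly near the root yet can diverge from a poor starting guess.

For Section 2, a similarly minimal sketch of Gaussian elimination with partial pivoting solves a small dense system Ax = b. The 3x3 system is invented for illustration, and the routine assumes A is square and non-singular.

def gaussian_elimination(A, b):
    """Solve Ax = b by elimination with partial pivoting, then back substitution."""
    n = len(A)
    # Augmented matrix [A | b] so that row operations act on both sides at once.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot into row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[ 2.0,  1.0, -1.0],
     [-3.0, -1.0,  2.0],
     [-2.0,  1.0,  2.0]]
b = [8.0, -11.0, -3.0]
print(gaussian_elimination(A, b))   # expected [2.0, 3.0, -1.0]

If elimination produces a zero pivot that no row swap can repair, the system has no unique solution, which covers the "inconsistent" (or underdetermined) case mentioned above.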
3. Optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element (with regard to some criterion) from some set of available alternatives.[1] Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.[2]

In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function; a minimal minimization sketch is given at the end of this module. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. More generally, optimization means finding "best available" values of some objective function over a defined domain (or input), allowing for a variety of different types of objective functions and different types of domains.

4. Curve Fitting

Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[3] possibly subject to constraints. Curve fitting can involve either interpolation,[6][7] where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[10][11] which focuses more on questions of statistical inference, such as how much uncertainty is present in a curve fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data[16] and is subject to a degree of uncertainty, since it may reflect the method used to construct the curve as much as it reflects the observed data. A least-squares sketch is given at the end of this module.

5. Ordinary Differential Equations

Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term is sometimes taken to mean the computation of integrals. Many differential equations cannot be solved using symbolic computation ("analysis"). For practical purposes, however, such as in engineering, a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation; a forward-Euler sketch is given at the end of this module. An alternative is to use techniques from calculus to obtain a series expansion of the solution. Ordinary differential equations occur in many scientific disciplines, for example in physics, chemistry, biology, and economics.[1] In addition, some methods for numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.

6. Partial Differential Equations

Numerical methods for partial differential equations form the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).

Methods:
o Finite difference method
o Method of lines
o Finite element method
o Gradient discretization method
o Finite volume method
o Spectral method
o Meshfree methods
o Domain decomposition methods
o Multigrid methods
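The following minimal sketches illustrate Sections 3 to 5. All test functions, data points, and step sizes below are invented for illustration and are not part of the module text.

For optimization (Section 3), golden-section search minimizes a unimodal function on an interval by systematically shrinking the interval:

import math

def golden_section_min(f, a, b, tol=1e-8):
    """Approximate minimizer of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                      # the minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # the minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2.0)**2, 0.0, 5.0))   # ~2.0

For curve fitting (Section 4), a straight line y = a*x + b is fitted to noisy data by least squares, i.e. by minimizing the sum of squared residuals:

def fit_line(xs, ys):
    """Slope a and intercept b of the least-squares line through (xs, ys)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]               # roughly y = 2x + 1 with noise
a, b = fit_line(xs, ys)
print(f"y = {a:.2f}*x + {b:.2f}")            # about y = 1.96*x + 1.10

For ordinary differential equations (Section 5), the forward Euler method advances y' = f(t, y) with a fixed step h; the test problem y' = -2y, y(0) = 1 has the exact solution exp(-2t) for comparison:

import math

def euler(f, t0, y0, h, n_steps):
    """Return the list of (t, y) points produced by forward Euler steps."""
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(n_steps):
        y += h * f(t, y)                     # slope taken at the start of the step
        t += h
        points.append((t, y))
    return points

for t, y in euler(lambda t, y: -2.0 * y, 0.0, 1.0, 0.1, 10):
    print(f"t = {t:.1f}  euler = {y:.5f}  exact = {math.exp(-2.0 * t):.5f}")

Halving the step size h roughly halves the error of the Euler approximation, which is the simplest illustration of the "approximate solutions within specified error bounds" mentioned in the introduction.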