# Mathematics

## Numerical Methods/Equation Solving
An equation of the type $f(x) = 0$ is either algebraic or transcendental. For example, these equations are algebraic:

$$x^2 - 5x + 6 = 0, \qquad x^4 - 3x^2 + 9 = 0,$$

and these are transcendental:

$$e^x - 3x = 0, \qquad x \tan x = 1.$$
While roots can be found directly for algebraic equations of fourth order or lower, and for a few
special transcendental equations, in practice we need to solve equations of higher order and also
arbitrary transcendental equations.
As analytic solutions are often either too cumbersome or simply do not exist, we need to find an
approximate method of solution. This is where numerical analysis comes into the picture.
### Some Useful Observations

- The total number of roots an algebraic equation can have is the same as its degree.
- An algebraic equation can have at most as many positive roots as the number of changes of sign in $f(x)$.
- An algebraic equation can have at most as many negative roots as the number of changes of sign in $f(-x)$.
- In an algebraic equation with real coefficients, complex roots occur in conjugate pairs.
- If $f(x) = a_0 x^n + a_1 x^{n-1} + \dots + a_n = 0$ with roots $x_1, x_2, \dots, x_n$, then the following hold good:
  - $\displaystyle\sum_i x_i = -\frac{a_1}{a_0}$
  - $\displaystyle\sum_{i < j} x_i x_j = \frac{a_2}{a_0}$
  - $\displaystyle\prod_i x_i = (-1)^n \frac{a_n}{a_0}$
- If $f(x)$ is continuous in the interval $[a, b]$ and $f(a)f(b) < 0$, then a root must exist in $(a, b)$.
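These root–coefficient relations can be spot-checked numerically. A small sketch using the cubic $x^3 - 6x^2 + 11x - 6$, a hypothetical example (not from the text) whose roots are 1, 2, 3:

```python
# Vieta's relations for f(x) = x^3 - 6x^2 + 11x - 6 with roots 1, 2, 3
# (an illustrative example, not from the original text).
a0, a1, a2, a3 = 1, -6, 11, -6
roots = [1, 2, 3]

sum_roots = sum(roots)                          # should equal -a1/a0 = 6
sum_pairs = sum(roots[i] * roots[j]
                for i in range(3)
                for j in range(i + 1, 3))       # should equal a2/a0 = 11
prod_roots = roots[0] * roots[1] * roots[2]     # should equal (-1)^3 * a3/a0 = 6

print(sum_roots, sum_pairs, prod_roots)  # 6 11 6
```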
### Initial Approximation
The last point about the interval is one of the most useful properties numerical methods use to
find the roots. All of them have in common the requirement that we need to make an initial guess
for the root. Practically, this is easy to do graphically. Simply plot the equation and make a rough
estimate of the solution. Analytically, we can usually choose any point in an interval where a
change of sign takes place. However, this is subject to certain conditions that vary from method
to method.
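The analytic sign-change search can be sketched as follows (the function, interval, and step count are illustrative assumptions, not from the text):

```python
def find_bracket(f, lo, hi, steps=100):
    """Scan [lo, hi] in small steps and return a subinterval where f
    changes sign, which can serve as an initial guess for the root."""
    h = (hi - lo) / steps
    x = lo
    for _ in range(steps):
        if f(x) * f(x + h) < 0:   # sign change: a root lies in [x, x + h]
            return x, x + h
        x += h
    return None  # no sign change detected at this resolution

# Example: f(x) = x^3 - x - 1 has a root near 1.3247.
print(find_bracket(lambda x: x**3 - x - 1, 0.0, 2.0))
```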
### Convergence

A numerical method for solving equations is an iterative process, and we would like to know whether the method will lead to a solution (close to the exact solution) or will lead us away from it. If the method leads to the solution, we say that the method is convergent; otherwise, it is said to be divergent.
### Rate of Convergence

Various methods converge to the root at different rates. That is, some methods are slow to converge and take many iterations to arrive at the root, while other methods lead us to the root faster. In general this is a compromise between ease of calculation per iteration and the number of iterations required.

For a computer program, however, it is generally better to use methods that converge quickly. The rate of convergence can be linear or of some higher order; the higher the order, the faster the method converges.

If $e_i$ is the magnitude of the error in the $i$-th iteration, ignoring sign, then the order is $p$ if

$$\frac{e_{i+1}}{e_i^{\,p}}$$

is approximately constant.

It is also important to note that the chosen method will converge only if $e_{i+1} < e_i$, i.e. only if the error shrinks from one iteration to the next.
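The order can be estimated from successive errors via $p \approx \log(e_{i+1}/e_i) / \log(e_i/e_{i-1})$. A sketch using the fixed-point iteration $x \leftarrow \cos x$ (a hypothetical example, not from the text), which converges linearly:

```python
import math

# Fixed-point iteration x <- cos(x) converges to r ~ 0.739085
# (an illustrative example, not from the original text).
r = 0.7390851332151607  # reference root of x = cos(x)

x = 1.0
errors = []
for _ in range(10):
    x = math.cos(x)
    errors.append(abs(x - r))

# For a method of order p, e_{i+1} / e_i^p is roughly constant, so
# p ~ log(e_{i+1}/e_i) / log(e_i/e_{i-1}).
p = math.log(errors[-1] / errors[-2]) / math.log(errors[-2] / errors[-3])
print(round(p, 2))  # close to 1, i.e. linear convergence
```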
### Bisection Method

This is one of the simplest methods and is strongly based on the interval property above. To find a root using this method, the first thing to do is to find an interval $[a, b]$ such that $f(a)f(b) < 0$. Bisect this interval to get the point $c = \frac{a + b}{2}$. Choose $[a, c]$ or $[c, b]$ so that the sign of $f$ at the retained endpoint is opposite to the sign of $f(c)$. Use this as the new interval and proceed until you get the root within the desired accuracy.
#### Example

Solve $x^3 - x - 1 = 0$ correct up to 2 decimal places.
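A minimal Python sketch of the bisection loop, applied to $f(x) = x^3 - x - 1$ on $[1, 2]$ (a standard test equation, consistent with the bracket $a = 1$, $b = 2$ used in the iteration-count example below):

```python
def bisect(f, a, b, tol=1e-4):
    """Bisection: repeatedly halve [a, b] while keeping a sign change inside."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        c = (a + b) / 2
        if f(a) * f(c) < 0:   # root lies in [a, c]
            b = c
        else:                 # root lies in [c, b]
            a = c
    return (a + b) / 2

# tol = 5e-3 is enough for 2 correct decimal places here.
root = bisect(lambda x: x**3 - x - 1, 1.0, 2.0, tol=5e-3)
print(round(root, 2))  # 1.32
```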
#### Error Analysis

The maximum error after the $i$-th iteration of this process is given by

$$e_i = \frac{b - a}{2^i}.$$

As the interval at each iteration is halved, we have $\frac{e_{i+1}}{e_i} = \frac{1}{2}$. Thus this method converges linearly.

If we are interested in the number of iterations the bisection method needs to converge to a root within a certain tolerance, then we can use this formula for the maximum error.
#### Example

How many iterations do you need to get the root if you start with $a = 1$ and $b = 2$ and the tolerance is $10^{-4}$?

The maximum error $e_i$ needs to be smaller than $10^{-4}$. Using the formula for the maximum error,

$$e_i = \frac{2 - 1}{2^i} = \frac{1}{2^i} < 10^{-4}.$$

Solving for $i$ using log rules,

$$i \log_{10} 2 > 4 \quad\Longrightarrow\quad i > \frac{4}{\log_{10} 2} \approx 13.29,$$

so $i = 14$ iterations are enough.
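The same count can be checked with a short sketch (the values $a = 1$, $b = 2$, tolerance $10^{-4}$ are taken from the example above):

```python
import math

# Smallest i with (b - a) / 2**i < tol, i.e. i > log2((b - a) / tol).
a, b, tol = 1.0, 2.0, 1e-4
i = math.ceil(math.log2((b - a) / tol))
print(i)  # 14
```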