
Numerical Methods for Engineers: Lecture Notes

MATH380 Numerical Methods for Engineers
2020 - 2021
Week 1
Prof. Dr. Ayhan Aydın, Prof. Dr. Sofiya Ostrovska
Atılım University Mathematics Department, Ankara
Textbook: J.H. Mathews, K.D. Fink, Numerical Methods Using MATLAB, 4th ed. (2004)
Introduction
Estimating the accuracy is an important part of numerical
analysis. In real problems, there are three main sources of
errors:
(i) errors in the input data;
(ii) roundoff errors;
(iii) truncation errors.
Errors in the input data arise from incorrect measurements, imperfect devices, etc.
Roundoff errors arise when we perform calculations with numbers whose representation is restricted to a finite number of digits, as is usually the case (calculators, computers).
Truncation errors are related to the methods of calculation.
Mostly, numerical methods do not provide the exact
solution to a problem, even if the calculations are carried
out without rounding.
For example, the problem of summing a convergent infinite series S = ∑_{n=1}^{∞} a_n can be replaced by the simpler problem of finding the partial sum of the first N terms, S_N = ∑_{n=1}^{N} a_n, and then using the approximation
S ≈ S_N.
In this case, E = S − S_N is a truncation error.
Truncation errors occur as a result of simplifying a mathematical problem with the help of an approximate formula. Here are examples of some commonly used approximate formulae:
sin x ≈ x for small x;
cos x ≈ 1 − x^2/2 for small x.
Representation of numbers
There are different ways to present real numbers. For example,
3.99999999 . . . and 4 are two different representations of the
same number 4, that is, 3.99999999 · · · = 4. Also, we may write
either 2000 or 2 × 103 to represent the same number.
For computer calculations, it is common to use a floating-point
representation of the form:
a × 10^m, where |a| < 1 and m ∈ Z (an integer).
For example, we have 315 = 0.315 × 10^3. Here, a = 0.315 is called the mantissa and m = 3 is said to be the exponent, which indicates the position of the decimal point relative to the mantissa.
Any computer has a limited number of digits both in a
representation of mantissa and exponent. Obviously, a
floating-point representation of a number is not unique.
The most common forms of number representation are:
a normalized floating-point representation, where 0.1 ≤ |a| < 1, that is,
a = ±0.d_1 d_2 … d_k … × 10^m, where d_1 ≠ 0.
For example, 315 = 0.315 × 10^3 = 0.0315 × 10^4 = 0.00315 × 10^5, and only the first one is normalized.
a scientific notation representation:
a = ±t.d_1 d_2 … d_k … × 10^m, where t ≠ 0 is a single digit.
For example, 315 = 0.315 × 10^3 = 3.15 × 10^2. Here, the second one is in scientific notation form.
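The normalized form can be computed mechanically. Below is a short sketch in Python (the course textbook uses MATLAB, but the idea is identical; the helper name `normalize` is ours):

```python
import math

def normalize(x):
    """Decompose x != 0 as a * 10**m with 0.1 <= |a| < 1 (normalized form)."""
    m = math.floor(math.log10(abs(x))) + 1  # exponent: position of the decimal point
    a = x / 10**m                           # mantissa with leading digit d_1 != 0
    return a, m

print(normalize(315))   # (0.315, 3), i.e. 315 = 0.315 x 10^3
print(normalize(0.05))  # (0.5, -1), i.e. 0.05 = 0.5 x 10^-1
```

Note that the decomposition follows directly from taking log base 10 of |x| to locate the decimal point.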
CHOPPING OFF AND ROUNDING OFF
Let x be a number written in the normalized form as
x = ±0.d1 d2 . . . dk dk+1 · · · × 10m .
Suppose that the maximum number of decimal digits allowed by the
computer’s floating-point representation is k.
The approximation x* = ±0.d_1 d_2 … d_k × 10^m is the chopped floating-point representation.
Another way to approximate x using a k-digit decimal mantissa is the rounded floating-point representation, constructed as follows:

0.d_1 d_2 … d_k d_{k+1} … →
  0.d_1 d_2 … d_k                if d_{k+1} ≤ 4,
  0.d_1 d_2 … (d_k + 1)          if d_{k+1} ≥ 5 and d_k ≠ 9,
  0.d_1 d_2 … (d_{k−1} + 1)0     if d_{k+1} ≥ 5, d_k = 9, d_{k−1} ≠ 9,
and so on.
Example. For each number, find the 3-digit chopped and rounded floating-point representations.
x = 0.2483 × 10^4. Then the chopped representation is x* = 0.248 × 10^4, and the rounded representation is x** = 0.248 × 10^4; that is, x* = x**.
y = 0.2487 × 10^4. Then the chopped representation is y* = 0.248 × 10^4, and the rounded representation is y** = 0.249 × 10^4; that is, y* ≠ y**.
z = 0.2497 × 10^4. Then the chopped representation is z* = 0.249 × 10^4, and the rounded representation is z** = 0.250 × 10^4; that is, z* ≠ z**.
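These chopped and rounded representations can be reproduced with a short Python sketch (the helper names `chop` and `round_k` are ours; rounding is implemented as round-half-up on the scaled mantissa, which also handles the carrying cases of the rule above):

```python
import math

def _mantissa_exp(x):
    """Normalized decomposition x = a * 10**m with 0.1 <= |a| < 1."""
    m = math.floor(math.log10(abs(x))) + 1
    return x / 10**m, m

def chop(x, k):
    """Keep only the first k mantissa digits, discarding the rest."""
    a, m = _mantissa_exp(x)
    return math.floor(a * 10**k) / 10**k * 10**m

def round_k(x, k):
    """Round the mantissa to k digits (round-half-up)."""
    a, m = _mantissa_exp(x)
    return math.floor(a * 10**k + 0.5) / 10**k * 10**m

print(chop(2483, 3), round_k(2483, 3))  # 2480, 2480 (chopped = rounded)
print(chop(2487, 3), round_k(2487, 3))  # 2480, 2490 (they differ)
print(chop(2497, 3), round_k(2497, 3))  # 2490, 2500 (rounding carries into d_2)
```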
ORDER OF APPROXIMATION
Let f(h) and g(h) be functions defined in a neighbourhood of 0. We say that f(h) = O(g(h)) as h → 0 if there exist M > 0 and h_0 > 0 such that
|f(h)| ≤ M |g(h)| for all |h| ≤ h_0.
In other words, the ratio f(h)/g(h) is bounded when |h| < h_0 and g(h) ≠ 0.
EXAMPLES
1. sin h = O(h) as h → 0.
2. ln(1 + h) = O(h) as h → 0.
3. 3h^2 + 2h^3 = O(h^2) as h → 0. Indeed, 3h^2 + 2h^3 = h^2(3 + 2h), hence |3h^2 + 2h^3| ≤ 5h^2 when |h| ≤ 1.
TAYLOR'S FORMULA
Let us recall Taylor's formula: if a function f has n + 1 derivatives in some neighbourhood of a point x_0, then
f(x_0 + h) = f(x_0) + f'(x_0) h + f''(x_0) h^2/2! + ··· + f^(n)(x_0) h^n/n! + R_n(x_0; h),
where R_n(x_0; h) is a remainder satisfying
R_n(x_0; h) = O(h^{n+1}) as h → 0.
EXAMPLES
1. x_0 = 0, n = 3 (Maclaurin's formula):
e^h = 1 + h + h^2/2! + h^3/3! + O(h^4) = 1 + h + h^2/2 + h^3/6 + O(h^4), h → 0.
2. x_0 = 0, n = 3 (Maclaurin's formula):
sin h = h − h^3/6 + O(h^4), h → 0.
3. x_0 = 0, n = 4 (Maclaurin's formula):
sin h = h − h^3/6 + 0 · h^4 + O(h^5) = h − h^3/6 + O(h^5), h → 0.
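The order of the remainder can be checked numerically: for the Maclaurin approximation sin h ≈ h − h^3/6, the error divided by h^5 should stay bounded as h → 0 (in fact it approaches 1/5! = 1/120). A small Python sketch:

```python
import math

# For sin h = h - h**3/6 + O(h**5), the ratio error/h**5 stays bounded
# (it tends to 1/120 = 0.00833..., the magnitude of the h**5 term).
for h in [0.1, 0.01]:
    err = abs(math.sin(h) - (h - h**3 / 6))
    print(h, err / h**5)
```

The printed ratio stays near 0.0083 for both step sizes, confirming the O(h^5) behaviour.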
ERROR ANALYSIS
The results of numerical computations are usually not exact solutions to practical problems. Therefore, it is important to estimate the errors occurring in such computations.
Let p̂ serve as an approximation to the exact value p. Then E_p = |p̂ − p| is the absolute error of the approximation, and R_p = E_p/|p| = |p̂ − p|/|p|, p ≠ 0, is the relative error.
The relative error expresses the error as a fraction (a percentage) of the true value. For example, we consider 400 cm ± 1 cm a good accuracy of measurement, while 4 cm ± 1 cm is definitely not. Notice that the absolute errors in both cases are equal: E_1 = E_2 = 1 cm. However, the relative errors are far apart: R_1 = 1/400 = 0.0025 = 0.25%, while R_2 = 1/4 = 0.25 = 25%, that is, R_1 ≪ R_2.
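The two error measures are one-liners in code. A Python sketch (the helper names `abs_err` and `rel_err` are ours), using the measurement example above:

```python
def abs_err(p_hat, p):
    """Absolute error E_p = |p_hat - p|."""
    return abs(p_hat - p)

def rel_err(p_hat, p):
    """Relative error R_p = |p_hat - p| / |p| (requires p != 0)."""
    return abs(p_hat - p) / abs(p)

# Equal absolute errors, very different relative errors:
print(abs_err(401, 400), rel_err(401, 400))  # 1 0.0025
print(abs_err(5, 4), rel_err(5, 4))          # 1 0.25
```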
ERROR ANALYSIS
A number p̂ approximates a value p to d significant digits if d is the greatest non-negative integer satisfying
R_p < 10^{1−d}/2 = 5 × 10^{−d}.
Example. For p = 2.71828183 and p̂ = 2.71 we have
|p − p̂|/|p| = |2.71828183 − 2.71|/|2.71828183| = 0.0030467 < 0.005 = 5 × 10^{−3},
but 0.0030467 ≮ 0.0005 = 5 × 10^{−4}.
So, p̂ approximates p to d = 3 significant digits.
Example. If p = 4713.82 and p̂ = 4712, then
|p − p̂|/|p| = 0.00038609 < 0.0005 = 5 × 10^{−4},
but 0.00038609 ≮ 0.00005 = 5 × 10^{−5}.
So, p̂ approximates p to d = 4 significant digits.
EXAMPLE
Example. If p = 0.00004567 and p̂ = 0.00003567, then
|p − p̂|/|p| = 0.21896212 < 0.5 = 5 × 10^{−1},
but 0.21896212 ≮ 0.05 = 5 × 10^{−2}.
So, p̂ approximates p to d = 1 significant digit.
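The "greatest d" in the definition can be found by a simple search. A Python sketch (the helper name `significant_digits` is ours) reproducing the three examples above:

```python
def significant_digits(p_hat, p):
    """Greatest non-negative integer d with |p_hat - p|/|p| < 5 * 10**(-d),
    following the definition above."""
    r = abs(p_hat - p) / abs(p)
    if r == 0:
        return float("inf")  # exact approximation: every d works
    if not r < 5:            # not even d = 0 satisfies the inequality
        return None
    d = 0
    while r < 5 * 10.0 ** (-(d + 1)):
        d += 1
    return d

print(significant_digits(2.71, 2.71828183))         # 3
print(significant_digits(4712, 4713.82))            # 4
print(significant_digits(0.00003567, 0.00004567))   # 1
```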
LOSS OF SIGNIFICANCE
Apart from roundoff errors, which are inevitable and often difficult to control, there are errors that can be avoided by using more appropriate programming algorithms.
Let x = 2.71828182 and y = 2.71826143. Each has 9 significant digits of precision. If we perform the operation x − y, we obtain x − y = 0.00002039 = 0.2039 × 10^{−4}, which has only 4 significant digits of precision. This phenomenon is called loss of significance.
One typical situation: subtraction of nearly equal quantities. Let x and y be real numbers whose exact values are x = 0.3721478693 and y = 0.3720230572. If we use a 5-digit rounded mantissa representation, we have x* = 0.37215 and y* = 0.37202. Here, the relative errors are
R_x = |x* − x|/x = (0.21307 × 10^{−5})/0.3721478693 = 0.57254 × 10^{−5} (small),
R_y = |y* − y|/y = (0.30572 × 10^{−5})/0.3720230572 = 0.82178 × 10^{−5} (small).
However, if we take x − y ≈ x* − y*, we obtain
R_{x−y} = |(x* − y*) − (x − y)|/(x − y) = (0.00013 − 0.000124812)/0.000124812 = 0.4157 × 10^{−1}.
The relative error is much greater in magnitude and may not be acceptable for practical calculations.
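The blow-up of the relative error in this subtraction can be reproduced directly. A Python sketch with the numbers from the text (both lie in (0.1, 1), so rounding to 5 decimal places keeps a 5-digit mantissa):

```python
x, y = 0.3721478693, 0.3720230572

# 5-digit rounded mantissa representations, as in the text
xs, ys = round(x, 5), round(y, 5)   # 0.37215 and 0.37202

exact = x - y        # 0.0001248121
approx = xs - ys     # 0.00013
rel = abs(approx - exact) / exact
print(rel)           # about 0.0416: a ~4% relative error from two ~0.0006% inputs
```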
EXAMPLE
Consider the calculation of the expression √(1 + x^2) − 1 when x is close to 0. Here, we have the subtraction of 1 from a quantity very close to 1 for small x (x ≈ 0). To overcome the problem, we may write the given expression in a mathematically equivalent form which does not have this type of subtraction:
√(1 + x^2) − 1 = (√(1 + x^2) − 1)(√(1 + x^2) + 1)/(√(1 + x^2) + 1) = x^2/(√(1 + x^2) + 1).
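The two algebraically equivalent forms behave very differently in floating-point arithmetic. A Python sketch (the function names `naive` and `stable` are ours):

```python
import math

def naive(x):
    # direct evaluation: sqrt(1 + x**2) is very close to 1,
    # so the subtraction cancels almost all significant digits
    return math.sqrt(1 + x * x) - 1

def stable(x):
    # equivalent form x**2 / (sqrt(1 + x**2) + 1): no cancellation
    return x * x / (math.sqrt(1 + x * x) + 1)

x = 1e-9
print(naive(x))   # 0.0 -- every significant digit is lost
print(stable(x))  # about 5e-19, the correct value x**2 / 2
```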
EXAMPLE
Consider the calculation of the expression x − sin x when x is close to 0. As we know, sin x ≈ x when x is small, so we have the same situation: subtraction of nearly equal quantities. To cope with the problem, we may use Taylor's formula:
sin x = x − x^3/3! + x^5/5! + O(x^6), x → 0.
Hence,
x − sin x = x^3/3! − x^5/5! + O(x^6), x → 0.
The truncation error may be estimated from the range of values of x and the desired accuracy.
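A Python sketch comparing the direct evaluation of x − sin x with the truncated Taylor series (the function names `naive` and `taylor` are ours):

```python
import math

def naive(x):
    return x - math.sin(x)        # subtraction of nearly equal quantities

def taylor(x):
    # x - sin x = x**3/3! - x**5/5! + O(x**6)
    return x**3 / 6 - x**5 / 120

x = 1e-5
print(naive(x), taylor(x))        # both about 1.6667e-16 here

# For much smaller x the direct form degrades to rounding noise (or 0),
# while the Taylor form still delivers x**3/6 to full precision.
```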
EXERCISE
Suggest ways to avoid the loss of significance in these calculations.
1. ln x − ln y when x is close to y.
Re-write: ln x − ln y = ln(x/y).
2. √(x + 2) − √2 when x is close to 0.
We can transform the expression as follows:
√(x + 2) − √2 = (√(x + 2) − √2)(√(x + 2) + √2)/(√(x + 2) + √2) = x/(√(x + 2) + √2).
3. e^x − 1 when x is close to 0.
We use Taylor's formula
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + O(x^5), x → 0,
and obtain
e^x − 1 = x(1 + x/2 + x^2/6 + x^3/24) + O(x^5), x → 0.
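For the third item, the truncated series can be compared in Python against both the naive evaluation and the standard library routine `math.expm1`, which computes e^x − 1 accurately for small x:

```python
import math

x = 1e-12

direct = math.exp(x) - 1                     # cancellation: exp(x) is almost 1
series = x * (1 + x/2 + x**2/6 + x**3/24)    # truncated Taylor series from above
library = math.expm1(x)                      # accurate e**x - 1 from the stdlib

print(direct)   # about 1.000089e-12 -- only ~4 correct digits survive
print(series)   # agrees with math.expm1 to machine precision
print(library)
```

Most numerical libraries ship such specialized routines (`expm1`, `log1p`, ...) precisely to avoid this cancellation.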
SOLVING NON-LINEAR EQUATIONS
Let f(x) be a continuous function, and consider the equation f(x) = 0. A number x_0 for which f(x_0) = 0 is a solution to the given equation, also called a root or zero of the function f. In general, an equation may have one, several, or infinitely many solutions, or no solutions at all. To solve an equation means to find all of its solutions. We know that a linear equation of the form ax + b = 0, a ≠ 0, has a unique solution, easily found as x = −b/a. If the function f is not linear, we face a non-linear equation. Some of them can be solved easily (quadratic equations, standard logarithmic, exponential, or trigonometric equations). However, in many applied problems we have to solve equations that either do not have explicit formulae providing solutions, or whose formulae are too complicated to be used in practice. In this case, practitioners usually find solutions with the help of modern software or numerical algorithms. These, as a rule, lead to approximate solutions, which may be sufficient for applications.
SOLVING NON-LINEAR EQUATIONS
A common procedure for solving equations numerically can be outlined as follows.
1. Localization of roots: find intervals such that each one contains exactly one root. This is usually done with the help of calculus methods.
2. Choosing an initial approximation to the root in this interval.
3. Constructing a sequence converging to this root, that is, choosing a suitable algorithm.
4. Estimating the error after n steps.
EXAMPLES
In order to locate and separate roots, we have to examine the behaviour of the function f. The following intermediate value theorem from the calculus course is often helpful.
Theorem
If f(x) ∈ C[a, b] and f(a) · f(b) < 0, then there is at least one point c ∈ [a, b] such that f(c) = 0.
If, in addition, f(x) is strictly increasing or decreasing on [a, b], then such a point c is unique.
Graphical representations of functions may be convenient for locating their roots.
Example
Consider the equation e^x + x = 0. Show that the equation has exactly one real solution, and find an interval of length at most 1 containing the solution.
Solution: We can write this equation as e^x = −x. A solution of the equation is an intersection point of the graphs y = e^x and y = −x.
[Figure: graphs of y = e^x and y = −x, intersecting at a single point x_0 between −1 and 0.]
From the graphs, we see that there is exactly one intersection point and, hence, exactly one solution to the given equation. Take f(x) = e^x + x. Then f(0) = e^0 + 0 = 1 > 0, while f(−1) = e^{−1} − 1 < 0. Since f(x) is continuous, we conclude that it has a root on [−1, 0].
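The sign check that localizes the root is easy to carry out numerically. A Python sketch for this example:

```python
import math

def f(x):
    return math.exp(x) + x

# f(-1) = e**(-1) - 1 < 0 and f(0) = 1 > 0, so by the intermediate value
# theorem f has a root on [-1, 0]; since f'(x) = e**x + 1 > 0, f is
# strictly increasing and that root is unique.
print(f(-1.0), f(0.0))          # opposite signs confirm the bracket [-1, 0]
assert f(-1.0) * f(0.0) < 0
```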
How to find solutions of non-linear equations numerically?
Next, we are going to discuss different methods used for
solving such equations.