Approximation and Errors

Well-Posed Problems
- A problem is well-posed if its solution
  - exists
  - is unique
  - depends continuously on the problem data
- Otherwise, the problem is ill-posed
Remedy to an Ill-Posed Problem
- Replace the difficult problem by an easier one having the same or a close enough solution
  - infinite → finite
  - differential → algebraic
  - nonlinear → linear
  - complicated → simple
- The solution obtained may only approximate that of the original problem
Sources of Approximation
- Before computation
  - modeling
  - empirical measurements
  - previous computations
- During computation
  - truncation or discretization
  - rounding
- Accuracy of the final result reflects all of these
- Uncertainty in the input may be amplified by the problem
- Perturbations during computation may be amplified by the algorithm

(From Michael T. Heath's slides)
Example: Approximations
- Computing the surface area of Earth using the formula A = 4πr² involves several approximations
- Earth is modeled as a sphere, idealizing its true shape
- The value for the radius is based on empirical measurements and previous computations
- The value for π requires truncating an infinite process
- Values for input data and results of arithmetic operations are rounded in the computer

(From Michael T. Heath's slides)
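To make the list above concrete, here is a minimal sketch (Python, not from the original slides; the radius is an assumed rounded value) that evaluates A = 4πr² with a truncated π and compares it against the same formula with full double-precision π:

```python
import math

r = 6.37e6                 # assumed approximate Earth radius in meters (rounded measurement)
pi_truncated = 3.14159     # π truncated after a few digits (truncation of an infinite process)

area_truncated = 4 * pi_truncated * r**2
area_better = 4 * math.pi * r**2   # still approximate: sphere model, rounded radius, finite precision

print(area_truncated, area_better)
print("relative difference:", abs(area_truncated - area_better) / area_better)
```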
Error Definition
- Absolute error: |approximate value − true value|
- Relative error: (absolute error) / (true value)
- The true value is usually unknown, so we estimate or bound the error rather than compute it exactly
- The relative error is often taken relative to the approximate value rather than the (unknown) true value:

  (current approximation − previous approximation) / (current approximation)

(From Michael T. Heath's slides)
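A minimal sketch (Python; the sample values are purely illustrative) of these definitions:

```python
def absolute_error(approx, true):
    return abs(approx - true)

def relative_error(approx, true):
    return abs(approx - true) / abs(true)

def estimated_relative_error(current, previous):
    # used when the true value is unknown
    return abs(current - previous) / abs(current)

print(absolute_error(3.14, 3.14159265))        # ~1.59e-3
print(relative_error(3.14, 3.14159265))        # ~5.07e-4
print(estimated_relative_error(1.625, 1.5))    # ~7.69e-2
```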
Truncation Error
- Difference between the true result and the result produced by a given algorithm using exact arithmetic
- Due to approximations such as truncating an infinite series or terminating an iterative sequence before convergence

  e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...

  e^x ≈ 1 + x/1! + x^2/2! + x^3/3!     (the dropped terms are the truncation error)
Example: estimating e^0.5 by truncating the series, with the relative error of each step estimated as
(current approximation − previous approximation) / (current approximation):

  e^x ≈ 1                             →  e^0.5 ≈ 1
  e^x ≈ 1 + x/1!                      →  e^0.5 ≈ 1.5            (1.5 − 1)/1.5 × 100% = 33.3%
  e^x ≈ 1 + x/1! + x^2/2!             →  e^0.5 ≈ 1.625          (1.625 − 1.5)/1.625 × 100% = 7.69%
  e^x ≈ 1 + x/1! + x^2/2! + x^3/3!    →  e^0.5 ≈ 1.645833333    (1.645833 − 1.625)/1.645833 × 100% = 1.27%
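A short sketch (Python assumed) that reproduces the table above by accumulating the partial sums at x = 0.5:

```python
import math

x = 0.5
term = 1.0      # current term x**k / k!, starting with k = 0
approx = 1.0    # partial sum so far (just the zeroth term)

for k in range(1, 4):
    term *= x / k                             # build x**k / k! incrementally
    previous, approx = approx, approx + term
    est_rel_err = abs(approx - previous) / abs(approx)
    print(k, approx, f"{est_rel_err:.2%}")     # 33.33%, 7.69%, 1.27%

print("true value:", math.exp(0.5))            # 1.6487..., so the truncation error keeps shrinking
```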
Round-off Error
- Difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using limited-precision arithmetic
- Due to inexact representation of real numbers and of the arithmetic operations upon them
- Computational error = truncation error + round-off error
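A quick illustration (Python assumed; not from the slides) of round-off due to limited-precision arithmetic:

```python
# Real numbers and operations on them are represented inexactly in binary floating point.
print(0.1 + 0.2 == 0.3)     # False: both sides carry small round-off errors
print(0.1 + 0.2 - 0.3)      # about 5.6e-17, pure round-off

# Round-off can also discard information when magnitudes differ greatly.
print((1e16 + 1.0) - 1e16)  # 0.0, not the exact 1.0: the added 1.0 is lost to rounding
```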
Propagated Error
- An error in the succeeding steps of a process due to the occurrence of an earlier error
Forward and Backward Error
- Suppose we want to compute y = f(x), where f is a real-valued function, but we obtain the approximate value y_calc
- Forward error: E_fwd = y_calc − y_exact, where y_exact = f(x)
- Backward error: E_backw = x_calc − x, where x_calc = f⁻¹(y_calc)
Forward and Backward Error
- Same definitions in "hat" notation: we want y = f(x), where f is a real-valued function, but obtain the approximate value ŷ
- Forward error: Δy = ŷ − y, where y = f(x)
- Backward error: Δx = x̂ − x, where x̂ = f⁻¹(ŷ)
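A minimal sketch (Python; the function and numbers are my own illustrative choice) of both error types for f(x) = √x, pretending the computed answer for √2 is ŷ = 1.4:

```python
import math

x = 2.0
y_exact = math.sqrt(x)           # what we want: f(x)
y_hat = 1.4                      # illustrative computed (approximate) value

forward_error = y_hat - y_exact  # error in the output, about -0.0142
x_hat = y_hat ** 2               # f⁻¹(ŷ): the input for which 1.4 would be the exact answer
backward_error = x_hat - x       # error in the input, 1.96 - 2 = -0.04

print(forward_error, backward_error)
```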
Backward Error Analysis
- Idea: the approximate solution is the exact solution to a modified problem
- How much must the original problem change to give the result actually obtained?
- How much data error in the input would explain all the error in the computed result?
- An approximate solution is good if it is the exact solution to a nearby problem

(From Michael T. Heath's slides)
Example: Backward Error Analysis
- Approximating the cosine function by truncating its Taylor series after two terms gives
  ŷ = f̂(x) = 1 − x²/2
- Forward error:
  Δy = ŷ − y = f̂(x) − f(x) = 1 − x²/2 − cos(x)
- Backward error: Δx = x̂ − x, where
  x̂ = f⁻¹(ŷ) = arccos(ŷ)
Example (cont.)
- For x = 1:
  y = f(1) = cos(1) ≈ 0.5403
  ŷ = f̂(1) = 1 − 1²/2 = 0.5
  x̂ = arccos(ŷ) = arccos(0.5) ≈ 1.0472
- Forward error: Δy = ŷ − y ≈ 0.5 − 0.5403 = −0.0403
- Backward error: Δx = x̂ − x ≈ 1.0472 − 1 = 0.0472
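These numbers can be reproduced in a few lines (Python assumed):

```python
import math

f = math.cos
f_hat = lambda x: 1 - x**2 / 2       # two-term Taylor approximation of cos
x = 1.0

y = f(x)                             # 0.5403...
y_hat = f_hat(x)                     # 0.5
x_hat = math.acos(y_hat)             # 1.0472...

print("forward error :", y_hat - y)  # about -0.0403
print("backward error:", x_hat - x)  # about  0.0472
```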
Sensitivity and Conditioning
- A problem is insensitive, or well-conditioned, if a relative change in the input causes a similar relative change in the solution
- A problem is sensitive, or ill-conditioned, if the relative change in the solution can be much larger than that in the input
- Condition number:
  cond = |relative change in solution| / |relative change in input data|
       = |[f(x̂) − f(x)] / f(x)| / |(x̂ − x) / x|
Sensitivity and Conditioning (cont.)
- A problem is sensitive, or ill-conditioned, if cond >> 1
- A well-posed problem can still be ill-conditioned
- A computational algorithm should not make the sensitivity worse
Condition Number
- The condition number is the amplification factor relating relative forward error to relative backward error:
  cond = |[f(x̂) − f(x)] / f(x)| / |(x̂ − x) / x| = |Δy / y| / |Δx / x|
  so |relative forward error| = cond × |relative backward error|
- The condition number is usually not known exactly and may vary with the input, so a rough estimate or upper bound is used
Example: Evaluating Function
- Evaluate a function f at the approximate input x̂ = x + h instead of the true input x
- Using f(x + h) − f(x) ≈ h f′(x):
  cond = |relative change in solution| / |relative change in input data|
       = |[f(x̂) − f(x)] / f(x)| / |(x̂ − x) / x|
       = |[f(x + h) − f(x)] / f(x)| / |h / x|
       ≈ |h f′(x) / f(x)| / |h / x|
       = |x f′(x) / f(x)|
Example: Sensitivity
- The tangent function is sensitive for arguments near π/2:
  tan(1.57078) ≈ 6.1249 × 10^4
  tan(1.57079) ≈ 1.58058 × 10^5
- For x = 1.57079, cond ≈ 2.48275 × 10^5
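A quick check of these figures (Python assumed), using the estimate cond ≈ |x f′(x) / f(x)| from the previous slide with f = tan and f′(x) = 1 + tan²(x):

```python
import math

def cond_tan(x):
    # cond ≈ |x * f'(x) / f(x)| for f = tan, where f'(x) = 1 + tan(x)**2
    t = math.tan(x)
    return abs(x * (1 + t * t) / t)

print(math.tan(1.57078))   # about 6.1249e4
print(math.tan(1.57079))   # about 1.58058e5
print(cond_tan(1.57079))   # about 2.48275e5
```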
Stability
- An algorithm is stable if the result it produces is relatively insensitive to perturbations during the computation
- Stability of an algorithm is analogous to conditioning of a problem
- From the viewpoint of backward error analysis, an algorithm is stable if the result it produces is the exact solution to a nearby problem
- For a stable algorithm, the effect of computational error is no worse than the effect of a small data error in the input

(From Michael T. Heath's slides)
Accuracy
- Accuracy: closeness of the computed solution to the true solution of the problem
- Stability alone does not guarantee accurate results
- Accuracy depends on the conditioning of the problem as well as the stability of the algorithm
- Inaccuracy can result from applying
  - a stable algorithm to an ill-conditioned problem
  - an unstable algorithm to a well-conditioned problem
- Applying a stable algorithm to a well-conditioned problem yields an accurate solution

(From Michael T. Heath's slides)
Floating-Point Arithmetic
- A floating-point number is represented by
  - sign
  - fraction (mantissa)
  - exponent
- With p-digit binary precision:
  x = ±(1 + d_1/2 + d_2/2² + ... + d_{p-1}/2^{p-1}) × 2^E
  where 0 ≤ d_i ≤ 1 for i = 1, ..., p−1, and L ≤ E ≤ U
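For a concrete look at the sign, exponent, and fraction fields, this sketch (Python, IEEE 754 double precision assumed) slices them out of a float's bit pattern:

```python
import struct

def decompose(x):
    # Reinterpret the 64-bit IEEE 754 double as an unsigned integer and slice its fields.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023    # stored exponent carries a bias of 1023
    fraction = bits & ((1 << 52) - 1)           # 52 explicit fraction bits; leading 1 is implied
    return sign, exponent, fraction

# 0.15625 = +(1 + 0.25) * 2**-3, so the fraction field holds 0.25 * 2**52 = 2**50
print(decompose(0.15625))   # (0, -3, 1125899906842624)
```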
Machine Numbers
- Not all real numbers are exactly representable; those that are are called machine numbers
- Machine numbers are unequally spaced
- If a real number is not exactly representable, it is approximated by a nearby machine number, causing round-off error
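A small demonstration (Python 3.9+ assumed, for math.ulp) of both points: 0.1 is stored as a nearby machine number, and the spacing between machine numbers grows with magnitude:

```python
import math
from decimal import Decimal

# 0.1 is not exactly representable in binary; the stored machine number is slightly larger.
print(Decimal(0.1))           # 0.1000000000000000055511151231257827...

# The gap to the next machine number (one ulp) grows with the magnitude of the number.
for v in (1.0, 1000.0, 1e16):
    print(v, math.ulp(v))     # about 2.2e-16, 1.1e-13, 2.0
```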
Machine Precision
- Accuracy of a floating-point system is characterized by the unit round-off (machine precision, or machine epsilon), eps
- eps is the smallest number such that float(1 + eps) > 1
- If ε < eps, then 1 + ε is rounded back to 1, i.e., float(1 + ε) = 1
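The classic halving loop (Python assumed) finds eps by exactly this definition and agrees with the value the runtime reports:

```python
import sys

eps = 1.0
while 1.0 + eps / 2 > 1.0:     # keep halving while the sum is still distinguishable from 1
    eps /= 2

print(eps)                     # 2.220446049250313e-16 (2**-52) for IEEE double precision
print(sys.float_info.epsilon)  # same value
```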
Summary
- Well-posed problem
- Computational error
  - truncation error
  - round-off error
- Sensitivity (conditioning) of problem
- Condition number
- Stability of algorithm