Differentiation and Richardson Extrapolation
Douglas Wilhelm Harder, M.Math. LEL
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, Ontario, Canada
ece.uwaterloo.ca
dwharder@alumni.uwaterloo.ca
© 2012 by Douglas Wilhelm Harder. Some rights reserved.
Outline
This topic discusses numerical differentiation:
– The use of interpolation
– The centred divided-difference approximations of the derivative
and second derivative
• Error analysis using Taylor series
– The backward divided-difference approximation of the derivative
• Error analysis
– Richardson extrapolation
Outcomes Based Learning Objectives
By the end of this laboratory, you will:
– Understand how to approximate first and second derivatives
– Understand how Taylor series are used to determine errors of
various approximations
– Know how to eliminate higher-order errors using Richardson
extrapolation
– Have programmed a Matlab routine with appropriate error
checking and exception handling
Approximating the Derivative
Suppose we want to approximate the derivative:

    u^{(1)}(x) = \lim_{h \to 0} \frac{u(x + h) - u(x)}{h}
If the limit exists, this suggests that if we choose a very small h,

    u^{(1)}(x) \approx \frac{u(x + h) - u(x)}{h}
Unfortunately, this isn’t as easy as it first appears:
>> format long
>> cos(1)
ans = 0.540302305868140
>> for i = 0:20
h = 10^(-i);
(sin(1 + h) - sin(1))/h
end
At first, the approximations improve:

    h           (sin(1 + h) - sin(1))/h
    1           0.067826442017785
    0.1         0.497363752535389
    0.01        0.536085981011869
    0.001       0.539881480360327
    0.0001      0.540260231418621
    0.00001     0.540298098505865
    0.000001    0.540301885121330
    0.0000001   0.540302264040449
    0.00000001  0.540302302898255

>> cos(1)
ans = 0.540302305868140
Then it seems to get worse:

    h       (sin(1 + h) - sin(1))/h
    10^-8   0.540302302898255
    10^-9   0.540302358409406
    10^-10  0.540302247387103
    10^-11  0.540301137164079
    10^-12  0.540345546085064
    10^-13  0.539568389967826
    10^-14  0.544009282066327
    10^-15  0.555111512312578
    10^-16  0
    10^-17  0
    10^-18  0
    10^-19  0
    10^-20  0

>> cos(1)
ans = 0.540302305868140
There are two things that must be explained:
– Why do we, at first, appear to get one more digit of accuracy every time we divide h by 10?
– Why, after some point, does the accuracy decrease, ultimately rendering the approximations useless?
Increasing Accuracy
We will start with why the answer appears to improve:
– Recall Taylor's approximation

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(\xi) h^2

  where \xi \in (x, x + h), that is, ξ is close to x
– Solve this equation for the derivative
First, we isolate the term u^{(1)}(x) h:

    u^{(1)}(x) h = u(x + h) - u(x) - \frac{1}{2} u^{(2)}(\xi) h^2
Then, divide each side by h:

    u^{(1)}(x) = \frac{u(x + h) - u(x)}{h} - \frac{1}{2} u^{(2)}(\xi) h

– Again, \xi \in (x, x + h), that is, ξ is close to x
Assuming that u^{(2)}(x) doesn't vary too wildly, this term is approximately a constant:

    u^{(1)}(x) \approx \frac{u(x + h) - u(x)}{h} - \frac{1}{2} u^{(2)}(x) h
We can easily see this is true from our first example:

    \left| u^{(1)}(x) - \frac{u(x + h) - u(x)}{h} \right| \approx M h
    \quad \text{where } M = \frac{1}{2} \left| u^{(2)}(x) \right|
Thus, the absolute error of \frac{u(x + h) - u(x)}{h} as an approximation of u^{(1)}(x) is

    E_{abs} = \left| \frac{u(x + h) - u(x)}{h} - u^{(1)}(x) \right| \approx M h

Therefore,
– If we halve h, the absolute error should drop by approximately half
– If we divide h by 10, the absolute error should drop by a factor of approximately 10
>> cos(1)
ans = 0.540302305868140

    h      (sin(1 + h) - sin(1))/h   Absolute Error    (1/2) sin(1) h
    1      0.067826442017785         0.47248           0.42074
    0.1    0.497363752535389         0.042939          0.042074
    0.01   0.536085981011869         0.0042163         0.0042074
    10^-3  0.539881480360327         0.00042083        0.00042074
    10^-4  0.540260231418621         0.000042074       0.000042074
    10^-5  0.540298098505865         0.0000042074      0.0000042074
    10^-6  0.540301885121330         0.00000042075     0.00000042074
    10^-7  0.540302264040449         0.0000000418276   0.000000042074
    10^-8  0.540302302898255         0.0000000029699   0.0000000042074
    10^-9  0.540302358409406         0.000000052541    0.00000000042074

Down to h = 10^-6, the absolute error tracks the predicted error (1/2) sin(1) h almost perfectly; for smaller h, the observed error departs from the prediction.
Let's try this with something less familiar:
– The Bessel function J2(x) has the derivative

    J_2^{(1)}(x) = J_1(x) - \frac{2 J_2(x)}{x}

  and second derivative

    J_2^{(2)}(x) = J_0(x) - \frac{3 J_1(x)}{x} + \frac{6 J_2(x)}{x^2}

– These functions are implemented in Matlab as:
    J2(x)   besselj( 2, x )
    J1(x)   besselj( 1, x )
    J0(x)   besselj( 0, x )
– Bessel functions appear any time you are dealing with electromagnetic fields in cylindrical coordinates
>> x = 6.568;
>> besselj( 1, x ) - 2*besselj( 2, x )/x
ans = -0.039675290223248

    h      (J2(6.568 + h) - J2(6.568))/h   Absolute Error     (1/2)|J2''(6.568)| h
    1      0.067826442017785               0.133992           0.144008
    0.1    -0.025284847088251              0.0143904          0.0144008
    0.01   -0.038235218035143              0.00144007         0.00144008
    10^-3  -0.039531281976313              0.000144008        0.000144008
    10^-4  -0.039660889397664              0.0000144008       0.0000144008
    10^-5  -0.039673850132926              0.00000144009      0.00000144008
    10^-6  -0.039675146057405              0.000000144166     0.000000144008
    10^-7  -0.039675276397588              0.0000000138257    0.0000000144008
    10^-8  -0.039675285279372              0.00000000494388   0.00000000144008
    10^-9  -0.039675318586063              0.0000000283628    0.000000000144008
We could use a rule of thumb: use h = 10^-8
– It appears to work…
Unfortunately:
– It is not always the best approximation
– It may not give us sufficient accuracy
– We still don’t understand why our approximation breaks down…
Decreasing Precision
Suppose we want 10 digits of accuracy in our answer:
– If h = 0.01, we need 12 digits when calculating sin(1.01) and sin(1):

    \frac{\sin(1.01) - \sin(1)}{0.01} = \frac{0.846831844618 - 0.841470984808}{0.01} = \frac{0.005360859810}{0.01}

– If h = 0.00001, we need 15 digits when calculating sin(1.00001) and sin(1):

    \frac{\sin(1.00001) - \sin(1)}{0.00001} = \frac{0.841476387788881 - 0.841470984807896}{0.00001} = \frac{0.000005402980985}{0.00001}
– If h = 10^-12, we need 22 digits when calculating sin(1 + h) and sin(1):

    \frac{\sin(1 + 10^{-12}) - \sin(1)}{10^{-12}} = \frac{0.8414709848084368089584 - 0.8414709848078965066525}{10^{-12}} = \frac{0.0000000000005403023059}{10^{-12}}

– Matlab, however, uses double-precision floating-point numbers:
  • These have a maximum accuracy of 16 decimal digits:

>> format long
>> sin( 1 + 1e-12 )
ans = 0.841470984808437
>> sin( 1 )
ans = 0.841470984807897

  so the subtraction 0.841470984808437 - 0.841470984807897 leaves only 0.000000000000540
Because of the limitations of doubles, our approximation is

    \frac{\sin(1 + 10^{-12}) - \sin(1)}{10^{-12}} \approx 0.540

Note: this is not entirely true, because Matlab uses base 2 and not base 10, but the analogy is faithful…
We can view this using the binary representation of doubles:

>> format hex
>> cos( 1 )
ans = 3fe14a280fb5068c

Expanding each hexadecimal digit of 3fe14a280fb5068c into four bits gives

    0011 1111 1110 0001 0100 1010 0010 1000 0000 1111 1011 0101 0000 0110 1000 1100

that is, the sign bit (0), the 11-bit exponent (01111111110) and the 52-bit fraction, which encode

      1.0001010010100010100000001111101101010000011010001100 × 2^(01111111110 − 01111111111)
    = 1.0001010010100010100000001111101101010000011010001100 × 2^(−1)
    = 0.10001010010100010100000001111101101010000011010001100
From this, we see:
0.10001010010100010100000001111101101010000011010001100
>> format long
>> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
ans = 0.540302306413651
>> cos( 1 )
ans = 0.540302305868140
>> format hex
>> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
ans = 3fe14a2810000000
>> cos(1)
ans = 3fe14a280fb5068c
Approximation with h = 2^-n, shown as the sign, exponent and fraction fields of the resulting double:

    n    sign  exponent     fraction
    0    0     01111111011  0001010111010001001011011110010001010011101011000000
    1    0     01111111101  0011111110001001100000110000100011011000001001110100
    2    0     01111111101  1011100001100000001101111000010000011010010011110000
    3    0     01111111101  1111001000001011101110110001001110100000111000000000
    4    0     01111111110  0000011011111110110111010001101110101110100101110000
    5    0     01111111110  0000110111011011110010010001110111011111011010000000
    6    0     01111111110  0001000101000001111110010011011101000110100000000000
    7    0     01111111110  0001001011110010111100111101010111001011000110000000
    8    0     01111111110  0001001111001010111010000100110110111000100000000000
    9    0     01111111110  0001010000110110110000000010010010001101100000000000
    10   0     01111111110  0001010001101100101000110111000011001000110000000000
    11   0     01111111110  0001010010000111100100101110111001011110000000000000
    12   0     01111111110  0001010010010101000010100010001011101111000000000000
    13   0     01111111110  0001010010011011110001011001101010100110000000000000
    14   0     01111111110  0001010010011111001000110100110111011100000000000000
    15   0     01111111110  0001010010100000110100100010010101010000000000000000
    16   0     01111111110  0001010010100001101010011001000010000000000000000000
    17   0     01111111110  0001010010100010000101010100010111100000000000000000
    18   0     01111111110  0001010010100010010010110010000010000000000000000000
    19   0     01111111110  0001010010100010011001100000111000000000000000000000
    20   0     01111111110  0001010010100010011100111000010000000000000000000000
    21   0     01111111110  0001010010100010011110100100000000000000000000000000
    22   0     01111111110  0001010010100010011111011001110000000000000000000000
    23   0     01111111110  0001010010100010011111110100100000000000000000000000
    24   0     01111111110  0001010010100010100000000010000000000000000000000000
    25   0     01111111110  0001010010100010100000001000000000000000000000000000
    26   0     01111111110  0001010010100010100000001100000000000000000000000000

    cos(1):  0  01111111110  0001010010100010100000001111101101010000011010001100
Approximation with h = 2^-n, continued:

    n    sign  exponent     fraction
    27   0     01111111110  0001010010100010100000010000000000000000000000000000
    28   0     01111111110  0001010010100010100000010000000000000000000000000000
    29   0     01111111110  0001010010100010100000000000000000000000000000000000
    30   0     01111111110  0001010010100010100000000000000000000000000000000000
    31   0     01111111110  0001010010100010100000000000000000000000000000000000
    32   0     01111111110  0001010010100010100000000000000000000000000000000000
    33   0     01111111110  0001010010100010100000000000000000000000000000000000
    34   0     01111111110  0001010010100010100000000000000000000000000000000000
    35   0     01111111110  0001010010100010100000000000000000000000000000000000
    36   0     01111111110  0001010010100010000000000000000000000000000000000000
    37   0     01111111110  0001010010100010000000000000000000000000000000000000
    38   0     01111111110  0001010010100000000000000000000000000000000000000000
    39   0     01111111110  0001010010100000000000000000000000000000000000000000
    40   0     01111111110  0001010010100000000000000000000000000000000000000000
    41   0     01111111110  0001010010100000000000000000000000000000000000000000
    42   0     01111111110  0001010010000000000000000000000000000000000000000000
    43   0     01111111110  0001010010000000000000000000000000000000000000000000
    44   0     01111111110  0001010000000000000000000000000000000000000000000000
    45   0     01111111110  0001010000000000000000000000000000000000000000000000
    46   0     01111111110  0001010000000000000000000000000000000000000000000000
    47   0     01111111110  0001000000000000000000000000000000000000000000000000
    48   0     01111111110  0001000000000000000000000000000000000000000000000000
    49   0     01111111110  0000000000000000000000000000000000000000000000000000
    50   0     01111111110  0000000000000000000000000000000000000000000000000000
    51   0     01111111110  0000000000000000000000000000000000000000000000000000
    52   0     01111111110  0000000000000000000000000000000000000000000000000000
    53   0     00000000000  0000000000000000000000000000000000000000000000000000

    cos(1):  0  01111111110  0001010010100010100000001111101101010000011010001100

Past n = 26, each further halving of h only zeroes out more trailing bits of the fraction; by n = 53, sin(1 + h) and sin(1) are identical doubles and the computed approximation is exactly 0.
This effect when subtracting two similar numbers is called subtractive cancellation. In industry, it is also referred to as catastrophic cancellation.

Ignoring the effects of subtractive cancellation is one of the most significant sources of numerical error.
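A one-line illustration, assuming nothing beyond Matlab itself: 0.1 + 0.2 and 0.3 agree in all but the last couple of bits, so their difference is almost pure rounding noise:

>> format long
>> (0.1 + 0.2) - 0.3
ans = 5.551115123125783e-17

The surviving digits 555111512312578 are exactly the ones that polluted the tables above once h became too small.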
Consequence:
– Unlike calculus, we cannot make h arbitrarily small
Possible solutions:
– Find better formulas
– Use completely different approaches
Better Approximations
Idea: find the line that interpolates the two points
(x, u(x)) and (x + h, u(x + h))
The slope of this interpolating line is our approximation of the derivative:

    \frac{u(x + h) - u(x)}{h}
What happens if we find the interpolating quadratic going through the three points (x − h, u(x − h)), (x, u(x)) and (x + h, u(x + h))?
The interpolating quadratic is clearly a local approximation of the function. The slope of the interpolating quadratic is easy to find, and it is also closer to the slope of the original function at x.
Without going through the process, finding the interpolating quadratic function gives us a similar formula:

    u^{(1)}(x) \approx \frac{u(x + h) - u(x - h)}{2h}
Additionally, we can approximate the concavity (2nd derivative) at the point x by finding the concavity of the interpolating quadratic polynomial:

    u^{(2)}(x) \approx \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}
For those interested, a few lines of Maple (interpolate the three points, differentiate, and evaluate at x) will reproduce these formulas.
Question: how much better are these two approximations?

    u^{(1)}(x) \approx \frac{u(x + h) - u(x - h)}{2h}
    \qquad
    u^{(2)}(x) \approx \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}
Using Taylor series, we have approximations for both u(x + h) and u(x − h):

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 + \frac{1}{6} u^{(3)}(\xi_1) h^3
    u(x - h) = u(x) - u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 - \frac{1}{6} u^{(3)}(\xi_2) h^3

Here, \xi_1 \in (x, x + h) and \xi_2 \in (x - h, x)
Subtracting the second approximation from the first, we get:

    u(x + h) - u(x - h) = 2u^{(1)}(x) h + \frac{1}{6} u^{(3)}(\xi_1) h^3 + \frac{1}{6} u^{(3)}(\xi_2) h^3
                        = 2u^{(1)}(x) h + \frac{1}{6} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^3
Solving the equation

    u(x + h) - u(x - h) = 2u^{(1)}(x) h + \frac{1}{6} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^3

for the derivative, we get:

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^2
The critical term is the h^2 in the error:

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^2
This says
– If we halve h, the error goes down by a factor of 4
– If we divide h by 10, the error goes down by a factor of 100
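This is easy to check numerically; the following is a minimal sketch (assuming u = sin and x = 1, so the exact derivative is cos(1)), in which each halving of h should roughly quarter the error:

>> for h = [0.1 0.05 0.025 0.0125]
       err = abs( (sin(1 + h) - sin(1 - h))/(2*h) - cos(1) );
       fprintf( 'h = %6.4f   error = %.3e\n', h, err );
   end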
Adding the two approximations

    u(x + h) = u(x) + u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 + \frac{1}{6} u^{(3)}(x) h^3 + \frac{1}{24} u^{(4)}(\xi_1) h^4
    u(x - h) = u(x) - u^{(1)}(x) h + \frac{1}{2} u^{(2)}(x) h^2 - \frac{1}{6} u^{(3)}(x) h^3 + \frac{1}{24} u^{(4)}(\xi_2) h^4

we get

    u(x + h) + u(x - h) = 2u(x) + u^{(2)}(x) h^2 + \frac{1}{24} \left( u^{(4)}(\xi_1) + u^{(4)}(\xi_2) \right) h^4
Solving the equation

    u(x + h) + u(x - h) = 2u(x) + u^{(2)}(x) h^2 + \frac{1}{24} \left( u^{(4)}(\xi_1) + u^{(4)}(\xi_2) \right) h^4

for the 2nd derivative, we get:

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{24} \left( u^{(4)}(\xi_1) + u^{(4)}(\xi_2) \right) h^2
Again, the term in the error is h^2:

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{24} \left( u^{(4)}(\xi_1) + u^{(4)}(\xi_2) \right) h^2
Thus, both of these formulas are reasonable
approximations for the first and second derivatives
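As a quick check of the second-derivative formula, take u = sin and x = 1 again, where the exact value is −sin(1) = −0.841470984807897:

>> h = 0.01;
>> (sin(1 + h) - 2*sin(1) + sin(1 - h))/h^2
ans = -0.841463972572898

an error of about 7.0e-6, consistent with the (1/12)|u^{(4)}(x)| h^2 ≈ 7.0e-6 predicted by the error term above.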
Example
We will demonstrate this by finding the approximation of both the derivative and 2nd derivative of u(x) = x^3 e^{-0.5x} at x = 0.8
Using Maple, the correct values to 17 decimal digits are:
    u^{(1)}(0.8) = 1.1154125566033037
    u^{(2)}(0.8) = 2.0163226984752030
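To reproduce the table below, u can be entered as an anonymous function; a minimal sketch:

>> u = @(x) x.^3.*exp( -0.5*x );
>> h = 0.1;
>> (u(0.8 + h) - u(0.8))/h
ans = 1.216270589620254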
u  x  h  u  x
h
u  x  h  u  x  h
u  x  h   2u  x   u  x  h 
2h
h2
h
Approximation
Error
Approximation
Error
10-1
1.216270589620254
1.0085e-1
1.115614538793770
2.020e-04
2.013121016529673
3.2017e-3
10-2
1.125495976919111
1.0083e-2
1.115414523410804
1.9668e-6
2.016290701661316
3.1997e-5
10-3
1.116420737455270
1.0082e-3
1.115412576266073
1.9663e-8
2.016322378395330
3.2008e-7
10-4
1.115513372934029
1.0082e-4
1.115412556799700
1.9340e-10
2.016322686593242
1.1882e-8
10-5
1.115422638214847
1.0082e-5
1.115412556604301
9.9676e-13
2.016322109277269
5.8920e-7
10-6
1.115413564789503
1.0082e-6
1.115412556651485
4.8181e-11
2.016276035021747
4.6663e-5
10-7
1.115412656682580
1.0082e-7
1.115412555929840
6.7346e-10
2.015054789694660
1.2679e-3
10-8
1.115412562313622
5.7103e-9
1.115412559538065
2.9348e-9
0.555111512312578
1.4612
10-9
1.115412484598011
7.2005e-8
1.115412512353586
4.4250e-8
-55.511151231257820
57.5275
u(1)(0.8) = 1.1154125566033037
Approximation
Error
u(2)(0.8) = 2.0163226984752030
Better Approximations
To give names to these formulas:

First Derivative

    \frac{u(x + h) - u(x)}{h}                  1st-order forward divided-difference formula

    \frac{u(x + h) - u(x - h)}{2h}             2nd-order centred divided-difference formula

Second Derivative

    \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}    2nd-order centred divided-difference formula
Suppose, however, you don't have access to both x + h and x − h
– This is often the case in a time-dependent system, where only the current and earlier values are available:

    u^{(1)}(t) \approx \frac{u(t) - u(t - \Delta t)}{\Delta t}
Using the same idea: find the interpolating polynomial, but now find the slope at the right-hand point:

    u^{(1)}(t) \approx \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t}
Using Taylor series, we have approximations for both u(t − Δt) and u(t − 2Δt):

    u(t - \Delta t)  = u(t) - u^{(1)}(t) \Delta t + \frac{1}{2} u^{(2)}(t) (\Delta t)^2 - \frac{1}{6} u^{(3)}(\tau_1) (\Delta t)^3
    u(t - 2\Delta t) = u(t) - u^{(1)}(t) (2\Delta t) + \frac{1}{2} u^{(2)}(t) (2\Delta t)^2 - \frac{1}{6} u^{(3)}(\tau_2) (2\Delta t)^3

Here, \tau_1 \in (t - \Delta t, t) and \tau_2 \in (t - 2\Delta t, t)
Expand the terms (2Δt)^2 and (2Δt)^3:

    u(t - \Delta t)  = u(t) - u^{(1)}(t) \Delta t + \frac{1}{2} u^{(2)}(t) (\Delta t)^2 - \frac{1}{6} u^{(3)}(\tau_1) (\Delta t)^3
    u(t - 2\Delta t) = u(t) - 2u^{(1)}(t) \Delta t + 2u^{(2)}(t) (\Delta t)^2 - \frac{4}{3} u^{(3)}(\tau_2) (\Delta t)^3

Now, to cancel the order (Δt)^2 terms, we must subtract the second equation from four times the first equation
This leaves us a formula containing the derivative:

    4u(t - \Delta t) = 4u(t) - 4u^{(1)}(t) \Delta t + 2u^{(2)}(t) (\Delta t)^2 - \frac{2}{3} u^{(3)}(\tau_1) (\Delta t)^3
     u(t - 2\Delta t) = u(t) - 2u^{(1)}(t) \Delta t + 2u^{(2)}(t) (\Delta t)^2 - \frac{4}{3} u^{(3)}(\tau_2) (\Delta t)^3

    4u(t - \Delta t) - u(t - 2\Delta t) = 3u(t) - 2u^{(1)}(t) \Delta t - \frac{2}{3} u^{(3)}(\tau_1) (\Delta t)^3 + \frac{4}{3} u^{(3)}(\tau_2) (\Delta t)^3
                                        = 3u(t) - 2u^{(1)}(t) \Delta t + \frac{2}{3} \left( 2u^{(3)}(\tau_2) - u^{(3)}(\tau_1) \right) (\Delta t)^3
Solving

    4u(t - \Delta t) - u(t - 2\Delta t) = 3u(t) - 2u^{(1)}(t) \Delta t + \frac{2}{3} \left( 2u^{(3)}(\tau_2) - u^{(3)}(\tau_1) \right) (\Delta t)^3

for the derivative yields

    u^{(1)}(t) = \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \frac{1}{3} \left( 2u^{(3)}(\tau_2) - u^{(3)}(\tau_1) \right) (\Delta t)^2

This is the backward divided-difference approximation of the derivative at the point t
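A quick numeric check of this formula, with u = sin and t = 1 (the exact value is cos(1) = 0.540302305868140):

>> h = 0.1;
>> (3*sin(1) - 4*sin(1 - h) + sin(1 - 2*h))/(2*h)
ans = 0.542307034066392

an error of about 2.0e-3, compared with about 9.0e-4 for the centred formula with the same h.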
Comparing the error terms, we see that both formulas are second order
– The centred divided-difference formula, however, has a smaller coefficient

    u^{(1)}(t) = \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \frac{1}{3} \left( 2u^{(3)}(\tau_2) - u^{(3)}(\tau_1) \right) (\Delta t)^2

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{12} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^2

Question: is it a factor of ¼ or a factor of ½?
You will write four functions:

    function [dy] = D1st( u, x, h )
    function [dy] = Dc( u, x, h )
    function [dy] = D2c( u, x, h )
    function [dy] = Db( u, x, h )

that implement, respectively, the formulas

    \frac{u(x + h) - u(x)}{h}
    \frac{u(x + h) - u(x - h)}{2h}
    \frac{u(x + h) - 2u(x) + u(x - h)}{h^2}
    \frac{3u(x) - 4u(x - h) + u(x - 2h)}{2h}
Yes, they’re all one line…
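As a reference point, here is a minimal sketch of the four bodies (each function would live in its own .m file, and the error checking and exception handling required by the laboratory are omitted):

function [dy] = D1st( u, x, h )
    % 1st-order forward divided-difference approximation of u'(x)
    dy = (u(x + h) - u(x))/h;
end

function [dy] = Dc( u, x, h )
    % 2nd-order centred divided-difference approximation of u'(x)
    dy = (u(x + h) - u(x - h))/(2*h);
end

function [dy] = D2c( u, x, h )
    % 2nd-order centred divided-difference approximation of u''(x)
    dy = (u(x + h) - 2*u(x) + u(x - h))/h^2;
end

function [dy] = Db( u, x, h )
    % 2nd-order backward divided-difference approximation of u'(x)
    dy = (3*u(x) - 4*u(x - h) + u(x - 2*h))/(2*h);
end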
For example,
>> format long
>> D1st( @sin, 1, 0.1 )
ans =
0.497363752535389
>> Dc( @sin, 1, 0.1 )
ans =
0.539402252169760
>> D2c( @sin, 1, 0.1 )
ans =
-0.840769992687418
>> Db( @sin, 1, 0.1 )
ans =
0.542307034066392
>> D1st( @sin, 1, 0.01 )
ans =
0.536085981011869
>> Dc( @sin, 1, 0.01 )
ans =
0.540293300874733
>> D2c( @sin, 1, 0.01 )
ans =
-0.841463972572898
>> Db( @sin, 1, 0.01 )
ans =
0.540320525678883
Richardson Extrapolation
There is something interesting about the error terms of the centred divided-difference formulas for the 1st and 2nd derivatives:
– If you calculate it out, we only have every second term…

    u^{(1)}(x) = \frac{u(x + h) - u(x - h)}{2h} - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 - \frac{1}{5040} u^{(7)}(x) h^6 - \frac{1}{362880} u^{(9)}(x) h^8 - \cdots

    u^{(2)}(x) = \frac{u(x + h) - 2u(x) + u(x - h)}{h^2} - \frac{1}{12} u^{(4)}(x) h^2 - \frac{1}{360} u^{(6)}(x) h^4 - \frac{1}{20160} u^{(8)}(x) h^6 - \frac{1}{1814400} u^{(10)}(x) h^8 - \cdots
Let's see if we can exploit this…
– First, define

    Dc(u, x, h) \stackrel{\text{def}}{=} \frac{u(x + h) - u(x - h)}{2h}

– Therefore, we have

    u^{(1)}(x) = Dc(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 - \frac{1}{5040} u^{(7)}(x) h^6 - \frac{1}{362880} u^{(9)}(x) h^8 - \cdots
– Now write the same expansion for h and for h/2:

    u^{(1)}(x) = Dc(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 + O(h^6)

    u^{(1)}(x) = Dc\left(u, x, \frac{h}{2}\right) - \frac{1}{6} u^{(3)}(x) \left(\frac{h}{2}\right)^2 - \frac{1}{120} u^{(5)}(x) \left(\frac{h}{2}\right)^4 + O(h^6)

The second is a better approximation: ¼ the error
Expanding the products:

    u^{(1)}(x) = Dc(u, x, h) - \frac{1}{6} u^{(3)}(x) h^2 - \frac{1}{120} u^{(5)}(x) h^4 + O(h^6)

    u^{(1)}(x) = Dc\left(u, x, \frac{h}{2}\right) - \frac{1}{4 \cdot 6} u^{(3)}(x) h^2 - \frac{1}{16 \cdot 120} u^{(5)}(x) h^4 + O(h^6)
Now, subtract the first equation from four times the second:

    4u^{(1)}(x) - u^{(1)}(x) = 4 Dc\left(u, x, \frac{h}{2}\right) - Dc(u, x, h) + \frac{1}{160} u^{(5)}(x) h^4 + O(h^6)
Solving for the derivative:

    u^{(1)}(x) = \frac{4 Dc\left(u, x, \frac{h}{2}\right) - Dc(u, x, h)}{3} + \frac{1}{480} u^{(5)}(x) h^4 + O(h^6)

By taking a linear combination of two previous approximations, we have an approximation which has an O(h^4) error
Let's try this with the sine function at x = 1 with h = 0.01:

    u^{(1)}(x) \approx \frac{4 Dc(\sin, 1, 0.005) - Dc(\sin, 1, 0.01)}{3} + \frac{1}{480} u^{(5)}(x) (0.01)^4

Doing the math,

    Dc(\sin, 1, 0.01)  = \frac{\sin(1.01) - \sin(0.99)}{0.02}   = 0.540293300874733666
    Dc(\sin, 1, 0.005) = \frac{\sin(1.005) - \sin(0.995)}{0.01} = 0.540300054611346006
    \cos(1.0) = 0.54030230586813971740

we see neither approximation is amazing: five digits in the second case…
If we calculate the linear combination, however, we get:

    \frac{4 Dc(\sin, 1, 0.005) - Dc(\sin, 1, 0.01)}{3} = 0.54030230585688345267
    \cos(1.0) = 0.54030230586813971740

All we did was take a linear combination of two not-so-great approximations, and we got a very good approximation…

Let's reduce h by half
– Since the error is O(h^4), reducing h by half should reduce the error by a factor of 16
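In Matlab, assuming the Dc function from earlier, this single extrapolation step is one line:

>> (4*Dc( @sin, 1, 0.005 ) - Dc( @sin, 1, 0.01 ))/3
ans = 0.540302305856883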
Again, we get more digits of accuracy:

    \frac{4 Dc(\sin, 1, 0.0025) - Dc(\sin, 1, 0.005)}{3} = 0.54030230586743619800
    \cos(1.0) = 0.54030230586813971740

How small must h be for the centred formula alone to get this accurate an answer?
– The error is given by the formula

    \frac{1}{12} \left( u^{(3)}(\xi_1) + u^{(3)}(\xi_2) \right) h^2

and thus we must solve

    \frac{1}{6} \left| \sin^{(3)}(1) \right| h^2 = 7.035 \times 10^{-13}

to get h = 0.00000224:

    \frac{\sin(1.00000224) - \sin(0.99999776)}{2 \times 0.00000224} = 0.54030230586769
As you may guess, we could repeat this again:
– Suppose we are approximating some quantity f with a formula F(h)
– Suppose the error is O(h^n); then we can write

    f = F(h) + K h^n + o(h^n)

    f = F\left(\frac{h}{2}\right) + \frac{1}{2^n} K h^n + o(h^n)

and now we can subtract the first formula from 2^n times the second:

    2^n f - f = 2^n F\left(\frac{h}{2}\right) - F(h) + o(h^n)
Solving for f(x), we get

    f(x) = \frac{2^n F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^n - 1} + o(h^n)

Note that the approximation is a weighted average of two other approximations
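As a sketch, one generic extrapolation step could be written as follows, where F is any approximation formula taking only h and n is the order of its leading error term (both names are illustrative):

function [f] = extrapolation_step( F, h, n )
    % One Richardson step: a weighted average of F(h/2) and F(h)
    % that cancels the leading O(h^n) error term
    f = (2^n*F( h/2 ) - F( h ))/(2^n - 1);
end

For example, extrapolation_step( @(h) Dc( @sin, 1, h ), 0.01, 2 ) reproduces the linear combination computed earlier.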
Question:
– Is this formula subject to subtractive cancellation?

    f(x) = \frac{2^n F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^n - 1} + o(h^n)
Therefore, if we know the powers appearing in the error of the approximation, we may apply the appropriate Richardson extrapolations…
– Given an initial value of h, we can define:

    R1,1 = D(u, x, h)
    R2,1 = D(u, x, h/2)
    R3,1 = D(u, x, h/2^2)
    R4,1 = D(u, x, h/2^3)
    R5,1 = D(u, x, h/2^4)
If the highest-order error is O(h^2), then each subsequent approximation will have an absolute error approximately ¼ of the previous one
– This applies to both centred divided-difference formulas, for the 1st and 2nd derivatives
Therefore, we could now calculate further approximations according to our Richardson extrapolation formula:

    R1,1 = D(u, x, h)
    R2,1 = D(u, x, h/2)      R2,2 = (4 R2,1 - R1,1)/3
    R3,1 = D(u, x, h/2^2)    R3,2 = (4 R3,1 - R2,1)/3
    R4,1 = D(u, x, h/2^3)    R4,2 = (4 R4,1 - R3,1)/3
    R5,1 = D(u, x, h/2^4)    R5,2 = (4 R5,1 - R4,1)/3
These values are now dropping according to O(h^4)
– Whatever the error is for R2,2, the error of R3,2 is 1/16th that, and the error for R4,2 is reduced by a further factor of 16
Replacing n with 4 in our formula, we get:

    f(x) = \frac{2^4 F\left(f, x, \frac{h}{2}\right) - F(f, x, h)}{2^4 - 1} + o(h^4)

and thus we can extend the table with a third column:

    R3,3 = (16 R3,2 - R2,2)/15
    R4,3 = (16 R4,2 - R3,2)/15
    R5,3 = (16 R5,2 - R4,2)/15
Again, the errors are now dropping according to O(h^6), and each approximation has 1/64th the error of the previous
– Why not give it another go? A fourth column:

    R4,4 = (64 R4,3 - R3,3)/63
    R5,4 = (64 R5,3 - R4,3)/63
We could, again, repeat this process:

    R5,5 = (256 R5,4 - R4,4)/255

Thus, we would have a matrix of entries

    R1,1
    R2,1  R2,2
    R3,1  R3,2  R3,3
    R4,1  R4,2  R4,3  R4,4
    R5,1  R5,2  R5,3  R5,4  R5,5

of which R5,5 is the most accurate
You will therefore be required to write a Matlab function

    function [du] = richardson22( D, u, x, h, N_max, eps_abs )

that will implement Richardson extrapolation:
1. Create an (Nmax + 1) × (Nmax + 1) matrix of zeros
2. Calculate R1,1 = D(u, x, h)
3. Next, create a loop that iterates a variable i from 1 to Nmax that
   a. Calculates the value Ri+1,1 = D(u, x, h/2^i) and
   b. Loops with j running from 1 to i to calculate

        Ri+1,j+1 = (4^j Ri+1,j - Ri,j)/(4^j - 1)

   c. If |Ri+1,i+1 - Ri,i| < eps_abs, returns the value Ri+1,i+1
4. If the loop finishes and nothing was returned, throw an exception indicating that Richardson extrapolation did not converge
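A minimal sketch of one such implementation (the variable names and the exact exception identifier are illustrative, and the argument checking required by the laboratory is omitted):

function [du] = richardson22( D, u, x, h, N_max, eps_abs )
    % Richardson extrapolation of an O(h^2) divided-difference formula D
    R = zeros( N_max + 1, N_max + 1 );
    R(1, 1) = D( u, x, h );

    for i = 1:N_max
        % Halve h once more for the next entry of the first column
        R(i + 1, 1) = D( u, x, h/2^i );

        % Each further column cancels the next even power of h
        for j = 1:i
            R(i + 1, j + 1) = (4^j*R(i + 1, j) - R(i, j))/(4^j - 1);
        end

        if abs( R(i + 1, i + 1) - R(i, i) ) < eps_abs
            du = R(i + 1, i + 1);
            return;
        end
    end

    error( 'richardson22:noConvergence', ...
           'Richardson extrapolation did not converge' );
end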
The accuracy is actually quite impressive:
>> richardson22( @Dc, @sin, 1, 0.1, 5, 1e-12 )
ans =
0.540302305868148
>> richardson22( @Dc, @cos, 1, 0.1, 5, 1e-12 )
ans =
-0.841470984807898
>> cos( 1 )
ans =
0.540302305868140
>> -sin( 1 )
ans =
-0.841470984807897
>> richardson22( @Dc, @sin, 2, 0.1, 5, 1e-12 )
ans =
-0.416146836547144
>> richardson22( @Dc, @cos, 2, 0.1, 5, 1e-12 )
ans =
-0.909297426825698
>> cos( 2 )
ans =
-0.416146836547142
>> -sin( 2 )
ans =
-0.909297426825682
In reality, expecting an error as small as 10^-12 is sometimes asking too much:
>> richardson22( @D2c, @sin, 1, 0.1, 5, 1e-12 )
ans =
-0.841470984807975
>> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-12 )
??? Error using ==> richardson22 at 20
Richard extrapolation did not converge
>> -sin( 1 )
ans =
-0.841470984807897
>> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-10 )
ans =
-0.540302305869316
>> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-12 )
??? Error using ==> richardson22 at 35
Richard extrapolation did not converge
>> -cos( 1 )
ans =
-0.540302305868140
>> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-10 )
ans =
-0.909297426827381
>> richardson22( @D2c, @cos, 2, 0.1, 5, 1e-10 )
ans =
0.416146836545719
>> -sin( 2 )
ans =
-0.909297426825682
>> -cos( 2 )
ans =
0.416146836547142
The Taylor series for the backward divided-difference formula

    u^{(1)}(t) \approx \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t}

does not drop off so quickly:

    u^{(1)}(t) = \frac{3u(t) - 4u(t - \Delta t) + u(t - 2\Delta t)}{2\Delta t} + \frac{1}{3} u^{(3)}(t) (\Delta t)^2 - \frac{1}{4} u^{(4)}(t) (\Delta t)^3 + \frac{7}{60} u^{(5)}(t) (\Delta t)^4 - \frac{1}{24} u^{(6)}(t) (\Delta t)^5 + \frac{31}{2520} u^{(7)}(t) (\Delta t)^6 - \cdots

Once you finish richardson22, it will be trivial to write richardson21, which is identical except that it uses the formula

    Ri+1,j+1 = (2^(j+1) Ri+1,j - Ri,j)/(2^(j+1) - 1)
Question:
– What happens if an error is larger than that expected by
Richardson extrapolation? Will this significantly affect the
answer?
– Fortunately, each step is just a linear combination with significant
weight placed on the more accurate answer
• It won’t be worse than just calling, for example, Dc( u, x,
h/2^N_max )
Summary
In this topic, we’ve looked at approximating the
derivative
– We saw the effect of subtractive cancellation
– Found the centred divided-difference formulas
• Found an interpolating function
• Differentiated that interpolating function
• Evaluated it at the point we wish to approximate the derivative
– We also found one backward divided-difference formula
– We then applied Richardson extrapolation