Introduction to Numerical Analysis I
Handout 12

1 Approximation of Functions
1.1 A Norm

Until now our approach to the approximation of functions was to find an interpolant. When we wanted to improve the error we used special interpolation points, namely the roots of orthogonal polynomials. We now learn another approach to approximation. Instead of the interpolation condition $P_n(x_j) = f(x_j)$ we will attempt to minimize the error
$$\|e(x)\| = \|P_n(x) - f(x)\|,$$
where the function $\|\cdot\|$ is defined as follows.

Definition 1.1. A norm over a vector space $V$ is a function $\|\cdot\| : V \to \mathbb{R}$ that for each scalar $\lambda \in F$ and vector $v \in V$ satisfies the following conditions:

1. Positivity: $\|v\| \geq 0$, and also $\|v\| = 0$ iff $v = 0$

2. Homogeneity: $\|\lambda v\| = |\lambda|\,\|v\|$

3. Triangle inequality: $\|v_1 + v_2\| \leq \|v_1\| + \|v_2\|$

Example 1.2.

1. For a vector $v \in \mathbb{R}^n$, including $\{v_j = f(x_j)\}_{j=0}^{n}$:

   • $L_1$-norm: $\|v\|_1 = \sum_{i=1}^{n} |v_i|$

   • $L_\infty$-norm: $\|v\|_\infty = \max_i |v_i|$

2. For real functions continuous on an interval $[a, b]$:

   • $L_1$-norm: $\|f\|_1 = \int_a^b |f(x)|\,dx$

   • $L_\infty$-norm: $\|f\|_\infty = \max_{a \leq x \leq b} |f(x)|$

Theorem 1.3 (Cauchy-Schwarz inequality (C-S)).
$$|(x, y)|^2 \leq (x, x)(y, y)$$

Proof:
$$0 \leq \left(x - \frac{(x, y)}{(y, y)}\,y,\; x - \frac{(x, y)}{(y, y)}\,y\right)
= (x, x) - \frac{(x, y)}{(y, y)}(x, y) - \frac{(x, y)}{(y, y)}(y, x) + \frac{(x, y)^2}{(y, y)^2}(y, y)
= (x, x) - 2\,\frac{|(x, y)|^2}{(y, y)} + \frac{|(x, y)|^2}{(y, y)}
= (x, x) - \frac{|(x, y)|^2}{(y, y)},$$
and hence $|(x, y)|^2 \leq (x, x)(y, y)$.

Theorem 1.4. Every inner product induces a norm
$$\|v\| = \sqrt{(v, v)}.$$

Proof: Conditions 1 and 2 are implied by the definition of the inner product. To show that the 3rd condition is satisfied we use
$$|\mathrm{Re}\,(v_1, v_2)| \leq |\mathrm{Re}\,(v_1, v_2) + i\,\mathrm{Im}\,(v_1, v_2)| = |(v_1, v_2)| \underset{\text{C-S}}{\leq} \|v_1\|\,\|v_2\| .$$
Thus
$$\|v_1 + v_2\|^2 = (v_1 + v_2, v_1 + v_2) = (v_1, v_1) + (v_1, v_2) + (v_2, v_1) + (v_2, v_2)
= (v_1, v_1) + 2\,\mathrm{Re}\,(v_1, v_2) + (v_2, v_2)
\leq \|v_1\|^2 + 2\,\|v_1\|\,\|v_2\| + \|v_2\|^2 = (\|v_1\| + \|v_2\|)^2 .$$

For the current discussion we are interested in the two following $L_2$-norms:

1. For a vector $v \in \mathbb{R}^n$, including $\{v_j = f(x_j)\}_{j=0}^{n}$, the inner product $(u, v) = \sum_j u_j v_j$ induces $\|v\|_2 = \sqrt{\sum_{i=1}^{n} |v_i|^2}$.

2. For real functions continuous on an interval $[a, b]$, the inner product $(f, g) = \int_a^b f \bar{g}\,dx$ induces $\|f\|_2 = \sqrt{\int_a^b |f(x)|^2\,dx}$.
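These definitions are easy to check numerically. The following MATLAB sketch, with arbitrarily chosen vectors and an arbitrarily chosen function (not from the handout), computes the two $L_2$-norms and verifies the Cauchy-Schwarz and triangle inequalities:

% Numerical illustration of the norms above; the vectors and the function
% are arbitrary examples.
u = [1; -2; 3];  v = [4; 0; -1];
norm_v  = sqrt(v' * v);                  % discrete L2-norm, sqrt((v,v))
cs_gap  = norm(u)*norm(v) - abs(u'*v);   % Cauchy-Schwarz: must be >= 0
tri_gap = norm(u) + norm(v) - norm(u+v); % triangle inequality: must be >= 0

a = 0; b = pi; f = @(x) sin(x);
norm_f = sqrt(integral(@(x) abs(f(x)).^2, a, b));  % continuous L2-norm of f

fprintf('||v||_2 = %g, C-S gap = %g, triangle gap = %g, ||f||_2 = %g\n', ...
        norm_v, cs_gap, tri_gap, norm_f);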
1.2 Least Square Fit

Let $f(x)$ be a real valued continuous function. We want to approximate it using a function of the following form
$$g(x) = \sum_n c_n b_n(x),$$
where $b_n(x)$ is some basis. The error is given by $e(x) = f(x) - g(x) = f(x) - \sum_n c_n b_n(x)$. We want to find the coefficients $c_1, \ldots, c_n$ such that $\|e(x)\|_2$ is minimal:
$$\|e\|_2^2 = g(c_1, \ldots, c_n) = \left(f - \sum_n c_n b_n,\; f - \sum_n c_n b_n\right)
= (f, f) - \sum_n c_n (f, b_n) - \left(\sum_n c_n b_n,\; f\right) + \sum_n \sum_m c_n c_m (b_n, b_m)
= \|f\|^2 - 2 \sum_n c_n (f, b_n) + \sum_n \sum_m c_n c_m (b_n, b_m).$$

In order to find the minimum we need to consider the derivatives, which gives
$$\frac{\partial g}{\partial c_j} = -2\,(f, b_j) + 2 \sum_n c_n (b_n, b_j) = 0.$$

Thus we get, for all $j$, $(f, b_j) = \sum_n c_n (b_n, b_j)$. One writes this in matrix form as
$$M\,(c_0, \cdots, c_N)^T = \left((f, b_0), \cdots, (f, b_N)\right)^T,$$
where $M_{m,n} = (b_m, b_n)$.
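In the discrete case, where the inner product is $(u, v) = \sum_j u_j v_j$ over the sample points, the normal equations can be assembled and solved directly. The following MATLAB sketch is only an illustration; the data, the grid, and the monomial basis are arbitrary choices, not part of the handout:

% A sketch of assembling and solving the normal equations M*c = rhs for a
% discrete least-squares fit. The data, grid and monomial basis below are
% illustrative choices only.
x = linspace(0, 1, 20)';                          % sample points
f = exp(x);                                       % values to approximate
basis = {@(x) ones(size(x)), @(x) x, @(x) x.^2};  % b_0, b_1, b_2

N = numel(basis);
B = zeros(numel(x), N);
for n = 1:N
    B(:, n) = basis{n}(x);    % column n holds b_n evaluated at the x_j
end

M   = B' * B;                 % M(m,n) = (b_m, b_n)
rhs = B' * f;                 % rhs(n) = (f, b_n)
c   = M \ rhs;                % the least-squares coefficients
g   = B * c;                  % the approximation g(x_j)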
We need to verify that the linear system is not singular, for which we will show that the homogeneous linear system $M\vec{c} = 0$ has only the trivial solution, that is $\vec{c} = \vec{0}$. For the homogeneous system $M\vec{c} = 0$ we have
$$\sum_{n=1}^{N} c_n (b_n, b_j) = \left(\sum_{n=1}^{N} c_n b_n,\; b_j\right) = 0, \quad \forall j,$$
that is, for every $j$ the function $g(x) = \sum_{n=1}^{N} c_n b_n(x)$ is orthogonal to $b_j$. However, since $g \in \mathrm{Span}\{b_n\}_{n=1}^{N}$, the orthogonality to all $\{b_n\}_{n=1}^{N}$ implies that $g = 0$. Since $\{b_n\}_{n=1}^{N}$ is linearly independent we get $c_n = 0$ for all $n$.

The minimality is due to the following: for any sequence $\varepsilon_n$,
$$\left\|f - \sum_n (c_n + \varepsilon_n) b_n\right\|^2
= \left\|f - \sum_n c_n b_n + \sum_n c_n b_n - \sum_n (c_n + \varepsilon_n) b_n\right\|^2
= \left\|f - \sum_n c_n b_n\right\|^2 + \left\|\sum_n c_n b_n - \sum_n (c_n + \varepsilon_n) b_n\right\|^2
+ 2\left(f - \sum_n c_n b_n,\; \sum_n c_n b_n - \sum_n (c_n + \varepsilon_n) b_n\right)
\geq \left\|f - \sum_n c_n b_n\right\|^2,$$
since
$$\left(f - \sum_n c_n b_n,\; \sum_n c_n b_n - \sum_n (c_n + \varepsilon_n) b_n\right)
= \left(\sum_n c_n b_n - f,\; \sum_n \varepsilon_n b_n\right)
= \sum_j \varepsilon_j \underbrace{\left\{\sum_n c_n (b_n, b_j) - (f, b_j)\right\}}_{=0} = 0.$$

Example 1.5. Given $(x_i, f(x_i)) = (1, 3.2), (2, 4.5), (3, 6.1)$, find a line that approximates the function. That is,
$$f = \begin{pmatrix} 3.2 \\ 4.5 \\ 6.1 \end{pmatrix}, \qquad
g = \alpha b_1 + \beta b_0 = \alpha \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + \beta \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
Thus
$$\begin{pmatrix} 3 & 1 + 2 + 3 \\ 1 + 2 + 3 & 1 + 4 + 9 \end{pmatrix}
\begin{pmatrix} \beta \\ \alpha \end{pmatrix}
= \begin{pmatrix} 3.2 + 4.5 + 6.1 \\ 3.2 + 9 + 18.3 \end{pmatrix}$$
and therefore
$$\beta = \frac{13.8 \cdot 14 - 30.5 \cdot 6}{42 - 36} = 1.7, \qquad
\alpha = \frac{30.5 \cdot 3 - 13.8 \cdot 6}{6} = 1.45,$$
so the approximating line is $g(x) = 1.45\,x + 1.7$.
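A quick MATLAB check of Example 1.5 (a sketch; the variable names are arbitrary):

% Quick check of Example 1.5: fit g(x) = beta + alpha*x to the three points.
x = [1; 2; 3];  f = [3.2; 4.5; 6.1];
B = [ones(3,1), x];          % columns are b_0 = 1 and b_1 = x
c = (B'*B) \ (B'*f);         % normal equations [3 6; 6 14]*c = [13.8; 30.5]
% c(1) is approximately 1.7 (the intercept), c(2) approximately 1.45 (the slope)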
1.2.1 LSF Using Orthogonal Polynomials

The serious problem of the LSF method described above is that the matrix $M$ is in the general case a full matrix, so the numerical solution may be unstable.

Example 1.6. For example, if we use the inner product $(f, g) = \int_a^b f(x) g(x)\,dx$ with the standard polynomial basis $1, x, x^2, \ldots$, the entries of the matrix become
$$M_{ij} = (b_i, b_j) = \int_a^b x^{i+j}\,dx = \left.\frac{x^{i+j+1}}{i+j+1}\right|_a^b,$$
which gives the Hilbert matrix, denoted $H_{n+1}(a, b)$; for example, for $[a, b] = [0, 1]$ and $n = 4$ one gets
$$H_5(0, 1) = \begin{pmatrix}
1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\
\frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} \\
\frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} \\
\frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} \\
\frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9}
\end{pmatrix}.$$
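The ill-conditioning can be seen directly in MATLAB; this short sketch uses the built-in hilb, which returns $H_n(0, 1)$, and prints its condition numbers:

% Ill-conditioning of the Hilbert matrix: hilb(n) is H_n(0,1). Its condition
% number grows rapidly with n, so the normal equations built from the
% standard basis 1, x, x^2, ... are numerically unstable.
for n = 5:5:15
    fprintf('cond(H_%d(0,1)) = %.2e\n', n, cond(hilb(n)));
end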
The solution to the problem above is the use of an orthogonal polynomial basis. In this case the matrix $M$ becomes diagonal, so the coefficients are given by
$$c_n = \frac{(f, b_n)}{(b_n, b_n)}.$$

Example 1.7. With the normalized Legendre polynomials on $[-1, 1]$ as the basis,
$$\{c_n\} = \left\{ \left(f, \tfrac{1}{\sqrt{2}}\right),\; \left(f, \sqrt{\tfrac{3}{2}}\,x\right),\; \left(f, \tfrac{1}{2}\sqrt{\tfrac{5}{2}}\,(3x^2 - 1)\right),\; \ldots \right\}.$$
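A minimal MATLAB sketch of Example 1.7, computing the first three coefficients by numerical quadrature; the choice $f(x) = e^x$ is only an illustration:

% Example 1.7 numerically: c_n = (f, b_n) for the first three normalized
% Legendre polynomials on [-1, 1] (so (b_n, b_n) = 1). The choice
% f(x) = exp(x) is only an illustration.
f = @(x) exp(x);
b = {@(x) 1/sqrt(2) + 0*x, ...
     @(x) sqrt(3/2) * x, ...
     @(x) 0.5*sqrt(5/2) * (3*x.^2 - 1)};
c = zeros(1, 3);
for n = 1:3
    c(n) = integral(@(x) f(x) .* b{n}(x), -1, 1);   % (f, b_n)
end
disp(c)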
1.2.2 Sensitivity to error

Consider $f(x)$ that was measured with an error $\|e(x)\| \leq \varepsilon$, that is $\tilde{f}(x) = f(x) + e(x)$. The LSF has the following interesting property:
$$\left\| \tilde{f} - \sum_n (\tilde{f}, b_n)\, b_n \right\|
= \Big\| f - \sum_n (f, b_n)\, b_n + \underbrace{e - \sum_n (e, b_n)\, b_n}_{\approx\, e(x)} \Big\|
\leq \left\| f - \sum_n (f, b_n)\, b_n \right\| + \left\| e - \sum_n (e, b_n)\, b_n \right\|
\leq \left\| f - \sum_n (f, b_n)\, b_n \right\| + \varepsilon,$$
where the last step uses the fact that the least-squares residual of $e$ is no larger than $\|e\| \leq \varepsilon$. That is, $\mathrm{LSF}(f) \approx \mathrm{LSF}(\tilde{f})$.
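A small MATLAB demonstration of this property; the function, grid, basis, and noise level are illustrative choices:

% Sensitivity demonstration: the fit of the noisy measurement f~ is close
% to the fit of f itself.
x    = linspace(0, 1, 200)';
f    = sin(2*pi*x);
ftil = f + 1e-3 * randn(size(x));        % measured data, ||e|| small
B    = [ones(size(x)), x, x.^2, x.^3];   % a cubic polynomial basis
c    = (B'*B) \ (B'*f);                  % LSF(f)
ctil = (B'*B) \ (B'*ftil);               % LSF(f~)
fprintf('difference between the two fits: %g\n', norm(B*(c - ctil)));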



1.2.3 Discrete Fourier Transform as LSF

If we use the functional basis $b_n(x) = e^{i 2\pi n x / L}$, the coefficients
$$c_n = (f, b_n) = \left(f, e^{i 2\pi n x / L}\right) = \frac{1}{L} \int_{-L/2}^{L/2} f(x)\, e^{-i 2\pi n x / L}\,dx$$
become the Fourier Transform coefficients, or, in the discrete case, the Discrete Fourier Transform coefficients:
$$c_n = (f, b_n) = \frac{1}{M} \sum_{m=-M/2}^{M/2} f(x_m)\, e^{-i 2\pi n m / M}.$$
This can be formulated with a Vandermonde matrix of $\omega = e^{-2\pi i / N}$:
$$\begin{pmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_{N-1} \end{pmatrix}
= \begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega & \omega^2 & \cdots & \omega^{N-1} \\
1 & \omega^2 & \omega^4 & \cdots & \omega^{2(N-1)} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \omega^{N-1} & \omega^{2(N-1)} & \cdots & \omega^{(N-1)^2}
\end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{pmatrix}.$$

The DFT can be calculated very efficiently using the Fast Fourier Transform (FFT) algorithm; here is how to use it in MATLAB:

xn = linspace(a,b,M);   % sample points
cn = fft(f(xn));        % unnormalized DFT of the samples
cn = fftshift(cn);      % reorder so that the n = 0 term is in the middle
cn = cn/M;              % normalize by the number of samples
Use help fft and help fftshift to understand why.
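For instance, a concrete (purely illustrative) use of the snippet above, with a function, interval, and sample count chosen only for this example:

% An illustrative use of the snippet above (function, interval and number of
% samples are arbitrary choices).
a = -1/2;  b = 1/2;  M = 64;
f  = @(x) cos(2*pi*x);           % one full period on [a, b]
xn = linspace(a, b, M);
cn = fftshift(fft(f(xn))) / M;
% The two coefficients of magnitude close to 1/2 appear next to the middle
% entry of cn, i.e. at the frequencies n = -1 and n = +1.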