MATH 311
Topics in Applied Mathematics I
Lecture 29: Orthogonality in inner product spaces.
Norm
The notion of norm generalizes the notion of length of a vector in $\mathbb{R}^n$.

Definition. Let $V$ be a vector space. A function $\alpha : V \to \mathbb{R}$, usually denoted $\alpha(x) = \|x\|$, is called a norm on $V$ if it has the following properties:
(i) $\|x\| \ge 0$, and $\|x\| = 0$ only for $x = 0$ (positivity);
(ii) $\|rx\| = |r|\,\|x\|$ for all $r \in \mathbb{R}$ (homogeneity);
(iii) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality).

A normed vector space is a vector space endowed with a norm. The norm defines a distance function on the normed vector space: $\mathrm{dist}(x, y) = \|x - y\|$.
Examples. $V = \mathbb{R}^n$, $x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n$.
• $\|x\|_\infty = \max(|x_1|, |x_2|, \dots, |x_n|)$.
• $\|x\|_p = \bigl(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\bigr)^{1/p}$, $p \ge 1$.

Examples. $V = C[a, b]$, $f : [a, b] \to \mathbb{R}$.
• $\|f\|_\infty = \max_{a \le x \le b} |f(x)|$.
• $\|f\|_p = \Bigl(\int_a^b |f(x)|^p\,dx\Bigr)^{1/p}$, $p \ge 1$.
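These norms are easy to compute numerically. A minimal sketch using NumPy (the function name `p_norm` and the sample vector are our illustrative choices, not from the lecture):

```python
import numpy as np

def p_norm(x, p):
    """The p-norm ||x||_p = (|x_1|^p + ... + |x_n|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 12.0])
print(p_norm(x, 2))                      # Euclidean length: 13.0
print(np.max(np.abs(x)))                 # sup-norm ||x||_inf: 12.0
# Sanity check against NumPy's built-in Euclidean norm:
print(np.isclose(p_norm(x, 2), np.linalg.norm(x)))   # True
```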
Inner product
The notion of inner product generalizes the notion of dot product of vectors in $\mathbb{R}^n$.

Definition. Let $V$ be a vector space. A function $\beta : V \times V \to \mathbb{R}$, usually denoted $\beta(x, y) = \langle x, y \rangle$, is called an inner product on $V$ if it is positive, symmetric, and bilinear. That is, if
(i) $\langle x, x \rangle \ge 0$, and $\langle x, x \rangle = 0$ only for $x = 0$ (positivity);
(ii) $\langle x, y \rangle = \langle y, x \rangle$ (symmetry);
(iii) $\langle rx, y \rangle = r \langle x, y \rangle$ (homogeneity);
(iv) $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ (distributive law).

An inner product space is a vector space endowed with an inner product.

Examples. $V = \mathbb{R}^n$.
• $\langle x, y \rangle = x \cdot y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$.
• $\langle x, y \rangle = d_1 x_1 y_1 + d_2 x_2 y_2 + \cdots + d_n x_n y_n$, where $d_1, d_2, \dots, d_n > 0$.
Examples. $V = C[a, b]$.
• $\langle f, g \rangle = \int_a^b f(x) g(x)\,dx$.
• $\langle f, g \rangle = \int_a^b f(x) g(x) w(x)\,dx$, where $w$ is bounded, piecewise continuous, and $w > 0$ everywhere on $[a, b]$.
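Both kinds of examples can be coded directly. A sketch (the weights `d` and the test functions are illustrative choices; numerical quadrature stands in for the exact integral):

```python
import numpy as np
from scipy.integrate import quad

# Weighted inner product on R^3 with weights d_i > 0.
d = np.array([1.0, 2.0, 3.0])
def ip_weighted(x, y):
    return float(np.sum(d * x * y))

print(ip_weighted(np.array([1.0, 0.0, 2.0]),
                  np.array([3.0, 5.0, 1.0])))   # 1*1*3 + 0 + 3*2*1 = 9.0

# Inner product on C[a, b]: <f, g> = integral of f*g over [a, b].
def ip_C(f, g, a=-np.pi, b=np.pi):
    val, _ = quad(lambda t: f(t) * g(t), a, b)
    return val

print(ip_C(np.sin, np.cos))   # ~0: sin and cos are orthogonal on [-pi, pi]
```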
Norms induced by inner products
Theorem. Suppose $\langle x, y \rangle$ is an inner product on a vector space $V$. Then $\|x\| = \sqrt{\langle x, x \rangle}$ is a norm.

Examples. • The length of a vector in $\mathbb{R}^n$,
$$|x| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2},$$
is the norm induced by the dot product $x \cdot y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$.
• The norm $\|f\|_2 = \Bigl(\int_a^b |f(x)|^2\,dx\Bigr)^{1/2}$ on the vector space $C[a, b]$ is induced by the inner product $\langle f, g \rangle = \int_a^b f(x) g(x)\,dx$.
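The construction in the theorem is one line of code. A sketch (the helper `induced_norm` is ours):

```python
import numpy as np

def induced_norm(ip, x):
    """Norm induced by an inner product: ||x|| = sqrt(<x, x>)."""
    return np.sqrt(ip(x, x))

dot = lambda x, y: float(np.dot(x, y))
x = np.array([1.0, 2.0, 2.0])
print(induced_norm(dot, x))   # 3.0, the Euclidean length of (1, 2, 2)
```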
Angle
Let $V$ be an inner product space with an inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$. Then
$$|\langle x, y \rangle| \le \|x\|\,\|y\|$$
for all $x, y \in V$ (the Cauchy-Schwarz inequality). Therefore we can define the angle between nonzero vectors in $V$ by
$$\angle(x, y) = \arccos \frac{\langle x, y \rangle}{\|x\|\,\|y\|}.$$
Then $\langle x, y \rangle = \|x\|\,\|y\| \cos \angle(x, y)$.

In particular, vectors $x$ and $y$ are orthogonal (denoted $x \perp y$) if $\langle x, y \rangle = 0$.
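A sketch of the angle formula (our helper `angle`; Cauchy-Schwarz guarantees the argument of arccos lies in $[-1, 1]$, and we clip to guard against floating-point round-off):

```python
import numpy as np

def angle(x, y):
    """Angle between nonzero vectors, via the dot product and induced norm."""
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

print(angle(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # pi/4 ~ 0.7854
print(angle(np.array([1.0, 0.0]), np.array([0.0, 2.0])))  # pi/2: orthogonal
```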
Orthogonal sets
Let $V$ be an inner product space with an inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$.

Definition. A nonempty set $S \subset V$ of nonzero vectors is called an orthogonal set if all vectors in $S$ are mutually orthogonal. That is, $0 \notin S$ and $\langle x, y \rangle = 0$ for any $x, y \in S$, $x \ne y$.

An orthogonal set $S \subset V$ is called orthonormal if $\|x\| = 1$ for any $x \in S$.

Remark. Vectors $v_1, v_2, \dots, v_k \in V$ form an orthonormal set if and only if
$$\langle v_i, v_j \rangle = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$$
Example
• $V = C[-\pi, \pi]$, $\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\,dx$.

$f_1(x) = \sin x$, $f_2(x) = \sin 2x$, ..., $f_n(x) = \sin nx$, ...
$$\langle f_m, f_n \rangle = \int_{-\pi}^{\pi} \sin(mx) \sin(nx)\,dx = \begin{cases} \pi & \text{if } m = n, \\ 0 & \text{if } m \ne n. \end{cases}$$
Thus the set $\{f_1, f_2, f_3, \dots\}$ is orthogonal but not orthonormal. It is orthonormal with respect to the scaled inner product
$$\langle\langle f, g \rangle\rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) g(x)\,dx.$$
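These orthogonality relations are easy to confirm numerically. A sketch using quadrature (the helper `ip` is ours):

```python
import numpy as np
from scipy.integrate import quad

def ip(f, g):
    """<f, g> = integral of f*g over [-pi, pi]."""
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

f = lambda n: (lambda x: np.sin(n * x))
print(ip(f(2), f(3)))   # ~0: distinct frequencies are orthogonal
print(ip(f(2), f(2)))   # ~pi: so ||f_n||^2 = pi, hence not orthonormal
```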
Orthogonality =⇒ linear independence
Theorem. Suppose $v_1, v_2, \dots, v_k$ are nonzero vectors that form an orthogonal set. Then $v_1, v_2, \dots, v_k$ are linearly independent.

Proof: Suppose $t_1 v_1 + t_2 v_2 + \cdots + t_k v_k = 0$ for some $t_1, t_2, \dots, t_k \in \mathbb{R}$. Then for any index $1 \le i \le k$ we have
$$\langle t_1 v_1 + t_2 v_2 + \cdots + t_k v_k, v_i \rangle = \langle 0, v_i \rangle = 0$$
$$\implies t_1 \langle v_1, v_i \rangle + t_2 \langle v_2, v_i \rangle + \cdots + t_k \langle v_k, v_i \rangle = 0.$$
By orthogonality, this reduces to $t_i \langle v_i, v_i \rangle = 0$; since $v_i \ne 0$, positivity gives $\langle v_i, v_i \rangle > 0$, hence $t_i = 0$.
Orthonormal basis
Suppose $v_1, v_2, \dots, v_n$ is an orthonormal basis for an inner product space $V$.

Theorem 1. Let $x = x_1 v_1 + x_2 v_2 + \cdots + x_n v_n$ and $y = y_1 v_1 + y_2 v_2 + \cdots + y_n v_n$, where $x_i, y_j \in \mathbb{R}$. Then
(i) $\langle x, y \rangle = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$,
(ii) $\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$.

Theorem 2. For any vector $x \in V$,
$$x = \langle x, v_1 \rangle v_1 + \langle x, v_2 \rangle v_2 + \cdots + \langle x, v_n \rangle v_n.$$
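Theorem 2 says coordinates relative to an orthonormal basis are just inner products. A sketch in $\mathbb{R}^2$ (the basis, a 45-degree rotation of the standard one, is our illustrative choice):

```python
import numpy as np

# An orthonormal basis of R^2.
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([-1.0, 1.0]) / np.sqrt(2)

x = np.array([3.0, 5.0])
c1, c2 = np.dot(x, v1), np.dot(x, v2)     # coordinates = inner products
print(np.allclose(c1 * v1 + c2 * v2, x))  # True: x is recovered exactly
```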
Orthogonal projection
Theorem. Let $V$ be an inner product space and $V_0$ be a finite-dimensional subspace of $V$. Then any vector $x \in V$ is uniquely represented as $x = p + o$, where $p \in V_0$ and $o \perp V_0$.

The component $p$ is called the orthogonal projection of the vector $x$ onto the subspace $V_0$.

[Figure: the vector $x$ decomposed as the projection $p$ lying in the plane $V_0$ plus the orthogonal component $o$.]

The projection $p$ is closer to $x$ than any other vector in $V_0$. Hence the distance from $x$ to $V_0$ is $\|x - p\| = \|o\|$.
Theorem. Suppose $v_1, v_2, \dots, v_n$ is an orthogonal basis for the subspace $V_0$. Then for any vector $x \in V$ the orthogonal projection $p$ onto $V_0$ is given by
$$p = \frac{\langle x, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 + \frac{\langle x, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2 + \cdots + \frac{\langle x, v_n \rangle}{\langle v_n, v_n \rangle} v_n.$$
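The formula translates directly into code. A sketch (the helper `project` and the sample vectors are ours):

```python
import numpy as np

def project(x, basis):
    """Orthogonal projection of x onto span(basis), where basis is a list
    of mutually orthogonal (not necessarily unit) vectors."""
    return sum((np.dot(x, v) / np.dot(v, v)) * v for v in basis)

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])                # orthogonal to v1
x = np.array([2.0, 3.0, 7.0])
p = project(x, [v1, v2])
print(p)                                       # [2. 3. 0.]
print(np.dot(x - p, v1), np.dot(x - p, v2))    # both 0: o = x - p is in V0's
                                               # orthogonal complement
```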
The Gram-Schmidt orthogonalization process
Let $V$ be a vector space with an inner product. Suppose $x_1, x_2, \dots, x_n$ is a basis for $V$. Let
$$v_1 = x_1,$$
$$v_2 = x_2 - \frac{\langle x_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1,$$
$$v_3 = x_3 - \frac{\langle x_3, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 - \frac{\langle x_3, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2,$$
$$\cdots$$
$$v_n = x_n - \frac{\langle x_n, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 - \cdots - \frac{\langle x_n, v_{n-1} \rangle}{\langle v_{n-1}, v_{n-1} \rangle} v_{n-1}.$$
Then $v_1, v_2, \dots, v_n$ is an orthogonal basis for $V$.
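Each step subtracts from $x_k$ its projection onto the vectors already built, which is exactly the loop below. A sketch (our function `gram_schmidt`; the input vectors are illustrative):

```python
import numpy as np

def gram_schmidt(xs):
    """Turn a list of linearly independent vectors into an orthogonal list:
    each new vector is x minus its projection onto the previous v's."""
    vs = []
    for x in xs:
        v = x - sum((np.dot(x, w) / np.dot(w, w)) * w for w in vs)
        vs.append(v)
    return vs

xs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
v1, v2, v3 = gram_schmidt(xs)
print(np.dot(v1, v2), np.dot(v1, v3), np.dot(v2, v3))   # all ~0
```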
Normalization
Let $V$ be a vector space with an inner product. Suppose $v_1, v_2, \dots, v_n$ is an orthogonal basis for $V$.

Let $w_1 = \dfrac{v_1}{\|v_1\|}$, $w_2 = \dfrac{v_2}{\|v_2\|}$, ..., $w_n = \dfrac{v_n}{\|v_n\|}$.

Then $w_1, w_2, \dots, w_n$ is an orthonormal basis for $V$.

Theorem. Any finite-dimensional vector space with an inner product has an orthonormal basis.

Remark. An infinite-dimensional vector space with an inner product may or may not have an orthonormal basis.
Problem. Approximate the function $f(x) = e^x$ on the interval $[-1, 1]$ by a quadratic polynomial.

The best approximation would be a polynomial $p(x)$ that minimizes the distance relative to the uniform norm:
$$\|f - p\|_\infty = \max_{|x| \le 1} |f(x) - p(x)|.$$
However there is no analytic way to find such a polynomial. Instead, one can find a "least squares" approximation that minimizes the integral norm
$$\|f - p\|_2 = \Bigl(\int_{-1}^{1} |f(x) - p(x)|^2\,dx\Bigr)^{1/2}.$$
The norm $\|\cdot\|_2$ is induced by the inner product
$$\langle g, h \rangle = \int_{-1}^{1} g(x) h(x)\,dx.$$
Therefore $\|f - p\|_2$ is minimal if $p$ is the orthogonal projection of the function $f$ on the subspace $P_3$ of quadratic polynomials.

We should apply the Gram-Schmidt process to the polynomials $1, x, x^2$, which form a basis for $P_3$. This yields an orthogonal basis $p_0, p_1, p_2$. Then
$$p(x) = \frac{\langle f, p_0 \rangle}{\langle p_0, p_0 \rangle} p_0(x) + \frac{\langle f, p_1 \rangle}{\langle p_1, p_1 \rangle} p_1(x) + \frac{\langle f, p_2 \rangle}{\langle p_2, p_2 \rangle} p_2(x).$$
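Carrying this out numerically: Gram-Schmidt on $1, x, x^2$ with this inner product gives (up to scaling) the Legendre polynomials $1$, $x$, $x^2 - \tfrac{1}{3}$, a classical fact we use below rather than re-derive. A sketch:

```python
import numpy as np
from scipy.integrate import quad

def ip(g, h):
    """<g, h> = integral of g*h over [-1, 1]."""
    val, _ = quad(lambda x: g(x) * h(x), -1.0, 1.0)
    return val

# Orthogonal basis of P_3 on [-1, 1] (unnormalized Legendre polynomials).
p0 = lambda x: 1.0
p1 = lambda x: x
p2 = lambda x: x**2 - 1.0/3.0

f = np.exp
c = [ip(f, p) / ip(p, p) for p in (p0, p1, p2)]
p = lambda x: c[0]*p0(x) + c[1]*p1(x) + c[2]*p2(x)
print(c)                     # ~[1.1752, 1.1036, 0.5367]
print(p(0.5), np.exp(0.5))   # ~1.682 vs ~1.649: close, but the fit is
                             # optimal in ||.||_2, not pointwise
```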
Fourier series: view from linear algebra
Suppose $v_1, v_2, \dots, v_n, \dots$ are nonzero vectors in an inner product space $V$ that form an orthogonal set $S$. Given $x \in V$, the Fourier series of the vector $x$ relative to the orthogonal set $S$ is the series
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n + \cdots, \quad \text{where } c_i = \frac{\langle x, v_i \rangle}{\langle v_i, v_i \rangle}.$$
The numbers $c_1, c_2, \dots$ are called the Fourier coefficients of $x$ relative to $S$.

By construction, a partial sum $c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$ of the Fourier series is the orthogonal projection of the vector $x$ onto the subspace $\mathrm{Span}(v_1, v_2, \dots, v_n)$.
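The coefficient formula is the same one as in the projection theorem. A sketch computing the Fourier coefficients of the function $x(t) = t$ relative to the orthogonal set $\{\sin nt\}$ from the earlier example (helper names are ours):

```python
import numpy as np
from scipy.integrate import quad

def fourier_coeff(x, v, ip):
    """c = <x, v> / <v, v> for one member v of an orthogonal set."""
    return ip(x, v) / ip(v, v)

ip = lambda f, g: quad(lambda t: f(t) * g(t), -np.pi, np.pi)[0]
x = lambda t: t
for n in (1, 2, 3):
    v = lambda t, n=n: np.sin(n * t)
    print(n, fourier_coeff(x, v, ip))  # 2.0, -1.0, 0.667: c_n = 2(-1)^(n+1)/n
```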
Classical Fourier series
Consider a functional vector space $V = C[-\pi, \pi]$ with the standard inner product
$$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\,dx.$$
Then the functions $1, \sin x, \cos x, \sin 2x, \cos 2x, \dots$ form an orthogonal set in the inner product space $V$. This gives rise to the classical Fourier series of a function $F \in C[-\pi, \pi]$:
$$a_0 + \sum_{n=1}^{\infty} a_n \cos nx + \sum_{n=1}^{\infty} b_n \sin nx,$$
where
$$a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} F(x)\,dx,$$
and for $n \ge 1$,
$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} F(x) \cos nx\,dx, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} F(x) \sin nx\,dx.$$
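These formulas compute directly. A sketch for $F(x) = x^2$ (an illustrative choice; the exact coefficients $a_0 = \pi^2/3$, $a_n = 4(-1)^n/n^2$, $b_n = 0$ are classical and serve as a check):

```python
import numpy as np
from scipy.integrate import quad

def fourier_coeffs(F, N):
    """Classical Fourier coefficients a_0, a_1..a_N, b_1..b_N on [-pi, pi]."""
    a0 = quad(F, -np.pi, np.pi)[0] / (2 * np.pi)
    a = [quad(lambda x: F(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    b = [quad(lambda x: F(x) * np.sin(n * x), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    return a0, a, b

a0, a, b = fourier_coeffs(lambda x: x**2, 3)
print(a0)   # ~3.2899 = pi^2/3
print(a)    # ~[-4.0, 1.0, -0.444] = 4(-1)^n / n^2
print(b)    # ~[0, 0, 0]: x^2 is even, so no sine terms
```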
Convergence of Fourier series
Suppose $v_1, v_2, \dots, v_n, \dots$ are vectors in an inner product space $V$ that form an orthogonal set $S$. The set $S$ is called a Hilbert basis for $V$ if any vector $x \in V$ can be expanded into a series $x = \sum_{n=1}^{\infty} \alpha_n v_n$, where $\alpha_n$ are some scalars.

Theorem 1. If $S$ is a Hilbert basis for $V$, then the above expansion is unique for any vector $x \in V$. Namely, it coincides with the Fourier series of $x$ relative to $S$.

Theorem 2. The functions $1, \sin x, \cos x, \sin 2x, \cos 2x, \dots$ form a Hilbert basis for the space $C[-\pi, \pi]$.

As a consequence, the Fourier series of a continuous function on $[-\pi, \pi]$ converges to this function with respect to the distance
$$\mathrm{dist}(f, g) = \|f - g\| = \Bigl(\int_{-\pi}^{\pi} |f(x) - g(x)|^2\,dx\Bigr)^{1/2}.$$
Note that this need not imply pointwise convergence.
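One can watch this convergence numerically. A sketch for $F(x) = |x|$, a continuous function on $[-\pi, \pi]$ (our helper names), showing the distance to the $N$-term partial sum shrinking as $N$ grows:

```python
import numpy as np
from scipy.integrate import quad

F = abs   # a continuous function on [-pi, pi]

def coeffs(N):
    a0 = quad(F, -np.pi, np.pi)[0] / (2 * np.pi)
    a = [quad(lambda t: F(t) * np.cos(n * t), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    b = [quad(lambda t: F(t) * np.sin(n * t), -np.pi, np.pi)[0] / np.pi
         for n in range(1, N + 1)]
    return a0, a, b

for N in (1, 3, 9):
    a0, a, b = coeffs(N)
    S = lambda x: a0 + sum(a[n-1] * np.cos(n * x) + b[n-1] * np.sin(n * x)
                           for n in range(1, N + 1))
    err = np.sqrt(quad(lambda x: (F(x) - S(x))**2, -np.pi, np.pi)[0])
    print(N, err)   # decreasing toward 0: convergence in the L2 distance
```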