
CHAPTER 5
INNER PRODUCT SPACES
5.1 Length and Dot Product in Rn
5.2 Inner Product Spaces
5.3 Orthonormal Bases: Gram-Schmidt Process
5.4 Mathematical Models and Least Squares Analysis
5.5 Applications of Inner Product Space
Elementary Linear Algebra
R. Larson (7th Edition)
Slides prepared by Prof. Ching-Chang Wong, Department of Electrical Engineering, Tamkang University
CH 5 Inner Product Spaces
Electric/Magnetic Flux (p.234)
Heart Rhythm Analysis (p.249)
Work (p.242)
Revenue (p.260)
Torque (p.271)
2/91
5.1 Length and Dot Product in Rn

Length:
The length of a vector v = (v1, v2, …, vn) in Rⁿ is given by
  ||v|| = √(v1² + v2² + ⋯ + vn²)

Notes: The length of a vector is also called its norm.

Notes: Properties of length
(1) ||v|| ≥ 0
(2) ||v|| = 1 ⇒ v is called a unit vector.
(3) ||v|| = 0 if and only if v = 0
(4) ||cv|| = |c| ||v||
Elementary Linear Algebra: Section 5.1, p.226
3/91

Ex 1:
(a) In R⁵, the length of v = (0, −2, 1, 4, −2) is given by
  ||v|| = √(0² + (−2)² + 1² + 4² + (−2)²) = √25 = 5
(b) In R³, the length of v = (2/√17, 2/√17, 3/√17) is given by
  ||v|| = √((2/√17)² + (2/√17)² + (3/√17)²) = √(17/17) = 1
(v is a unit vector)
Elementary Linear Algebra: Section 5.1, p.226
4/91

A standard unit vector in Rⁿ:
  {e1, e2, …, en} = {(1, 0, …, 0), (0, 1, …, 0), …, (0, 0, …, 1)}

Ex:
the standard unit vectors in R²: {i, j} = {(1, 0), (0, 1)}
the standard unit vectors in R³: {i, j, k} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

Notes: (Two nonzero vectors are parallel)
  u = cv
(1) c > 0 ⇒ u and v have the same direction
(2) c < 0 ⇒ u and v have opposite directions
Elementary Linear Algebra: Section 5.1, p.226
5/91

Thm 5.1: (Length of a scalar multiple)
Let v be a vector in Rⁿ and c be a scalar. Then
  ||cv|| = |c| ||v||
Pf:
  v = (v1, v2, …, vn)
  ⇒ cv = (cv1, cv2, …, cvn)
  ||cv|| = ||(cv1, cv2, …, cvn)||
         = √((cv1)² + (cv2)² + ⋯ + (cvn)²)
         = √(c²(v1² + v2² + ⋯ + vn²))
         = |c| √(v1² + v2² + ⋯ + vn²)
         = |c| ||v||
Elementary Linear Algebra: Section 5.1, p.227
6/91

Thm 5.2: (Unit vector in the direction of v)
If v is a nonzero vector in Rⁿ, then the vector
  u = v / ||v||
has length 1 and has the same direction as v. This vector u
is called the unit vector in the direction of v.
Pf:
v is nonzero ⇒ v ≠ 0, ||v|| ≠ 0
  u = (1/||v||) v, with 1/||v|| > 0
  (u has the same direction as v)
  ||u|| = || v/||v|| || = (1/||v||) ||v|| = 1
  (u has length 1)
Elementary Linear Algebra: Section 5.1, p.227
7/91

Notes:
(1) The vector v/||v|| is called the unit vector in the direction of v.
(2) The process of finding the unit vector in the direction of v
is called normalizing the vector v.
Elementary Linear Algebra: Section 5.1, p.227
8/91

Ex 2: (Finding a unit vector)
Find the unit vector in the direction of v = (3, −1, 2),
and verify that this vector has length 1.
Sol:
  v = (3, −1, 2) ⇒ ||v|| = √(3² + (−1)² + 2²) = √14
  v/||v|| = (1/√14)(3, −1, 2) = (3/√14, −1/√14, 2/√14)
  || v/||v|| || = √((3/√14)² + (−1/√14)² + (2/√14)²) = √(14/14) = 1
⇒ v/||v|| is a unit vector.
Elementary Linear Algebra: Section 5.1, p.227
9/91
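The normalization in Ex 2 is easy to check numerically. A minimal plain-Python sketch (the helper names `norm` and `normalize` are ours, not from the text):

```python
import math

def norm(v):
    # ||v|| = sqrt(v1^2 + v2^2 + ... + vn^2)
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    # unit vector in the direction of v (v must be nonzero)
    n = norm(v)
    return [x / n for x in v]

v = (3, -1, 2)
u = normalize(v)      # (3/sqrt(14), -1/sqrt(14), 2/sqrt(14))
print(norm(v))        # ≈ 3.7417 (= sqrt(14))
print(norm(u))        # 1.0
```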

Distance between two vectors:
The distance between two vectors u and v in Rⁿ is
  d(u, v) = ||u − v||

Notes: (Properties of distance)
(1) d(u, v) ≥ 0
(2) d(u, v) = 0 if and only if u = v
(3) d(u, v) = d(v, u)
Elementary Linear Algebra: Section 5.1, p.228
10/91

Ex 3: (Finding the distance between two vectors)
The distance between u = (0, 2, 2) and v = (2, 0, 1) is
  d(u, v) = ||u − v|| = ||(0 − 2, 2 − 0, 2 − 1)||
          = √((−2)² + 2² + 1²) = 3
Elementary Linear Algebra: Section 5.1, p.228
11/91

Dot product in Rⁿ:
The dot product of u = (u1, u2, …, un) and v = (v1, v2, …, vn)
is the scalar quantity
  u · v = u1v1 + u2v2 + ⋯ + unvn

Ex 4: (Finding the dot product of two vectors)
The dot product of u = (1, 2, 0, −3) and v = (3, −2, 4, 2) is
  u · v = (1)(3) + (2)(−2) + (0)(4) + (−3)(2) = −7
Elementary Linear Algebra: Section 5.1, p.229
12/91
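The dot product of Ex 4 and the distance of Ex 3 can be sketched in plain Python (the helper names `dot` and `distance` are ours):

```python
def dot(u, v):
    # u . v = u1*v1 + u2*v2 + ... + un*vn
    return sum(a * b for a, b in zip(u, v))

def distance(u, v):
    # d(u, v) = ||u - v||
    diff = [a - b for a, b in zip(u, v)]
    return sum(x * x for x in diff) ** 0.5

print(dot((1, 2, 0, -3), (3, -2, 4, 2)))   # -7, as in Ex 4
print(distance((0, 2, 2), (2, 0, 1)))      # 3.0, as in Ex 3
```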

Thm 5.3: (Properties of the dot product)
If u, v, and w are vectors in Rⁿ and c is a scalar,
then the following properties are true.
(1) u · v = v · u
(2) u · (v + w) = u · v + u · w
(3) c(u · v) = (cu) · v = u · (cv)
(4) v · v = ||v||²
(5) v · v ≥ 0, and v · v = 0 if and only if v = 0
Elementary Linear Algebra: Section 5.1, p.229
13/91

Euclidean n-space:
Rⁿ was defined to be the set of all ordered n-tuples of real
numbers. When Rn is combined with the standard
operations of vector addition, scalar multiplication, vector
length, and the dot product, the resulting vector space is
called Euclidean n-space.
Elementary Linear Algebra: Section 5.1, p.229
14/91

Ex 5: (Finding dot products)
  u = (2, −2), v = (5, 8), w = (−4, 3)
(a) u · v  (b) (u · v)w  (c) u · (2v)  (d) ||w||²  (e) u · (v − 2w)
Sol:
(a) u · v = (2)(5) + (−2)(8) = −6
(b) (u · v)w = −6w = −6(−4, 3) = (24, −18)
(c) u · (2v) = 2(u · v) = 2(−6) = −12
(d) ||w||² = w · w = (−4)(−4) + (3)(3) = 25
(e) v − 2w = (5 − (−8), 8 − 6) = (13, 2)
    u · (v − 2w) = (2)(13) + (−2)(2) = 26 − 4 = 22
Elementary Linear Algebra: Section 5.1, p.230
15/91

Ex 6: (Using the properties of the dot product)
Given u · u = 39, u · v = −3, v · v = 79,
find (u + 2v) · (3u + v).
Sol:
(u + 2v) · (3u + v) = u · (3u + v) + 2v · (3u + v)
  = u · (3u) + u · v + (2v) · (3u) + (2v) · v
  = 3(u · u) + u · v + 6(v · u) + 2(v · v)
  = 3(u · u) + 7(u · v) + 2(v · v)
  = 3(39) + 7(−3) + 2(79) = 254
Elementary Linear Algebra: Section 5.1, p.230
16/91

Thm 5.4: (The Cauchy–Schwarz inequality)
If u and v are vectors in Rⁿ, then
  |u · v| ≤ ||u|| ||v||
(|u · v| denotes the absolute value of u · v)

Ex 7: (An example of the Cauchy–Schwarz inequality)
Verify the Cauchy–Schwarz inequality for u = (1, −1, 3)
and v = (2, 0, −1).
Sol: u · v = −1, u · u = 11, v · v = 5
  |u · v| = |−1| = 1
  ||u|| ||v|| = √(u · u) √(v · v) = √11 √5 = √55
⇒ |u · v| ≤ ||u|| ||v||
Elementary Linear Algebra: Section 5.1, p.231
17/91
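The verification in Ex 7 can be replayed numerically; a short Python sketch (variable names are ours):

```python
def dot(u, v):
    # Euclidean dot product
    return sum(a * b for a, b in zip(u, v))

u, v = (1, -1, 3), (2, 0, -1)
lhs = abs(dot(u, v))                    # |u . v| = 1
rhs = (dot(u, u) * dot(v, v)) ** 0.5    # ||u|| ||v|| = sqrt(55)
print(lhs <= rhs)                       # True: Cauchy-Schwarz holds
```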

The angle between two vectors in Rⁿ:
  cos θ = (u · v) / (||u|| ||v||),  0 ≤ θ ≤ π

  opposite direction:  θ = π,        cos θ = −1,  u · v < 0
                       π/2 < θ < π,  cos θ < 0,   u · v < 0
                       θ = π/2,      cos θ = 0,   u · v = 0
                       0 < θ < π/2,  cos θ > 0,   u · v > 0
  same direction:      θ = 0,        cos θ = 1,   u · v > 0

Note:
The angle between the zero vector and another vector is
not defined.
Elementary Linear Algebra: Section 5.1, p.232
18/91

Ex 8: (Finding the angle between two vectors)
  u = (−4, 0, 2, −2), v = (2, 0, −1, 1)
Sol:
  ||u|| = √(u · u) = √((−4)² + 0² + 2² + (−2)²) = √24
  ||v|| = √(v · v) = √(2² + 0² + (−1)² + 1²) = √6
  u · v = (−4)(2) + (0)(0) + (2)(−1) + (−2)(1) = −12
⇒ cos θ = (u · v) / (||u|| ||v||) = −12/(√24 √6) = −12/√144 = −1
⇒ θ = π
⇒ u and v have opposite directions. (u = −2v)
Elementary Linear Algebra: Section 5.1, p.232
19/91
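The angle formula above translates directly into code. A minimal sketch (the function name `angle` is ours; the clamp guards against floating-point round-off pushing cos θ slightly outside [−1, 1]):

```python
import math

def angle(u, v):
    # theta = arccos((u . v) / (||u|| ||v||)), defined for nonzero u and v
    d = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = d / (nu * nv)
    # clamp to [-1, 1] to guard against round-off before acos
    return math.acos(max(-1.0, min(1.0, c)))

theta = angle((-4, 0, 2, -2), (2, 0, -1, 1))
print(theta)   # ≈ pi: the vectors have opposite directions (u = -2v)
```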

Orthogonal vectors:
Two vectors u and v in Rⁿ are orthogonal if
  u · v = 0

Note:
The vector 0 is said to be orthogonal to every vector.
Elementary Linear Algebra: Section 5.1, p.232
20/91

Ex 10: (Finding orthogonal vectors)
Determine all vectors in R² that are orthogonal to u = (4, 2).
Sol:
  u = (4, 2). Let v = (v1, v2).
  u · v = (4, 2) · (v1, v2) = 4v1 + 2v2 = 0
  ⇒ v1 = −t/2, v2 = t
  ⇒ v = (−t/2, t),  t ∈ R
Elementary Linear Algebra: Section 5.1, p.232
21/91

Thm 5.5: (The triangle inequality)
If u and v are vectors in Rⁿ, then ||u + v|| ≤ ||u|| + ||v||
Pf:
  ||u + v||² = (u + v) · (u + v)
    = u · (u + v) + v · (u + v) = u · u + 2(u · v) + v · v
    = ||u||² + 2(u · v) + ||v||²
    ≤ ||u||² + 2|u · v| + ||v||²
    ≤ ||u||² + 2||u|| ||v|| + ||v||²   (Cauchy–Schwarz)
    = (||u|| + ||v||)²
  ⇒ ||u + v|| ≤ ||u|| + ||v||

Note:
Equality occurs in the triangle inequality if and only if
the vectors u and v have the same direction.
Elementary Linear Algebra: Section 5.1, p.233
22/91

Thm 5.6: (The Pythagorean theorem)
If u and v are vectors in Rⁿ, then u and v are orthogonal
if and only if
  ||u + v||² = ||u||² + ||v||²
Elementary Linear Algebra: Section 5.1, p.233
23/91

Dot product and matrix multiplication:
(A vector u = (u1, u2, …, un) in Rⁿ
is represented as an n×1 column matrix.)

  u = [u1 u2 ⋯ un]ᵀ,  v = [v1 v2 ⋯ vn]ᵀ

  u · v = uᵀv = [u1 u2 ⋯ un] [v1 v2 ⋯ vn]ᵀ = [u1v1 + u2v2 + ⋯ + unvn]
Elementary Linear Algebra: Section 5.1, p.234
24/91
Key Learning in Section 5.1
▪ Find the length of a vector and find a unit vector.
▪ Find the distance between two vectors.
▪ Find a dot product and the angle between two vectors, determine
orthogonality, and verify the Cauchy-Schwarz Inequality, the
triangle inequality, and the Pythagorean Theorem.
▪ Use a matrix product to represent a dot product.
25/91
Keywords in Section 5.1












▪ length
▪ norm
▪ unit vector
▪ standard unit vector
▪ normalizing
▪ distance
▪ dot product
▪ Euclidean n-space
▪ Cauchy–Schwarz inequality
▪ angle
▪ triangle inequality
▪ Pythagorean theorem
26/91
5.2 Inner Product Spaces

Inner product:
Let u, v, and w be vectors in a vector space V, and let c be
any scalar. An inner product on V is a function that associates
a real number ⟨u, v⟩ with each pair of vectors u and v and
satisfies the following axioms.
(1) ⟨u, v⟩ = ⟨v, u⟩
(2) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(3) c⟨u, v⟩ = ⟨cu, v⟩
(4) ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0
Elementary Linear Algebra: Section 5.2, p.237
27/91

Note:
  u · v = dot product (Euclidean inner product for Rⁿ)
  ⟨u, v⟩ = general inner product for a vector space V

Note:
A vector space V with an inner product is called an inner
product space.
  Vector space: (V, +, scalar multiplication)
  Inner product space: (V, +, scalar multiplication, ⟨ , ⟩)
Elementary Linear Algebra: Section 5.2, Addition
28/91

Ex 1: (The Euclidean inner product for Rⁿ)
Show that the dot product in Rⁿ satisfies the four axioms
of an inner product.
Sol:
  u = (u1, u2, …, un), v = (v1, v2, …, vn)
  ⟨u, v⟩ = u · v = u1v1 + u2v2 + ⋯ + unvn
By Theorem 5.3, this dot product satisfies the required four axioms.
Thus it is an inner product on Rⁿ.
Elementary Linear Algebra: Section 5.2, p.237
29/91

Ex 2: (A different inner product for Rⁿ)
Show that the following function defines an inner product on R²,
where u = (u1, u2) and v = (v1, v2):
  ⟨u, v⟩ = u1v1 + 2u2v2
Sol:
(a) ⟨u, v⟩ = u1v1 + 2u2v2 = v1u1 + 2v2u2 = ⟨v, u⟩
(b) w = (w1, w2)
  ⟨u, v + w⟩ = u1(v1 + w1) + 2u2(v2 + w2)
    = u1v1 + u1w1 + 2u2v2 + 2u2w2
    = (u1v1 + 2u2v2) + (u1w1 + 2u2w2)
    = ⟨u, v⟩ + ⟨u, w⟩
Elementary Linear Algebra: Section 5.2, p.238
30/91
(c) c⟨u, v⟩ = c(u1v1 + 2u2v2) = (cu1)v1 + 2(cu2)v2 = ⟨cu, v⟩
(d) ⟨v, v⟩ = v1² + 2v2² ≥ 0
  ⟨v, v⟩ = 0 ⇒ v1² + 2v2² = 0 ⇒ v1 = v2 = 0 (v = 0)

Note: (An inner product on Rⁿ)
  ⟨u, v⟩ = c1u1v1 + c2u2v2 + ⋯ + cnunvn,  ci > 0
Elementary Linear Algebra: Section 5.2, p.238
31/91
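The weighted inner product of Ex 2 can be sketched and its axioms spot-checked in Python (the function name `wip` and the sample vectors are ours):

```python
def wip(u, v, c=(1, 2)):
    # <u, v> = c1*u1*v1 + c2*u2*v2 with positive weights (here c = (1, 2))
    return sum(ci * a * b for ci, a, b in zip(c, u, v))

u, v = (1, 2), (3, -1)
print(wip(u, v) == wip(v, u))   # True  (axiom 1: symmetry)
print(wip(v, v))                # 11    (axiom 4: <v, v> > 0 for v != 0)
```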

Ex 3: (A function that is not an inner product)
Show that the following function is not an inner product on R³:
  ⟨u, v⟩ = u1v1 − 2u2v2 + u3v3
Sol:
Let v = (1, 2, 1)
Then ⟨v, v⟩ = (1)(1) − 2(2)(2) + (1)(1) = −6 < 0
Axiom 4 is not satisfied.
Thus this function is not an inner product on R³.
Elementary Linear Algebra: Section 5.2, p.238
32/91

Thm 5.7: (Properties of inner products)
Let u, v, and w be vectors in an inner product space V, and
let c be any real number.
(1) ⟨0, v⟩ = ⟨v, 0⟩ = 0
(2) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
(3) ⟨u, cv⟩ = c⟨u, v⟩

Norm (length) of u:
  ||u|| = √⟨u, u⟩

Note:
  ||u||² = ⟨u, u⟩
Elementary Linear Algebra: Section 5.2, p.239
33/91

Distance between u and v:
  d(u, v) = ||u − v|| = √⟨u − v, u − v⟩

Angle between two nonzero vectors u and v:
  cos θ = ⟨u, v⟩ / (||u|| ||v||),  0 ≤ θ ≤ π

Orthogonal: (u ⊥ v)
u and v are orthogonal if ⟨u, v⟩ = 0.
Elementary Linear Algebra: Section 5.2, p.240
34/91

Notes:
(1) If ||v|| = 1, then v is called a unit vector.
(2) If v ≠ 0 and v is not a unit vector, then normalizing gives
  v/||v||  (the unit vector in the direction of v)
Elementary Linear Algebra: Section 5.2, p.240
35/91

Ex 6: (Finding inner products)
  ⟨p, q⟩ = a0b0 + a1b1 + ⋯ + anbn is an inner product.
Let p(x) = 1 − 2x², q(x) = 4 − 2x + x² be polynomials in P2(x).
(a) ⟨p, q⟩ = ?  (b) ||q|| = ?  (c) d(p, q) = ?
Sol:
(a) ⟨p, q⟩ = (1)(4) + (0)(−2) + (−2)(1) = 2
(b) ||q|| = √⟨q, q⟩ = √(4² + (−2)² + 1²) = √21
(c) p − q = −3 + 2x − 3x²
  d(p, q) = ||p − q|| = √⟨p − q, p − q⟩
          = √((−3)² + 2² + (−3)²) = √22
Elementary Linear Algebra: Section 5.2, p.240
36/91
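The polynomial inner product of Ex 6 acts on coefficient lists, so it can be sketched directly (the helper name `pdot` is ours; coefficients are stored lowest degree first):

```python
def pdot(p, q):
    # <p, q> = a0*b0 + a1*b1 + ... on coefficient lists (lowest degree first)
    return sum(a * b for a, b in zip(p, q))

p = [1, 0, -2]    # p(x) = 1 - 2x^2
q = [4, -2, 1]    # q(x) = 4 - 2x + x^2
print(pdot(p, q))              # 2
print(pdot(q, q) ** 0.5)       # ≈ 4.583 (= sqrt(21))
diff = [a - b for a, b in zip(p, q)]
print(pdot(diff, diff) ** 0.5) # ≈ 4.690 (= sqrt(22))
```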

Properties of norm:
(1) ||u|| ≥ 0
(2) ||u|| = 0 if and only if u = 0
(3) ||cu|| = |c| ||u||

Properties of distance:
(1) d(u, v) ≥ 0
(2) d(u, v) = 0 if and only if u = v
(3) d(u, v) = d(v, u)
Elementary Linear Algebra: Section 5.2, p.241
37/91

Thm 5.8:
Let u and v be vectors in an inner product space V.
(1) Cauchy–Schwarz inequality:
  |⟨u, v⟩| ≤ ||u|| ||v||   (Theorem 5.4)
(2) Triangle inequality:
  ||u + v|| ≤ ||u|| + ||v||   (Theorem 5.5)
(3) Pythagorean theorem:
u and v are orthogonal if and only if
  ||u + v||² = ||u||² + ||v||²   (Theorem 5.6)
Elementary Linear Algebra: Section 5.2, p.242
38/91

Orthogonal projections in inner product spaces:
Let u and v be two vectors in an inner product space V,
such that v ≠ 0. Then the orthogonal projection of u
onto v is given by
  proj_v u = (⟨u, v⟩ / ⟨v, v⟩) v

Note:
If v is a unit vector, then ⟨v, v⟩ = ||v||² = 1.
The formula for the orthogonal projection of u onto v
takes the following simpler form:
  proj_v u = ⟨u, v⟩ v
Elementary Linear Algebra: Section 5.2, p.243
39/91

Ex 10: (Finding an orthogonal projection in R³)
Use the Euclidean inner product in R³ to find the
orthogonal projection of u = (6, 2, 4) onto v = (1, 2, 0).
Sol:
  ⟨u, v⟩ = (6)(1) + (2)(2) + (4)(0) = 10
  ⟨v, v⟩ = 1² + 2² + 0² = 5
⇒ proj_v u = (⟨u, v⟩/⟨v, v⟩) v = (10/5)(1, 2, 0) = (2, 4, 0)

Note:
  u − proj_v u = (6, 2, 4) − (2, 4, 0) = (4, −2, 4) is orthogonal to v = (1, 2, 0).
Elementary Linear Algebra: Section 5.2, p.243
40/91
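The projection formula of Ex 10 is a one-liner per component; a plain-Python sketch (the function name `proj` is ours):

```python
def proj(u, v):
    # proj_v u = (<u, v> / <v, v>) v, for v != 0
    dot_uv = sum(a * b for a, b in zip(u, v))
    dot_vv = sum(b * b for b in v)
    return [dot_uv / dot_vv * b for b in v]

u, v = (6, 2, 4), (1, 2, 0)
p = proj(u, v)
print(p)   # [2.0, 4.0, 0.0]
r = [a - b for a, b in zip(u, p)]                 # residual u - proj_v u
print(sum(a * b for a, b in zip(r, v)))           # 0.0: residual is orthogonal to v
```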

Thm 5.9: (Orthogonal projection and distance)
Let u and v be two vectors in an inner product space V,
such that v ≠ 0. Then
  d(u, proj_v u) < d(u, cv),  c ≠ ⟨u, v⟩/⟨v, v⟩
Elementary Linear Algebra: Section 5.2, p.244
41/91
Key Learning in Section 5.2
▪ Determine whether a function defines an inner product, and find
the inner product of two vectors in Rn, Mm,n, Pn and C[a, b].
▪ Find an orthogonal projection of a vector onto another vector in
an inner product space.
42/91
Keywords in Section 5.2












▪ inner product
▪ inner product space
▪ norm
▪ distance
▪ angle
▪ orthogonal
▪ unit vector
▪ normalizing
▪ Cauchy–Schwarz inequality
▪ triangle inequality
▪ Pythagorean theorem
▪ orthogonal projection
43/91
5.3 Orthonormal Bases: Gram-Schmidt Process

Orthogonal:
A set S of vectors in an inner product space V is called an
orthogonal set if every pair of vectors in the set is orthogonal.
  S = {v1, v2, …, vn} ⊆ V,  ⟨vi, vj⟩ = 0 for i ≠ j

Orthonormal:
An orthogonal set in which each vector is a unit vector is
called orthonormal.

Note:
  S = {v1, v2, …, vn} ⊆ V,  ⟨vi, vj⟩ = 1 if i = j, 0 if i ≠ j

If S is a basis, then it is called an orthogonal basis or an
orthonormal basis.
Elementary Linear Algebra: Section 5.3, p.248
44/91

Ex 1: (A nonstandard orthonormal basis for R³)
Show that the following set is an orthonormal basis.
  S = {v1, v2, v3}
    = {(1/√2, 1/√2, 0), (−√2/6, √2/6, 2√2/3), (2/3, −2/3, 1/3)}
Sol:
Show that the three vectors are mutually orthogonal.
  v1 · v2 = −1/6 + 1/6 + 0 = 0
  v1 · v3 = 2/(3√2) − 2/(3√2) + 0 = 0
  v2 · v3 = −√2/9 − √2/9 + 2√2/9 = 0
Show that each vector is of length 1.
  ||v1|| = √(v1 · v1) = √(1/2 + 1/2 + 0) = 1
  ||v2|| = √(v2 · v2) = √(2/36 + 2/36 + 8/9) = 1
  ||v3|| = √(v3 · v3) = √(4/9 + 4/9 + 1/9) = 1
Thus S is an orthonormal set.
Elementary Linear Algebra: Section 5.3, p.249
46/91

Ex 2: (An orthonormal basis for P2(x))
In P2(x), with the inner product
  ⟨p, q⟩ = a0b0 + a1b1 + a2b2,
the standard basis B = {1, x, x²} is orthonormal.
Sol:
  v1 = 1 + 0x + 0x²,
  v2 = 0 + x + 0x²,
  v3 = 0 + 0x + x²
Then
  ⟨v1, v2⟩ = (1)(0) + (0)(1) + (0)(0) = 0,
  ⟨v1, v3⟩ = (1)(0) + (0)(0) + (0)(1) = 0,
  ⟨v2, v3⟩ = (0)(0) + (1)(0) + (0)(1) = 0
and
  ||v1|| = √⟨v1, v1⟩ = √((1)(1) + (0)(0) + (0)(0)) = 1,
  ||v2|| = √⟨v2, v2⟩ = √((0)(0) + (1)(1) + (0)(0)) = 1,
  ||v3|| = √⟨v3, v3⟩ = √((0)(0) + (0)(0) + (1)(1)) = 1
Elementary Linear Algebra: Section 5.3, p.249
48/91

Thm 5.10: (Orthogonal sets are linearly independent)
If S = {v1, v2, …, vn} is an orthogonal set of nonzero vectors
in an inner product space V, then S is linearly independent.
Pf:
S is an orthogonal set of nonzero vectors,
i.e., ⟨vi, vj⟩ = 0 for i ≠ j, and ⟨vi, vi⟩ ≠ 0.
Let
  c1v1 + c2v2 + ⋯ + cnvn = 0
Then for each i,
  ⟨c1v1 + c2v2 + ⋯ + cnvn, vi⟩ = ⟨0, vi⟩ = 0
  c1⟨v1, vi⟩ + c2⟨v2, vi⟩ + ⋯ + ci⟨vi, vi⟩ + ⋯ + cn⟨vn, vi⟩ = ci⟨vi, vi⟩ = 0
Since ⟨vi, vi⟩ ≠ 0, we get ci = 0 for all i.
⇒ S is linearly independent.
Elementary Linear Algebra: Section 5.3, p.251
49/91

Corollary to Thm 5.10:
If V is an inner product space of dimension n, then any
orthogonal set of n nonzero vectors is a basis for V.
Elementary Linear Algebra: Section 5.3, p.251
50/91

Ex 4: (Using orthogonality to test for a basis)
Show that the following set is a basis for R⁴:
  S = {v1, v2, v3, v4}
    = {(2, 3, 2, −2), (1, 0, 0, 1), (−1, 0, 2, 1), (−1, 2, −1, 1)}
Sol:
v1, v2, v3, v4 are nonzero vectors, and
  v1 · v2 = 2 + 0 + 0 − 2 = 0    v2 · v3 = −1 + 0 + 0 + 1 = 0
  v1 · v3 = −2 + 0 + 4 − 2 = 0   v2 · v4 = −1 + 0 + 0 + 1 = 0
  v1 · v4 = −2 + 6 − 2 − 2 = 0   v3 · v4 = 1 + 0 − 2 + 1 = 0
⇒ S is orthogonal.
⇒ S is a basis for R⁴ (by the Corollary to Theorem 5.10).
Elementary Linear Algebra: Section 5.3, p.251
51/91

Thm 5.11: (Coordinates relative to an orthonormal basis)
If B = {v1, v2, …, vn} is an orthonormal basis for an inner
product space V, then the coordinate representation of a vector
w with respect to B is
  w = ⟨w, v1⟩v1 + ⟨w, v2⟩v2 + ⋯ + ⟨w, vn⟩vn
Pf:
B = {v1, v2, …, vn} is a basis for V, so for w ∈ V,
  w = k1v1 + k2v2 + ⋯ + knvn (unique representation)
Since B is orthonormal, ⟨vi, vj⟩ = 1 if i = j and 0 if i ≠ j, so
  ⟨w, vi⟩ = ⟨(k1v1 + k2v2 + ⋯ + knvn), vi⟩
          = k1⟨v1, vi⟩ + ⋯ + ki⟨vi, vi⟩ + ⋯ + kn⟨vn, vi⟩
          = ki  for each i
⇒ w = ⟨w, v1⟩v1 + ⟨w, v2⟩v2 + ⋯ + ⟨w, vn⟩vn

Note:
If B = {v1, v2, …, vn} is an orthonormal basis for V and w ∈ V,
then the corresponding coordinate matrix of w relative to B is
  [w]_B = [⟨w, v1⟩ ⟨w, v2⟩ ⋯ ⟨w, vn⟩]ᵀ
Elementary Linear Algebra: Section 5.3, p.252
53/91

Ex 5: (Representing vectors relative to an orthonormal basis)
Find the coordinates of w = (5, −5, 2) relative to the following
orthonormal basis for R³:
  B = {(3/5, 4/5, 0), (−4/5, 3/5, 0), (0, 0, 1)}
Sol:
  ⟨w, v1⟩ = w · v1 = (5, −5, 2) · (3/5, 4/5, 0) = −1
  ⟨w, v2⟩ = w · v2 = (5, −5, 2) · (−4/5, 3/5, 0) = −7
  ⟨w, v3⟩ = w · v3 = (5, −5, 2) · (0, 0, 1) = 2
⇒ [w]_B = [−1 −7 2]ᵀ
Elementary Linear Algebra: Section 5.3, p.252
54/91
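Theorem 5.11 reduces coordinate-finding to n inner products, as in Ex 5. A plain-Python sketch (the function name `coords` is ours):

```python
def coords(w, basis):
    # [w]_B = (<w, v1>, ..., <w, vn>) for an orthonormal basis B
    return [sum(a * b for a, b in zip(w, v)) for v in basis]

B = [(0.6, 0.8, 0.0), (-0.8, 0.6, 0.0), (0.0, 0.0, 1.0)]  # basis from Ex 5
w = (5, -5, 2)
cw = coords(w, B)
print(cw)   # ≈ [-1.0, -7.0, 2.0]
```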

Gram-Schmidt orthonormalization process:
B = {u1, u2, …, un} is a basis for an inner product space V.
Let
  v1 = u1,  W1 = span({v1})
  v2 = u2 − proj_W1 u2 = u2 − (⟨u2, v1⟩/⟨v1, v1⟩)v1,  W2 = span({v1, v2})
  v3 = u3 − proj_W2 u3 = u3 − (⟨u3, v1⟩/⟨v1, v1⟩)v1 − (⟨u3, v2⟩/⟨v2, v2⟩)v2
  ⋮
  vn = un − proj_W(n−1) un = un − Σ_{i=1}^{n−1} (⟨un, vi⟩/⟨vi, vi⟩)vi
⇒ B' = {v1, v2, …, vn} is an orthogonal basis.
⇒ B'' = {v1/||v1||, v2/||v2||, …, vn/||vn||} is an orthonormal basis.
Elementary Linear Algebra: Section 5.3, p.253
55/91

Ex 7: (Applying the Gram-Schmidt orthonormalization process)
Apply the Gram-Schmidt process to the following basis:
  B = {u1, u2, u3} = {(1, 1, 0), (1, 2, 0), (0, 1, 2)}
Sol:
  v1 = u1 = (1, 1, 0)
  v2 = u2 − (u2 · v1 / v1 · v1)v1 = (1, 2, 0) − (3/2)(1, 1, 0) = (−1/2, 1/2, 0)
  v3 = u3 − (u3 · v1 / v1 · v1)v1 − (u3 · v2 / v2 · v2)v2
     = (0, 1, 2) − (1/2)(1, 1, 0) − ((1/2)/(1/2))(−1/2, 1/2, 0) = (0, 0, 2)
Orthogonal basis:
  B' = {v1, v2, v3} = {(1, 1, 0), (−1/2, 1/2, 0), (0, 0, 2)}
Orthonormal basis:
  B'' = {v1/||v1||, v2/||v2||, v3/||v3||}
      = {(1/√2, 1/√2, 0), (−1/√2, 1/√2, 0), (0, 0, 1)}
Elementary Linear Algebra: Section 5.3, p.254
57/91
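The process applied in Ex 7 can be sketched for any list of basis vectors; a minimal classical Gram-Schmidt in plain Python (the function name `gram_schmidt` is ours):

```python
def gram_schmidt(basis):
    # Build an orthogonal basis v1..vn from u1..un, then normalize each vector.
    ortho = []
    for u in basis:
        v = list(u)
        for w in ortho:
            # subtract the projection of u onto each earlier orthogonal vector
            coef = sum(a * b for a, b in zip(u, w)) / sum(b * b for b in w)
            v = [a - coef * b for a, b in zip(v, w)]
        ortho.append(v)
    # normalize to obtain an orthonormal basis
    return [[a / sum(x * x for x in v) ** 0.5 for a in v] for v in ortho]

B = [(1, 1, 0), (1, 2, 0), (0, 1, 2)]   # the basis from Ex 7
Q = gram_schmidt(B)
print(Q)   # ≈ [[0.707, 0.707, 0.0], [-0.707, 0.707, 0.0], [0.0, 0.0, 1.0]]
```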

Ex 10: (Alternative form of the Gram-Schmidt orthonormalization process)
Find an orthonormal basis for the solution space of the
homogeneous system of linear equations.
   x1 + x2        + 7x4 = 0
  2x1 + x2 + 2x3 + 6x4 = 0
Sol:
  [1 1 0 7 | 0]   G.-J. E.   [1 0  2 −1 | 0]
  [2 1 2 6 | 0]   ------->   [0 1 −2  8 | 0]
  (x1, x2, x3, x4) = (−2s + t, 2s − 8t, s, t) = s(−2, 2, 1, 0) + t(1, −8, 0, 1)
Thus one basis for the solution space is
  B = {u1, u2} = {(−2, 2, 1, 0), (1, −8, 0, 1)}
Apply the Gram-Schmidt process:
  v1 = u1 = (−2, 2, 1, 0)
  v2 = u2 − (⟨u2, v1⟩/⟨v1, v1⟩)v1 = (1, −8, 0, 1) − (−18/9)(−2, 2, 1, 0)
     = (−3, −4, 2, 1)
⇒ B' = {(−2, 2, 1, 0), (−3, −4, 2, 1)}   (orthogonal basis)
⇒ B'' = {(−2/3, 2/3, 1/3, 0), (−3/√30, −4/√30, 2/√30, 1/√30)}
  (orthonormal basis)
Elementary Linear Algebra: Section 5.3, p.256
59/91
Key Learning in Section 5.3
▪ Show that a set of vectors is orthogonal and forms an
orthonormal basis, and represent a vector relative to an
orthonormal basis.
▪ Apply the Gram-Schmidt orthonormalization process.
60/91
Keywords in Section 5.3

▪ orthogonal set
▪ orthonormal set
▪ orthogonal basis
▪ orthonormal basis
▪ linearly independent
▪ Gram-Schmidt process
61/91
5.4 Mathematical Models and Least Squares Analysis

Orthogonal subspaces:
The subspaces W1 and W2 of an inner product space V are orthogonal
if ⟨v1, v2⟩ = 0 for all v1 in W1 and all v2 in W2.

Ex 2: (Orthogonal subspaces)
The subspaces
  W1 = span({(1, 0, 1), (1, 1, 0)}) and W2 = span({(−1, 1, 1)})
are orthogonal because ⟨v1, v2⟩ = 0 for any vector v1 in W1 and any vector v2 in W2.
Elementary Linear Algebra: Section 5.4, p.260
62/91

Orthogonal complement of W:
Let W be a subspace of an inner product space V.
(a) A vector u in V is said to orthogonal to W,
if u is orthogonal to every vector in W.
(b) The set of all vectors in V that are orthogonal to every
vector in W is called the orthogonal complement of W.
W   {v V |  v , w  0 ,  w W }
W  (read “ W perp”)
 Notes:
(1)
0

V
Elementary Linear Algebra: Section 5.4, p.260
(2) V   0
63/91

Notes:
If W is a subspace of V, then
(1) W⊥ is a subspace of V
(2) W ∩ W⊥ = {0}
(3) (W⊥)⊥ = W

Ex:
If V = R², W = x-axis,
then (1) W⊥ = y-axis is a subspace of R²
     (2) W ∩ W⊥ = {(0, 0)}
     (3) (W⊥)⊥ = W
Elementary Linear Algebra: Section 5.4, Addition
64/91

Direct sum:
Let W1 and W2 be two subspaces of Rⁿ. If each vector x ∈ Rⁿ
can be uniquely written as a sum of a vector w1 from W1
and a vector w2 from W2, x = w1 + w2, then Rⁿ is the
direct sum of W1 and W2, and you can write Rⁿ = W1 ⊕ W2.

Thm 5.13: (Properties of orthogonal subspaces)
Let W be a subspace of Rⁿ. Then the following properties
are true.
(1) dim(W) + dim(W⊥) = n
(2) Rⁿ = W ⊕ W⊥
(3) (W⊥)⊥ = W
Elementary Linear Algebra: Section 5.4, pp.261-262
65/91

Thm 5.14: (Projection onto a subspace)
If {u1, u2, …, ut} is an orthonormal basis for the
subspace W of V, and v ∈ V, then
  proj_W v = ⟨v, u1⟩u1 + ⟨v, u2⟩u2 + ⋯ + ⟨v, ut⟩ut
Pf:
proj_W v ∈ W and {u1, u2, …, ut} is an orthonormal basis for W, so
  proj_W v = ⟨proj_W v, u1⟩u1 + ⋯ + ⟨proj_W v, ut⟩ut
           = ⟨v − proj_W⊥ v, u1⟩u1 + ⋯ + ⟨v − proj_W⊥ v, ut⟩ut
             (proj_W v = v − proj_W⊥ v)
           = ⟨v, u1⟩u1 + ⋯ + ⟨v, ut⟩ut   (⟨proj_W⊥ v, ui⟩ = 0, ∀ i)
Elementary Linear Algebra: Section 5.4, p.262

Ex 5: (Projection onto a subspace)
  w1 = (0, 3, 1), w2 = (2, 0, 0), v = (1, 1, 3)
Find the projection of the vector v onto the subspace W = span({w1, w2}).
Sol:
{w1, w2} is an orthogonal basis for W, so
  {u1, u2} = {w1/||w1||, w2/||w2||} = {(0, 3/√10, 1/√10), (1, 0, 0)}
is an orthonormal basis for W.
  proj_W v = ⟨v, u1⟩u1 + ⟨v, u2⟩u2
           = (6/√10)(0, 3/√10, 1/√10) + (1)(1, 0, 0)
           = (1, 9/5, 3/5)
Elementary Linear Algebra: Section 5.4, p.263
67/91
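Theorem 5.14 and Ex 5 can be replayed in plain Python (the function name `proj_subspace` is ours; the basis must already be orthonormal):

```python
import math

def proj_subspace(v, onb):
    # proj_W v = <v, u1> u1 + ... + <v, ut> ut for an orthonormal basis {ui} of W
    p = [0.0] * len(v)
    for u in onb:
        c = sum(a * b for a, b in zip(v, u))
        p = [x + c * b for x, b in zip(p, u)]
    return p

s = math.sqrt(10)
onb = [(0.0, 3 / s, 1 / s), (1.0, 0.0, 0.0)]   # orthonormal basis for W from Ex 5
p = proj_subspace((1, 1, 3), onb)
print(p)   # ≈ [1.0, 1.8, 0.6], i.e. (1, 9/5, 3/5)
```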

Find proj_W v by another method:
Let A = [w1 w2] and b = v. From Ax = b,
  x̂ = (AᵀA)⁻¹Aᵀb
⇒ proj_CS(A) b = Ax̂ = A(AᵀA)⁻¹Aᵀb
Elementary Linear Algebra: Section 5.4, p.263
68/91

Thm 5.15: (Orthogonal projection and distance)
Let W be a subspace of an inner product space V, and v ∈ V.
Then for all w ∈ W, w ≠ proj_W v,
  ||v − proj_W v|| < ||v − w||
i.e., ||v − proj_W v|| = min_{w ∈ W} ||v − w||
(proj_W v is the best approximation to v from W)
Elementary Linear Algebra: Section 5.4, p.263
69/91

Pf:
  v − w = (v − proj_W v) + (proj_W v − w)
  (v − proj_W v) ⊥ (proj_W v − w)
By the Pythagorean theorem,
  ||v − w||² = ||v − proj_W v||² + ||proj_W v − w||²
Since w ≠ proj_W v, we have ||proj_W v − w||² > 0, so
  ||v − w||² > ||v − proj_W v||²
⇒ ||v − proj_W v|| < ||v − w||
Elementary Linear Algebra: Section 5.4, p.263
70/91

Notes:
(1) Among all the scalar multiples of a vector u, the
orthogonal projection of v onto u is the one that is
closest to v. (p.244 Thm 5.9)
(2) Among all the vectors in the subspace W, the vector
projW v is the closest vector to v.
Elementary Linear Algebra: Section 5.4, p.263
71/91

Thm 5.16: (Fundamental subspaces of a matrix)
If A is an m×n matrix, then
(1) (CS(A))⊥ = NS(Aᵀ)     (NS(Aᵀ))⊥ = CS(A)
(2) (CS(Aᵀ))⊥ = NS(A)     (NS(A))⊥ = CS(Aᵀ)
(3) CS(A) ⊕ NS(Aᵀ) = Rᵐ   CS(A) ⊕ (CS(A))⊥ = Rᵐ
(4) CS(Aᵀ) ⊕ NS(A) = Rⁿ   CS(Aᵀ) ⊕ (CS(Aᵀ))⊥ = Rⁿ
Elementary Linear Algebra: Section 5.4, p.264
72/91

Ex 6: (Fundamental subspaces)
Find the four fundamental subspaces of the matrix
  A = [1 2 0]
      [0 0 1]
      [0 0 0]
      [0 0 0]   (in reduced row-echelon form)
Sol:
  CS(A) = span({(1, 0, 0, 0), (0, 1, 0, 0)}) is a subspace of R⁴
  CS(Aᵀ) = RS(A) = span({(1, 2, 0), (0, 0, 1)}) is a subspace of R³
  NS(A) = span({(−2, 1, 0)}) is a subspace of R³
Since
  Aᵀ = [1 0 0 0]         [1 0 0 0]
       [2 0 0 0]  ~  R = [0 1 0 0]
       [0 1 0 0]         [0 0 0 0]
we have
  NS(Aᵀ) = span({(0, 0, 1, 0), (0, 0, 0, 1)}) is a subspace of R⁴

Check:
  (CS(A))⊥ = NS(Aᵀ)
  (CS(Aᵀ))⊥ = NS(A)
  CS(A) ⊕ NS(Aᵀ) = R⁴
  CS(Aᵀ) ⊕ NS(A) = R³
Elementary Linear Algebra: Section 5.4, p.264
74/91

Ex 3 & Ex 4:
Let W be the subspace of R⁴ spanned by w1 = (1, 2, 1, 0) and
w2 = (0, 0, 0, 1), i.e., W = span({w1, w2}).
(a) Find a basis for W.
(b) Find a basis for the orthogonal complement of W.
Sol:
  A = [w1 w2] = [1 0]       [1 0]
                [2 0]  ~ R = [0 1]
                [1 0]       [0 0]
                [0 1]       [0 0]   (reduced row-echelon form)
(a) W = CS(A), so {(1, 2, 1, 0), (0, 0, 0, 1)} is a basis for W.
(b) W⊥ = (CS(A))⊥ = NS(Aᵀ)
  Aᵀ = [1 2 1 0]
       [0 0 0 1]
  Aᵀx = 0 ⇒ (x1, x2, x3, x4) = (−2s − t, s, t, 0)
          = s(−2, 1, 0, 0) + t(−1, 0, 1, 0)
⇒ {(−2, 1, 0, 0), (−1, 0, 1, 0)} is a basis for W⊥.

Notes:
(1) dim(W) + dim(W⊥) = dim(R⁴)
(2) W ⊕ W⊥ = R⁴
Elementary Linear Algebra: Section 5.4, p.261
76/91

Least squares problem:
  Ax = b
  (m×n)(n×1) = (m×1)   (a system of m linear equations in n unknowns)
(1) When the system is consistent, we can use Gaussian
elimination with back-substitution to solve for x.
(2) When the system is inconsistent, we look for the "best possible"
solution of the system, that is, the value of x for which the
difference between Ax and b is smallest.
Elementary Linear Algebra: Section 5.4, p.265
77/91

Least squares solution:
Given a system Ax = b of m linear equations in n unknowns,
the least squares problem is to find a vector x in Rⁿ that
minimizes ||Ax − b|| with respect to the Euclidean inner
product on Rⁿ. Such a vector is called a least squares
solution of Ax = b.

Notes:
The least squares problem is to find a vector x̂ in Rⁿ such that
Ax̂ = proj_CS(A) b in the column space of A (i.e., Ax̂ ∈ CS(A))
is as close as possible to b. That is,
  ||b − proj_CS(A) b|| = ||b − Ax̂|| = min_{x ∈ Rⁿ} ||b − Ax||

For A ∈ M_{m×n} and x ∈ Rⁿ,
  Ax ∈ CS(A)   (CS(A) is a subspace of Rᵐ)
  b ∉ CS(A)    (Ax = b is an inconsistent system)
Let Ax̂ = proj_CS(A) b. Then
  (b − Ax̂) ⊥ CS(A)
  ⇒ b − Ax̂ ∈ (CS(A))⊥ = NS(Aᵀ)
  ⇒ Aᵀ(b − Ax̂) = 0
i.e.,
  AᵀAx̂ = Aᵀb   (the normal system associated with Ax = b)
Elementary Linear Algebra: Section 5.4, p.265
79/91

Note: (Ax = b is an inconsistent system)
The problem of finding the least squares solution of Ax = b
is equivalent to the problem of finding an exact solution of the
associated normal system AᵀAx̂ = Aᵀb.
Elementary Linear Algebra: Section 5.4, p.265
80/91

Ex 7: (Solving the normal equations)
Find the least squares solution of the following system Ax = b:
  [1 1] [c0]   [0]
  [1 2] [c1] = [1]
  [1 3]        [3]
  (this system is inconsistent)
and find the orthogonal projection of b onto the column space of A.
Elementary Linear Algebra: Section 5.4, p.265
81/91

Sol:
  AᵀA = [1 1 1] [1 1]   [3  6]
        [1 2 3] [1 2] = [6 14]
                [1 3]

  Aᵀb = [1 1 1] [0]   [ 4]
        [1 2 3] [1] = [11]
                [3]

The associated normal system AᵀAx̂ = Aᵀb is
  [3  6] [c0]   [ 4]
  [6 14] [c1] = [11]

The least squares solution of Ax = b is
  x̂ = [−5/3 3/2]ᵀ
and the orthogonal projection of b onto the column space of A is
  proj_CS(A) b = Ax̂ = [1 1] [−5/3]   [−1/6]
                      [1 2] [ 3/2] = [ 8/6]
                      [1 3]          [17/6]
Elementary Linear Algebra: Section 5.4, p.265
83/91
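The normal system of Ex 7 is a 2×2 linear system, so it can be solved directly. A plain-Python sketch for matrices with two columns (the function name `lstsq_2col` is ours; the 2×2 system is solved by Cramer's rule, assuming A has full column rank):

```python
def lstsq_2col(A, b):
    # Solve the normal system (A^T A) x = A^T b for an m x 2 matrix A.
    g11 = sum(r[0] * r[0] for r in A)   # entries of A^T A
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    h1 = sum(r[0] * y for r, y in zip(A, b))   # entries of A^T b
    h2 = sum(r[1] * y for r, y in zip(A, b))
    det = g11 * g22 - g12 * g12
    # Cramer's rule on the 2x2 normal system
    return ((g22 * h1 - g12 * h2) / det, (g11 * h2 - g12 * h1) / det)

A = [(1, 1), (1, 2), (1, 3)]
b = (0, 1, 3)
c0, c1 = lstsq_2col(A, b)
print(c0, c1)   # c0 = -5/3 ≈ -1.667, c1 = 3/2, as in Ex 7
```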
Key Learning in Section 5.4
▪ Define the least squares problem.
▪ Find the orthogonal complement of a subspace and the
projection of a vector onto a subspace.
▪ Find the four fundamental subspaces of a matrix.
▪ Solve a least squares problem.
▪ Use least squares for mathematical modeling.
84/91
Keywords in Section 5.4

▪ orthogonal to W
▪ orthogonal complement
▪ direct sum
▪ projection onto a subspace
▪ fundamental subspaces
▪ least squares problem
▪ normal equations
85/91
Key Learning in Section 5.5
▪ Find the cross product of two vectors in R3.
▪ Find the linear or quadratic least squares approximation of a
function.
▪ Find the n-th order Fourier approximation of a function.
86/91
5.1 Linear Algebra Applied

Electric/Magnetic Flux
Electrical engineers can use the dot product to calculate
electric or magnetic flux, which is a measure of the
strength of the electric or magnetic field penetrating a
surface. Consider an arbitrarily shaped surface with an
element of area dA, normal (perpendicular) vector dA,
electric field vector E and magnetic field vector B. The
electric flux Φe can be found using the surface integral Φe
= ∫ E‧dA and the magnetic flux Φm can be found using the
surface integral Φm = ∫ B‧dA. It is interesting to note that
for a given closed surface that surrounds an electrical
charge, the net electric flux is proportional to the charge,
but the net magnetic flux is zero. This is because electric
fields initiate at positive charges and terminate at
negative charges, but magnetic fields form closed loops,
so they do not initiate or terminate at any point. This
means that the magnetic field entering a closed surface
must equal the magnetic field leaving the closed surface.
Elementary Linear Algebra: Section 5.1, p.234
87/91
5.2 Linear Algebra Applied

Work
The concept of work is important to scientists and
engineers for determining the energy needed to
perform various jobs. If a constant force F acts at an
angle  with the line of motion of an object to move
the object from point A to point B (see figure below),
then the work done by the force is given by
  W = (cos θ) ||F|| ||AB|| = F · AB
where AB represents the directed line segment from A
to B. The quantity (cos θ)||F|| is the length of the
orthogonal projection of F onto AB. Orthogonal
projections are discussed on the next page.
Elementary Linear Algebra: Section 5.2, p.242
88/91
5.3 Linear Algebra Applied

Heart Rhythm Analysis
Time-frequency analysis of irregular physiological
signals, such as beat-to-beat cardiac rhythm variations
(also known as heart rate variability or HRV), can be
difficult. This is because the structure of a signal can
include multiple periodic, nonperiodic, and pseudoperiodic components. Researchers have proposed and
validated a simplified HRV analysis method called
orthonormal-basis partitioning and time-frequency
representation (OPTR). This method can detect both
abrupt and slow changes in the HRV signal’s
structure, divide a nonstationary HRV signal into
segments that are “less nonstationary,” and determine
patterns in the HRV. The researchers found that
although it had poor time resolution with signals that
changed gradually, the OPTR method accurately
represented multicomponent and abrupt changes in
both real-life and simulated HRV signals.
Elementary Linear Algebra: Section 5.3, p.249
89/91
5.4 Linear Algebra Applied

Revenues
The least squares problem has a wide variety of real-life applications. To illustrate, in Examples 9 and 10
and Exercises 37, 38, and 39, you will use least
squares analysis to solve problems involving such
diverse subject matter as world population, astronomy,
doctoral degrees awarded, revenues for General
Dynamics Corporation, and galloping speeds of
animals. In each of these applications, you will be
given a set of data and you will be asked to come up
with mathematical model(s) for the data. For instance,
in Exercise 38, you are given the annual revenues
from 2005 through 2010 for General Dynamics
Corporation, and you are asked to find the least
squares regression quadratic and cubic polynomials
for the data. With these models, you are asked to
predict the revenue for the year 2015, and you are
asked to decide which of the models appears to be
more accurate for predicting future revenues.
Elementary Linear Algebra: Section 5.4, p.260
90/91
5.5 Linear Algebra Applied

Torque
In physics, the cross product can be used to measure
torque—the moment M of a force F about a point A as
shown in the figure below. When the point of
application of the force is B, the moment of F about A
is given by
  M = AB × F
where AB represents the vector whose initial point is
A and whose terminal point is B. The magnitude of the
moment M measures the tendency of AB to rotate
counterclockwise about an axis directed along the
vector M.
Elementary Linear Algebra: Section 5.5, p.271
91/91