Vector Spaces and Subspaces
• Vector Space V
• Subspaces S of Vector Space V
– The Subspace Criterion
– Subspaces are Working Sets
– The Kernel Theorem
– Not a Subspace Theorem
• Independence and Dependence in Abstract spaces
– Independence test for two vectors v1, v2. An Illustration.
– Geometry and Independence
– Rank Test and Determinant Test for Fixed Vectors
– Sampling Test and Wronskian Test for Functions
– Independence of Atoms
Vector Space V
It is a data set V plus a toolkit of eight (8) algebraic properties. The data set consists of packages of data items, called vectors, denoted X and Y below.
Closure          The operations X + Y and kX are defined and result in a
                 new vector which is also in the set V.

Addition         X + Y = Y + X                             commutative
                 X + (Y + Z) = (X + Y) + Z                 associative
                 Vector 0 is defined and 0 + X = X         zero
                 Vector −X is defined and X + (−X) = 0     negative

Scalar multiply  k(X + Y) = kX + kY                        distributive I
                 (k1 + k2)X = k1X + k2X                    distributive II
                 k1(k2X) = (k1k2)X                         distributive III
                 1X = X                                    identity
Figure 1. A Vector Space is a data set, operations + and ·, and the 8-property toolkit.
Definition of Subspace
A subspace S of a vector space V is a nonvoid subset of V which under the operations
+ and · of V forms a vector space in its own right.
Subspace Criterion
Let S be a subset of V such that
1. Vector 0 is in S.
2. If X and Y are in S, then X + Y is in S.
3. If X is in S, then cX is in S.
Then S is a subspace of V .
Items 2, 3 can be summarized as: all linear combinations of vectors in S are again in S. In proofs using the criterion, items 2 and 3 may be replaced by

    c1X + c2Y is in S.
Subspaces are Working Sets
We call a subspace S of a vector space V a working set, because the purpose of identifying
a subspace is to shrink the original data set V into a smaller data set S , customized for the
application under study.
A Key Example. Let V be ordinary space R3 and let S be the plane of action of a planar
kinematics experiment. The data set for the experiment is all 3-vectors v in V collected
by a data recorder. Detected and recorded is the 3D position of a particle which has been
constrained to a plane. The plane of action S is computed from the recorded data set in V as a homogeneous equation like 2x + 3y + 1000z = 0, the equation of a plane. After least squares is applied to find the optimal equation for S, then S replaces the larger data set V.
The customized smaller set S is the working set for the kinematics problem.
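This least-squares step is easy to carry out numerically. Below is a minimal sketch, assuming numpy and simulated recorder data; the helper name fit_plane is illustrative, not from the source. The optimal normal vector of a homogeneous plane is the right singular vector belonging to the smallest singular value of the data matrix.

    import numpy as np

    def fit_plane(points):
        # Fit a homogeneous plane a*x + b*y + c*z = 0 to noisy 3D samples.
        # The least-squares normal (a, b, c) is the right singular vector
        # for the smallest singular value: it minimizes ||points @ n||, ||n|| = 1.
        _, _, vt = np.linalg.svd(points)
        return vt[-1]

    # Simulated recorder data: samples near the plane 2x + 3y + 1000z = 0.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(100, 2))
    z = -(2 * xy[:, 0] + 3 * xy[:, 1]) / 1000 + 1e-6 * rng.standard_normal(100)
    data = np.column_stack([xy, z])

    normal = fit_plane(data)
    print(normal / np.abs(normal).max())  # proportional to (2, 3, 1000), up to sign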
The Kernel Theorem
Theorem 1 (Kernel Theorem)
Let V be one of the vector spaces Rn and let A be an m × n matrix. Define a
smaller set S of data items in V by the kernel equation
S = {x : x in V, Ax = 0}.
Then S is a subspace of V .
In particular, operations of addition and scalar multiplication applied to data items
in S give answers back in S , and the 8-property toolkit applies to data items in S .
Proof: Zero is in S because A0 = 0 for any matrix A. To verify the subspace criterion, we verify that z = c1x + c2y, for x and y in S, also belongs to S. The details:

    Az = A(c1x + c2y)
       = A(c1x) + A(c2y)
       = c1Ax + c2Ay
       = c1 0 + c2 0        because Ax = Ay = 0, due to x, y in S
       = 0

Therefore Az = 0, and z is in S. The proof is complete.
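A quick computational check of the Kernel Theorem, assuming sympy is available; the matrix A below is an arbitrary example chosen so the kernel has dimension 2.

    from sympy import Matrix

    # S = {x : A x = 0} is the kernel of A; here rank(A) = 1, nullity = 2.
    A = Matrix([[1, 2, 3],
                [2, 4, 6]])

    x, y = A.nullspace()      # two basis vectors of the subspace S

    # Closure under linear combinations, as the theorem asserts:
    z = 5 * x - 2 * y
    print(A * z)              # Matrix([[0], [0]]): z is again in S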
Not a Subspace Theorem
Theorem 2 (Testing S not a Subspace)
Let V be an abstract vector space and assume S is a subset of V . Then S is not
a subspace of V provided one of the following holds.
(1) The vector 0 is not in S .
(2) Some x in S has −x not in S.
(3) Vector x + y is not in S for some x and y in S .
Proof: The theorem is justified from the Subspace Criterion.
1. The criterion requires 0 is in S .
2. The criterion demands cx is in S for all scalars c and all vectors x in S; with c = −1, vector −x must be in S whenever x is in S.
3. According to the subspace criterion, the sum of two vectors in S must be in S .
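As a numeric illustration of test (1), take S to be the unit circle in R^2, an assumed example: the zero vector fails the membership test, so S is not a subspace. A minimal sketch:

    import numpy as np

    def in_circle(v, tol=1e-9):
        # Membership test for S = {(x, y) : x^2 + y^2 = 1}.
        return abs(np.dot(v, v) - 1.0) < tol

    print(in_circle(np.zeros(2)))   # False: 0 is not in S, so S is not a subspace

    v = np.array([1.0, 0.0])
    print(in_circle(2 * v))         # False: closure under scaling also fails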
Definition of Independence and Dependence
A list of vectors v1, . . . , vk in a vector space V is said to be independent provided every linear combination of these vectors is uniquely represented. Dependent means not independent.
Unique representation
An equation
a1v1 + · · · + ak vk = b1v1 + · · · + bk vk
implies matching coefficients: a1 = b1, . . . , ak = bk .
Independence Test
Form the system in unknowns c1 , . . . , ck
c1v1 + · · · + ck vk = 0.
Solve for the unknowns [how to do this depends on V ]. Then the vectors are independent
if and only if the unique solution is c1 = c2 = · · · = ck = 0.
Independence test for two vectors v1 , v2
In an abstract vector space V , form the equation
c1v1 + c2v2 = 0.
Solve this equation for c1 , c2 .
Then v1, v2 are independent in V if and only if the system has unique
solution c1 = c2 = 0.
Geometry and Independence
• One fixed vector is independent if and only if it is nonzero.
• Two fixed vectors are independent if and only if they form the edges of a parallelogram
of positive area.
• Three fixed vectors are independent if and only if they are the edges of a parallelepiped
of positive volume.
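These geometric criteria are computable: the parallelogram area and parallelepiped volume equal absolute determinants. A minimal numpy sketch with assumed example vectors:

    import numpy as np

    # Two plane vectors are independent iff their parallelogram has positive area.
    v1, v2 = np.array([-1.0, 1.0]), np.array([2.0, 1.0])
    area = abs(np.linalg.det(np.column_stack([v1, v2])))
    print(area)      # 3.0 > 0: v1, v2 are independent

    # Three space vectors are independent iff their parallelepiped has positive volume.
    u1, u2, u3 = np.eye(3)
    volume = abs(np.linalg.det(np.column_stack([u1, u2, u3])))
    print(volume)    # 1.0 > 0: u1, u2, u3 are independent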
In an abstract vector space V , two vectors [two data packages] are independent if and only
if one is not a scalar multiple of the other.
Illustration
Vectors v1 = cos x and v2 = sin x are two data packages [graphs] in the vector space
V of continuous functions. They are independent because one graph is not a scalar multiple
of the other graph.
An Illustration of the Independence Test
Two column vectors are tested for independence by forming the system of equations c1v1 + c2v2 = 0, e.g.,

    c1 [ −1 ]  +  c2 [ 2 ]  =  [ 0 ]
       [  1 ]        [ 1 ]     [ 0 ]

This is a homogeneous system Ac = 0 with

    A = [ −1  2 ] ,    c = [ c1 ] .
        [  1  1 ]          [ c2 ]

The system Ac = 0 can be solved for c by frame sequence methods. Because rref(A) = I, then c1 = c2 = 0, which verifies independence.

If the system Ac = 0 is square, then det(A) ≠ 0 applies to test independence.
There is no chance to use determinants when the system is not square, e.g., consider the homogeneous system

    c1 [ −1 ]  +  c2 [ 2 ]  =  [ 0 ]
       [  1 ]        [ 1 ]     [ 0 ]
       [  0 ]        [ 0 ]     [ 0 ]

It has vector-matrix form Ac = 0 with 3 × 2 matrix A, for which det(A) is undefined.
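The same conclusions can be reproduced with sympy's reduced row echelon form, mirroring the frame sequence result rref(A) = I; a sketch using the vectors of the illustration:

    from sympy import Matrix

    # The square case from the illustration.
    A = Matrix([[-1, 2],
                [ 1, 1]])
    print(A.rref()[0])   # identity matrix, so c1 = c2 = 0: independent

    # The non-square 3 x 2 case: det(A) is undefined, but rref still decides.
    B = Matrix([[-1, 2],
                [ 1, 1],
                [ 0, 0]])
    print(B.rref()[0])   # a pivot in every column: again independent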
Rank Test
In the vector space Rn , the independence test leads to a system of n linear homogeneous equations in k variables c1 , . . . , ck . The test requires solving a matrix equation Ac = 0. The signal for independence is zero free variables, or nullity zero,
or equivalently, maximal rank. To justify the various statements, we use the relation
nullity(A) + rank(A) = k, where k is the column dimension of A.
Theorem 3 (Rank-Nullity Test)
Let v1 , . . . , vk be k column vectors in Rn and let A be the augmented matrix
of these vectors. The vectors are independent if rank(A) = k and dependent if rank(A) < k. The conditions are equivalent to nullity(A) = 0 and
nullity(A) > 0, respectively.
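A numerical form of the rank test, assuming numpy; the column vectors below are assumed examples:

    import numpy as np

    def independent(vectors):
        # Rank test: k column vectors are independent iff rank(A) = k.
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == A.shape[1]

    v1, v2 = np.array([-1, 1, 0]), np.array([2, 1, 0])
    print(independent([v1, v2]))            # True: rank 2 = k
    print(independent([v1, v2, v1 + v2]))   # False: rank 2 < k = 3, nullity > 0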
Determinant Test
In the unusual case when the system arising in the independence test can be expressed as
Ac = 0 and A is square, then det(A) = 0 detects dependence, and det(A) ≠ 0 detects independence. The reasoning is based upon the adjugate formula A^(-1) = adj(A)/det(A), valid exactly when det(A) ≠ 0.
Theorem 4 (Determinant Test)
Let v1 , . . . , vn be n column vectors in Rn and let A be the augmented matrix
of these vectors. The vectors are independent if det(A) ≠ 0 and dependent if
det(A) = 0.
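In the square case the determinant gives the same verdict; a brief sketch with assumed vectors:

    import numpy as np

    A = np.column_stack([[-1, 1], [2, 1]])   # columns are the vectors of the illustration
    print(np.linalg.det(A))                  # -3.0, nonzero: independent

    B = np.column_stack([[1, 2], [2, 4]])    # second column is twice the first
    print(np.linalg.det(B))                  # 0.0: dependent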
Sampling Test
Let functions f1, . . . , fn be given and let x1, . . . , xn be distinct x-sample values. Define

    A = [ f1(x1)  f2(x1)  · · ·  fn(x1) ]
        [ f1(x2)  f2(x2)  · · ·  fn(x2) ]
        [   ...     ...            ...  ]
        [ f1(xn)  f2(xn)  · · ·  fn(xn) ]

Then det(A) ≠ 0 implies f1, . . . , fn are independent functions.
Proof
We’ll do the proof for n = 2. Details are similar for general n. Assume c1 f1 + c2 f2 = 0. Then for all x, c1 f1(x) + c2 f2(x) = 0. Choose x = x1 and x = x2 in this relation to get Ac = 0, where c has components c1, c2. If det(A) ≠ 0, then A^(-1) exists, and this in turn implies c = A^(-1)Ac = 0. We conclude f1, f2 are independent.
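A sketch of the sampling test in sympy, applied to cos x and sin x with assumed sample points x1 = 0 and x2 = π/2:

    from sympy import Matrix, cos, sin, pi

    f = [cos, sin]
    samples = [0, pi / 2]   # assumed distinct sample values

    A = Matrix([[fj(xi) for fj in f] for xi in samples])
    print(A.det())          # 1, nonzero: cos x, sin x are independent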
Wronskian Test
Let functions f1, . . . , fn be given and let x0 be a given point. Define

    W = [ f1(x0)        f2(x0)        · · ·  fn(x0)        ]
        [ f1'(x0)       f2'(x0)       · · ·  fn'(x0)       ]
        [    ...           ...                  ...        ]
        [ f1^(n−1)(x0)  f2^(n−1)(x0)  · · ·  fn^(n−1)(x0)  ]

Then det(W) ≠ 0 implies f1, . . . , fn are independent functions. The matrix W is called the Wronskian Matrix of f1, . . . , fn and det(W) is called the Wronskian determinant.
Proof
We’ll do the proof for n = 2. Details are similar for general n. Assume c1 f1 + c2 f2 = 0. Then for all x, c1 f1(x) + c2 f2(x) = 0 and c1 f1'(x) + c2 f2'(x) = 0. Choose x = x0 in these relations to get Wc = 0, where c has components c1, c2. If det(W) ≠ 0, then W^(-1) exists, and this in turn implies c = W^(-1)Wc = 0. We conclude f1, f2 are independent.
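sympy computes the Wronskian determinant directly; a sketch for the same pair of functions:

    from sympy import symbols, cos, sin, simplify, wronskian

    x = symbols('x')
    W = wronskian([cos(x), sin(x)], x)   # determinant of the 2 x 2 Wronskian matrix
    print(simplify(W))                   # 1, nonzero at every x0: independent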
Atoms
Definition. A base atom is one of the functions

    1,  e^(ax),  cos bx,  sin bx,  e^(ax) cos bx,  e^(ax) sin bx

where b > 0. Define an atom for integers n ≥ 0 by the formula

    atom = x^n (base atom).

The powers 1, x, x^2, . . . are atoms (multiply base atom 1 by x^n). Multiples of these powers by cos bx, sin bx are also atoms. Finally, multiplying all these atoms by e^(ax) expands and completes the list of atoms.
Alternatively, an atom is a function with coefficient 1 obtained as the real or imaginary part of the complex expression

    x^n e^(ax) (cos bx + i sin bx).
Illustration
We show the powers 1, x, x^2, x^3 are independent atoms by applying the Wronskian Test:

    W = [ 1  x0  x0^2  x0^3  ]
        [ 0  1   2x0   3x0^2 ]
        [ 0  0   2     6x0   ]
        [ 0  0   0     6     ]

Then det(W) = 12 ≠ 0 implies the functions 1, x, x^2, x^3 are linearly independent.
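The 4 × 4 determinant can be verified symbolically, treating x0 as a symbol; a sympy sketch:

    from sympy import symbols, Matrix

    x0 = symbols('x0')
    W = Matrix([[1, x0, x0**2,   x0**3],
                [0,  1, 2*x0, 3*x0**2],
                [0,  0,    2,    6*x0],
                [0,  0,    0,       6]])
    print(W.det())   # 12, for every x0: 1, x, x^2, x^3 are independent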
Subsets of Independent Sets are Independent
Suppose v1 , v2 , v3 make an independent set and consider the subset v1 , v2 . If
c1v1 + c2v2 = 0
then also
c1v1 + c2v2 + c3v3 = 0
where c3 = 0. Independence of the larger set implies c1 = c2 = c3 = 0, in particular,
c1 = c2 = 0, and then v1, v2 are independent.
Theorem 5 (Subsets and Independence)
• A non-void subset of an independent set is also independent.
• Non-void subsets of dependent sets can be independent or dependent.
Atoms and Independence
Theorem 6 (Independence of Atoms)
Any list of distinct atoms is linearly independent.
Unique Representation
The theorem is used to extract equations from relations involving atoms. For instance:
(c1 − c2) cos x + (c1 + c3) sin x + c1 + c2 = 2 cos x + 5
implies
c1 − c2 = 2,
c1 + c3 = 0,
c1 + c2 = 5.
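Because the atoms are independent, matching coefficients converts the functional identity into a linear algebraic system; a sympy sketch solving the system above:

    from sympy import symbols, solve

    c1, c2, c3 = symbols('c1 c2 c3')

    # Coefficients of the atoms cos x, sin x, 1, matched on both sides:
    equations = [c1 - c2 - 2, c1 + c3, c1 + c2 - 5]
    print(solve(equations, [c1, c2, c3]))   # {c1: 7/2, c2: 3/2, c3: -7/2}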
Atoms and Differential Equations
It is known that solutions of linear constant coefficient differential equations of order n
and also systems of linear differential equations with constant coefficients have a general
solution which is a linear combination of atoms.
• The harmonic oscillator y'' + b^2 y = 0 has general solution y(x) = c1 cos bx + c2 sin bx. This is a linear combination of the two atoms cos bx, sin bx.
• The third order equation y''' + y' = 0 has general solution y(x) = c1 cos x + c2 sin x + c3. The solution is a linear combination of the independent atoms cos x, sin x, 1.
• The linear dynamical system x'(t) = y(t), y'(t) = −x(t) has general solution x(t) = c1 cos t + c2 sin t, y(t) = −c1 sin t + c2 cos t, each of which is a linear combination of the independent atoms cos t, sin t.
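The third order example can be checked with sympy's dsolve, whose answer is exactly a linear combination of the atoms 1, cos x, sin x; a sketch assuming sympy:

    from sympy import Eq, Function, dsolve, symbols

    x = symbols('x')
    y = Function('y')

    # y''' + y' = 0: dsolve returns a combination of the atoms 1, sin x, cos x.
    sol = dsolve(Eq(y(x).diff(x, 3) + y(x).diff(x), 0), y(x))
    print(sol)   # Eq(y(x), C1 + C2*sin(x) + C3*cos(x))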