
Linear Algebra
Lecture 13
Linear Independence, Spans,
and Bases
Linear Independence
A set of vectors 𝑺 = {π’—πŸ , π’—πŸ , β‹― , 𝒗𝒏 }
is linearly independent if
𝜢𝟏 π’—πŸ + 𝜢𝟐 π’—πŸ + β‹― + πœΆπ’ 𝒗𝒏 = 𝟎
⟹ 𝜢𝟏 = 𝟎, 𝜢𝟐 = 𝟎, β‹― , πœΆπ’ = 𝟎
Otherwise it is linearly dependent.
S is linearly dependent means that some vector in S can be written as a linear combination of the other vectors:
𝜢𝟏 π’—πŸ + 𝜢𝟐 π’—πŸ + β‹― + πœΆπ’ 𝒗𝒏 = 𝟎
𝜢𝟏 ≠ 𝟎
𝜢𝟏 π’—πŸ = −𝜢𝟐 π’—πŸ − β‹― − πœΆπ’ 𝒗𝒏
π’—πŸ =
−𝜢𝟐
𝒗
𝜢𝟏 𝟐
− β‹―−
πœΆπ’
𝒗
𝜢𝟏 𝒏
For example:
[1]  [1]  [1]
[2], [4], [0]
[3]  [6]  [6]

are linearly independent.
Can we find a,b,c such that
  [1]     [1]     [1]   [0]
a·[2] + b·[4] + c·[0] = [0]
  [3]     [6]     [6]   [0]
Notice
[1 1 1] [a]   [0]
[2 4 0] [b] = [0]
[3 6 6] [c]   [0]

    [1 1 1]
Det([2 4 0]) = 12 ≠ 0
    [3 6 6]

The system has only one solution: a = 0, b = 0, c = 0.
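We can check this example numerically. A minimal sketch in NumPy (np.linalg.det and np.linalg.matrix_rank are standard NumPy calls; everything else is just this example):

    import numpy as np

    # the three vectors as the columns of a matrix
    A = np.array([[1, 1, 1],
                  [2, 4, 0],
                  [3, 6, 6]], dtype=float)

    print(np.linalg.det(A))          # ≈ 12, nonzero => only the trivial solution
    print(np.linalg.matrix_rank(A))  # 3 = number of vectors => linearly independent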
For example:
[1]  [1]  [1]
[2], [4], [0]
[3]  [6]  [0]

are linearly dependent (Keary).
Can we find a,b,c such that
  [1]     [1]     [1]   [0]
a·[2] + b·[4] + c·[0] = [0]
  [3]     [6]     [0]   [0]
Solution: a = −2, b = 1, c = 1. Indeed,

  [1]     [1]     [1]
1·[4] + 1·[0] = 2·[2]
  [6]     [0]     [3]
Yes. This set of vectors is linearly
dependent.
The maximal number of linearly independent vectors in a subspace is called the rank, or dimension, of the subspace.
How do we find the dimensions of the four fundamental subspaces?
Answer: the Gauss–Jordan elimination process
𝟏 πŸ‘ 𝟐 𝟐
𝑨 = [ 𝟎 𝟏 𝟐 πŸ‘]
𝟎 𝟎 𝟏 𝟐
The first 3 columns are independent. Why?
𝟏 πŸ‘
[𝟎 𝟏
𝟎 𝟎
𝟐 𝒂
𝟎
𝟐] [𝒃] = [𝟎]
𝟏 𝒄
𝟎
𝒅𝒆𝒕 = 𝟏 ≠ 𝟎
This means we have only one, the trivial
solution.
Now take all four columns:

[1 3 2 2] [a]   [0]
[0 1 2 3] [b] = [0]
[0 0 1 2] [c]   [0]
          [d]

A nontrivial solution is

[a]   [-1]
[b]   [ 1]
[c] = [-2]
[d]   [ 1]

so the fourth column is linearly dependent on the others.

How about rows?

    [1 3 2 2]
A = [0 1 2 3]
    [0 0 1 2]

If

a1·R1 + a2·R2 + a3·R3 = 0,

write it as a system (the columns below are the rows of A):

[1 0 0]        [0]
[3 1 0] [a1]   [0]
[2 2 1] [a2] = [0]
[2 3 2] [a3]   [0]

Because this system has only one solution,

[a1]   [0]
[a2] = [0]
[a3]   [0]

the rows of A are linearly independent.
What else could happen?
𝟏 πŸ‘ 𝟐 𝟐
𝑨 = [ 𝟎 𝟏 𝟐 πŸ‘]
𝟎 𝟎 𝟎 𝟎
The Rank is 2.
𝟏 πŸ‘ 𝟐 𝟐
𝑨 = [ 𝟎 𝟎 𝟐 πŸ‘]
𝟎 𝟎 𝟎 𝟎
The Rank is 2.
𝟏 πŸ‘ 𝟐 𝟐
𝑨 = [ 𝟎 𝟎 𝟎 𝟎]
𝟎 𝟎 𝟎 𝟎
The Rank is 1.
Notice:
Rank(Col(A))=Rank(Row(A))
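This observation is easy to test numerically; a small NumPy sketch using the matrices above:

    import numpy as np

    A = np.array([[1, 3, 2, 2],
                  [0, 1, 2, 3],
                  [0, 0, 1, 2]], dtype=float)

    print(np.linalg.matrix_rank(A))    # 3  (column rank)
    print(np.linalg.matrix_rank(A.T))  # 3  (row rank, always equal)

    B = np.array([[1, 3, 2, 2],
                  [0, 0, 2, 3],
                  [0, 0, 0, 0]], dtype=float)
    print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(B.T))  # 2 2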
_____________________________________
Let

    [1 3 2 2]
A = [0 1 2 3]
    [0 1 1 1]

Eliminating (R3 − R2) gives

[1 3  2  2]
[0 1  2  3]
[0 0 -1 -2]
The dimension of Null(A) is called the nullity.
Find the Rank and Nullity
Notice:
Rank + Nullity = the number of columns
The Rank and Nullity Theorem
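A sketch of the Rank and Nullity Theorem on the matrix A above, assuming NumPy for the rank and SymPy's nullspace() for an explicit basis of Null(A):

    import numpy as np
    import sympy as sp

    A = np.array([[1, 3, 2, 2],
                  [0, 1, 2, 3],
                  [0, 1, 1, 1]], dtype=float)

    rank = np.linalg.matrix_rank(A)
    nullity = A.shape[1] - rank
    print(rank, nullity)          # 3 1, and 3 + 1 = 4 = number of columns

    # an explicit basis of Null(A): one vector, a multiple of (-1, 1, -2, 1)
    print(sp.Matrix([[1, 3, 2, 2],
                     [0, 1, 2, 3],
                     [0, 1, 1, 1]]).nullspace())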
More on Linear Independence
____________________________________________
(1) A finite set of vectors that contains the zero vector is linearly dependent.
         [1]  [1]  [1]  [1]  [0]  [ 4]
         [2]  [4]  [2]  [0]  [0]  [ 8]
Let S = {[3], [3], [0], [3], [0], [12]}
         [5]  [5]  [5]  [5]  [0]  [20]
         [4]  [4]  [4]  [4]  [0]  [12]
Is S linearly independent or
dependent?
Dependent, because it contains the zero vector.
____________________________________________
(2) A set of two vectors is linearly
independent iff neither of the vectors is a
multiple of the other
         [1]  [ 4]
         [2]  [ 8]
Let S = {[3], [12]}
         [5]  [20]
         [4]  [12]
Is S linearly independent or dependent?
Independent, because the two vectors are not multiples of each other.
__________________________________________
(3) Let S = {v1, v2, ⋯, vn} be a set of vectors in R^m. If n > m, then S is linearly dependent.
     [1]  [1]  [1]  [1]  [1]  [ 4]
     [2]  [4]  [2]  [0]  [0]  [ 8]
S = {[3], [3], [0], [3], [1], [12]}
     [5]  [5]  [5]  [5]  [0]  [20]
     [4]  [4]  [4]  [4]  [0]  [12]
S is dependent.
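For sets like this the determinant test does not apply (the matrix of vectors is not square), but the rank test still works. A NumPy sketch with the six vectors above as columns:

    import numpy as np

    # six vectors in R^5, stacked as columns
    V = np.array([[1, 1, 1, 1, 1,  4],
                  [2, 4, 2, 0, 0,  8],
                  [3, 3, 0, 3, 1, 12],
                  [5, 5, 5, 5, 0, 20],
                  [4, 4, 4, 4, 0, 12]], dtype=float)

    # the rank is at most 5 (we are in R^5), but there are 6 vectors => dependent
    print(np.linalg.matrix_rank(V))   # 5, which is less than 6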
_________________________________________
Let
𝑺 = {π’—πŸ , π’—πŸ , β‹― , 𝒗𝒏 } be a set of vectors
Then
𝒔𝒑𝒂𝒏(𝑺) = {𝒖|𝒖 = π’‚πŸ π’—πŸ + π’‚πŸ π’—πŸ + β‹― + 𝒂𝒏 𝒗𝒏 }
i.e.
Span(S) is the set of all vectors that can be represented as a linear combination of vectors in S.
Span(S) is a vector space.
Example:
Let
𝟏 𝟐
𝑺 = {[𝟐] , [𝟏]}
πŸ‘ 𝟏
            [1]    [2]
span(S) = {a[2] + b[1] : a, b are scalars}
            [3]    [1]
A Basis of the Vector space
If V is a vector space and S = {v1, v2, ⋯, vn} is a set of vectors in V, then S is called a basis if
(1) S is linearly independent
(2) span(S)=V
Example: canonical example,
V=π‘ΉπŸ‘
Canonical Basis
𝟏 𝟎 𝟎
𝑺 = {[𝟎] , [𝟏] , [𝟎]}
𝟎 𝟎 𝟏
Pick u ∈ R^3.

           [ -5]
John's u = [  8]
           [-17]

    [ -5]      [1]     [0]      [0]
u = [  8] = -5·[0] + 8·[1] - 17·[0]
    [-17]      [0]     [0]      [1]
Give me another basis for π‘ΉπŸ‘
𝟏 πŸ‘ 𝟏
𝑻 = {[𝟎] , [𝟏] , [𝟎]}
𝟎 𝟎 𝟏
Q: Is T a basis?
Check independence!
Det = 1, hence they are linearly independent.
−πŸ“
John’s 𝒖 = [ πŸ– ]
−πŸπŸ•
𝟏
πŸ‘
𝟏
−πŸ“
𝒖 = [ πŸ– ] = 𝒂 [𝟎] + 𝒃 [𝟏] + 𝒄 [𝟎] =
𝟎
𝟎
𝟏
−πŸπŸ•
𝟏 πŸ‘ 𝟏 𝒂
−πŸ“
[𝟎 𝟏 𝟎] [𝒃] = [ πŸ– ]
𝟎 𝟎 𝟏 𝒄
−πŸπŸ•
Does this system have a solution?
Det = 1 ≠ 0 means it has a solution no matter what u you pick.
T is a basis for π‘ΉπŸ‘ .
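Finding the coordinates of John's u in the basis T is just solving the 3×3 system above; a quick NumPy sketch:

    import numpy as np

    T = np.array([[1, 3, 1],
                  [0, 1, 0],
                  [0, 0, 1]], dtype=float)   # the basis vectors as columns
    u = np.array([-5, 8, -17], dtype=float)

    coeffs = np.linalg.solve(T, u)
    print(coeffs)        # a = -12, b = 8, c = -17
    print(T @ coeffs)    # reconstructs [-5.  8. -17.]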
Theorem: A vector representation in a basis
is unique.
Proof: Let u have two representations in the same basis.
𝒖 = π’‚πŸ π’—πŸ + π’‚πŸ π’—πŸ + β‹― + 𝒂𝒏 𝒗𝒏
and
𝒖 = π’ƒπŸ π’—πŸ + π’ƒπŸ π’—πŸ + β‹― + 𝒃𝒏 𝒗𝒏
Then
𝒖 − 𝒖 = (π’‚πŸ − π’ƒπŸ )π’—πŸ + (π’‚πŸ − π’ƒπŸ )π’—πŸ + β‹― + (𝒂𝒏 − 𝒃𝒏 )𝒗𝒏
𝟎 = (π’‚πŸ − π’ƒπŸ )π’—πŸ + (π’‚πŸ − π’ƒπŸ )π’—πŸ + β‹― + (𝒂𝒏 − 𝒃𝒏 )𝒗𝒏
Since the vectors vj are linearly independent, this can happen only if
(𝒂𝒋 − 𝒃𝒋 ) = 𝟎 𝒋 = 𝟏, 𝟐, β‹― , 𝒏
Therefore
𝒂𝒋 = 𝒃𝒋
and the representation is unique.
Theorem.
Every vector space has a basis.
All bases of the same vector space have the
same number of elements.
This number is called the DIMENSION of the vector space.
Example and Problems:
Let
𝟏 𝟐
𝑺 = {[𝟐] , [𝟏]}
πŸ‘ 𝟏
            [1]    [2]
span(S) = {a[2] + b[1] : a, b are scalars}
            [3]    [1]
      [  2]
Q: Is [ -8] in span(S)?
      [-12]
It is, exactly when we can find a and b such that

[  2]     [1]     [2]
[ -8] = a·[2] + b·[1]
[-12]     [3]     [1]

i.e.

[1 2]       [  2]
[2 1] [a] = [ -8]
[3 1] [b]   [-12]

Eliminating brings the augmented system to

[1 0 | -4]
[1 1 | -2]
[0 0 |  1]

and the last row reads 0 = 1, so there is no solution.
Chad says: It is not in the span!!!!!
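Span-membership questions like this can also be settled numerically. A sketch of the idea, assuming NumPy: a vector w lies in span(S) exactly when appending w as an extra column does not increase the rank.

    import numpy as np

    S = np.array([[1, 2],
                  [2, 1],
                  [3, 1]], dtype=float)        # the spanning vectors as columns
    w = np.array([2, -8, -12], dtype=float)

    in_span = (np.linalg.matrix_rank(np.column_stack([S, w]))
               == np.linalg.matrix_rank(S))
    print(in_span)                              # False: w is not in span(S)

    # least squares shows the best combination still misses w
    coeffs, residual, *_ = np.linalg.lstsq(S, w, rcond=None)
    print(coeffs, residual)                     # nonzero residual => no exact solution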
_____________________
     [1]  [2]  [2]
     [2]  [1]  [2]
S = {[0], [1], [0]}
     [3]  [1]  [1]
Is this set linearly independent?
[1 2 2]
[2 1 2]
[0 1 0]
[3 1 1]

Column 2 minus 2·(column 1):

[1  0 2]
[2 -3 2]
[0  1 0]
[3 -5 1]

Column 3 minus 2·(column 1), then minus (2/3)·(column 2):

[1  0    0]
[2 -3    0]
[0  1 -2/3]
[3 -5 -5/3]
We performed Gauss–Jordan elimination on the columns and we learned the following:
S is linearly independent.
S is a basis for its span.
Also
     [1]  [ 0]  [   0]
     [2]  [-3]  [   0]
T = {[0], [ 1], [-2/3]}
     [3]  [-5]  [-5/3]
T is also a basis for span(S).
Example 2.
𝟏 𝟐 πŸ”
𝟐 𝟏 πŸ”
𝑺 = {[ ] , [ ] , [ ]}
𝟎 𝟏 𝟐
πŸ‘ 𝟏 πŸ–
𝟏 𝟐 πŸ”
𝟐 𝟏 πŸ”
[
]
𝟎 𝟏 𝟐
πŸ‘ 𝟏 πŸ–
𝟏 𝟎 𝟎
𝟐 −πŸ‘ 𝟎
[
]
𝟎 𝟏 𝟎
πŸ‘ −πŸ“ 𝟎
We performed Gauss–Jordan elimination on the columns and we learned the following:
S is linearly dependent.
S isn't a basis for span(S).
But

     [1]  [ 0]
     [2]  [-3]
T = {[0], [ 1]} is a basis for span(S).
     [3]  [-5]
The dimension of span(S) is 2.
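The column-style Gauss–Jordan above can be reproduced with SymPy; a sketch for Example 2 (columnspace() returns a basis of the column space, i.e. of span(S)):

    import sympy as sp

    # the vectors of Example 2 as the columns of a matrix
    A = sp.Matrix([[1, 2, 6],
                   [2, 1, 6],
                   [0, 1, 2],
                   [3, 1, 8]])

    basis = A.columnspace()   # a basis for span(S)
    print(len(basis))         # 2  -> the dimension of span(S)
    print(basis)              # here: the first two columns, (1,2,0,3) and (2,1,1,1)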
Example 3. Compression.
     [1]  [2]
     [2]  [1]
     [0]  [1]
S = {[3], [1]}
     [1]  [0]
     [0]  [1]
πŸ•
𝟏𝟏
𝟏
πŸπŸ”
πŸ“
[𝟏]
S is linearly independent because the two vectors are not multiples of each other.
S is a basis for its span.
Assume that the signal vectors are coming
from span(S).
For example, a signal vector from this span is

[ 7]
[11]
[ 1]
[16]
[ 5]
[ 1]
πŸ•
𝟏
𝟐
𝟏𝟏
𝟐
𝟏
𝟏
𝟎
𝟏
=πŸ“
+𝟏
πŸπŸ”
πŸ‘
𝟏
πŸ“
𝟏
𝟎
[𝟏]
[𝟏 ]
[𝟎 ]
I need to send the signal to the other side.
πŸ•
𝟏𝟏
𝟏
I could send
𝒐𝒓 I could send what?
πŸπŸ”
πŸ“
[𝟏]
πŸ“
James says : Send instead [ ]. So we saved
𝟏
66%.
Let us test the system.
We are on the other side and we received

[2]
[3]

What was the original signal vector?
πŸ–
𝟏
𝟐
πŸ•
𝟐
𝟏
πŸ‘
𝟎
𝟏
=𝟐
+πŸ‘
πŸ—
πŸ‘
𝟏
𝟐
𝟏
𝟎
[𝟏]
[πŸ‘ ]
[𝟎 ]
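Here is a sketch of the whole compression scheme in NumPy; the basis matrix B and the helper names encode/decode are illustrative, not part of the lecture:

    import numpy as np

    # basis of the signal subspace, vectors as columns: 6 numbers per signal, 2 coefficients
    B = np.array([[1, 2],
                  [2, 1],
                  [0, 1],
                  [3, 1],
                  [1, 0],
                  [0, 1]], dtype=float)

    def encode(signal):
        # coefficients of the signal in the basis B (exact when the signal lies in span(B))
        coeffs, *_ = np.linalg.lstsq(B, signal, rcond=None)
        return coeffs

    def decode(coeffs):
        return B @ coeffs

    signal = np.array([7, 11, 1, 16, 5, 1], dtype=float)
    print(encode(signal))                 # [5. 1.]  -> send 2 numbers instead of 6
    print(decode(np.array([2.0, 3.0])))   # [8. 7. 3. 9. 2. 3.] -> the received [2, 3] decoded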
_______________________________________
Q: Find a different basis for span(S).

                            [  2]
Q: Find a representation of [ -8] in the new basis.
                            [-12]
Big Question:
Transition between two bases.
Let
𝑺 = {π’—πŸ , π’—πŸ }
and
𝑻 = { π’‡πŸ , π’‡πŸ }
be two bases for the same vector space X.
Let 𝒖 ∈ 𝑿.
Then
𝒖 = π’‚π’—πŸ + π’ƒπ’—πŸ
and
𝒖 = π’™π’‡πŸ + π’šπ’‡πŸ
Assume you know the first representation. How
do you find the second representation?
Or
If you know a and b, how can you find x and y (really fast)?
First notice that
π’—πŸ , π’—πŸ ∈ 𝑿
Therefore each of them can be written in terms of the basis T.
π’—πŸ = π’‘πŸ π’‡πŸ + π’‘πŸ π’‡πŸ
π’—πŸ = π’’πŸ π’‡πŸ + π’’πŸ π’‡πŸ
𝒖 = π’‚π’—πŸ + π’ƒπ’—πŸ =
= 𝒂(π’‘πŸ π’‡πŸ + π’‘πŸ π’‡πŸ ) + 𝒃(π’’πŸ π’‡πŸ + π’’πŸ π’‡πŸ ) =
= (π’‚π’‘πŸ + π’ƒπ’’πŸ )π’‡πŸ + (π’‚π’‘πŸ + π’ƒπ’’πŸ )π’‡πŸ =
= π’™π’‡πŸ + π’šπ’‡πŸ
Hence, by uniqueness,
𝒙 = π’‚π’‘πŸ + π’ƒπ’’πŸ
π’š = π’‚π’‘πŸ + π’ƒπ’’πŸ
We can write this in matrix notation:

[p1 q1] [a]   [x]
[p2 q2] [b] = [y]

         [p1 q1]
T_v→f  = [p2 q2]   is the transition matrix.
𝑻𝒗→𝒇 𝒖𝒗 = 𝒖𝒇
𝒖𝒗 = 𝑻𝒗→𝒇 −𝟏 𝒖𝒇
In the case of a 2×2 transition matrix,

T_v→f⁻¹ = 1/(p1·q2 − p2·q1) · [ q2  -q1]
                              [-p2   p1]
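A small numeric sketch of the change of basis, assuming NumPy; the vectors v1, v2, f1, f2 below are made-up examples, not from the lecture:

    import numpy as np

    # two bases of the same 2-dimensional space, vectors as columns
    V = np.column_stack([[1.0, 2.0], [3.0, 4.0]])   # v1, v2
    F = np.column_stack([[1.0, 0.0], [1.0, 1.0]])   # f1, f2

    # column j of T holds the coordinates of v_j in the basis {f1, f2},
    # so T is the transition matrix T_{v->f}
    T = np.linalg.solve(F, V)

    a, b = 2.0, -1.0            # coordinates of some u in the basis {v1, v2}
    xy = T @ np.array([a, b])   # coordinates (x, y) of the same u in {f1, f2}

    print(V @ np.array([a, b])) # u built from the v-basis ...
    print(F @ xy)               # ... equals u built from the f-basis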