LINEAR INDEPENDENCE
Definition. Given a set of vectors {v1 , . . . , vn } ⊂ V , we say that they are linearly independent
iff the only linear combination that equals 0 is the trivial one. That is, if
a1 v1 + · · · + an vn = 0, then a1 = · · · = an = 0.
Example. The set of vectors {(0, 1), (1, 0), (2, 3)} ⊂ R2 is not linearly independent
since we can write 0 as
3(0, 1) + 2(1, 0) − (2, 3) = (0, 0)
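This kind of dependence is easy to check numerically. Here is a minimal sketch, assuming NumPy (the library choice is ours, not part of the notes): three vectors in R2 are dependent exactly when the matrix having them as columns has rank less than 3, which is automatic here since the rank is at most 2.

```python
import numpy as np

# Stack the three vectors as the columns of a 2x3 matrix.
V = np.array([[0, 1, 2],
              [1, 0, 3]])

# Rank 2 < 3 columns, so some nontrivial combination of the columns is 0.
print(np.linalg.matrix_rank(V))  # 2

# Verify the specific relation 3(0,1) + 2(1,0) - (2,3) = (0,0).
coeffs = np.array([3, 2, -1])
print(V @ coeffs)                # [0 0]
```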
However, the set of vectors {1, x, x2 } in the space of polynomials is linearly independent since if for all x ∈ R we have that
ax2 + bx + c = 0
then a = b = c = 0. Don’t let this confuse you: we are not solving for x, we are saying
that the polynomial is the zero polynomial, i.e. its graph is the flat line y = 0.
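One concrete way to see this, again assuming NumPy: if ax2 + bx + c vanishes at three distinct points, say x = 0, 1, 2, the resulting 3 × 3 linear system forces a = b = c = 0, because the evaluation matrix below is invertible.

```python
import numpy as np

# Evaluate ax^2 + bx + c at x = 0, 1, 2: row i is (x_i^2, x_i, 1).
points = np.array([0.0, 1.0, 2.0])
M = np.column_stack([points**2, points, np.ones(3)])

print(np.linalg.det(M))                 # -2.0, nonzero, so M is invertible
print(np.linalg.solve(M, np.zeros(3)))  # [0. 0. 0.]: a = b = c = 0 is forced
```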
Proposition. The set of vectors {v1 , . . . , vn } ⊂ V is linearly independent if and
only if every vector v ∈ span{v1 , . . . , vn } can be written uniquely as a linear combination
v = a1 v1 + · · · + an vn
Proof. We must prove two statements: the “if” statement and the “only if” statement. To prove the “if” statement, notice that if every vector can be written uniquely
as a linear combination then in particular, 0 ∈ V can be written uniquely as
0 = a1 v1 + · · · + an vn
But we know that 0 can be written as
0 = 0v1 + · · · + 0vn
By uniqueness, we know that these two linear combinations are the same and hence
a1 = · · · = an = 0
So, indeed {v1 , . . . , vn } is linearly independent. Conversely, suppose they were
linearly independent to begin with. Then given v ∈ span{v1 , . . . , vn } we can write
v = a1 v1 + · · · + an vn
Suppose we had another decomposition
v = b1 v1 + · · · + bn vn
Subtracting, we get
0 = (a1 − b1 )v1 + · · · + (an − bn )vn
Thus, by linear independence we must have
a1 − b1 = · · · = an − bn = 0
or simply,
a1 = b1 , . . . , an = bn
This means that the two linear combinations have the same coefficients, which
demonstrates the claim.
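Numerically, this uniqueness is exactly what lets us solve for the coefficients of v. A small sketch, assuming NumPy and a hypothetical independent pair in R2 of our own choosing:

```python
import numpy as np

# Columns are v1 = (1, 1) and v2 = (1, -1), an independent pair in R^2.
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])
v = np.array([3.0, 1.0])

# Independent columns make V invertible, so the coefficients are unique.
a = np.linalg.solve(V, v)
print(a)  # [2. 1.], i.e. v = 2*v1 + 1*v2, and no other combination works
```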
Together, the notions of linear independence and span give the important concept
of a basis.
Definition. {v1 , . . . , vn } ⊂ V is a basis for V iff it is linearly independent and
span{v1 , . . . , vn } = V
Example. Let ei ∈ F n be the n-tuple with 0 in every entry except the ith entry
which is 1. E.g. e2 ∈ F 3 is e2 = (0, 1, 0). The set {e1 , . . . , en } is a basis for F n .
We must check
• linear independence: Let a1 e1 + · · · + an en = 0. The left hand side is
simply the n-tuple (a1 , . . . , an ) while the right hand side is (0, . . . , 0). Hence
a1 = · · · = an = 0.
• spanning: Given any n-tuple x ∈ F n , we can write it as a linear combination
x = x1 e1 + · · · + xn en ∈ span{e1 , . . . , en }.
The basis {e1 , . . . , en } ⊂ F n is called the standard basis.
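As a quick sketch of both checks (assuming NumPy, with F = R and n = 3), the standard basis vectors are the rows of the identity matrix, and the spanning computation above is immediate:

```python
import numpy as np

n = 3
E = np.eye(n)                      # row i is the basis vector e_{i+1}
x = np.array([5.0, -2.0, 7.0])

# Spanning: x = x1*e1 + x2*e2 + x3*e3, with the entries of x as coefficients.
recombined = sum(x[i] * E[i] for i in range(n))
print(np.allclose(recombined, x))  # True
```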
Example. Consider the set of monomials {1, x, x2 , x3 , . . .}. This is a basis for the
space of polynomials. Convince yourself that this set is linearly independent and
spans. This may be a little difficult since this set is infinite; for an infinite set,
linear independence means that every finite linear combination which equals 0 is
the trivial one. The space of polynomials is an example of an infinite dimensional
vector space.
We will be concerned mostly with the case of finite dimensional vector spaces.
The following theorem guarantees that these come in a somewhat simple form.
Theorem. If a vector space V has a finite basis {v1 , . . . , vn } then any basis for V
has n elements. We say that V is n-dimensional.
Clearly, F n is n-dimensional since the standard basis {e1 , . . . , en } has n elements.
Definition. Given an n-dimensional vector space V with bases {v1 , . . . , vn } and
{w1 , . . . , wn }, we can write the wi ’s as linear combinations of the vj ’s,
wi = Σj aij vj
The n × n-matrix A with entries aij is called the change of basis matrix.
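As a sketch of how one might compute A in practice (assuming NumPy and two hypothetical bases of R2 of our own choosing): stacking the basis vectors as rows of matrices V and W, the relation wi = Σj aij vj reads W = AV, so A = W V−1.

```python
import numpy as np

V = np.array([[1.0, 1.0],          # v1
              [0.0, 1.0]])         # v2
W = np.array([[1.0, 1.0],          # w1
              [1.0, -1.0]])        # w2

A = W @ np.linalg.inv(V)           # entries aij of the change of basis matrix
print(A)                           # [[ 1.  0.], [ 1. -2.]]
print(np.allclose(A @ V, W))       # True: row i of A V is sum_j aij vj = wi
```

Here w1 = 1·v1 + 0·v2 and w2 = 1·v1 − 2·v2, so the rows of A record exactly the coefficients aij from the definition.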