MA212 Further Mathematical Methods
Lecture 1: Some revision of MA100 linear algebra
Dr James Ward
■ Vector spaces and subspaces
■ Linear independence and span
The London School of Economics and Political Science
Department of Mathematics
Lecture 1, page 1
Information
■ Exercises 11 is on Moodle.
▶ Attempt questions 2, 3, 5, 6 and 9.
▶ Revision of MA100 linear algebra you should know.
▶ Submit your written solutions on Moodle by Thursday
or follow any instructions laid down by your class teacher.
■ Extra Examples Sessions
▶ Start in Week 2, Friday 13:00–14:00 in OT.
▶ Reply to the post in the Moodle forum if you want me to cover
anything in particular.
■ Classes start in Week 2. See your timetable for details.
Lecture 1, page 2
Vector spaces
A real vector space is a non-empty set, V, whose elements are
called vectors and which is closed under two algebraic operations, namely
Closure under vector addition: Given the vectors u, v ∈ V , we
can add them to form the vector u + v ∈ V ;
Closure under scalar multiplication: Given the vector u ∈ V
and the scalar α ∈ R, we can multiply the vector by the scalar to
form the vector αu ∈ V .
In MA100, the scalars were always real numbers giving us a real
vector space. Later in the course, we will see what happens when
the scalars are complex numbers.
Lecture 1, page 3
Vector spaces
In each vector space V , there is a special element called the zero
vector and we denote this by 0. This has the property that
Zero vector: For all v ∈ V , v + 0 = v .
There is also a special scalar, called the unit scalar and we denote
this by 1. This has the property that
Unit scalar: For all v ∈ V , 1v = v .
Lecture 1, page 4
Vector spaces
Also, every vector in the vector space has an inverse under the
operation of vector addition
Inverse vectors: For all v ∈ V , there is a vector −v ∈ V such
that v + (−v ) = 0.
It should come as no surprise that the vector −v is the vector v
multiplied by the scalar −1, i.e. −v = (−1)v .
Lecture 1, page 5
Vector spaces
Lastly, the operations of vector addition and scalar multiplication
must satisfy some obvious properties.
For all vectors u, v , w ∈ V and all scalars α, β ∈ R,
■ u + v = v + u,
■ u + (v + w ) = (u + v ) + w ,
■ α(u + v ) = αu + αv ,
■ (α + β)u = αu + βu,
■ α(βu) = (αβ)u.
And these should all make sense given what you know about how
vectors and scalars behave.
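As an informal check, these identities can be verified numerically for sample
vectors in R^3; the Python sketch below is purely illustrative and assumes
numpy is available.

    # Spot-check of the vector space identities for sample vectors in R^3.
    # Purely illustrative; any vectors and scalars could be used here.
    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-1.0, 0.5, 4.0])
    w = np.array([0.0, 7.0, -2.0])
    alpha, beta = 2.0, -3.0

    print(np.allclose(u + v, v + u))                              # u + v = v + u
    print(np.allclose(u + (v + w), (u + v) + w))                  # associativity
    print(np.allclose(alpha * (u + v), alpha * u + alpha * v))    # α(u + v) = αu + αv
    print(np.allclose((alpha + beta) * u, alpha * u + beta * u))  # (α + β)u = αu + βu
    print(np.allclose(alpha * (beta * u), (alpha * beta) * u))    # α(βu) = (αβ)u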
Lecture 1, page 6
Example
The set
R^3 = { (a1, a2, a3)^t | a1, a2, a3 ∈ R },
where vector addition and scalar multiplication are defined by
(a1, a2, a3)^t + (b1, b2, b3)^t = (a1 + b1, a2 + b2, a3 + b3)^t
and
α (a1, a2, a3)^t = (αa1, αa2, αa3)^t,
is a real vector space.
Observe that the zero vector in this space is 0 = (0, 0, 0)^t.
Lecture 1, page 7
Example
The set of all 2 × 2 matrices
{ [ a1 a2 ; a3 a4 ] | a1, a2, a3, a4 ∈ R },
writing a matrix row by row with the rows separated by a semicolon,
where vector addition and scalar multiplication are defined by
[ a1 a2 ; a3 a4 ] + [ b1 b2 ; b3 b4 ] = [ a1 + b1  a2 + b2 ; a3 + b3  a4 + b4 ]
and
α [ a1 a2 ; a3 a4 ] = [ αa1 αa2 ; αa3 αa4 ],
is a real vector space.
Observe that the zero vector in this space is the zero matrix 0 = [ 0 0 ; 0 0 ].
Lecture 1, page 8
Example
The set of all functions f : [0, 1] → R with the usual ‘point-wise’
operations of vector addition and scalar multiplication is a real
vector space. We’ll call this vector space F[0, 1].
Observe that the zero vector in this space is the function
0 : [0, 1] → R given by 0(x) = 0 for all x ∈ [0, 1].
Note: By ‘point-wise’ operations we just mean that, given the
functions f , g : [0, 1] → R and the scalar α ∈ R, we have the
functions f + g and αf whose values are given by
(f + g )(x) = f (x) + g (x)
and
(αf )(x) = αf (x),
for all x ∈ [0, 1].
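For concreteness, the point-wise operations can be written out in Python,
representing an element of F[0, 1] as an ordinary function; this is a minimal
sketch and the helper names add_fn and scale_fn are just illustrative.

    # Point-wise vector addition and scalar multiplication on F[0, 1],
    # with functions standing in for vectors.  Illustrative sketch only.

    def add_fn(f, g):
        """Return the function f + g, defined point-wise."""
        return lambda x: f(x) + g(x)

    def scale_fn(alpha, f):
        """Return the function αf, defined point-wise."""
        return lambda x: alpha * f(x)

    f = lambda x: x ** 2          # f(x) = x^2
    g = lambda x: 3.0 * x         # g(x) = 3x
    h = add_fn(f, g)              # h(x) = x^2 + 3x
    k = scale_fn(2.0, f)          # k(x) = 2x^2
    print(h(0.5), k(0.5))         # 1.75 0.5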
Lecture 1, page 9
Subspaces
Suppose that V is a vector space and U is a subset of V . If U is
also a vector space, then we call U a subspace of V .
Essentially, this means that U is a subspace of V if it is a
non-empty subset of V that is closed under the operations of
vector addition and scalar multiplication being used in V .
Theorem: U is a subspace of a real vector space V if and
only if U is a non-empty subset of V which is both
CUVA: For all u, v ∈ U, u + v ∈ U
and
CUSM: For all u ∈ U and α ∈ R, αu ∈ U.
Lecture 1, page 10
Example
Show that the set
U = { (x, y, z)^t | 2x − y + z = 0 }
is a subspace of R^3.
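A quick numerical illustration of the subspace test for this U (a sketch only,
not a proof; the helper name in_U is illustrative): if u and v satisfy the
defining equation, then so do u + v and αu.

    # Numerical illustration of CUVA and CUSM for
    # U = { (x, y, z)^t : 2x - y + z = 0 }.  A check, not a proof.
    import numpy as np

    def in_U(v, tol=1e-12):
        x, y, z = v
        return abs(2 * x - y + z) < tol

    u = np.array([1.0, 3.0, 1.0])    # 2(1) - 3 + 1 = 0, so u is in U
    v = np.array([0.0, 1.0, 1.0])    # 2(0) - 1 + 1 = 0, so v is in U
    alpha = -2.5

    print(in_U(u), in_U(v))    # True True
    print(in_U(u + v))         # True: the defining equation is additive
    print(in_U(alpha * u))     # True: the defining equation is homogeneous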
Lecture 1, page 11
Example
Show that the set of all continuous functions f : [0, 1] → R is a
subspace of F[0, 1].
Lecture 1, page 12
Linear independence
Definition: Suppose that V is a vector space.
A finite set of vectors, {v1 , v2 , . . . , vk } ⊆ V is linearly
independent if the only solution to the equation
α1 v1 + α2 v2 + · · · + αk vk = 0
is α1 = α2 = · · · = αk = 0.
Note: We often call this the trivial solution to the equation as it is
always one of the solutions. The key for linear independence is
that it is the only solution!
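Numerically, this condition can be tested by stacking the vectors as the
columns of a matrix A: the system α1 v1 + · · · + αk vk = 0 has only the
trivial solution exactly when A has rank k. The numpy sketch below is
illustrative; is_independent is not a standard library function.

    # Rank-based test: v1, ..., vk are linearly independent iff the matrix
    # with these vectors as columns has rank k.  Illustrative sketch.
    import numpy as np

    def is_independent(vectors):
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == len(vectors)

    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])
    print(is_independent([v1, v2]))            # True
    print(is_independent([v1, v2, v1 + v2]))   # False: v1 + v2 is a combination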
Lecture 1, page 13
Example
Show that the set of vectors
{ (1, 3, 1)^t , (0, 2, 1)^t , (2, 0, −1)^t }
is linearly dependent.
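Assuming the three vectors displayed above, one non-trivial solution is
α1 = 2, α2 = −3, α3 = −1, since 2v1 − 3v2 = v3; the numpy sketch below is
purely a spot-check of that combination.

    # Spot-check: with v1, v2, v3 as above, 2*v1 - 3*v2 - v3 = 0, so the
    # trivial solution is not the only one and the set is dependent.
    import numpy as np

    v1 = np.array([1.0, 3.0, 1.0])
    v2 = np.array([0.0, 2.0, 1.0])
    v3 = np.array([2.0, 0.0, -1.0])
    print(2 * v1 - 3 * v2 - v3)    # [0. 0. 0.]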
Lecture 1, page 14
Example
Show that the set of vectors
{ (1, 3, 1)^t , (0, 2, 1)^t , (0, 0, 1)^t }
is linearly independent.
Lecture 1, page 15
A test for linear independence
Question: How can we easily decide if a finite set of vectors in R^n
is linearly independent?
Answer: Write the vectors as the rows of a matrix and perform
row operations to get the row-echelon form (REF) of the matrix.
The vectors are linearly independent if and only if the REF does
not have a row of zeroes.
Recall: We have three types of row operation:
(1) exchange two rows,
(2) multiply a row by a non-zero scalar,
(3) add a non-zero scalar multiple of one row to another row.
The REF has a one as the first entry of each non-zero row and
each such ‘leading one’ only has zeroes below it.
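This procedure can be mirrored in Python with sympy (an illustrative sketch;
sympy's rref() returns the reduced row-echelon form, but the zero-row
criterion is the same, and rows_independent is just an assumed helper name).

    # Row reduce a matrix whose rows are the given vectors; the rows are
    # linearly independent iff every row of the echelon form is non-zero,
    # i.e. iff there is one pivot per row.  Illustrative sketch.
    import sympy as sp

    def rows_independent(rows):
        M = sp.Matrix(rows)
        _reduced, pivots = M.rref()
        return len(pivots) == M.rows

    print(rows_independent([[1, 3, 1], [0, 2, 1], [0, 0, 1]]))    # True
    print(rows_independent([[1, 3, 1], [0, 2, 1], [2, 0, -1]]))   # False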
Lecture 1, page 16
Example
Is the set of vectors
{ (1, 1, 0)^t , (2, 0, −1)^t , (−1, 3, 1)^t }
linearly independent?
Lecture 1, page 17
Example
Is the set of vectors
{ (1, 3, 1)^t , (0, 2, 1)^t , (2, 0, −1)^t }
linearly independent?
Lecture 1, page 18
More tests for linear independence
When we are dealing with a set of n vectors in R^n, the following
theorem from MA100 gives us a number of useful ways to test for
linear independence.
Theorem: For a square matrix A, the following statements
are equivalent.
(1) The rows of A are linearly independent.
(2) The REF of A does not have a row of zeroes.
(3) The determinant of A is non-zero.
(4) The inverse of A exists.
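For example, statement (3) gives a one-line numerical test; the numpy sketch
below is illustrative and uses the vectors from the earlier dependent example
as its rows.

    # Determinant test: the rows of a square matrix are linearly
    # independent iff det(A) is non-zero (up to floating-point tolerance).
    import numpy as np

    A = np.array([[1.0, 3.0,  1.0],
                  [0.0, 2.0,  1.0],
                  [2.0, 0.0, -1.0]])

    print(np.linalg.det(A))                 # approximately 0
    print(abs(np.linalg.det(A)) > 1e-9)     # False: the rows are dependent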
Lecture 1, page 19
Other thoughts on linear independence
When we are just dealing with two vectors, we also have the scalar
multiple test for linear independence.
Theorem: Suppose that V is a vector space.
Two non-zero vectors u, v ∈ V are linearly independent if
and only if there is no scalar α such that u = αv .
Also remember that any set that contains the zero vector is linearly
dependent.
Theorem: Suppose that V is a vector space and U ⊆ V .
If 0 ∈ U, then U is linearly dependent.
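The scalar multiple test is easy to automate for concrete vectors; the sketch
below is illustrative only, assumes both vectors are non-zero, and the helper
name is_scalar_multiple is not from the lecture.

    # Scalar multiple test for two non-zero vectors: u and v are linearly
    # dependent iff u = αv for some scalar α.  Illustrative sketch.
    import numpy as np

    def is_scalar_multiple(u, v, tol=1e-12):
        i = np.flatnonzero(np.abs(v) > tol)[0]   # first non-zero entry of v
        alpha = u[i] / v[i]                      # the only possible α
        return bool(np.allclose(u, alpha * v, atol=tol))

    u = np.array([2.0, -4.0, 6.0])
    v = np.array([1.0, -2.0, 3.0])
    w = np.array([1.0, 0.0, 0.0])
    print(is_scalar_multiple(u, v))   # True: u = 2v, so {u, v} is dependent
    print(is_scalar_multiple(u, w))   # False: {u, w} is independent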
Lecture 1, page 20
Linear combinations
Definition: Suppose that V is a vector space and that S ⊆ V
is the set of vectors
S = {u1 , u2 , . . . , un }.
Any vector of the form
α1 u1 + α2 u2 + · · · + αn un
for scalars α1 , α2 , . . . , αn
is a linear combination of the vectors in S.
As V is closed under vector addition and scalar multiplication, we
know that every such linear combination will also be a vector in V .
Lecture 1, page 21
Linear span
Indeed, if we were to take the set of all possible linear combinations
of the vectors in S, we would get the linear span of S.
Definition: Suppose that V is a vector space and that S ⊆ V
is the set of vectors
S = {u1 , u2 , . . . , un }.
The linear span of S is the set of vectors
Lin(S) = {α1 u1 +α2 u2 +· · ·+αn un | α1 , α2 , . . . , αn are scalars}.
Instead of ‘Lin(S)’, you may also see ‘span(S)’ being used.
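For vectors in R^n, membership of Lin(S) can also be checked numerically:
w lies in Lin(S) exactly when adjoining w to the vectors of S does not
increase the rank. The numpy sketch below is illustrative; in_span is not a
library function.

    # Membership test for a linear span: w is in Lin{u1, ..., un} iff
    # appending w as an extra column leaves the rank unchanged.
    import numpy as np

    def in_span(w, spanning):
        A = np.column_stack(spanning)
        return np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)

    u1 = np.array([1.0, 1.0, 1.0])
    print(in_span(np.array([3.0, 3.0, 3.0]), [u1]))   # True: a multiple of u1
    print(in_span(np.array([1.0, 2.0, 3.0]), [u1]))   # False: not on that line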
Lecture 1, page 22
Linear spans give subspaces
The linear span of a set of vectors is one way of generating
subspaces.
Theorem: If V is a vector space and S ⊆ V , then Lin(S) is
a subspace of V .
Indeed, we can even show that the linear span of S ⊆ V is the
smallest subspace of V that contains all of the vectors in S.
Theorem: Suppose that V is a vector space and S ⊆ V .
If U is any subspace of V with S ⊆ U, then Lin(S) ⊆ U.
That is, any subspace of V that contains all of the vectors in S
will also contain all of the vectors in Lin(S).
Lecture 1, page 23
Example
In R^3, consider the linear span
Lin{ (1, 1, 1)^t } = { α (1, 1, 1)^t | α ∈ R }.
It is a subspace of R^3 and it is the smallest subspace of R^3 that
contains the vector (1, 1, 1)^t.
Geometrically, in R^3, we can think of this as a line through the
origin in the direction (1, 1, 1)^t.
Lecture 1, page 24
Example
In F[0, 1], consider the functions fn : [0, 1] → R given by
fn(x) = x^n
for n = 0, 1, 2, . . .. The linear span
Lin{f0 , f1 , . . . , fn } = {α0 f0 +α1 f1 +· · ·+αn fn | α0 , α1 , . . . , αn ∈ R}
is a subspace of F[0, 1] and it is the smallest subspace of F[0, 1]
that contains all of the functions f0 , f1 , . . . , fn .
For all x ∈ [0, 1], these functions have values given by
(α0 f0 + α1 f1 + · · · + αn fn)(x) = α0 + α1 x + · · · + αn x^n,
and so we can think of this linear span as the set of all polynomials
of degree at most n.
We’ll call this vector space Pn[0, 1].
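An element of Pn[0, 1] is determined by its coefficients α0, α1, . . . , αn,
so it can be represented in code by a coefficient list; the short Python
sketch below is illustrative only.

    # Represent α0 + α1 x + ... + αn x^n by its coefficient list
    # [α0, α1, ..., αn] and evaluate it point-wise.  Illustrative sketch.

    def poly(coeffs):
        return lambda x: sum(a * x ** k for k, a in enumerate(coeffs))

    p = poly([1.0, 0.0, 2.0])    # the polynomial 1 + 2x^2, an element of P2[0, 1]
    print(p(0.5))                # 1 + 2(0.25) = 1.5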
Lecture 1, page 25
Linear independence and linear span
Lastly, we can use linear spans to say some useful things about
linear independence.
Theorem: Suppose that V is a vector space and that S ⊆ V
is a finite set of vectors.
The following statements are equivalent.
(1) S is linearly independent.
(2) Each vector in Lin(S) can be written as a linear
combination of the vectors in S in exactly one way.
(3) For each u in S, u is not in Lin(S \ {u}).
Note: (3) says that each vector u ∈ S can not be written as a
linear combination of the vectors left in S after you remove u.
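Statement (2) can be seen concretely in R^n: if a linearly independent set is
written as the columns of a square matrix A, the coefficients expressing any
vector of its span are recovered uniquely by solving a linear system. The
numpy sketch below is illustrative, using one concrete independent set.

    # Unique representation: for an independent set written as the columns
    # of A, the coefficients of any vector in the span are found by solving
    # A @ alpha = w, and the solution is unique.  Illustrative sketch.
    import numpy as np

    A = np.array([[1.0, 0.0, 0.0],
                  [3.0, 2.0, 0.0],
                  [1.0, 1.0, 1.0]])     # columns: (1,3,1)^t, (0,2,1)^t, (0,0,1)^t

    alpha = np.array([2.0, -1.0, 3.0])
    w = A @ alpha                       # a vector in the span of the columns
    print(np.linalg.solve(A, w))        # recovers [ 2. -1.  3.] uniquely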
Lecture 1, page 26