MATH 146 Lecture Notes, Jason Bell
Background
We assume you know material from MATH 145. Notably, one should know what groups, abelian groups, fields,¹ and rings are.
1. Vector spaces
We begin with the notion of a vector space. This is an abelian group (under addition) (V, +) that is associated to an underlying field F , equipped with a map (called scalar multiplication)
· : F × V → V
(we write the · symbol in the middle, so for λ ∈ F and v ∈ V we write λ · v for the application of this map). This map has the following properties:
• 1 · v = v for all v ∈ V ;
• (α + β) · v = α · v + β · v for all α, β ∈ F and v ∈ V ;
• α · (v + w) = α · v + α · w for all α ∈ F and v, w ∈ V ;
• (αβ) · v = α · (β · v) for all α, β ∈ F and v ∈ V .
Exercise 1.1. Show that 0 · v = 0 for all v ∈ V .
We now give some examples of vector spaces.
Example 1.2. Let F be a field, let X be a set, and let V = {f : f : X → F }; that is, V is the set of functions from X to F .
Then V is an abelian group with addition of functions given by the rule (f + g)(x) = f (x) + g(x), and 0 being the map that
sends every element of X to 0. Scalar multiplication is given by (λ · f )(x) = λ · f (x).
When X = {1, 2, . . . , n} with n a natural number, we can think of V as being all n-tuples of elements of F via the rule
f ↦ (f (1), f (2), . . . , f (n)). We call this vector space F n .
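To make the pointwise operations in Example 1.2 concrete, here is a minimal Python sketch, taking F = R (modelled by floats) and X = {1, 2, 3}; all of the names here are illustrative rather than part of the notes.

```python
# Pointwise vector-space operations on V = {f : X -> F}, with F = R as floats.

def add(f, g):
    """(f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(lam, f):
    """(lam . f)(x) = lam * f(x)."""
    return lambda x: lam * f(x)

zero = lambda x: 0.0  # the additive identity: sends every x in X to 0

# With X = {1, 2, 3}, a function f is just the 3-tuple (f(1), f(2), f(3)):
f = {1: 2.0, 2: -1.0, 3: 0.5}.get   # a "vector" in F^3
g = {1: 1.0, 2: 4.0, 3: 3.0}.get
h = add(scale(2.0, f), g)           # the vector 2*f + g
print([h(x) for x in (1, 2, 3)])    # [5.0, 2.0, 4.0]
print(add(f, scale(-1.0, f))(2) == zero(2))  # f + (-1)*f vanishes pointwise: True
```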
Example 1.3. F is an F -vector space! More generally, if K is a field extension of F then K is an F -vector space.
Example 1.4. Let F be a field and let F [x] denote the set of polynomials with coefficients in F . Then F [x] is an F -vector
space. If n is a natural number, we can take the set F [x]≤n of polynomials of degree at most n and this is also an F -vector
space.
In the above example, notice that F [x]≤n is an abelian subgroup of F [x] and the scalar multiplication map on F [x] restricts
to the same scalar multiplication on F [x]≤n . In general, if V is an F -vector space and W ⊆ V is a subgroup of V with the
property that λ · w ∈ W for all w ∈ W and λ ∈ F , we say that W is a subspace of V .
In general if V is a vector space, for W ⊆ V to be a subspace, we require the following:
• W is closed under addition;
• 0 ∈ W;
• W is closed under scalar multiplication.
Note that the first two conditions are not quite enough to give that W is a subgroup of V (we do not have inverses), but the
third condition gives this.
Exercise 1.5. Show that (−1) · w = −w for all w in a vector space V , and use this to show that the three conditions given
above ensure that W ⊆ V is a subspace of V .
Exercise 1.6. If U ⊆ W ⊆ V , U is a subspace of W , and W is a subspace of V , is U a subspace of V ?
Exercise 1.7. Let V be the set of all maps from R to R. (Note that this is a vector space as in Example 1.2 with X = R.)
Let W be the subset of continuous functions from R to itself and let U be the subset of differentiable functions from R to
itself. Show that U and W are subspaces of V and that U is a subspace of W .
2. Categories
One of the most important ways of understanding algebraic objects is how they interact with similar objects. This notion
of “interaction” is made precise via the notion of a homomorphism between objects.
Definition 2.1. We say that a category C consists of a class of objects sharing common algebraic properties and for each
pair of objects X, Y ∈ C a set of functions (called morphisms) between X and Y , which we denote by HomC (X, Y ).
Moreover, we assume the following:
• for each object X we have an identity map idX from X to X;
• composition of functions is associative and if f : X → Y and g : Y → Z are morphisms then so is g ◦ f : X → Z.
This might seem a bit abstract, but we have encountered a lot of categories already.
¹For us, in a field F the zero element and the one element are distinct, so a field always has at least 2 elements.
Example 2.2. Here are some examples:
• Sets, where the morphisms are just the functions between sets.
• Groups, where the morphisms f : G → H are the group homomorphisms.
• Rings, where the morphisms f : R → S are ring homomorphisms.
• Abelian groups, where the morphisms f : G → H are group homomorphisms.
• Metric spaces (or more generally topological spaces), where the morphisms are continuous maps.
Intuitively, the morphisms from X to Y should just be the maps that preserve the common structure that X and Y have.
3. Linear Maps
So let’s put on our categorical hats and think about what a homomorphism between vector spaces should be. Let F
be a field and let V and W be F -vector spaces. Then for f : V → W to preserve the vector space structure, it should first
be a homomorphism of abelian groups; and second, it should preserve the scalar multiplication, so f (λ · v) = λ · f (v). For
historical reasons, we call such maps linear maps although one can think of them as homomorphisms of vector spaces.
Definition 3.1. Let F be a field and let V and W be F -vector spaces. We say that a map f : V → W is a linear map from
V to W if:
• f (v + w) = f (v) + f (w) for all v, w ∈ V ;
• f (λ · v) = λ · f (v) for all v ∈ V and λ ∈ F .
Example 3.2. Let V be the real vector space of continuous functions from [0, 1] to R. Let T : V → R be given by
T (f ) = ∫₀¹ f (x) dx.
Show that T is a linear map.
Example 3.3. Let T : R3 → R2 be the map T (x, y, z) = (2x + 3y, 5x − z). Then T is linear.
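Example 3.3 can be spot-checked numerically; the snippet below verifies the two linearity conditions on a couple of inputs (a sanity check on a few values, of course, not a proof).

```python
import numpy as np

# The map T(x, y, z) = (2x + 3y, 5x - z) from Example 3.3.
def T(v):
    x, y, z = v
    return np.array([2*x + 3*y, 5*x - z])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-4.0, 0.5, 2.0])
lam = 7.0

print(np.allclose(T(v + w), T(v) + T(w)))   # additivity: True
print(np.allclose(T(lam * v), lam * T(v)))  # homogeneity: True
```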
Exercise 3.4. Let K be a field of characteristic p. Notice that K has a subfield {0, 1, 2, . . . , p − 1} that we denote by Fp and
so K is an Fp -vector space. Show that T : K → K given by T (x) = xᵖ is a linear map when we regard K as an Fp -vector
space.
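The linearity of the map in Exercise 3.4 comes down to the “freshman’s dream” (a + b)ᵖ = aᵖ + bᵖ in characteristic p. Here is a quick exhaustive check of this identity modulo p for p = 5 (an illustration for one small prime, not a proof):

```python
# "Freshman's dream" mod p: (a + b)^p == a^p + b^p for all residues a, b mod 5.
p = 5
print(all((a + b)**p % p == (a**p + b**p) % p
          for a in range(p) for b in range(p)))  # True
```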
Exercise 3.5. Given a linear map T : V → W of F -vector spaces V and W , we call the set of v ∈ V such that T (v) = 0 the
kernel of T (and we denote it by ker(T )) and the set im(T ) := {T (v) : v ∈ V } the image of T . Show that the kernel of T is
a subspace of V and the image of T is a subspace of W .
Exercise 3.6. What are the kernels of the linear maps in Examples 3.2 and 3.3 and Exercise 3.4?
Lemma 3.7. A linear map T : V → W is one-to-one if and only if its kernel is {0} and it is onto if and only if im(T ) = W .
Proof. The second claim is immediate, so it suffices to prove the first. Notice that T (0) = 0, so if the kernel is strictly
larger than {0} then T cannot be one-to-one, since we would have at least two elements mapping to zero. Conversely, if
T (v) = T (w) with v ̸= w then by linearity T (v − w) = T (v) − T (w) = 0 and so v − w ̸= 0 is in the kernel and so the kernel
is not {0}.
□
We make a few additional remarks about linear maps.
Proposition 3.8. Let V , W , and U be F -vector spaces. If T : V → W is linear and S : W → U is linear then S ◦ T : V → U
is linear. If T : V → W is one-to-one and onto, then T −1 : W → V is a linear map. Finally, the identity map I : V → V
given by I(v) = v is linear.
Proof. Let v1 , v2 ∈ V . Then
S ◦ T (v1 + v2 ) = S(T (v1 ) + T (v2 )) = S(T (v1 )) + S(T (v2 ))
and S ◦ T (λv) = S(λT (v)) = λS(T (v)). For the second claim, notice that
T (T −1 (v1 + v2 )) = v1 + v2 = T (T −1 (v1 ) + T −1 (v2 ))
and so since T is one-to-one we have that T −1 (v1 + v2 ) = T −1 (v1 ) + T −1 (v2 ). Similarly, T (T −1 (λv)) = λv = T (λT −1 (v)) so
T −1 (λv) = λT −1 (v). The fact that the identity map is linear is immediate.
□
A linear map from V to W that is one-to-one and onto is called an isomorphism. If there is an isomorphism from V to W
then we write V ≅ W and say that V is isomorphic to W or that V and W are isomorphic. Intuitively, this means that V
and W can be regarded as being the same vector space after suitably relabelling the elements of V .
Exercise 3.9. Let V = F [x]. Let T (p(x)) = xp(x) and let S(p(x)) = p′ (x). Show that T and S are linear maps from V to
V and show that S ◦ T − T ◦ S = I, where I is the identity map on V .
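For polynomials with, say, rational coefficients, the identity in Exercise 3.9 can be checked symbolically with sympy; the test below on one polynomial illustrates why it holds (by the product rule, S(T (p)) = p + xp′ while T (S(p)) = xp′).

```python
import sympy as sp

x = sp.symbols('x')

T = lambda p: sp.expand(x * p)          # T: multiply by x
S = lambda p: sp.expand(sp.diff(p, x))  # S: differentiate

p = 3*x**2 - 5*x + 7                    # an arbitrary test polynomial
# (S o T - T o S)(p) should return p itself.
print(sp.simplify(S(T(p)) - T(S(p))) == p)  # True
```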
Exercise 3.10. Show that being isomorphic is reflexive, transitive, and symmetric and hence is an equivalence relation on
the class of vector spaces.
Example 3.11. Let n be a nonnegative integer. Then F [x]≤n ≅ F n+1 , which is witnessed via the map T : F [x]≤n → F n+1
given by T (a0 + a1 x + · · · + an xⁿ ) = (a0 , . . . , an ). Also if X = {1, 2, . . . , n} then the set of maps from X to F is isomorphic
as an F -vector space to F n .
Proposition 3.12. Let T : V → W be a linear map. Then T is an isomorphism if and only if ker(T ) = (0) and im(T ) = W .
Proof. This is immediate from Lemma 3.7.
□
Finally, we let Hom(V, W ) denote the set of linear maps from V to W . In the case when V = W we write End(V ) for
Hom(V, V ) and call this the endomorphism ring of V (we’ll see that it’s a ring later).
Proposition 3.13. Let V and W be F -vector spaces. Then Hom(V, W ) is an F -vector space.
Proof. Show that if T, S : V → W are linear then so is T + S and −T and show that the map 0 : V → W which sends every
element of V to 0 is an additive identity. Use this to conclude that Hom(V, W ) is an abelian group. Next show that if λ ∈ F
and T ∈ Hom(V, W ) then (λ · T )(v) := λ · (T (v)) gives a scalar multiplication on Hom(V, W ).
□
4. Span, linear independence, and bases
Now we want to look at how to build vector spaces from subsets. To do this, we introduce the notions of linear independence
and span.
Definition 4.1. Let S ⊆ V be a subset of an F -vector space. We say that S is F -linearly independent (or just linearly
independent if F is understood) if whenever s1 , . . . , sd are distinct elements of S and x1 , . . . , xd ∈ F are such that
x1 s1 + · · · + xd sd = 0,
we necessarily have x1 = · · · = xd = 0. If a subset S is not linearly independent, we say that it is linearly dependent.
Another way of saying this is in terms of linear combinations. Given a subset S of V , an F -linear combination of S is a
sum of the form
∑_{s∈S} λs s
with each λs ∈ F and all but finitely many λs equal to zero. Then a set S is linearly independent if the only F -linear
combination of S that is equal to zero is the one in which all λs = 0. More generally, if R ⊆ F (usually a subring) then we
will say that an R-linear combination of S is a sum of the form
∑_{s∈S} λs s
with each λs ∈ R and all but finitely many λs equal to zero.
Convention: We take an empty linear combination (i.e., a linear combination of the empty set) to be zero.
Example 4.2. Let V = R3 . Then the vectors (1, 2, 5) and (2, 5, 1) are linearly independent, while (1, 2, 3) and (−2, −4, −6)
are not, since (−2, −4, −6) = −2 · (1, 2, 3). If S contains the zero vector then it is not linearly independent. The empty set
is linearly independent.
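For finitely many vectors in Rⁿ, linear independence can be tested computationally: stack the vectors as the rows of a matrix and ask whether its rank (computed here with numpy as a black box; rank is made precise later in the course) equals the number of vectors. A small sketch for the vectors of Example 4.2:

```python
import numpy as np

A = np.array([[1, 2, 5], [2, 5, 1]])      # the independent pair
B = np.array([[1, 2, 3], [-2, -4, -6]])   # the dependent pair

print(np.linalg.matrix_rank(A))  # 2: rank equals number of vectors, independent
print(np.linalg.matrix_rank(B))  # 1: a dependence, (-2,-4,-6) = -2*(1,2,3)
```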
Exercise 4.3. Let S be a set of nonzero polynomials in which the degrees are pairwise distinct. Then S is a linearly
independent subset of F [x].
Exercise 4.4. Let v, w be vectors in V . Show that v and w are linearly dependent if and only if either v = 0 or w is a scalar
multiple of v.
Exercise 4.5. Let V be the space of real-valued continuous functions. Then sin(x), sin(2x), sin(3x) are R-linearly independent.
(Hint: suppose that c1 sin(x) + c2 sin(2x) + c3 sin(3x) = 0 for all real numbers x. Then if we plug in x = π/2, we see that
c1 − c3 = 0, so c1 = c3 . If we plug in x = π/3, we get c1 + c2 = 0, so c1 = −c2 . What happens if we plug in x = π/4?)
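The hint can also be carried out numerically: evaluating at the three sample points turns the identity into a 3 × 3 linear system in (c1, c2, c3), and if the coefficient matrix is invertible then the only solution is the zero one. A small numpy sketch (floating-point, so illustrative only):

```python
import numpy as np

# Row for sample point x: [sin(x), sin(2x), sin(3x)].
xs = [np.pi/2, np.pi/3, np.pi/4]
M = np.array([[np.sin(d*x) for d in (1, 2, 3)] for x in xs])

# M c = 0 has only the zero solution iff M is invertible (nonzero determinant).
print(abs(np.linalg.det(M)) > 1e-9)  # True, so c1 = c2 = c3 = 0
```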
Exercise 4.6. (Challenge) Let V be the space of real-valued continuous functions. Prove that the set {sin(dx) : d ≥ 1} is
linearly independent.
An orthogonal concept to linear independence is for a set to span a vector space.
Definition 4.7. We say that a subset S of an F -vector space V spans V if every element of V is expressible as a (finite)
F -linear combination of elements of S. We write Span(S) for the set of F -linear combinations of S. So S spans V if and
only if Span(S) = V .
Proposition 4.8. If S ⊆ V then Span(S) is a subspace of V .
Proof. Notice that 0 = ∑_{s∈S} 0 · s ∈ Span(S). The fact that Span(S) is closed under sums and under scalar multiplication
can be checked from the definition.
□
Example 4.9. Notice that S = V spans V as a vector space. The empty set spans V if and only if V = {0}. Is it linearly
independent?
Example 4.10. The set of polynomials that vanish at x = 0 along with the constant polynomial 1 spans F [x]. Is it linearly
independent?
Lemma 4.11. Let S = {v1 , . . . , vn } be a subset of V . Then S is linearly dependent if and only if there is some k < n such
that vk+1 ∈ Span({v1 , . . . , vk }).
Proof. Suppose that vk+1 ∈ Span({v1 , . . . , vk }). Then vk+1 = c1 v1 + · · · + ck vk for some ci ∈ F . Then
c1 v1 + · · · + ck vk − vk+1 + 0 · vk+2 + · · · + 0 · vn = 0,
so S is linearly dependent. Conversely, if S is linearly dependent, then there is some largest k < n (possibly zero) such that
{v1 , . . . , vk } is linearly independent. Then {v1 , . . . , vk+1 } is linearly dependent, so there is some non-trivial combination
c1 v1 + · · · + ck+1 vk+1 = 0.
Notice ck+1 ≠ 0 since otherwise {v1 , . . . , vk } would be linearly dependent, so ck+1 vk+1 = −c1 v1 − · · · − ck vk , with ck+1 ≠ 0.
Since we’re in a field, we can multiply both sides by the inverse of ck+1 and we get the desired result.
□
Definition 4.12. A vector space is finite-dimensional if it has a finite spanning set; it is infinite-dimensional if it has no
finite spanning set.
Example 4.13. R3 is finite-dimensional; F [x] is infinite-dimensional.
Notice that spanning and being linearly independent are fighting against one another: it’s easier for smaller sets to be
linearly independent, but it is harder for them to span; similarly it’s easier for big sets to span but harder for them to be
linearly independent. There’s a Goldilocks zone where both properties can hold and when this occurs, we say that a set is a
basis for the vector space V .
Definition 4.14. A subset S of V is a basis if it is linearly independent and it spans.
In other words, we can build up our vector space from S in the following way: each vector v ∈ V can be written uniquely
as an F -linear combination of elements of S.
Exercise 4.15. Show that {1, x, x², . . .} is a basis for F [x]. Let F be a field of characteristic zero and let (x choose n) be the
polynomial x(x − 1) · · · (x − n + 1)/n!, where (x choose 0) = 1. Show that {(x choose n) : n ≥ 0} is a basis for F [x]. Show
that a polynomial p(x) ∈ Q[x] maps Z into Z if and only if p(x) is an integer linear combination of the set
{(x choose n) : n ≥ 0}.
Exercise 4.16. Show that if a polynomial p(x) ∈ Q[x] is a nonzero polynomial of degree d with the property that p(n) ∈ Z
for all n ∈ Z then d!p(x) ∈ Z[x].
Exercise 4.17. Show that if S spans V and v ∈ V \ S then S ∪ {v} is linearly dependent; show that if S is a non-empty
linearly independent set and v ∈ S then S \ {v} cannot span.
The two most important facts about bases are the following:
• every vector space has a basis;
• all bases have the same size (cardinality).
The proof that every vector space has a basis requires Zorn’s lemma (which is really equivalent to the axiom of choice and
hence taken as an axiom in this course).
5. Zorn’s Lemma
To give the statement of Zorn’s lemma, we need to introduce some terminology. First, a partially ordered set (poset) is a
set X with a binary relation ≤ (that is, for each x, y ∈ X either x ≤ y or x ̸≤ y) that has the following properties:
• ≤ is reflexive (i.e., x ≤ x for all x ∈ X);
• ≤ is antisymmetric (i.e., if x ≤ y and y ≤ x then x = y);
• ≤ is transitive (i.e., x ≤ y and y ≤ z implies that x ≤ z).
Example 5.1. Let U be a set and let X = P(U ) be the set of subsets of U . Then if we declare A ≤ B if and only if A ⊆ B
then ≤ is a partial order on X. Does this poset have a least element? Does it have a greatest element?
Example 5.2. Let X = R and let ≤ be the usual order. Then X is a poset.
Exercise 5.3. Put a binary relation on the set X of all living things that have ever lived, by declaring that x ≤ y if and only
if x is an ancestor of y. Is this a partial order on X?
Exercise 5.4. Let X = N and declare that x ≤ y if x | y. Is this a partial order? Does X have a least element? Does it
have a greatest element?
Definition 5.5. Let (X, ≤) be a partially ordered set. A chain in X is a subset Y of X with the following property: for
each y, y ′ ∈ Y either y ≤ y ′ or y ′ ≤ y. (This means that (Y, ≤) is a totally ordered subset of X.)
Zorn’s lemma says the following: Let (X, ≤) be a non-empty poset in which each chain C has an upper bound (that is, if C
is a chain, there is some x ∈ X such that y ≤ x for all y ∈ C). Then X has at least one maximal element (an element x
such that x ≤ y implies y = x).
Notice that (R, ≤) doesn’t have a maximal element; consistent with Zorn’s lemma, the chain 1 < 2 < 3 < · · · does not have
an upper bound.
Theorem 5.6. Let V be a vector space and let S be a linearly independent subset of V and let T be a spanning set of V
that contains S. Then there is a basis B of V that contains the set S and is contained in T ; in particular, every linearly
independent subset can be extended to a basis for V and every spanning set can be refined to a basis.
Proof. Let X denote the set of linearly independent subsets of V that contain S and are contained in T . Then X is a
non-empty set since S is in X. Notice that we can put a partial order on X by inclusion. Now let {Sα }α∈Y be a chain of
elements in X. What this means is that for each α in some index set Y we have a linearly independent subset Sα of V , and
for each α, β ∈ Y we either have Sα ⊆ Sβ or Sβ ⊆ Sα . Then we claim that U := ∪_{α∈Y} Sα is a linearly independent set
(so it is in X, as it contains S and is contained in T ) and is an upper bound for our chain. To see that U is linearly
independent, suppose that we have a non-trivial dependence. Then there exist d ≥ 1 and s1 , . . . , sd ∈ U and λ1 , . . . , λd ∈ F ,
not all zero, such that
λ1 s1 + · · · + λd sd = 0.
Then each si ∈ Sαi for some αi ∈ Y . Since the Sα ’s form a chain, there is some biggest element among Sα1 , . . . , Sαd , say
Sαj , which contains each of s1 , . . . , sd . But this contradicts the fact that Sαj ∈ X, since elements of X are linearly
independent. It follows that U is linearly independent and so by Zorn’s lemma X has a maximal element, which we call B.
We claim that B is a basis. To see this, let W be the span of B. If W = V then we’re done; if not, then since T spans V
there must be some s ∈ T \ W . Notice that B ∪ {s} is linearly independent and is in X, and this contradicts maximality of
B, so B is indeed a basis.
□
In the case of finite-dimensional vector spaces we do not need Zorn’s lemma (or the axiom of choice) to prove the existence
of a finite basis: we can just look at a finite spanning set and pick a minimal subset of it that still spans. We can then show
that it is linearly independent by Lemma 4.11. To see this, let B = {v1 , . . . , vn } be a minimal spanning subset of a finite
spanning set S. If B is not linearly independent then by the lemma, there is some k such that vk+1 is already in the span of
v1 , . . . , vk , so B ′ := {v1 , . . . , vn } \ {vk+1 } still spans, contradicting minimality.
6. Bases for finite-dimensional vector spaces all have the same size
One of the problems with our definition of being finite-dimensional is that it’s hard to actually show that spaces are
infinite-dimensional. For example, how can one prove that the space of continuous functions from R to R is
infinite-dimensional as a real vector space? We’ll now prove the following theorem, which will make our lives easier.
Theorem 6.1. Let V be a finite-dimensional vector space. Then if B1 and B2 are two bases then either B1 and B2 are both
infinite, or they’re both finite and |B1 | = |B2 |. In particular, for a finite-dimensional vector space all bases are finite and we
call the size of a basis (and hence every basis) the dimension of V .
Proof. Let B1 and B2 be two bases. We consider a few cases.
Case I. B1 and B2 are infinite.
We’re happy in this case!
Case II. B1 is infinite and B2 is finite.
We have a finite basis B2 = {v1 , . . . , vn }. Observe that for each i there is a finite subset Si of B1 such that vi is a linear
combination of elements of Si . It follows that S1 ∪ · · · ∪ Sn is a finite subset of B1 that spans. Since B1 is linearly independent,
we must have B1 = S1 ∪ · · · ∪ Sn , since if not, we could produce a linear dependence. But then B1 is finite, a contradiction,
so this case cannot occur.
Case III. B1 is finite and B2 is infinite.
Huh? That’s the same as the preceding case.
Case IV. B1 and B2 are finite.
WLOG we may assume that m = |B1 | ≤ |B2 | = n. We write B1 = {u1 , . . . , um } and B2 = {v1 , . . . , vn }. We must show
that n = m. We claim that for each i ∈ {0, 1, . . . , m} there exists a subset Ti of B1 of size i such that {v1 , . . . , vi } ∪ (B1 \ Ti )
still spans V . Notice that once we obtain this claim, we’re done, since when i = m we get that Tm = B1 and so the claim
gives that {v1 , . . . , vm } spans, which means that m = n since B2 is a basis (if m < n then vm+1 would be in the span of
v1 , . . . , vm , contradicting the linear independence of B2 ).
We prove the claim by induction on i. When i = 0 there’s nothing to prove (take T0 = ∅). Now suppose that the claim holds
for i = k with 0 ≤ k < m. Then after relabelling, we may assume that Tk = {um−k+1 , . . . , um } and we have that
{v1 , . . . , vk , u1 , . . . , um−k } spans V . In particular,
{vk+1 , v1 , . . . , vk , u1 , . . . , um−k } is linearly dependent.
Since {v1 , . . . , vk , vk+1 } is linearly independent, it follows that there is some j ∈ {0, . . . , m − k − 1} such that
{vk+1 , v1 , . . . , vk , u1 , . . . , uj }
is linearly independent, but
{vk+1 , v1 , . . . , vk , u1 , . . . , uj+1 }
is linearly dependent. Then uj+1 is in the span of the other vectors, so
{vk+1 , v1 , . . . , vk , u1 , . . . , uj , uj+2 , . . . , um−k }
still spans V , and taking Tk+1 = Tk ∪ {uj+1 } gives the result. The claim now follows by induction.
□
Definition 6.2. We now say that the dimension of V is infinite if V has no finite basis; if V is finite-dimensional, we say
that the dimension of V is the size of a basis (and hence of every basis). We write dim(V ) for the dimension of V .
Proposition 6.3. Let U be a subspace of V . Then dim(U ) ≤ dim(V ).
Proof. Let B be a basis for U . By Theorem 5.6 we can extend B to a basis B ′ for V . The result now follows from the definition.
□
Example 6.4. The dimension of F n is n. What is the dimension of F [x]≤n ?
Exercise 6.5. Show that if V ≅ W then dim(V ) = dim(W ).
Proposition 6.6. If T : V → W is an onto linear map then dim(V ) ≥ dim(W ). If V and W are finite-dimensional and
dim(V ) = dim(W ) then T is one-to-one.
Proof. Let B be a basis for V . Then use linearity and the fact that T is onto to show that T (B) spans W . But we know
that spanning sets can be contracted to a basis, so W has a basis that is at most as large as B and so the result follows.
Now suppose that dim(V ) = dim(W ) < ∞. If T is not one-to-one then there is a nonzero vector in the kernel, so T (B) is
not linearly independent, so a basis for W is strictly smaller than a basis for V , a contradiction.
□
Theorem 6.7. (Rank plus nullity theorem) Let V and W be finite-dimensional F -vector spaces and let T be a linear map
from V to W . Then dim(im(T )) + dim(ker(T )) = dim(V ).
Proof. Let v1 , . . . , vs be a basis for ker(T ) and let u1 , . . . , um be in V such that T (u1 ), . . . , T (um ) is a basis for im(T ).
Then show that v1 , . . . , vs , u1 , . . . , um spans V and is linearly independent. (Hint: given v ∈ V find a linear combination
u of u1 , . . . , um such that T (v) = T (u) and conclude that v − u is in the span of the vi and hence the set spans. To see
linear independence, take a zero linear combination, apply T to show that the ui ’s must have zero coefficients and now use
independence of the vi ’s.)
□
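Here is a concrete illustration of the rank plus nullity theorem using sympy, for the matrix map T (v) = Av (matrices as linear maps are developed in the next sections); the rank computes dim(im(T )) and an explicit kernel basis computes dim(ker(T )).

```python
import sympy as sp

# T(v) = A v as a map Q^4 -> Q^3; the third row is the sum of the first two.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])

rank = A.rank()                # dim(im T)
nullity = len(A.nullspace())   # dim(ker T), via an explicit basis of the kernel
print(rank, nullity, rank + nullity == A.cols)  # 2 2 True
```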
7. Matrices
Let F be a field and let m, n ∈ N. We let Mm,n (F ) (or Mm×n (F )) denote the set of m × n rectangular arrays
A =
⎛ a1,1  a1,2  · · ·  a1,n ⎞
⎜ a2,1  a2,2  · · ·  a2,n ⎟
⎜   ⋮     ⋮    ⋱     ⋮   ⎟
⎝ am,1  am,2  · · ·  am,n ⎠
with each ai,j ∈ F . In the case when m = n we write Mn (F ) for this set. Given i ∈ {1, . . . , m} and j ∈ {1, . . . , n} we say
that ai,j is the (i, j)-entry of A (so it is in the i-th row and j-th column). We also write A(i, j) for the (i, j)-entry of a matrix
A. We note that Mm,n (F ) is an F -vector space in which 0 is the matrix 0m,n which has all entries equal to zero; for
A, B ∈ Mm,n (F ), (A + B)(i, j) = A(i, j) + B(i, j) (i.e., addition is performed entry-wise); and for λ ∈ F and A ∈ Mm,n (F )
we have λ · A has (i, j)-entry λ · A(i, j). We let Ei,j denote the matrix whose (i, j)-entry is 1 and all other entries are 0.
Proposition 7.1. The set S := {Ei,j : 1 ≤ i ≤ m, 1 ≤ j ≤ n} forms a basis for Mm,n (F ) as an F -vector space.
Proof. Notice that if A ∈ Mm,n (F ) has (i, j)-entry ai,j then
A = ∑_{i=1}^{m} ∑_{j=1}^{n} ai,j Ei,j
and so S spans. Notice if ∑_{i,j} ci,j Ei,j = 0 then if we look at the (k, ℓ)-entry of both sides we see that ck,ℓ = 0 for all
k, ℓ and so S is linearly independent.
□
We shall now put a multiplication
Mm,n (F ) × Mn,p (F ) → Mm,p (F )
as follows. Given A ∈ Mm,n (F ) with (i, j)-entry ai,j and B ∈ Mn,p (F ) with (i, j)-entry bi,j , we define A · B = C, where C
is the matrix whose (i, j)-entry is the i-th row of A “dotted” with the j-th column of B; that is,
C(i, j) = ∑_{k=1}^{n} ai,k bk,j .
Notice this makes sense for i ≤ m and j ≤ p. We’ll see later on that this multiplication is associative (when it makes sense),
so (A · B) · D = A · (B · D) and that (A + λB) · D = A · D + λB · D and D · (A + λB) = D · A + λD · B whenever the products
make sense. We will assume these things for now. Notice that this gives a welcome surprise.
Return of the Ring!
We observe that if we take m = n = p as above then multiplication gives a binary operation on Mn (F ) and Mn (F ) is a ring
with addition and multiplication. The only axiom that needs to be checked at this point is the existence of a multiplicative
identity. This is the identity matrix I whose (i, j)-entry is 0 if i ̸= j and is 1 if i = j.
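As a sketch, the multiplication rule C(i, j) = ∑_k ai,k bk,j can be implemented directly in Python; the example below also exhibits the noncommutativity asked about in Exercise 7.2.

```python
# Matrix product straight from the definition, for A (m x n) and B (n x p)
# given as lists of rows.
def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

def identity(n):
    """The identity matrix I: (i, j)-entry is 1 if i == j and 0 otherwise."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))                  # [[2, 1], [4, 3]]
print(mat_mul(B, A))                  # [[3, 4], [1, 2]]  -- so AB != BA
print(mat_mul(A, identity(2)) == A)   # True: I is a multiplicative identity
```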
Exercise 7.2. Prove that Mn (F ) is not commutative for n ≥ 2.
Exercise 7.3. Show that if R is a ring and n ≥ 1 we can make a ring Mn (R) of n × n matrices with entries in R.
A matrix A ∈ Mn (F ) is called diagonal if A(i, j) = 0 whenever i ̸= j. A matrix A ∈ Mn (F ) is called upper-triangular
(resp. lower triangular) if A(i, j) = 0 whenever i > j (resp. whenever i < j).
Exercise 7.4. Let Dn (F ) and Un (F ) denote respectively the set of n × n diagonal and upper-triangular matrices with entries
in F . Show that Dn (F ) and Un (F ) are subspaces and subrings of Mn (F ). What are their dimensions?
Now we’ll identify F n with the space of column vectors Mn,1 (F ). Notice that if A ∈ Mm,n (F ) and v ∈ F n then
A · v ∈ Mm,n (F ) · Mn,1 (F ) ⊆ Mm,1 (F ) = F m , so multiplication gives us a map from F n to F m and we note that it is linear.
Exercise 7.5. Check directly that left multiplication by A is a linear map from F n to F m .
Exercise 7.6. Show that if n ≥ 2 then there are nonzero nilpotent elements in Mn (F ).
Proposition 7.7. Let A ∈ Mn (F ). Then A is either a unit or a left and right zero divisor.
Proof. Let T : Mn (F ) → Mn (F ) be the map T (X) = A · X. Then T is a linear map. If T is one-to-one then T is onto by the
rank plus nullity theorem, so if T is one-to-one there is some matrix B such that T (B) = AB = I. Now we claim that BA = I.
To see this, notice that T (BA − I) = A(BA − I) = (AB)A − A = 0, so BA = I since T is one-to-one.
Next, if T is not one-to-one then it has a non-trivial kernel, so there is some nonzero B such that AB = 0 and so A is a left
zero divisor. Working with the right-multiplication map gives that A is also a right zero divisor.
□
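A small sympy illustration of Proposition 7.7: the matrix A below is singular (so not a unit), and B, chosen by hand so that its columns lie in the kernel of A and its rows lie in the left kernel, witnesses that A is both a left and a right zero divisor.

```python
import sympy as sp

A = sp.Matrix([[1, 2], [2, 4]])    # rank 1, hence not a unit
B = sp.Matrix([[4, -2], [-2, 1]])  # built from the kernel vector (-2, 1)

print(A * B == sp.zeros(2, 2))  # True: A is a left zero divisor
print(B * A == sp.zeros(2, 2))  # True: A is a right zero divisor
```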
8. Universal Property
We show that linear maps from a vector space V to W are in bijection with set maps from B to W where B is a basis for
V.
More specifically we prove the following. If f : B → W is a map from the set B to W (i.e., it is a set map), then there
exists a unique linear map F : V → W with the property that F |B = f . We call this a universal property of linear maps,
which allows us to understand linear maps in terms of what they do on a basis.
Proof. Let B be our basis. Then if v ∈ V there is a unique linear combination
∑_{b∈B} λb b
equal to v with λb ∈ F and λb = 0 for all but finitely many b. We define
F (v) = ∑_{b∈B} λb f (b).
Notice that this F is well-defined since for v ∈ V there is a unique combination that gives us v. To see that F is linear,
notice that if v = ∑_{b∈B} λb b then λv = ∑_{b∈B} (λ · λb ) b, and so by definition
F (λv) = ∑_{b∈B} (λ · λb ) f (b) = λ ( ∑_{b∈B} λb f (b) ),
by the distributive law. The right-hand side is now equal to λF (v). Similarly, if v = ∑_b λb b and w = ∑_b γb b then
v + w = ∑_b (λb + γb ) b and so F (v + w) = ∑_b (λb + γb ) f (b) = F (v) + F (w), and so F is a linear map with the desired
property.
To see that F is unique, suppose that G is another linear map with G|B = f . Then H := F − G : V → W is a linear map
by Proposition 3.13 and H(b) = 0 for all b ∈ B. By linearity it follows that H(∑_b λb b) = 0 for all linear combinations of B
and hence H is identically zero on V .
□
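For V = Rⁿ with the standard basis, the construction in this proof can be written out explicitly: an arbitrary assignment of values to the basis vectors extends to a unique linear map, and the coordinates of v play the role of the λb ’s. A minimal numpy sketch (the function name is illustrative), anticipating the next section:

```python
import numpy as np

def extend_linearly(basis_images):
    """Given w_i = f(e_i), return the unique linear map F with F(e_i) = w_i."""
    W = np.column_stack(basis_images)  # j-th column is f(e_j)
    return lambda v: W @ v             # F(v) = sum_i v_i f(e_i)

f_images = [np.array([1., 0.]), np.array([2., 1.]), np.array([0., -3.])]
F = extend_linearly(f_images)
print(F(np.array([1., 1., 1.])))       # [ 3. -2.] = f(e1) + f(e2) + f(e3)
```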
9. Matrices and linear maps from finite-dimensional vector spaces
Let us now consider finite-dimensional vector spaces. If V and W are nonzero finite-dimensional vector spaces over a field
F then V ≅ F n and W ≅ F m for some n, m ∈ N. Thus to understand linear maps from V to W it suffices to understand
linear maps from F n to F m . For what follows, it will be convenient to write our vectors in F d as column vectors
⎛ x1 ⎞
⎜ x2 ⎟
⎜  ⋮  ⎟
⎝ xd ⎠
Notice we have a standard basis for F n given by the column vectors
ei := (0, . . . , 0, 1, 0, . . . , 0) ∈ F n
for i = 1, . . . , n, where we have a 1 in the i-th position and zeros in all other positions. We think of the standard basis as an
ordered list e1 , . . . , en rather than just as a set in what follows.
By the universal property, a linear map T : F n → F m is uniquely determined by a choice of where we send the basis
vectors e1 , . . . , en . Suppose that T (ej ) = vj ∈ F m . We write
vj = (a1,j , a2,j , . . . , am,j ) ∈ F m ,
regarded as a column vector.
Notice that we can make an m × n array A from this data by making a rectangular array with m rows and n columns in
which the j-th column is vj . That is,
A =
⎛ a1,1  a1,2  · · ·  a1,n ⎞
⎜ a2,1  a2,2  · · ·  a2,n ⎟
⎜   ⋮     ⋮    ⋱     ⋮   ⎟
⎝ am,1  am,2  · · ·  am,n ⎠
We say that A is an m × n matrix whose entries are in the field F and in the case of the matrix formed from the linear map
T as above, we call A the matrix of T with respect to the basis e1 , . . . , en .
Recall that Mm,n (F ) denotes the set of m × n matrices with entries in the field F , and that Mm,n (F ) is an F -vector space
with addition given by entry-wise addition of matrices and scalar multiplication given by multiplying each entry of the matrix
by the given scalar.
Exercise 9.1. Show that Mm,n (F ) ≅ F mn as an F -vector space.
Proposition 9.2. If A ∈ Mm,n (F ) then there is a unique linear map T : F n → F m such that A is the matrix of T with
respect to the standard basis of F n .
Proof. We let T be the unique linear transformation that sends ei to the i-th column of A. Then the matrix of T is A, and
if T ′ is another linear transformation with this property then T (ei ) = T ′ (ei ) for all i, so T − T ′ is a linear map that sends ei
to 0 for all i and hence it must be the zero map by the universal property.
□
10. Matrix multiplication
Suppose that T : F n → F m and S : F p → F n are two linear maps. Then T ◦ S : F p → F m is also a linear map. Let
A be the m × n matrix of T with respect to the standard basis for F n and let B be the n × p matrix of S with respect to
the standard basis for F p . Then T ◦ S : F p → F m is a linear transformation and we let C denote its matrix with respect to
the standard basis for F p . What is C in terms of A and B? We can figure this out. We let bi,j denote the (i, j)-entry of B.
Then by definition of B, S(ej ) = b1,j e1 + · · · + bn,j en . Since T is linear, we then have that
T (S(ej )) = b1,j T (e1 ) + · · · + bn,j T (en ).
So the (i, j)-entry of the matrix of T ◦ S is just b1,j ai,1 + · · · + bn,j ai,n . We can switch the order to get that the (i, j)-entry
of C is
C(i, j) = ∑_{k=1}^{n} ai,k bk,j .
We call C the product of A and B.
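As a numerical check of this computation, one can build the matrix of T ◦ S column by column from (T ◦ S)(ej ) and compare it with the product A · B (a spot check on one example, not a proof):

```python
import numpy as np

A = np.array([[1, 2, 0],
              [0, 1, 1]])   # matrix of T : F^3 -> F^2
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])      # matrix of S : F^2 -> F^3

# Column j of the matrix of T o S is (T o S)(e_j) = A (B e_j).
C = np.column_stack([A @ (B @ e) for e in np.eye(2)])
print(np.array_equal(C, A @ B))  # True
```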