SOLUTION TO ASSIGNMENT 3 - MATH 251, WINTER 2007
1. Let T : V → V be a nilpotent linear operator. Prove that if n = dim(V) then T^n ≡ 0. Show that for every n ≥ 2 there exists a vector space V of dimension n and a nilpotent linear operator T : V → V such that T^{n−1} ≢ 0.
Proof. We prove the result by induction on n. Suppose n = 1; we then have a linear map T : F → F. T cannot be surjective (else it would be an isomorphism, and no power of an isomorphism is zero), hence its image must be the zero subspace, which shows that T = T^1 = 0. Now for the induction step.
Again, T : V → V cannot be surjective, because it would then be an isomorphism and in particular T^k would be an isomorphism for every k. Thus W = Im(T) has dimension at most n − 1. Note that T(W) ⊂ T(V) = W, so we may view T as a (still nilpotent) map from W to itself. By the induction hypothesis T^{dim(W)}(W) = {0}, and since dim(W) ≤ n − 1 also T^{n−1}(W) = {0}. Hence
T^n(V) = T^{n−1}(T(V)) = T^{n−1}(W) = {0}.
To show this result is optimal, consider the linear transformation on F^n given by
T(α1, . . . , αn) = (0, α1, . . . , αn−1).
One finds that T^{n−1}(α1, . . . , αn) = (0, . . . , 0, α1), and so T^{n−1} is not the zero transformation, though T is nilpotent. (Clearly T^n = 0.)
□
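As a quick numerical sanity check of the optimality example (not part of the proof), the shift operator above can be written down in NumPy for, say, n = 4:

```python
import numpy as np

# The shift T(a1,...,an) = (0, a1, ..., a_{n-1}) on F^n, here with n = 4 over R.
n = 4
T = np.zeros((n, n), dtype=int)
for i in range(1, n):
    T[i, i - 1] = 1  # T sends e_i to e_{i+1}

Tn1 = np.linalg.matrix_power(T, n - 1)
Tn = np.linalg.matrix_power(T, n)
print(Tn1.any(), Tn.any())  # True False: T^{n-1} is non-zero, T^n is zero
```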
2. (a) Find a linear map T : R^3 → R^3 whose image is generated by (1, 2, 3) and (3, 2, 1). Here 'find' means represent by a matrix with respect to the standard basis.
(b) Find a linear map T : R^4 → R^3 whose kernel is generated by (1, 2, 3, 4), (0, 1, 0, 1).
Proof. (a) Let us define T by T(1, 0, 0) = (1, 2, 3), T(0, 1, 0) = (3, 2, 1), T(0, 0, 1) = (0, 0, 0), or in terms of a matrix,

        [ 1 3 0 ]
    T = [ 2 2 0 ] ,
        [ 3 1 0 ]

so that T(x1, x2, x3) is this matrix applied to the column vector (x1, x2, x3). Clearly the image is spanned by (1, 2, 3), (3, 2, 1).
(b) We claim that B = {(1, 2, 3, 4), (0, 1, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)} is a basis for R^4. It is enough to show that B spans R^4 (then it must be a minimal spanning set). For that, it is enough to show that e1, . . . , e4 are in Span(B). For e2, e3 we have it by construction. We also obtain e4 as (0, 1, 0, 1) − (0, 1, 0, 0). Once we know e2, e3, e4 are in the span it also follows that e1 = (1, 2, 3, 4) − 2(0, 1, 0, 0) − 3(0, 0, 1, 0) − 4(0, 0, 0, 1) is in the span.
We can now define a linear transformation T uniquely by requiring

    T(1, 2, 3, 4) = T(0, 1, 0, 1) = 0,    T(0, 1, 0, 0) = (0, 1, 0),    T(0, 0, 1, 0) = (0, 0, 1).

Then we have

              [ 0 0 0 0 ]
    St[T]B =  [ 0 0 1 0 ] .
              [ 0 0 0 1 ]

We also have

                             [ 1 0 0 0 ]^(-1)   [  1 0 0  0 ]
    B M St = (St M B)^(-1) = [ 2 1 1 0 ]      = [ -4 0 0  1 ] .
                             [ 3 0 0 1 ]        [  2 1 0 -1 ]
                             [ 4 1 0 0 ]        [ -3 0 1  0 ]

Thus,

                                [ 0 0 0 0 ] [  1 0 0  0 ]   [  0 0 0  0 ]
    St[T]St = St[T]B · B M St = [ 0 0 1 0 ] [ -4 0 0  1 ] = [  2 1 0 -1 ] .
                                [ 0 0 0 1 ] [  2 1 0 -1 ]   [ -3 0 1  0 ]
                                            [ -3 0 1  0 ]
By construction, the kernel of T contains the two vectors (1, 2, 3, 4), (0, 1, 0, 1) and so is at least 2-dimensional, while the image of T contains the vectors (0, 1, 0), (0, 0, 1) and so is at least 2-dimensional too. It follows from 4 = dim(Ker(T)) + dim(Im(T)) that both the kernel and image are 2-dimensional, and so
Ker(T ) = Span({(1, 2, 3, 4), (0, 1, 0, 1)}),
Im(T ) = Span({(0, 1, 0), (0, 0, 1)}).
□
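The matrix computed above can be checked numerically; the following NumPy sketch (my own verification, not part of the solution) confirms the kernel and the image:

```python
import numpy as np

# St[T]St computed above for problem 2(b).
A = np.array([[0, 0, 0, 0],
              [2, 1, 0, -1],
              [-3, 0, 1, 0]])

# The two prescribed kernel vectors map to zero.
print(A @ np.array([1, 2, 3, 4]))   # [0 0 0]
print(A @ np.array([0, 1, 0, 1]))   # [0 0 0]
# The remaining basis vectors map to (0,1,0) and (0,0,1).
print(A @ np.array([0, 1, 0, 0]))   # [0 1 0]
print(A @ np.array([0, 0, 1, 0]))   # [0 0 1]
```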
3. Let W = {(x, y, z, w) : x + y + z + w = 0, x − y + 2z − w = 0}, U1 = {(x, x, x, x) : x ∈ R} be subspaces of R^4. Find a subspace U ⊃ U1 such that R^4 = U ⊕ W. Let T be the projection of R^4 on U along W. Write the matrix representing T with respect to the standard basis.
Proof. We begin by finding a basis for W. View W as the kernel of the linear transformation S : R^4 → R^2, S(x, y, z, w) = (x + y + z + w, x − y + 2z − w). We have S(1, 0, 0, 0) = (1, 1) and S(0, 1, 0, 0) = (1, −1), which are linearly independent. We conclude that S is surjective and so dim(W) = 4 − 2 = 2. The vectors (−3, 1, 2, 0), (−3, 0, 2, 1) are in W and are linearly independent, and so form a basis for W. Since (1, 1, 1, 1) is not in W we have that {(−3, 1, 2, 0), (−3, 0, 2, 1), (1, 1, 1, 1)} is independent. By Steinitz's lemma, for some ei we have that {(−3, 1, 2, 0), (−3, 0, 2, 1), (1, 1, 1, 1), ei} is a basis. We try e1 and it works. We can now let
W = Span({(−3, 1, 2, 0), (−3, 0, 2, 1)}),
U = Span({(1, 1, 1, 1), (1, 0, 0, 0)}).
We have R^4 = W ⊕ U, because U + W = R^4 and dim(U) + dim(W) = 4. Let

    B = {b1, b2, b3, b4} = {(−3, 1, 2, 0), (−3, 0, 2, 1), (1, 1, 1, 1), (1, 0, 0, 0)}.

Now, define T as

    T(b1) = 0,  T(b2) = 0,  T(b3) = b3,  T(b4) = b4.
Thus,

              [ 0 0 1 1 ]
    St[T]B =  [ 0 0 1 0 ] .
              [ 0 0 1 0 ]
              [ 0 0 1 0 ]

We conclude

                                [ 0 0 1 1 ] [ -3 -3 1 1 ]^(-1)          [ 3 -3  6 -3 ]
    St[T]St = St[T]B · B M St = [ 0 0 1 0 ] [  1  0 1 0 ]       = (1/3) [ 0  2 -1  2 ] .
                                [ 0 0 1 0 ] [  2  2 1 0 ]               [ 0  2 -1  2 ]
                                [ 0 0 1 0 ] [  0  1 1 0 ]               [ 0  2 -1  2 ]
□
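Again the result can be verified numerically. The sketch below (NumPy, my own check rather than part of the solution) confirms that the computed matrix is a projection that kills W and fixes U:

```python
import numpy as np

# St[T]St from problem 3, entered as (1/3) times an integer matrix.
P = np.array([[3, -3, 6, -3],
              [0, 2, -1, 2],
              [0, 2, -1, 2],
              [0, 2, -1, 2]]) / 3.0

print(np.allclose(P @ P, P))                                  # True: P^2 = P
print(np.allclose(P @ np.array([-3, 1, 2, 0]), 0))            # True: P kills W
print(np.allclose(P @ np.array([-3, 0, 2, 1]), 0))            # True
print(np.allclose(P @ np.array([1, 1, 1, 1]), [1, 1, 1, 1]))  # True: P fixes U
print(np.allclose(P @ np.array([1, 0, 0, 0]), [1, 0, 0, 0]))  # True
```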
4. Let V and W be any two finite dimensional vector spaces over a field F. Let T : V → W be a linear
map. Prove that there are bases of V and W such that with respect to those bases T is represented by a
matrix composed of 0’s and 1’s only. If T is an isomorphism, prove that with respect to suitable bases it
is represented by the identity matrix.
Proof. Let {b1, . . . , bm} be a basis for Ker(T) and complete it to a basis of V by {c1, . . . , cn}. In the proof of the theorem dim(V) = dim(Ker) + dim(Im), we showed that {T(c1), . . . , T(cn)} are linearly independent vectors of W. Let B = {b1, . . . , bm, c1, . . . , cn}. Choose a basis D = {d1, . . . , dt} of W such that di = T(ci), i = 1, 2, . . . , n; this is possible because the T(ci) are linearly independent. Then, by definition, we find that

    D[T]B = [ 0  I_n ]
            [ 0   0  ] ,

where I_n is the n × n identity matrix, the zero block on the left has m columns, and the bottom t − n rows are zero. If T is an isomorphism then Ker(T) = {0} and T is surjective, so m = 0 and t = n, and the matrix above is just the identity matrix I_n.
□
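The construction in the proof can be imitated numerically. In the sketch below (NumPy; the concrete map and the choice of singular vectors as the bases are my own, not from the solution), bases B of V and D of W are built for a sample T : R^4 → R^3, and the resulting matrix is checked to be a zero block next to an identity block:

```python
import numpy as np

# A sample T : R^4 -> R^3 in the standard bases (my own example).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # row3 = row1 + row2, so rank(A) = 2

U, s, Vt = np.linalg.svd(A)
r = int((s > 1e-10).sum())         # r = dim Im(T); here r = 2, m = 4 - r = 2

# Basis B of V: kernel vectors b_1..b_m first, then c_1..c_r with T(c_i) != 0.
B = np.hstack([Vt[r:].T, Vt[:r].T])
# Basis D of W: d_i = T(c_i) first, completed to a basis of W.
D = np.hstack([A @ Vt[:r].T, U[:, r:]])

M = np.linalg.solve(D, A @ B)      # the matrix D[T]B
expected = np.zeros_like(M)
expected[:r, A.shape[1] - r:] = np.eye(r)
print(np.allclose(M, expected, atol=1e-8))   # True: M = ( 0 | I_r ; 0 | 0 )
```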
5. Let V be a vector space of dimension n and let W be a vector space of dimension m, both over the same field F. Prove that Hom(V, W) ≅ M_{m×n}(F) as vector spaces over F. Here M_{m×n}(F) stands for matrices with n columns and m rows with entries in F.
Proof. That was proven in class, in fact.
□
6. Consider the transformation that rotates the plane R^2 by angle θ counter-clockwise. Write this transformation as a matrix in the standard basis. Write it also as a matrix with respect to the basis (1, 1), (1, 0).
Proof. The vector (1, 0) goes to (cos θ, sin θ) and the vector (0, 1) goes to (− sin θ, cos θ). The matrix is therefore:

    [ cos θ  − sin θ ]
    [ sin θ    cos θ ] .

This is St[T]St. Let B = {(1, 1), (1, 0)}; then

    St M B = [ 1 1 ]    and    B M St = (St M B)^(-1) = [ 0  1 ] .
             [ 1 0 ]                                    [ 1 −1 ]

Thus,

    B[T]B = B M St · St[T]St · St M B
          = [ 0  1 ] [ cos θ  − sin θ ] [ 1 1 ]
            [ 1 −1 ] [ sin θ    cos θ ] [ 1 0 ]
          = [ cos θ + sin θ       sin θ      ]
            [    −2 sin θ    cos θ − sin θ ] .
□
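A quick numerical check of the change of basis (a NumPy sketch, not part of the solution):

```python
import numpy as np

theta = 0.7  # any angle will do
c, s = np.cos(theta), np.sin(theta)

R = np.array([[c, -s], [s, c]])          # St[T]St, the rotation matrix
StMB = np.array([[1., 1.], [1., 0.]])    # columns are the basis vectors (1,1), (1,0)
BMSt = np.linalg.inv(StMB)               # equals [[0, 1], [1, -1]]

BTB = BMSt @ R @ StMB
claimed = np.array([[c + s, s], [-2*s, c - s]])
print(np.allclose(BTB, claimed))         # True
```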
7. Let G be a bipartite regular graph, whose set of left vertices is L and right vertices is R, and let dL denote the degree of the left vertices. For a set S ⊂ L denote by ∂S := {r ∈ R : r is connected to a vertex in S}. Suppose that |L| = n, |R| = 3n/4. Suppose that G has the following expansion property: for every S ⊂ L such that |S| ≤ n/(10 dL) we have |∂S| ≥ (5 dL/8)|S|. Prove that for every such S there is a vertex rS (many, in fact) such that rS is a neighbor of exactly one element of S.
Consider now the linear code defined by the "half adjacency matrix" M, whose columns are indexed by the elements of L and rows by the elements of R, having 1 as an entry if the corresponding vertices are connected and 0 otherwise. (Refer to the previous assignment.) Prove that if x is a non-zero vector in the code then x has more than n/(10 dL) non-zero coordinates. Conclude that we get a code where the distance between any two code words is at least n/(10 dL) and whose rate is at least 1/4.
Proof. Let dS(r) be the number of vertices of S that a vertex r ∈ R is connected to. Counting the edges leaving S in two ways gives |S| · dL = Σ_{r∈∂S} dS(r), which implies

    (1/|∂S|) Σ_{r∈∂S} dS(r) = |S| · dL / |∂S| ≤ |S| · dL / ((5 dL/8)|S|) = 8/5,

and so the average of dS(r) is less than 2. Since dS(r) ≥ 1 for every r ∈ ∂S, that implies that there are vertices r with dS(r) = 1 (many, in fact).
Now suppose, for contradiction, that x is a non-zero code word of weight at most n/(10 dL), and let S be its support, the set of indices i such that xi = 1. By the above there is a row j of M, corresponding to a vertex j ∈ R, that connects to exactly one vertex in S. That means that the j-th coordinate of Mx, which is Σ_{i=1}^n m_{ji} xi, is one. Indeed, the sum has exactly one non-zero summand, the one corresponding to m_{j i0} where i0 is the unique element of S to which j is connected. Therefore, x is not in the kernel of M and we have our contradiction. Finally, M has 3n/4 rows and n columns, so dim(Ker(M)) ≥ n − 3n/4 = n/4, and the rate of the code is at least 1/4.
□
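The distance argument can be illustrated by brute force on a toy parity-check matrix (the matrix below is a small example of my own, not the expander of the problem): the code is Ker(M) over F_2, and enumerating it recovers the minimum weight directly.

```python
import itertools
import numpy as np

# A small 3 x 6 parity-check matrix over F_2; rate >= 1 - 3/6 = 1/2.
M = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 1, 1, 0, 0]])

n = M.shape[1]
codewords = []
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    if not (M @ x % 2).any():    # x is in the code iff Mx = 0 mod 2
        codewords.append(x)

min_weight = min(int(x.sum()) for x in codewords if x.any())
print(len(codewords), min_weight)   # 8 3
```

For this M the kernel is 3-dimensional, so there are 2^3 = 8 code words, and the minimum weight (hence the distance of the code) is 3.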