Math 416 Homework 5. Solutions.
1. Throughout this problem, V is a vector space with dim(V ) = n.
Recall the definition of L(V ) from class. Compute its dimension.
Next, define in words what the set L(L(V )) is comprised of. Prove that it is a vector space,
and compute its dimension. Give an example of a basis of L(L(V )) and describe what each
element of this basis is.
Finally, compute the dimension of L(L(L(L(V)))). What is the dimension of L(L(L(L(R^2))))?
Solution: There are two approaches to compute dim(L(V )):
(a) The most straightforward way to compute the dimension of a vector space is to write
down a basis, and count it.
Pick and fix a basis β = {v1, v2, . . . , vn} for V. Then define a family of linear transformations Lij by

Lij(vi) = vj,    Lij(vk) = 0 for k ≠ i.

Clearly these are all in L(V), and clearly there are n^2 of them. If we can show that they
span, and are independent, then we have a basis and we are done.
First, let us show that they span. Let T ∈ L(V), and define w1, . . . , wn by T vi = wi.
Write wi = ∑_j αij vj; the αij exist and are unique since the v's form a basis. Then we
claim that

T = ∑_{i,j} αij Lij.

To see this, note that

T vk = ∑_{i,j} αij Lij vk = ∑_j αkj vj = wk.

Since we can write T as a linear combination of the Lij, this set spans.
To show that it is independent, let us assume that

0 = ∑_{i,j} αij Lij.

Now choose some k. Then

0 = ∑_{i,j} αij Lij vk = ∑_j αkj vj,

and since the v's form a basis, this means that αkj = 0 for all j. Since this works for any
k, this means that αkj is zero for all k, j.
(b) The simpler, but less “hands on”, way to prove this is to use Theorem 2.20. Since
there exists an isomorphism from L(V) to Mn×n(R), and we can only have isomorphisms
between vector spaces of the same dimension, and we have computed dim(Mn×n(R)) = n^2
in an earlier homework, then we are done. Of course, we didn’t know this theorem when
this homework was assigned.
Since L(V) is a vector space whose dimension is (dim V)^2, then we have no trouble computing
that

dim L(L(V)) = (dim L(V))^2 = ((dim V)^2)^2 = (dim V)^4 = n^4,

and continuing, dim(L(L(L(L(V))))) = n^16. In particular, if V = R^2, then this
dimension is 2^16 = 65536.
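To sanity-check the arithmetic, here is a short Python sketch (the function name is my own, not from the homework) that iterates the rule dim L(U) = (dim U)^2:

```python
# A sketch verifying the dimension count: each application of L(.)
# squares the dimension, since L(U) is isomorphic to square matrices.
def dim_nested_L(dim_V: int, depth: int) -> int:
    """Dimension of L(L(...L(V)...)) with `depth` nested L's."""
    d = dim_V
    for _ in range(depth):
        d = d ** 2  # dim L(U) = (dim U)^2
    return d

print(dim_nested_L(2, 2))  # dim L(L(R^2)) = 2^4 = 16
print(dim_nested_L(2, 4))  # dim L(L(L(L(R^2)))) = 2^16 = 65536
```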
2. Why is it true that when you look in a mirror, it flips left-to-right but does not flip up-to-down?
Solution: There’s a short answer and a long answer to this question.
The short answer is “neither”. This is because it flips back-to-front.
The long answer would be that the previous short answer is not entirely satisfactory, since we
certainly perceive the mirror as flipping left-to-right. For example, if we hold a newspaper in
front of us, the letters in the mirror will always look “backwards” and never “upside down”.
But notice that we are now talking about perception. There is no natural law at work here,
just our perception of it. The main reason we perceive the “backwards” direction is that our
brain is trying to invert the transformation, and of course we have a cognitive bias towards
spinning in a circle versus flipping upside down.
To see this, first imagine that you are standing facing away from the mirror, holding a
newspaper with large type in front of you. This is what your brain is expecting to see, and
when you mentally rotate that image to the one in the mirror, you end up making the words
“backwards”.
But of course, you could also imagine craning your head back really far, looking at yourself
upside down, then (if one is nimble enough) putting the top of your head on the floor and then
doing a handstand. Then you would look upside down compared to your mental image.
Of course, it is clear which way our brain prefers to invert the transformation.
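The short answer can also be checked with a small linear-algebra sketch. The axis convention here is my own assumption: x = left-right, y = up-down, z = toward the mirror. The mirror is diag(1, 1, −1), a pure back-to-front flip; composing it with the 180° turn the brain mentally “undoes” gives a pure left-right flip and never an up-down one:

```python
import numpy as np

mirror = np.diag([1.0, 1.0, -1.0])        # flips back-to-front only
turn_around = np.diag([-1.0, 1.0, -1.0])  # 180 deg rotation about the vertical axis

# Comparing the mirror image against your mentally turned-around self:
combined = mirror @ turn_around
print(combined)  # diag(-1, 1, 1): a pure left-right flip
```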
3. Recall that we define the transformation Tθ : R2 → R2 that rotates vectors by θ. Let Tx be
the transformation that reflects in the x-axis.
Show that

Tx ◦ Tθ ≠ Tθ ◦ Tx.
Next, show that there is some angle ψ such that
Tx ◦ Tψ = Tθ ◦ Tx .
What is the relationship between θ and ψ? Discuss the geometric meaning of this computation.
Solution: We know from class, or can compute now, that the matrix of rotation wrt the
standard basis is

Aθ = [Tθ] = [ cos θ    sin θ ]
            [ −sin θ   cos θ ].

Now, the matrix of reflection in the x-axis can be computed as follows:

Tx(1, 0) = (1, 0),    Tx(0, 1) = (0, −1),

so we have

B = [Tx] = [ 1   0 ]
           [ 0  −1 ].
Now we compute

Aθ B = [ cos θ    sin θ ] [ 1   0 ]  =  [ cos θ    −sin θ ]
       [ −sin θ   cos θ ] [ 0  −1 ]     [ −sin θ   −cos θ ],

B Aθ = [ 1   0 ] [ cos θ    sin θ ]  =  [ cos θ     sin θ ]
       [ 0  −1 ] [ −sin θ   cos θ ]     [ sin θ    −cos θ ].

These are not equal unless sin θ = 0, so the transformations do not commute in general.
The next question is: what angle should we choose so that
Aψ B = BAθ ?
Looking at the computations above, this would mean that we have

[ cos ψ    −sin ψ ]   [ cos θ     sin θ ]
[ −sin ψ   −cos ψ ] = [ sin θ    −cos θ ],

or ψ = −θ.
Geometrically, this makes some sense (recalling the mirror problem above). Think what
happens to our reflection when we rotate a certain direction: our mirror image rotates in
the opposite direction that we do. If I rotate in such a way as to bring my left shoulder
closer to the mirror, my mirror image rotates, but it is his right shoulder that is getting
closer, &c.
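The two matrix computations can be checked numerically. This sketch uses the same sign convention as the solution; the particular angle θ = 0.7 is my own arbitrary choice of a generic angle:

```python
import numpy as np

def A(theta):
    """Rotation matrix in the convention used above: [cos, sin; -sin, cos]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

B = np.array([[1.0, 0.0], [0.0, -1.0]])  # reflection in the x-axis

theta = 0.7  # any angle with sin(theta) != 0
print(np.allclose(A(theta) @ B, B @ A(theta)))   # False: they do not commute
print(np.allclose(A(-theta) @ B, B @ A(theta)))  # True: psi = -theta works
```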
4. Write down five linear maps T0 , T1 , T2 , T3 , T4 , where for all k,
Tk : P4 (R) → P4 (R),
and the rank of Tk is k.
Solution: Since our maps are defined on P4, which is a five-dimensional space, by Rank–Nullity
defining a map Tk with rank k is the same as finding a map Tk with a (5 − k)-dimensional
nullspace.
So, if we can find maps with larger and larger nullspaces, this would also solve the problem.
Now, we already know and love a map on P4 with a one-dimensional nullspace: our friend the
derivative. So, let us define T4 p = p′. We know that the nullspace of T4 is exactly the constant
polynomials, and this is a one-dimensional space, so the rank of T4 is 4. (Alternatively, we
could note that if we are plugging in polynomials of degree less than or equal to four, then the
derivative will always be a polynomial of degree less than or equal to three, i.e. the range of
T4 is P3 (R), which is a four-dimensional space.)
Now, the next thing we would guess is to iterate this map, i.e. let us define

T3 p = p′′.

What is in the nullspace of T3? Note that if p′′(x) = 0, then p(x) = ax + b. Clearly this is a
two-dimensional space, since {1, x} forms a basis for this space. Since T3 has nullity two, it
must have rank 3.
Continuing, we choose

T2 p = p′′′,    T1 p = p′′′′,    T0 p = p′′′′′.
In each case, we see that the rank of Tk is, in fact, k.
A completely different scheme would be as follows: let us define

Tk x^j = x^j for j < k,    Tk x^j = 0 for j ≥ k.

In other words,

T2(ax^4 + bx^3 + cx^2 + dx + e) = dx + e,

etc. From construction, it is clear that the rank of Tk is k, since a basis for R(Tk) is
{1, x, . . . , x^{k−1}}.
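The first scheme can be checked by writing the derivative as a 5 × 5 matrix with respect to the basis {1, x, x^2, x^3, x^4} and computing ranks. A minimal sketch (numpy's matrix_rank does the counting):

```python
import numpy as np

# Derivative on P4(R) wrt {1, x, x^2, x^3, x^4}: d/dx x^j = j x^(j-1),
# so column j carries the entry j in row j - 1.
D = np.zeros((5, 5))
for j in range(1, 5):
    D[j - 1, j] = j

# T_k is the (5 - k)-th derivative; its rank should be exactly k.
for k in range(4, -1, -1):
    Dk = np.linalg.matrix_power(D, 5 - k)
    print(k, np.linalg.matrix_rank(Dk))  # prints k, k
```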
5. Let V, W be vector spaces, with dim(V ) = n, dim(W ) = m, and n > m.
(a) Show that there is no one-to-one linear transformation T : V → W .
(b) Show that there is no onto linear transformation T : W → V (notice that V, W have
flipped in this expression!)
(c) Show that a linear map T : V → W need not be onto by illustrating an example where
it is not.
Solution:
(a) From Rank–Nullity, we have
dim(N (T )) + dim(R(T )) = dim(V ).
Since dim(V ) = n and dim(R(T )) ≤ dim(W ) = m (see next problem!), this means that
dim(R(T )) < dim(V ) and thus dim(N (T )) > 0. This means that something nontrivial
is in the kernel of T and therefore T is not one-to-one.
(b) Again using Rank–Nullity, we have
dim(N (T )) + dim(R(T )) = dim(W ).
(Notice that the roles of V and W have switched!) This means, in particular, that
dim(R(T )) ≤ dim(W ) < dim(V ). Since dim(R(T )) < dim(V ), then R(T ) 6= V and T is
not onto.
(c) The previous problem should have many examples! Pick one, or all, of them.
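Parts (a) and (b) can be illustrated numerically. With dim V = 5 and dim W = 3, a linear map is just a matrix of the appropriate shape; the random matrices below are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # dim V = 5 > 3 = dim W

# (a) T: V -> W is an m x n matrix; by Rank-Nullity its nullity is at
# least n - m > 0, so T is never one-to-one.
T = rng.standard_normal((m, n))
nullity = n - np.linalg.matrix_rank(T)
print(nullity)  # at least 2: the nullspace is nontrivial

# (b) S: W -> V is an n x m matrix; its rank is at most m < n = dim V,
# so its range can never be all of V and S is never onto.
S = rng.standard_normal((n, m))
print(np.linalg.matrix_rank(S))  # at most 3 < 5
```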
6. Let V be a vector space and W a subspace of V . Show that dim(W ) ≤ dim(V ). Moreover,
show that if dim(W ) = dim(V ), then V = W .
Solution: Let W be a subspace of V , and let us say that dim(V ) = n, dim(W ) = m. Then W
has a basis {w1, . . . , wm}. Since this set is independent, and is a set of vectors in V, then it
can be extended to a basis for V . This means that a basis of V is at least as large as the basis
of W , and thus dim(W ) ≤ dim(V ).
Now, let us assume that dim(W ) = n as well. The list of vectors mentioned above is still
linearly independent, but now it has a length equal to the dimension of V . As we know, any
independent set of n vectors in V will form a basis, and therefore the basis of W is also a
basis for V. Thus W contains a set spanning V, giving W ⊇ V; since also W ⊆ V, we conclude W = V.
7. Recall Theorem 2.6: that if V, W are vector spaces, if {v1 , . . . , vn } is a basis for V , and
{w1 , . . . , wn } are any vectors in W , then there is a unique linear transformation T : V → W
with T vi = wi .
Show that the assumption that the vi form a basis is necessary to make this work; in particular,
give one example each of V, W , and v1 , . . . , vn and w1 , . . . , wn such that
(a) there is no linear transformation T : V → W with T vi = wi ,
(b) there is more than one linear transformation T : V → W with T vi = wi .
Solution:
(a) Note that if v2 = 2v1 then T v2 = 2T v1, by linearity. Thus, if we choose v2 = 2v1 and
w2 ≠ 2w1, then there is no linear T with T v1 = w1 and T v2 = w2.
More generally, if there is any linear relation amongst the vectors vi , this is inherited by
the vectors T vi , so if there is a linear relation for the vi that is not shared by the wi , then
there is no transformation that matches vi to wi .
For example, if we have any nontrivial relation of the form
α1 v1 + · · · + αn vn = 0,
then
α1 T v1 + · · · + αn T vn = T(α1 v1 + · · · + αn vn) = T(0) = 0,
so that T vi must satisfy the same linear relation. As one example, if the vi are linearly
dependent then so must be the T vi .
(b) In the other direction, if the vi do not span, then T will not be fully specified and will not
be unique.
For example, if we consider T : R3 → R3 where we define
T(1, 0, 0) = w1,    T(0, 1, 0) = w2,        (1)

then we have not specified T, since T(x, y, z) will depend on how we choose T(0, 0, 1).
This means that there are infinitely many T’s consistent with (1).
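Both failure modes can be demonstrated concretely; the specific vectors and matrices below are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) v2 = 2 v1 but w2 != 2 w1: no linear T can send v_i to w_i.
v1, v2 = np.array([1.0, 0.0]), np.array([2.0, 0.0])   # v2 = 2 v1
w1, w2 = np.array([1.0, 1.0]), np.array([0.0, 5.0])   # w2 != 2 w1
T = rng.standard_normal((2, 2))            # an arbitrary candidate T
print(np.allclose(T @ v2, 2 * (T @ v1)))   # True for EVERY matrix T
print(np.allclose(w2, 2 * w1))             # False: so T v_i = w_i is impossible

# (b) In R^3, prescribing only T(e1) and T(e2) leaves T(e3) free, so T
# is not unique. Two distinct maps agreeing on e1 and e2:
T1 = np.array([[1., 2., 0.], [3., 4., 0.], [5., 6., 0.]])
T2 = np.array([[1., 2., 7.], [3., 4., 8.], [5., 6., 9.]])
e1, e2 = np.eye(3)[:, 0], np.eye(3)[:, 1]
print(np.allclose(T1 @ e1, T2 @ e1), np.allclose(T1 @ e2, T2 @ e2))  # True True
print(np.array_equal(T1, T2))                                        # False
```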
8. (q.v. Friedberg, Insel, Spence 2.2.2).
Solution:
(a)

[ 2  −1 ]
[ 3   4 ]
[ 1   0 ]
(b)

[ 2  3  −1 ]
[ 1  0   1 ]

(c)

[ 2  1  −3 ]
(d)

[  0  2  1 ]
[ −1  4  5 ]
[  1  0  1 ]
(e)

[ 1  0  0  · · ·  0 ]
[ 1  0  0  · · ·  0 ]
[ ⋮               ⋮ ]
[ 1  0  0  · · ·  0 ]
(f)

[ 1  0  0  · · ·  0 ]
[ 0  1  0  · · ·  0 ]
[ 0  0  1  · · ·  0 ]
[ ⋮               ⋮ ]
[ 0  0  0  · · ·  1 ]
(g)

[ 1  0  0  · · ·  0  0  1 ]
9. (q.v. Friedberg, Insel, Spence 2.2.5).
Solution:
(a)

[ 1  0  0  0 ]
[ 0  0  1  0 ]
[ 0  1  0  0 ]
[ 0  0  0  1 ]
(b)

[ 0  1  0 ]
[ 2  2  2 ]
[ 0  0  0 ]
[ 0  0  2 ]
(c)

[ 1  0  0  1 ]
(d)

[  1 ]
[ −2 ]
[  0 ]
[  4 ]
(e)

[  3 ]
[ −6 ]
[  1 ]
(f)

[ a ]