Dr. Marques Sophie
Office 519
Linear algebra I
Fall Semester 2013
marques@cims.nyu.edu
Hand in problems#2
The grader cannot be expected to work his way through a sprawling mess of identities
presented without a coherent narrative through-line. If he cannot make sense of your work
in finite time, you could lose serious points. Coherent, readable exposition of your work
is half the job in mathematics.
Exercise 1 (Uniqueness of Spectral Decompositions): Suppose T : V → V is
diagonalizable on an arbitrary vector space (not necessarily an inner product space),
so that T = Σ_{i=1}^r λi Pλi , where sp(T) = {λ1 , . . . , λr } and Pλi is the projection onto
the λi -eigenspace. Now suppose T = Σ_{j=1}^s µj Qj is some other decomposition such that

Qj² = Qj ≠ 0
Qj Qk = Qk Qj = 0 if j ≠ k
Σ_{j=1}^s Qj = I
µi ≠ µj for i ≠ j
Prove that
(a) r = s and if the µj are suitably relabeled we have µi = λi for 1 ≤ i ≤ r.
(b) Qi = Pλi for 1 ≤ i ≤ r.
Hint: First show {µ1 , . . . , µs } ⊆ {λ1 , . . . , λr } = sp(T ); then relabel.
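For intuition (not part of the required proof): when T is diagonalizable with distinct eigenvalues λ1 , . . . , λr , the projections Pλi can be recovered from T itself by Lagrange interpolation, Pλi = Π_{j≠i} (T − λj I)/(λi − λj). A numerical sketch with numpy, on a hypothetical 3 × 3 example, checks that these projections satisfy exactly the four identities above:

```python
import numpy as np
from functools import reduce

# A hypothetical diagonalizable T with sp(T) = {1, 2, 3}
P = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
T = P @ np.diag([1., 2., 3.]) @ np.linalg.inv(P)
eigs = [1., 2., 3.]

def proj(i):
    """Spectral projection onto the eigs[i]-eigenspace, via Lagrange interpolation in T."""
    I = np.eye(T.shape[0])
    return reduce(lambda M, j: M @ (T - eigs[j] * I) / (eigs[i] - eigs[j]),
                  [j for j in range(len(eigs)) if j != i], I)

Ps = [proj(i) for i in range(3)]
# The defining identities of a spectral decomposition:
assert np.allclose(sum(Ps), np.eye(3))                        # ΣP_i = I
assert all(np.allclose(Q @ Q, Q) for Q in Ps)                 # P_i² = P_i ≠ 0
assert np.allclose(Ps[0] @ Ps[1], 0)                          # P_i P_j = 0 for i ≠ j
assert np.allclose(sum(l * Q for l, Q in zip(eigs, Ps)), T)   # T = Σ λ_i P_i
```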
Exercise 2 : Let V = Cc∞ (R) be the space of real-valued functions f (t) on the real
line that have continuous derivatives Dk f of all orders, and have “bounded support” –
each f is zero off of some bounded interval (which is allowed to vary with f ). Because
all such functions are “zero near ∞” there is a well defined inner product
(f, h)₂ = ∫_{−∞}^{+∞} f(t)h(t) dt
The derivative Df = df /dt is a linear operator on this infinite dimensional space.
1. Prove that D is skew-adjoint, with D∗ = −D.
2. Prove that the second derivative D2 = d2 /dt2 is self-adjoint.
Hint: Integration by parts.
Solution:
1. Fix f, h and let R > 0 be such that f, h ≡ 0 off [−R, R]. Then

(Df, h)₂ = ∫_{−R}^{R} (df/dx)(x) h(x) dx.

Integrating by parts (u = h(x), dv = (df/dx) dx), we get

(Df, h)₂ = [f(x)h(x)]_{−R}^{R} − ∫_{−R}^{R} f(x) (dh/dx)(x) dx = 0 − (f, Dh)₂ = (f, −Dh)₂.

By definition of the adjoint, (Df, h)₂ = (f, D∗h)₂ for all f, h, so D∗h = −Dh for every
h, and hence D∗ = −D as operators on V.
2. As for D² = d²/dt² = D ∘ D, we have (D ∘ D)∗ = D∗ ∘ D∗ = (−D) ∘ (−D) = D ∘ D = D². So
D² is a self-adjoint operator on V.
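A quick numerical analogue of both facts (a sketch only: the central-difference matrix stands in for D on samples of a compactly supported function, so the zero boundary rows mirror the vanishing boundary terms; the grid size and spacing are arbitrary choices):

```python
import numpy as np

# Central-difference matrix with zero boundary rows: a discrete stand-in for D
# on samples of a function that vanishes at both ends of the interval.
n, h = 8, 0.1
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)

# D is skew-symmetric, the discrete analogue of D* = -D
assert np.allclose(D.T, -D)
# Hence D@D is symmetric, the discrete analogue of (D²)* = D²
assert np.allclose((D @ D).T, D @ D)
```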
Exercise 3:
If T : V → V is a linear operator on a finite dimensional inner product space and
λ ∈ sp(T ), prove that
1. Eλ̄ (T∗) = K(T∗ − λ̄I) is equal to R(T − λI)⊥ .
2. dim Eλ̄ (T ∗ ) = dim Eλ (T ).
Solution: If T is diagonalizable so is T∗, because Σ_{λ∈Sp(T)} Eλ̄(T∗) is a direct sum with

Σ_{λ∈Sp(T)} |Eλ̄(T∗)| = |⊕_{λ∈Sp(T)} Eλ̄(T∗)| = Σ_{λ∈Sp(T)} |Eλ(T)| = |V|.
1. We know Sp(T∗) = {λ̄ : λ ∈ Sp(T)}. Also, for any operator B : V → V, K(B∗) = R(B)⊥
by definition of the adjoint; since

(T∗ − λ̄I) = (T − λI)∗ ,

we get K(T∗ − λ̄I) = R(T − λI)⊥ .

2. By the dimension formula,

|R(T − λI)| + |K(T − λI)| = |V|,

so

|Eλ̄(T∗)| = |K(T∗ − λ̄I)| = |R(T − λI)⊥| = |V| − |R(T − λI)| = |V| − [|V| − |K(T − λI)|] = |K(T − λI)| = |Eλ(T)|.
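Both claims can be checked numerically (a sketch with numpy on a random complex matrix; the tolerances are arbitrary choices, and the null space is extracted from the small singular values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam = np.linalg.eigvals(A)[0]                    # some λ ∈ sp(A)
M = A - lam * np.eye(n)                          # T − λI
Mstar = A.conj().T - np.conj(lam) * np.eye(n)    # T* − λ̄I = (T − λI)*

# Null space of T* − λ̄I via SVD: right singular vectors with ~0 singular value
U, s, Vh = np.linalg.svd(Mstar)
null_basis = Vh[s < 1e-8].conj().T               # columns span K(T* − λ̄I)

# 1. Each null vector is orthogonal to every column of M, i.e. to R(T − λI)
assert np.abs(null_basis.conj().T @ M).max() < 1e-6
# 2. dim K(T* − λ̄I) = n − rank(T − λI) = dim K(T − λI) = dim Eλ(T)
assert null_basis.shape[1] == n - np.linalg.matrix_rank(M, tol=1e-8)
```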
Exercise 4:
If m < n and the coordinate spaces Km , Kn are equipped with the standard inner
products, consider the linear operator
T : Km → Kn
T (x1 , · · · , xm ) = (x1 , · · · , xm , 0, · · · , 0)
This is an isometry from Km into Kn , with trivial kernel K(T ) = (0) and range R(T ) =
Km × (0) in Kn = Km ⊕ Kn−m .
1. Provide an explicit description of the adjoint operator T ∗ : Kn → Km and determine K(T ∗ ), R(T ∗ ).
2. Compute the matrices of [T ] and [T ∗ ] with respect to the standard orthonormal
bases in Km , Kn .
3. How is the action of T ∗ related to the subspaces K(T ), R(T ∗ ) in Km and R(T ), K(T ∗ )
in Kn ? Can you give a geometric description of this action?
Solution:
1. The kernel and range are easy:

K(T∗) = R(T)⊥ = (Km × {0})⊥ = {0} × Kn−m in Kn
R(T∗) = K(T)⊥ = {0}⊥ = Km

For the explicit action, let X = {e1 , · · · , em } and Y = {f1 , · · · , fn } be the standard ON
bases, so ei and fi have the form (0, · · · , 1, · · · , 0) with the 1 in the ith place. Since
T(ej) = fj for 1 ≤ j ≤ m, we get for 1 ≤ i ≤ n and 1 ≤ j ≤ m

(T∗(fi), ej) = (fi, T(ej)) = (fi, fj) = δij ,

so T∗(fi) = Σ_{j=1}^m δij ej = ei for 1 ≤ i ≤ m, and T∗(fi) = 0 for m < i ≤ n.
In terms of n-tuples (x1 , · · · , xm , · · · , xn ) ∈ Kn , T∗(x1 , · · · , xm , · · · , xn ) = (x1 , · · · , xm ).
So T∗ first projects x ∈ Km × Kn−m orthogonally onto Km × {0} and then transfers the
result over to Km .
2. Since these bases are ON, [T∗]X,Y is the adjoint matrix ([T]Y,X)∗ . It is obvious that

[T]Y,X = ( Im / 0(n−m)×m ) (the n × m matrix with Im above a zero block),
while [T∗]X,Y = ( Im 0m×(n−m) ) (an m × n matrix).
3. Geometric actions:
T injects Km → Km × {0} ⊆ Kn = Km × Kn−m .
T∗ orthogonally projects Kn = Km × Kn−m → Km × {0} ≅ Km .
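The matrices and geometric actions above can be sketched numerically (with the arbitrary choice m = 2, n = 4):

```python
import numpy as np

# Matrices of T and T* in the standard ON bases, for m = 2, n = 4
m, n = 2, 4
T = np.vstack([np.eye(m), np.zeros((n - m, m))])   # [T] = (I_m above 0), n×m
Tstar = T.conj().T                                  # [T*] = (I_m | 0), m×n

# T isometrically injects K^m onto K^m × {0}
assert np.allclose(T @ np.array([1., 2.]), [1., 2., 0., 0.])
# T* projects onto K^m × {0} and transfers the result back to K^m
assert np.allclose(Tstar @ np.array([1., 2., 3., 4.]), [1., 2.])
# T*T = I_m on K^m, while TT* is the orthogonal projection of K^n onto R(T)
assert np.allclose(Tstar @ T, np.eye(m))
P = T @ Tstar
assert np.allclose(P @ P, P) and np.allclose(P.conj().T, P)
```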
Exercise 5:
Let X = {e1 , . . . , en } be an arbitrary basis (not necessarily orthonormal) in a finite
dimensional inner product space V .
(a) Use induction on n to prove that there exist vectors Y = {f1 , . . . , fn } such that
(ei , fj ) = δij .
(b) Explain why the fj are uniquely determined and a basis for V .
Note: If the initial basis X is orthonormal then fi = ei and the result is trivial; we are
interested in arbitrary bases in an inner product space.
Solution: First, V ≅ Fn via a bijective linear map φ : V → Fn such that (φ(v1), φ(v2))Fn =
(v1, v2)V ; in fact, if {hi} is any ON basis in V (they exist) and {ei} is the standard
basis in Fn , then there is a unique linear map φ : V → Fn such that φ(hj) = ej
(equivalently φ(v) = Σ_{j=1}^n (v, hj) ej). Then (φ(v1), φ(v2))Fn = (v1, v2)V . Thus we
may as well assume V = Fn , with given basis {v1 , · · · , vn } ⊆ Fn .
Let {e1 , · · · , en } be the standard orthonormal basis in Fn ; write vi = (ai1 , · · · , ain ) =
Σ_{k=1}^n aik ek and let {fi} be vectors fi = (bi1 , · · · , bin ) = Σ_{k=1}^n bik ek .
Let A = [aij] be the matrix of coefficients for the {vi} and B = [bij] the matrix of
coefficients for the {fi}. A is nonsingular, with det(A) ≠ 0, because its rows Rowi(A)
are linearly independent ({vi} is a basis). To solve the problem we seek {fi}
such that (vi , fj )Fn = δij (Kronecker delta), which means

Σ_{k=1}^n aik b̄jk = δij .
If we let B = [bij] and C = B∗ = [cij] (the matrix adjoint, with cij = b̄ji), then b̄jk = ckj
and the previous equality becomes Σ_{k=1}^n aik ckj = δij . Since (AC)ij = Σ_k aik ckj ,
we are looking for a coefficient matrix C such that AC = I (identity matrix), or
equivalently a matrix B such that AB∗ = I; this holds if and only if

I = I∗ = (AB∗)∗ = BA∗ ,

i.e. B = (A∗)−1 = (A−1)∗ . Thus B is uniquely determined by the coefficient matrix A of
the given vectors {vi}, and the system {fi} with fi = Σ_{k=1}^n bik ek solves the problem.
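The formula B = (A∗)−1 is easy to sketch numerically (numpy, with a random complex basis, which is invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Rows of A are the coordinates a_ik of an arbitrary basis {v_i} of F^n = C^n
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# B = (A*)^{-1}: rows of B are the coordinates b_jk of the dual vectors f_j
B = np.linalg.inv(A.conj().T)

# (v_i, f_j) = Σ_k a_ik conj(b_jk) = (A B*)_ij, and A B* = A A^{-1} = I
assert np.allclose(A @ B.conj().T, np.eye(n))
```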
Exercise 6 :
Let V be an inner product space and T a linear operator that is diagonalizable in the
ordinary sense, but not necessarily orthogonally diagonalizable. Prove that
(a) The adjoint operator T ∗ is diagonalizable. What can you say about its eigenvalues
and eigenspaces?
(b) If T is orthogonally diagonalizable so is T ∗ .
Hint: If {ei } diagonalizes T what does the “dual basis” {fj } of Exercise 5 do for T ∗ ?
Exercise 7: Here V = R3 and f1 , f2 , f3 ∈ V ∗ are the linear functionals fk : V → R
given by (∗∗) f1 (x, y, z) = x − 2y, f2 (x, y, z) = x + y + z and f3 (x, y, z) = y − 3z;
1. Prove Y = {f1 , f2 , f3 } is a basis in V ∗ .
2. Find a basis X = {e′1 , e′2 , e′3 } ⊆ V whose dual basis X ∗ in V ∗ is equal to Y.
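No worked solution is given here for this exercise, but the answer can be cross-checked numerically (a numpy sketch: the rows of A are the coefficient vectors of the fk, and the condition fi(e′j) = δij reads AE = I):

```python
import numpy as np

# Rows are the coefficient vectors of f1, f2, f3 from (**)
A = np.array([[1., -2.,  0.],
              [1.,  1.,  1.],
              [0.,  1., -3.]])
# A has full rank (det(A) = -10 ≠ 0), so Y = {f1, f2, f3} is a basis of V*
assert np.linalg.matrix_rank(A) == 3

# We need f_i(e'_j) = δ_ij, i.e. A E = I where E has the e'_j as columns
E = np.linalg.inv(A)
assert np.allclose(A @ E, np.eye(3))
```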
Solution (Exercise 6):
(a) It is well known that λ ∈ Sp(T) ⇔ λ̄ ∈ Sp(T∗) and |Eλ̄(T∗)| = |Eλ(T)| (you can
prove this as an exercise). Since T is diagonalizable, V = ⊕_{λ∈Sp(T)} Eλ(T), hence

|V| = Σ_{λ∈Sp(T)} |Eλ(T)| = Σ_{λ∈Sp(T)} |Eλ̄(T∗)|
If W = Σ_{µ∈Sp(T∗)} Eµ(T∗), this is a direct sum, so

W = ⊕_{µ∈Sp(T∗)} Eµ(T∗) = ⊕_{λ∈Sp(T)} Eλ̄(T∗) ⇒ |W| = Σ_{λ∈Sp(T)} |Eλ̄(T∗)| = Σ_{λ∈Sp(T)} |Eλ(T)| = |V|

This implies W = V = ⊕_{µ∈Sp(T∗)} Eµ(T∗) and T∗ is diagonalizable.
Second solution (using the previous exercise): Given a diagonalizing basis
{vi} for T, let {fj} be the dual family (another basis of V) such that (vi , fj ) = δij .
If T(vi) = λi vi , then

(T∗(fj), vi) = (fj, T(vi)) = (fj, λi vi) = λ̄i (fj, vi) = λ̄j if i = j, and 0 if i ≠ j,

so T∗(fj) = λ̄j fj and {fj} is a diagonalizing basis for T∗. Note: This approach
explicitly identifies the eigenspaces for T∗.
(b) If T is orthogonally diagonalizable, we appeal to the second solution. Given
an orthonormal diagonalizing basis {ui} for T, the unique dual family {fj} is
just {uj} (since we already have (ui , uj ) = δij , we can take fj = uj). Then
T∗(fj) = T∗(uj) = λ̄j uj , so the same orthonormal basis diagonalizes T∗.
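The second solution for (a) can be sketched numerically (numpy, on a hypothetical diagonalizable but non-normal complex matrix; the eigenvalues and seed are arbitrary choices). The columns of P diagonalize T, and the columns of F = (P−1)∗ form the dual family, which diagonalizes T∗ with the conjugate eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
# A diagonalizable (not necessarily normal) operator: T = P D P^{-1}
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
lams = np.array([1 + 1j, 2.0, -1j])
A = P @ np.diag(lams) @ np.linalg.inv(P)

# Dual family to the columns of P: the columns of F = (P^{-1})* = (P*)^{-1}
F = np.linalg.inv(P).conj().T
# Each f_j is an eigenvector of T* with eigenvalue λ̄_j: A* F = F diag(λ̄)
for j in range(3):
    assert np.allclose(A.conj().T @ F[:, j], np.conj(lams[j]) * F[:, j])
```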