2.4 Spectral Decomposition

2.4.1 Direct Sums
• Let U and W be vector subspaces of a vector space V. Then the sum of the
vector spaces U and W is the space of all sums of the vectors from U and
W, that is,
U + W = {v ∈ V | v = u + w, u ∈ U, w ∈ W}
• If the only vector common to both U and W is the zero vector then the sum
U + W is called the direct sum and denoted by U ⊕ W.
• Theorem 2.4.1 Let U and W be subspaces of a vector space V. Then V =
U ⊕ W if and only if every vector v ∈ V can be written uniquely as the sum
v = u + w
with u ∈ U, w ∈ W.
Proof.
• Theorem 2.4.2 Let V = U ⊕ W. Then
dim V = dim U + dim W
Proof.
• The direct sum can be naturally generalized to several subspaces, so that
V = ⊕_{i=1}^r Ui .
• To such a decomposition one naturally associates orthogonal complementary projections Pi onto each subspace Ui such that
Pi^2 = Pi ,   Pi Pj = 0 if i ≠ j ,   ∑_{i=1}^r Pi = I .
• A complete orthogonal system of projections defines the orthogonal decomposition of the vector space
V = U1 ⊕ · · · ⊕ Ur ,
where Ui is the subspace the projection Pi projects onto.
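• As a numerical illustration (a minimal sketch added here, not from the original notes; it assumes numpy and the standard inner product on C^4), one can build a complete orthogonal system of projections from an orthonormal basis split into groups and check the defining relations:

    import numpy as np

    rng = np.random.default_rng(0)
    # A random unitary matrix; its columns form an orthonormal basis of C^4.
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
    # Split the basis into two groups, spanning U1 (dim 1) and U2 (dim 3).
    blocks = [Q[:, :1], Q[:, 1:]]
    # Orthogonal projection onto the span of the columns of B: P = B B*.
    P = [B @ B.conj().T for B in blocks]
    assert np.allclose(P[0] @ P[0], P[0])        # Pi^2 = Pi
    assert np.allclose(P[0] @ P[1], 0)           # Pi Pj = 0 for i != j
    assert np.allclose(P[0] + P[1], np.eye(4))   # P1 + P2 = I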
• Theorem 2.4.3 1. The dimensions of the subspaces Ui are equal to the ranks of the projections Pi :
dim Ui = rank Pi .
2. The sum of the dimensions of the subspaces Ui equals the dimension of the vector space V:
∑_{i=1}^r dim Ui = dim U1 + · · · + dim Ur = dim V .
• Let M be a subspace of an inner product space V. Then the orthogonal complement of M is the vector space M⊥ of all vectors in V orthogonal to all vectors in M,
M⊥ = {v ∈ V | (v, u) = 0 ∀u ∈ M} .
• Show that M ⊥ is a vector space.
• Theorem 2.4.4 Every vector subspace M of V defines the orthogonal decomposition
V = M ⊕ M⊥
such that the corresponding projection operators P and P⊥ are Hermitian.
• Remark. The projections Pi are Hermitian only in inner product spaces where the subspaces Ui are mutually orthogonal.
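• A quick numerical sketch of this decomposition (an added illustration, assuming R^5 with the dot product and numpy):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 2))            # two spanning vectors of M in R^5
    B, _ = np.linalg.qr(M)                     # orthonormal basis of M
    P = B @ B.T                                # projection onto M
    P_perp = np.eye(5) - P                     # projection onto M-perp
    v = rng.standard_normal(5)
    assert np.allclose(P.T, P)                 # P is Hermitian (here symmetric)
    assert np.allclose(P_perp @ P @ v, 0)      # M and M-perp are orthogonal
    assert np.allclose(P @ v + P_perp @ v, v)  # v = u + w decomposition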
2.4.2 Invariant Subspaces
• Let V be a finite-dimensional vector space, M be its subspace and P be the
projection onto the subspace M.
• The subspace M is an invariant subspace of an operator A if it is closed
under the action of this operator, that is, A(M) ⊆ M.
• An invariant subspace M is called a proper invariant subspace if M ≠ V.
• Theorem 2.4.5 Let v be a vector in an n-dimensional vector space V and
A be an operator on V. Then
M = span {v, Av, . . . , A^{n−1} v}
is an invariant subspace of A.
Proof.
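• The space M is a Krylov subspace. A short numerical sketch of its invariance (an added illustration, numpy assumed; by the Cayley–Hamilton theorem, A^n v is a linear combination of v, Av, . . . , A^{n−1} v):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    A = rng.standard_normal((n, n))
    v = rng.standard_normal(n)
    # Columns v, Av, ..., A^{n-1} v span the Krylov subspace M.
    K = np.column_stack([np.linalg.matrix_power(A, k) @ v for k in range(n)])
    # A maps M into M: A^n v must be a combination of the columns of K.
    w = np.linalg.matrix_power(A, n) @ v
    coeffs, res, rank, _ = np.linalg.lstsq(K, w, rcond=None)
    assert np.allclose(K @ coeffs, w)          # A^n v lies in M, so A(M) ⊆ M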
• Theorem 2.4.6 The subspace M is invariant under an operator A if and
only if M ⊥ is invariant under its adjoint A∗ .
Proof.
• The vector subspace M reduces the operator A if both M and its orthogonal
complement M ⊥ are invariant subspaces of A.
• If a subspace M reduces an operator A then we write
A = A1 ⊕ A2
where A1 acts on M and A2 acts on M ⊥ .
• If the subspace M reduces the operator A then, in a natural basis, the matrix
representation of the operator A has a block-diagonal form
A = [ A1  0
      0   A2 ]
• An operator whose matrix can be brought to this form by choosing a basis
is called reducible; otherwise, it is irreducible.
• Theorem 2.4.7 A subspace M reduces an operator A if and only if it is
invariant under both A and A∗ .
Proof: follows from above.
• Theorem 2.4.8 A self-adjoint operator is reducible if and only if it has a nontrivial proper invariant subspace.
• Theorem 2.4.9 The subspace M is invariant under an operator A if and
only if
AP = PA = PAP.
Proof.
• Theorem 2.4.10 The subspace M reduces the operator A if and only if A
and P commute.
Proof.
2.4.3 Eigenvalues and Eigenvectors
• Let A be an operator on a vector space V. A scalar λ is an eigenvalue of A
if there is a nonzero vector v in V such that
Av = λv,
or
(A − λI)v = 0
Such a vector is called an eigenvector corresponding to the eigenvalue λ.
• Theorem 2.4.11
1. The eigenvalues of a Hermitian operator are real.
2. A Hermitian operator is positive if and only if all of its eigenvalues are
positive.
3. The eigenvalues of a unitary operator are complex numbers of unit
modulus.
4. The eigenvalues of a projection operator can be only 0 and 1.
5. The eigenvalues of a self-adjoint involution can be only 1 and −1.
6. The eigenvalues of an anti-symmetric operator are either 0 or purely imaginary, appearing in complex conjugate pairs.
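• A numerical sketch of properties 1 and 3 (an added illustration, assuming numpy; the test matrices are random):

    import numpy as np

    rng = np.random.default_rng(3)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = B + B.conj().T                      # a Hermitian operator
    U, _ = np.linalg.qr(B)                  # a unitary operator
    assert np.allclose(np.linalg.eigvals(H).imag, 0)     # eigenvalues of H are real
    assert np.allclose(np.abs(np.linalg.eigvals(U)), 1)  # |eigenvalues of U| = 1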
• The eigenspace of A corresponding to the eigenvalue λ is the vector space
Mλ = Ker (A − λI)
• The eigenspace Mλ is the span of all eigenvectors corresponding to the
eigenvalue λ.
• The dimension of the eigenspace of the eigenvalue λ is called the multiplicity (also called the geometric multiplicity) of λ,
dλ = dim Mλ
• An eigenvalue of multiplicity 1 is called simple (or non-degenerate).
• An eigenvalue of multiplicity greater than 1 is called multiple (or degenerate).
• The norm of an operator A is defined by
||A|| = sup_{v≠0} ||Av|| / ||v|| .
• An operator A is bounded if it has a finite norm.
• In finite dimensions all operators are bounded.
• In finite dimensions the norm of a normal operator is equal to the largest absolute value of its eigenvalues,
||A|| = max_{1≤i≤n} |λi| .
• For an invertible normal operator A,
||A^{−1}|| = max_{1≤i≤n} 1/|λi| = 1 / min_{1≤i≤n} |λi| .
• An operator is invertible if and only if it has no zero eigenvalues.
• The resolvent of an operator A is the operator
R(λ) = (A − λI)−1
The resolvent is defined for all complex numbers λ for which the operator
A − λI is invertible.
• For a normal operator the norm of the resolvent is
||R(λ)|| = max_{1≤i≤n} 1/|λi − λ| .
• The resolvent set of an operator A is the set of all complex numbers λ ∈ C
such that the operator (A − λI) is invertible,
ρ(A) = {λ ∈ C | (A − λI) is invertible}
• The spectrum of an operator A is the complement of the resolvent set,
σ(A) = C − ρ(A)
• In finite dimensions the spectrum of an operator A is equal to the set of all
eigenvalues of A,
σ(A) = {λ ∈ C | λ is an eigenvalue of A}
• The characteristic polynomial of A is defined by
χ(λ) = det(A − λI)
• The eigenvalues of an operator A are the roots of its characteristic polynomial
χ(λ) = 0
• Theorem 2.4.12 Every operator on an n-dimensional complex vector space has exactly n eigenvalues, counted with their algebraic multiplicities.
• If there are p distinct roots λi then
χ(λ) = (λ1 − λ)^{m1} · · · (λp − λ)^{mp}
• Here m j is called the algebraic multiplicity of λ j .
• The geometric multiplicity di of an eigenvalue λi is less than or equal to its algebraic multiplicity mi ,
di ≤ mi .
• Example. Let A : R2 → R2 be defined by
A(x, y) = (x + y, y)
Then it has one eigenvalue λ = 1 with geometric multiplicity 1 and algebraic
multiplicity 2. The eigenvectors are of the form (a, 0).
Let the operator B be defined by
B(x, y) = (y, 0)
Then
A = I + B .
Since B is nilpotent, B^2 = 0, we have
A^n = I + nB
and
exp(tA) = e^t (I + tB) .
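• A quick check of this example (an added sketch, assuming numpy and scipy):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # A(x, y) = (x + y, y)
    B = A - np.eye(2)                         # B(x, y) = (y, 0), nilpotent
    assert np.allclose(B @ B, 0)              # B^2 = 0
    n = 5
    assert np.allclose(np.linalg.matrix_power(A, n), np.eye(2) + n * B)  # A^n = I + nB
    t = 0.7
    assert np.allclose(expm(t * A), np.exp(t) * (np.eye(2) + t * B))     # exp(tA) = e^t (I + tB)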
• An operator A on a vector space V is diagonalizable if there is a basis in V
consisting of eigenvectors of A.
• In such a basis the operator A is represented by a diagonal matrix.
• Theorem 2.4.13 Let A be a diagonalizable operator, λj , j = 1, . . . , p, be its distinct eigenvalues, Mj be the corresponding eigenspaces and Pj be the projections onto Mj . Then:
1. I = ∑_{j=1}^p Pj ,   Pi Pj = 0 if i ≠ j ,
2. V = ⊕_{j=1}^p Mj ,
3. A = ∑_{j=1}^p λj Pj .
• In other words, if {ei} is an orthonormal basis of eigenvectors with A ei = λi ei , then for any
v = ∑_{i=1}^n ei (ei , v)
we have
Av = ∑_{i=1}^n λi ei (ei , v) .
2.4.4 Spectral Decomposition
• An operator is normal if it commutes with its adjoint.
• Both Hermitian and unitary operators are normal.
• Theorem 2.4.14 An operator A is normal if and only if for any v ∈ V
||Av|| = ||A∗ v||
Proof.
• Theorem 2.4.15 Let A be a normal operator. Then λ is an eigenvalue of A
with an eigenvector v if and only if λ̄ is an eigenvalue of A∗ with the same
eigenvector v.
Proof.
• Theorem 2.4.16 Let A be a normal operator. Then:
1. every eigenspace Mλ reduces A,
2. the eigenspaces Mλ and Mµ corresponding to distinct eigenvalues λ ≠ µ are orthogonal.
Proof.
• The projections to the eigenspaces of a normal operator are Hermitian.
• Theorem 2.4.17 Spectral Decomposition Theorem. Let A be a normal
operator on a complex vector space V. Let λ j , j = 1, . . . , p, be the distinct eigenvalues of A, M j be the corresponding eigenspaces and P j be the
projections on M j . Then:
1. V = ⊕_{j=1}^p Mj ,
2. ∑_{j=1}^p dim Mj = dim V ,
3. the projections are Hermitian, orthogonal and complete:
∑_{j=1}^p Pj = I ,   Pi Pj = 0 if i ≠ j ,   Pj^* = Pj ,
4. there is the spectral decomposition of the operator
A = ∑_{j=1}^p Aj = ∑_{j=1}^p λj Pj .
Proof.
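• Numerically, the spectral projections can be assembled from an orthonormal eigenbasis. The following sketch (an added illustration for a random Hermitian, hence normal, matrix; numpy assumed) verifies parts 3 and 4:

    import numpy as np

    rng = np.random.default_rng(4)
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = B + B.conj().T                       # Hermitian, hence normal
    lam, V = np.linalg.eigh(A)               # orthonormal eigenbasis
    # Group (numerically) equal eigenvalues; P_j = sum of v v* over each group.
    distinct, projections = [], []
    for l, v in zip(lam, V.T):
        if distinct and np.isclose(l, distinct[-1]):
            projections[-1] += np.outer(v, v.conj())
        else:
            distinct.append(l)
            projections.append(np.outer(v, v.conj()))
    assert np.allclose(sum(projections), np.eye(4))               # completeness
    assert np.allclose(sum(l * P for l, P in zip(distinct, projections)), A)
    assert all(np.allclose(P.conj().T, P) for P in projections)   # P_j Hermitian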
• Theorem 2.4.18 Let A be a normal operator on a complex vector space V.
Then:
1. there is an orthonormal basis consisting of eigenvectors of A,
2. the operator A is diagonalizable.
Proof.
• Theorem 2.4.19 A Hermitian operator is diagonalizable by a unitary operator, that is, for every Hermitian operator H there is a diagonal operator
D and a unitary operator U such that
H = U D U^{−1} .
• Two operators A and B are simultaneously diagonalizable if there is a complete system of Hermitian orthogonal projections Pi , i = 1, . . . , p, and numbers λi , µi such that
A = ∑_{i=1}^p λi Pi ,   B = ∑_{i=1}^p µi Pi .
• Theorem 2.4.20 Let A be a normal operator and Pi be the projections onto its eigenspaces. Then an operator B commutes with A if and only if B commutes with all the projections Pi .
Proof.
• Theorem 2.4.21 Two normal operators are simultaneously diagonalizable
if and only if they commute.
• Let A be a self-adjoint operator with distinct eigenvalues {λ1 , . . . , λp } with multiplicities mj . Then the trace and the determinant of the operator A are defined by
tr A = ∑_{j=1}^p mj λj ,
det A = ∏_{j=1}^p λj^{mj} .
• The zeta-function of a positive operator A is defined by
ζ(s) = ∑_{j=1}^p mj λj^{−s} .
• There holds
ζ′(0) = − log det A .
• The trace of a projection P onto a vector subspace S is equal to its rank, or
the dimension of the vector subspace S ,
tr P = rank P = dim S .
2.4.5 Functions of Operators
• Let A be a normal operator on a vector space V given by its spectral decomposition
A = ∑_{i=1}^p λi Pi ,
where Pi are the projections onto the eigenspaces.
• Let f : C → C be a complex function analytic at 0.
• Then one can define the function of the operator A by
f (A) = ∑_{i=1}^p f (λi ) Pi .
• The exponential of A is defined by
exp A = ∑_{k=0}^∞ (1/k!) A^k = ∑_{i=1}^p e^{λi} Pi .
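• A sketch comparing this spectral formula for f = exp with scipy's matrix exponential (an added illustration for a random symmetric matrix):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    B = rng.standard_normal((4, 4))
    A = B + B.T                              # symmetric, hence normal
    lam, V = np.linalg.eigh(A)               # A = V diag(lam) V^T
    fA = V @ np.diag(np.exp(lam)) @ V.T      # f(A) = sum f(λ_i) P_i
    assert np.allclose(fA, expm(A))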
• The trace of a function of a self-adjoint operator A is then
tr f (A) = ∑_{j=1}^p mj f (λj ) ,
where mj is the multiplicity of the eigenvalue λj .
• Let A be a positive definite operator, A > 0. The zeta-function of the operator A is defined by
ζ(s) = tr A^{−s} = ∑_{j=1}^p mj λj^{−s} .
• Theorem 2.4.22 For every unitary operator U there is a Hermitian operator H with real eigenvalues λj and corresponding projections Pj such that
U = exp(iH) = ∑_{j=1}^p e^{iλj} Pj .
• The positive square root of a positive operator A is defined by
√A = ∑_{j=1}^p √λj Pj .
• Theorem 2.4.23 Let A be a normal operator and Pj , j = 1, . . . , p, be the projections onto the eigenspaces. Then
Pj = ∏_{k=1, k≠j}^p (A − λk I) / (λj − λk ) .
Proof: Let pj (z) be the polynomials of the form
pj (z) = ∏_{k=1, k≠j}^p (z − λk ) / (λj − λk ) .
• The polynomial pj (z) has the (p − 1) roots λk , k ≠ j, that is,
pj (λk ) = 0 for k ≠ j .
• Moreover, the polynomial pj (z) satisfies
pj (λj ) = 1 ,
that is,
pj (λk ) = δjk ,
and, therefore, we have
pj (A) = ∑_{k=1}^p pj (λk ) Pk = Pj .
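• The product formula can be verified directly; an added sketch for a random symmetric matrix, whose eigenvalues are generically distinct (numpy assumed):

    import numpy as np

    rng = np.random.default_rng(6)
    B = rng.standard_normal((4, 4))
    A = B + B.T                                  # normal, distinct eigenvalues
    lam, V = np.linalg.eigh(A)
    I = np.eye(4)
    for j in range(4):
        # P_j = prod_{k != j} (A - λ_k I)/(λ_j - λ_k)
        P = I.copy()
        for k in range(4):
            if k != j:
                P = P @ (A - lam[k] * I) / (lam[j] - lam[k])
        assert np.allclose(P, np.outer(V[:, j], V[:, j]))  # the eigenprojection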
• Dimension-independent definitions of the determinant of a positive operator:
1. det A = exp tr log A ,
2. det A = exp[ −ζ′(0) ] ,
3. (det A)^{−1/2} = ∫_{R^n} dx exp[ −π (x, Ax) ] ,
where dx = dx^1 · · · dx^n .
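• The first two definitions can be checked against the ordinary determinant (an added sketch, assuming numpy/scipy; ζ′(0) is evaluated directly from the eigenvalues):

    import numpy as np
    from scipy.linalg import logm

    rng = np.random.default_rng(7)
    B = rng.standard_normal((4, 4))
    A = B @ B.T + 4 * np.eye(4)              # positive definite
    lam = np.linalg.eigvalsh(A)
    det1 = np.exp(np.trace(logm(A))).real    # det A = exp tr log A
    # ζ(s) = sum λ_i^{-s}, so ζ'(0) = -sum log λ_i and exp[-ζ'(0)] = prod λ_i.
    det2 = np.exp(np.sum(np.log(lam)))       # det A = exp[-ζ'(0)]
    assert np.allclose(det1, np.linalg.det(A))
    assert np.allclose(det2, np.linalg.det(A))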
2.4.6 Polar Decomposition
• Theorem 2.4.24 Polar Decomposition Theorem Let A be an operator on
a complex vector space. Then there exist a unique positive operator R and
a unitary operator U such that
A = UR.
If the operator A is invertible then the operator U is also unique.
• Proof. Suppose A is invertible. Let
R = √(A^* A)
and
U = A R^{−1} .
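• scipy provides this factorization as scipy.linalg.polar; a brief added sketch:

    import numpy as np
    from scipy.linalg import polar, sqrtm

    rng = np.random.default_rng(8)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    U, R = polar(A)                               # A = U R, U unitary, R positive
    assert np.allclose(U @ R, A)
    assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
    assert np.allclose(R, sqrtm(A.conj().T @ A))   # R = sqrt(A* A)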
2.4.7 Real Vector Spaces
• Theorem 2.4.25 Let A be a symmetric operator on a real vector space V.
Let λ j , j = 1, . . . , p, be the distinct eigenvalues of A, M j be the corresponding eigenspaces and P j be the projections on M j . Then:
1. V = ⊕_{j=1}^p Mj ,
2. ∑_{j=1}^p dim Mj = dim V ,
3. the projections are symmetric, orthogonal and complete:
∑_{j=1}^p Pj = I ,   Pi Pj = 0 if i ≠ j ,   Pj^T = Pj ,
4. there is the spectral decomposition of the operator
A = ∑_{j=1}^p Aj = ∑_{j=1}^p λj Pj .
• Theorem 2.4.26 The only real eigenvalues of an orthogonal operator (on a real vector space) are +1 and −1.
Proof.
• Theorem 2.4.27 Let O be an orthogonal operator with det O = 1 on a real vector space V. Then there exists an anti-symmetric operator A such that
O = exp A .
• The (complex) diagonal form of an anti-symmetric operator A is
Ã = diag (0, . . . , 0, iθ1 , −iθ1 , . . . , iθk , −iθk ) ,
where the θi are real. Therefore,
Õ = exp Ã = diag (1, . . . , 1, e^{iθ1} , e^{−iθ1} , . . . , e^{iθk} , e^{−iθk} ) .
• Some of the θi may be equal to ±π. By separating them we get
Õ = exp Ã = diag (1, . . . , 1, −1, . . . , −1, e^{iθ1} , e^{−iθ1} , . . . , e^{iθk} , e^{−iθk} ) ,
where θi ≠ ±π.
• The real block-diagonal form of an anti-symmetric operator A is
Ã = diag (0, . . . , 0, θ1 ε, . . . , θk ε) ,
where
ε = [  0  1
      −1  0 ] .
• Note that ε^2 = −I, and hence
ε^{2n} = (−1)^n I ,   ε^{2n+1} = (−1)^n ε .
• Therefore
exp(θε) = I cos θ + ε sin θ .
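• A one-line numerical check of this rotation formula (an added sketch, assuming scipy):

    import numpy as np
    from scipy.linalg import expm

    eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
    theta = 0.9
    R = expm(theta * eps)                     # exp(θε)
    assert np.allclose(R, np.cos(theta) * np.eye(2) + np.sin(theta) * eps)
    assert np.allclose(R.T @ R, np.eye(2))    # exp(θε) is orthogonal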
• Theorem 2.4.28 Spectral Decomposition of Orthogonal Operators on
Real Vector Spaces. Let O be an orthogonal operator on a real vector
space V. Then the only eigenvalues of O are +1 and −1 (possibly multiple)
and there exists an orthogonal decomposition
V = V+ ⊕ V− ⊕ V1 ⊕ · · · ⊕ V p ,
where V+ and V− are the eigenspaces corresponding to the eigenvalues 1
and −1, and V1 , . . . , V p are mutually orthogonal two-dimensional subspaces
such that
dim V = dim V+ + dim V− + 2p .
Let P+ , P− , P1 , . . . , Pp be the corresponding orthogonal complementary system of projections, that is,
P+ + P− + ∑_{i=1}^p Pi = I .
Then there exists a corresponding system of operators N1 , . . . , Np satisfying the equations
Ni^2 = −Pi ,   Ni Pi = Pi Ni = Ni ,   Ni Pj = Pj Ni = 0 if i ≠ j ,
and angles θ1 , . . . , θp with −π < θi < π such that
O = P+ − P− + ∑_{i=1}^p Ri (θi ) ,
where
Ri (θi ) = cos θi Pi + sin θi Ni
are the two-dimensional rotation operators in the planes corresponding to Pi .
• Theorem 2.4.29 Every invertible operator A on a real vector space can be
written in a unique way as a product
A = OR
of an orthogonal operator O and a symmetric positive operator R.
2.4.8 Heisenberg Algebra, Fock Space and Harmonic Oscillator
• Heisenberg Algebra. The Heisenberg algebra is a 3-dimensional Lie algebra with generators X, Y, Z satisfying the commutation relations
[X, Y] = Z ,   [X, Z] = 0 ,   [Y, Z] = 0 .
• A representation of the Lie algebra A is a homomorphism ρ : A → L(V)
from the Lie algebra to the space of operators on a vector space V such that
ρ([S , T ]) = [ρ(S ), ρ(T )].
• The Heisenberg algebra can be represented by the matrices
X = [ 0 1 0        Y = [ 0 0 0        Z = [ 0 0 1
      0 0 0              0 0 1              0 0 0
      0 0 0 ] ,          0 0 0 ] ,          0 0 0 ] ,
or by the differential operators C^∞(R^3) → C^∞(R^3) defined by
X = ∂x − (1/2) y ∂z ,   Y = ∂y + (1/2) x ∂z ,   Z = ∂z .
• Properties of the Heisenberg algebra:
[X, Y^n ] = n Z Y^{n−1} ,
[Y, X^n ] = −n Z X^{n−1} ,
[X, exp(bY)] = bZ exp(bY) ,
[Y, exp(aX)] = −aZ exp(aX) ,
exp(−bY) X exp(bY) = X + bZ ,
exp(aX) Y exp(−aX) = Y + aZ ,
exp(aX) exp(bY) = exp(abZ) exp(bY) exp(aX) ,
exp(−bY) exp(aX) exp(bY) = exp(abZ) exp(aX) ,
exp(aX) exp(bY) exp(−aX) = exp(abZ) exp(bY) .
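• These identities can be verified in the 3×3 matrix representation above, where the exponential series terminate (an added sketch, assuming numpy/scipy):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
    Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)
    a, b = 1.3, -0.4
    assert np.allclose(X @ Y - Y @ X, Z)                        # [X, Y] = Z
    assert np.allclose(X @ expm(b * Y) - expm(b * Y) @ X,
                       b * Z @ expm(b * Y))                     # [X, e^{bY}] = bZ e^{bY}
    assert np.allclose(expm(a * X) @ expm(b * Y),
                       expm(a * b * Z) @ expm(b * Y) @ expm(a * X))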
• Another useful formula is
X exp( −(1/2) Y^2 ) = exp( −(1/2) Y^2 ) (X − ZY) .
• Campbell-Hausdorff formula:
exp(aX + bY) = exp( −(ab/2) Z ) exp(aX) exp(bY)
             = exp( (ab/2) Z ) exp(bY) exp(aX) .
• Heisenberg group. The Heisenberg group is a 3-dimensional Lie group
with the generators X, Y, Z.
• An arbitrary element of the Heisenberg group is parametrized by canonical
coordinates (a, b, c) as
g(a, b, c) = exp(aX + bY + cZ)
Obviously,
g(0, 0, 0) = I
and the inverse is defined by
[g(a, b, c)]^{−1} = g(−a, −b, −c)
• The group multiplication law in the Heisenberg group takes the form
g(a, b, c) g(a′, b′, c′) = g( a + a′ , b + b′ , c + c′ + (ab′ − a′b)/2 ) .
• Notice that
g(a, b, c) = g( 0, 0, c + ab/2 ) g(0, b, 0) g(a, 0, 0) .
• A representation of a Lie group G is a homomorphism ρ : G → Aut(V) from the group G to the space of invertible operators on a vector space V such that for any g, h ∈ G
ρ(gh) = ρ(g) ρ(h)
and
ρ(g^{−1}) = [ρ(g)]^{−1} ,   ρ(e) = I .
• Representations of the Heisenberg group.
• The elements of the Heisenberg group can be represented by upper-triangular matrices. Notice that
X^2 = Y^2 = Z^2 = XZ = YZ = 0
and
XY = Z ,   YX = 0 .
Therefore, for N = aX + bY + cZ,
N^2 = [ 0 0 ab
        0 0 0
        0 0 0 ] ,   N^3 = 0 ,
and
g(a, b, c) = exp N = [ 1 a c + ab/2
                       0 1 b
                       0 0 1 ] .
• Another representation is defined by the action on functions on R^3. Notice that
exp(aX) f (x, y, z) = f ( x + a, y, z − (a/2) y ) ,
exp(bY) f (x, y, z) = f ( x, y + b, z + (b/2) x ) ,
exp(cZ) f (x, y, z) = f ( x, y, z + c ) .
• Therefore,
g(a, b, c) f (x, y, z) = f ( x + a, y + b, z + c + (b/2) x − (a/2) y ) .
• Fock space.
• Let us define the operator
N = YX
• It is easy to see that
[N, Y] = ZY,
[N, X] = −ZX.
• Suppose that there exists a unit vector v0 called the vacuum state such that
Xv0 = 0
• Let us define the sequence of vectors
vn = (1/√(n!)) Y^n v0 .
• By using the properties of the Heisenberg algebra it is easy to show that
Y vn = √(n + 1) vn+1 ,   n ≥ 0 ,
X vn = √n Z vn−1 ,   n ≥ 1 .
Therefore
N vn = n Z vn ,   n ≥ 0 .
• Let us define the vectors
w(b) = exp(bY) v0 = ∑_{n=0}^∞ (b^n/√(n!)) vn ,
called the coherent states.
• Then by using the properties of the Heisenberg algebra we get
Xw(b) = bZw(b)
• Now, suppose that
Y = X^*   and   Z = I .
• Then, the vectors vn are orthonormal
(vn , vm ) = δnm
and are the eigenvectors of the self-adjoint operator
N = X∗ X
with integer eigenvalues n ≥ 0.
• Then the space
span {vn | n ≥ 0}
is called the Fock space and the operators X and X ∗ are called the annihilation and creation operators and the operator N is called the operator of
the number of particles.
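• In a Fock basis truncated at level m (an added sketch; the truncation is an assumption, so relations involving the top level hold only away from the cut-off), X is the familiar matrix with √n on the superdiagonal:

    import numpy as np

    m = 6                                         # truncation level (assumption)
    n = np.arange(1, m)
    X = np.diag(np.sqrt(n), k=1)                  # annihilation: X v_n = sqrt(n) v_{n-1}
    Xs = X.T                                      # creation: X* v_n = sqrt(n+1) v_{n+1}
    N = Xs @ X                                    # number operator
    assert np.allclose(np.diag(N), np.arange(m))  # N v_n = n v_n
    C = X @ Xs - Xs @ X                           # [X, X*] = I, up to the cut-off
    assert np.allclose(np.diag(C)[:-1], 1.0)      # holds except in the last slot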
• Note also that the coherent states are not orthonormal:
(w(a), w(b)) = e^{āb} .
• Finally, we compute the trace of the heat semigroup operator
Tr exp(−t X^* X) = ∑_{n=0}^∞ e^{−tn} = 1/(1 − e^{−t}) .
• Harmonic oscillator. Let D be an anti-self-adjoint operator and Q be a self-adjoint operator satisfying the commutation relation
[D, Q] = I .
• The harmonic oscillator is a quantum system with the (self-adjoint positive) Hamiltonian
H = −(1/2) D^2 + (1/2) Q^2 .
• Then the operators
X = (1/√2)(D + Q) ,   X^* = (1/√2)(−D + Q)
are the annihilation and creation operators.
• The operator of the number of particles is
N = X^* X = −(1/2) D^2 + (1/2) Q^2 − 1/2
and, therefore, the Hamiltonian is
H = N + 1/2 .
• The eigenvalues of the Hamiltonian are
λn = n + 1/2 ,
with eigenvectors vn .
• It is clear that the vectors
ψn (t) = e^{−itλn} vn = exp( −it (n + 1/2) ) (1/(2^{n/2} √(n!))) (−D + Q)^n v0
satisfy the equation
(i∂t − H) ψn = 0 ,
which is called the Schrödinger equation.
• The vacuum state is determined from the equation
(D + Q) v0 = 0
and has the form
v0 = exp( −(1/2) Q^2 ) ψ0 ,
with ψ0 satisfying
D ψ0 = 0 .
• The heat trace (also called the partition function) for the harmonic oscillator is
Tr exp(−tH) = 1/( 2 sinh(t/2) ) .
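• A numerical check of this partition function (an added sketch summing the eigenvalues λn = n + 1/2 with numpy):

    import numpy as np

    t = 0.5
    n = np.arange(2000)                          # enough terms to converge at t = 0.5
    trace = np.sum(np.exp(-t * (n + 0.5)))       # Tr exp(-tH) = sum e^{-t(n+1/2)}
    assert np.allclose(trace, 1.0 / (2.0 * np.sinh(t / 2.0)))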
2.4.9 Exercises
1. Find the eigenvalues of a projection operator.
2. Prove that the span of all eigenvectors corresponding to the eigenvalue λ of
an operator A is a vector space.
3. Let
E(λ) = Ker (A − λI) .
Show that: a) if λ is not an eigenvalue of A, then E(λ) = {0}, and b) if λ is an
eigenvalue of A, then E(λ) is the eigenspace corresponding to the eigenvalue
λ.
4. Show that the operator A−λI is invertible if and only if λ is not an eigenvalue
of the operator A.
5. Let T be a unitary operator. Then the operators A and
à = T AT −1
are called similar. Show that the eigenvalues of similar operators are the
same.
6. Show that an operator similar to a self-adjoint operator is self-adjoint and an operator similar to an anti-self-adjoint operator is anti-self-adjoint.
7. Show that all eigenvalues of a positive operator A are non-negative.
8. Show that the eigenvectors corresponding to distinct eigenvalues of a unitary operator are orthogonal to each other.
9. Show that the eigenvectors corresponding to distinct eigenvalues of a self-adjoint operator are orthogonal to each other.
10. Show that all eigenvalues of a unitary operator A have absolute value equal
to 1.
11. Show that if A is a projection, then it can only have two eigenvalues: 1 and
0.