5.73 Lecture #27
Wigner-Eckart Theorem
CTDL, pages 999 - 1085, esp. 1048-1053
Last lecture on 1e– Angular Part
Next: 2 lectures on 1e– radial part
Many-e– problems
What do we know about 1 particle angular momentum?
1. |JM⟩ basis set:
   [J_i, J_j] = iℏ Σ_k ε_ijk J_k
   This defining commutation rule → all matrix elements in the |JM⟩ basis set
   (verified numerically in the sketch after this list).
2. J = J₁ + J₂, coupling of two angular momenta:
   coupled ↔ uncoupled basis sets;
   transformation via J₋ = L₋ + S₋ plus orthogonality (also more general methods);
   H^SO + H^Zeeman example:
   * easy vs. hard basis sets
   * limiting cases, correlation diagram
   * pert. theory – patterns at both limits plus distortion
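A minimal numpy sketch of point 1 (my own, not part of the notes): the |JM⟩ matrix elements of J₊ and J_z determine all three Cartesian components, and the resulting matrices satisfy the defining commutation rule. The helper name jmat is mine.

```python
import numpy as np

def jmat(j):
    """Jx, Jy, Jz in the |j m> basis (hbar = 1), m ordered j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)          # J+ from its known matrix elements
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jx = (jp + jp.conj().T) / 2                       # Jx = (J+ + J-)/2
    jy = (jp - jp.conj().T) / 2j                      # Jy = (J+ - J-)/(2i)
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

Jx, Jy, Jz = jmat(3/2)
# the defining rule [Ji, Jj] = i eps_ijk Jk, e.g. [Jx, Jy] = i Jz:
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))        # True
```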
TODAY:
1. Define Scalar, Vector, Tensor Operators via Commutation
Rules. Classification of an operator tells us how it
transforms under coordinate rotation.
2. Statement of the Wigner-Eckart Theorem
3. Derive some matrix elements from Commutation Rules
Scalar S:  ΔJ = ΔM = 0, M-independent matrix elements.
Vector V:  ΔJ = 0, ±1; ΔM = 0, ±1; explicit M dependences of matrix elements.
These commutation rule derivations of matrix elements are tedious. There
is a more direct but abstract derivation via rotation matrices. The goal
here is to learn how to use 3-j coefficients.
Classification of Operators via Commutation Rules with CLASSIFYING ANGULAR MOMENTUM

                     ω    like     (2ω + 1) components (µ)           [ω is "rank"]
  scalar             0    J = 0    1: µ = 0 ("constant")
  vector             1    J = 1    3: µ = 0 ↔ z
                                      µ = +1 ↔ −2^(−1/2)(x + iy)
                                      µ = −1 ↔ +2^(−1/2)(x − iy)
  tensor (2nd rank)  2    J = 2    5: µ = +2, …, −2
  tensor (3rd rank)  3    J = 3    7
Spherical Tensor Components [CTDL, page 1089 #8]

Definition:

   [J_±, T_µ^(ω)] = ℏ[ω(ω + 1) − µ(µ ± 1)]^(1/2) T_(µ±1)^(ω)
   [J_z, T_µ^(ω)] = ℏ µ T_µ^(ω)
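A minimal numpy check of this definition (my own sketch, ℏ = 1; jmat is a helper name of my choosing): the spherical components of J itself form an ω = 1 tensor and satisfy both commutation rules.

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

Jx, Jy, Jz = jmat(2)
Jp = Jx + 1j * Jy

# spherical components of J itself: an omega = 1 spherical tensor
T = {+1: -(Jx + 1j * Jy) / np.sqrt(2), 0: Jz, -1: (Jx - 1j * Jy) / np.sqrt(2)}

w = 1
for mu in (-1, 0, +1):
    # [Jz, T_mu] = mu T_mu
    assert np.allclose(Jz @ T[mu] - T[mu] @ Jz, mu * T[mu])
    # [J+, T_mu] = [w(w+1) - mu(mu+1)]^(1/2) T_{mu+1}
    c = np.sqrt(w * (w + 1) - mu * (mu + 1))
    assert np.allclose(Jp @ T[mu] - T[mu] @ Jp, c * T.get(mu + 1, 0 * Jz))
print("both defining commutation rules hold for T(1)[J]")
```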
This classification is useful for matrix elements of T_µ^(ω) in the |JM⟩ basis set.
Example: J = L + S

1. [L, S] = 0 ∴ L and S act as scalar operators with respect to each other
   (common sense? vector analysis).
2. L and S act as vectors wrt J.
3. L·S acts as a scalar wrt J.
4. L×S gives components of a vector wrt J. [Because L×S is composed of
   products of components of two vectors, it could act as a second-rank
   tensor. But it does not!]

[Nonlecture: given 1 and 2, prove 3]
Once operators are classified (classifications of the same operator are
different wrt different angular momenta), the Wigner-Eckart Theorem provides
the angular factor of all matrix elements in any basis set!

   ⟨N′J′M′|T_µ^(ω)|NJM⟩ = A_(JMωµ)^(J′[M′]) δ_(M′,M+µ) ⟨N′J′‖T^(ω)‖NJ⟩

A_(JMωµ)^(J′[M′]) is the vector coupling coefficient (the label M′ is redundant,
since M′ = M + µ, and is usually omitted); ⟨N′J′‖T^(ω)‖NJ⟩ is the reduced
matrix element, which specifies everything else and contains no M′, M, or µ.
The rank ω of a tensor behaves like an angular momentum:

* |J − ω| ≤ J′ ≤ J + ω (triangle rule): selection rule for T_µ^(ω),
  ΔJ = ±ω, ±(ω − 1), …, 0.
* The reduced matrix element contains all the radial dependence – when there
  is no radial factor in the operator, the J′, J dependence can often be
  evaluated as well.
* Vector coupling coefficients are tabulated – lots of convenient symmetry
  properties: A. R. Edmonds, "Angular Momentum in Quantum Mechanics,"
  Princeton Univ. Press (1974).
Nonlecture: L and S act as vectors wrt J but as scalars wrt each other.

* [L, S] = 0: scalars wrt each other.
* [J, L] = [L + S, L] = [L, L] ∴ vector wrt J, if L is an angular momentum.
* Components of L satisfy the T_µ^(1) definition:

     T_(+1)^(1)[L] = −2^(−1/2)(L_x + iL_y)

  (This notation means: construct an operator classified as T_(+1)^(1) out of
  the components of L.)

     [J_z, −2^(−1/2)(L_x + iL_y)] = −2^(−1/2) iℏ(L_y − iL_x) = −2^(−1/2) ℏ(L_x + iL_y)

  i.e. [J_z, T_(+1)^(1)[L]] = ℏ(+1) T_(+1)^(1)[L], etc.

* [J, S] = [L + S, S] = [S, S] ∴ S is a vector wrt J.

A numerical check of these classifications follows.
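A minimal numpy check of the two bullets above, assuming L = 1, S = 1/2 on the product space (my own construction; jmat is a hypothetical helper name):

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
Lx, Ly, Lz = (np.kron(a, np.eye(2)) for a in (lx, ly, lz))
Sx, Sy, Sz = (np.kron(np.eye(3), a) for a in (sx, sy, sz))
Jz = Lz + Sz

Tp1_L = -(Lx + 1j * Ly) / np.sqrt(2)     # T(1)_{+1} built from components of L
# [Jz, T(1)_{+1}[L]] = (+1) T(1)_{+1}[L]: L is classified as a vector wrt J
print(np.allclose(Jz @ Tp1_L - Tp1_L @ Jz, Tp1_L))      # True
# [L, S] = 0 componentwise: scalars wrt each other
print(np.allclose(Lx @ Sy - Sy @ Lx, 0))                # True
```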
••• Show that L·S acts as a scalar wrt J:

[J_±, L·S] = [J_x, L_xS_x + L_yS_y + L_zS_z] ± i[J_y, L_xS_x + L_yS_y + L_zS_z]
           = four terms (each J_i = L_i + S_i):

[J_±, L·S] = [L_x, L_xS_x + L_yS_y + L_zS_z] + [S_x, L_xS_x + L_yS_y + L_zS_z]
             ± i[L_y, L_xS_x + L_yS_y + L_zS_z] ± i[S_y, L_xS_x + L_yS_y + L_zS_z]
           = iℏ(L_zS_y − L_yS_z) + iℏ(L_yS_z − L_zS_y)
             ± i·iℏ(L_xS_z − L_zS_x) ± i·iℏ(L_zS_x − L_xS_z)
           = 0

[J_z, L·S] = [L_z, L_xS_x + L_yS_y + L_zS_z] + [S_z, L_xS_x + L_yS_y + L_zS_z]
           = iℏ(L_yS_x − L_xS_y) + iℏ(L_xS_y − L_yS_x)
           = 0

∴ L·S acts as T_0^(0).
This notation means: construct an operator classified as T_0^(0) out of the
components of A and B:

   T_0^(0)[A, B] = Σ_(k=−ω)^(+ω) (−1)^k T_k^(ω)[A] T_(−k)^(ω)[B]
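A quick numpy check (my own, not from the notes) that for ω = 1 this contraction applied to A = L, B = S reproduces L·S exactly, assuming L = 1 and S = 1/2:

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
Lx, Ly, Lz = (np.kron(a, np.eye(2)) for a in (lx, ly, lz))
Sx, Sy, Sz = (np.kron(np.eye(3), a) for a in (sx, sy, sz))

def T1(ax, ay, az):
    """Spherical components T(1)_{+1,0,-1} of a vector operator."""
    return {+1: -(ax + 1j * ay) / np.sqrt(2), 0: az, -1: (ax - 1j * ay) / np.sqrt(2)}

TL, TS = T1(Lx, Ly, Lz), T1(Sx, Sy, Sz)
T00 = sum((-1) ** k * TL[k] @ TS[-k] for k in (-1, 0, +1))
LdotS = Lx @ Sx + Ly @ Sy + Lz @ Sz
print(np.allclose(T00, LdotS))        # True: L.S is exactly T(0)_0[L, S]
```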
Vector Coupling Coefficients

Clebsch-Gordan coefficients and 3-j coefficients all serve the same function.
All are related to what you already know how to obtain by ladders and
orthogonality:

   |JJ₁J₂M⟩ = Σ_(M₂=−J₂)^(+J₂) |J₁M₁J₂M₂⟩ ⟨J₁M₁J₂M₂|JJ₁J₂M⟩ ,   M₁ = M − M₂

The sum over |J₁M₁J₂M₂⟩⟨J₁M₁J₂M₂| is completeness; ⟨J₁M₁J₂M₂|JJ₁J₂M⟩ is the
v.c. coefficient.
General formula: p. 46 of Edmonds (1974).

          (J₁  J₂  J₃)
   3-J:   (          ) = (−1)^(J₁−J₂−M₃) (2J₃ + 1)^(−1/2) ⟨J₁M₁J₂M₂|J₁J₂J₃ −M₃⟩
          (M₁  M₂  M₃)

Constraint: M₁ + M₂ + M₃ = 0. [This constraint is imposed in the ( ) notation
but not in the ⟨ | ⟩ notation.]
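A quick sympy check of this 3-j ↔ vector-coupling relation for one representative case (the specific quantum numbers are arbitrary choices of mine):

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_3j, clebsch_gordan

j1, j2, j3 = 1, Rational(1, 2), Rational(3, 2)
m1, m2 = 0, Rational(1, 2)
m3 = -(m1 + m2)                  # the ( ) notation imposes M1 + M2 + M3 = 0

lhs = wigner_3j(j1, j2, j3, m1, m2, m3)
rhs = (-1)**(j1 - j2 - m3) / sqrt(2 * j3 + 1) * clebsch_gordan(j1, j2, j3, m1, m2, -m3)
print(simplify(lhs - rhs) == 0)  # True
```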
W-E Theorem is an extension of V-C idea because we think of operators as
“like angular momenta” and we couple them to angular momenta to make new
angular momentum eigenstates.
What is so great about the W-E Theorem?

Vast reduction of independent matrix elements. E.g. J = 10, ω = 1 (vector
operator): possible values of J′ are limited to 9, 10, 11 by the triangle rule.

   J′      total # of M.E. ⟨J′M′|T_µ^(1)|JM⟩      # of R.M.E.
   9       (2·9+1)(2·10+1)  = 399                 1   ⟨9‖T^(1)‖10⟩
   10      (2·10+1)(2·10+1) = 441                 1   ⟨10‖T^(1)‖10⟩
   11      (2·11+1)(2·10+1) = 483                 1   ⟨11‖T^(1)‖10⟩
                      total: 1323                 3
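The point of the table can be made concrete numerically. Below is a sketch of my own (not from the notes) for the smaller case L = 1, S = 1/2: every matrix element ⟨J′ = 3/2 M′|T_µ^(1)[L]|J = 1/2 M⟩, divided by the tabulated vector coupling coefficient ⟨J M; ω µ|J′ M′⟩, yields one and the same number, the reduced matrix element. The helper jmat and the sign of the |1/2, 1/2⟩ state are my own choices (the sign only flips the R.M.E.).

```python
import numpy as np
from sympy import Rational
from sympy.physics.wigner import clebsch_gordan

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
Lx, Ly, Lz = (np.kron(a, np.eye(2)) for a in (lx, ly, lz))
Sx, Sy, Sz = (np.kron(np.eye(3), a) for a in (sx, sy, sz))
Jminus = (Lx + Sx) - 1j * (Ly + Sy)

def multiplet(top, n):
    """|J M> states from the top state down, via J- and renormalization."""
    out = [top]
    for _ in range(n):
        v = Jminus @ out[-1]
        out.append(v / np.linalg.norm(v))
    return out

top32 = np.zeros(6, dtype=complex); top32[0] = 1.0       # stretched |3/2, 3/2>
ket32 = dict(zip((3, 1, -1, -3), multiplet(top32, 3)))   # keys are 2M'
top12 = np.zeros(6, dtype=complex)
top12[1], top12[2] = np.sqrt(2/3), -np.sqrt(1/3)         # |1/2, 1/2>, sign arbitrary
ket12 = dict(zip((1, -1), multiplet(top12, 1)))

# omega = 1 spherical components of L
T = {+1: -(Lx + 1j * Ly) / np.sqrt(2), 0: Lz, -1: (Lx - 1j * Ly) / np.sqrt(2)}

# every <3/2 M'|T_mu[L]|1/2 M> / <J M; omega mu|J' M'> is the same number
for mu in (-1, 0, +1):
    for twoM in (1, -1):
        twoMp = twoM + 2 * mu
        me = np.vdot(ket32[twoMp], T[mu] @ ket12[twoM]).real
        cg = float(clebsch_gordan(Rational(1, 2), 1, Rational(3, 2),
                                  Rational(twoM, 2), mu, Rational(twoMp, 2)))
        print(f"mu = {mu:+d}, 2M = {twoM:+d}: M.E./v.c. = {me / cg:+.6f}")
```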
CTDL, pages 1048-1053

Outline proof of various parts of the W-E Theorem.

Scalar Operators S

Definition: [J_i, S] = 0 (for all i)

1. ΔJ = 0 selection rule from [J², S] = 0
2. ΔM = 0 selection rule from [J_z, S] = 0
3. M-independence from [J_±, S] = 0

1. Show ΔJ = 0:
   ⟨J′M′|S|JM⟩ = 0 if J′ ≠ J.

   [J², S] = 0:
   0 = ⟨J′M′|(J²S − SJ²)|JM⟩ = ℏ²[J′(J′+1) − J(J+1)] ⟨J′M′|S|JM⟩
   (J² operates to the left in the first term, to the right in the second)

   Either J′ = J or ⟨J′M′|S|JM⟩ = 0
   (only ΔJ = 0 matrix elements of S can be nonzero).
2. Show ΔM = 0: ⟨JM′|S|JM⟩ = 0 if M′ ≠ M.

   [J_z, S] = 0:
   0 = ⟨JM′|(J_zS − SJ_z)|JM⟩ = ℏ(M′ − M) ⟨JM′|S|JM⟩

   Either M′ = M or ⟨JM′|S|JM⟩ = 0.
3. Show M-independence of the matrix elements (uses ΔJ = ΔM = 0 for S: we
   already know that S is diagonal in J and M; write s_JM ≡ ⟨JM|S|JM⟩).

   [J_±, S] = 0:
   0 = ⟨JM′|(J_±S − SJ_±)|JM⟩ = s_JM ⟨JM′|J_±|JM⟩ − s_JM′ ⟨JM′|J_±|JM⟩
     = (s_JM − s_JM′) ⟨JM′|J_±|JM⟩

   (in the first term S operates to the right on |JM⟩, in the second to the
   left on ⟨JM′|)

[One may skip the following commutation-rule algebra and go directly to the
recursion relationship for the ΔJ = ±1 matrix elements of V, below.]
Thus either s_JM = s_JM′ or ⟨JM′|J_±|JM⟩ = 0. The nonzero ⟨J M±1|J_±|JM⟩
matrix elements link all M values within a J, so s_JM is independent of M.

Putting all results for S together:

   ⟨J′M′|S|JM⟩ = δ_(J′J) δ_(M′M) ⟨J‖S‖J⟩
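A compact numpy illustration of this result (my own sketch, not from the notes): the scalar L·S, diagonalized on the L = 1, S = 1/2 product space, has one eigenvalue per J, each (2J + 1)-fold degenerate, i.e. ΔJ = ΔM = 0 with M-independent diagonal elements.

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
L = [np.kron(a, np.eye(2)) for a in (lx, ly, lz)]
S = [np.kron(np.eye(3), a) for a in (sx, sy, sz)]
LdotS = sum(a @ b for a, b in zip(L, S))

# one eigenvalue per J, each (2J+1)-fold degenerate, i.e. M-independent:
# J = 3/2: [J(J+1) - L(L+1) - S(S+1)]/2 = +1/2 (x4);  J = 1/2: -1 (x2)
print(np.round(np.linalg.eigvalsh(LdotS), 6))     # [-1. -1. 0.5 0.5 0.5 0.5]
```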
Vector Operators V

Definition: [J_i, V_j] = iℏ Σ_k ε_ijk V_k

1. M selection rules from [J_z, V]
2. J selection rules from [J², [J², V]]
3. M-dependence of matrix elements of V from the double commutator
1. M selection rules

   a. [J_z, V_z] = 0:
      0 = ⟨J′M′|(J_zV_z − V_zJ_z)|JM⟩ = ℏ(M′ − M) ⟨J′M′|V_z|JM⟩
      either M′ = M or the matrix element = 0
   b. [J_z, V_±] = [J_z, V_x] ± i[J_z, V_y] = iℏ(V_y ∓ iV_x) = ±ℏV_±

      ⟨J′M′|(J_zV_± − V_±J_z)|JM⟩ = ±ℏ ⟨J′M′|V_±|JM⟩
      ℏ(M′ − M) ⟨J′M′|V_±|JM⟩ = ±ℏ ⟨J′M′|V_±|JM⟩
      ℏ(M′ − M ∓ 1) ⟨J′M′|V_±|JM⟩ = 0

      either M′ = M ± 1 or the matrix element = 0
Thus we have selection rules: V_z acts like J_z on M; V_± acts like J_± on M.

2. J selection rules

Need to use a result that requires a lengthy derivation:

   [J², [J², V]] = 2ℏ²(J²V − 2(J·V)J + VJ²)

(see the proof in Condon and Shortley, pages 59-60; a numerical check follows)
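A small numpy verification of the Condon-Shortley identity (my own sketch, ℏ = 1), taking V = L as the vector operator and J = L + S with L = 1, S = 1/2:

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
L = [np.kron(a, np.eye(2)) for a in (lx, ly, lz)]
S = [np.kron(np.eye(3), a) for a in (sx, sy, sz)]
J = [a + b for a, b in zip(L, S)]
J2 = sum(a @ a for a in J)
JdotV = sum(a @ b for a, b in zip(J, L))       # J.V with V = L

comm = lambda a, b: a @ b - b @ a
for Vi, Ji in zip(L, J):                       # each Cartesian component
    lhs = comm(J2, comm(J2, Vi))
    rhs = 2 * (J2 @ Vi - 2 * JdotV @ Ji + Vi @ J2)
    print(np.allclose(lhs, rhs))               # True, True, True
```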
Take ⟨J′M′| ⋯ |JM⟩ matrix elements of both sides of the above equation.

   LHS = ⟨J′M′| J²(J²V) − J²VJ² − J²VJ² + VJ²J² |JM⟩
       = ℏ⁴[(J′(J′+1))² − 2J(J+1)J′(J′+1) + (J(J+1))²] ⟨J′M′|V|JM⟩
   RHS = 2ℏ⁴[J′(J′+1) + J(J+1)] ⟨J′M′|V|JM⟩ − 4ℏ² ⟨J′M′|(J·V)J|JM⟩

(J·V) is a scalar, so insert completeness:

   ⟨J′M′|(J·V)J|JM⟩ = Σ_(J″M″) ⟨J′M′|(J·V)|J″M″⟩ ⟨J″M″|J|JM⟩
                    = ⟨J′‖J·V‖J′⟩ ⟨J′M′|J|JM⟩        (scalar: J″ = J′, M″ = M′)
                    = ⟨J‖J·V‖J⟩ ⟨JM′|J|JM⟩ δ_(J′J)   (J is diagonal in J: J′ = J)

Two cases for the overall matrix element:
A. J′ ≠ J
B. J′ = J
A. J′ ≠ J
   RHS = 2ℏ⁴[J′(J′+1) + J(J+1)] ⟨J′M′|V|JM⟩    (the (J·V)J term carries δ_(J′J) = 0)
   LHS = ℏ⁴[J′²(J′+1)² − 2J(J+1)J′(J′+1) + J²(J+1)²] ⟨J′M′|V|JM⟩

   0 = LHS − RHS = (algebra) = ℏ⁴ ⟨J′M′|V|JM⟩ [(J′ − J)² − 1][(J′ + J + 1)² − 1]

Matrix element = 0 unless J′ = J ± 1 or J′ = −J. (J′ = −J is impossible except
for J′ = J = 0, but that violates the J′ ≠ J assumption.)

   ∴ ΔJ = ±1 selection rule for V
B. J′ = J

   LHS = 0
   0 = RHS = 4ℏ²[ℏ² J(J+1) ⟨JM′|V|JM⟩ − ⟨J‖J·V‖J⟩ ⟨JM′|J|JM⟩]

   ⟨JM′|V|JM⟩ = [⟨J‖J·V‖J⟩ / (ℏ² J(J+1))] ⟨JM′|J|JM⟩,   the bracket ≡ C₀(J)

A WONDERFUL AND MEMORABLE RESULT. It says that all ΔJ = 0 matrix elements of
V are ∝ the corresponding matrix elements of J! A simplified form of the
W-E Theorem for vector operators.
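This projection-theorem result is easy to verify numerically. A minimal sketch (mine, not from the notes): for L = 1, S = 1/2 and V = L, every ΔJ = 0 element ⟨3/2 M|L_z|3/2 M⟩ equals C₀(J)·M with C₀(J) = ⟨J·L⟩/(ℏ²J(J+1)) = 2/3 (ℏ = 1).

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
L = [np.kron(a, np.eye(2)) for a in (lx, ly, lz)]
S = [np.kron(np.eye(3), a) for a in (sx, sy, sz)]
J = [a + b for a, b in zip(L, S)]
JdotL = sum(a @ b for a, b in zip(J, L))

# |3/2, M> multiplet by laddering down from the stretched state
Jminus = J[0] - 1j * J[1]
kets = [np.zeros(6, dtype=complex)]
kets[0][0] = 1.0                                  # |m_l=1, m_s=1/2> = |3/2, 3/2>
for _ in range(3):
    v = Jminus @ kets[-1]
    kets.append(v / np.linalg.norm(v))

Jval = 1.5
C0 = np.vdot(kets[0], JdotL @ kets[0]).real / (Jval * (Jval + 1))   # = 2/3
for M, ket in zip((1.5, 0.5, -0.5, -1.5), kets):
    # projection theorem: <J M|Lz|J M> = C0 <J M|Jz|J M> = C0 * M
    print(M, np.vdot(ket, L[2] @ ket).real, C0 * M)   # last two columns agree
```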
Lots of (NONLECTURE) algebra is needed to generate all ΔJ = ±1 matrix
elements of V.
SUMMARY OF C.R. RESULTS: Wigner-Eckart Theorem for Vector Operators

(special case: the ΔJ = 0 elements have exactly the same form as the
corresponding matrix elements of J_i)

ΔJ = 0:   ⟨J M±1|V_±|J M⟩ = C₀(J)[J(J+1) − M(M±1)]^(1/2)
          ⟨J M|V_z|J M⟩   = C₀(J) M

ΔJ = +1:  ⟨J+1 M±1|V_±|J M⟩ = ∓C₊(J)[(J ± M + 2)(J ± M + 1)]^(1/2)
          ⟨J+1 M|V_z|J M⟩   = +C₊(J)[(J + M + 1)(J − M + 1)]^(1/2)

ΔJ = −1:  ⟨J−1 M±1|V_±|J M⟩ = ±C₋(J)[(J ∓ M)(J ∓ M − 1)]^(1/2)
          ⟨J−1 M|V_z|J M⟩   = +C₋(J)[(J − M)(J + M)]^(1/2)

Only C₀(J), C₊(J), C₋(J): 3 unknown J-dependent constants for each J.
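As a check on the ΔJ = −1 entries, here is a numpy sketch of mine (ℏ = 1) using V = L between the J = 3/2 and J′ = 1/2 multiplets of L = 1, S = 1/2: one constant C₋, extracted from a single V_z element, reproduces every V_± element. (The sign choice for the |1/2, 1/2⟩ state is arbitrary; it only flips C₋.)

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
Lx, Ly, Lz = (np.kron(a, np.eye(2)) for a in (lx, ly, lz))
Sx, Sy, Sz = (np.kron(np.eye(3), a) for a in (sx, sy, sz))
Jminus = (Lx + Sx) - 1j * (Ly + Sy)
Lp, Lm = Lx + 1j * Ly, Lx - 1j * Ly

def multiplet(top, n):
    out = [top]
    for _ in range(n):
        v = Jminus @ out[-1]
        out.append(v / np.linalg.norm(v))
    return out

top32 = np.zeros(6, dtype=complex); top32[0] = 1.0          # |3/2, 3/2>
ket32 = dict(zip((1.5, 0.5, -0.5, -1.5), multiplet(top32, 3)))
top12 = np.zeros(6, dtype=complex)
top12[1], top12[2] = np.sqrt(2/3), -np.sqrt(1/3)            # |1/2, 1/2>
ket12 = dict(zip((0.5, -0.5), multiplet(top12, 1)))

J = 1.5
# extract C- from one Vz element: <J-1 M|Vz|J M> = C-(J)[(J-M)(J+M)]^(1/2)
M = 0.5
Cm = np.vdot(ket12[M], Lz @ ket32[M]).real / np.sqrt((J - M) * (J + M))
# <J-1 M+1|V+|J M> = +C-(J)[(J-M)(J-M-1)]^(1/2)
for M in (-0.5, -1.5):
    me = np.vdot(ket12[M + 1], Lp @ ket32[M]).real
    print(me, +Cm * np.sqrt((J - M) * (J - M - 1)))         # pairs agree
# <J-1 M-1|V-|J M> = -C-(J)[(J+M)(J+M-1)]^(1/2)
for M in (0.5, 1.5):
    me = np.vdot(ket12[M - 1], Lm @ ket32[M]).real
    print(me, -Cm * np.sqrt((J + M) * (J + M - 1)))         # pairs agree
```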
NONLECTURE (to end of notes). Example of how recursion relationships (reduced
matrix elements) are derived for each possible ΔJ.

ΔJ = ±1 matrix elements of V:
ΔM = +1, using [J₊, V₊] = 0:
(the ΔM selection rule for V₊ is ΔM = +1, so for J₊V₊ it is ΔM = +2)

   CR = 0 = ⟨J+1 M+1|(J₊V₊ − V₊J₊)|J M−1⟩

Expand using completeness (in the first term J₊ operates to the left, in the
second to the right):

   0 = ⟨J+1 M+1|J₊|J+1 M⟩ ⟨J+1 M|V₊|J M−1⟩ − ⟨J+1 M+1|V₊|J M⟩ ⟨J M|J₊|J M−1⟩

   ⟨J+1 M|V₊|J M−1⟩ / ⟨J M|J₊|J M−1⟩ = ⟨J+1 M+1|V₊|J M⟩ / ⟨J+1 M+1|J₊|J+1 M⟩

with  ⟨J M|J₊|J M−1⟩ = ℏ[(J + M)(J − M + 1)]^(1/2)
and   ⟨J+1 M+1|J₊|J+1 M⟩ = ℏ[(J + M + 2)(J − M + 1)]^(1/2).

The J₊ matrix elements in the denominators are replaced by these values, and
the common factor ℏ(J − M + 1)^(1/2) is cancelled.
Multiply both sides by (J + M + 1)^(−1/2) to display the symmetry:

   ⟨J+1 M|V₊|J M−1⟩ / [(J + M)^(1/2)(J + M + 1)^(1/2)]
      = ⟨J+1 M+1|V₊|J M⟩ / [(J + M + 1)^(1/2)(J + M + 2)^(1/2)] ≡ −C₊(J)

This is an M → M + 1 recursion relationship; the ratio is independent of M.

   C₊(J) ≡ ⟨α′ J+1‖V‖α J⟩

   ⟨J+1 M+1|V₊|J M⟩ = −C₊(J)[(J + M + 1)(J + M + 2)]^(1/2)

(sign chosen so that the V_z matrix elements will be +C₊(J))
Remaining to do for the ΔJ = ±1 matrix elements (a numerical check of the two
commutators follows this list):

A. [J₋, V₊] = −2ℏV_z gives the V_z matrix element when we take the ΔM = 0
   matrix element of both sides.

B. [J₋, V_z] = ℏV₋ gives the V₋ matrix element when we take the ΔM = −1
   matrix element of both sides.
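Both commutator identities can be confirmed numerically first; a tiny numpy sketch of mine (ℏ = 1), with V = L and J = L + S for L = 1, S = 1/2:

```python
import numpy as np

def jmat(j):
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)
    jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.conj().T) / 2, (jp - jp.conj().T) / 2j, np.diag(m).astype(complex)

lx, ly, lz = jmat(1)
sx, sy, sz = jmat(1/2)
Lx, Ly, Lz = (np.kron(a, np.eye(2)) for a in (lx, ly, lz))
Sx, Sy, Sz = (np.kron(np.eye(3), a) for a in (sx, sy, sz))
Jm = (Lx + Sx) - 1j * (Ly + Sy)                 # J-
Vp, Vm, Vz = Lx + 1j * Ly, Lx - 1j * Ly, Lz     # V = L as the vector operator

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(Jm, Vp), -2 * Vz))       # A: [J-, V+] = -2 Vz
print(np.allclose(comm(Jm, Vz), Vm))            # B: [J-, Vz] = V-
```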

A. [J₋, V₊] = −2ℏV_z, with the ΔM = 0 matrix element of both sides:

   RHS = −2ℏ ⟨J+1 M|V_z|J M⟩

   LHS = ⟨J+1 M|(J₋V₊ − V₊J₋)|J M⟩
       = ⟨J+1 M|J₋|J+1 M+1⟩ ⟨J+1 M+1|V₊|J M⟩ − ⟨J+1 M|V₊|J M−1⟩ ⟨J M−1|J₋|J M⟩
       = ℏ[(J+1)(J+2) − M(M+1)]^(1/2) ⟨J+1 M+1|V₊|J M⟩
         − ℏ[J(J+1) − M(M−1)]^(1/2) ⟨J+1 M|V₊|J M−1⟩

Rearrange this and use the V₊ recursion rule from above:

   LHS = ℏC₊(J)[(J + M + 1)(J − M + 1)]^(1/2) [(J + M) − (J + M + 2)]
       = −2ℏC₊(J)[(J + M + 1)(J − M + 1)]^(1/2)
RHS = LHS:

   ⟨J+1 M|V_z|J M⟩ = C₊(J)[(J + M + 1)(J − M + 1)]^(1/2)
B. [J₋, V_z] = ℏV₋; take ⟨J+1 M−1| ⋯ |J M⟩ of both sides:

   RHS = ℏ ⟨J+1 M−1|V₋|J M⟩

   LHS = ⟨J+1 M−1|J₋|J+1 M⟩ ⟨J+1 M|V_z|J M⟩ − ⟨J+1 M−1|V_z|J M−1⟩ ⟨J M−1|J₋|J M⟩
       = ℏC₊(J)[(J − M + 2)(J − M + 1)]^(1/2)

   ⟨J+1 M−1|V₋|J M⟩ = +C₊(J)[(J − M + 2)(J − M + 1)]^(1/2)
VERY COMPLICATED AND TEDIOUS ALGEBRA
updated November 4, 2002