Algebraic identities among U(n) infinitesimal generators
Susumu Okubo
Citation: Journal of Mathematical Physics 16, 528 (1975); doi: 10.1063/1.522550
View online: http://dx.doi.org/10.1063/1.522550
View Table of Contents: http://aip.scitation.org/toc/jmp/16/3
Published by the American Institute of Physics
Algebraic identities among U(n) infinitesimal generators
Susumu Okubo
Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627
(Received 23 September 1974)

Some algebraic identities among infinitesimal generators of the n-dimensional unitary group U(n) have been found. They satisfy a simple quadratic equation for degenerate representations. A generalization of the Holstein-Primakoff boson realization for the U(n) group is also given.
1. SUMMARY OF PRINCIPAL RESULTS

The n-dimensional unitary group U(n) is very important for studies^{1,2} of the SU(3) and SU(6) symmetries in particle physics, as well as for nuclear physics.^3 Louck and Biedenharn^4 have established various fundamental theorems on properties of infinitesimal generators of the U(n) group. However, many of their results are rather involved, with their content often left implicit. The main purpose of the present note is to find some explicit identities among these generators by a simpler method. We shall see that they can be expressed in surprisingly simple forms which are suitable for various physical applications. Also, all vector operators are expressible as linear combinations of powers of the generators.

The infinitesimal generators A^μ_ν of the U(n) group satisfy the Lie commutation relations^5

[A^μ_ν, A^α_β] = δ^μ_β A^α_ν − δ^α_ν A^μ_β.   (1.1)

Hereafter, all Greek indices assume the n values 1, 2, ..., n. In case we are interested in the SU(n) subgroup, we have only to replace A^μ_ν by its traceless tensor

Ē^μ_ν = A^μ_ν − (1/n) δ^μ_ν Σ_{λ=1}^n A^λ_λ.

As is well known,^6 irreducible representations of the U(n) group are characterized by n integers satisfying

f_1 ≥ f_2 ≥ ··· ≥ f_n.   (1.2)

Setting

l_λ = f_λ + n − λ,   n ≥ λ ≥ 1,   (1.3)

these satisfy the strictly decreasing inequality

l_1 > l_2 > ··· > l_n.   (1.4)

Then, the dimension N of the irreducible representation (hereafter referred to as IR) characterized by the signature (1.2) is given by Weyl's formula^6

N = Π_{μ<ν} (l_μ − l_ν)/(ν − μ).   (1.5)

Hereafter, we restrict ourselves to a given IR specified by the signature (1.2), so that the n² infinitesimal generators A^μ_ν are represented by their N×N matrix representations, though all results are also valid for more abstract vector operators acting on the IR space of the signature (1.2). Since any representation of the U(n) group is known to be equivalent to a unitary one, we can hereafter impose the additional hermiticity condition

(A^μ_ν)† = A^ν_μ   (1.6)

without loss of generality.

Now, N×N matrices T^μ_ν (μ, ν = 1, 2, ..., n) will be called a vector operator if they satisfy

[A^μ_ν, T^α_β] = δ^μ_β T^α_ν − δ^α_ν T^μ_β.   (1.7)

Comparing (1.7) to (1.1), we see that A^μ_ν itself is a vector operator. Although we can define^{4,7} more general tensor operators, they are beyond the scope of the present note. Suppose that we have two vector operators S^μ_ν and T^μ_ν. Then, we can define a product vector operator^5 R^μ_ν by

R^μ_ν = Σ_{λ=1}^n S^λ_ν T^μ_λ.   (1.8)

We may easily verify that R^μ_ν satisfies the required commutation relation (1.7) of a vector operator. Hereafter, we often suppress the tensor indices μ and ν and write (1.8) simply as

R = ST.   (1.8')

We notice that a product defined in this way is associative, i.e., we have

(ST)U = S(TU)   (1.9)

for products of three vector operators S^μ_ν, T^μ_ν, and U^μ_ν. Moreover, it is sometimes more convenient to use the unit vector operator I, which is the N×N matrix

I^μ_ν = δ^μ_ν E,   (1.10)

where E is the N×N identity matrix.

We can define the jth power A^j by the recursion relation

A^0 = I,   A^{j+1} = A A^j.   (1.11)

These are vector operators. As we shall prove at the end of this paper, any vector operator is expressible as a linear combination of I, A, ..., A^{n−1}. Hence, our vector product is automatically Abelian, i.e., we have

ST = TS   (1.12)

for any two vector operators, since (1.12) is obvious for any linear combinations of the A^j.

Journal of Mathematical Physics, Vol. 16, No. 3, March 1975. Copyright © 1975 American Institute of Physics.
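As a quick numerical illustration of the dimension formula (1.5), the sketch below (not part of the paper; the U(3) signatures chosen here are only examples) computes N from the l_λ of (1.3):

```python
from itertools import combinations

def weyl_dimension(f):
    """Dimension N of the U(n) IR with signature f_1 >= ... >= f_n,
    via Weyl's formula (1.5): N = prod_{mu<nu} (l_mu - l_nu)/(nu - mu),
    where l_lambda = f_lambda + n - lambda as in (1.3)."""
    n = len(f)
    l = [f[lam] + n - (lam + 1) for lam in range(n)]  # l_lambda (1-indexed in the text)
    num, den = 1, 1
    for mu, nu in combinations(range(n), 2):
        num *= l[mu] - l[nu]
        den *= nu - mu
    return num // den   # the quotient is always an exact integer

# Familiar U(3) checks: defining (3), octet-type (8), symmetric (6)
assert weyl_dimension((1, 0, 0)) == 3
assert weyl_dimension((1, 0, -1)) == 8
assert weyl_dimension((2, 0, 0)) == 6
```

The strictly decreasing inequality (1.4) guarantees that every factor in the numerator is positive, so the integer division is exact.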
Next, for any vector operator T^μ_ν, we can assign a scalar ⟨T⟩ by the formula

Σ_{λ=1}^n T^λ_λ = ⟨T⟩ E.   (1.13)

This is so because (1.7) leads to

[A^μ_ν, Σ_{λ=1}^n T^λ_λ] = 0,

and hence by Schur's lemma Σ_{λ=1}^n T^λ_λ is a multiple of the unit matrix E. Especially, we set

M_j = ⟨A^j⟩,   j = 0, 1, 2, ...,   (1.14)

which are the eigenvalues of the generalized Casimir operators (or Gel'fand invariants) of the U(n) group. Their explicit values have been computed by Louck and Biedenharn^4 to be

M_j = Σ_{λ=1}^n (l_λ)^j Π_{ν≠λ} [1 − (l_λ − l_ν)^{-1}],   (1.15)

where the product on ν omits the singular point ν = λ. We shall also give an alternative derivation of this formula in the next section. We may remark^4 that M_j is a symmetric polynomial of l_1, l_2, ..., l_n of degree j. Therefore, the n constants M_j (j = 1, 2, 3, ..., n) can also be used to characterize the IR instead of the original n integers f_1, f_2, ..., f_n.

Louck and Biedenharn also proved^4 that we can express A^n as a linear combination of I, A, ..., A^{n−1}. For the special case n = 3, this fact is well known and is basic to the derivation of the SU(3) mass formula.^5 We shall show that this linear dependence can be expressed in the very simple form

A(l_1) A(l_2) ··· A(l_n) = 0.   (1.16)

Here, we have set for simplicity

A(l) ≡ A − l I,   (1.17)

and the product in (1.16) is meant to be the vector product defined as in (1.8') and (1.9).

Second, it may happen that two values f_μ and f_ν with μ ≠ ν coincide. In such a case, we have a stronger identity. To be more precise, let us suppose that we have

f_k > f_{k+1} = f_{k+2} = ··· = f_{k+p} > f_{k+p+1}.

We shall call all factors A(l_j) with k + 1 ≤ j < k + p redundant factors. Then, our prescription is to omit all redundant factors in (1.16). As an illustration, let us consider the specific case n = 8 with

f_1 = f_2 = f_3 > f_4 = f_5 > f_6 > f_7 = f_8.   (1.18)

Now, all factors A(l_1), A(l_2), A(l_4), and A(l_7) are redundant, and we have the stronger identity

A(l_3) A(l_5) A(l_6) A(l_8) = 0   (1.19)

instead of (1.16) with n = 8. Of course, the validity of (1.19) implies that of (1.16). We shall prove that equations of the type (1.19) are the minimal polynomial equations among the A^j.

Let us call an IR degenerate if we have an integer j such that

f_1 = f_2 = ··· = f_j,   f_{j+1} = f_{j+2} = ··· = f_n.   (1.20)

Then, the above rule implies the validity of

A(l_j) A(l_n) = 0   (1.21)

or equivalently of

Σ_{λ=1}^n A^λ_ν A^μ_λ = (l_j + l_n) A^μ_ν − l_j l_n δ^μ_ν E.   (1.22)

Conversely, we can prove that (1.21) or (1.22) is also sufficient for the representation to be degenerate. This fact was previously known^{8,9} for the special case n = 3, and it answers positively a conjecture stated elsewhere.^{10}

Returning to the general case, let us define the Hermitian conjugate vector operator S̄^μ_ν of S^μ_ν by

S̄^μ_ν = (S^ν_μ)†.   (1.23)

It is easy to verify that S̄ is indeed a vector operator, because of (1.7) and (1.6). Next, we can introduce an inner product (S, T) for two vector operators by

(S, T) = ⟨S̄T⟩.   (1.24)

Then, it is obvious that

(S, S) ≥ 0,   (1.25)

and, moreover, (S, S) = 0 if and only if we have S^μ_ν = 0 identically. Therefore, with this inner product, all vector operators form a finite-dimensional Hilbert space which we denote by H. Similarly, the linear subspace of H spanned by all linear combinations of the A^j (j = 0, 1, 2, ...) forms a sub-Hilbert space H_0. Actually, we can prove that H = H_0, i.e., all vector operators are linear combinations of the A^j. Also, (1.16) or (1.19) assures us that the dimension of H is at most n. More precisely, it is equal to the number of nonredundant values of f_μ. We may regard any vector operator S as a linear transformation in H by assigning to it the mapping of a vector operator T into ST. Then, all vector operators of the U(n) group form a commutative Hilbert algebra with dimension less than or equal to n. We can rephrase our identity (1.16) or (1.19) as follows. The linear operator A in our Hilbert space can have exactly n integer eigenvalues l_μ (μ = 1, 2, ..., n) if all f_μ are distinct. However, in case we have f_μ = f_ν for some pair μ and ν with μ ≠ ν, then A can assume only those values of l_μ corresponding to nonredundant values of f_μ.

From (1.24), we have

(S, AT) = (AS, T),   (1.26)

since the hermiticity condition (1.6) implies Ā = A. Therefore, if c_j (j = 0, 1, 2, ...) are arbitrary complex numbers, then (1.26) and (1.25) lead to

Σ_{j,k} c_j* c_k M_{j+k} ≥ 0.   (1.27)

Especially, this gives

M_{2j} ≥ 0,   M_{2j} M_{2k} ≥ (M_{j+k})²,   det(A^j, A^k) = det M_{j+k} ≥ 0.   (1.28)

Now, linear independence among the p operators I, A, ..., A^{p−1} is equivalent to having a nonzero Gram determinant, det m_{jk} ≠ 0, for the p×p matrix m_{jk} = (A^j, A^k) = M_{j+k}, j, k = 0, 1, 2, ..., p − 1. This quantity has been studied in
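The degenerate-representation identity (1.21)-(1.22) can be checked directly in the simplest degenerate IR, the defining representation with signature (1, 0, ..., 0), where l_1 = n and l_n = 0. The numpy sketch below is illustrative only; the matrix-unit realization and the index conventions are the ones used in this reconstruction, not quoted from the paper:

```python
import numpy as np

n = 4                                    # U(4) defining IR, signature (1,0,0,0)
E = np.eye(n)
# Generators in the defining representation: each A^mu_nu is an N x N matrix (N = n here)
A = [[np.zeros((n, n)) for _ in range(n)] for _ in range(n)]
for mu in range(n):
    for nu in range(n):
        A[mu][nu][nu, mu] = 1.0          # a single nonzero entry per generator

def vprod(S, T):
    """Vector product (1.8): R^mu_nu = sum_lambda S^lambda_nu T^mu_lambda."""
    return [[sum(S[lam][nu] @ T[mu][lam] for lam in range(n)) for nu in range(n)]
            for mu in range(n)]

def shift(S, c):
    """S - c*I, with I^mu_nu = delta^mu_nu E as in (1.10) and (1.17)."""
    return [[S[mu][nu] - (c * E if mu == nu else 0) for nu in range(n)] for mu in range(n)]

# Signature (1,0,...,0) is degenerate with j = 1: l_1 = n, l_n = 0,
# so (1.21) reads (A - n I) A = 0.
R = vprod(shift(A, n), shift(A, 0))
assert all(np.allclose(R[mu][nu], 0) for mu in range(n) for nu in range(n))

# Equivalently, (1.22): sum_lambda A^lambda_nu A^mu_lambda = (l_1 + l_n) A - l_1 l_n I = n A
AA = vprod(A, A)
assert all(np.allclose(AA[mu][nu], n * A[mu][nu]) for mu in range(n) for nu in range(n))
```

The same two assertions fail for a generic nondegenerate signature, which is exactly the content of the converse statement proved in Sec. 2.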
great detail by Louck and Biedenharn.^4 Their result is indeed that the maximal number of linearly independent operators among the A^j is precisely equal to the number of nonredundant f_μ. We shall prove the same fact in a different way.

We have noted that any degenerate representation leads to the validity of (1.22). One particularly interesting example is that of the completely symmetric IR with signature f_2 = f_3 = ··· = f_n = 0. Setting f_1 ≡ f, then (1.22) is rewritten as

Σ_{λ=1}^n A^λ_ν A^μ_λ = (f + n − 1) A^μ_ν.   (1.29)

An interesting fact is that for the completely symmetric case we have the following additional relations,^{11}

A^μ_ν A^α_β − A^α_ν A^μ_β = δ^μ_β A^α_ν − δ^α_β A^μ_ν,   (1.30)

as we shall show in the next section. If we note M_1 = f, then (1.30) immediately gives (1.29) by setting μ = β and summing over μ. Other identities of this kind can be found in Ref. 10. One simple way^{11} of proving the validity of (1.30) is to utilize n creation (a†_μ) and annihilation (a_μ) boson operators satisfying the standard canonical commutation relations

[a_μ, a†_ν] = δ_{μν},   [a_μ, a_ν] = [a†_μ, a†_ν] = 0.   (1.31)

Then, if we set

A^μ_ν = a†_ν a_μ,   (1.32)

it is easy to see that the A^μ_ν satisfy the U(n) Lie algebra (1.1) as well as the special relations (1.29) and (1.30). Actually, these operators are defined on a dense subset of the whole boson Fock space, which reduces into a direct sum of finite-dimensional IR's of the U(n) group. The subspace consisting of exactly f bosons gives the desired completely symmetric IR. In this construction, we have utilized all n boson operators. However, we can find a slightly more economical realization in which we use only n − 1 bosons as follows. Let us set

θ(f) = (f − Σ_{λ=1}^{n−1} a†_λ a_λ)^{1/2}.   (1.33)

For any positive integer f, this operator has a well-defined meaning in the boson Fock subspace satisfying

f − Σ_{λ=1}^{n−1} a†_λ a_λ ≥ 0.   (1.34)

When we define

A^μ_ν = a†_ν a_μ,   μ ≠ n, ν ≠ n,
A^μ_ν = a†_ν θ(f),   μ = n, ν ≠ n,
A^μ_ν = θ(f) a_μ,   μ ≠ n, ν = n,   (1.35)
A^μ_ν = θ(f) θ(f),   μ = n, ν = n,

then we can prove that these n² operators A^μ_ν obey (1.1), (1.29), and (1.30), if we notice

θ(f) a†_μ = a†_μ θ(f − 1),
θ(f) a_μ = a_μ θ(f + 1),
θ(f + 1) θ(f + 1) − θ(f) θ(f) = 1.

Comparing (1.35) with (1.32), we see that the former is obtainable from the latter by the formal substitution

a_n → θ(f),   a†_n → θ(f)

of a_n and a†_n by the same Hermitian operator θ(f). Of course, we have to be careful about the order of the operators involved in this substitution. At any rate, this fact justifies the usual boson approximation used for the treatment of dilute boson gas problems,^{12} if we take care of the order of operators.

The special case n = 2 of (1.35) is especially interesting. If we set

J_1 + iJ_2 = A^2_1,   J_1 − iJ_2 = A^1_2,   J_3 = ½(A^1_1 − A^2_2),

then J_1, J_2, and J_3 are infinitesimal generators of the three-dimensional orthogonal group O(3), or more precisely of SU(2). Then, (1.35) becomes

J_1 + iJ_2 = a†(f − a†a)^{1/2},
J_1 − iJ_2 = (f − a†a)^{1/2} a,   (1.36)
J_3 = a†a − ½ f,

where we have set a_1 ≡ a. This is precisely the formula of Holstein and Primakoff,^{13} and we may regard (1.35) as its generalization. We can easily verify the identity

J_1² + J_2² + J_3² = ½f(½f + 1),

so that f/2 corresponds to the total angular momentum. The Holstein-Primakoff realization (1.36) has been used by Tanabe and Sugawara-Tanabe^{14} for the study of some deformed rotating nuclei. Also, it has been utilized by Pang, Klein, and Dreizler^{15} for the analysis of an exactly solvable nuclear model. For the special case n = 3, Li, Klein, and Dreizler^{16} previously discovered an asymptotic form of (1.35) for large values of f. Also, the special conditions (1.29) and (1.30) for completely symmetric representations have been successfully applied^9 to simplify electromagnetic mass formulas of the baryon decuplet in the SU(3) symmetry. Also, their validity explains the reason why the SU(3) mass formula for the decuplet states becomes so simple.^1 Further applications of the present identities will be given elsewhere.

2. DERIVATION OF IDENTITIES

In this section we shall prove the various statements made in the previous section. Before going into details, let us briefly recapitulate some basic facts of the representation theory of the U(n) group. Setting

H_μ = A^μ_μ,   (2.1)

the n operators H_μ form a maximal Abelian subalgebra of our Lie ring. Consider a simultaneous eigenvector χ satisfying

H_μ χ = h_μ χ.

Then, the n eigenvalues h_μ (μ = 1, 2, ..., n) are called a weight. We introduce a partial ordering relation for two weights h_μ and k_μ as follows. If we have an integer j such that
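The n = 2 Holstein-Primakoff formulas (1.36) can be verified exactly on the (f + 1)-dimensional truncated Fock space. The sketch below is illustrative (f = 5 is an arbitrary choice) and checks the su(2) commutation relations together with the Casimir value ½f(½f + 1):

```python
import numpy as np

f = 5
dim = f + 1
m = np.arange(dim)                               # boson occupation numbers 0..f
a = np.diag(np.sqrt(m[1:].astype(float)), k=1)   # a|m> = sqrt(m)|m-1>
N = np.diag(m.astype(float))                     # number operator a^+ a
root = np.diag(np.sqrt((f - m).astype(float)))   # (f - a^+ a)^(1/2), cf. (1.33)

Jp = a.T @ root                       # J1 + iJ2 = a^+ (f - a^+ a)^(1/2), Eq. (1.36)
Jm = root @ a                         # J1 - iJ2 = (f - a^+ a)^(1/2) a
J3 = N - (f / 2) * np.eye(dim)        # J3 = a^+ a - f/2

assert np.allclose(J3 @ Jp - Jp @ J3, Jp)         # [J3, J+] = +J+
assert np.allclose(J3 @ Jm - Jm @ J3, -Jm)        # [J3, J-] = -J-
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * J3)     # [J+, J-] = 2*J3
J2 = J3 @ J3 + 0.5 * (Jp @ Jm + Jm @ Jp)          # J1^2 + J2^2 + J3^2
assert np.allclose(J2, (f / 2) * (f / 2 + 1) * np.eye(dim))
```

Note that the square root in (1.33) truncates the ladder automatically: J+ annihilates the top state |f>, so no extra boundary condition is needed.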
h_1 = k_1,   h_2 = k_2,   ...,   h_j = k_j,

but h_{j+1} > k_{j+1}, then we say that the weight h_μ is higher than k_μ. Any irreducible representation is specified by its highest weight. In particular, the irreducible representation with signature (1.2) is characterized by the highest weight vector φ satisfying

H_μ φ = f_μ φ,   μ = 1, 2, ..., n.   (2.2)

Hereafter, φ always refers to the highest weight state with the highest weight f_μ. Then, the standard argument immediately leads to

A^μ_ν φ = 0   for μ > ν.   (2.3)

However, if we have the special situation f_μ = f_ν for some μ and ν with μ ≠ ν, then we find the additional condition

A^μ_ν φ = 0,   (2.4)

including the case μ < ν. To prove this last statement, we notice that (1.1) and (2.2) give us

[A^μ_ν, A^ν_μ] φ = (A^ν_ν − A^μ_μ) φ = (f_ν − f_μ) φ.

Suppose that we have μ < ν, since otherwise (2.4) is always valid in view of (2.2) and (2.3). If we have f_μ = f_ν, the above relation leads to

A^ν_μ A^μ_ν φ = 0,

where we used the fact A^ν_μ φ = 0, because we assumed ν > μ. Using the hermiticity condition (1.6), we can rewrite this as

(A^μ_ν)† A^μ_ν φ = 0.

Since the matrix (A^μ_ν)† A^μ_ν multiplying φ is nonnegative, this is possible only if we have A^μ_ν φ = 0, and this proves (2.4).

Now, we shall proceed to prove the validity of our main results, identity (1.16) and the redundant factor rule illustrated by (1.19). For this purpose, let us introduce n new vector operators D^μ_ν(α) (n ≥ α ≥ 1) by

D^μ_ν(α) = [Π_{j=α}^n A(l_j)]^μ_ν.   (2.5)

For a fixed value of α, D^μ_ν(α) is obviously a vector operator, i.e., it satisfies

[A^σ_ρ, D^μ_ν(α)] = δ^σ_ν D^μ_ρ(α) − δ^μ_ρ D^σ_ν(α).   (2.6)

Now, we shall prove the following lemma.

Lemma 1: We have

D^μ_ν(α) φ = 0   for μ ≥ α and ν = 1, 2, ..., n.   (2.7)

The proof is by induction on decreasing values of α. First, for the highest possible value α = n, we see

D^μ_ν(n) = [A(l_n)]^μ_ν = A^μ_ν − l_n δ^μ_ν E,

so that

D^μ_ν(n) φ = A^μ_ν φ − l_n δ^μ_ν φ.

However, μ ≥ α implies μ = n in this case. Therefore, D^n_ν(n) φ = 0 is the result of (2.2) and (2.3) together with l_n = f_n, and the lemma is valid for α = n. Next, suppose that the lemma holds for α = β + 1, i.e., we have

D^μ_ν(β + 1) φ = 0   for μ ≥ β + 1.   (2.8)

Then, we proceed to show the validity of

D^μ_ν(β) φ = 0   for μ ≥ β.   (2.9)

First, we notice

D^μ_ν(β) = [D(β + 1) A(l_β)]^μ_ν = [A(l_β) D(β + 1)]^μ_ν   (2.10)

from the Abelian nature of the product involving the A^j. For μ ≥ β + 1, we use the second form of (2.10) to find

D^μ_ν(β) φ = Σ_{λ=1}^n [A(l_β)]^λ_ν D^μ_λ(β + 1) φ = 0

because of the induction hypothesis (2.8). Hence, we have only to prove (2.9) for the case μ = β. Using now the first form of (2.10), we compute

D^β_ν(β) φ = Σ_{λ=1}^n D^λ_ν(β + 1) [A^β_λ − l_β δ^β_λ E] φ.

Because of (2.2), (2.3), and (2.6), we can move A^β_λ to the right in the terms with λ > β; the terms so produced are zero in view of the induction hypothesis (2.8), since the summation over λ then runs only over λ ≥ β + 1. Therefore, we finally find

D^β_ν(β) φ = (n − β + f_β − l_β) D^β_ν(β + 1) φ,

which is identically zero if we note Eq. (1.3), i.e., l_β = f_β + n − β. This completes the proof of (2.9), so that, by induction, we have proved Lemma 1.

By setting α = 1, the lemma implies

D^μ_ν(1) φ = 0

for all values of μ and ν, since the condition μ ≥ 1 is trivially satisfied. Then, as we shall see shortly, this gives

D^μ_ν(1) = 0

identically, which proves the validity of (1.16). Now, in the above argument, we utilized the following lemma.

Lemma 2: If a vector operator T^μ_ν satisfies

T^μ_ν φ = 0

for all values of μ and ν, then T^μ_ν is identically zero.

This can be shown as follows. Multiplying A's on this equation and noting (1.7), we obtain

T^μ_ν A^α_β φ = 0.

Repeating the same procedure, we find

T^μ_ν F(A) φ = 0,

where F(A) is an arbitrary polynomial of the generators A^α_β. Since we are dealing with an irreducible representation, any state is cyclic, so that all states of the form F(A)φ generate the whole irreducible representation space. Hence, T^μ_ν = 0 follows immediately.

So far, in our derivation of Lemma 1, we used only the basic condition (2.3). However, if we have f_μ = f_ν for some values of μ and ν with μ ≠ ν, then we can utilize the additional condition (2.4), so that we can make
a stronger statement. In that case, we can simply omit all redundant factors A(l_j) in D(α), as has been explained in Sec. 1. To be more precise, let us suppose that we have

f_γ > f_{γ+1} = f_{γ+2} = ··· = f_β > f_{β+1}.   (2.11)

Then, we omit all redundant A(l_j) (γ + 1 ≤ j ≤ β − 1) in the definition of D(γ + 1). This implies that, instead of (2.10), we now define D̄(γ + 1) by

D̄^μ_ν(γ + 1) = [A(l_β) D̄(β + 1)]^μ_ν = [D̄(β + 1) A(l_β)]^μ_ν.   (2.12)

Also, suppose that we have

f_τ > f_{τ+1} = f_{τ+2} = ··· = f_n   (2.13)

at the extreme right end, and set

D̄^μ_ν(τ + 1) = [A(l_n)]^μ_ν = A^μ_ν − l_n δ^μ_ν E.   (2.14)

Now, (2.12) and (2.14) define D̄^μ_ν(α) recursively, and we can still prove the validity of Lemma 1 for this new D̄^μ_ν(α), if we use (2.4) in addition to (2.3). Indeed, we can repeat essentially the same argument word for word. For example, we have, to begin with,

D̄^μ_ν(τ + 1) φ = 0   for n ≥ μ ≥ τ + 1

from (2.4) and (2.14), if we notice f_μ = f_n there. Next, suppose that we have (2.8) for D → D̄. Then, we can easily prove the validity of

D̄^μ_ν(γ + 1) φ = 0   for μ ≥ γ + 1,

which now replaces (2.9). This implies that the same induction method proceeds in exactly the same way for D̄(α). Therefore, identities such as (1.19) and (1.21) are valid. Especially, this shows that (1.22) is a necessary condition for a degenerate representation.

Actually, we can prove the following stronger converse statement. Suppose that we have

(A − b)(A − c) = 0   (2.15)

for some complex numbers b and c. Then, we can show that the IR is degenerate and that one of b and c must coincide with l_n. Moreover, unless the IR is one-dimensional with f_1 = f_2 = ··· = f_n, we find an integer j such that f_1 = f_2 = ··· = f_j > f_{j+1} = ··· = f_n, with b = l_j and c = l_n. Note that, for the one-dimensional case, only one of b and c must be equal to l_n, but the other can assume any complex value. To prove this statement, let z be a complex variable and consider the polynomial of z given by

g(z) = Π_{λ=1}^n (z − l_λ).

Using the standard algorithm, we can find another polynomial h(z) and constants d_0 and d_1 such that

g(z) = (z − b)(z − c) h(z) + d_0 z + d_1.

Since the vector product involving the A^j is Abelian, we can replace z and 1 by the N×N matrices A and I in the above. For any irreducible representation, A satisfies (1.16), so that we must have

d_0 A + d_1 I = 0

if the IR satisfies (2.15). This implies that the representation is one-dimensional if d_0 ≠ 0. In that case, we can repeat essentially the same argument to show that both g(z) and (z − b)(z − c) must be divisible by d_0 z + d_1. In this way, we establish the first part of our assertion. On the other hand, if d_0 = 0, we must have d_1 = 0 also, and (z − b)(z − c) must divide the polynomial g(z). Therefore, b and c must coincide with l_μ and l_ν for some values of μ and ν with μ ≠ ν. However, as we shall prove shortly, identities such as (1.19) or (1.21) are minimal, so that the IR under consideration must necessarily be degenerate, with b = l_j and c = l_n.

As we mentioned in Sec. 1, we can obtain the stronger relation (1.30) for the completely symmetric IR. This is due to the following fact. Because of (2.4), we now have

A^μ_ν φ = 0   for μ ≠ 1.

Then, we can easily verify

(A^μ_ν A^α_β − A^α_ν A^μ_β − δ^μ_β A^α_ν + δ^α_β A^μ_ν) φ = 0

if we use (1.1). Now, the same reasoning which led to Lemma 2 is applicable to prove the validity of (1.30).

We have assigned a scalar ⟨T⟩ to any vector operator T^μ_ν. However, we can make a stronger statement by assigning n scalars σ_λ(T) (λ = 1, 2, ..., n) by

T^λ_λ φ = σ_λ(T) φ   (no summation over λ).   (2.16)

The reason for the validity of (2.16) is the fact that the state T^λ_λ φ, for a fixed value of λ, has exactly the same highest weight f_μ, as we may verify easily. Therefore, ⟨T⟩ is given by

⟨T⟩ = Σ_{λ=1}^n σ_λ(T).   (2.17)

Also, we can show the validity of

T^μ_ν φ = δ^μ_ν σ_ν(T) φ   for μ ≥ ν.   (2.18)

Moreover, if we have f_μ = f_ν for some values of μ and ν, then (2.18) is also applicable for such pairs of μ and ν even though we may have μ < ν. This is a simple consequence of (2.3) and (2.4), since we have

T^μ_ν φ = −[A^μ_ν, T^ν_ν] φ = {T^ν_ν − σ_ν(T)} A^μ_ν φ = 0

in view of (2.3) or (2.4) for μ ≠ ν.

Lemma 3: Suppose that T^μ_ν is a vector operator; then we have first

AT = TA,   i.e.,   (AT)^μ_ν = (TA)^μ_ν,   (2.19)

and second

σ_μ(AT) = Σ_{ν=1}^n K_{μν} σ_ν(T),   (2.20)

where K_{μν} (μ, ν = 1, 2, ..., n) is defined by

K_{μν} = 0,   μ > ν,
K_{μν} = l_μ,   μ = ν,   (2.21)
K_{μν} = −1,   μ < ν.

In matrix form, K is the n×n upper triangular matrix with diagonal entries l_1, l_2, ..., l_n and with every entry above the diagonal equal to −1.
First, Eq. (2.19) can be proven by computing

[Σ_{λ=1}^n (AA)^λ_λ, T^μ_ν] = 2(AT)^μ_ν − 2(TA)^μ_ν

and noting the fact that the Casimir operator Σ_{λ=1}^n (AA)^λ_λ is a multiple of the unit matrix E.

Next, we compute

(AT)^μ_μ φ = Σ_{λ=1}^n A^λ_μ T^μ_λ φ,

where we use (2.18). The terms with λ < μ are zero, while the term λ = μ gives f_μ σ_μ(T) φ by (2.2) and (2.16). For λ > μ, we move A^λ_μ to the right of T^μ_λ; the term T^μ_λ A^λ_μ φ so produced is zero because of (2.3), and (1.7) supplies the remaining commutator contributions. Collecting everything and using (1.3), these are rewritten as (2.20) with (2.21).

Now, replacing the vector operator T by AT in (2.20), with repeated uses of (2.20), we obtain

σ_μ(A^j T) = Σ_{ν=1}^n (K^j)_{μν} σ_ν(T),   (2.22)

where K^j is the jth power of the n×n matrix K (not the N×N matrix!). Especially, if we set T = I in (2.22) and sum over μ, then we compute

M_j = ⟨A^j⟩ = Σ_{μ,ν=1}^n (K^j)_{μν}.   (2.23)

We can diagonalize the n×n matrix K easily as

(R^{-1} K R)_{μν} = l_μ δ_{μν},   (2.24)

where the explicit form of the diagonalizing matrix R is given by

(2.25)

and that of its inverse by

(2.26)

In (2.25) and (2.26), we interpret products such as Π_{k=μ}^{ν−1} for ν − 1 < μ and Π_{k=μ+1}^{ν} for the case μ + 1 > ν to be one. Also, in (2.26), the product on k omits the singular point k = μ. From (2.23), (2.24), (2.25), and (2.26), we can derive the formula (1.15), if we note the identities

(2.27)

Again in (2.27), the product on k omits the singular point k = μ.

We are now in a position to prove that the identities derived in the present section are the minimal ones. Suppose that f(z) is a polynomial of a complex variable z,

f(z) = Σ_{j=0}^p c_j z^j.

Then we can define a vector operator f(A) by

f(A) = Σ_{j=0}^p c_j A^j.

From (2.22) and (2.24), we compute

σ_μ(f(A) T) = Σ_{ν=1}^n [f(K)]_{μν} σ_ν(T).   (2.28)

Suppose that we have f(A) = 0 identically. Then, this gives σ_μ[f(A)] = 0. By setting T = I in (2.28) and noting (2.25) and (2.27), this leads to a linear condition on the values f(l_ν), valid for all μ = 1, 2, ..., n. First let us set μ = n, which gives f(l_n) = 0. Next, we choose μ = n − 1 and find f(l_{n−1}) = 0 unless l_{n−1} = l_n + 1, i.e., f_{n−1} = f_n. Continuing, we discover f(l_μ) = 0 always unless we have f_μ = f_{μ+1}. Therefore, we have f(l_μ) = 0 for nonredundant values of f_μ. This proves that (1.16) is the minimal polynomial if all f_μ are distinct. Similarly, for the special case (1.18), Eq. (1.19) is the minimal polynomial which A satisfies.

Finally, we shall show that any vector operator must be a linear combination of the A^j. To this end, we prove the following lemma.

Lemma 4: Let T^μ_ν be a vector operator. Then the following three statements are equivalent:

(i) T^μ_ν = 0 identically,
(ii) σ_μ(T) = 0 for all μ = 1, 2, ..., n,
(iii) ⟨A^j T⟩ = 0 for all j = 0, 1, 2, ....

Obviously (i) leads to (ii) trivially, while (iii) follows from (ii) because of (2.22) and (2.17). Conversely, suppose that (iii) is valid. Then, this implies that we have ⟨f(A)T⟩ = 0 for an arbitrary polynomial f(z). We can always find a polynomial f(z) such that f(l_λ) = 0 for all λ ≠ μ but f(l_μ) = 1 for any given value of μ. Then, it is easy to check that this leads to (ii). Now, we come to the most difficult part, that (ii) implies (i). First of all, we note that (ii) leads immediately to

σ_μ[f(A)T] = 0,   ⟨f(A)T⟩ = 0   (2.29)

for an arbitrary polynomial f(z), if we use (2.28). Then, we can prove that T^μ_ν satisfies

(A − l_1)(A − l_2) ··· (A − l_{n−1}) T = 0,   (2.30)

where we have for simplicity omitted the presence of the unit matrix I in front of the l_μ's. Note that in comparison to (1.16), this simply replaces the last factor A − l_n by T. Moreover, if we have f_μ = f_ν for some pair μ and ν with μ ≠ ν, we can omit all those redundant factors as in (1.19).
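Relations (2.21)-(2.23) are easy to check numerically. For the defining IR of U(3) one has l = (3, 1, 0), and the vector-product powers give A² = 3A, hence M_0 = 3, M_1 = 1, M_2 = 3, M_3 = 9. The sketch below (illustrative only; the specific IR is our choice) recovers the same values from the n×n matrix K alone, and compares them with the closed formula (1.15):

```python
import numpy as np

# Defining IR of U(3): signature (1,0,0), l = (3,1,0) from (1.3)
l = [3, 1, 0]
n = len(l)

# The matrix K of (2.21): l_mu on the diagonal, -1 above it, 0 below
K = np.diag(np.array(l, dtype=float))
for mu in range(n):
    for nu in range(mu + 1, n):
        K[mu, nu] = -1.0

def M(j):
    """Gel'fand invariant eigenvalue M_j via (2.23): sum of all entries of K^j."""
    return np.linalg.matrix_power(K, j).sum()

# Values obtained directly from A^2 = 3A in this IR (see (1.22) with l_1 = 3, l_n = 0)
assert np.isclose(M(0), 3) and np.isclose(M(1), 1)
assert np.isclose(M(2), 3) and np.isclose(M(3), 9)

def M_closed(j):
    """Closed formula (1.15): M_j = sum_lam l_lam^j prod_{nu != lam} [1 - 1/(l_lam - l_nu)]."""
    total = 0.0
    for lam in range(n):
        term = float(l[lam]) ** j
        for nu in range(n):
            if nu != lam:
                term *= 1.0 - 1.0 / (l[lam] - l[nu])
        total += term
    return total

assert all(np.isclose(M(j), M_closed(j)) for j in range(6))
```

The agreement of the two routines for all j is precisely the content of the diagonalization argument (2.23)-(2.27).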
To prove (2.30), we shall define vector operators T^μ_ν(α) by

T(n) = T,   α = n,
T(α) = (A − l_α)(A − l_{α+1}) ··· (A − l_{n−1}) T,   α < n.

Then, we can prove, by induction on decreasing values of α,

T^μ_ν(α) φ = 0   for μ ≥ α.

The proof is exactly the same as in Lemma 1, if we note (2.18), (2.19), and (2.29). Then, setting α = 1, we find (2.30) because of Lemma 2. For the case that we have f_μ = f_ν, we can omit redundant factors by the same reasoning.

Next, we shall show that T also satisfies another identity,

(A − 1 − l_2)(A − 1 − l_3) ··· (A − 1 − l_n) T = 0.   (2.31)

Again, we can omit all redundant factors in (2.31) if two of the f_μ and f_ν coincide. The proof of (2.31) is slightly more complicated. To this end, we define a new vector product S∘T for two vector operators S^μ_ν and T^μ_ν by

(S∘T)^μ_ν = Σ_{λ=1}^n S^μ_λ T^λ_ν.   (2.32)

Then, setting

T(1) = T,   α = 1,
T(α) = (A + 1 − f_2) ∘ (A + 2 − f_3) ∘ ··· ∘ (A + α − 1 − f_α) T,   α > 1,   (2.33)

we can now prove, by induction on increasing values of α,

T^μ_ν(α) φ = 0   for α ≥ ν.   (2.34)

Setting α = n and using Lemma 2, this gives

T^μ_ν(n) = 0.   (2.35)

Now, we can rewrite the product (2.32) in terms of the old product (1.8). By noting (2.29), this then leads to (2.31). Another way of proving (2.35) is to set l̃_μ = f_μ + 1 − μ and to use

(A − l̃_2) ∘ (A − l̃_3) ∘ ··· ∘ (A − l̃_n) T = 0

in analogy to (2.30). This is nothing but the relation (2.35).

Now, since T satisfies both Eqs. (2.30) and (2.31), there must be a minimal polynomial f(A) satisfying

f(A) T = 0.

Then, using the standard algorithm, we conclude that f(z) must divide the two polynomials

g_1(z) = (z − l_1)(z − l_2) ··· (z − l_{n−1}),
g_2(z) = (z − 1 − l_2)(z − 1 − l_3) ··· (z − 1 − l_n).

However, g_1(z) and g_2(z) have no common factor, noting that l_j − l_k can never assume the value 1 there; this is because in reality we can omit all redundant factors in both g_1(z) and g_2(z) if we have f_μ = f_ν. Therefore, we conclude that f(z) must be a constant, and we have T^μ_ν = 0 identically. This proves (i).

Our Lemma 4 implies that the subspace of the Hilbert space H orthogonal to H_0 is identically null. Hence, we find H = H_0, as we stated in Sec. 1. In other words, all vector operators are linear combinations of the A^j. This fact is important in deriving the SU(3) mass formula.^5

We remark that, by means of the new vector product, we can derive an identity of the type (1.16) directly. Setting

Ā^μ_ν = −A^ν_μ,   (2.36)

then Ā^μ_ν is the generator of the complex conjugate representation with signature (−f_n, −f_{n−1}, ..., −f_1). Hence, we must have

(A − f_1) ∘ (A + 1 − f_2) ∘ (A + 2 − f_3) ∘ ··· ∘ (A + n − 1 − f_n) = 0.   (2.37)

This must coincide with (1.16) if we convert the new product into the old one. We can explicitly verify this fact for the case n = 3. Of course, our formulas (1.16) and (2.37) agree with the result of Ref. 5 for the special case n = 3.

The present method may be applicable to more general Lie algebras. We may note that an analog of (1.29) exists also for the n-dimensional orthogonal group O(n), whose generators J_{μν} = −J_{νμ} satisfy

[J_{μν}, J_{αβ}] = δ_{να} J_{μβ} − δ_{νβ} J_{μα} + δ_{μβ} J_{να} − δ_{μα} J_{νβ}.   (2.38)

For the spinor representation of the O(n) group, we can easily verify a special relation

(2.39)

As a matter of fact, this relation is related to various identities^{17} found for the nuclear boson expansion method, where the relevant Lie algebra is B_n, corresponding to the O(2n + 1) group. Also, some interesting identities among O(n) generators have been noticed by several authors.^{18}
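Equation (2.39) is not legible in this copy, but the kind of quadratic relation the text refers to can be checked explicitly in the smallest case n = 3, taking the Pauli matrices as gamma matrices and adopting the normalization J_{μν} = ¼[γ_μ, γ_ν] (this normalization is an assumption of the example, not taken from the paper). With that choice, one finds numerically that Σ_λ J_{μλ} J_{λν} = ½(n − 2) J_{μν} + ¼(n − 1) δ_{μν}, an analog of (1.29), and that the commutation relations (2.38) hold:

```python
import numpy as np

# Pauli matrices serve as gamma matrices for n = 3
g = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
n = 3
I2 = np.eye(2)

# Spinor generators: J_mu,nu = (1/4)[gamma_mu, gamma_nu]  (assumed normalization)
J = [[0.25 * (g[mu] @ g[nu] - g[nu] @ g[mu]) for nu in range(n)] for mu in range(n)]

d = lambda a, b: 1.0 if a == b else 0.0

# Commutation relations (2.38)
for mu in range(n):
    for nu in range(n):
        for al in range(n):
            for be in range(n):
                lhs = J[mu][nu] @ J[al][be] - J[al][be] @ J[mu][nu]
                rhs = (d(nu, al) * J[mu][be] - d(nu, be) * J[mu][al]
                       + d(mu, be) * J[nu][al] - d(mu, al) * J[nu][be])
                assert np.allclose(lhs, rhs)

# Quadratic relation analogous to (1.29), valid in the spinor representation
for mu in range(n):
    for nu in range(n):
        Q = sum(J[mu][lam] @ J[lam][nu] for lam in range(n))
        assert np.allclose(Q, 0.5 * (n - 2) * J[mu][nu] + 0.25 * (n - 1) * d(mu, nu) * I2)
```

The closure of Σ_λ J_{μλ} J_{λν} on J_{μν} and δ_{μν} alone, with no new operator appearing, is the structural point being made in the text.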
*Work supported in part by the U.S. Atomic Energy Commission.
1 M. Gell-Mann and Y. Ne'eman, The Eightfold Way (Benjamin, New York, 1964).
2 F.J. Dyson, Symmetry Groups in Nuclear and Particle Physics (Benjamin, New York, 1966).
3 E.g., P. Kramer and M. Moshinsky, in Group Theory and Its Applications, edited by E.M. Loebl (Academic, New York, 1968), p. 339.
4 J.D. Louck and L.C. Biedenharn, J. Math. Phys. 11, 2368 (1970). The notations of this paper are related to ours by ⟨I_j(n)⟩ = M_j, m_{λn} = f_λ, and p_{λn} = l_λ.
5 Our notation is the same as in S. Okubo, Prog. Theor. Phys. (Kyoto) 27, 949 (1962), except for a sign change of A^μ_ν and for a slight modification of the two-vector product.
6 H. Weyl, The Classical Groups (Princeton U.P., Princeton, N.J., 1939); M. Hamermesh, Group Theory and Its Application to Physical Problems (Addison-Wesley, Reading, Mass., 1962).
7 L.C. Biedenharn, J.D. Louck, E. Chacón, and M. Ciftan, J. Math. Phys. 13, 1957 (1972); L.C. Biedenharn and J.D. Louck, ibid. 13, 1985 (1972); J.D. Louck and L.C. Biedenharn, ibid. 14, 1336 (1973).
8 S.P. Rosen, J. Math. Phys. 5, 289 (1964).
9 S. Okubo, J. Phys. Soc. Japan 19, 1509 (1964).
10 S. Okubo, University of Rochester Preprint UR-474 (1974) (unpublished).
11 N. Mukunda, J. Math. Phys. 8, 1069 (1967); J.D. Louck, ibid. 6, 1786 (1965).
12 E.g., A.A. Abrikosov, L.P. Gor'kov, and I.E. Dzyaloshinsky, Methods of Quantum Field Theory in Statistical Physics (Prentice-Hall, New York, 1963), Sec. 1.
13 T. Holstein and H. Primakoff, Phys. Rev. 58, 1098 (1940).
14 K. Tanabe and K. Sugawara-Tanabe, Phys. Lett. B 34, 575 (1971); Nucl. Phys. A 208, 317 (1973).
15 S.C. Pang, A. Klein, and R.M. Dreizler, Ann. Phys. (N.Y.) 49, 477 (1968).
16 S.Y. Li, A. Klein, and R.M. Dreizler, J. Math. Phys. 11, 975 (1970).
17 S. Okubo, Phys. Rev. C 10, 2048 (1974).
18 J.D. Louck, Am. J. Phys. 31, 378 (1963); J.D. Louck and H. Galbraith, Rev. Mod. Phys. 44, 540 (1972); E. Fischbach, J.D. Louck, M.M. Nieto, and C.K. Scott, J. Math. Phys. 15, 60 (1974).