Physics 224 – Winter 2009
F. H. Shu
Lectures 2 and 3
Reaction Equilibria and Einstein Coefficients
Law of Mass Action
The law of mass action is fundamental to any reaction (chemical or nuclear) that occurs in LTE. Consider a reaction in which integer numbers, say, 3 and 2, of particles A and B combine to form integer numbers, say, 2 and 1, of particles C and D:
$$3A + 2B = 2C + D.$$
In thermodynamic equilibrium, the forward and backward reactions are in balance, which
is why we write the reaction with an equal sign rather than with an arrow. A more
general notation moves everything to the left-hand side and reads
$$\sum_a \nu_a X_a = 0,$$
where $X_a$ is the particle species of type $a$, and $\nu_a$ is an integer, positive or negative depending on whether we think of the species as a reactant or as a product. In the previous example, if we label A, B, C, D as types 1, 2, 3, 4, respectively, then $\nu_1 = 3$, $\nu_2 = 2$, $\nu_3 = -2$, $\nu_4 = -1$. Notice, however, that a flip in sign of what we consider as reactants and products does not affect the final law of mass action, since the above equation can be multiplied by minus one without changing its validity.
In equilibrium, the entropy S of a closed system attains a maximum value if
the system is kept at constant volume V and energy E. In keeping with the usual
conventions of thermodynamics (see, e.g., the book by Callen), we have changed the
notation here so that E is the total energy, not the energy per unit volume as in Lecture 6.
We consider the closed system to have a fixed total mass. When dealing with LTE rather
than TE, the closest analog to what we are doing here is therefore the energy per unit
mass that we denoted in Lecture 6 as E . In any case, it turns out that the maximum
entropy principle implies the minimum energy principle: the energy E of a closed system
attains a minimum value if the system is kept at constant volume V and entropy S. The proof proceeds as follows. Suppose the energy E were not a minimum. Then it would be possible to extract energy from the system at constant entropy S by doing PdV work. That work can be converted to heat TdS and added back to the system to bring it to the original level of energy E, after which the system is allowed to expand adiabatically back to its original volume V. The system now has a higher level of entropy S for the same E and V, which
contradicts the assumption that the original system had a maximum value of the entropy
S. Thus, it must not be possible to lower the energy E for given S and V by doing PdV
work, i.e., the original system had the minimum possible value of E.
In many experimental and natural situations, what are kept constant are not
the entropy S and volume V, but the pressure P and temperature T (for example, a small
volume inside a star in mechanical and local thermal equilibrium with its surroundings).
In such a case, the minimum energy principle applies not to the internal energy E but to
the Gibbs energy G, defined by the Legendre transformation:
$$G = E - TS + PV.$$
The internal energy E satisfies the fundamental equation of thermodynamics,
$$dE = T\,dS - P\,dV + \mu\,dN,$$
where the last term accounts for the change in E if the number N of particles changes,
with the coefficient $\mu$, called the “chemical potential,” being defined as an intensive
variable complementary to the extensive variable N by analogy to the corresponding
intensive coefficients T and P that are complementary to the extensive S and V. For
homogeneous single-component systems, the particle number N does not change, so the
above equation reduces to the simpler form used in our prior discussion. The third term
µdN acquires importance only when we have reactions in multi-component systems. In
any case, when it is T and P which are kept constant rather than S and V, what is
minimized at equilibrium is not E, but G. The demonstration of the minimum G principle
is analogous to the proof of the minimum E principle, so we forego it here. The important
point for us is that taking the differential of the expression for G and substitution of the
expression for dE yields
$$dG = -S\,dT + V\,dP + \mu\,dN,$$
i.e., the natural variables for G are T, P, and N, which is the original motivation for the
Legendre transformation defining G.
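In detail, $dG = dE - T\,dS - S\,dT + P\,dV + V\,dP$; substituting $dE = T\,dS - P\,dV + \mu\,dN$ cancels the $T\,dS$ and $P\,dV$ terms and leaves the expression for dG above.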
When the system consists of multiple components that we denote by a subscript $a$, with partial entropies and pressures that we denote as $S_a$ and $P_a$, and with individual chemical potentials per particle and particle numbers that we denote by $\mu_a$ and $N_a$, we may write the above relationship for G as
$$dG = -\left(\sum_a S_a\right) dT + V\, d\!\left(\sum_a P_a\right) + \sum_a \mu_a\, dN_a.$$
At fixed $T$ and $P \equiv \sum_a P_a$, chemical (or nuclear) equilibrium requires G to be at a minimum,
$$dG = \sum_a \mu_a\, dN_a = 0;$$
i.e., G is to be minimized subject to variations of the individual particle numbers that may result when chemical reactions produce some particle types at the expense of other particle types. The individual variations $dN_a$ are not independent because the appearance or disappearance of individual particles must satisfy Dalton's integer proportions as defined by the original reaction equation; i.e.,
$$dN_1 : dN_2 : dN_3 : \ldots = \nu_1 : \nu_2 : \nu_3 : \ldots$$
Another way to express the above is
$$dN_a = \nu_a\, d\xi,$$
where $d\xi$ is a common scale factor for each $a$ and is the only independent variable of the problem. Substitution of the above expression into the requirement for the minimization of G for arbitrary $d\xi$ now yields the result that at chemical equilibrium,
$$\sum_a \nu_a\, \mu_a = 0,$$
which is the law of mass action.
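For the sample reaction above, with $\nu_1 = 3$, $\nu_2 = 2$, $\nu_3 = -2$, $\nu_4 = -1$, the condition reads explicitly $3\mu_A + 2\mu_B - 2\mu_C - \mu_D = 0$.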
The law of mass action states that at chemical equilibrium, the chemical potentials of the individual species, each multiplied by its Dalton integer in the fundamental reaction equation, must sum to zero. Commit this result to memory. The chemical potential $\mu$ is generally a function of the number density of the species and the ambient temperature T. The law of mass action therefore provides a constraint on the relative number densities (or concentrations) of the reactants and products of a reaction in equilibrium at temperature T. The calculation of the chemical potentials of different species as a function of number density and temperature is a relatively simple task in the gaseous state when we deal with well-separated molecules, atoms, or nuclei with well-defined internal and external degrees of freedom.
Relativity makes the principle even simpler, since the energy of a particle is $\epsilon = mc^2$, where m is the particle mass, whether the particle is an elementary particle or a compound particle. (Do not confuse our use here of $\epsilon$ and m with the nuclear energy generation rate per unit mass and the mean particle mass in a cosmic mixture – there are only so many letters in the Latin and Greek alphabets.) In other words, the binding energy between 2 hydrogen atoms plus an oxygen atom and a water molecule is simply $c^2$ times the mass difference between 2H+O and H2O. The chemical binding energy simply represents the “mass deficit” of the water molecule in comparison with its separated atoms moved to infinity. We usually think in these terms only for nuclear energy, but, of course, relativity theory is universal. If one includes general relativity, what one calls the binding energy of the Earth to the Sun, which is the sum of the gravitational potential energy and the kinetic energy in Newton's way of thinking, is in Einstein's universe just $c^2$ times the mass difference of the Earth and Sun imagined infinitely separated compared to the mass that an observer at infinity would ascribe to the actual Earth-Sun system.
In relativity, there is no such thing as gravitational potential energy, kinetic energy, chemical binding energy, etc., only $\epsilon = mc^2$. The reader may be surprised by the claim that there is no such thing as kinetic energy, since the conventions of modern physics are to define kinetic energy as the difference between the actual total energy of a particle and its rest energy. This is okay for truly elementary particles, but it runs into conceptual problems for compound particles. Consider the proton. Does it have a rest mass? All textbooks say so, but a proton is made of moving quarks. What are the relative contributions to the “rest mass” of the proton of the “kinetic energies” of the moving quarks, their rest masses, and their QCD interactions? It becomes frightfully complex quickly, doesn't it? Yet, Einstein solves the practical problem for us in a clean stroke. What is the mass of the proton? Measure its gravitational mass. And if it's moving or in an excited internal state, it will have a higher gravitational mass than if it is stationary. Such is the power and liberating effect of Einstein's formula, $\epsilon = mc^2$.
Relativistic Approach to Notion of Chemical Potential
The occupation number of fermions (upper sign) or bosons (lower sign) with energy $\epsilon$ and chemical potential $\mu$ (not to be confused with the reduced mass of the reactants) at temperature T is
$$\mathcal{N}(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} \pm 1}.$$
Notice that $\epsilon$ is a property of an individual particle whereas $\mu$ is a property of the distribution of particles; i.e., the latter is a function of the density and temperature while the former is not. In the interiors of normal stars, the densities are not high enough that we need to worry about the $\pm 1$, i.e., the distinction between fermions and bosons; nor are the temperatures high enough that we need worry about relativistic effects. Thus, the relationship among particle energy, momentum, and mass is given by the usual nonrelativistic formula, except for an additive rest energy:
$$\epsilon = m_0 c^2 + \frac{p^2}{2m_0}.$$
The chemical potential $\mu$ is now obtained by requiring that we get the number density n when we make an appropriate integral of the occupation number:
$$n = g\, h^{-3} \int_0^\infty \mathcal{N}(\epsilon)\, 4\pi p^2\, dp = g\, \lambda_T^{-3}\, e^{(\mu - m_0 c^2)/kT},$$
where $\lambda_T$ is the thermal de Broglie wavelength associated with a particle of momentum $(2\pi m_0 kT)^{1/2}$:
$$\lambda_T \equiv \frac{h}{(2\pi m_0 kT)^{1/2}},$$
and g is the number of equivalent internal states with the same rest mass $m_0$. (For example, if we are talking about the ground state of a free proton, g would equal 2 for the two spin states of an s = 1/2 particle. For more complex nuclei, we need more information on nuclear shell models than we have learned in this book in order to calculate the degeneracy factor g, a detail that we shall forego since the g's are usually factors of only a few.) Thus, the chemical potential of a gas of particles with rest mass $m_0$ at density n and temperature T is given by
$$\mu = m_0 c^2 + kT \ln\!\left(n \lambda_T^3 / g\right).$$
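As a minimal numerical sketch of these two formulae (in Python, with assumed, purely illustrative values for the density and temperature of a gas of neutral hydrogen atoms and g = 2), one can evaluate $\lambda_T$ and the chemical potential measured relative to the rest energy, $\mu - m_0 c^2 = kT \ln(n\lambda_T^3/g)$:

    import numpy as np

    # CGS constants
    h = 6.626e-27      # Planck constant [erg s]
    k = 1.381e-16      # Boltzmann constant [erg/K]
    m_H = 1.674e-24    # hydrogen-atom mass [g]

    def lambda_T(m0, T):
        """Thermal de Broglie wavelength h / (2 pi m0 k T)^(1/2) [cm]."""
        return h / np.sqrt(2.0 * np.pi * m0 * k * T)

    def mu_minus_rest(n, T, m0, g):
        """Chemical potential minus rest energy, kT ln(n lambda_T^3 / g) [erg]."""
        return k * T * np.log(n * lambda_T(m0, T)**3 / g)

    # Assumed illustrative conditions: n = 1e17 cm^-3, T = 6000 K, g = 2
    n, T = 1e17, 6000.0
    print("lambda_T      =", lambda_T(m_H, T), "cm")
    print("(mu - m0c2)/kT =", mu_minus_rest(n, T, m_H, 2) / (k * T))

The large negative value of $(\mu - m_0 c^2)/kT$ simply reflects that $n\lambda_T^3 \ll 1$ for such a dilute, nondegenerate gas.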
Boltzmann’s Law
We first use these formulae to derive Boltzmann's law (which is the analog of what Goldstein said in another context, akin to “using a sledgehammer to crack a peanut”). Consider the collisional excitation and de-excitation equilibrium:
$$(i,1) + e^- \rightleftharpoons (i,n) + e^-.$$
The relativistically correct rest energy of an ion in stage i and excited electronic level n with energy $E_n(i)$ above the ground state n = 1 is
$$m_{0n}(i)\, c^2 = m_{01}(i)\, c^2 + E_n(i) - E_1(i).$$
Because the free electron appears on both the left and right of the excitation/de-excitation equilibrium, it does not enter into the law of mass action (in other words, it doesn't matter what is colliding with ion i, as long as it has a thermal distribution; it could be blackbody photons instead, for example):
$$\mu_1 - \mu_n = 0 = -(E_n - E_1) + kT \ln\!\left(\frac{g_n\, n_1\, \lambda_{T1}^3}{g_1\, n_n\, \lambda_{Tn}^3}\right),$$
which holds for either i or i+1. Because the electronic energies are much smaller than the rest energies of the ion, we can set $\lambda_{T1}^3/\lambda_{Tn}^3 = 1$ and derive
$$\frac{n_n}{n_1} = \frac{g_n}{g_1}\, e^{-(E_n - E_1)/kT},$$
which is Boltzmann’s law. If we sum over all n, we can also express the above as
$$n_n(i) = n_i\, \frac{g_n\, e^{-[E_n(i) - E_1(i)]/kT}}{Z_i}, \qquad \text{where } Z_i \equiv \sum_{n=1}^{\infty} g_n\, e^{-[E_n(i) - E_1(i)]/kT},$$
$$n_n(i+1) = n_{i+1}\, \frac{g_n\, e^{-[E_n(i+1) - E_1(i+1)]/kT}}{Z_{i+1}}, \qquad \text{where } Z_{i+1} \equiv \sum_{n=1}^{\infty} g_n\, e^{-[E_n(i+1) - E_1(i+1)]/kT},$$
where $n_i$ and $n_{i+1}$ are the total number densities of ions in stages i and i+1.
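As a minimal sketch of Boltzmann's law in practice (Python, with an assumed temperature; the weights $g_n = 2n^2$ and energies $E_n = I_i(1 - n^{-2})$ with $I_i \approx 13.6$ eV are the hydrogen values used later in these notes):

    import numpy as np

    k_eV = 8.617e-5   # Boltzmann constant [eV/K]
    I_H  = 13.6       # hydrogen ionization energy from the ground state [eV]

    def boltzmann_ratio(n, T):
        """Population of level n relative to n = 1 for atomic hydrogen."""
        g_n, g_1 = 2 * n**2, 2          # statistical weights g_n = 2 n^2
        E_n = I_H * (1.0 - 1.0 / n**2)  # excitation energy above the ground state [eV]
        return (g_n / g_1) * np.exp(-E_n / (k_eV * T))

    T = 1.0e4  # assumed temperature [K]
    for n in (2, 3, 4):
        print(f"n = {n}:  n_n/n_1 = {boltzmann_ratio(n, T):.2e}")

Even at $T \sim 10^4$ K the excited-level populations are tiny compared with n = 1, a point that the partition-function discussion at the end of these notes makes more formally.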
Einstein Coefficients for Radiative Transitions Between Discrete Levels
Consider now the condition of detailed balance when we have atoms or
molecules in thermodynamic equilibrium with a blackbody radiation field $B_\nu(T)$:
$$n_u \left[A_{ul} + B_{ul}\, B_\nu(T)\right] = n_l\, B_{lu}\, B_\nu(T).$$
In this notation, Boltzmann's law, with $\epsilon_u - \epsilon_l = h\nu$, reads
$$\frac{n_u}{n_l} = \frac{g_u}{g_l}\, e^{-h\nu/kT}.$$
Division by $n_l$ in the equation of detailed balance and the substitution of the formula for the Planck function $B_\nu(T)$ yield, after multiplication through by the terms containing exponentials,
$$\frac{g_u}{g_l}\left[A_{ul}\left(1 - e^{-h\nu/kT}\right) + \left(\frac{2h\nu^3}{c^2}\right) B_{ul}\, e^{-h\nu/kT}\right] = \left(\frac{2h\nu^3}{c^2}\right) B_{lu}.$$
Being atomic or molecular constants, $A_{ul}$, $B_{ul}$, and $B_{lu}$ cannot depend on T; thus, the terms with and without the exponential factor $e^{-h\nu/kT}$ must be independently set equal to each other. This requirement secures the identifications
$$A_{ul} = \frac{2h\nu^3}{c^2}\, B_{ul} = \frac{g_l}{g_u}\left(\frac{2h\nu^3}{c^2}\right) B_{lu}.$$
The above relations were established by Einstein in 1916. Note especially the relationship $B_{ul}/B_{lu} = g_l/g_u$ that we cited without proof in Lecture 5 when we corrected true absorption for stimulated emission. The genius in Einstein's derivation reproduced above lay in anticipating that the form of Planck's law, with $e^{h\nu/kT} - 1$ in the denominator of the photon occupation number instead of the classical inverse Boltzmann factor $e^{h\nu/kT}$ by itself, implies that the radiative emission and absorption processes contain something extra compared to superelastic and inelastic collisions between classical material particles. The extra $-1$ in the denominator must be associated with a process of stimulated emission that exists as an extra process on top of spontaneous emission and true absorption.
Moreover, whatever the quantum laws governing radiative transitions might be, they have to be consistent with the two equations above relating the three coefficients $A_{ul}$, $B_{ul}$, and $B_{lu}$. If one knows one coefficient, one knows all three. It would take another decade before the quantum mechanics of Heisenberg and Schrödinger for the quantized levels of atoms, and the time-dependent perturbation theory of Dirac for radiative transitions between different discrete levels, would verify the Einstein relations by direct computation. Moreover, of the two methods, Einstein's argument from the principle of detailed balance is the more general since it holds to all perturbation orders.
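As a minimal sketch of that statement (Python; the Lyman-alpha A-value and statistical weights are the hydrogen numbers quoted later in these notes, and the B's follow the mean-intensity convention adopted below), knowing $A_{ul}$ immediately gives both B's:

    h = 6.626e-27   # Planck constant [erg s]
    c = 2.998e10    # speed of light [cm/s]

    def B_from_A(A_ul, nu, g_u, g_l):
        """Return (B_ul, B_lu) given A_ul, using A_ul = (2 h nu^3 / c^2) B_ul
        and B_ul / B_lu = g_l / g_u."""
        B_ul = A_ul * c**2 / (2.0 * h * nu**3)
        B_lu = (g_u / g_l) * B_ul
        return B_ul, B_lu

    # Lyman alpha: A_21 = 4.68e8 s^-1 (quoted later), nu ~ 2.47e15 Hz,
    # g_u = 8 for n = 2 and g_l = 2 for n = 1 (g_n = 2 n^2)
    B_ul, B_lu = B_from_A(4.68e8, 2.47e15, g_u=8, g_l=2)
    print(B_ul, B_lu)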
We warn the reader that the relationships between the Einstein coefficients hold in the forms written above only when they are defined in terms of $I_\nu$. If the absorbed and stimulated rates are written using $I_\lambda$ instead, then a factor of $c/\nu^2 = \lambda^2/c$ needs to be introduced in the relationship between the corresponding Einstein B's to compensate for the difference between $I_\nu$ and $I_\lambda$.
We now derive the relationship between Einstein's coefficient for true absorption and the radiative cross-section that an atom or molecule has for absorbing a photon of the appropriate frequency. In terms of the angle-independent cross-sections that we introduced in Lecture 5, the rate at which true absorption is taking place per unit time per unit volume equals
$$\oint d\Omega \int_0^\infty \frac{I_\nu}{h\nu}\, n_l\, \sigma_{\rm abs}(\nu)\, d\nu, \qquad (7.8)$$
where the division by the energy of the photon $h\nu$ is necessary to convert $I_\nu$ from a monochromatic specific intensity for energy into one for photon number, and where the integration over $d\Omega$ accounts for photons of the appropriate frequency propagating in all directions past the atoms. We suppose now that $\sigma_{\rm abs}(\nu)$ has the form given by the classical model for electronic transitions in atoms (see Chapter 14 of Shu, Radiation), except that we include a correction factor, called the f-value, $f_{lu}$, which corrects for quantum effects:
$$\sigma_{\rm abs}(\nu) = \frac{\pi e^2}{m_e c}\, f_{lu}\, L(\nu),$$
where $L(\nu)$ is a sharply peaked function about $\nu = \nu_{ul} \equiv (\epsilon_u - \epsilon_l)/h$ that integrates to unity when we perform the indicated integration over $d\nu$.
Often $L(\nu)$ is taken to be the Lorentzian profile (see eq. 23.18 of Shu, Radiation):
$$L(\nu) = \frac{\Gamma_{ul}/4\pi^2}{(\nu - \nu_{ul})^2 + (\Gamma_{ul}/4\pi)^2},$$
where $\Gamma_{ul}/4\pi$ is called the width of the transition and is given by quantum calculations for isolated atoms as the sum of the Einstein A's out of the levels u and l that, by the uncertainty principle, make the levels “fuzzy”:
$$\Gamma_{ul} = \sum_{n<u} A_{un} + \sum_{n<l} A_{ln}.$$
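A quick numerical check of the normalization (Python, with an arbitrary, assumed line frequency and damping rate):

    import numpy as np

    def lorentzian(nu, nu_ul, gamma_ul):
        """Normalized Lorentzian line profile L(nu)."""
        return (gamma_ul / (4.0 * np.pi**2)) / ((nu - nu_ul)**2 + (gamma_ul / (4.0 * np.pi))**2)

    nu_ul, gamma_ul = 2.47e15, 6.3e8          # assumed illustrative values [Hz, s^-1]
    half_width = gamma_ul / (4.0 * np.pi)     # the "width" named in the text
    nu = np.linspace(nu_ul - 1e4 * half_width, nu_ul + 1e4 * half_width, 2_000_001)
    area = np.trapz(lorentzian(nu, nu_ul, gamma_ul), nu)
    print(area)   # ~0.9999: the profile integrates to unity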
In general, the lifetimes of atoms against spontaneous radiative decay, even for the strongest processes, are long compared to the inverse frequency of the radiation that they emit, i.e., $A_{ul} \ll \nu_{ul}$, so that the typical atom emits photons of many wave periods before the energy difference of the upper and lower level is carried away, yielding a line profile that is well-defined (i.e., very narrow) about the line frequency $\nu_{ul}$. When the Lorentzian is a sharply peaked function of $\nu$, $L(\nu)$ behaves essentially as a Dirac delta function, and the absorption rate (7.8) becomes
$$n_l \left(\frac{\pi e^2}{m_e c}\right)\left(\frac{f_{lu}}{h\nu_{ul}}\right) \oint I_{\nu_{ul}}\, d\Omega.$$
The corresponding rate written in Einstein's notation is
$$n_l\, B_{lu}\, \frac{1}{4\pi} \oint I_{\nu_{ul}}\, d\Omega,$$
where we have introduced a factor of $1/4\pi$ because historically the volumetric rate associated with Einstein's B coefficient was defined in terms of the mean specific intensity. Cancelling out the common factors, we obtain upon equating the two expressions
$$B_{lu} = \left(\frac{4\pi^2 e^2}{m_e c}\right) \frac{f_{lu}}{h\nu_{ul}} = \frac{4\pi}{h\nu_{ul}} \int_{\Delta\nu} \sigma_{\rm abs}(\nu)\, d\nu. \qquad (7.9)$$
The first identification for $B_{lu}$ holds only for electronic transitions in atoms or ions; the second identification, where the integration occurs over the line width, holds more generally, for example, for rotational-vibrational transitions in molecules as well as their electronic transitions, as long as $\sigma_{\rm abs}(\nu)$ is an accurately measured or calculated cross-section for true absorption.
From Einstein's general relations, equation (7.9) yields an expression for the rate coefficient of spontaneous emission of electronic transitions in atoms or ions:
$$A_{ul} = \frac{g_l}{g_u}\left(\frac{8\pi^2 e^2 \nu_{ul}^2}{m_e c^3}\right) f_{lu}.$$
The total power radiated on average by an atom undergoing spontaneous emission is the Einstein A-value times the energy of the emitted photon $h\nu_{ul} = \hbar\omega_{ul}$:
$$P = \frac{g_l}{g_u}\, \frac{2 e^2 \omega_{ul}^2}{m_e c^3}\, f_{lu}\, \hbar\omega_{ul}.$$
We compare this expression with Larmor's formula for the radiative power of an electric dipole freely oscillating at natural radian frequency $\omega_0$ (see eq. 14.13 of Shu, Radiation):
$$P = \frac{2 e^2 \langle \ddot{x}^2 \rangle}{3c^3} = \frac{2 e^2 \omega_0^4}{3c^3} \langle x^2 \rangle.$$
Except for a dimensionless factor $3(g_l/g_u) f_{lu}$, if we identify $\omega_0$ with $\omega_{ul}$, we see that there is a correspondence between the quantum and classical expressions if we identify the expectation value for the oscillator energy (half in kinetic and half in potential energy), when first excited, to be totally radiated away by the atom or ion as a photon of energy $\hbar\omega_{ul}$:
$$m_e \omega_0^2 \langle x^2 \rangle \sim \hbar\omega_{ul}.$$
This correspondence suggests correctly that for normal (strong resonance) lines, the quantum-mechanical factor $3(g_l/g_u) f_{lu}$ is of order unity. However, in interstellar-medium astrophysics, one often deals with much weaker lines, particularly so-called “forbidden lines,” where the f-values are much less than unity.
It is conventional to express hydrogen-atom f-values for transitions from a lower level $\ell = n'$ to an upper level $u = n$ in terms of semi-classical values computed by Kramers times a quantum-mechanical correction $g_{nn'}$ called the Gaunt factor:
$$f_{nn'} = \frac{2^5}{3\sqrt{3}\,\pi}\, \frac{n'\, n^3}{(n^2 - n'^2)^3}\; g_{nn'}.$$
Carrying out the indicated multiplications, together with Rydberg's formula for hydrogen-line frequencies, we obtain
$$A_{nn'} = 1.57 \times 10^{10}\, \left[n^3 n' (n^2 - n'^2)\right]^{-1} g_{nn'}\ {\rm s}^{-1}.$$
For atomic hydrogen, the Gaunt factors are of order unity for large n and $n'$ since quantum mechanics becomes classical mechanics in the limit of large quantum numbers. Even for the transition $1 \rightarrow 2$, we have $g_{21} = 0.717$, which is not very different from unity. Numerical values for some common transitions are
Lyman alpha: $A_{21} = 4.68 \times 10^8\ {\rm s}^{-1}$; Lyman beta: $A_{31} = 5.54 \times 10^7\ {\rm s}^{-1}$;
Balmer alpha (H$\alpha$): $A_{32} = 4.39 \times 10^7\ {\rm s}^{-1}$; Balmer beta (H$\beta$): $A_{42} = 8.37 \times 10^6\ {\rm s}^{-1}$.
In fact, the Einstein A values can be computed analytically for all the transitions of
atomic hydrogen (see Prob. Set 5 of Shu, Radiation), so an approach via f-values is not
needed for such a simple quantum system. It is much more useful in application to
complex atoms and ions whose spectra are measured in the laboratory.
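As a minimal sketch (Python) of the Kramers expression for $A_{nn'}$ above, using the Gaunt factor $g_{21} = 0.717$ quoted in the text; Gaunt factors for other transitions would have to be supplied separately:

    def kramers_A(n, n_prime, gaunt):
        """Hydrogenic A-value [s^-1] from the formula quoted above:
        A_nn' = 1.57e10 * [n^3 n' (n^2 - n'^2)]^-1 * g_nn'."""
        return 1.57e10 * gaunt / (n**3 * n_prime * (n**2 - n_prime**2))

    # Lyman alpha (n = 2 -> n' = 1) with the Gaunt factor from the text
    print(kramers_A(2, 1, 0.717))   # ~4.7e8 s^-1, close to the tabulated 4.68e8 s^-1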
Application to Ionization Equilibrium – Saha’s Equation
Let us now consider the recombination-ionization equilibrium out of the
ground electronic states of i and i+1:
$$(i+1,1) + e^- \rightleftharpoons (i,1).$$
The law of mass action applied to this equilibrium reaction reads
$$\mu_1(i+1) + \mu_e - \mu_1(i) = 0 = I_i + kT \ln\!\left[\frac{g_1(i)\, n_e\, n_1(i+1)\, \lambda_{Te}^3}{g_e\, g_1(i+1)\, n_1(i)}\right],$$
where $g_e = 2$ for the two spin states of the free electron, and where we have taken the liberty of setting $\lambda_{T1}^3(i+1)/\lambda_{T1}^3(i) = 1$ on the approximation that the electron rest energy is also small in comparison with the ion rest energies. Thus, we have the equation
$$\left[\frac{n_1(i+1)}{g_1(i+1)}\right]\left[\frac{g_1(i)}{n_1(i)}\right] n_e = g_e\, \lambda_{Te}^{-3}\, e^{-I_i/kT}. \qquad (7.10)$$
Applying Boltzmann's law to n = 1 (the principle of detailed balance permits us to use any n or any sum of them), we may express the ground-state fractions as
$$\frac{n_1(i+1)}{g_1(i+1)} = \frac{n_{i+1}}{Z_{i+1}}, \qquad \frac{n_1(i)}{g_1(i)} = \frac{n_i}{Z_i},$$
which now allows us to write equation (7.10) in the usual form stated for Saha's equation:
$$\frac{n_{i+1}\, n_e}{n_i} = \frac{Z_{i+1}}{Z_i}\, g_e\, \lambda_{Te}^{-3}\, e^{-I_i/kT}. \qquad (7.11)$$
It is usual to adopt the convention of defining the ground electronic-energy levels as zero, $E_1(i) \equiv 0$ and $E_1(i+1) \equiv 0$, in the electronic partition functions.
Equation (7.11) has the following breakdown. The combination $n_{i+1} n_e / n_i$ on the left-hand side is what we expect – product of densities of reactants over product of densities of products – from the so-called “law of mass action” that we learn from chemistry for an equilibrium reaction of dilute gases of the form
$$(i+1) + e^- \rightleftharpoons i.$$
The right-hand side of equation (7.11) is a reaction “constant” that depends only on the temperature T. The term $e^{-I_i/kT}$ is the usual Boltzmann factor that favors the less-ionized state i in comparison with the more-ionized state i+1 because of the need to supply ionization energy $I_i$ to convert the former into the latter. The partition functions $Z_i$ and $Z_{i+1}$ are simply the generalization of the usual degeneracy factors to account for the number of ionic states that have the same energies, but now apportioned according to their Boltzmann distributions across excited electronic states. The factor $g_e \lambda_{Te}^{-3}$ represents the analogous quantity for the electrons, accounting for both their internal (spin only) degrees of freedom and their translational degrees of freedom due to thermal motions. The left- and right-hand sides must have the same dimension of inverse volume, which is a second justification for the extra factor $\lambda_{Te}^{-3}$ on the right-hand side of equation (7.11).
The thermal de Broglie wavelength $\lambda_{Te}$ characterizes the minimum scale over which thermal electrons will not experience the quantum presence of other electrons because of the Pauli exclusion principle. In other words, in order for the derivation that led to equation (7.11) to be valid, we require $n_e \lambda_{Te}^3 \ll 1$, so that the electrons are not degenerate in physical space when they have typical momenta $\sim (2\pi m_e kT)^{1/2}$. The inequality $n_e \lambda_{Te}^3 \ll 1$ is well satisfied for the relatively low densities that prevail in stellar atmospheres.
Assuming that the partition functions in equation (7.11) yield only factors of order unity (see below), we see that ionization to the next stage from i becomes relatively important when
$$\frac{e^{-I_i/kT}}{n_e \lambda_{Te}^3} \sim 1, \qquad \text{i.e., when } T \sim \frac{I_i/k}{-\ln(n_e \lambda_{Te}^3)}. \qquad (7.12)$$
Let us apply this equation to the partial ionization of hydrogen, where $I_i/k$ = 158,000 K. In a typical stellar atmosphere, $\ln(n_e \lambda_{Te}^3) \sim -15$; thus, the partial ionization of hydrogen is strongest when T ~ 11,000 K, which is the effective temperature at the top of the range for A stars (Lecture 3). Although we naively expect the partial ionization of hydrogen to occur only when the temperature approaches 158,000 K because of the binding-energy advantage of the neutral atom in its ground electronic configuration compared to the same in the ion, the actual temperature required to produce appreciable ionization in a stellar photosphere is more than an order of magnitude lower than the naive expectation because of the advantage gained statistically by an electron if it is freed to explore phase space. In some sense, it is easier to ionize an atom than it is to excite it. A tension exists between a preference for greater binding energy at low temperatures and a preference for more freedom to explore phase space at high temperatures.
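A minimal Python sketch of this estimate, for a pure-hydrogen gas with an assumed, illustrative total density (chosen so that $\ln(n_e\lambda_{Te}^3) \approx -15$ near $10^4$ K, as in the text), solves equation (7.11) with $Z_i \approx 2$, $Z_{i+1} = 1$, and $g_e = 2$, and shows the ionized fraction passing through one-half near the temperature estimated from (7.12):

    import numpy as np

    h, k, m_e = 6.626e-27, 1.381e-16, 9.109e-28   # CGS constants
    I_H = 13.6 * 1.602e-12                        # hydrogen ionization energy [erg]

    def saha_rhs(T):
        """Right-hand side of eq. (7.11) for hydrogen: (Z_ii/Z_i) g_e lambda_Te^-3 exp(-I/kT),
        taking Z_i ~ 2 (neutral H) and Z_ii = 1 (bare proton)."""
        lam = h / np.sqrt(2.0 * np.pi * m_e * k * T)   # electron thermal de Broglie wavelength
        return (1.0 / 2.0) * 2.0 * lam**-3 * np.exp(-I_H / (k * T))

    def ion_fraction(n_H, T):
        """Ionized fraction x = n_p/n_H, with n_e = n_p = x n_H (charge neutrality).
        Solves x^2 n_H / (1 - x) = saha_rhs(T) as a quadratic in x."""
        A = saha_rhs(T) / n_H
        return 0.5 * (-A + np.sqrt(A**2 + 4.0 * A))

    n_H = 1.0e15   # assumed total hydrogen density [cm^-3]
    for T in (8000.0, 10000.0, 12000.0, 15000.0):
        print(f"T = {T:7.0f} K   x = {ion_fraction(n_H, T):.3f}")

The only design choice is to close the system with charge neutrality and a fixed total hydrogen density, which turns (7.11) into a quadratic for the ionized fraction.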
Einstein-Milne Coefficients for Continuum Processes
While the derivation of Saha’s equation is fresh in our memory, we use the
result to derive the so-called Einstein-Milne coefficients, A(n) and B(n), associated with
spontaneous and induced recombination by the radiative emission of the energy
difference between an electron in the continuum and a bound state n of the hydrogen
atom. The relevant equation of detailed balance reads
$$n(\mathrm{H}[n])\, p_\nu(n)\, B_\nu(T)\, d\nu = \left[v f_e(v)\right] 4\pi v^2\, dv\; n_e\, n_p \left[A(n) + B(n)\, B_\nu(T)\right].$$
In the above, $p_\nu(n)$ is related to the photo-ionization cross-section $\sigma_\nu(n)$ of a hydrogen atom in level n, which we denote with the symbol H[n], for a photon of energy $h\nu$, by the equation
$$p_\nu(n) = 4\pi\, \frac{\sigma_\nu(n)}{h\nu}.$$
The function $f_e(v)$ is the Maxwell-Boltzmann distribution at temperature T of random velocities v of electrons e relative to protons p, whose number densities are, respectively, $n_e$ and $n_p$, which are recombining into atomic hydrogen in level n, with number density n(H[n]), spontaneously and by induced radiation with Einstein-Milne rate coefficients A and B, respectively. By energy conservation, we require
$$h\nu = \chi_n + m_e v^2/2,$$
where $\chi_n$ is the ionization potential from level H[n], and where we have not bothered to distinguish between the mass of the electron $m_e$ and its reduced value in the electron-proton system. Note that the last equation requires the following relationship between differentials: $h\, d\nu = m_e v\, dv$.
If we substitute into the equation of detailed balance the appropriate expressions for the Maxwell-Boltzmann distribution of the electron, the Planck function, and the Saha equation for the electron-proton-H[n] equilibrium, we get, after some cancellation of common factors,
$$p_\nu \left(\frac{2h\nu^3}{c^2}\right) = \frac{g_e g_p}{g_n}\left(\frac{m_e}{h}\right)^{2} 4\pi v^2 \left[A(n)\left(1 - e^{-h\nu/kT}\right) + B(n)\left(\frac{2h\nu^3}{c^2}\right) e^{-h\nu/kT}\right].$$
Since A, B, and $p_\nu$ are atomic constants independent of T, we require
$$A(n) = \frac{2h\nu^3}{c^2}\, B(n), \qquad B(n) = \frac{g_n h}{g_e g_p m_e}\left[\frac{\sigma_\nu(n)}{2\nu\,(h\nu - \chi_n)}\right], \qquad \text{with } \frac{g_n}{g_e g_p} = n^2. \qquad (7.13)$$
As usual, we are able to express any elementary rate coefficient for a radiative process in terms of the absorption cross-section for the process.
Apparent Divergence of Partition Functions
Naively, one might think that the partition functions $Z_i$ and $Z_{i+1}$, containing an infinite number of terms, might not sum to an order-unity quantity. Consider, for example, the case when i is the neutral hydrogen atom. In this case, the degeneracy factor $g_n = 2n^2$ and the energy level $E_n = I_i(1 - n^{-2})$; thus,
$$Z_i = \sum_{n=1}^{\infty} g_n\, e^{-E_n/kT} = 2\, e^{-I_i/kT} \sum_{n=1}^{\infty} n^2\, e^{I_i/n^2 kT},$$
which is a divergent expression since even the sum of $n^2$ from 1 to infinity diverges, and we have inside the sum something larger than $n^2$. The formal difficulty is that the number of bound Rydberg states increases dramatically as $n \rightarrow \infty$, so at any T, the freedom to explore the large amount of phase space available to a bound electron at large n would seemingly win out over any advantage for more binding energy at small n. If this formal result corresponded to reality, then any container of thermal hydrogen atoms would have most of the atoms occupying very high Rydberg states rather than having a large fraction existing, say, in the ground level n = 1. This is a patently false conclusion, so there must be a problem in allowing a sum over an infinite number of bound states.
To see that only a finite number of Rydberg states is a realistic approximation, note that the effective size of the neutral hydrogen atom in level n scales as $r_n \sim n^2 a_0$, where $a_0$ is the Bohr radius $\sim 0.5 \times 10^{-8}$ cm. Thus, high-level Rydberg atoms are very large, and perturbations from nearby material may truncate the existence of such states. For example, in the solar atmosphere, the neutral hydrogen density might be $10^{18}\ {\rm cm}^{-3}$, so that hydrogen atoms are spaced $10^{-6}$ cm apart on average. If we set the largest $r_n$ at $10^{-6}$ cm, we get a maximum value of $n \sim 14$. (There are more sophisticated ways to find $n_{\rm max}$, but they do not affect our general conclusion.) If the upper limit $\infty$ in the definition of $Z_i$ is replaced by 14, then the sum is dominated by the first term for values of $T \sim 10^4$ K. In this case, $Z_i$ is only slightly larger than 2, and most neutral hydrogen atoms are in the ground state n = 1.
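A short Python check of this truncation argument (the temperature and cutoff follow the numbers in the paragraph above; the larger cutoff is included only to illustrate the growth of the untruncated sum):

    import numpy as np

    k_eV = 8.617e-5   # Boltzmann constant [eV/K]
    I_H  = 13.6       # ionization energy of hydrogen [eV]

    def Z_hydrogen(T, n_max):
        """Truncated partition function sum_{n=1}^{n_max} 2 n^2 exp(-E_n / kT),
        with E_n = I_H (1 - 1/n^2) measured from the ground state."""
        n = np.arange(1, n_max + 1)
        E_n = I_H * (1.0 - 1.0 / n**2)
        return np.sum(2.0 * n**2 * np.exp(-E_n / (k_eV * T)))

    print(Z_hydrogen(1.0e4, 14))     # ~2.0: the n = 1 term dominates
    print(Z_hydrogen(1.0e4, 1000))   # much larger; the sum grows without bound as n_max increases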
Those of you who do radio astronomy will know that a famous radio-recombination line in emission from H II regions is H109α, arising in a radiative transition (recombination cascade) from n = 110 to n = 109 in the hydrogen atom. A hydrogen atom in n = 110 has a radius $r_n \sim 6 \times 10^{-5}$ cm, i.e., the interstellar medium contains atoms the size of a large bacterium! Atoms are not always microscopically small because interstellar space is very empty.