
How to Understand and Calculate the
Equilibrium Distribution of
Everything
[Note: The material below is required reading. Read it, then sec. 8.2, pp. 227 to 237 of the
handout from Shaw’s book Astrochemistry that included Chapters 8 and 9. You will then be
ready to handle lots of topics that you will encounter in and out of this course should you study
further in this direction. Then read Lunine (your textbook), Chapter 7, sections 7.2-7.4 (please,
look at Figure 7.7--anyone who can find the Morowitz paper from which he took it will earn my
eternal gratitude!). You will then be ready to read: 1. The rest of chapter 8, and chapter 9 in
Shaw, which is very good and from a different perspective than Lunine’s book; 2. The “big
reading”: Chapters 8 and 9 in Lunine, pp. 243-304. (In chapter 9, section 9.2 can be skipped
unless you are interested)
Do not bother beginning Shaw or Lunine unless you have read most of the material I have sent
you concerning biomolecules (readings from Cooper, collections on proteins, etc.). The
corresponding material on biomolecules in Lunine is Chapter 4, sections 4.2 (103-115), 4.3 (116-119), and 4.8 (the genetic code, 133-137). ]
What do the following have in common? [Answer: They are all
the same fundamental physical question and all have the
same generic yet specific, quantitative answer.]
• The fractional ionization of different elements in a stellar atmosphere or interstellar cloud, or the pH level in an aqueous solution.
• The distribution of populations of excited states of atoms or molecules or many other things that can have a variety of energies (and they don't even have to be discrete energies).
• The fraction of polymer chains as a function of their rms length.
• The fraction of polymer chains that are in a helix or coil conformational state?
• The abundances of various molecules in a planetary atmosphere (e.g. the ozone abundance in the Earth's atmosphere as a function of altitude) or a biological cell.
• The fraction of a gas that undergoes a phase transition to a liquid state, forming droplets and possibly rain. Example: Should it rain on Titan, and if so, what will the oceans or lakes be composed of?
• The fraction of cyclohexane molecules in the "chair" conformational state at room temperature? The fraction of proteins or RNAs that should be found in a particular folding state?
• The fraction of DNA "tautomers" in the enol rather than keto state--consider that one is highly mutagenic!
• Generalization of the above to include any number of chemical species (e.g. a thousand different molecules) in any or all forms (gas, solid, liquid, ionized atoms, ...), in any or all conformational states (folded, denatured ("melted"), ...), or having any attribute whatsoever.
As mentioned above, what they have in common is that they
are all the same fundamental physical question and all have
the same generic yet specific, quantitative answer. Once
this is appreciated, all these situations and more can be
understood, many or most textbooks and papers on a wide
range of topics can be understood, and you can even carry
out your own calculations for these problems (if you know
where to look up the crucial “dissociation constants” or
“free energies” or “reaction rates”--but this is a matter
of knowing where to look, not of understanding anything
new).
You have probably encountered many of these seemingly
different ideas before, in the guise of the Saha equation,
the Boltzmann distribution, the Clausius-Clapeyron equation,
the “law of mass action” and more.
These are all answers
to the above questions that assume equilibrium between
forward and reverse rates, perhaps called “chemical
equilibrium” or “phase equilibrium” or “ionization
equilibrium.” They are almost trivial to derive, were it
not for a single deep assumption, and their derivation
shows you, without any more trouble, how to deal with the
more general nonequilibrium case.
The equilibrium case is the only case I have seen in the cell biology or even organic chemistry literature;
maybe it is so well-established that nonequilibrium is unimportant, that no one even mentions it. Pointing
in this direction is the fact that the densities in laboratory (or biological) contexts are large (compare the
number density of molecules in water with the number density in a protoplanetary disk or in the Earth's
atmosphere, and then square the ratio, because it is binary collisions that are usually important). However
I also have the distinct impression that a thermodynamic (read: equilibrium) description of biological
systems is so entrenched that it is not even considered whether it is a good approximation, similar to the
situation in planetary atmospheres and protoplanetary disks until about 10-20 years ago. See if you find a
discussion of this in Ch.7 of Lunine. However we will assume that in the biological case equilibrium is a
good assumption and proceed, so that you can see how many problems are at your fingertips once you can
get past the “jargon barrier.”
You do not have to take courses or read books or even chapters of books on thermodynamics or statistical
mechanics or kinetic theory to understand any of these things. However you do have to read your textbook,
Lunine, Chapter 7, beginning with sec. 7.3, to see how this material is usually presented; we purposely
avoid the usual discussions of whether biological or other systems violate the second law of thermodynamics,
in favor of simply stating that thermodynamics is not fundamentally deep, except for its basic assumptions,
whose origin is left unexamined. A more traditional and "deep" view of this, in the tradition of
Schrödinger's famous and thought-provoking book "What Is Life?", is in sec. 7.5 of Lunine. We will make
no progress in understanding the subject until we completely demystify it.
In addition, if you read sec. 8.2, pp. 227 to 237 of the handout that included Chapters 8 and 9 from Shaw’s
book Astrochemistry, you will understand all you need to know. (Worse, if you don’t read it you will not
be able to read the rest of the handout or Lunine’s book, or do the upcoming homework assignment,
or pass the exam that counts for a significant fraction of your grade in this class!)
The point here is to eliminate all the pretense at rigor and the impression that there are different
ideas involved for different systems, and use this opportunity to see how, from an interdisciplinary
perspective, it is possible to learn about ten things at the same time. Having said that, here is what
you need to know.
Equilibrium as a Balance of Forward
and Reverse Rates
Consider a bimolecular (elementary) reaction that involves
the dimerization (making one molecule from two) of species A:

A + A → D    (1)
Now consider that at the temperature under which the
reaction is imagined to take place, dissociation of the
dimer, D, occurs via a unimolecular decay
D  A + A
(2)
This “decay” is usually not because the state D is
unstable, although it could be. It is usually a reversion
to A + A due to thermal fluctuations, or the absorption of
some high-energy photon or particle that breaks a bond in
D. Only for excited states in atoms and molecules do we
think of radiative deexcitation as a “spontaneous” event.
In general the rate of forming D will not be the same as
the rate of dissociating D -- for example say you begin
with all A. This is obviously nonequilibrium if it is
energetically possible, or even favorable, for A + A to
combine into D, and there will be a flow of the reactants
from A to D. However, as soon as you have formed some D,
this will be balanced by a backward flow from D to A due to
the dissociation of D. As long as you keep the physical
conditions (T, P, ...) constant for long enough for it to
occur, the flow of reactants will change until the rates at
which D and A are being formed are equal, and there is no
net flow either way.
This blissful state of affairs is called chemical
equilibrium, but the idea is much more general than
equilibrium between abundances of molecules. It can apply
to a change in phase, say from water to vapor, in which
case “D” represents the vapor and “A + A” the liquid. If
you start with all liquid there will be some evaporation or
vaporization until the vapor (gas) density of water vapor
is such that the condensation of vapor back to water
balances the rate at which vapor is entering the gas phase
from the liquid. Think about the reverse case of starting
with all vapor. The density of water vapor at which this
to-and-from balance between the phases occurs is usually
expressed as a pressure, because it is easy to think of it
as a vapor pressure keeping too much liquid from evaporating. So the
corresponding vapor (gas) pressure P = nkT is called the
“equilibrium vapor pressure.” It is equilibrium because the
rate of evaporation and condensation are in balance, and it
is vapor because it refers to how dense the gas phase must
be (or how high the partial pressure of H2O must be) to
effect this balance.
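As a concrete sketch of how that equilibrium vapor pressure depends on temperature, here is a minimal calculation using the integrated Clausius-Clapeyron relation (which appears again below), under the simplifying assumption that the latent heat of vaporization is constant; the numbers for water (L ≈ 40.7 kJ/mol, anchored at the normal boiling point) are approximate and purely illustrative.

```python
import math

def vapor_pressure(T, L=40.7e3, T_ref=373.15, P_ref=101325.0):
    """Integrated Clausius-Clapeyron relation, assuming the latent
    heat of vaporization L (J/mol) is constant over the range.
    T_ref, P_ref anchor the curve at water's normal boiling point."""
    R = 8.314  # gas constant, J/(mol K)
    return P_ref * math.exp(-(L / R) * (1.0 / T - 1.0 / T_ref))

# Equilibrium vapor pressure of water near room temperature comes
# out at a few thousand Pa, a few percent of atmospheric pressure.
print(vapor_pressure(300.0))
```

The steep exponential dependence on 1/T is the same Boltzmann-factor behavior discussed throughout this writeup.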
It could also apply to two folding states of a protein (say
the native state and the denatured state--the "melting
temperature" corresponds to the "excitation energy" of the
two states), or almost any other states that are
energetically separated (and not just discretely!) and
satisfy certain other detailed conditions that we are
skipping. In fact the approach is applied to systems far
from anything you might think it is applicable to, from
economic systems to self-gravitating systems of stars and
galaxies. That is why the topic is so important.1
Tempted as I am to go further in this direction, we return
to A + A ⇌ D.
The approach we are using, which more rigorously would be called
statistical mechanics, has been applied to a huge number of
systems, including systems of interacting autonomous agents (an
early version of this was "SugarScape"), artificial neural
networks, and vehicular traffic states (e.g. traffic jam vs. no
jams phase transition). But there are certainly limits,
including those that we will encounter in biochemistry! For
example, if it requires less energy to be still than move, why
can’t we use this approach to calculate the fraction of organisms
that are moving with a certain energy above their “ground state”
of motionlessness? The reason is not so obvious--what important
conditions are not satisfied by what you might imagine to be
“free will”? One more thing: Do not think that this idea of
equilibrium between forward and reverse rates is the same as the
idea of “detailed balance” in physics, which is a much more
delicate and fundamental idea that actually is part of the
problem of time reversibility. Astrophysicists often use
“detailed balance” to mean “steady state rate equations” for a
system of chemical or nuclear reactions, and that is probably the
most beneficial way for you to think of it--if you care, you
should consider whether that detailed balance is the same as the
equilibria related only through some “equilibrium constants” as
we are discussing here.
Generally one side of the reaction or phase change
will be energetically favorable, by some energy value that
we’ll call E. In that case the rate toward the
energetically-favored side of the reaction will be favored
by a Boltzmann factor exp(E/kT). The more common way of
saying this is that the fraction of systems that will be
found in the higher-energy state will be exp(-E/kT) < 1,
depending on the ambient average thermal energy. One way
to think of this is that even though the average thermal
energy may be well below E, there are always thermal
fluctuations occurring, in which particles may be moving
faster or slower than average, and the exponential term
represents these fluctuations--if you assume the velocity
probability distribution is a Gaussian [exp(-v^2/v0^2)]
(which makes the speed distribution a Maxwellian), then you
can probably see that the distribution of energies (squares
of velocities) will be an exponential. This is part of why dealing with
situations in which the velocity distribution is not
Maxwellian is avoided at all costs. (An example is the
exosphere of a planet, by definition where the mean free
path is comparable to or larger than the characteristic
length scale, here the scale height of the upper
atmosphere.)
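To see how strongly the Boltzmann factor suppresses the higher-energy state, here is a minimal two-state sketch; the energies and degeneracies are illustrative choices, not values for any particular system.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def excited_fraction(E, T, g0=1, g1=1):
    """Fraction of a two-state system found in the state lying an
    energy E (eV) above the ground state, at temperature T (K).
    g0, g1 are the statistical weights of the two states."""
    boltz = g1 * math.exp(-E / (K_B_EV * T))
    return boltz / (g0 + boltz)

# A covalent-bond-scale energy difference (~1 eV) at room temperature:
# essentially nothing is in the upper state.
print(excited_fraction(1.0, 300.0))
# A noncovalent-scale difference (~0.05 eV): a substantial minority.
print(excited_fraction(0.05, 300.0))
```

The contrast between the two calls is the quantitative content of the covalent-vs-noncovalent distinction made later in this writeup.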
The idea is extremely general, and you probably have
encountered it in some form or another.
• The population of atomic levels due to collisions (or photoexcitations) balanced by de-excitations: "Boltzmann populations" if equilibrium is assumed for the transitions.
• The balance between how many atoms or molecules are in various "phases" of ionization, which in equilibrium is given by the Saha equation, with the exponential representing the ionization energy and the rest of the formula representing the rate of collisions and how many states are available (through the "partition function").
• For molecular association-dissociation the equation is usually called the "law of mass action," and there would be one of these "laws" for each species of molecule in the system. If it is a phase change involving, say, solid-to-vapor and back equilibrium (sublimation), it is usually called the Clausius-Clapeyron equation. The amount of energy advantage of one side over the other is more precisely the Gibbs free energy, always denoted ΔG. I insert the table to the right to remind you of the case of simple atom-atom reactions forming covalent bonds. But the interactions could be van der Waals interactions between the fluctuating dipole moments of large assemblies of molecules, or anything.
If the physical conditions are changing too rapidly,
or there just hasn’t been enough time to reach
equilibrium, you can't just equate the back-and-forth
rates, but instead need to solve the differential equations
representing the time rate of change of each species
involved.
The dimerization has a rate given by
-(1/2) d[A]/dt = k2 [A]^2,    (3)
where the square on the RHS just signifies that D is being
formed by binary collisions of A, so the collision rate
must be proportional to the square of the concentration of
A. If it were a three-body collision A + A + A → D, or 3A → D,
then we’d use [A]3 on the RHS. This simple idea will turn
out to be important in understanding the more general form
of the equation we are deriving (the law of mass action).
The dissociation reaction has a rate
(1/2) d[A]/dt = k1 [D].    (4)
What would make the dimer D (think O2) dissociate back to O
+ O? In the case we have in mind, it is collisions with
atoms and molecules in the ambient medium which have a
velocity high enough so that the relative kinetic energy of
collision exceeds the binding energy Ediss of the dimer,
i.e. this is thermal dissociation balancing the tendency of
the O atoms to all combine into the lower-energy state of
O2.
At equilibrium, the concentration of A is some constant
([A] = [A]e), so we may set the forward and reverse rates
equal:

k2 [A]e^2 = k1 [D]e.    (5)
Notice that if we didn’t assume equilibrium we would have
to solve the differential “rate equation” for [A], and
write and solve a similar equation for [D].
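A minimal sketch of that nonequilibrium procedure for A + A ⇌ D, using simple Euler integration of the rate equation; the rate constants and initial condition (k1 = k2 = 1, all monomer at the start) are made up for illustration, not measured values.

```python
# Hypothetical rate constants and initial condition, chosen only to
# illustrate relaxation toward equilibrium for A + A <=> D.
k2 = 1.0    # dimerization rate constant
k1 = 1.0    # dissociation rate constant
A  = 1.0    # start with all monomer, [A](0) = 1
dt = 1.0e-3

for _ in range(50_000):
    D = (1.0 - A) / 2.0          # conservation: [A] + 2[D] = [A](0)
    dA_dt = -2.0 * k2 * A * A + 2.0 * k1 * D
    A += dA_dt * dt

D = (1.0 - A) / 2.0
# At equilibrium k2 [A]^2 = k1 [D], so [D]/[A]^2 relaxes to k2/k1.
print(A, D, D / A**2)
```

Notice that the equilibrium ratio the integration settles into is fixed entirely by k2/k1; holding the conditions constant long enough recovers the algebraic answer without ever needing the rates individually.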
So we may solve for the ratio of equilibrium
concentrations:

[D]e / [A]e^2 = k2/k1 ≡ Keq,    (6)

where Keq is called the "equilibrium constant."
We are an
exponential away from understanding a whole set of
seemingly different “laws” and equations, which are all the
same.
The Reaction Rates
What do we take for the reaction rates k1 and k2 ? In
general they have to be measured in the laboratory or (with
more effort than we care to think about), calculated
theoretically and tabulated somewhere for us to use. (It
is like oscillator strengths for transitions between atomic
levels, or nuclear reaction rates; it is too embarrassing
to bring up often, but we don’t know how to calculate these
accurately, so we can only use empirical values.)
We do know the general form the reaction rates will take:
If one side of the reaction is a lower energy state than
the other, then it is favored by some energy E, and the
rate driving the reaction toward the favored state will
involve a factor exp(E/kT). Why? This is where we have
“hit the wall” of conceptual depth, where it is not at all
easy to derive or explain this except for somewhat vacuous
appeals to infinite heat baths or combinatorial explosions,
the explanations given in courses in thermodynamics and
statistical mechanics. Below I will claim to relate it to
the Gaussian distribution of velocities of atoms and
molecules. Later I will present the argument that any two
global, discrete states composed of Gibbs microstates (here
comes the jargon) that have Boltzmann statistics will
themselves be related by a Boltzmann factor. So we just
accept it for now.
(Once we assume equilibrium, which we have not done yet, we
should refer to the energy difference as the Gibbs free
energy G; see Lunine ch. 7.3 for a discussion. But that
is a detail at this point.)
For a brief discussion of reaction rates k1 and k2 that
appear to enter here, see the Appendix tacked onto this
writeup. You will soon see that for equilibrium
calculations we don’t need to know these rates.
This is a simple example for a reaction whose forward and
reverse reactions are both elementary, but the analysis
actually holds for any reversible reaction, and this
includes phase transitions such as sublimation,
evaporation, etc.
The Definition of the Equilibrium
Constant

In general, the equilibrium constant is equal to the
"proper quotient of equilibrium concentrations," i.e. for a
general reaction
νA A + νB B ⇌ νC C + νD D    (7)
The equilibrium concentrations of reactants and products are
related to the equilibrium constant as

Keq = [C]e^νC [D]e^νD / ([A]e^νA [B]e^νB).    (8)
Note if i is a generalized stoichiometric coefficient, the
reaction equation may be written
i i I = 0
(9)
and the reaction rate may be calculated from any component
as
(10)
and the equilibrium expression is
(11)
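The general mass-action expression is easy to evaluate mechanically once the signed stoichiometric coefficients are chosen. A minimal sketch (the sign convention, products positive and reactants negative, follows the convention just stated):

```python
def equilibrium_constant(species):
    """Evaluate the proper quotient of equilibrium concentrations,
    K_eq = prod_i [I_i]^nu_i, with nu_i > 0 for products and
    nu_i < 0 for reactants.
    `species` maps name -> (nu_i, equilibrium concentration)."""
    K = 1.0
    for nu, conc in species.values():
        K *= conc ** nu
    return K

# A + A <=> D with [A]e = 0.5 and [D]e = 0.25:
# K_eq = [D]/[A]^2 = 0.25 / 0.25 = 1.0
print(equilibrium_constant({"A": (-2, 0.5), "D": (1, 0.25)}))
```

With a thousand species there would simply be one such quotient per reaction, which is exactly why the equilibrium problem reduces to algebra rather than differential equations.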
Physical Discussion and
Applications
For the simple case of A + A ⇌ D discussed above,
which might represent the balance of association of atoms
and the thermal dissociation of molecules bound by some
energy E, the probability of having such an energetic
collision is related to the fraction of ambient atoms of
molecules moving faster than some critical speed, so since
that velocity distribution is a Maxwellian in all cases of
interest (another sort of equilibrium that is usually
assumed without explanation; see Spitzer’s ISM book for an
attempt to show the reader what is going on), you will get
an exponential factor of the form exp(-(-E/kT), expressing
the fact that the reaction above energetically favors the
molecule D by an amount E, but a thermal fluctutation that
is large enough can “push” the reaction back to the other
side, and if the temperature is very large (>> kT), then
you could end up with virtually no molecule D in
equilibrium.
This is why there are no observable molecular bands in the
spectra of hot stars, and why we need planetary surfaces to
get molecules that associate with E’s that are typically ~
1eV, a planet allowing the covalent bonds great security
while providing a fluctuating environment for the operation
of the noncovalent interactions with smaller energies more
like room temperature ~ 0.1 eV.
Similar reasoning applies to the ionization and excitation
equilibria that control the sequence of stellar spectral
types (OBAFGKM), and, when applied to solid-gas phase
transitions, it gives solids or liquids precipitating out
of the gas only for the lowest-temperature stars (e.g.
brown dwarfs, where dust formation is crucial because of
the opacity it contributes), or for any planet, where these
solids or liquids form clouds in which the particles or
droplets may grow and fall to the ground under gravity.
This is why it rains on the Earth (think about the
evaporation from the ocean and the condensation into
droplets), and maybe on Titan. We return to this below, as
well as the application to biochemistry.
It is very important that you see the generality of
these ideas, because once you understand it, you can apply
it to anything, including seemingly complex and
unfathomable chemical reactions occurring in biological
environments. The only “deep” idea (and it is) that you
have to accept on faith is the "Boltzmann factor" exp(-E/kT), which is much more difficult to justify rigorously
than usually presented, and the assumption of an
equilibrium between the forward and backward rates for all
the reactions involved. The latter is something whose
validity we can test, and use a different approach if it
doesn’t hold.
Unfortunately, such equilibrium does not generally occur in
planetary atmospheres, where photochemistry and reactions
with rates that have very different timescales for the
approach to equilibrium force you to solve a large number
of rate equations (differential equations) instead of a
large number of “laws of mass action” (algebraic equations;
solve by inverting a matrix, much cheaper than solving
differential equations) for the abundances of all the
species. In practice you can assume equilibrium for the
fastest reactions and only solve the differential equations
for the slow ones, but it is something of an art to learn
to do it correctly.
These considerations apply to the molecular abundances
themselves (e.g. the fractional concentration of NH3, HCN,
as a function of depth on Jupiter), or the partition
between gas and solid (or liquid) phases (e.g. the
concentration of sulfuric acid or other droplets in a
planet’s atmosphere, or of liquid droplets in the Earth’s
atmosphere). That the nonequilibrium case is more
complicated than just solving rate equations is
demonstrated by the fact that rain cannot be predicted with
any accuracy--in reality you don’t just calculate the
abundance of solids overall, but need to know how many
droplets there are of each size. This is just the problem
we discussed in detail for protoplanetary disks using the
“coagulation” or “Smoluchowski” equation, which is the same
equation that has to be solved for all the other systems
under consideration, if you want to know about the growth
of grains or droplets.
The “old school” for planetary atmospheres that used
equilibrium chemistry for quantitative results is
represented by the beautiful and highly recommended book by
John Lewis, where all planets and moons and everything else
are treated as chemical equilibrium systems. You can learn
a lot about what to expect in different environments using
equilibrium as a starter, but don’t expect quantitative
results to be trustworthy. Severe nonequilibrium can give
you very different results.
Equilibrium is also usually assumed in the most current
calculations of brown dwarf atmospheres, spectra,
classification, etc., (see Burrows, Tsuji, ...) even though
it is understood that this is probably a poor approximation
and the full rate equations must be solved. (As far as I
know, only Woitke and Henning (2002-2007) actually solve the
full problem, including the hydrodynamics!)
The same is true in protoplanetary disks, where the
compositions of the solids, as well as the molecular
concentrations, were taken as equilibrium concentrations
until about ten or so years ago when it was realized that
the densities are just too low for this assumption to be
true except maybe in the innermost disk, where densities
and temperatures (molecular speeds) are large. See papers
by Fegley and references/citations for equilibrium
calculations, Aikawa, Willacy,... and references/citations
for kinetic calculations.
Notice that the density enters crucially into the question
of equilibrium, because the rate of reactions goes as the
square of the density (usually), and that rate is what is
determining if the timescales for reactions are small
enough for equilibrium to be assumed. This is why in
stellar atmosphere courses or ISM courses you are told that
“local thermodynamic equilibrium” will only hold at large
enough densities, so that states are populated by
collisions. It is too bad that this concept is not usually
presented in a way that students can actually understand
except in a formal kind of way.
Appendix A: What if you need reaction rates?
UMIST (ISM, planetary atmospheres) website, NIST website
and links (http://webbook.nist.gov/), JANAF tables (not
available online?), book by Y. Yung, Photochemistry of
Planetary Atmospheres, X
Appendix B: What if you want to see more
about how to use free energies and mass
action equations to solve all sorts of
problems?
Almost all physical and organic chemistry books use this
approach, and an introductory chemistry book is the best
place to start. I recommend Silberberg (a beautiful and
comprehensive book--if you can find cheap online, get it!),
Atkins and Jones, ... I will be sending applications taken
from cell biology books. In biochemistry McKee and McKee,
Biochemistry, or Lehninger, Biochemistry, early chapters,
are the place to go. If you want a particularly good
description and derivation of this topic and just about
everything else (e.g. the only start-from-scratch
noncovalent interaction derivations I have seen in an
introductory text, a complete but understandable
presentation of molecular bond formation at the quantum
molecular orbital level), the chemistry book to have is
Atkins, Physical Chemistry. (Excerpt on molecular
interactions handed out already.) Not only is this
approach applied to chemical reactions and phase
transitions in the usual sense, but also to polymer helix-coil transitions, biomembrane structure, and more.
If you
intend to follow up toward the biological side, you would
want Atkins and de Paula, Physical Chemistry for the Life
Sciences. Atkins has a number of books spanning all levels
from non-science major (beautiful book, and not so
elementary) to quantum chemistry. He is the “textbook
king” in Chemistry.
For biological applications including polymer physics:
Sneppen and Zocchi, Physics in Molecular Biology; Grosberg
and Khokhlov, Giant Molecules (I found it at ½ Price Books
for $4).
If you are serious about this: Daune, M. Molecular
Biophysics: Structures in Motion; M. B. Jackson, Molecular
and Cellular Biophysics.