BOSE-EINSTEIN CONDENSATION IN TWO-DIMENSIONAL TRAPS
A Dissertation Presented
by
JUAN PABLO FERNÁNDEZ
Submitted to the Graduate School of the
University of Massachusetts Amherst in partial fulfillment
of the requirements for the degree of
DOCTOR OF PHILOSOPHY
February 2004
Department of Physics
© Copyright by Juan Pablo Fernández 2004
All Rights Reserved
BOSE-EINSTEIN CONDENSATION IN TWO-DIMENSIONAL TRAPS
A Dissertation Presented
by
JUAN PABLO FERNÁNDEZ
Approved as to style and content by:
William J. Mullin, Chair
Suzan Edwards, Member
Robert B. Hallock, Member
Jonathan L. Machta, Member
Jonathan L. Machta, Department Chair
Department of Physics
To the memory of Gonzalo Cortés González,
César Pretol Castillo, and Jacob Ketchakeu
ACKNOWLEDGMENTS
In my very first semester at UMass I had the pleasure of attending Bill Mullin’s lectures on classical
mechanics, and from the beginning I noticed his excellence as a teacher, his willingness to never stop
learning, his kindness, and his modesty. That first impression was only strengthened during the
five years or so that we worked together. I will forever be grateful to Bill for always putting aside
whatever he was doing and listening to anything I had to say, for convincing me that my work was
valuable, and for teaching me a great deal of physics.
Having three outstanding scientists on one’s dissertation committee is an honor, but also, given
their example and their standards, an enormous challenge. My fear of disappointing Suzan Edwards,
Bob Hallock, and Jon Machta forced me to do my best, and for that alone they would deserve my
thanks. My debt to each of them, however, goes far beyond: Suzan introduced me to Matlab, the
programming language that I used to obtain the great majority of the results that I present here,
and Jon lent me the computers in which I ran those programs and wrote this thesis; Bob lent me
his support during difficult periods and gave me much valuable advice in his office across the hall.
David Hall made it possible for me to at last observe a condensate and witness the phenomenon
that I had been thinking about for so many years; besides, he generously provided me with the
very first figure of the thesis. Eugene Zaremba and Markus Holzmann kindly let me reproduce
figures from their published papers (Prof. Zaremba’s figure, regrettably, does not appear in this final
version, but can still be admired in Ref. 91); Prof. Holzmann, moreover, lent me the program he
wrote for Ref. 69, and by doing so helped me get seriously started on my research. I also benefited
greatly from conversations and correspondence with Panayotis Kevrekidis and Brandon van Zyl.
Werner Krauth wrote the Monte Carlo code that, with some adjustments, produced the results
that I report in the last chapter. Stefan Heinrichs taught me how to run, and extract information
from, Prof. Krauth’s program. Justin Herrmann and Tim Middelkoop allowed me to run the simulations in their computer cluster. The Department of Astronomy at UMass granted me access to
its computational facilities during the initial stages of this work.
Finally, I would like to thank my family, friends, and colleagues, in Amherst, in New Hampshire,
in Vermont, and in Colombia, for their warmth, their companionship, and their sense of humor.
ABSTRACT
BOSE-EINSTEIN CONDENSATION IN TWO-DIMENSIONAL TRAPS
FEBRUARY 2004
JUAN PABLO FERNÁNDEZ
Físico, UNIVERSIDAD DE LOS ANDES
Ph.D., UNIVERSITY OF MASSACHUSETTS AMHERST
Directed by: Professor William J. Mullin
The fact that two-dimensional interacting trapped systems do not undergo Bose-Einstein condensation (BEC) in the thermodynamic limit, though rigorously proved, is somewhat mysterious because
all relevant limiting cases (zero temperature, small atom numbers, noninteracting particles) suggest
otherwise. We study the possibility of condensation in finite trapped two-dimensional systems.
We first consider the ideal gas, which incorporates the inhomogeneity and finite size of experimental systems and can be solved exactly. A semiclassical self-consistent approximation gives us
a feel for the temperature scales; diagonalization of the one-body density matrix confirms that the
condensation is into a single state. We squeeze a three-dimensional system and see how it crosses
over into two dimensions.
Mean-field theory, our main tool for the study of interacting systems, prescribes coupled equations for the condensate and the thermal cloud: the condensate receives a full quantum-mechanical
treatment, while the noncondensate is described by different schemes of varying sophistication. We
analyze the T = 0 case and its approach to the thermodynamic limit, finding a criterion for the
dimensionality crossover and obtaining the coupling constant of the two-dimensional system that
results from squeezing a three-dimensional trap.
We next apply a semiclassical Hartree-Fock approximation to purely two-dimensional finite gases
and find that they can be described either with or without a condensate; this continues to be true
in the thermodynamic limit. The condensed solutions have a lower free energy at all temperatures
but neglect the presence of phonons within the system and cease to exist when we allow for this
possibility. The uncondensed solutions, in turn, are valid under a more rigorous scheme but have
consistency problems of their own.
Path-integral Monte Carlo simulations provide an essentially exact description of finite interacting
gases and enable us to study highly anisotropic systems at finite temperature. We find that our
two-dimensional Hartree-Fock solutions accurately mimic the surface density profile and predict
the condensate fraction of these systems; the equivalent interaction parameter is smaller than that
dictated by the T = 0 analysis.
We conclude that, in two-dimensional isotropic finite trapped systems and in highly compressed
three-dimensional gases, there is a phenomenon resembling a condensation into a single state.
TABLE OF CONTENTS
Page
ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
CHAPTER
1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The creation myth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 BEC, two dimensions, and traps . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 This thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2. THE IDEAL TRAPPED BOSE GAS . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Exact results for the ideal gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 The semiclassical approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4 The off-diagonal elements of the density matrix . . . . . . . . . . . . . . . . . . . 26
2.5 Effects of anisotropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3. MEAN-FIELD THEORY OF INTERACTING SYSTEMS . . . . . . . . . . . . . . 36
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2 The effective interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3 The Hartree-Fock-Bogoliubov equations . . . . . . . . . . . . . . . . . . . . . . . 38
3.4 The Gross-Pitaevskiĭ equation and the Thomas-Fermi limit . . . . . . . . . . . . . 42
3.5 Anisotropic systems at zero temperature . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 Finite temperatures: The semiclassical approximation . . . . . . . . . . . . . . . . 49
3.7 The interacting Bose gas in the Hartree-Fock approximation . . . . . . . . . . . . 51
3.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4. THE TWO-DIMENSIONAL BOSE-EINSTEIN CONDENSATE . . . . . . . . . . 62
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
    4.2.1 The Hartree-Fock-Bogoliubov equations . . . . . . . . . . . . . . . . . . . 66
    4.2.2 The thermodynamic limit . . . . . . . . . . . . . . . . . . . . . . . . . . 68
    4.2.3 The free energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Numerical methods and results . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5 Appendix: The Hartree-Fock excess energy . . . . . . . . . . . . . . . . . . . . . 76
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5. PATH-INTEGRAL MONTE CARLO AND THE SQUEEZED INTERACTING BOSE GAS . . . 78
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Path integrals in statistical mechanics . . . . . . . . . . . . . . . . . . . . . . . . 80
5.3 The Monte Carlo method and the Metropolis algorithm . . . . . . . . . . . . . . . 82
5.4 An algorithm for PIMC simulations of trapped bosons . . . . . . . . . . . . . . . . 89
5.5 The anisotropic interacting gas at finite temperature . . . . . . . . . . . . . . . . 97
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

6. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
APPENDICES
A. MATHEMATICAL VADEMECUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
B. PERMUTATION CYCLES AND WICK’S THEOREM . . . . . . . . . . . . . . . . . . 118
C. SPECTRAL DIFFERENTIATION AND GAUSSIAN QUADRATURE . . . . . . . . . . 123
D. MATHEMATICAL DETAILS OF PIMC SIMULATIONS . . . . . . . . . . . . . . . . 128
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
LIST OF TABLES
Table
Page
1.1 Summary of methods used and systems studied in this thesis . . . . . . . . . . . . 11
3.1 Typical temperatures and frequencies found in experiments . . . . . . . . . . . . . 49
LIST OF FIGURES
Figure
Page
1.1 Experimental realization of Bose-Einstein condensation . . . . . . . . . . . . . . . 2
2.1 Density and number density of a condensed Bose gas . . . . . . . . . . . . . . . . 19
2.2 The growing condensate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Front view of Fig. 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4 Condensate fraction of a two-dimensional ideal trapped Bose gas . . . . . . . . . . 21
2.5 Chemical potential of a two-dimensional ideal trapped Bose gas . . . . . . . . . . . 22
2.6 Position of the noncondensate maximum . . . . . . . . . . . . . . . . . . . . . . . 22
2.7 Condensate fraction of a two-dimensional ideal trapped Bose gas in the semiclassical approximation . . . 23
2.8 Density and number density of a condensed 2D gas in the semiclassical approximation . . . 25
2.9 Eigenvalues of the two-dimensional isotropic trapped-ideal-gas density matrix . . . . 28
2.10 The first seven radially symmetric eigenfunctions (including the ground state) of the two-dimensional isotropic harmonic oscillator . . . 30
2.11 Condensate fraction of a three-dimensional ideal gas in a trap of increasing anisotropy . . . 32
2.12 Surface density and surface number density of a three-dimensional ideal Bose gas in a trap of increasing anisotropy . . . 34
3.1 Wavefunctions of two-dimensional isotropic trapped Bose gases at zero temperature . . . 44
3.2 Typical forces exerted on an atom in a trap of increasing anisotropy . . . . . . . . 46
3.3 Condensate fraction of a three-dimensional Bose gas in the Hartree-Fock approximation . . . 54
3.4 Failure of the Hartree-Fock approximation at high temperatures . . . . . . . . . . 55
3.5 Chemical potential of a three-dimensional interacting Bose gas in the Hartree-Fock approximation . . . 56
3.6 Two views of the density of an interacting 3D trapped Bose gas at various temperatures . . . 57
3.7 Number density of an interacting 3D trapped Bose gas at various temperatures . . . 58
3.8 Densities and number densities of a two-dimensional isotropic ideal gas and the equivalent interacting system . . . 59
3.9 Eigenvalues of the one-body density matrix of ideal and interacting two-dimensional trapped gases . . . 59
3.10 Ground state and azimuthally symmetric excited eigenfunctions of a two-dimensional isotropic interacting trapped gas . . . 60
4.1 Condensate and noncondensate density profiles of a two-dimensional Bose-Einstein gas . . . 69
4.2 Total density profiles of a two-dimensional gas . . . . . . . . . . . . . . . . . . . 74
4.3 Free energy per particle for the condensed and uncondensed solutions . . . . . . . . 75
5.1 The density matrix as a path integral . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2 A simple game that illustrates the Metropolis algorithm . . . . . . . . . . . . . . . 86
5.3 A variation on the game of Fig. 5.2 that shows the effect of choosing better a priori probabilities . . . 90
5.4 Boxes and interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.5 The two steps involved in moving part of a cycle . . . . . . . . . . . . . . . . . . 94
5.6 Moving a complete cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.7 Interactions between particles of changing identity . . . . . . . . . . . . . . . . . . 94
5.8 Monte Carlo and exact number densities of an ideal two-dimensional trapped gas . . . 98
5.9 Extracting the condensate number from a PIMC simulation . . . . . . . . . . . . . 98
5.10 Surface number density of a condensed ideal Bose gas of varying anisotropy . . . . 101
5.11 Front view of the rightmost plot of Fig. 5.10 . . . . . . . . . . . . . . . . . . . . 101
5.12 Finding the condensate fraction of an ideal Bose gas of varying anisotropy . . . . . 102
5.13 Aspect ratios, obtained by Monte Carlo simulation, of condensed ideal and interacting Bose gases of increasing anisotropy . . . 102
5.14 Monte Carlo and mean-field number densities of a three-dimensional trapped gas . . 103
5.15 Monte Carlo number surface densities of a three-dimensional trapped gas of increasing anisotropy . . . 105
5.16 Monte Carlo number surface density and best-fit two-dimensional profile of an interacting three-dimensional Bose gas in a highly anisotropic trap . . . 105
5.17 Condensate fraction of a quasi-2D interacting Bose gas . . . . . . . . . . . . . . . 107
5.18 Occupation numbers and eigenfunctions of the one-body density matrix for a quasi-2D Bose gas . . . 107
D.1 The Lévy construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
CHAPTER 1
INTRODUCTION
We were experienced enough to know that when results in experimental
physics seem too good to be true, they almost always are!
—E. A. Cornell and C. E. Wieman [1]
1.1
The creation myth
It happened, appropriately enough, in the early morning. At 6:04:29 a.m. on 29 September 1995, at
W. Ketterle’s laboratory in Building 26 of the Massachusetts Institute of Technology (MIT), postdoc
N. J. van Druten scrawled “BEC” at the very bottom of a page of a lab notebook and followed it
with an exclamation point that was taller than the acronym, and almost as wide. “BEC” stood for
Bose-Einstein condensation, the topic of this thesis, and Ketterle’s group had just created the first
condensate of sodium atoms [2,3].
For the preceding five years, Ketterle and his collaborators had been repeating and perfecting a
procedure that is now followed all over the world: they would i) produce a vapor of alkali atoms;
ii) load it into a magneto-optical trap (MOT); iii) cool the vapor to microkelvin temperatures using
lasers that, when tuned to an appropriate transition frequency, exerted a strong damping force on the
atoms; iv) transfer the resulting “optical molasses” into a magnetic trap; v) cool the molasses down
to nanokelvin temperatures by applying a radiofrequency field that, by inducing spin-flip transitions
into untrapped states, encouraged the most energetic atoms to evaporate and let the remaining
ones collide until they achieved equilibrium at a lower energy; and vi) turn off the trap and use a
CCD camera to photograph the expansion of the liberated cloud and capture it on a screen.¹
¹ This oversimplified account of the experiments does no justice to the ingenuity and effort required
to bring atoms to the densities and temperatures necessary for BEC by harnessing their complexity.
The interested reader may turn to Ref. 4, which includes a complete collection of references on the
experimental aspects of BEC, or to the Nobel lectures just cited. Our wording above is similar to
that used in a semipopular article [5], according to which “the possibility of seeing the long-sought
Bose-Einstein condensation” had been brought “tantalizingly close” by recent developments. The
article appeared three days after the first condensate was created.
Figure 1.1. Experimental realization of Bose-Einstein condensation. Condensates like the one
shown here are routinely produced in D. S. Hall’s TOP trap at Amherst College. In Prof. Hall’s
words, “This particular condensate was released from our trap and allowed to expand for about
16 ms before we took the image, and probably contains somewhere around 300,000–400,000 atoms.
That image is 5 microns per pixel, and it came out of a cylindrically symmetric trap that was
90 Hz (axial) by 32 Hz (radial). The axial direction is up-down in the image—and is the direction
of greatest expansion upon release.”
Now Ketterle, van Druten, and the rest were seeing one of the pixellated, false-color images
that since that year have graced all sorts of science-related posters, calendars, and textbook covers:
out of a surface resembling a three-dimensional plot of a Gaussian there emerged a narrow, tall
peak located right at the center. Similar images taken at higher temperatures had shown only
the Gaussian, and lowering the temperature further yielded images where there was the peak and
little else. Moreover, the peak did not appear gradually, but, as hoped, emerged suddenly and
unambiguously at temperatures below a certain “critical” value Tc. The images obtained in the
experiments rendered the momentum distribution of the expanding cloud of atoms, or, since the
Fourier transform of a Gaussian is another Gaussian, they could also be interpreted as a depiction
of the density of atoms in the cloud. The sharp, narrow peak showed that a high fraction of the
sodium atoms had very low momentum, or, equivalently, that a macroscopic number of atoms had
accumulated at the center of the trap; both of these interpretations were consistent with BEC having
taken place.
Van Druten’s exclamation point was followed, in turn, by a much smaller, timid question mark
enclosed in parentheses: it was of course possible that a faulty apparatus, experimental noise,
stress, lack of sleep, and wishful thinking were conspiring to deceive them, but Ketterle and his
collaborators had at least two reasons to be confident that what appeared on their screen was
indeed a condensate. The first one was trivial: they had already been beaten to the finish line.
On 5 June of that year a group led by E. A. Cornell and C. E. Wieman of the Joint Institute for
Laboratory Astrophysics (JILA) in Colorado had created a similar, though much smaller, condensate
of 2000 rubidium atoms [6] in a time-orbiting-potential (TOP) trap. (R. Hulet and his collaborators
at Rice University in Texas had also created a condensate using lithium [7], but their evidence was
inconclusive until months later.)
The second reason was a real one, and was based on the fact that the trapping potentials were,
in all cases, anisotropic: while a “classical” thermal cloud obeys the equipartition theorem and
expands isotropically when freed from an anisotropic trap, the Bose-Einstein condensate, despite
its macroscopic size, is a fully quantum-mechanical entity that obeys the uncertainty principle and,
when released, should expand faster along the direction in which it had been confined more tightly
(see Fig. 1.1). This is just what happened at MIT and had happened before at JILA.
More than a year later, after Ketterle and his group had succeeded in imaging condensates nondestructively and observing them undergo the transition in situ, there came another breakthrough: the
group used a laser to gently cut a condensate in two, separated the pieces, put them back together,
and obtained a series of clean interference fringes [8] that showed that the atoms in the condensate
were all in the same quantum state, exactly as Einstein had predicted some 70 years before. There
remained no doubt that the Bose-Einstein condensate, until then a creature made out of ink, paper,
and mathematics, had finally been given physical form.
Some time before these events, but surely aware of their imminence, A. J. Leggett wrote a
paragraph that, in our view, goes a long way toward assessing their importance (the emphasis is
Leggett’s):
[T]he temperature, roughly 3 K, of the cosmic black-body radiation [ . . . ] is believed
to be the present temperature (in so far as one can be defined) of outer space, and
according to almost all cosmologies in current fashion the universe as a whole was never
cooler than this [ . . . ]. If this is correct, then when we cool bulk matter to temperatures
below the point marked we are in effect creating new physics which Nature herself has
never explored: indeed, if we exclude the possibility that on some planet of a distant star
alien life-forms are doing their own cryogenics, then the new phases of matter which we
create in the laboratory at low temperatures have never previously existed in the history
of the cosmos. [9]
What’s more, the JILA, MIT, and Rice experiments in a sense created a whole new field, removing
Bose-Einstein condensation from its niche somewhere between atomic physics and 4 He studies and
putting it right at the forefront as a general-interest discipline that, in turn, has breathed life
into many new subfields and opened the possibility of illuminating some of the dark corners still
left in more-established ones. A macroscopic manifestation of quantum mechanics akin to laser
action, superfluidity, and superconductivity, BEC had been known for decades to be at least partly
responsible for these other phenomena; since 1995 it became possible to extricate pure BEC behavior
from the other complications that characterize low-temperature systems and thus make further
progress toward their understanding.
Of great help in this endeavor is yet another astounding feature of the systems in which BEC was
finally created, namely their (in many respects) textbook-problem simplicity—for the most part, they
can be accurately described by ensembles of spinless point masses in perfect harmonic confinement
and affected by very simple interaction potentials—which makes them an excellent laboratory in
which to study the quantum-mechanical N-body problem using the methods that have been developed over time to treat other systems. Now, the particular characteristics of these systems—their
finite size, the Bose statistics they obey, and their inhomogeneity, among others—bring with them
a set of difficulties and subtleties that need to be dealt with.
This dissertation will use two standard many-body methods, mean-field theory and path-integral
Monte Carlo simulations, to approach and partially answer, in the context of these finite trapped
systems, a question almost as old as Einstein’s original prediction: can Bose-Einstein condensation
occur in two dimensions?
1.2
BEC, two dimensions, and traps
Predicted by Einstein [10,11] based on S. N. Bose’s [12] reinterpretation of Planck’s law of blackbody
radiation, Bose-Einstein condensation is a phase transition whereby, at the appropriate combination
of temperature and density, a system composed of bosons² will experience a sudden avalanche of
particles into its ground state, resulting in a macroscopic accumulation. This phase transition is a
consequence of the indistinguishability of the particles that compose the system, as can be seen from
the following simple model [13]: Suppose we have two balls that we can place in any of two boxes,
² Most textbooks and popular books and articles define bosons to be particles of integer spin.
However, bosons were originally defined in terms of their statistical behavior, of which BEC is the
fundamental characteristic at low temperatures. In fact, the spin-statistics theorem states that
particles with half-integer spin cannot be bosons and particles with integer spin cannot be fermions.
In any event, the integer-spin identification is adequate for our purposes, even though the “particles”
that we will consider are far from elementary. For example, ⁸⁷Rb, where BEC was first produced,
consists of 124 spin-½ particles.
each of which can fit any number of balls (this last condition distinguishes Bose-Einstein statistics
from Fermi-Dirac statistics). If we initially make the balls distinguishable by painting one of them
white, we have four distinct ways
\[
|\bullet\circ\|\;\;| \qquad |\circ\,\|\,\bullet| \qquad |\bullet\,\|\,\circ| \qquad |\;\;\|\bullet\circ|
\tag{1.1}
\]
of placing them in the boxes, of which two (or 50% of the total) have two balls in the same box. On
the other hand, if the balls are identical, we now have only three different ways
\[
|\bullet\bullet\|\;\;| \qquad |\bullet\,\|\,\bullet| \qquad |\;\;\|\bullet\bullet|
\tag{1.2}
\]
of placing them in the boxes, of which two (or 67% of the total) have more than one ball in the same
box. The indistinguishability of the balls has enhanced their chances of occupying the same box. If
we identify the boxes with energy states and the balls with atoms, it follows that, as the temperature
of the system (and hence its average energy) decreases, the atoms will tend to congregate in the
state of minimum energy.
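The counting argument above is easy to verify by brute-force enumeration. The sketch below (our own illustration; the function name `fraction_shared` is ours, not from the thesis) lists the configurations for distinguishable and indistinguishable balls and recovers the 50% and 67% figures:

```python
from itertools import product

# Distinguishable balls: each of the two labeled balls independently
# goes into box 0 or box 1, giving 2^2 = 4 configurations, Eq. (1.1).
distinguishable = list(product((0, 1), repeat=2))

# Indistinguishable balls: only the occupation numbers matter, so we
# keep one representative per multiset of box assignments, Eq. (1.2).
indistinguishable = sorted({tuple(sorted(cfg)) for cfg in distinguishable})

def fraction_shared(configs):
    """Fraction of configurations with both balls in the same box."""
    same = sum(1 for cfg in configs if cfg[0] == cfg[1])
    return same / len(configs)

print(len(distinguishable), fraction_shared(distinguishable))      # 4 0.5
print(len(indistinguishable), fraction_shared(indistinguishable))  # 3 0.6666666666666666
```

The same enumeration with more balls and boxes shows the bunching effect growing with particle number.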
As Bose and Einstein showed, this feature was implicit in the Bose-Einstein distribution function, a generalization of Planck’s blackbody distribution that prescribes the average number of
atoms, ⟨Nk⟩, that occupy a given energy state Ek in a system at temperature T:
\[
\langle N_k \rangle = \frac{1}{e^{\beta(E_k-\mu)}-1};
\tag{1.3}
\]
here we have introduced the inverse temperature, β ≡ 1/kB T, where kB is Boltzmann’s constant,
and the chemical potential µ, which ensures that the total number of particles N is, on average,
conserved:
\[
\langle N \rangle = \sum_k \langle N_k \rangle.
\tag{1.4}
\]
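As a quick numerical illustration (the parameter values are arbitrary choices of ours, not from the thesis), Eq. (1.3) can be evaluated directly; note how the occupation of the lowest state blows up as µ approaches E₀ from below, anticipating the condensation argument that follows:

```python
import math

def bose_einstein(E, mu, beta):
    """Average occupation <N_k> of a state of energy E, Eq. (1.3).
    Requires mu < E so that the occupation is positive."""
    return 1.0 / (math.exp(beta * (E - mu)) - 1.0)

beta = 1.0   # work in units where k_B * T = 1
E0 = 0.0     # ground-state energy
for mu in (-1.0, -0.1, -0.01, -0.001):
    print(mu, bose_einstein(E0, mu, beta))
# As mu -> E0 = 0 from below, the ground-state occupation diverges;
# mu = -0.001 already gives <N_0> of roughly 1000.
```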
Since the occupation numbers must all be positive, the chemical potential in (1.3) must be smaller
than the ground-state energy: µ < E0. For a three-dimensional ideal gas in a box, where E = p²/2m,
we can in principle turn the sum (1.4) into an integral [14,15] to obtain
\[
N = \frac{V}{(2\pi\hbar)^3}\int_0^\infty \frac{4\pi p^2\,dp}{e^{\beta(p^2/2m-\mu)}-1},
\tag{1.5}
\]
in which case the chemical potential must always be negative. The substitution x ≡ βp²/2m turns
Eq. (1.5) into
\[
N = \frac{V}{(2\pi\hbar)^3}\,2\pi\left(\frac{2m}{\beta}\right)^{3/2}
\int_0^\infty \frac{x^{1/2}\,dx}{e^{x-\beta\mu}-1}
\equiv \frac{V}{\lambda_T^3}\,g_{3/2}(e^{\beta\mu}),
\tag{1.6}
\]
where we have introduced the de Broglie wavelength, λ_T² ≡ 2πħ²β/m ≡ 1/Λ_T, and the Bose-Einstein
integral
\[
g_{3/2}(x) \equiv \sum_{k=1}^{\infty} \frac{x^k}{k^{3/2}},
\tag{1.7}
\]
whose definition and properties are reviewed in Appendix A. This implies the following general
relation to be obeyed by the number density, n ≡ N/V, of the Bose gas:
\[
\lambda_T^3\, n = g_{3/2}(e^{\beta\mu}).
\tag{1.8}
\]
As the temperature of the system decreases, β becomes larger. The chemical potential must correspondingly increase in order to keep the occupation numbers from becoming negative, and eventually, at some temperature Tc, it becomes (infinitesimally smaller than) the ground-state energy
of the system (zero in this case). The right-hand side of (1.8) becomes ζ(3/2) ≈ 2.612, where
ζ(z) = g_z(1) ≡ ∑_{k=1}^∞ k⁻ᶻ is the Riemann zeta function, and we are left with Einstein’s criterion for
condensation:
\[
\frac{n_c}{(\Lambda_{T_c})^{3/2}} = \zeta(3/2) \approx 2.612.
\tag{1.9}
\]
An inconsistency then arises: the right-hand side of (1.9) is bounded, and yet there is nothing
that prevents us in theory from fixing the temperature at a value lower than T c and increasing the
density beyond nc by adding particles—especially since, with a vanishing chemical potential, it costs
no energy to do so. To resolve this inconsistency, Einstein postulated that any extra particles would
have to occupy the ground state, whose relative importance had been neglected when introducing
the continuum approximation (1.5), and force the system to condense at those lower temperatures;
BEC had been conceived.
The exact same analysis in two dimensions yields
\[
\frac{n_c}{\Lambda_{T_c}} = \zeta(1),
\tag{1.10}
\]
and here we meet our dissertation topic: ζ(1), also known as the harmonic series, diverges, and
as a consequence this two-dimensional homogeneous ideal system can accommodate any number of
particles at any temperature without the need to invoke a condensation: there is no Bose-Einstein
condensation in two dimensions. (Or is there?)
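The contrast between the two criteria comes down to the convergence of g_s(1) = ζ(s). A quick numerical check (ours, not part of the thesis) makes the point: the partial sums creep toward ζ(3/2) ≈ 2.612 for s = 3/2 but grow without bound, roughly like ln k, for s = 1:

```python
def g_partial(s, terms):
    """Partial sum of the Bose-Einstein integral at unit argument,
    g_s(1) = sum_{k=1}^terms 1/k^s, cf. Eq. (1.7)."""
    return sum(1.0 / k**s for k in range(1, terms + 1))

for terms in (10**2, 10**4, 10**5):
    print(terms, g_partial(1.5, terms), g_partial(1.0, terms))
# g_{3/2}(1) saturates near zeta(3/2) ~ 2.612, which caps the density
# in Eq. (1.9); g_1(1), the harmonic series, keeps growing, so the
# two-dimensional criterion (1.10) can never be met.
```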
6
Einstein accompanied his 1924 prediction of a condensation with a proposal of three systems
where it could be detected [16]: hydrogen, helium, and the electron gas. The third was ruled
out the next year with the introduction of Fermi-Dirac statistics.³ F. London’s observation [19]
that the λ point of 4 He occurs at a temperature close to Einstein’s Tc alerted physicists to the
importance of BEC in low-temperature systems and started a line of research that continues to
this day; however, the particular characteristics of helium—the low mass of its atoms and their
complicated interactions—make it a difficult system in which to study BEC directly.⁴
Research into the first system received a boost when it was realized [21] that atomic hydrogen
(kept that way by spin polarization) should remain in the gaseous phase at all temperatures. By
the mid-1980s, two rival groups, one led by D. Kleppner, T. J. Greytak, and D. E. Pritchard at MIT
and the other led by I. F. Silvera at Harvard University, had managed to stabilize and cool down
spin-polarized hydrogen (in a container, using standard low-temperature methods) to a combination
of density and temperature not far from the BEC régime [22]. These efforts literally ran into a
wall, however, when it was found that the atoms stuck to the walls of the container and recombined
into molecules; this was initially suppressed by heating the walls, but the improvement in density
thus attained turned out to be short-lived. As a solution the experimenters decided to enclose the
hydrogen atoms in magnetic traps.
Now, when we put an ideal Bose-Einstein gas in a (harmonic isotropic) trap, a similar analysis
to the one above leads to
N = \frac{1}{(2\pi\hbar)^3} \int d^3x \int_0^\infty \frac{4\pi p^2\,dp}{e^{\beta(p^2/2m + m\omega^2 r^2/2 - \mu)} - 1} = \left(\frac{k_B T}{\hbar\omega}\right)^{3} g_3(e^{\beta\mu}). \qquad (1.11)
The density now depends on the position:
\lambda_T^3\, n(r) = g_{3/2}\bigl(e^{\beta(\mu - m\omega^2 r^2/2)}\bigr); \qquad (1.12)
if we evaluate this density at x = 0 we recover Einstein’s criterion (1.9) (to quote from Ref. 23,
“the chief effect of the trap is merely to concentrate the particles to the density at which BEC commences”), and below the (higher) Tc that results any additional atoms pile up in the oscillator
³ There has been, however, an active search for BEC in a similar system, the exciton gas in semiconductors [17,18].
⁴ In early 2001, BEC was observed in a dilute gas of ⁴He in the 2³S₁ metastable state [20].
ground state. More interesting for our purposes is the result we obtain when we repeat the analysis
in two dimensions; we obtain
N = \left(\frac{k_B T}{\hbar\omega}\right)^{2} g_2(e^{\beta\mu}), \qquad (1.13)
and g₂(x), like g_{3/2}(x), is bounded in the limit μ → 0, which leaves open the possibility of a
condensation in this system below a temperature k_B T_c = ℏω√(6N/π²) [24]. (Note, however, that
the system density once again yields g₁(x), and diverges in this limit.) This possibility became
even more attractive when it was realized that the critical temperature was proportional to √N in
two dimensions, while in three it went as N^{1/3} and was thus much lower; this fact will show up
repeatedly in this thesis.
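The two formulas just quoted are easy to compare numerically; the following sketch (ours) simply evaluates the dimensionless critical temperatures k_B T_c/ℏω of the ideal trapped gas in both geometries:

```python
import math

ZETA3 = 1.2020569031595943  # Riemann zeta(3)

# Ideal-trapped-gas critical temperatures in units of hbar*omega/k_B:
#   2D: T_c = sqrt(6 N / pi^2)        [Ref. 24]
#   3D: T_c = (N / zeta(3))**(1/3)
def tc_2d(n_atoms):
    return math.sqrt(6 * n_atoms / math.pi**2)

def tc_3d(n_atoms):
    return (n_atoms / ZETA3) ** (1 / 3)

for n_atoms in (10**2, 10**4, 10**6):
    print(f"N = {n_atoms:>7}:  Tc(2D) = {tc_2d(n_atoms):8.1f},  Tc(3D) = {tc_3d(n_atoms):6.1f}")
```

For a given atom number the 2D transition temperature is far higher, and the gap widens as N grows, which is the √N versus N^{1/3} scaling noted above.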
The quest for Bose-Einstein condensation in hydrogen at MIT—which, at one point or another,
involved Cornell, Hulet, Ketterle, and Wieman [3]—continued to be beset by difficulties, this time
at the evaporative-cooling stage, but eventually came to a happy conclusion in 1998 [25]. We
cannot help but close this section with Kleppner’s account [26] of the night when the first hydrogen
condensate was created:
Late one night last June, a phone call from the lab implored me to come quickly. I had
a pretty good idea of what was up because BEC in hydrogen had seemed imminent for
more than a week. As I drove in the deep night down Belmont Hill toward Cambridge,
still dopey with sleep, the blackness of the sky suddenly gave way to a golden glow. I was
not surprised because I had a premonition that the heavens would glow when BEC first
occurred in hydrogen. Abruptly, streamers of Bose-Einstein condensates shot across the
sky, shining with the deep red of rubidium and brilliant yellow of sodium. Small balls of
lithium condensates flared and imploded with a brilliant red pop. Stripes of interference
fringes criss-crossed the zenith; vortices grew and disappeared; blimp-shaped condensates
drifted by, swaying in enormous arabesques. The spectacle was exhilarating but totally
baffling until I realized what was happening: The first Bose-Einstein condensates were
welcoming hydrogen into the family!
1.3 This thesis
We have just seen that, while Bose-Einstein condensation does not occur in a two-dimensional ideal
homogeneous gas, the introduction of a trap changes the character of the system and allows the
transition to happen. However, when one restores the interactions between atoms their ability to
condense is lost once again, at least in the thermodynamic limit; this result, though rigorously proved
(by P. C. Hohenberg in 1967 [27], and adapted to the trapped case by W. J. Mullin in 1997 [28]),
continues to be somewhat of a mystery, since every relevant limiting case (not just the ideal gas)
seems to suggest otherwise: the equations of many-body theory consistently predict the existence of
BEC long-range order at zero temperature; for small numbers of particles, both experiments [29,30]
and Monte Carlo simulations [31–33] show results consistent with the presence of BEC.
On the other hand, these results may be due to other processes (or combinations thereof) that
have been predicted to happen in other two-dimensional systems. The Kosterlitz-Thouless transition [34], for example, has been proved to take place [35] in two-dimensional systems that are
free in one direction and harmonically confined in the other. There is the possibility of a transition
into a “smeared” BEC [36–38], in which a band of states become populated in such a way that
their total population is macroscopic. Finally, there is the possibility of a quasicondensation: the
presence of coherence over a region that, though finite, might still be larger than that prescribed by
the characteristic length of the system [39–41].
In this dissertation we will look into a few different questions regarding the existence of Bose-Einstein condensation in two-dimensional traps. First of all, is it possible to find an approximation
scheme that predicts the existence of a BEC in a finite system? If that is the case, what will this
condensed system look like? Moreover, is this condensation into a single state? In other words, how
large is the population of the condensate (that is, the ground state of the system) when compared to
those of the low-lying states? Finally, the standard way of creating a two-dimensional condensate in
the laboratory [30] consists of taking a three-dimensional system and squeezing it in one direction,
creating in effect a pancake-shaped trap. In that case, how well can the resulting quasi-2D condensate
be described by our two-dimensional scheme? Do the density profiles match? Are the condensate
fractions consistent? How does the ground-state population compare, once again, to those of the
excited states?
Our scope is narrow: we will restrict ourselves to the depiction and analysis of density profiles,
the calculation of condensate fractions, and the comparison of ground-state occupations to those
of a few excited states. Except in the case of the ideal gas, we will avoid the transition itself,
mainly because most of the methods we use are expected to break down—and indeed do—before
the temperature reaches its critical value. We will not consider the dynamics of the system at all,
though experiments are going in that direction. In particular, and being aware of the enormity of
our omission, we will shy away from discussing superfluidity; the Kosterlitz-Thouless transition will
receive little more than a passing mention. The gas will always be assumed to be at temperatures
so low that it is at least partially condensed. It will also be invariably assumed to be in equilibrium;
in particular, whenever we speak of raising or lowering the temperature of a system, we will assume
that this is done slowly enough so that equilibrium is not disturbed. (And even then we leave aside
the problem of fluctuations about equilibrium values.)
We have tried to be as thorough as possible within this restricted range, and to make our
presentation self-contained. Though we have tried to relegate the goriest details to the appendices,
the main text is still full of mathematics. We have striven to fill in the steps that, though taken for
granted in research papers, are far from trivial. The reader will also note that our restricted subject
in fact ramifies into a maze of subdivisions and special cases; moreover, these subtopics can be and
have been addressed using one or more—and even combinations—of the methods we present.
As we said above, the systems under study are finite, dilute, harmonically trapped Bose-Einstein
gases below their transition temperatures. They will be either ideal or interacting, and either at
absolute zero or at finite temperature. The traps that confine them will be of three types: three-dimensional and isotropic, two-dimensional and isotropic, or three-dimensional and isotropic in the
x- and y-directions, with a variable confinement along the z-direction always greater than those
along the other two. (Table 1.1 contains a partial listing of the systems that
we have studied, the methods we have used to study them, and the results we have obtained.)
We begin by studying the ideal gas in Chapter 2. Though hardly a realistic system, the ideal
gas has been an important reference and starting point from the very beginning. For our purposes,
the ideal trapped gas is important because it can be treated exactly at every temperature, for any
(finite) number of atoms, and in traps of any dimensionality or degree of anisotropy. This exact
method will allow us to look at the transition in some detail, to define and exhibit the quantities
that we will be calculating later, and to introduce a semiclassical approximation that will recur,
with similar successes and limitations, in the rest of the thesis; we do note that this semiclassical
approximation will be used to describe only the noncondensate; the condensate itself will receive the
same fully quantum-mechanical treatment as in the exact theory. We will display various results
that will enable us to get a feel for the temperature scales and occupation numbers involved and will
acquaint us with the typical shapes of the density profiles and of the wavefunctions that characterize
the various states in which the atoms can be.
In Chapter 3 we begin our study of interacting gases using mean-field theory. We develop
the standard Gross-Pitaevskiı̆ theory (also known as the Hartree-Fock-Bogoliubov equations) in
as general and detailed a way as we can; immediately afterwards we go on to introduce diverse
approximations of increasing crudity, settling eventually upon, and extracting results from, two of
the simplest:
While it is perhaps the least sophisticated approach to the study of BEC, being easier to solve
than even the ideal gas, the zero-temperature Thomas-Fermi limit nonetheless yields predictions
remarkably close to reality and is a standard tool for fitting experimental data. Though the assump-
Table 1.1. Summary of methods used and systems studied in this thesis

| Method                      | 3D                    | 2D                    | Compressed 3D         |
| Ideal gas                   |                       |                       |                       |
| ◦ Exact solution            | Works always          | Works always          | Works always          |
| ◦ Semiclassical approx.     | Fails at high T       | Fails at high T       | Fails at high T       |
| Mean-field theory           |                       |                       |                       |
| ◦ T = 0                     |                       |                       |                       |
|   ◦ Gross-Pitaevskiı̆ Eq.   | Works always          | Works always          | Not tested enough     |
|   ◦ Thomas-Fermi limit      | Works always          | Works always          | Fails at high λ¹ but can be adapted for that case |
| ◦ T ≠ 0                     |                       |                       |                       |
|   ◦ HFB + GP²               | Yields results similar to those of the HF approximation or else fails | Always fails | Not tested enough |
|   ◦ HF + GP³                | Fails at high T       | Fails at high T       | Not tested enough     |
|   ◦ HFB + TF⁴               | Not tested enough     | Always fails          | Not tested enough     |
| ◦ Thermodynamic limit       |                       |                       |                       |
|   ◦ HF + TF⁵                | Works where the HF approximation works | Works where the HF approximation works | Not tested enough |
| Uncondensed solution        | Always fails          | Works always but has a high free energy | Not expected to work |
| PIMC simulation             |                       |                       |                       |
| ◦ Ideal gas                 | Works always          | Works always          | Works always          |
| ◦ Interacting gas           | Works always          | Works but disagrees with the compressed-3D results | Described well at high λ by the 2D HF approximation |

¹ Compression ratio or anisotropy parameter, defined by λ ≡ ω_z/ω, where ω ≡ ω_x = ω_y.
² Semiclassical Hartree-Fock-Bogoliubov approximation for the thermal cloud combined with the Gross-Pitaevskiı̆ equation for the condensate.
³ Semiclassical Hartree-Fock approximation for the thermal cloud combined with the Gross-Pitaevskiı̆ equation for the condensate.
⁴ Semiclassical Hartree-Fock-Bogoliubov approximation for the thermal cloud combined with the Thomas-Fermi limit for the condensate.
⁵ Semiclassical Hartree-Fock approximation for the thermal cloud combined with the Thomas-Fermi limit for the condensate.
tions that lead to it lose validity when the system becomes very anisotropic, it is still possible to
slightly modify the model in order to accommodate the necessary changes, and as a result we get
an important connection between a squeezed 3D gas and a 2D gas.
The Gross-Pitaevskiı̆ theory models three-dimensional interactions by considering two-body collisions, and the relevant parameter that describes them, the s-wave scattering length, is an experimentally measurable quantity. The two-dimensional version of the theory is quite different, and,
even if we try to model 2D collisions using 3D methods, we still have to deal with the fact that the
corresponding interaction parameter has different dimensions, and, consequently, cannot be readily
expressed in terms of the standard three-dimensional coupling constant. Still, we do not expect
the interactions in the two-dimensional gas to depend too strongly on parameters other than the
scattering length and, when the 2D system has been created by compressing a three-dimensional
trap in one direction, on some relation between the characteristic length in that direction and those
in the others. The Thomas-Fermi limit provides that missing link.
The other simplified scheme will be our main workhorse when we turn to the study of finite-temperature systems in the rest of the thesis: this method combines an exact description of the
condensate by means of the Gross-Pitaevskiı̆ equation (exact in the sense that it is identical to that
prescribed by more-rigorous theories) with a semiclassical, Hartree-Fock treatment of the noncondensate.
After presenting all the theory, we proceed to discuss the numerical methods we used to solve
the Hartree-Fock equations and show some of the results we obtained. We initially study one of
the standard systems in the literature: a 10⁴-atom 3D isotropic gas with ⁸⁷Rb parameters. This
will give us the chance to display the condensate fraction, the density, and the chemical potential of
an interacting gas; in particular, we will verify that interactions affect the transition temperature,
deplete the condensate, and cause a noticeable increase in the size of the system. This study will
also allow us to see some limitations of the semiclassical approximation—notably its failure at high
temperatures. After that, we go back to the 2D gas: repeating what we did in Chapter 2 for the ideal
case, we will set up and diagonalize the off-diagonal one-body density matrix in the semiclassical
Hartree-Fock approximation; the resulting eigenvalues and eigenvectors will show us the effects
of interactions on the ground- and excited-state populations of the system and on the shapes of the
corresponding wavefunctions.
The study of the strictly two-dimensional interacting trapped gas is taken up in more detail in
Chapter 4. At that point we will be confronted with another interesting fact: the mean-field equations derived and used in Chapter 3 can also be solved for a 2D interacting finite trapped system
without having to invoke the presence of a condensate [42]. The approximation scheme that allows
these “uncondensed” solutions is in principle more rigorous than the one we have been using, since,
while still semiclassical, it does not make use of the Hartree-Fock approximation. When we try to
solve this model in the presence of a condensate, we encounter phononlike quasiparticles at the low
end of the energy spectrum; on the other hand, if we momentarily consider high temperatures and
artificially remove the condensate, the phonons disappear from the spectrum—and uncondensed
solutions can be found all the way down to T = 0. This would imply that phonons destroy the
long-range order of the system and prevent it from condensing, in agreement with previous explanations [43]. However, these uncondensed solutions have consistency problems of their own (in
particular, they do not reduce to the correct limit as T → 0) and are found to have a higher free
energy than the Hartree-Fock condensed ones. This renews our faith in the validity of the Hartree-Fock solutions, along with the infrared energy cutoff they impose, and encourages us to put them
through one last test.
Chapter 5 looks at the shape and condensate fraction of a quasi-2D interacting system created by
deforming a three-dimensional trap into the shape of a pancake. Our aim is to see how well the surface
density profile and condensate fraction of that squeezed system can be described using the density
profile and condensate fraction predicted for a pure 2D gas by the Hartree-Fock approximation. As
a guide, and in the absence of experimental results, we will introduce and use another, different,
method of analysis. Path-integral Monte Carlo simulations [44] are, for interacting gases, the closest
we have to an exact method like the one we use in Chapter 2 to treat the ideal gas [45]; we will spend
a good fraction of Chapter 5 justifying this last statement, explaining the method, and describing
its implementation for the specific case of a trapped Bose gas. After checking the correctness of our
simulations by reproducing results obtained in earlier chapters, we will proceed to display the results
we obtained for squeezed interacting systems and compare them to our mean-field predictions.
Chapter 6 will wrap up our considerations and suggest some directions for future work.
CHAPTER 2
THE IDEAL TRAPPED BOSE GAS
The weakly interacting Bose gas can be treated using the mean-field
approximation, though at the low densities likely to be of experimental
interest, the corrections are not expected to be important.
—V. Bagnato and D. Kleppner [24]
2.1 Introduction
As it turns out, corrections due to interactions are much more important than Bagnato and Kleppner
expected. When interpreting their data, experimentalists use the zero-temperature Thomas-Fermi
limit (to be discussed later), not the ideal gas, as a first approximation. It has even been argued that,
even though the phenomenon was discovered in the ideal gas, the very existence of a condensation is a
consequence of interactions.¹ There are, however, many reasons why the ideal gas is worth studying,
and they are not restricted to its excellence as a pedagogical tool. The condensates created in the
laboratory occur in systems that are both decidedly finite and very inhomogeneous; the ideal gas
can be adapted to incorporate these nontrivial effects, many of which carry over to the interacting
case without modification. Moreover, the noninteracting gas is becoming less of an idealization with
every passing day: recent work in the study of Feshbach resonances has enabled experimentalists
to tune the strength of interatomic interactions, and the possibility of creating an essentially ideal
trapped gas is becoming progressively more plausible.
We begin the chapter by introducing the one-body density matrix as the quantity that includes
all the information that can be gleaned about a finite system of trapped noninteracting bosons.
After discussing some exact quantum results where the inhomogeneity and finite size are already
present, we introduce a semiclassical approximation that enables us to set a temperature scale for the
¹ Consider for example the following remarks by P. Nozières [38] (the emphasis is his): “[D]o the
condensate particles accumulate in a single state? Why don’t they share between several states that
are degenerate, or at least so close in energy that it makes no difference in the thermodynamic limit?
The answer is non-trivial: it is the exchange interaction energy that makes condensate fragmentation
costly. Genuine Bose-Einstein condensation is not an ideal gas effect: it implies interacting particles!”
system and that will reappear in a more sophisticated form when we treat the interacting gas in the
following chapters. This semiclassical approximation does not include the condensate, which has to
be added by hand; this limitation will provide us with a first opportunity to perform a self-consistent
calculation.
By explicitly diagonalizing the one-body density matrix of a gas in an isotropic 2D trap we
can obtain the populations of the ground state of the system and of its first few excited states.
We perform the diagonalization for distinguishable particles and for bosons and find that in the
distinguishable case the ground state is not preferentially populated, while in the Bose case the
occupation of the ground state is identical to that found by the more direct method. As we would
expect, the eigenvectors for both cases are the same and correspond to the eigenfunctions of the
harmonic oscillator.
We also squeeze a three-dimensional system by increasing one of the trapping frequencies; this
contributes to an enhancement of the density at the center of the trap and thus encourages the
system to condense. Indeed, we will see that we can take a system at a temperature above the
transition temperature and cause it to condense merely by confining it more tightly.
2.2 Exact results for the ideal gas
We start by studying a system of N distinguishable noninteracting atoms, each of mass m, trapped
by a σ-dimensional isotropic harmonic-oscillator potential characterized by an angular frequency ω
(we will consider anisotropic traps in later sections). Since there are no interactions, each atom moves
independently of the others, and we can consequently visualize the ensemble as being composed of
N one-body systems. Let us restrict ourselves momentarily to one of these. In an isotropic trap,
the motion of a single atom is described by the one-body Hamiltonian
H_1 = \frac{p^2}{2m} + \frac{1}{2}\, m\omega^2 r^2, \qquad (2.1)
where p and r are the momentum and position operators. The harmonic-oscillator potential sets the
length and energy scales of the system, and it is in terms of these that we will henceforth express
all physical quantities. As is well known [46], a single quantum-mechanical point mass confined
by a one-dimensional harmonic trap has a minimum energy, given by ħω/2, that corresponds to a
ground state of Gaussian shape characterized by a length x₀ such that x₀² = ħ/mω. Introducing
dimensionless coördinates x = r/x₀ and momenta κ = x₀p/ħ, we can rewrite (2.1) as
H_1 \equiv \tfrac{1}{2}\hbar\omega\, \tilde H_1 = \tfrac{1}{2}\hbar\omega\, (\kappa^2 + x^2), \qquad (2.2)
whose eigenenergies in σ dimensions are given in our dimensionless units by
\epsilon_{n_1\cdots n_\sigma} \equiv \frac{2}{\hbar\omega}\, E_{n_1\cdots n_\sigma} = 2n_1 + \cdots + 2n_\sigma + \sigma, \qquad (2.3)
where n₁, . . . , n_σ are nonnegative integers. (This is for Cartesian coördinates; the eigenfunctions and eigenvalues
in other coördinate systems are displayed in Appendix A).
When the system is at a temperature T, quantum statistics postulates that all relevant information is contained in the canonical one-body density matrix, ρ₁ = e^{−βH₁}/Z₁, where β ≡ 1/k_B T, k_B is
Boltzmann’s constant, and the one-body partition function Z₁ ≡ Tr e^{−βH₁} is a normalization factor
that ensures Tr ρ₁ = 1. When we evaluate the diagonal elements of the (unnormalized) one-body
density matrix in real space we obtain
\langle x|e^{-\tilde\beta \tilde H_1}|x\rangle = \frac{1}{(2\pi)^{\sigma/2}}\,\operatorname{csch}^{\sigma/2} 2\tilde\beta\; e^{-x^2 \tanh\tilde\beta}, \qquad (2.4)
where we have introduced the dimensionless inverse temperature β̃ ≡ t⁻¹ ≡ ħωβ/2. This expression,
first found by F. Bloch, can be derived by explicitly summing over Hermite polynomials [47], by
using Feynman path integrals, by solving the Bloch equation H̃ρ₁ = −∂ρ₁/∂β̃ [15], or by purely
algebraic methods [48]. Appendix A sketches a proof for σ = 2 using Laguerre polynomials.
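As one more check (ours, not part of the thesis), the one-dimensional case of this formula can be verified by brute-force summation over the oscillator eigenstates, whose dimensionless energies are ε_n = 2n + 1:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) by the standard recurrence."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def bloch_sum(x, beta, nmax=80):
    """Direct sum over 1D oscillator states: sum_n |psi_n(x)|^2 e^{-beta(2n+1)}."""
    total = 0.0
    for n in range(nmax + 1):
        psi_sq = (hermite(n, x)**2 * math.exp(-x * x)
                  / (math.sqrt(math.pi) * 2**n * math.factorial(n)))
        total += psi_sq * math.exp(-beta * (2 * n + 1))
    return total

def bloch_closed(x, beta):
    """Closed form (2.4) with sigma = 1: csch^{1/2}(2 beta) e^{-x^2 tanh beta}/sqrt(2 pi)."""
    return math.exp(-x * x * math.tanh(beta)) / math.sqrt(2 * math.pi * math.sinh(2 * beta))

print(bloch_sum(1.0, 0.5), bloch_closed(1.0, 0.5))
```

For moderate β̃ the truncated sum agrees with the closed form to floating-point accuracy; the truncation nmax = 80 is our own choice, adequate when e^{−β̃(2·80+1)} is negligible.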
The one-body partition function, Z₁ = (2 sinh β̃)^{−σ}, can be found by integrating (2.4) with
respect to x or by evaluating the trace in its energy eigenbasis. If we divide Eq. (2.4) by Z₁ we
obtain the normalized diagonal density matrix,
\tilde n(x) = \frac{1}{\pi^{\sigma/2}}\,\tanh^{\sigma/2}\tilde\beta\; e^{-x^2 \tanh\tilde\beta}; \qquad (2.5)
we shall henceforth use the notation ñ ≡ x₀^σ n to refer to densities in real space.
When we have N atoms, the Hamiltonian becomes \tilde H = \sum_{j=1}^{N} \tilde H_{1,j}, and the system will be
described by a density matrix
described by a density matrix
\rho = \frac{1}{Z_N}\, e^{-\tilde\beta \sum_j \tilde H_{1,j}} = \prod_{j=1}^{N} \frac{e^{-\tilde\beta \tilde H_{1,j}}}{Z_1}, \qquad (2.6)
where we used the fact that, in the absence of interactions, Z_N = (Z₁)^N. The corresponding reduced
one-body density matrix is defined by [49]
\rho_1 = N \operatorname{Tr}_{2:N} \rho = N\, \frac{e^{-\tilde\beta \tilde H_{1,1}}}{Z_1} \qquad (2.7)
and is identical to (2.5) but normalized to N :
\tilde n(x) = \frac{N}{\pi^{\sigma/2}}\,\tanh^{\sigma/2}\tilde\beta\; e^{-x^2 \tanh\tilde\beta}. \qquad (2.8)
Up to now, the atoms, though identical, have been taken to be distinguishable: we have used
labels to tag them and even selected the “first” one when we calculated the reduced one-body
density matrix. When the system is composed of indistinguishable bosons, the situation is much
more involved (see Appendix B and Ref. 50 for some details), but the result is transparent and
intuitive. If we invoke the grand canonical ensemble, it is possible to show that the reduced one-body density matrix adopts a form that mimics the Bose-Einstein distribution:²
\rho_1 \equiv \frac{1}{e^{\tilde\beta(\tilde H_1 - \tilde\mu)} - 1} = \sum_{\ell=1}^{\infty} e^{-\ell\tilde\beta(\tilde H_1 - \tilde\mu)}; \qquad (2.9)
the chemical potential μ̃ is determined by the subsidiary condition N = Tr ρ₁. The fact that we
can express Eq. (2.9) as a sum over ℓ will be of enormous convenience throughout this work, but its
importance reaches far beyond; as Appendix B shows, it is a manifestation of Bose symmetry at its
deepest level, the closest we will get to being able to define what we mean by bosons. The real-space
diagonal density now becomes
\langle x|\rho_1|x\rangle \equiv \tilde n(x) = \frac{1}{(2\pi)^{\sigma/2}} \sum_{\ell=1}^{\infty} e^{\ell\tilde\beta\tilde\mu}\,\operatorname{csch}^{\sigma/2} 2\ell\tilde\beta\; e^{-x^2 \tanh\ell\tilde\beta} = \frac{1}{\pi^{\sigma/2}} \sum_{\ell=1}^{\infty} \frac{e^{\ell\tilde\beta(\tilde\mu-\sigma)}}{(1 - e^{-4\ell\tilde\beta})^{\sigma/2}}\; e^{-x^2 \tanh\ell\tilde\beta}. \qquad (2.10)
It is worth emphasizing that it is the difference µ̃ − σ, and not µ̃ by itself, that tends to zero below
the transition in the trapped case. This can be clarified further by noticing that, if we were to add
another particle to the system, it would have to have an energy of at least ħωσ/2, the ground-state
² Note that we use exactly the same notation for distinguishable and Bose-Einstein density matrices. In the few instances where this could lead to confusion, in Chapter 5 especially, we will use the suffix B to denote quantities that obey Bose statistics.
energy of the oscillator, and the energy of the system would increase by that amount. We can readily
integrate any of the expressions above with respect to x to obtain [51,52]
N = \frac{1}{2^{\sigma}} \sum_{\ell=1}^{\infty} \frac{e^{\ell\tilde\beta\tilde\mu}}{\sinh^{\sigma}\ell\tilde\beta} = \sum_{\ell=1}^{\infty} \frac{e^{\ell\tilde\beta(\tilde\mu-\sigma)}}{(1 - e^{-2\ell\tilde\beta})^{\sigma}}. \qquad (2.11)
The first expression is especially interesting because it gives us the total particle number in terms
of the canonical one-body partition function; proofs of this result appear in Appendix B and in
Ref. 53. Equations (2.10) and (2.11) provide a complete description of the system, and, as Fig. 2.1
illustrates, they predict the occurrence of a condensation below a certain temperature.
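The procedure just described (solve (2.11) for μ̃ at fixed N and temperature, then read off the ground-state population) can be sketched as follows; the truncation of the ℓ sum and the bisection bracket are our own arbitrary choices:

```python
import numpy as np

# Condensate fraction of a 2D ideal trapped gas of N = 1000 atoms,
# obtained by solving Eq. (2.11) for the chemical potential and then
# evaluating Eq. (2.13).  delta stands for mu-tilde - sigma (< 0), and
# t = 1/beta-tilde is the dimensionless temperature of the text.
N, sigma = 1000, 2
l = np.arange(1, 50_001)  # truncation of the sum over l; adequate at these T

def total_atoms(delta, beta):
    """Right-hand side of Eq. (2.11)."""
    return np.sum(np.exp(l * beta * delta) / (1 - np.exp(-2 * l * beta))**sigma)

def condensate_fraction(t_over_tc):
    tc = 2 * (6 * N / np.pi**2)**0.5          # t_c^(2) = 2 sqrt(N / zeta(2))
    beta = 1.0 / (t_over_tc * tc)
    lo, hi = -5.0, -1e-14                     # bisect for delta < 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_atoms(mid, beta) < N else (lo, mid)
    delta = 0.5 * (lo + hi)
    return 1.0 / (np.exp(-beta * delta) - 1.0) / N   # N0/N from Eq. (2.13)

for f in (0.25, 0.5, 0.75):
    print(f"T/Tc = {f}:  N0/N = {condensate_fraction(f):.3f}")
```

Bisection works here because the right-hand side of (2.11) increases monotonically as μ̃ approaches σ from below.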
The Bose-Einstein condensate is a pure (albeit macroscopic) system that can be described by
a single wavefunction. In the trapped ideal case, this wavefunction is the purely real ground state
of the harmonic oscillator. The condensate density is the square of this wavefunction, and as a
consequence the condensate density profile is the same at all temperatures:
\tilde n_0 = \frac{N_0}{\pi^{\sigma/2}}\, e^{-x^2}; \qquad (2.12)
only its normalization constant, which corresponds to the condensate number, varies. To calculate
the condensate number we plug the ground-state energy ε₀ = σ into the Bose-Einstein distribution
function:
N_0 = \frac{1}{e^{\tilde\beta(\epsilon_0 - \tilde\mu)} - 1} = \frac{1}{e^{\tilde\beta(\sigma - \tilde\mu)} - 1} = \sum_{\ell=1}^{\infty} e^{\ell\tilde\beta(\tilde\mu - \sigma)}. \qquad (2.13)
We can then use (2.12) and (2.13) to resolve the total density (2.10) into its condensate and noncondensate components at any temperature and for any number of atoms. Figures 2.2 and 2.3
display the rapid growth of the condensate density with decreasing temperature and show that the
thermal density attains a maximum at a point away from the origin; we will later see that this
“delocalization” effect is enhanced by interactions.
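The resolution into condensate and thermal components, and the off-center thermal maximum, can be reproduced with a few lines of code (ours; N = 1000 and T = 0.9 T_c are chosen for speed, not to match the figures):

```python
import numpy as np

# Resolving the density of a 2D ideal trapped gas (Eq. (2.10)) into its
# condensate (Eqs. (2.12)-(2.13)) and thermal parts, and locating the
# off-center thermal maximum.
N, sigma = 1000, 2
l = np.arange(1, 10_001)                       # truncation of the sum over l
tc = 2 * (6 * N / np.pi**2)**0.5               # t_c^(2) = 2 sqrt(N / zeta(2))
beta = 1.0 / (0.9 * tc)                        # beta-tilde at T = 0.9 Tc

def total_atoms(delta):                        # Eq. (2.11); delta = mu-tilde - sigma
    return np.sum(np.exp(l * beta * delta) / (1 - np.exp(-2 * l * beta))**sigma)

lo, hi = -5.0, -1e-14                          # bisect Eq. (2.11) for delta < 0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if total_atoms(mid) < N else (lo, mid)
delta = 0.5 * (lo + hi)

n0 = 1.0 / (np.exp(-beta * delta) - 1.0)       # condensate number, Eq. (2.13)
x = np.linspace(0.0, 6.0, 301)
w = np.exp(l * beta * delta) / (1 - np.exp(-4 * l * beta))**(sigma / 2)
n_total_x = (w[:, None] * np.exp(-np.outer(np.tanh(l * beta), x**2))).sum(axis=0) / np.pi
n_thermal = n_total_x - n0 * np.exp(-x**2) / np.pi   # subtract the condensate (2.12)
x_hump = x[np.argmax(n_thermal)]
print("thermal density peaks at x =", x_hump)  # away from the origin (the "hump")
```

The thermal cloud's maximum sits away from the origin while the condensate is a Gaussian centered at x = 0: this is the delocalization hump discussed above.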
The Bose-Einstein transition is characterized by abrupt changes in various quantities. Figure 2.4 shows the temperature dependence of the condensate fraction for different values
of N . The onset of condensation, as one would expect, becomes more pronounced as N grows. It is
also evident that the system is completely condensed at zero temperature; indeed, in the limit β̃ → ∞
we can neglect the exponential in the denominator of Eq. (2.11) and once again obtain (2.13), provided we assume lim_{β̃→∞} β̃(μ̃ − σ) to be constant; this confirms that μ̃ ≈ σ at low temperatures.
The chemical potential stays essentially fixed at this value until the transition temperature, at which
point it experiences a sharp decrease and quickly becomes negative (see Fig. 2.5). The location of
Figure 2.1. Density and number density of a condensed Bose gas. Throughout this work we will depict the density of a Bose system below the transition temperature using one of these two modes of description. In this case, we have a two-dimensional ideal gas of N = 1000 atoms confined by an isotropic trap. The plot at top shows the actual density of the gas, ñ(x), that has appeared throughout our discussion; the bottom panel shows the “number density” N(x) defined by N = ∫₀^∞ dx N(x) = ∫ d^σx ñ(x)—in two dimensions, N(x) = 2πx ñ(x); this is the quantity yielded directly by the Monte Carlo simulations of Chapter 5. In each case, the total density (solid line) has been resolved into its condensed (dotted) and thermal (dashed) components and has been divided by N.
Figure 2.2. The growing condensate. This “closeup” view of Fig. 2.1 shows (when read from right to left) the fast growth of the condensate in the region just below the critical temperature.
Figure 2.3. Front view of Fig. 2.2. Note that the noncondensate (dash-dotted line) has a much larger spatial extent than the condensate (dashed line). The delocalization hump is clearly visible.
Figure 2.4. Condensate fraction of a two-dimensional ideal trapped Bose gas. The solid lines are the exact results obtained from solving Eq. (2.11) for μ̃ and then using (2.13) to extract N₀; the dashed lines show the infinite-N limit given by Eq. (2.17) below. We can see that already for N = 10⁴ the infinite-N limit is quite accurate.
the thermal maximum also changes abruptly at the transition: as Fig. 2.6 illustrates, the “hump”
is suddenly shifted towards the origin, signalling that a sizable fraction of the particles have been
displaced to the most localized state available.
2.3 The semiclassical approximation
As written, the density matrix (2.10) gives an exact description of a finite trapped Bose gas of any
dimensionality at any temperature. It is difficult, however, to extract relevant quantities from it
in analytic form—the transition temperature is a case in point—and in order to do that we will
introduce a semiclassical approximation that we will also be using for the interacting gas.
The semiclassical approximation assumes that the temperature is high when compared to the
spacing between energy levels (which is microscopically small for a standard trap); if k_B T ≫ ħω,
then β̃ ≡ ħω/2k_B T can be treated as a small parameter, and this allows us to expand the exponential
in the second expression of Eq. (2.11). However, since ℓ is unbounded, by performing that expansion
we are neglecting the sizable contribution to the sum of large-ℓ values; we will soon see that these
Figure 2.5. Chemical potential of a two-dimensional ideal trapped Bose gas. The values of μ̃ shown correspond to N = 100, 1000, and 10000 (solid, dashed, and dotted line respectively), just like in Fig. 2.4.
Figure 2.6. Position of the noncondensate maximum, located at the hump shown in Fig. 2.3. In this case we have N = 5 × 10⁴, 10⁵, and 1.5 × 10⁵ (dash-dotted, dashed, and solid line respectively). The inward shift caused by the temperature drop is more pronounced for higher N (compare to the behavior of the chemical potential in Fig. 2.5).
Figure 2.7. Condensate fraction of a two-dimensional ideal trapped Bose gas in the semiclassical approximation. The figure shows the exact result for N = 1000 (solid line) along with the infinite-N limit (2.17) (dotted line) and the first-order condensate fraction predicted by the self-consistent procedure sketched in (2.19) (dashed line). The last two are virtually indistinguishable for most temperatures and clearly overestimate the condensate. Also shown is the finite-N approximation (2.22) (dash-dotted line), which underestimates the condensate in 2D.
correspond to N₀, and thus it is necessary to insert the condensate number by hand [28]. The expansion then yields

$$N \approx N_0 + \frac{1}{(2\tilde\beta)^{\sigma}}\sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta(\tilde\mu-\sigma)}}{\ell^{\sigma}\,(1-\sigma\ell\tilde\beta)} \approx N_0 + \frac{1}{(2\tilde\beta)^{\sigma}}\Bigl[g_{\sigma}\bigl(e^{\tilde\beta(\tilde\mu-\sigma)}\bigr) + \tilde\beta\sigma\,g_{\sigma-1}\bigl(e^{\tilde\beta(\tilde\mu-\sigma)}\bigr)\Bigr] \qquad (2.14)$$

for the total number of atoms; $g_{\sigma}(x) = \sum_{k=1}^{\infty} x^{k}/k^{\sigma}$ is a Bose-Einstein integral, already introduced in Chapter 1, whose definition and properties are summarized in Appendix A. Equation (2.14) can also be obtained directly from the assumption that the harmonic-oscillator energy levels are so closely spaced that they may be taken to form a continuum; the subsequent replacement of sums by integrals helps elucidate the identification of the neglected terms with the condensate fraction [53,54], exactly as in the homogeneous gas [14,15].
As we remarked right below Eq. (2.10) on page 17, the transition can be characterized by the
vanishing of the quantity µ̃ − σ; when that happens, we have
$$N \approx N_0 + \frac{\zeta(\sigma)}{(2\tilde\beta)^{\sigma}} + \frac{\sigma}{2}\,\frac{\zeta(\sigma-1)}{(2\tilde\beta)^{\sigma-1}}, \qquad (2.15)$$
where ζ(x) is the Riemann zeta function, or, upon introducing the “critical” temperature defined by $t_c^{(\sigma)} \equiv 2(N/\zeta(\sigma))^{1/\sigma}$,

$$\frac{N_0}{N} = 1 - \left(\frac{t}{t_c^{(\sigma)}}\right)^{\sigma} - \frac{\sigma}{2}\,N^{-1/\sigma}\,\frac{\zeta(\sigma-1)}{\zeta(\sigma)^{(\sigma-1)/\sigma}}\left(\frac{t}{t_c^{(\sigma)}}\right)^{\sigma-1}. \qquad (2.16)$$
If we keep the expansion to first order, we obtain

$$\frac{N_0}{N} = 1 - \left(\frac{t}{t_c^{(\sigma)}}\right)^{\sigma} = 1 - \left(\frac{T}{T_c^{(\sigma)}}\right)^{\sigma}, \qquad (2.17)$$
which justifies in hindsight our definition of the critical temperature. For $t > t_c^{(\sigma)}$, the condensate number vanishes: the ground state has an occupation no different from that of any other state.
Below the critical temperature, the condensate fraction obeys a simple power law in this limit,
and increases with decreasing temperature until it includes every particle in the system. To find
the corresponding approximation for the density we expand the exponential in the denominator of
Eq. (2.10) and obtain
$$\tilde n(x) \approx \tilde n_0(x) + \left(\frac{t}{4\pi}\right)^{\sigma/2} g_{\sigma/2}\bigl(e^{\tilde\beta(\tilde\mu-\sigma-x^{2})}\bigr), \qquad (2.18)$$
where ñ₀ is given once again by (2.12) but with N₀ given by (2.17). By integrating both sides of (2.18) over x we obtain the first two terms of Eq. (2.14); to this order, then, the number of atoms is conserved and the approximation is consistent. As a result, Eqs. (2.18), (2.13), and (2.12) combined provide in principle a complete specification of the system like that discussed on page 18. The only unknown is the chemical potential, which we can calculate self-consistently: we initially take N₀ = N, µ̃ = σ, and loop
$$\tilde\mu = t\log\frac{N_0}{N_0+1} + \sigma, \qquad N_0 = N - \left(\frac{t}{2}\right)^{\sigma} g_{\sigma}\bigl(e^{\tilde\beta(\tilde\mu-\sigma)}\bigr) \qquad (2.19)$$

until µ̃ and N₀ stop changing. Some results of this procedure are shown in Figs. 2.7 and 2.8.
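The loop (2.19) is easy to implement numerically. The sketch below is in Python rather than the Matlab we actually used; the truncation of g_σ and all names are our own illustrative choices. It iterates the two updates in our scaled units, where β̃ = 1/t:

```python
import math

def g(sigma, z, terms=5000):
    """Bose-Einstein integral g_sigma(z) = sum_{k>=1} z^k / k^sigma, |z| < 1,
    truncated at a finite number of terms."""
    total, zk = 0.0, 1.0
    for k in range(1, terms + 1):
        zk *= z
        total += zk / k**sigma
    return total

def solve_mu_n0(N, t, sigma=2, tol=1e-9, max_iter=500):
    """Self-consistent loop of Eq. (2.19): start from a pure condensate
    (N0 = N, mu = sigma) and iterate until mu and N0 stop changing."""
    beta = 1.0 / t                      # scaled inverse temperature
    mu, n0 = float(sigma), float(N)
    for _ in range(max_iter):
        mu_new = t * math.log(n0 / (n0 + 1.0)) + sigma
        n0_new = N - (t / 2.0)**sigma * g(sigma, math.exp(beta * (mu_new - sigma)))
        done = abs(mu_new - mu) < tol and abs(n0_new - n0) < tol
        mu, n0 = mu_new, n0_new
        if done:
            break
    return mu, n0
```

Below t_c^{(2)} = 2(N/ζ(2))^{1/2} the iteration settles in a handful of steps and stays within a fraction of a percent of the first-order law (2.17), which is why the dashed and dotted curves of Fig. 2.7 are nearly indistinguishable.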
The second term in (2.16) is a correction [28,55,56] that takes into account the finite size of the system and vanishes as N → ∞, in what is called the thermodynamic limit (TDL).³

³It has been remarked [28,43,57] that, just as the true TDL of a homogeneous system is taken by simultaneously letting N → ∞ and V → ∞ so that the density remains finite, when taking
Figure 2.8. Density and number density of a condensed 2D gas in the semiclassical approximation.
Here N = 1000; the panels at top correspond to T = 0.4 Tc and those at the bottom to T = 0.8 Tc .
The exact results (solid lines) for the condensate and noncondensate densities are shown alongside
the first-order (dashed) and second-order (dash-dotted) semiclassical approximations. As Fig. 2.7
showed, the former overestimates the condensate and the latter underestimates it. Both do a good
job of describing the noncondensate density away from the origin but fail at small values of x.
zeta function ζ(x) diverges at x = 1, so the finite-size correction does not make sense⁴ as written when σ = 2; we can still evaluate it, however, because the Bose-Einstein integral g₁(z) has a closed analytic expression: g₁(z) = −log(1 − z). The condensate number is of order N, so we can use Eq. (2.13) to obtain
the true TDL of a harmonic system one should simultaneously increase the number of particles and decrease the spring constant of the trap in such a way that Nω^σ remains finite. This is automatically taken into account by our system of units, given the factor ½ħω used to scale the temperature.
⁴In one dimension, we cannot calculate a finite-size correction at all, but we can still evaluate the first term and find a “critical” temperature, t_c = 2N/log N, below which there is a substantial accumulation of atoms in the ground state [28,51,58]. This, by the way, is all we will say about the one-dimensional trapped gas, a very interesting system in its own right. We refer the interested reader to our own publication [59] on the subject.
$$e^{\tilde\beta(\tilde\mu-\sigma)} = \frac{N_0}{N_0+1} = \frac{1}{1+1/N_0} \approx 1 - \frac{1}{N_0} \approx 1 - \frac{1}{N} \qquad (2.20)$$
as β̃(µ̃ − σ) → 0, and hence
lim
β̃(µ̃−σ)→0
g1 (eβ̃(µ̃−σ) ) =
lim
β̃(µ̃−σ)→0
− log(1 − eβ̃(µ̃−σ) ) = − log(1/N ) = log N,
(2.21)
giving us [28]

$$\frac{N_0}{N} = 1 - \left(\frac{t}{t_c^{(2)}}\right)^{2} - \frac{\log N}{\sqrt{N\zeta(2)}}\left(\frac{t}{t_c^{(2)}}\right). \qquad (2.22)$$

The corresponding second-order density in two dimensions is

$$\tilde n(x) = \tilde n_0(x) + \frac{t}{4\pi}\,g_1\bigl(e^{\tilde\beta(\tilde\mu-2-x^{2})}\bigr) + \frac{1}{2\pi}\,\frac{1}{e^{\tilde\beta(\tilde\mu-2-x^{2})}-1}. \qquad (2.23)$$
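A quick numerical comparison (a hypothetical helper of our own, not thesis code) shows how large the finite-size term of (2.22) is relative to the thermodynamic-limit law (2.17):

```python
import math

def cond_frac_2d(N, x):
    """Condensate fraction at reduced temperature x = t / t_c^(2):
    returns the first-order law (2.17) and the finite-size-corrected Eq. (2.22)."""
    zeta2 = math.pi**2 / 6.0
    first = 1.0 - x**2                                       # Eq. (2.17)
    second = first - math.log(N) / math.sqrt(N * zeta2) * x  # Eq. (2.22)
    return first, second
```

For N = 1000 at half the critical temperature the correction lowers the condensate fraction from 0.75 to roughly 0.66, which is why the dash-dotted curve of Fig. 2.7 lies visibly below the others.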
Figures 2.7 and 2.8 also show the second-order results of the self-consistent procedure described
above. The agreement is in general good, but not perfect. The delocalization effect, for one, does
not appear, and the noncondensate density is in general misrepresented at the center of the trap.
We can also see that, in two dimensions, the finite-size correction turns out to be too large (in three
dimensions, on the other hand, its agreement with the exact results is remarkably good [28,54–56];
see Fig. 3.3). Another shortcoming of the method is that it breaks down at high temperatures; the
semiclassical approximations to be discussed later also have this problem.
2.4 The off-diagonal elements of the density matrix
Up to now we have been considering only the diagonal of (2.9); needless to say, the off-diagonal elements of the density matrix can give us more information about the system. In Appendix A we derive the full one-body reduced density matrix for the 2D isotropic harmonically trapped ideal gas [60,61],
$$\tilde n(\mathbf{x},\mathbf{x}') = \langle \mathbf{x}|\rho_1|\mathbf{x}'\rangle = \frac{1}{\pi^{\sigma/2}}\sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta(\tilde\mu-\sigma)}}{(1-e^{-4\ell\tilde\beta})^{\sigma/2}}\; e^{-\frac{1}{2}\operatorname{csch}2\ell\tilde\beta\,\bigl((x^{2}+x'^{2})\cosh 2\ell\tilde\beta\,-\,2\mathbf{x}\cdot\mathbf{x}'\bigr)}, \qquad (2.24)$$
calculated by using the definition [53]

$$\tilde n(\mathbf{x},\mathbf{x}') = \Bigl\langle \mathbf{x}\Bigl|\frac{1}{e^{\tilde\beta(\tilde H_1-\tilde\mu)}-1}\Bigr|\mathbf{x}'\Bigr\rangle = \sum_{k}\frac{1}{e^{\tilde\beta(\epsilon_k-\tilde\mu)}-1}\,\langle \mathbf{x}\,|\,k\rangle\langle k\,|\,\mathbf{x}'\rangle \equiv \sum_{k} f_k\,\phi_k^{*}(\mathbf{x})\,\phi_k(\mathbf{x}'), \qquad (2.25)$$

where k denotes a collective summation index that varies with the dimensionality of the system, f_k is the kth Bose-Einstein factor, and φ_k(x) is the kth oscillator eigenfunction. The chemical potential is
once again found by using Tr ρ₁ = N. If we multiply both sides of Eq. (2.25) by φ_j(x) and integrate with respect to x, we can immediately exploit the orthogonality of the eigenfunctions to obtain

$$\int d^{\sigma}x'\,\tilde n(\mathbf{x},\mathbf{x}')\,\phi_j(\mathbf{x}') = f_j\,\phi_j(\mathbf{x}), \qquad (2.26)$$
where, taking into account the symmetry of the matrix, we have swapped x and x′. A glance at
Eqs. (2.4) and (2.10) should convince us that the calculation for the distinguishable gas is analogous;
we find that the eigenvectors of the density matrix are the usual oscillator wavefunctions, regardless
of the statistics obeyed by the atoms, and the corresponding eigenvalues are the populations of these
eigenstates. While the diagonal one-body density provided us with the population of the ground
state on one hand and of the “rest of the world” on the other, by considering the off-diagonal
elements we can actually compare these occupation numbers state by state.
From our preceding considerations we expect the Bose-Einstein density matrix to have an uneven
distribution of eigenvalues. The largest one corresponds to the condensate population and should
be of order N , while the rest should be orders of magnitude smaller. This, in fact, is the standard
definition of BEC, first postulated in a classic paper by O. Penrose and L. Onsager [49].
As preparation for the interacting problem that we will tackle in later chapters, we will sketch the
numerical diagonalization procedure that we have used for the ideal case. First of all, we note that
Eq. (2.24) depends almost exclusively on the radial coördinates; the angular dependence is contained only in the dot product x · x′ = x′x cos(ϕ′ − ϕ). The matrix is in general a 2σ-dimensional array,
and in order to turn it into a matrix that we can diagonalize we must average over or integrate out
all but two of these dimensions; in particular, the process will be feasible only for isotropic traps.
Finally, the integrals have to be approximated by discrete sums.
We will restrict ourselves to a two-dimensional system. Since we are assuming an isotropic trap, we can take our x-axis in any direction, and we will choose it so that ϕ = 0; moreover, we will average out the angle ϕ′ by defining

$$\bar\rho(x,x') \equiv \frac{1}{2\pi}\int_0^{2\pi} d\varphi'\,\tilde n(\mathbf{x},\mathbf{x}'), \qquad (2.27)$$
which, using the result (A.15), turns the ℓth term of (2.24) into

$$\bar\rho_\ell(x,x') \propto e^{-\frac{1}{2}(x^{2}+x'^{2})\coth 2\ell\tilde\beta}\; I_0\bigl(x'x\operatorname{csch}2\ell\tilde\beta\bigr), \qquad (2.28)$$
Figure 2.9. Eigenvalues of the two-dimensional isotropic trapped-ideal-gas density matrix. The white bars correspond to a 1000-atom Bose-Einstein gas at T = 0.8 T_c, while the black bars represent the populations of an equivalent distinguishable gas. Note that the scale on the y-axis is logarithmic. The open circles represent the populations predicted by Eqs. (2.25) and (A.10), f_k = (exp(β̃[ε_k − µ̃]) − 1)⁻¹ with ε_k = 2(2k + 1); the straight line depicts the same function for continuous k. The corresponding eigenfunctions are shown in Fig. 2.10.
where I₀(x) is a modified Bessel function. (An equivalent treatment in three dimensions yields a hyperbolic sine.) For large values of the argument (x ≥ 12, to be precise), the Bessel function has to be replaced with its asymptotic expansion [62]

$$I_0(x) \sim \frac{e^{x}}{\sqrt{2\pi x}}\left(1 + \frac{1}{8x} + \frac{9}{128x^{2}} + \cdots\right). \qquad (2.29)$$
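The three terms shown in (2.29) already suffice at the crossover point x = 12, as a comparison against the everywhere-convergent power series I₀(x) = Σ_k (x²/4)^k/(k!)² confirms. The following self-contained check (our own helper names, standard library only) implements both:

```python
import math

def i0_series(x, terms=80):
    """I0(x) from its power series, accurate for moderate arguments."""
    total, term = 0.0, 1.0
    for k in range(1, terms + 1):
        total += term
        term *= (x * x / 4.0) / (k * k)   # ratio of consecutive series terms
    return total

def i0_asymptotic(x):
    """Three-term asymptotic expansion of Eq. (2.29), for large x."""
    return math.exp(x) / math.sqrt(2.0 * math.pi * x) * (
        1.0 + 1.0 / (8.0 * x) + 9.0 / (128.0 * x * x))
```

At x = 12 the two expressions agree to much better than a tenth of a percent, so switching to the expansion there costs essentially nothing.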
We used two different methods to integrate the resulting density matrix. First we evaluated it on an equally spaced grid of 300 × 300 points and multiplied it by a replicated array of the coefficients of the alternative extended Simpson’s rule [63],

$$\int_{x_1}^{x_N} f(x)\,dx \approx h\Bigl[\tfrac{17}{48}f_1 + \tfrac{59}{48}f_2 + \tfrac{43}{48}f_3 + \tfrac{49}{48}f_4 + f_5 + \cdots + f_{N-4} + \tfrac{49}{48}f_{N-3} + \tfrac{43}{48}f_{N-2} + \tfrac{59}{48}f_{N-1} + \tfrac{17}{48}f_N\Bigr], \qquad (2.30)$$
where f_j = f(x_j) and h = x₂ − x₁. We got identical results much more quickly (and more accurately, since the matrices involved are much smaller) by evaluating the matrix on a 30 × 30 grid of points located at the zeros of the 30th Laguerre polynomial [64] and multiplying it by the Gauss-Laguerre weights [62,65] introduced in Appendix C. The condensate fraction that we obtain for the two-dimensional gas is indistinguishable from that displayed in the previous section; one particular case
is shown in Fig. 2.9. As we can see in that figure, this method also reproduces the Bose-Einstein
distribution function to very high accuracy, at least for the lowest-lying energy eigenstates. The
corresponding eigenvectors are indeed the same for the distinguishable and Bose-Einstein matrices,
and, as we can see in Fig. 2.10, they are identical to the standard harmonic-oscillator eigenfunctions.
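The first of the two quadratures above is particularly easy to reproduce. Here is the alternative extended Simpson's rule (2.30) as a standalone function (a sketch of our own; the helper name is hypothetical):

```python
import math

def alt_ext_simpson(f_vals, h):
    """Alternative extended Simpson's rule, Eq. (2.30): the first and last
    four points get weights 17/48, 59/48, 43/48, 49/48; interior points get 1.
    Assumes len(f_vals) >= 9."""
    w = (17.0 / 48, 59.0 / 48, 43.0 / 48, 49.0 / 48)
    n = len(f_vals)
    total = sum(f_vals[4:n - 4])                       # interior, weight 1
    total += sum(wi * (f_vals[i] + f_vals[n - 1 - i])  # mirrored end weights
                 for i, wi in enumerate(w))
    return h * total
```

On 101 equally spaced points this integrates sin x over [0, π] to better than one part in 10⁶, more than adequate for the 300 × 300 grids mentioned above.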
It is also possible to evaluate the density matrix in the semiclassical approximation that we introduced in the last section. We start by noting that the argument of the exponential in Eq. (2.24) can be rewritten as

$$-\tfrac{1}{2}\operatorname{csch}2\ell\tilde\beta\,\bigl((x^{2}+x'^{2})\cosh 2\ell\tilde\beta - 2\mathbf{x}\cdot\mathbf{x}'\bigr) = -\tfrac{1}{4}\operatorname{sech}\ell\tilde\beta\,\bigl(2(x^{2}+x'^{2})\sinh\ell\tilde\beta + (\mathbf{x}-\mathbf{x}')^{2}\operatorname{csch}\ell\tilde\beta\bigr) = -\tfrac{1}{4}\bigl((\mathbf{x}+\mathbf{x}')^{2}\tanh\ell\tilde\beta + (\mathbf{x}-\mathbf{x}')^{2}\coth\ell\tilde\beta\bigr). \qquad (2.31)$$
The second of these expressions is frequently encountered in the literature [60,61]; any of them can be expanded for small β̃, along with the exponential in the denominator of Eq. (2.24), to yield the semiclassical expression

$$\tilde n(\mathbf{x},\mathbf{x}') \approx \sqrt{\tilde n_0(\mathbf{x})\,\tilde n_0(\mathbf{x}')} + \left(\frac{t}{4\pi}\right)^{\sigma/2}\sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta(\tilde\mu-\sigma)}}{\ell^{\sigma/2}}\, e^{-\frac{1}{2}\ell\tilde\beta\,(x^{2}+x'^{2})}\, e^{-(\mathbf{x}-\mathbf{x}')^{2}/4\ell\tilde\beta}; \qquad (2.32)$$
once again it is necessary to insert the condensate density by hand. This result will be rederived in
Section 3.6 by a different method.
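To close this section, the pieces of the diagonalization described above — the angle-averaged kernel (2.28), a Gauss-Laguerre grid, and the number equation (2.11) — can be assembled into a short numerical sketch. This is a Python/numpy reconstruction of what our programs (written in Matlab) did; the truncation depths, parameter values, and helper names are our own choices:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

NTERMS = 1500                         # truncation of the l-sums

def i0e(z):
    """Exponentially scaled Bessel function e^{-z} I0(z): power series for
    small z, the asymptotic expansion (2.29) for large z."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    small = z < 15.0
    zs = z[small]
    term, total = np.ones_like(zs), np.zeros_like(zs)
    for k in range(1, 40):
        total += term
        term *= (zs * zs / 4.0) / (k * k)
    out[small] = np.exp(-zs) * total
    zl = z[~small]
    out[~small] = (1.0 + 1.0/(8.0*zl) + 9.0/(128.0*zl*zl)) / np.sqrt(2.0*np.pi*zl)
    return out

def number_eq(mu, t):
    """Total atom number of the 2D trapped ideal gas (Eq. (2.11), sigma = 2)."""
    l = np.arange(1, NTERMS + 1)
    b = 1.0 / t
    return np.sum(np.exp(l*b*(mu - 2.0)) / (1.0 - np.exp(-2.0*l*b))**2)

def find_mu(N, t):
    """Bisect the chemical potential below the ground-state energy 2."""
    lo, hi = -50.0, 2.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if number_eq(mid, t) < N else (lo, mid)
    return 0.5 * (lo + hi)

def populations(N, t, npts=30):
    """Nystrom discretization of the angle-averaged kernel (2.27)-(2.28) on a
    Gauss-Laguerre grid; the eigenvalues approximate the m = 0 occupations."""
    mu, b = find_mu(N, t), 1.0 / t
    u, w = laggauss(npts)
    wt = w * np.exp(u)                # weights for a plain integral on [0, inf)
    x, xp = np.meshgrid(u, u, indexing="ij")
    rho = np.zeros((npts, npts))
    for l in range(1, NTERMS + 1):
        ey = np.exp(-2.0 * l * b)                 # e^{-2 l beta}
        coth = (1.0 + ey*ey) / (1.0 - ey*ey)      # coth(2 l beta), overflow-free
        csch = 2.0 * ey / (1.0 - ey*ey)           # csch(2 l beta)
        z = x * xp * csch
        pref = np.exp(l*b*(mu - 2.0)) / (np.pi * (1.0 - ey*ey))
        rho += pref * np.exp(-0.5*(x**2 + xp**2)*coth + z) * i0e(z)
    s = np.sqrt(2.0 * np.pi * u * wt)
    M = s[:, None] * rho * s[None, :]             # symmetrized kernel
    return mu, np.sort(np.linalg.eigvalsh(M))[::-1]
```

The largest eigenvalue reproduces the ground-state population f₀ = (e^{β̃(2−µ̃)} − 1)⁻¹, and the next few follow the Bose-Einstein factors for ε_n = 2(2n + 1), as in Fig. 2.9.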
2.5 Effects of anisotropy
Our next task consists of allowing the confining potential to be anisotropic. Isotropic potentials are
useful theoretical constructs, but they are far from realistic: as we saw in Chapter 1, the first traps to
create condensates at JILA and MIT [2,6] were already anisotropic, and their anisotropy was in fact of
fundamental importance in identifying the condensates as such; since then, experimental setups have
become, if anything, even less isotropic. We are moreover interested in studying two-dimensional
systems; though some successful 2D experiments have been carried out in atomic hydrogen adsorbed on liquid ⁴He [29], the only way of creating pure low-dimensional condensates consists of taking a
trapped gas and increasing or decreasing one or more of the confining frequencies, effectively creating
Figure 2.10. The first seven radially symmetric eigenfunctions (including the ground state) of the two-dimensional isotropic harmonic oscillator. The solid lines depict the exact eigenfunctions from Eq. (A.10) of Appendix A, φ_n(x) ≡ ⟨x | n⟩ = e^{−x²/2} L_n(x²)/√π, while the open circles are the result of diagonalizing the angle-averaged density matrix of an ideal Bose gas of 1000 atoms at T = 0.8 T_c. The matrix was calculated using a 30 × 30 Gauss-Laguerre grid and is known only at those points; its values everywhere else can be found by interpolation. All eigenfunctions were normalized to be unity at x = 0. Shown with crosses are the first three eigenfunctions of an ideal gas of 1000 distinguishable atoms at the same temperature. It is somewhat amusing that the boson matrix gives more accurate results than its simpler distinguishable counterpart.
“pancake” and “cigar” geometries. These have already been produced in the laboratory [30] and
have been the subject of various studies [55,56,66,67].
The fact that the trap becomes anisotropic forces us to modify the expressions derived above. Throughout this work we will concentrate on one possible squeezed geometry, that of a three-dimensional trap whose z-frequency can be increased to create quasi-2D systems. We will use the frequency ω ≡ ω_x = ω_y as a reference to scale all energies and (through an unchanged x₀) all lengths:
in particular, we will resolve the position vector into transverse and z-components: x = ξ + η. The
trap anisotropy will appear through the parameter λ ≡ ωz /ω ≥ 1, to which we will occasionally refer
as the “compression ratio.” With these identifications, the Schrödinger equation for the harmonic
oscillator becomes
$$\left[-\frac{1}{\xi}\frac{\partial}{\partial\xi}\left(\xi\frac{\partial}{\partial\xi}\right) - \frac{1}{\xi^{2}}\frac{\partial^{2}}{\partial\varphi^{2}} + \xi^{2} - \frac{\partial^{2}}{\partial\eta^{2}} + \lambda^{2}\eta^{2}\right]\phi_{nmp} \equiv (\tilde H_{\xi} + \tilde H_{\eta})\,\phi_{nmp} = \epsilon_{nmp}\,\phi_{nmp}; \qquad (2.33)$$

its solutions, displayed in Eq. (A.9) on page 116, are normalized products of isotropic 2D oscillator eigenfunctions in ξ and 1D eigenfunctions with η → η√λ; the corresponding energy eigenvalues are given by

$$\epsilon_{nmp} = 2(2n + |m| + \lambda p) + (2 + \lambda). \qquad (2.34)$$
An important consequence of (2.34) is that the ground-state energy now depends on λ and grows as
the system is squeezed; it is far from negligible when λ becomes large. The chemical potential also
increases in value, and tends to 2 + λ at zero temperature; this makes sense if we once again identify
the chemical potential as the energy gained by the system when one adds a particle.
The one-body partition function of a distinguishable system can be calculated by splitting the 3D Hamiltonian in Cartesian coördinates and evaluating three separate one-dimensional sums, resulting in

$$Z_1 = \frac{1}{2^{3}}\operatorname{csch}^{2}\tilde\beta\,\operatorname{csch}\lambda\tilde\beta; \qquad (2.35)$$
in our scaled units, then, the inverse temperature for the compressed third dimension acquires a
factor of λ. Armed with these results, it is not difficult to find an expression for the one-body
diagonal density of the Bose gas:
$$\tilde n(\mathbf{x}) = \frac{\sqrt{\lambda}}{\pi^{3/2}}\sum_{\ell=1}^{\infty}\delta_\ell\, e^{-(\xi^{2}\tanh\ell\tilde\beta\, +\, \lambda\eta^{2}\tanh\lambda\ell\tilde\beta)}, \qquad (2.36)$$

where we have defined

$$\delta_\ell \equiv \frac{e^{\ell\tilde\beta(\tilde\mu-(2+\lambda))}}{(1-e^{-4\ell\tilde\beta})(1-e^{-4\lambda\ell\tilde\beta})^{1/2}}; \qquad (2.37)$$
Figure 2.11. Condensate fraction of a three-dimensional ideal gas in a trap of increasing anisotropy. The system contains 100 bosons; as the anisotropy parameter λ increases, the initially isotropic trap becomes progressively flattened. The dashed lines show the thermodynamic-limit expressions (an inverted cubic at the left end and a parabola at the right) predicted by Eq. (2.17) for two and three dimensions respectively. For N = 100 the equivalent 2D temperature is T_c^{(2)} ≈ 1.786 T_c^{(3)}.
the off-diagonal elements are found by a straightforward extension of this procedure. The chemical potential is determined by an expression analogous to (2.11):

$$N = \frac{1}{2^{3}}\sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta\tilde\mu}}{\sinh^{2}\ell\tilde\beta\,\sinh\lambda\ell\tilde\beta} = \sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta(\tilde\mu-(2+\lambda))}}{(1-e^{-2\ell\tilde\beta})^{2}(1-e^{-2\lambda\ell\tilde\beta})}. \qquad (2.38)$$
Equations (2.36) and (2.38) are equivalent to (2.10) and (2.11) and specify the anisotropic system
completely. Figure 2.11 shows how the condensate fraction evolves as the anisotropy increases.
Beyond a certain value of λ, the condensate fraction acquires the parabolic shape, prescribed by
Eq. (2.17), that we had already seen in Fig. 2.4 for the strictly two-dimensional gas: a crossover to
lower dimensionality has taken place. It is important to note that the tightly squeezed 3D system
behaves like a 2D gas but at a lower equivalent temperature; in fact, the transition temperatures are
related through
$$T_c^{(2)} \approx 0.829\,N^{1/6}\,T_c^{(3)}. \qquad (2.39)$$
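The numerical coefficient follows directly from the definition $t_c^{(\sigma)} \equiv 2(N/\zeta(\sigma))^{1/\sigma}$ of the critical temperatures:

$$\frac{T_c^{(2)}}{T_c^{(3)}} = \frac{2(N/\zeta(2))^{1/2}}{2(N/\zeta(3))^{1/3}} = \frac{\zeta(3)^{1/3}}{\zeta(2)^{1/2}}\,N^{1/6} \approx \frac{1.0633}{1.2825}\,N^{1/6} \approx 0.829\,N^{1/6}.$$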
This result is not too surprising, and is a consequence of Einstein’s condition: the condensation, we recall, is prompted by a proper combination of temperature and density, and by compressing the gas we effectively lower the temperature.⁵ In particular, it should be possible to take a 3D gas at a temperature above T_c^{(3)} and coax it into condensing by compressing it.
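This compression-induced condensation is easy to exhibit numerically from the number equation (2.38). The sketch below (Python/numpy, our own reconstruction; the truncation depth and helper names are hypothetical) bisects for the chemical potential and reads off the ground-state occupation:

```python
import numpy as np

NTERMS = 2000  # truncation of the l-sum in Eq. (2.38)

def n_total(mu, t, lam):
    """Right-hand side of the number equation (2.38)."""
    l = np.arange(1, NTERMS + 1)
    b = 1.0 / t
    return np.sum(np.exp(l*b*(mu - (2.0 + lam))) /
                  ((1.0 - np.exp(-2.0*l*b))**2 * (1.0 - np.exp(-2.0*lam*l*b))))

def cond_frac(N, t, lam):
    """Bisect mu below the ground-state energy 2 + lam, then return
    N0/N with N0 = [exp(beta(2 + lam - mu)) - 1]^{-1}."""
    b = 1.0 / t
    lo, hi = -200.0, 2.0 + lam - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if n_total(mid, t, lam) < N else (lo, mid)
    mu = 0.5 * (lo + hi)
    return 1.0 / (np.exp(b * (2.0 + lam - mu)) - 1.0) / N
```

At the temperature of Fig. 2.12, T = 1.4 T_c^{(3)} (t ≈ 12.2 for N = 100), the isotropic gas shows essentially no condensate, while at λ = 100 a sizable fraction of the atoms (roughly a third, in this sketch) has condensed.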
The crossover in dimensionality should also be apparent in the density profiles. Now, our traps have ceased to be isotropic, and as a consequence the radial densities that we have been studying until now no longer make sense. A more meaningful quantity is the actual appearance of the gas when looked at from one direction; in other words, we will consider integrals of the density over one coördinate, which are equivalent to the “optical” or “column” densities measured by experimentalists.
When dealing with a system of increasing anisotropy, one can either measure the optical density
along the squeezed direction or along one of the directions that are left untouched. In the first case,
the measurable quantity is the surface density, given by
$$\tilde\sigma(\boldsymbol{\xi}) \equiv \int_{-\infty}^{\infty} \tilde n(\mathbf{x})\,d\eta = \frac{1}{2\pi}\sum_{\ell=1}^{\infty}\frac{e^{\ell\tilde\beta(\tilde\mu-2)}}{(1-e^{-4\ell\tilde\beta})\sinh\lambda\ell\tilde\beta}\; e^{-\xi^{2}\tanh\ell\tilde\beta}; \qquad (2.40)$$
as the anisotropy of the system grows, the surface density should be described progressively better by a two-dimensional isotropic density profile. In particular, the three-dimensional raw density

$$\tilde n_0(\mathbf{x}) = \frac{N_0\sqrt{\lambda}}{\pi^{3/2}}\, e^{-(\xi^{2}+\lambda\eta^{2})} \qquad (2.41)$$

is clearly anisotropic, but the corresponding surface density adopts an expression identical to the condensate density in two dimensions, regardless of the degree of anisotropy:

$$\tilde\sigma_0(\boldsymbol{\xi}) = \frac{N_0}{\pi}\, e^{-\xi^{2}}; \qquad (2.42)$$
its amplitude, given by the condensate fraction, will of course change.
Figure 2.12 displays the behavior with increasing gas anisotropy of this surface density and of the corresponding surface number density

$$\tilde N(\xi) \equiv 2\pi\xi\,\tilde\sigma(\xi) \qquad\text{such that}\qquad N = \int_0^{\infty} \tilde N(\xi)\,d\xi. \qquad (2.43)$$
Note that the isotropic system is uncondensed at this temperature and develops a condensate upon
compression; the thermal density, in turn, starts as a Gaussian and eventually takes the shape seen
before, including the delocalization hump.
⁵Using the semiclassical approximation we find a density ñ(0) ≈ (t/4π)^{3/2} g_{3/2}(e^{β̃(µ̃−(2+λ))}) at the center of the trap. As λ grows, we see that it becomes easier to reach a point where Einstein’s criterion λ_T³ ñ(0) = ζ(3/2) is fulfilled.
Figure 2.12. Surface density and surface number density of a three-dimensional ideal Bose gas in a trap of increasing anisotropy. In every case N = 100 and T = 1.4 T_c^{(3)}. As usual, the total densities are resolved into their condensed and uncondensed portions. As λ increases, the gas develops a condensate. After it appears, the condensate density keeps the same shape throughout the process; the thermal density develops a hump.
As we said above, we can also view the system from one of the uncompressed directions, and in this case we also have a meaningful measurable quantity in the aspect ratio of the system, which results from a straightforward calculation:⁶

$$p \equiv \frac{\sqrt{\langle\xi^{2}\rangle}}{\sqrt{2\langle\eta^{2}\rangle}} = \sqrt{\frac{\lambda\sum_{\ell=1}^{\infty}\delta_\ell\coth^{2}\ell\tilde\beta\,\coth^{1/2}\lambda\ell\tilde\beta}{\sum_{\ell=1}^{\infty}\delta_\ell\coth\ell\tilde\beta\,\coth^{3/2}\lambda\ell\tilde\beta}}, \qquad (2.44)$$
where δ_ℓ was defined in Eq. (2.37). In the high-temperature semiclassical approximation this becomes p = λ. At low temperatures, where the condensate dominates, the aspect ratio is smaller; in fact, it is evident from (2.41) that p → √λ. This aspect ratio will reappear in the next chapter, and will be of help in Chapter 5 as a check on our Monte Carlo simulations.
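Equation (2.44) and its two limits are simple to verify numerically. In the standalone check below (our own helper names; the chemical potential is supplied by hand rather than solved for, and the parameter values are merely illustrative), a µ̃ just below the ground-state energy at low temperature produces p ≈ √λ, while a hot, dilute gas produces p ≈ λ:

```python
import math

def aspect_ratio(mu, t, lam, nterms=4000):
    """Aspect ratio p of Eq. (2.44), with delta_l from Eq. (2.37)."""
    b = 1.0 / t
    num = den = 0.0
    for l in range(1, nterms + 1):
        delta = math.exp(l*b*(mu - (2.0 + lam))) / (
            (1.0 - math.exp(-4.0*l*b)) * math.sqrt(1.0 - math.exp(-4.0*lam*l*b)))
        cr = 1.0 / math.tanh(l * b)          # coth(l beta)
        cz = 1.0 / math.tanh(lam * l * b)    # coth(lam l beta)
        num += delta * cr**2 * math.sqrt(cz)
        den += delta * cr * cz**1.5
    return math.sqrt(lam * num / den)
```

With λ = 4 the two limits bracket the values quoted above: p ≈ 2 = √λ for the cold gas and p ≈ 4 = λ for the hot one.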
2.6 Summary
We begin this work by considering the ideal gas, a system worth studying for many reasons: it
incorporates the inhomogeneity and the finite size of the systems studied in experiments, it is known
to undergo a condensation in two dimensions, and it can be solved exactly for any finite number
of atoms. This last feature lets us study the transition itself in detail, something that we cannot
do for the interacting gas (except via Monte Carlo simulations), and also enables us to introduce
and sharpen the tools that we will be using throughout the rest of the thesis. We introduce a
semiclassical self-consistent approximation that, though not really necessary here, gives us a feel for
the temperature scales involved. We study (and rule out) the possibility of a smeared condensation
by examining the eigenvalues of the off-diagonal one-body density matrix. Finally, we squeeze a
three-dimensional system in order to see its crossover into the two-dimensional régime.
⁶The factor of 2 in the definition of p might look a bit confusing at first and deserves some justification. It has been introduced to counterbalance the fact that, while η can be negative, ξ is radial and therefore always positive: the 2 ensures that p = 1 in an isotropic trap.
CHAPTER 3
MEAN-FIELD THEORY OF INTERACTING SYSTEMS
In his well-known papers, Einstein has already discussed a peculiar
condensation phenomenon of the ‘Bose-Einstein’ gas; but in the course
of time the degeneracy of the Bose-Einstein gas has rather got the
reputation of having only a purely imaginary existence.
—F. London [19]
3.1 Introduction
In the preceding chapter we saw some of the effects that Bose-Einstein statistics and the presence
of a trapping potential have on a collection of atoms. The third essential ingredient for a realistic
description of a Bose-Einstein condensate is the introduction of interactions between the particles
that compose it and between those and the rest of the system. Needless to say, an interacting
system is a much more complicated entity than an ideal gas, and its description is consequently more
involved. Various schemes of differing degrees of sophistication have been introduced to describe the
behavior and characteristics of a Bose-condensed system, and among them mean-field theory has
earned a preëminent place because of its relative simplicity, its predictive power, and the crystal-clear
interpretability of its results. Even in its simplest versions, mean-field theory exhibits a remarkable
resilience: not only has its agreement with experiment been excellent so far [68], it also routinely
passes high-precision tests [69] and makes correct predictions even beyond its expected range of
validity [70,71]. A majority of the theoretical inquiry into BEC [54] is based on mean-field theory,
and this work will continue that tradition.
Our treatment of the mean-field theory of interacting systems will roughly parallel that of the
ideal gas in Chapter 2. After introducing the model interparticle interaction that we will use, we
begin by reviewing an “exact” theory, the Hartree-Fock-Bogoliubov (HFB) equations, based on a
model first considered by Bogoliubov [72] for the homogeneous imperfect Bose gas and later adapted
to describe the dilute trapped atomic gases in which BEC was created [73,74].
We digress for a while to treat systems at zero temperature by means of the famous Gross-Pitaevskiĭ (GP) equation [75,76] and study their approach to the thermodynamic limit in what
is known as the Thomas-Fermi limit [77]. We will introduce a variational Ansatz based on the
Thomas-Fermi wavefunction to study the zero-temperature gas in a trap of increasing anisotropy;
this will enable us to provide a simplified description of the compressed gas and will provide us with
a criterion to find the conditions under which a crossover in dimensionality occurs in the presence
of interactions.
We then introduce a semiclassical model in which the HFB equations become easier to interpret
and to solve. The Hartree-Fock-Bogoliubov theory prescribes separate (though coupled) equations
for the condensate and for the surrounding thermal cloud, so we will have to resort to a self-consistent
procedure. We will conclude by mentioning some of the methods that we have used to solve these
equations and showing some results.
3.2 The effective interaction
The success of mean-field theory in describing Bose-Einstein condensates is due in part to the
characteristics of the system under study. The trapped gases in which Bose-Einstein condensation
has been created remain in a gaseous state at such low temperatures because they are so dilute—
and, paradoxically, they are at such low temperatures—that three-body collisions and other more
complicated processes are extremely rare.
Moreover, the low-energy collisions that dominate the interactions can be described quite well
in terms of a single parameter, the s-wave scattering length a [78], also known as the “hard-core
radius.” In Chapter 5 we will indeed assume that the boson gas is composed of hard spheres or
rods of radius a; this approximation, however, is convenient for Monte Carlo simulations but is too
complicated for our present purposes. Instead, we will use a popular model in which the interatomic
potential is described by a contact interaction (also called, somewhat inaccurately, a pseudopotential)
of the form [79]

$$V(\mathbf{r}_1-\mathbf{r}_2) = g\,\delta^{(\sigma)}(\mathbf{r}_1-\mathbf{r}_2). \qquad (3.1)$$
In three dimensions the constant g is directly proportional to a:

$$g = \frac{4\pi\hbar^{2}}{m}\,a = (k_BT)\,\lambda_T^{2}\,(2a); \qquad (3.2)$$

in the dimensionless, dimension-explicit units that we are using, the equivalent parameter is the “interaction strength” or “coupling constant” γ, related to g via g ≡ ½ħωx₀^σ γ, and Eq. (3.2) adopts the particularly transparent form γ = 8πa/x₀.
interaction, should be the same even when the atoms are confined by an anisotropic trap.
In two dimensions, the interpretation of g is more subtle: the “constant” turns out to be (logarithmically) dependent on the relative momentum of the colliding atoms [35,80]. Mean-field expressions
for the 2D interaction strength that take this difficulty into account [67,81,82] have been derived
and used in the literature, and we will discuss them in Section 3.5. In general, though, we have
found that these more elaborate expressions have a rather negligible effect on the density profile
and condensate fraction of the system, and in what follows we will usually take γ to be constant.
More importantly, since we want to know the extent to which a squeezed 3D trapped system can be
described by a 2D gas as the confining potential becomes increasingly anisotropic, we need to be able
to relate the two-dimensional constant to its three-dimensional, experimentally known counterpart.
We will turn to this question in Section 3.5.
3.3 The Hartree-Fock-Bogoliubov equations
The introduction of the potential (3.1) greatly simplifies the appearance of the many-body grand-canonical Hamiltonian, which in terms of second-quantized field operators becomes

$$\tilde H = \int d^{\sigma}x\, \Psi^{\dagger}(\mathbf{x})\,\tilde\Lambda\,\Psi(\mathbf{x}) + \frac{\gamma}{2}\int d^{\sigma}x\, \Psi^{\dagger}(\mathbf{x})\Psi^{\dagger}(\mathbf{x})\Psi(\mathbf{x})\Psi(\mathbf{x}), \qquad (3.3)$$

where Λ̃ ≡ −∇̃²_σ + x² − µ̃ is the one-body Hamiltonian.
We will be working below the transition temperature, where the condensate is the most heavily populated state, and so we will start by defining the condensate wavefunction Φ̃ as the ensemble average of the field operator Ψ. The rest of the system will then be described by the fluctuation of the operator about this average:

$$\Psi = \langle\Psi\rangle + \tilde\psi = \tilde\Phi + \tilde\psi. \qquad (3.4)$$

It obviously follows that ψ̃, the fluctuation or noncondensate operator, obeys ⟨ψ̃⟩ = 0. The condensate wavefunction Φ̃, like that of the ideal gas, is real and positive everywhere and prescribes a condensate density ñ₀(x) = Φ̃²; moreover, it is a function, not an operator, which reflects the imposition of long-range order in the system and restricts the grand canonical ensemble in order to prevent unphysically large fluctuations in the condensate number [59,83]. When we insert the decomposition (3.4) into Eq. (3.3) the integrands become

$$\Psi^{\dagger}\tilde\Lambda\Psi = \tilde\Phi^{*}\tilde\Lambda\tilde\Phi + \tilde\Phi^{*}\tilde\Lambda\tilde\psi + \tilde\psi^{\dagger}\tilde\Lambda\tilde\Phi + \tilde\psi^{\dagger}\tilde\Lambda\tilde\psi, \qquad (3.5)$$
$$\Psi^{\dagger}\Psi^{\dagger}\Psi\Psi = |\tilde\Phi|^{4} + 2|\tilde\Phi|^{2}(\tilde\Phi^{*}\tilde\psi + \tilde\Phi\tilde\psi^{\dagger}) + 4|\tilde\Phi|^{2}\tilde\psi^{\dagger}\tilde\psi + (\tilde\Phi^{*2}\tilde\psi\tilde\psi + \tilde\Phi^{2}\tilde\psi^{\dagger}\tilde\psi^{\dagger}) + 2\tilde\psi^{\dagger}(\tilde\Phi\tilde\psi^{\dagger} + \tilde\Phi^{*}\tilde\psi)\tilde\psi + \tilde\psi^{\dagger}\tilde\psi^{\dagger}\tilde\psi\tilde\psi. \qquad (3.6)$$
At very low temperatures, when the condensate dominates the system, it is sufficient to neglect the
terms in (3.6) that contain more than two noncondensate operators [73,78,81,84]; this fruitful approach allows one to study the time-dependent behavior of a condensed system [54], yielding, among
other results, analytic solutions for its excitation spectrum [85,86] and a description of quantized
vortices [73,78,87]. However, we are interested in studying trapped gases at finite temperatures, and
for that reason we will use a scheme that includes these neglected terms in approximate form but
produces equations that mimic the second-order ones and have a similar method of solution.
This “self-consistent mean-field approximation” [74,88] takes the third- and fourth-order terms of (3.6) and expresses them as combinations of first- and second-order terms, thus effecting a “quadratization” of the Hamiltonian [89]; the coefficients are ensemble averages of binary contractions of the field operators weighted so that each approximate term will have the same mean value as the corresponding exact term. Moreover, we invoke the Popov approximation, which consists of neglecting the “anomalous” averages¹ ⟨ψ̃†ψ̃†⟩ and ⟨ψ̃ψ̃⟩ and leaving only ⟨ψ̃†ψ̃⟩ ≡ ñ′, the noncondensate density. The high-order combinations² then become

$$\tilde\psi^{\dagger}\tilde\psi\tilde\psi \approx 2\langle\tilde\psi^{\dagger}\tilde\psi\rangle\tilde\psi \equiv 2\tilde n'\tilde\psi, \qquad \tilde\psi^{\dagger}\tilde\psi^{\dagger}\tilde\psi \approx 2\langle\tilde\psi^{\dagger}\tilde\psi\rangle\tilde\psi^{\dagger} \equiv 2\tilde n'\tilde\psi^{\dagger}, \qquad \tilde\psi^{\dagger}\tilde\psi^{\dagger}\tilde\psi\tilde\psi \approx 4\langle\tilde\psi^{\dagger}\tilde\psi\rangle\tilde\psi^{\dagger}\tilde\psi - 2\langle\tilde\psi^{\dagger}\tilde\psi\rangle^{2} \equiv 4\tilde n'\tilde\psi^{\dagger}\tilde\psi - 2\tilde n'^{2}, \qquad (3.7)$$
and we obtain a Hamiltonian of the form K0 + K1 + K1† + K2 , where the subscripts stand for the
number of field operators Ψ, Ψ† contained in each term. The first one,
K₀ = ∫ d^σx Φ̃(Λ̃ + ½γñ₀)Φ̃ − γ ∫ d^σx ñ′²,   (3.8)
¹ These averages describe, respectively, a process through which two condensate atoms scatter each other and end up in the thermal cloud, and a process through which two thermal atoms collide and “fall” into the condensate [84]. These processes, while not unphysical, are expected to be quite uncommon [88]; mean-field calculations that include them [70] confirm their relative rarity even at near-transition temperatures.
² The various factors of 2 that appear reflect the fact that “direct” and “exchange” terms are identical for the zero-range interaction that we are considering [74].
gives the energy of the condensate; the term at the end ensures that the interaction energy is not
overcounted (see Section 4.5 for an alternative explanation). The terms linear in ψ̃ † and ψ̃,
K₁ = ∫ d^σx ψ̃†(Λ̃ + γ(ñ₀ + 2ñ′))Φ̃   (3.9)

and its Hermitian conjugate, are required to vanish exactly if we are to have ⟨ψ̃⟩ = 0; thus Φ̃ must
obey the generalized Gross-Pitaevskiĭ equation [75,76]

∇̃²_σ Φ̃ + (µ̃ − x²)Φ̃ − γ(ñ₀ + 2ñ′)Φ̃ = 0,   (3.10)
whose interpretation and solution we will discuss in Section 3.4. The quadratic term
K₂ = ∫ d^σx [ ψ̃†(Λ̃ + 2γñ)ψ̃ + ½γñ₀(ψ̃†ψ̃† + ψ̃ψ̃) ] ≡ ∫ d^σx [ ψ̃†Lψ̃ + ½γñ₀(ψ̃†ψ̃† + ψ̃ψ̃) ],   (3.11)

as we will presently prove, can be diagonalized by expanding, à la Bogoliubov, in a set of Bose
creation and annihilation operators

ψ̃ = Σ_j (u_j α_j − v_j* α_j†)   with   [α_j, α_{j′}†] = δ_{j′j}.   (3.12)
We begin by inserting (3.12) into (3.11). The operator L introduced in (3.11) is clearly Hermitian
(and, moreover, real), and consequently obeys [62]
∫ d^σx u_j* L v_k = ∫ d^σx v_k L u_j*   (3.13)

for any pair of functions u_j, v_k. This enables us to find, after some rearrangement, that

K₂ = ½ Σ_{jk} ∫ d^σx [ (v_j(Lv_k* − γñ₀u_k*) + v_k*(Lv_j − γñ₀u_j)) α_j α_k† + (u_j*(Lu_k − γñ₀v_k) + u_k(Lu_j* − γñ₀v_j*)) α_j† α_k + (v_j(Lu_k − γñ₀v_k) + u_k(Lv_j − γñ₀u_j)) α_j α_k + (u_j*(Lv_k* − γñ₀u_k*) + v_k*(Lu_j* − γñ₀v_j*)) α_j† α_k† ];   (3.14)

if we now assume that the expansion coefficients u_j, v_j obey the coupled eigenvalue equations [73,74,88]

( Λ̃ + 2γñ    −γñ₀   ) ( u_j )        ( u_j  )
( −γñ₀    Λ̃ + 2γñ   ) ( v_j )  = ε_j ( −v_j )   (3.15)
we obtain
K₂ = ½ Σ_{jk} ∫ d^σx [ −(ε_j + ε_k*) u_j v_k* α_j α_k† + (ε_j* + ε_k) u_j* u_k α_j† α_k + (ε_k − ε_j) u_k v_j α_j α_k + (ε_j* − ε_k*) u_j* v_k* α_j† α_k† ].   (3.16)
We next introduce [78] the matrices
M = (Λ̃ + 2γñ) − γñ₀ σ_x   and   U_j = (u_j, v_j)ᵀ,   (3.17)

where σ_x is a Pauli matrix, and the inner product

⟨i | j⟩ ≡ ∫ d^σx U_i† σ_z U_j = ∫ d^σx (u_i* u_j − v_i* v_j),   (3.18)

which allow us to rewrite (3.15) as

M U_k = ε_k σ_z U_k;   (3.19)
the matrix M is once again Hermitian, so we can multiply each side of (3.19) by U_j† and prove that

ε_k U_j† σ_z U_k = U_j† M U_k = (M U_k)† U_j = (M U_j)† U_k = ε_j* U_j† σ_z U_k,   (3.20)

where we used the Hermiticity of M in the first step and switched the indices j and k in the second
(this is valid only when integrated with respect to d^σx and summed over j and k); this leads to

(ε_j* − ε_k)⟨j | k⟩ = 0,   (3.21)

which tells us that the eigenfunctions u_j, v_j are orthogonal under the inner product (3.18) (in fact,
we can choose the normalization ⟨j | k⟩ = δ_{jk}) and that the eigenvalues ε_j are real. In a similar
fashion we can prove that

(ε_j + ε_k) ∫ d^σx (u_j v_k − u_k v_j) = 0.   (3.22)
This last result immediately shows us that the off-diagonal terms in Eq. (3.16) vanish: those with
j = k vanish at once, while every term with j ≠ k will combine with its “mirror image” to add up
to zero. Thus the Hamiltonian (3.11) is diagonal. We can now use the commutation relation (3.12)
to show that
K₂ = ½ Σ_{jk} (ε_j + ε_k) ∫ d^σx (u_j* u_k − v_j* v_k) α_j† α_k − Σ_j ε_j ∫ d^σx |v_j|² ≈ Σ_j ε_j α_j† α_j.   (3.23)
The neglected term describes the zero-temperature depletion of the condensate, which in trapped
Bose gases accounts typically for less than 1% of the number of atoms [88]. We also neglect it when
we calculate the noncondensate diagonal density, given by
ñ′(x) = ⟨ψ̃†ψ̃⟩ ≈ Σ_{jk} [ u_j* u_k ⟨α_j† α_k⟩ + v_j v_k* ⟨α_k† α_j⟩ ] = Σ_j (|u_j|² + |v_j|²) f̃_j   (3.24)

with

f̃_j = ⟨α_j† α_j⟩ = 1/(e^{β̃ε_j} − 1).   (3.25)
The form of (3.25) follows from Wick’s theorem [84,90] (see Appendix B) and reflects the fact that
the “quasiparticles” created by the αj† are bosons; it could have also been obtained by minimizing
the free energy H̃ − tS̃ (see Section 4.2.3).
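The σ_z-eigenvalue structure of Eqs. (3.19)–(3.21) is easy to verify numerically on a small discretized model. In the sketch below (an illustration of ours, not the thesis code) the matrices A and B are arbitrary stand-ins for the discretized Λ̃ + 2γñ and γñ₀, with A dominating B as in a stable condensate:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
P = rng.standard_normal((n, n))
A = np.diag([2.0, 3.0, 4.0, 5.0]) + 0.1 * (P + P.T)  # stand-in for Lambda + 2*gamma*n
B = 0.2 * np.eye(n)                                  # stand-in for gamma*n0

M = np.block([[A, -B], [-B, A]])                     # the matrix M of Eq. (3.17)
Sz = np.diag(np.r_[np.ones(n), -np.ones(n)])         # sigma_z in the doubled space

# M U = eps sigma_z U  is equivalent to  (sigma_z M) U = eps U, since sigma_z^2 = 1
eps, U = np.linalg.eig(Sz @ M)

assert np.abs(eps.imag).max() < 1e-10   # eigenvalues are real, cf. Eq. (3.21)
eps = eps.real
# they come in +/- pairs: a quasiparticle branch and its "mirror image"
assert np.allclose(np.sort(eps), -np.sort(eps)[::-1])

# sigma_z-orthogonality of eigenvectors with different eigenvalues, cf. Eq. (3.18)
order = np.argsort(eps)
i, j = order[-1], order[-2]
assert abs(U[:, i].conj() @ Sz @ U[:, j]) < 1e-8
```

With B proportional to the identity the eigenvalues are √(λ_A² − B²) in closed form, which is the matrix analogue of the Bogoliubov spectrum (3.52).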
Equations (3.10), (3.15), and (3.24), along with the conservation of particles imposed by N = ∫ d^σx ñ, where ñ = ñ₀ + ñ′, are the Hartree-Fock-Bogoliubov equations in the Popov approximation;
they form a complete set that can be solved self-consistently for the densities at any temperature
and at every point in space. Although the full set of equations has been successfully solved in
three [70,91,92] and (very recently) two dimensions [93], we will concentrate on a semiclassical
approximation that turns a discrete set of variables into a continuum—a method not unlike the one
discussed in Chapter 2. But first we digress to discuss the behavior of trapped Bose-Einstein systems
at zero temperature.
3.4 The Gross-Pitaevskiĭ equation and the Thomas-Fermi limit
The Gross-Pitaevskiı̆ (GP) equation (3.10) has a transparent physical interpretation. To discuss it,
let us write it again in a slightly different form:
−∇̃²_σ Φ̃ + x²Φ̃ + γ(ñ₀ + 2ñ′)Φ̃ = µ̃Φ̃.   (3.26)
We have already remarked that the Bose-Einstein condensate can be described by a single wavefunction, which we have labelled Φ̃. The wavefunction should obey a Schrödinger equation with an
external potential, and that is precisely what the first two terms in (3.26) exhibit. In the GP approximation, the interparticle interactions become an additional local potential that has two components.
The second one, with the factor of 2 that we already discussed in a footnote on page 39, describes
the interaction of the condensate with the thermal cloud and will be studied in detail in subsequent
sections. For now, however, we will concentrate on the zero-temperature case, where this term
disappears and the equation becomes
−∇̃²_σ Φ̃ + x²Φ̃ + γñ₀Φ̃ = µ̃Φ̃.   (3.27)
The mean-field potential that remains describes the interaction of the condensate with itself and
makes the equation nonlinear at every temperature; this nonlinearity was originally introduced
by Gross [75] and Pitaevskiı̆ [76] to suppress an unphysical overdependence on boundary conditions
exhibited by the ideal Bose gas [14]. The GP equation is the Euler-Lagrange equation that minimizes
the functional
J[Φ̃] = ∫ d^σx [ (∇̃Φ̃)² + x²Φ̃² + ½γΦ̃⁴ ]   (3.28)

subject to the constraint that the number of atoms is fixed:

N = ∫ d^σx Φ̃²;   (3.29)
in other words, it minimizes the thermodynamic potential H̃ − µ̃N , and the corresponding Lagrange
multiplier, which appears in (3.27) as the eigenvalue, can be identified as the chemical potential of
the system (and not the energy per atom, as we would have if the equation were linear).
We initially attempted to solve the two-dimensional GP equation by treating it as an initial-value
problem [94,95], a method that we soon abandoned. To that end, we introduced Φ ≡ Φ̃/√N, whose
unit normalization made it easier to display and compare solutions, and rewrote the equation as

∇̃²₂ Φ + (µ̃ − x²)Φ − NγΦ³ = d²Φ/dx² + (1/x) dΦ/dx + (µ̃ − x²)Φ − NγΦ³ = 0   (3.30)

with the boundary conditions

dΦ/dx |_{x=0} = 0   and   ∫ d²x Φ²(x) = 1.   (3.31)
To solve the equation for a given set of parameters, we would start by making initial guesses for
A (the amplitude of the wavefunction at x = 0) and µ̃, which, depending on the parameters, were
Figure 3.1. Wavefunctions of two-dimensional isotropic trapped Bose gases at zero temperature.
The full lines show the solutions (normalized to unity) of the Gross-Pitaevskiı̆ equation for (from
top to bottom at x = 0) N γ = 10, 100, and 1000, while the dashed lines display the corresponding
Thomas-Fermi wavefunctions. The Thomas-Fermi limit provides an adequate description of a system
over most of its extent for N γ as low as 100. The dash-dotted line displays the noninteracting ground
state.
given by either the noninteracting ground state or the Thomas-Fermi wavefunction discussed below.
This initial condition was then evolved using a standard differential-equation solver—we obtained
the same results using a fourth-order Runge-Kutta integrator and a predictor-corrector algorithm—
until the wave function became negative (due to µ̃ being too large) or attained a minimum at a point
where it was finite (when µ̃ was too small). We would then readjust µ̃ to counter this effect and
iterate the procedure until we found a function with the right behavior (that is, vanishing with zero
derivative) at “infinity” (which we usually took to be in the neighborhood of x = 6). The resulting
function would have the right shape but not the right size, so we had to restart the process with a
new initial amplitude until we reached self-consistency, at which point the chemical potential was
“accurate” to 12 or more figures.
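The shooting-and-bisection procedure just described can be sketched in a few lines. The following is a schematic reconstruction of ours, not the original Matlab program; for brevity it bisects only on µ̃ at a fixed central amplitude, which is enough to exhibit the method in the noninteracting limit Nγ = 0, whose exact eigenvalue in these scaled units is µ̃ = 2:

```python
def integrate(mu, Ngamma, A=1.0, x_max=6.0, n_steps=6000):
    """RK4-integrate Eq. (3.30) outward from x ~ 0 with Phi(0) = A, Phi'(0) = 0."""
    h = x_max / n_steps
    x, phi, dphi = 1e-8, A, 0.0
    def f(x, phi, dphi):
        # Phi'' = -(1/x) Phi' - (mu - x^2) Phi + Ngamma Phi^3
        return dphi, -dphi / x - (mu - x * x) * phi + Ngamma * phi ** 3
    for _ in range(n_steps):
        k1 = f(x, phi, dphi)
        k2 = f(x + h/2, phi + h/2 * k1[0], dphi + h/2 * k1[1])
        k3 = f(x + h/2, phi + h/2 * k2[0], dphi + h/2 * k2[1])
        k4 = f(x + h, phi + h * k3[0], dphi + h * k3[1])
        phi += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dphi += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
        if abs(phi) > 1e6:      # solution has blown up; its sign is already decided
            break
    return phi

def shoot(Ngamma, mu_lo=0.1, mu_hi=5.0, tol=1e-9):
    """Bisect on the chemical potential: too large -> Phi crosses zero,
    too small -> Phi diverges to +infinity at the outer edge."""
    while mu_hi - mu_lo > tol:
        mu = 0.5 * (mu_lo + mu_hi)
        if integrate(mu, Ngamma) < 0.0:
            mu_hi = mu          # wavefunction went negative: mu too large
        else:
            mu_lo = mu          # wavefunction stayed positive: mu too small
    return 0.5 * (mu_lo + mu_hi)

mu = shoot(0.0)                 # noninteracting 2D gas
assert abs(mu - 2.0) < 1e-3     # scaled harmonic-oscillator ground state
```

The real calculation wraps this inner loop in a second iteration over the amplitude A to enforce the normalization (3.31), which is what made the method so slow.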
Figure 3.1 shows the two-dimensional solutions that we obtained by this method. The noticeable
increase in the condensate radius caused by the interactions, as well as the lowering of the central
density, are present here, just like in three dimensions [54,87]. (Shortly after we found these solutions,
a paper [95] appeared that had two-dimensional results consistent with ours.) This process of
solution, however, is extremely slow and laborious, especially for the larger values of N γ that we
want to study (N γ ≈ 1080 for 10000 rubidium atoms): each of the curves displayed in Fig. 3.1 took
more than a day to generate.
For sizable values of N γ, though, we can use a much simpler model that nevertheless gives a
good description of the condensed system at zero temperature (except for a narrow region around
the condensate edge). Alongside the exact solutions just discussed we display the Thomas-Fermi
wavefunctions [77,96],
Φ̃²_TF = ñ_TF = (1/γ)(µ̃ − x²) Θ(µ̃ − x²),   (3.32)

which result from neglecting the kinetic energy of the condensate. (Expression (3.32) is valid for
any number of dimensions.) The Heaviside step functions in (3.32) are there to ensure that the
wavefunctions are everywhere real and positive, as required (see the discussion following Eq. (3.4)),
and force the densities to lie within the condensate radii R_TF = µ̃_TF^{1/2}. The chemical potential, in
turn, is fixed by normalization:

µ̃_TF = [ σ(σ + 2)Γ(σ/2)/(4π^{σ/2}) Nγ ]^{2/(σ+2)},   (3.33)
where Γ(x) is the usual gamma function. For the particular cases of two and three dimensions,
µ̃_TF⁽²⁾ = (2Nγ/π)^{1/2}   and   µ̃_TF⁽³⁾ = (15Nγ/8π)^{2/5}.   (3.34)
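The general expression (3.33) can be checked against the two special cases of (3.34) in a few lines (a quick verification of ours, not part of the thesis):

```python
import math

def mu_TF(sigma, Ngamma):
    """Thomas-Fermi chemical potential in sigma dimensions, Eq. (3.33)."""
    return (sigma * (sigma + 2) * math.gamma(sigma / 2)
            / (4 * math.pi ** (sigma / 2)) * Ngamma) ** (2 / (sigma + 2))

Ngamma = 1080.0                              # ~10^4 rubidium atoms (see below)
mu2 = (2 * Ngamma / math.pi) ** 0.5          # 2D form of Eq. (3.34)
mu3 = (15 * Ngamma / (8 * math.pi)) ** 0.4   # 3D form of Eq. (3.34)

assert math.isclose(mu_TF(2, Ngamma), mu2, rel_tol=1e-12)
assert math.isclose(mu_TF(3, Ngamma), mu3, rel_tol=1e-12)
```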
The energy per atom can be found by inserting (3.32) into (3.28) and integrating. Since the system
at zero temperature has no entropy, the energy found is also the free energy per atom and is related
to the chemical potential through
Ã_TF/N = [(σ + 2)/(σ + 4)] µ̃_TF;   (3.35)

in two dimensions we obtain

Ã_TF/N = (2/3) √(2Nγ/π).   (3.36)
The zero-temperature GP equation and the Thomas-Fermi limit agree very well with 3D experiments carried out at finite temperature. We have already mentioned in Chapter 2 that experimentalists use the Thomas-Fermi limit as a first approximation; indeed, they usually fit their
observed density profiles with Thomas-Fermi functions to find how many atoms they have in a
trap [30]. On the other hand, the first Monte Carlo simulations of trapped bosons were checked
by finding the condensate fraction appropriate at a given temperature (using methods that we will
Figure 3.2. Typical forces exerted on an atom in a trap of increasing anisotropy. As the trap is
compressed more and more, the z-component of a typical interatomic force F₁₂ becomes negligible
compared to the external force Fext even if the actual interaction is large. (To avoid clutter we have
omitted the transversal components of the trapping force.)
explain later), solving Eq. (3.27) with that number as the total N, and comparing the results obtained [44]. The Thomas-Fermi limit should then give reasonable results for systems in traps of varying anisotropy, and to that topic we now turn.
3.5 Anisotropic systems at zero temperature
We once again employ the specialized notation of Section 2.5, in which we resolve the position vector
into its transverse and z-components: x = ξ + η. A glance at Eq. (2.33) makes it clear that the
Hamiltonian for the harmonic oscillator—and hence the Gross-Pitaevskiı̆ Hamiltonian—acquires a
factor of λ2 in the z-direction. As we mentioned in Sec. 3.2, we expect that the interaction between
atoms will still be isotropic and that the coupling constant γ = γ (3) will remain unaffected. Under
these conditions, the Thomas-Fermi density profile becomes
Φ̃²_TF = (1/γ)(µ̃ − ξ² − λ²η²) Θ(µ̃ − ξ² − λ²η²)   (3.37)

and the chemical potential is simply µ̃ = λ^{2/5} µ̃_TF⁽³⁾. The step function in Eq. (3.37) again ensures that
the density is positive everywhere, and this requirement prescribes values for the condensate radii
in all directions and a corresponding aspect ratio p = λ, which, as would be intuitively expected,
grows faster with increasing λ than it does for the ideal gas at zero temperature (see our discussion
following Eq. (2.44)).
Beyond a certain degree of anisotropy, however, the Thomas-Fermi wavefunction becomes inadequate as a description of the system. The simple classical force diagram in Fig. 3.2 suggests an
explanation: The extreme anisotropy lines up the atoms in a narrow spatial band, and the component
in the compressed direction of the force exerted on a given atom by any other one becomes negligible,
regardless of the actual magnitude of the interaction, when compared to the now-much-larger trapping force. But stronger harmonic forces cause systems to vibrate faster, thus increasing
the kinetic energy and making it important again. In the squeezed direction, then, the atoms tend
to behave like noninteracting trapped particles. Now, from Eq. (2.34) we can see that, as λ grows,
so does the gap between the ground state and the first excited state; this gap eventually becomes
insurmountable, and at that point the atoms will be effectively frozen in the harmonic-oscillator
ground state. When that happens, the system has reached the quasi-2D régime [66,81].
We can now estimate a condition for this “régime change.” The crossover from three-dimensional
to quasi-2D behavior occurs when the confinement energy in the z-direction, ℏωz, becomes larger
than the chemical potential given right below Eq. (3.37):
ℏωz = ℏλω ≥ µ = ½ℏω µ̃ = ½ℏω (15Nλ a/x₀)^{2/5}   (3.38)

or [30]

λ³ ≥ (15²/2⁵)(a/x₀)² N² = (225/32)(γ⁽³⁾/8π)² N².   (3.39)
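As a concrete illustration (our numbers, using the rubidium value a/x₀ = 0.0043 quoted later in this chapter), Eq. (3.39) can be evaluated directly:

```python
# Minimum anisotropy for the quasi-2D regime, from Eq. (3.39):
#   lambda^3 >= (225/32) (a/x0)^2 N^2
a_over_x0 = 0.0043   # 87Rb in a typical trap, so that gamma^(3) = 8*pi*a/x0
N = 1.0e4

lam_min = ((225.0 / 32.0) * a_over_x0 ** 2 * N ** 2) ** (1.0 / 3.0)
print(f"quasi-2D regime requires lambda >~ {lam_min:.1f}")
assert 20.0 < lam_min < 30.0   # a few tens: a strongly squeezed trap
```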
To describe a condensate in the quasi-2D régime we can introduce an Ansatz for the condensate
wavefunction consisting of a Thomas-Fermi profile in the radial direction and an ideal- (though
trapped-) gas profile along z:
Φ = A (R² − ξ²)^{1/2} Θ(R² − ξ²) e^{−λη²/2},   (3.40)
where the constant A can be found by normalization:
Φ = (1/R²) √(2N√λ/π^{3/2}) (R² − ξ²)^{1/2} Θ(R² − ξ²) e^{−λη²/2};   (3.41)
similar proposals have been introduced previously [66,77]. The “Thomas-Fermi radius” R can be
found by imposing that the wavefunction minimize the functional
J[Φ] = ∫ d³x [ (∇̃Φ)² + Φ(ξ² + λ²η²)Φ + ½γΦ⁴ ],   (3.42)
with the understanding that the gradient is just the derivative with respect to η. Insertion of (3.41)
into (3.42) and straightforward integration yield
J/N = ½λ + ⅓R² + ½λ + (2N/3π)(λ/2π)^{1/2} γ/R²,   (3.43)
which can be minimized with respect to R to give
R⁴ = (2N/π)(λ/2π)^{1/2} γ;   (3.44)
note that, even in its reduced form, the gradient term in (3.42) did not contribute to this expression
and could have been dropped altogether. The surface density defined in Eq. (2.40) is readily found
to be
σ̃(ξ) = [1/(Nγ⁽³⁾(λ/2π)^{1/2})] { [ (2N/π)(λ/2π)^{1/2} γ⁽³⁾ ]^{1/2} − ξ² },   (3.45)
with a step function implied, and coincides with the two-dimensional Thomas-Fermi density given
by Eqs. (3.32) and (3.34) if we identify
γ⁽²⁾ ≡ (λ/2π)^{1/2} γ⁽³⁾,   (3.46)
the key result of this section. This expression for the equivalent two-dimensional coupling constant
is consistent with others found in the literature [97–99] and represents the first-order approximation
to the more elaborate expression [39,67,81,82]
γ⁽²⁾ = (λ/2π)^{1/2} γ⁽³⁾ [ 1 + (λ/2π)^{1/2} (γ⁽³⁾/8π) log( λ/(πq²x₀²) ) ]^{−1},   (3.47)
where q is the relative momentum of the colliding atoms in the equivalent two-body problem. There
is no consensus as to what should replace the argument of the logarithm in the N -body problem.
The authors of Ref. 81 suggest transforming q²x₀² → µ̃ and then identifying µ̃ → γ⁽²⁾ñ₀; others either interpret this as a self-consistent equation for γ⁽²⁾ [82], replace µ̃ → µ̃ − x² − γ⁽²⁾ñ₀ [100], or offer
an expression like that of Ref. 81 but with a different coefficient and absolute-value signs around
the logarithm itself [67]. In general, however, we found that these coupling constants predict results
that differ very slightly from those of Eq. (3.46).
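The smallness of the difference between the first-order coupling (3.46) and the logarithm-corrected form (3.47) can be gauged numerically. In the sketch below the value of q²x₀² is an arbitrary illustrative choice, since, as just noted, there is no consensus on the argument of the logarithm:

```python
import math

gamma3 = 8 * math.pi * 0.0043   # rubidium coupling used in this chapter
lam = 10.0                      # trap anisotropy
q2x02 = 0.1                     # illustrative choice for q^2 x0^2 (no consensus)

first_order = math.sqrt(lam / (2 * math.pi)) * gamma3            # Eq. (3.46)
corrected = first_order / (1 + math.sqrt(lam / (2 * math.pi))
                           * gamma3 / (8 * math.pi)
                           * math.log(lam / (math.pi * q2x02)))  # Eq. (3.47)

rel_diff = abs(corrected - first_order) / first_order
assert rel_diff < 0.05   # the two coupling constants differ by a few percent
```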
Finally, our variational wavefunction predicts an aspect ratio that does not increase as quickly
with λ as the Thomas-Fermi expression but still does so faster than the ideal-gas result:
p ≡ √⟨ξ²⟩/√(2⟨η²⟩) = (2/π)^{3/8} (Nγ⁽³⁾)^{1/4} λ^{5/8}/√3.   (3.48)
3.6 Finite temperatures: The semiclassical approximation
In the last section we saw that the condensate is well described by the Gross-Pitaevskiı̆ equation;
moreover, Section 3.3 summarizes the prescription for the complete treatment of the system. We
will soon put this prescription to work, but before we do so we will study the simplified scheme
that we will use from now on to treat the noncondensed particles. As we saw in Section 2.3, it is
possible to simplify the more nearly exact equations using an approximate semiclassical scheme that
will have a more transparent physical interpretation than the exact formalism. Though there we
derived our results in a different manner, it turns out that they could have been found by solving
the harmonic-oscillator equation in the WKB approximation [46,88].
We carry out the approximation by assuming that k_BT ≫ ℏω (a condition that, as Table 3.1
shows, is reasonably fulfilled in experimental situations) and consequently that the eigenenergies ε_j
form a continuous spectrum; the functions u_j and v_j can then be taken to be of the form

u_j → u e^{iφ(x)},   v_j → v e^{iφ(x)};   (3.49)

the phase φ has to be the same to ensure that both equations of (3.15) are compatible, and u and v
are assumed to be smooth functions. By constructing the appropriate probability flux [46] we are led
to identify the gradient of this phase as an excitation quasimomentum [88]: κ ≡ ∇̃φ. Furthermore,
we can use the vector identity ∇ · (ψa) = a · ∇ψ + ψ∇ · a and temporarily put back all the factors
of ~ (though keeping the notation we have been using for scaled variables) to see that the Laplacian
term in the first equation of (3.15) is
−(ℏ²/2m) ∇²(u e^{iφ/ℏ}) = [ −(ℏ²/2m) ∇²u − (iℏ/m) ∇u·∇φ − (iℏ/2m) u ∇²φ + (1/2m) (∇φ)² u ] e^{iφ/ℏ}.   (3.50)
In the semiclassical approximation we make ~ → 0 (or, equivalently, neglect all derivatives of u and v
and second derivatives of φ, in what is also called the local-density approximation [93,101]), and the
Table 3.1. Typical temperatures and frequencies found in experiments. The frequencies displayed here correspond in each case to the harmonic mean of the three trapping frequencies, ω ≡ (ωx ωy ωz)^{1/3}. The factors k_BT/ℏω range between 13 and 100.

Trap [Reference]       Temperature (nK)   Mean frequency (Hz)   k_BT (J)       ℏω (J)
JILA trap [6]          170                169.7                 2.35 × 10⁻³⁰   1.12 × 10⁻³¹
MIT trap [2]           2000               415.6                 2.76 × 10⁻²⁹   2.76 × 10⁻³¹
2D condensates [30]    40                 61.9                  5.52 × 10⁻³¹   4.10 × 10⁻³²
coupled equations (3.15) become [54,88]
(κ² + x² + 2γñ − µ̃) u(x,κ) − γñ₀ v(x,κ) = ε(x,κ) u(x,κ),
−γñ₀ u(x,κ) + (κ² + x² + 2γñ − µ̃) v(x,κ) = −ε(x,κ) v(x,κ),   (3.51)

with a Bogoliubov-like energy spectrum given by [102]

ε² = (κ² + x² + 2γñ − µ̃)² − γ²ñ₀² ≡ Λ² − γ²ñ₀².   (3.52)

In order to maintain consistency with the normalization (3.18), we impose the condition u² − v² = 1
and obtain

u² = (Λ + ε)/2ε   and   v² = (Λ − ε)/2ε;   (3.53)

the density of excited particles takes the form

ñ′(x) = [1/(2π)^σ] ∫ d^σκ (u² + v²) f̃(κ,x) = [1/(2π)^σ] ∫ d^σκ (Λ/ε) f̃(κ,x)   (3.54)
with f̃(κ,x) given by Eq. (3.25) after replacing ε_j with the ε given by (3.52). These equations, together
with (3.10) and the imposition that

N = ∫ d^σx (ñ₀ + ñ′),   (3.55)

form a self-consistent set of equations that can be solved to study the complete thermodynamics of
the trapped system at any temperature, with no adjustable parameters, given N and a. As usual,
we have neglected the zero-temperature depletion.
The model can be further simplified by taking the Hartree-Fock approximation [83,96,103], in
which we neglect the “quasihole” function v altogether. While Eq. (3.10) remains unchanged, equation (3.15) is now simply Λ = ε, the energy spectrum (3.52) turns into

ε ≡ ε_HF = κ² + x² + 2γñ(x),   (3.56)
and the thermal density becomes
ñ′(x) = [1/(2π)^σ] ∫ d^σκ f̃(x,κ) = [1/(2π)^σ] [2π^{σ/2}/Γ(σ/2)] ∫₀^∞ κ^{σ−1} dκ / (e^{β̃(κ² + x² + 2γñ − µ̃)} − 1) = (t/4π)^{σ/2} g_{σ/2}(e^{−β̃(x² + 2γñ − µ̃)}).   (3.57)
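The Bose functions g_s(z) appearing in Eq. (3.57) can be evaluated from their defining series, g_s(z) = Σ_{ℓ≥1} z^ℓ/ℓ^s. A minimal sketch of ours (not the thesis Matlab code) for a single evaluation in the 2D case, where g₁ also has the closed form −log(1 − z):

```python
import math

def g(s, z, terms=2000):
    """Bose function g_s(z) = sum_{l>=1} z^l / l^s, for 0 <= z < 1."""
    return sum(z ** l / l ** s for l in range(1, terms + 1))

def thermal_density_2d(x, t, mu, gamma, n):
    """One evaluation of Eq. (3.57) with sigma = 2; n is the current guess
    for the total density n~(x) at this point of the self-consistent loop."""
    beta = 1.0 / t
    z = math.exp(-beta * (x * x + 2 * gamma * n - mu))
    return (t / (4 * math.pi)) * g(1.0, z)

# sanity check of the series: g_1(z) = -log(1 - z)
assert math.isclose(g(1.0, 0.5), -math.log(0.5), rel_tol=1e-9)

# inside the Thomas-Fermi condensate the exponent x^2 + 2*gamma*n - mu stays
# positive, so the density is finite (illustrative numbers only)
x, t, mu, gam = 2.0, 25.0, 8.0, 8 * math.pi * 0.0043
n0 = (mu - x * x) / gam
assert thermal_density_2d(x, t, mu, gam, n0) > 0.0
```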
Equations (3.56) and (3.57) suggest that one can describe the thermal atoms as being acted upon
by the effective Hamiltonian
H̃_eff = κ² + x² + 2γñ(x),   (3.58)
with a semiclassical kinetic energy that at this level of approximation commutes with the position
operator and a potential energy that contains both the trapping potential and an effective local
potential due to interactions. Indeed, we obtain Eq. (3.57) by invoking Eq. (2.9) with the new
Hamiltonian:
ñ′(x) = ⟨x| 1/(e^{β̃(H̃_eff − µ̃)} − 1) |x⟩.   (3.59)
This identification also gives us an expression for the off-diagonal elements of the density matrix.
From the definition

ñ(x,x′) ≡ ⟨Ψ†(x)Ψ(x′)⟩ = Φ̃*(x)Φ̃(x′) + ⟨ψ̃†(x)ψ̃(x′)⟩,   (3.60)

where we once again neglected the anomalous averages, we obtain [73]

ñ(x,x′) = Φ̃(x)Φ̃(x′) + ñ′(x,x′)   (3.61)
with [61,89]
ñ′(x,x′) = (t/4π)^{σ/2} Σ_{ℓ=1}^∞ (1/ℓ^{σ/2}) exp[ −(t/4ℓ)(x − x′)² − ℓβ̃( ½(x² + x′²) + γ(ñ(x) + ñ(x′)) − µ̃ ) ],   (3.62)

which results from neglecting the commutator of κ and x (which is valid for small β̃ [45]) and
using the free-particle off-diagonal density matrix [14,15]: ⟨x|e^{−β̃κ²}|x′⟩ = e^{−(x−x′)²/4β̃}/(4πβ̃)^{σ/2}. Equation (3.62) will be used in the next section and in Chapter 5.
3.7 The interacting Bose gas in the Hartree-Fock approximation
The solution of the semiclassical HFB equations is a self-consistent process like the one described
earlier in Section 2.3: Initially, we solve the GP equation (3.10) with ñ′ = 0, just as we did in
Section 3.4, and as a result we obtain initial values for the condensate density and the chemical
potential. These can then be fed into either Eq. (3.54) or Eq. (3.57) to obtain a thermal density,
which we integrate (using, for example, the extended Simpson’s rule (2.30)) to yield the number of
thermal atoms. The particle-conservation condition (3.55) then allows us to find a new value for
the condensate number, to which the condensate density must now be normalized. At this stage we
have values for ñ₀ and ñ′ that we can insert into (3.10) to obtain a new self-consistent condensate
density. All we need to do now is iterate the process until all quantities (in particular, the condensate
fraction) stop changing.
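The iteration just outlined can be sketched schematically. The toy version below is ours, not the thesis code: it replaces the full GP solution by the Thomas-Fermi profile (3.32) and approximates ñ ≈ ñ₀ inside the exponent of Eq. (3.57), but it exhibits the same feedback structure: condensate → thermal density → particle conservation → condensate.

```python
import math

N, gamma = 1000.0, 8 * math.pi * 0.0043   # 2D gas with the "rubidium" coupling
t = 25.0                                  # scaled temperature, t = 1/beta~
xs = [0.01 + 0.01 * i for i in range(1200)]   # radial grid, x in (0, 12)

def g1(z):
    return -math.log(1.0 - z)             # the 2D Bose function g_1

N0 = N                                    # start with everything condensed
for _ in range(100):
    mu = math.sqrt(2 * N0 * gamma / math.pi)      # TF chemical potential, Eq. (3.34)
    # thermal density (3.57) with n~ ~= n~0, so the exponent reduces to |mu - x^2|;
    # the floor keeps the log finite if a grid point lands on the condensate edge
    nprime = [(t / (4 * math.pi))
              * g1(math.exp(-max(abs(mu - x * x), 1e-9) / t)) for x in xs]
    # trapezoid-like integral of 2*pi*x*n'(x) on the uniform grid
    Nprime = sum(2 * math.pi * x * n for x, n in zip(xs, nprime)) * 0.01
    N0_new = max(N - Nprime, 1.0)         # particle conservation, Eq. (3.55)
    if abs(N0_new - N0) < 1e-8:
        break
    N0 = 0.5 * N0 + 0.5 * N0_new          # damped update for stability

print(f"N0/N = {N0 / N:.3f}")
assert 0.0 < N0 < N
```

The real calculation replaces the TF step by the minimization of (3.63) and keeps the full ñ = ñ₀ + ñ′ in the exponent, but converges in the same way: a handful of damped iterations at low temperature.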
Most of the self-consistent procedure is straightforward enough to implement; the most difficult
part by far is the solution of the Gross-Pitaevskiı̆ equation—a nonlinear eigenvalue problem with
the mixture of Neumann and global boundary conditions exhibited in Eq. (3.31). In Section 3.4 we
described the difficulties we encountered when we tried to solve the GP equation as an initial-value
problem. For finite temperatures the situation was, predictably, even worse, since the density profiles
had to be iterated to self-consistency—and this on top of the multiple iterations that we had to carry
out to obtain a self-consistent condensate density at every step. At low temperatures, when the plain
GP equation was already a good description of the condensate and was thus a reasonable starting
point for the self-consistent calculation, some three iterations sufficed. At higher temperatures, the
program needed about a week of full-time dedication in order to generate a single density profile.
The agreement with other semiclassical results found in the literature [69,87,88], and even with the
solutions of the full set of discrete equations in three dimensions [91], was encouraging, but the
method was simply not viable, and we had to look for alternatives.
The method we eventually used³ is based on the direct minimization of the functional [44,69]

J[Φ] = ∫ d^σx [ (∇̃Φ)² + Φ(x² + 2γñ′(x))Φ + ½γΦ⁴ ]   (3.63)

subject to the constraint ∫ d^σx Φ² = 1. In an isotropic trap the problem is one-dimensional, and the
gradient term becomes a plain derivative. The minimization is performed by setting a grid of n fixed
abscissas x_j⁽ⁿ⁾ that represent the radial coördinate x; at the beginning we assign a set of ordinates f_j ≡ Φ(x_j⁽ⁿ⁾), in terms of which (3.63) becomes a multivariate function, J = J[f₁, …, f_n] ≡ J[f], that can
be minimized using standard optimization routines [63]. The resulting function is known at the x_j⁽ⁿ⁾
and can be evaluated everywhere else by interpolation.
³ We obtained equivalent results using the modified steepest-descents method of Ref. 87, which seems to be the most popular way of solving the problem [54,86,88]. In general, however, we found the minimization method to be superior in both efficiency and accuracy, and much easier to implement.
Now, a glance at Eq. (3.63) shows that interpolation is also essential during the minimization
process, since for each evaluation of J we need to be able to calculate both derivatives and integrals
of functions that we know only at a few points. Finite-difference schemes cannot be used, since any
grid that would give reliable approximations makes the minimization impracticable. A much better
approach, based on cubic-spline interpolation [44,69,104], yielded the results that we present (along
with a more detailed exposition of the method) in Chapter 4: this lowered the time necessary to
obtain a density profile from a week to a few hours, and, unlike the initial-value method described
above, could run alone without having to be babysat. More recently, however, we developed a
method based on spectral differentiation [64,65,105] and Gaussian quadrature [62,63] that we can
consider, at least for our purposes, definitive. This scheme, like the one described in Section 2.4,
uses as grid points the zeros of the nth Laguerre polynomial; the calculation of (3.63) is reduced
(in 2D) to the matrix-vector product
J[f] = (1/Δ) Σ_{j=1}^n (2πx_j⁽ⁿ⁾) [ (Df)_j² + (x_j⁽ⁿ⁾ f_j)² + 2γñ′(x_j⁽ⁿ⁾) f_j² + γf_j⁴/(2Δ) ] w_j⁽ⁿ⁾ e^{x_j⁽ⁿ⁾},   (3.64)

where

Δ ≡ Σ_{j=1}^n (2πx_j⁽ⁿ⁾) f_j² w_j⁽ⁿ⁾ e^{x_j⁽ⁿ⁾}   (3.65)

is the normalization factor; the w_j⁽ⁿ⁾ e^{x_j⁽ⁿ⁾} are the Gauss-Laguerre weights and inverse weighting
function [62,63,65] at the grid points and D is a differentiation matrix [64,65,105]. (These quantities
are derived in detail in Appendix C.) Not only was this method more robust than that based on
spline interpolants, it was orders of magnitude faster: each of the density profiles that we show below
took about 15 seconds to generate. The results that we present in this section and in Chapter 5
were all found using this procedure. (Other authors [70,106] have recently used similar grids to solve
problems related to BEC, but to our knowledge nobody has used them to minimize (3.63).)
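As an illustration of how the Gauss-Laguerre grid handles the radial integrals, the normalization factor (3.65) can be computed for the noninteracting 2D ground state Φ(x) = e^{−x²/2}/√π, for which Δ should equal 1. This is a sketch of ours; the differentiation matrix D of Appendix C is not reproduced here.

```python
import math
import numpy as np
from numpy.polynomial.laguerre import laggauss

n = 30
x, w = laggauss(n)   # zeros of the nth Laguerre polynomial and their weights

# 2D harmonic-oscillator ground state, normalized so that int 2*pi*x*Phi^2 dx = 1
f = np.exp(-x ** 2 / 2) / math.sqrt(math.pi)

# Eq. (3.65): Delta = sum_j (2*pi*x_j) f_j^2 w_j e^{x_j}
Delta = np.sum(2 * math.pi * x * f ** 2 * w * np.exp(x))
assert abs(Delta - 1.0) < 1e-4
```

The factor e^{x_j} removes the Laguerre weighting function e^{−x} that the quadrature assumes, which is why it appears explicitly in (3.64) and (3.65).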
These results also correspond, in their entirety, to the Hartree-Fock approximation characterized
by the energy spectrum (3.56). In general we found that, in 3D, the purportedly more accurate
HFB approximation based on (3.52) converged for a quite restricted range of parameters and, when
it did converge, gave essentially the same results as the Hartree-Fock scheme; in 2D we were unable
to find a single instance where the HFB scheme worked (see Chapter 4).
Initially we consider a three-dimensional gas of 10⁴ atoms in an isotropic trap. Figure 3.3 on the
next page shows the behavior of the condensate fraction. The dotted line is the thermodynamic-limit prediction from Eq. (2.17), while the dashed line is the finite-size prediction, Eq. (2.16), whose
Figure 3.3. Condensate fraction of a three-dimensional Bose gas in the Hartree-Fock approximation.
In this case we have N = 10⁴ atoms in an isotropic trap. Refer to the text for a detailed explanation.
correction to the critical temperature is shown as a square on the x-axis. The diamond on the x-axis
exhibits the total (finite-size plus interaction) correction⁴ to the critical temperature [102,107,108],

δT_c⁽³⁾/T_c⁽³⁾ ≈ −0.73 N^{−1/3} − 1.33 (γ/8π) N^{1/6}.   (3.66)
The dots represent the result of running the self-consistent Hartree-Fock program with γ ≈ 0, while
the solid line corresponds to an interacting gas of rubidium atoms with γ = 8π × 0.0043. Finally,
the solitary point with an error bar at T = 0.7 Tc is the result of a Monte Carlo simulation (to be
described in Chapter 5). The plot shows that interaction effects are greater than finite-size effects,
and that the latter are already built into the Hartree-Fock approximation. On the other hand, it
shows that, as expected, the method becomes inaccurate at high temperatures (Fig. 2.7 showed this
for the ideal gas) and eventually breaks down. For this combination of parameters, we cannot go
beyond T ≈ 0.88 Tc because the argument of the exponential in Eq. (3.57), as illustrated in Fig. 3.4,
vanishes at a point close to the Thomas-Fermi radius.
⁴ Repulsive interactions lower the condensation temperature because they tend to decrease the density of the system, which then finds it more difficult to reach the Einstein condition.
Figure 3.4. Failure of the Hartree-Fock approximation at high temperatures, due to the vanishing
of the effective chemical potential (defined by the argument of the exponential in Eq. (3.57)). The
figure shows the behavior, for the same 3D gas as above, of µ̃eff at T /Tc = 0.82, 0.84, 0.86, and 0.88
(from top to bottom at x = 0). The diamond marks the Thomas-Fermi radius. As we remarked
previously, we know the function only at the Gauss-Laguerre points denoted by the open circles; the
lines result from a spline interpolation.
Figure 3.5 shows the behavior of the chemical potential of the interacting gas; it is significantly
larger, and decreases less abruptly with increasing temperature, than its ideal-gas counterpart (which
is stuck at around 3 for the whole range of temperatures seen in the figure).
Figures 3.6 on page 57 and 3.7 on page 58 exhibit some representative densities and number
densities (solid lines), once again resolved into condensate (dotted) and noncondensate (dash-dotted)
fractions. The delocalization hump that already characterized the thermal component of the ideal
trapped gas reappears here and is even more pronounced. This is to be expected, since once again
the most localized state available dominates, and this time the repulsive interactions displace the
thermal cloud even farther away from the center of the trap.
We now go back to the two-dimensional gas. In Chapter 4 we will study its density and free
energy in detail; in the following figures we will use it to illustrate the behavior of the off-diagonal
interacting density matrix introduced in Section 3.6.
Figure 3.8 on page 59 compares the densities and number densities (as usual, resolved into
condensate and noncondensate) of an ideal 1000-atom Bose-Einstein gas (at left) at T = 0.7 T c with
Figure 3.5. Chemical potential of a three-dimensional interacting Bose gas in the Hartree-Fock
approximation. The parameters are the same as in Fig. 3.3.
the equivalent interacting system (at right) in the Hartree-Fock approximation. We have used the
“rubidium” value γ = 8π × 0.0043 for the coupling constant. The expected effects are visible: the
interactions spread out the system and make it less dense; they also cause a significant decrease of
the condensate fraction.
Figure 3.9 on page 59 shows the occupation numbers yielded by the diagonalization of the one-body density matrix. The white bars show the ideal-gas populations and the black bars display those of the interacting system. The depletion of the condensate due to interactions, and the
complementary increase of the excited occupation numbers, can be seen here. On the other hand, the
ground-state population is still orders of magnitude greater than that of any other state; in this case
we do not seem to have the “smeared” condensation predicted for other two-dimensional systems [37,80]
(see also the footnote on page 14). The open circle at the left shows the condensate number yielded
by the self-consistent procedure; the eigenvalue of the density matrix is about 4% greater, but the
difference between the two is only around 1% of N .
Figure 3.10 on page 60 exhibits the corresponding eigenfunctions. The solid lines represent the
(spline-interpolated) eigenfunctions that result from diagonalizing the interacting off-diagonal density matrix; for comparison we show the ideal-gas eigenfunctions (dashed lines) previously exhibited
in Fig. 2.10. Just as in that figure, we used a 30 × 30 Gauss-Laguerre grid to discretize the matrix.
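The diagonalization step can be illustrated schematically. In the sketch below the kernel is a rank-two stand-in built from the two lowest azimuthally symmetric oscillator modes, with assumed populations 800 and 5 playing the roles of condensate and excited occupations; the real calculation uses the interacting kernel of Eq. (3.62) instead. Folding the quadrature weights in symmetrically turns the continuum eigenproblem into an ordinary symmetric matrix diagonalization.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Discretize a symmetric one-body density matrix rho(x, x') on a
# 30-point Gauss-Laguerre grid (t = x^2) and diagonalize it.  The
# rank-two kernel is a stand-in with assumed populations 800 and 5.
t, w = laggauss(30)
x = np.sqrt(t)
W = w * np.exp(t) / 2.0                     # weights for the measure x dx

def phi(n, x):                              # m = 0 modes, unit norm in x dx
    L = 1.0 if n == 0 else 1.0 - x**2       # Laguerre polynomials L_0, L_1
    return np.sqrt(2.0) * L * np.exp(-x**2 / 2)

K = 800.0 * np.outer(phi(0, x), phi(0, x)) + 5.0 * np.outer(phi(1, x), phi(1, x))
s = np.sqrt(W)                              # sqrt-weights symmetrize the problem
occ = np.linalg.eigvalsh(s[:, None] * K * s[None, :])[::-1]
print(np.round(occ[:3], 6))                 # ~ [800, 5, 0]
```

Because the two modes are orthonormal in the radial measure, the assumed populations are recovered as the two leading eigenvalues, with the rest numerically zero.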
Figure 3.6. Two views of the density of an interacting 3D trapped gas at various temperatures.
The parameters are the same as in Fig. 3.3. Though this is not apparent from the figure, in the last
panel the condensate accounts only for some 10% of the atoms.
Figure 3.7. Number density of an interacting 3D trapped Bose gas at various temperatures. The
parameters and temperatures are the same as in the preceding figure; in three dimensions, N (x) =
4πx2 ñ(x). The noncondensate is clearly seen to dominate the system at the highest temperature.
Figure 3.8. Densities (top) and number densities (bottom) of a two-dimensional isotropic ideal gas
(at left) and the equivalent interacting system (at right). In both cases, N = 1000 and T = 0.7 Tc;
the interacting gas has an interaction parameter γ = 8π × 0.0043.
Figure 3.9. Eigenvalues of the one-body density matrix of ideal (white bars) and interacting (black
bars) two-dimensional trapped gases. The parameters are the same as in Fig. 3.8.
Figure 3.10. Ground state and azimuthally symmetric excited eigenfunctions (solid lines) of a
two-dimensional isotropic interacting trapped gas. The parameters are the same as in Fig. 3.9. For
comparison we show the corresponding ideal-gas eigenfunctions (dashed lines).
The open circles at the bottom show the Gauss-Laguerre points where the functions are known
exactly; the function they depict, however, is not the ground state given by the density matrix but
the condensate wavefunction that resulted from the self-consistent procedure and that we used to
calculate the matrix using Eq. (3.62). Here we can see once more the competition between confinement and interaction effects: while the wavefunctions decrease in value and are flattened at the
center of the trap, the radius of the system becomes larger; as the figure shows, both condensate
and noncondensate are affected.
3.8 Summary
Mean-field theory is the main tool we use in the study of interacting Bose-Einstein systems; it
has been used with much success in three dimensions, and this lets us perform thorough checks
on our methods of solution. Mean-field theory prescribes coupled equations for the condensate
and the rest of the system: the condensate receives a full quantum-mechanical treatment from the
beginning; the noncondensate, on the other hand, is described by various schemes of different degrees
of sophistication, some of which we introduce.
We undertake a fairly detailed analysis of the zero-temperature case, where the system is fully
condensed; not only is this a good test system in which to study the approach to the thermodynamic
limit, it is also amenable in that case to exact solution. We use it to find a criterion for the crossover
in dimensionality of the interacting gas and to obtain an expression for the interatomic coupling
constant for the two-dimensional system that results from squeezing a 3D trap.
For systems at finite temperature, we settle upon a semiclassical Hartree-Fock treatment that,
though it breaks down at high temperatures, predicts reasonable results in two dimensions. In
particular, it shows that there is no smearing in 2D.
CHAPTER 4
THE TWO-DIMENSIONAL BOSE-EINSTEIN CONDENSATE¹
One reason that impels physicists and applied mathematicians to look
at systems in less than three dimensions is that very often they are
simpler. There are two rather different reasons for this simplicity.
The first is the rather obvious reason that space in lower dimensions
is described by fewer coördinates, and so calculations are less laborious.
The second is that the topology—the structure—of lower-dimensional
spaces is simpler, and this enables various tricks to be used that cannot
be used in higher-dimensional spaces.
—D.J. Thouless [109]
4.1 Introduction
In a recent paper [30], one of the groups that pioneered the formation and detection of Bose-Einstein
condensation (BEC) in harmonically trapped atomic gases [2,6,7] reports the creation of (pseudo-)
two-dimensional condensates. These have been produced by taking a three-dimensional condensate
of ²³Na atoms and carrying out two independent processes on it: i) Initially, one of the confining
frequencies (that in the z direction, ωz , in order to minimize the effects of gravity) is increased
until the condensate radius in that dimension is smaller than the healing length associated with the
interaction between atoms (taken here to be repulsive and parametrized by the two-body scattering
length a). This is not sufficient to reduce the dimensionality, however, since the atoms, each of which
has mass m, will literally squeeze into the third dimension if there are more than

N = √(32ℏ/(225ma²)) √(ω_z³/(ω_x²ω_y²))   (4.1)
¹This chapter is a reprint of the paper “The two-dimensional Bose-Einstein condensate,” by J. P.
Fernández and W. J. Mullin, Journal of Low Temperature Physics 128, 233 (2002), and as such it
purports to be self-contained. Some repetition of previous material is therefore inevitable, and we
have kept references to other parts of the thesis at a minimum. The figures have been left untouched.
The original version of the paper contained an appendix that eventually got discarded; it reappears
here as Section 4.5.
of them in the trap. ii) Consequently, the number of atoms in the condensate must be reduced; this
is achieved by exposing the condensate to a thermal beam. The reduction in effective dimensionality
becomes apparent when the aspect ratio of the expanding condensate, which is independent of N
in 3D, starts to change as the number of atoms is gradually reduced. The condensates thus produced
have a number of atoms that ranges between 10⁴ and 10⁵.
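To get a feel for the scale of the bound (4.1), one can plug in numbers. The ²³Na mass and scattering length below are textbook values, and the trap frequencies are hypothetical, so the result only illustrates the order of magnitude, not the parameters of the experiment in Ref. 30.

```python
import math

# Illustrative evaluation of the atom-number bound (4.1) for a pancake
# trap.  The scattering length and trap frequencies are assumed values.
hbar = 1.0546e-34             # J s
m    = 23 * 1.6605e-27        # kg, 23Na
a    = 2.75e-9                # m, assumed s-wave scattering length
wz   = 2 * math.pi * 790.0    # rad/s, hypothetical tight axis
wx = wy = 2 * math.pi * 10.0  # rad/s, hypothetical loose axes

N = math.sqrt(32 * hbar / (225 * m * a**2)) * math.sqrt(wz**3 / (wx**2 * wy**2))
print(f"N ~ {N:.2e}")         # a few times 10^5 for these numbers
```

For this (assumed) aspect ratio the bound sits near the upper end of the atom numbers quoted in the text.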
This constitutes an important experimental contribution to the long-standing debate about the
existence of BEC in two dimensions. It is a well-known fact (and a standard textbook exercise)
that BEC cannot happen in a 2D homogeneous ideal gas; a rigorous mathematical theorem [27]
extends this result to the case where there are interactions between the bosons. When the system is in a harmonic trap, on the other hand, BEC can occur in two dimensions [24] below the
temperature kB Tc = ℏω√(6N/π²) introduced on page 24, but the theorem is once again valid when
interactions between the bosons are considered [28]: while there is a BEC in the 2D system, it occurs
at T = 0.
The preceding discussion is valid only in the thermodynamic limit, which in the particular case
of an isotropically trapped system consists of making N → ∞ and ω → 0 in such a way that Nω²
remains finite [28,57]. The question remains whether a phenomenon resembling BEC—that is, the
accumulation of a macroscopic number of particles in a single quantum state—occurs or not when
the system consists of a finite number of particles confined by a trap of finite frequency, as is certainly
the case in experimental situations. If there is such a phenomenon, one would like to know more
about the process by which this “condensate” is destabilized at finite temperatures as N grows.
Some authors [39,40] have considered, in the finite homogeneous 2D case, the possibility of a
BEC. A similar analysis [81] was considered for a quasi-2D trapped gas; the latter reference finds
that the phase fluctuations in the condensate vary with temperature and particle number as
⟨(δφ)²⟩ ∝ T log N,   (4.2)
which diverges for finite temperature as N → ∞, as one would expect. For finite N , on the other
hand, the fluctuations are tempered at very low temperature; since the coherence length, though
finite, is still larger than the characteristic length imposed on the system by the trap [41] or by
walls, one can speak of a “quasicondensate.” It is this quasicondensation that we wish to study in
this paper. Reference 29 reports the observation of a quasicondensate in a homogeneous system of
atomic hydrogen adsorbed on ⁴He; at this point we cannot tell if the condensates reported in Ref. 30
are actually quasicondensates.
One can approach the study of the 2D Bose gas by employing mean-field theory, which in this
context refers to the Hartree-Fock-Bogoliubov (HFB) equations and various simplifications thereof
that we will review in quantitative detail and classify in the present work. The HFB theory has
been remarkably successful in treating the 3D case; though we give a few representative references
below, we refer the reader to Ref. 54, which includes a comprehensive review and an exhaustive list
of references, and will henceforth concentrate on the work that has been carried out in 2D.
The HFB theory assumes from the outset that the system is partially condensed, and proposes
separate equations to describe the condensate and the uncondensed (thermal) component. The condensate is described by a macroscopic wavefunction that obeys a generalized Gross-Pitaevskii (GP)
equation [75,76], while the noncondensate consists of a superposition of Bogoliubov quasiparticle
and “quasihole” excitations weighted by a Bose distribution. The expression for the noncondensate
can be simplified by neglecting the quasihole excitations (the Hartree-Fock scheme) [23,103], by performing a semiclassical WKB approximation [110], or by combining both of these [69,88,102]. Each
one of the above schemes can be further simplified, when the thermodynamic limit is approached,
by neglecting the kinetic energy of the condensate: this is the Thomas-Fermi limit [77,96]. Finally
one can neglect the interactions between thermal atoms, arriving at the semi-ideal model [111,112].
The semi-ideal model has been implemented in 2D [113,114], but has been found to yield unphysical results as the interaction strength becomes sizable. The full-blown HFB model does not
appear to be much more successful, at least when considered in the semiclassical limit. In a previous
paper [43] it was found that, below a certain temperature, the introduction of a condensate in the
Thomas-Fermi limit (corresponding to the thermodynamic limit) renders the HFB equations incompatible, with the noncondensate density becoming infinite at every point in space. The singularity
occurs at the low end of the energy spectrum, indicating that the condensate is being destabilized
by long-wavelength phonons; this interpretation in terms of phase fluctuations had already been
proposed for the homogeneous system [27,115]. In this work we also report our inability to find
self-consistent semiclassical solutions to the HFB model when a finite trapped system is considered.
Moreover, it was recently discovered [42,100] that it is possible, in the 2D case exclusively, to solve
the HFB equations semiclassically at any temperature without even having to invoke the presence
of a condensate (thus obtaining what we will call an “uncondensed” solution). In other words, it is
possible to simply cross out the condensate component and solve for the system to a temperature
close to T = 0. The solution thus obtained shows an accumulation of atoms at the center of the trap
and yields a bulge in density similar to that caused by the presence of a condensate, even though
no state is macroscopically occupied.
These results appear to reinforce the conclusion that BEC cannot happen in the two-dimensional
trapped system. However, we are still confronted with the experimental results described above. Furthermore, two different Monte Carlo simulations [31,32] show significant concentrations of particles
in the lowest energy state for finite N , though it must be said that this method provides little information about the types of excitations that contribute to the disappearance of the condensate, and
that it is difficult to carry out such simulations on very large systems.
The HFB model cannot be discounted at this stage either: when we restrict ourselves to the
Hartree-Fock approximation, it is possible to find self-consistent solutions (henceforth referred to
as “condensed”) involving a condensate, both for finite systems and in the thermodynamic limit,
when using either the discrete set of equations [116]2 or the WKB approximation [117] to treat
the noncondensate. The Hartree-Fock approximation neglects phononlike excitations, so it is not
surprising that it yields solutions. However, one may be able to justify its usefulness.
It is known that, in the infinite homogeneous system, infrared singularities occur but are renormalized by interactions, providing, in essence, a cutoff at a low wavenumber k₀ ≈ √(nmU) [118–120],
where n is the density, m the particle mass, and U the effective interaction strength. Indeed, it
is possible to estimate the Berezinskii-Kosterlitz-Thouless (BKT) transition temperature by simply
cutting off the ideal-gas density expression at this k0 [119]. Presumably a similar situation occurs
in the trapped case. The Hartree-Fock approach provides a convergent theory by cutting off the
singularities at a wavenumber similar to that of more rigorous theories. Whether such an approach
gives a reasonable estimate of the BKT transition temperature, the superfluid density, or a quasicondensate density for the finite system will need to await a more rigorous theoretical approach to
the interacting 2D trapped gas [121].
Given the above limitations, we analyze the character of the BEC in the 2D trapped system by
solving the coupled equations of the theory. We find that, in the Hartree-Fock scheme, it is possible
to find both condensed and uncondensed solutions for the two-dimensional equations. We also
calculate the free energy corresponding to each one and find that the condensed solution has a lower
free energy at all temperatures, which appears to imply that, at least at this level of approximation,
the uncondensed solution is unphysical or metastable. The condensed solution will be “preferred”
over the uncondensed one, and we suspect that the solution represents an approximation to a
quasicondensate.
²The discrete set of equations was very recently solved for two dimensions in the full HFB approximation [93].
It is evident from our discussion that our approach to BEC in 2D trapped systems is a preliminary
one and that a further analysis that takes fully into account the BKT transition is necessary. A
start to answering this need has been presented in Ref. 121, and we intend to return to this problem
ourselves in the future.
4.2 The model

4.2.1 The Hartree-Fock-Bogoliubov equations
Throughout this paper we use a dimensionless system of units in which all lengths are scaled by the oscillator length x₀ ≡ √(ℏ/mω) and all energies are expressed in terms of the one-dimensional ground-state energy of the oscillator, ℏω/2. Dimensionless variables will in general carry a tilde: for example, the total density n becomes ñ ≡ x₀²n and the chemical potential µ̃ ≡ µ/(½ℏω).
The HFB equations [54,73,74,88] result from assuming that i) the (repulsive) interactions between
atoms consist exclusively of two-body low-energy collisions that can be described by a delta-function
pseudopotential [79] of strength g (related to its dimensionless counterpart γ through g ≡ ½ℏωx₀²γ),
that ii) the many-body field operator Ψ can be decomposed via

Ψ = ⟨Ψ⟩ + ψ̃ ≡ Φ̃ + ψ̃,   (4.3)
where the ensemble average Φ̃ is a real macroscopic wavefunction that describes the condensate
(reflecting the imposition of macroscopic long-range order on the condensed system), and that
iii) products of noncondensate operators can be simplified using a finite-temperature version of
Wick’s theorem [84,90]. If we insert (4.3) into the many-body grand-canonical Hamiltonian
H̃ = ∫ d²x Ψ†(Λ̃ + (γ/2)Ψ†Ψ)Ψ,   (4.4)

where Λ̃ = −∇̃² + x² − µ̃, and neglect anomalous averages via the Popov approximation [88],
we obtain an expression that can be diagonalized and yields an infinite set of coupled differential
equations.
On one hand, the macroscopic wavefunction mentioned above is the square root of the dimensionless condensate density ñ₀ and obeys the generalized Gross-Pitaevskii equation,

Λ̃Φ̃ + γ(ñ₀ + 2ñ′)Φ̃ = 0,   (4.5)

where ñ′ ≡ ñ − ñ₀ is the noncondensate density. The factor of 2 in (4.5) and hereafter is a consequence of the direct (Hartree) and exchange (Fock) terms being identical, which follows from the fact that the delta-function interaction that we are considering has zero range [74]. (The term involving the condensate does not include that factor; this is a consequence of the restricted grand-canonical ensemble that we are using. See the end of the next section.)
The noncondensate, in turn, is described by an infinite number of pairs of functions that obey [73]

⎛ Λ̃ + 2γñ    −γñ₀  ⎞ ⎛ uⱼ ⎞      ⎛  uⱼ ⎞
⎝  −γñ₀    Λ̃ + 2γñ ⎠ ⎝ vⱼ ⎠ = εⱼ ⎝ −vⱼ ⎠   (4.6)

and which generate the noncondensate density via

ñ′(x) = Σⱼ [ (|uⱼ|² + |vⱼ|²) fⱼ + |vⱼ|² ],   (4.7)
where we have introduced the Bose-Einstein distribution factor fⱼ ≡ (e^(εⱼ/t) − 1)⁻¹, which appears when the free energy of the system is minimized, and the dimensionless temperature t ≡ kB T/(½ℏω).
The last term of (4.7) describes the zero-temperature depletion of the condensate, which accounts
for less than 1% of the particles and is therefore negligible [88]. This self-consistent set of equations
is closed, and the chemical potential found, by imposing that the total number of particles remain
fixed:
N = N₀ + N′ = ∫ d²x ñ(x) = ∫ d²x (ñ₀ + ñ′).   (4.8)
At temperatures such that kB T ≫ ℏω, one can use the semiclassical (WKB) approximation [110] that results from taking uⱼ ≈ u(x)e^(iφ) and vⱼ ≈ v(x)e^(iφ). The phase common to both defines a quasiparticle momentum through κ = ∇̃φ; u, v, and κ vary sufficiently slowly with x that their spatial derivatives can be neglected [88]. The distribution factor is now fⱼ ≈ f(x, κ), infinite sums are transformed into momentum integrals—which in two dimensions can be solved in closed form [43]—and the equations become algebraic, yielding the Bogoliubov energy spectrum [88]

ε_HFB(κ, x) = √((κ² + x² + 2γñ − µ̃)² − γ²ñ₀²)   (4.9)
and the following integral expression for the noncondensate density:

ñ′(x) = (t/4π) ∫ₐ^∞ dξ/(e^ξ − 1) = −(t/4π) log(1 − e^(−a)),  where a ≡ √((x² + 2γñ − µ̃)² − γ²ñ₀²)/t.   (4.10)
At high enough temperatures there is no condensate, and the density on the left-hand side of (4.10)
is just the total density. We thus have to solve a single self-consistent equation,
ñ′(x) → ñ(x) = −(t/4π) log(1 − e^(−(x²+2γñ−µ̃)/t));   (4.11)
the chemical potential is once again calculated by requiring N to be fixed. Equation (4.11) has been
found by the authors of Ref. 42 to be soluble at all temperatures, a result that we confirm in the
present work. Reference 43 had mistakenly concluded that it was impossible to solve (4.11) below a
certain temperature.
The Hartree-Fock approximation [88,96,103] amounts to neglecting the vⱼ in (4.6) and, when combined with the semiclassical treatment, results in the disappearance of the last term within the square root of Eq. (4.9). The energy spectrum now becomes

ε_HF ≡ ε = κ² + x² + 2γñ − µ̃   (4.12)

and the noncondensate density turns into

ñ′(x) = −(t/4π) log(1 − e^(−(x²+2γñ−µ̃)/t)),   (4.13)
an expression that differs from (4.11) in that the left-hand side corresponds only to the noncondensate
density. The three-dimensional version of this equation, coupled with (4.5) and (4.8), has been
frequently used in the literature to study the three-dimensional gas. Reference 69, for example,
exhibits a detailed comparison of its predictions to those of Monte Carlo simulations and finds
excellent agreement between the two.
In two dimensions, the Hartree-Fock equations have been solved without resorting to the WKB
approximation [116]. The authors of this reference succeeded in finding self-consistent density profiles
and used them to study the temperature dependence of the condensate fraction. Our results [117],
which do take advantage of the semiclassical approximation, agree quite well with theirs (see Fig. 4.1).
4.2.2 The thermodynamic limit
When N is large we can neglect the kinetic energy of the system, which in 2D can be shown to be
of order 1/N [43], and obtain the Thomas-Fermi approximation [77]
γñ₀ = (µ̃ − x² − 2γñ′) Θ(µ̃ − x² − 2γñ′),   (4.14)
Figure 4.1. Condensate and noncondensate density profiles of a two-dimensional Bose-Einstein
gas with N = 104 and γ = 0.1 at T = 0.7 Tc , where Tc is the ideal-gas transition temperature.
The Goldman-Silvera-Leggett model (dashed lines) treats the condensate in the Thomas-Fermi
limit (4.14), while the Hartree-Fock model (full lines) uses the full Gross-Pitaevskii equation (4.5).
Both models use Eq. (4.13) to describe the noncondensate.
where Θ(x) is the Heaviside step function, introduced to ensure that the density profile is everywhere
real and positive. Also, since the thermodynamic limit requires that ω → 0, the WKB approximation
becomes rigorous and can be used with confidence. Thus we can insert expression (4.14) in the
Bogoliubov energy spectrum (4.9) and show that the latter reduces to
ε_HFB(κ, x) ≈ κ√(κ² + 2γñ₀) ≈ κ√(2γñ₀)   (4.15)
for small quasimomenta. The fact that it is linear clearly shows us that, in this approximation, the
low end of the energy spectrum corresponds to phononlike quasiparticles. Now, if we introduce (4.14)
into the noncondensate density (4.10), we can see that the argument of the logarithm vanishes at
all temperatures, making the density diverge at every point in space. The integral form of (4.10) shows that the divergence arises at its lower limit; this restates the conclusion arrived at
in Ref. 43: low-energy phonons destabilize the condensate in the two-dimensional thermodynamic
limit when the noncondensate quasiparticles obey the Bogoliubov spectrum.
In the Hartree-Fock approximation, on the other hand, the energy spectrum tends in this limit to ε ≈ κ² + γñ₀ and predicts single-particle excitations whose minimum energy is γñ₀; this can be interpreted equivalently by assigning a minimum value κ_c² = γñ₀ for the excitation quasimomentum.
This cutoff is consistent with those proposed in the past [39,40,118–120] and removes the infrared
singularity in the HFB equations.
This momentum cutoff is robust enough that it enables one to carry out Hartree-Fock calculations even in the thermodynamic limit: in fact, as was first found in Ref. 96, the introduction of
this limit actually simplifies the calculations, and it is possible to find self-consistent solutions by
simultaneously treating the noncondensate in the Hartree-Fock approximation and the condensate
in the Thomas-Fermi limit [116,117]. This model cannot provide realistic density profiles at every
point in space, since the condensate density (4.14) has a discontinuous derivative at its edge, but
predicts quite reasonable results outside of this region, as can be seen in Fig. 4.1.
4.2.3 The free energy
We have seen that in 2D the mean-field-theory BEC equations admit solutions both with and
without a condensate. The unphysical solution should be that with the highest free energy, since
equilibrium at finite temperatures occurs when the grand potential attains a minimum; in fact, the
Bose-Einstein distribution factor in Eq. (4.7) comes from minimizing this quantity [103,122], which
in our dimensionless units adopts the form
Ω = (U − µN − TS)/(½ℏω) ≡ ∫ d²x (Υ − µ̃ñ − tΣ).   (4.16)
At this point we have to keep in mind that the grand-canonical free energy is a function of µ̃, not
of N ; in order to make a meaningful comparison of these energies at the same N , then, we have
to compare the Helmholtz free energies, given by à = Ω + µ̃N . The expressions given below have
all been derived in the grand-canonical ensemble and thus contain the chemical potential; we will
eliminate this dependence on µ̃ by adding µ̃N to the expressions we obtain.
In the Hartree-Fock approximation, the grand-canonical energy density of the system is given by [88]

Υ − µ̃ñ = Φ̃(Λ̃ + (γ/2)ñ₀)Φ̃ + (1/(2π)²) ∫ d²κ ε/(e^(ε/t) − 1) − γñ′²;   (4.17)
the first term corresponds to the condensate energy, while the second one is the sum, weighted by the Bose-Einstein distribution, of the energies of the excited states; this last expression includes an extra term γñ′² that has to be subtracted explicitly, as is usually the case in Hartree-Fock calculations [84]. (See Section 4.5.) We can simplify (4.17) further by invoking Eq. (4.5):
Υ − µ̃ñ = −γñ² + (γ/2)ñ₀² + (1/(2π)²) ∫ d²κ ε/(e^(ε/t) − 1).   (4.18)
The entropy of the system can be found from the combinatorial expression [48,84]
S = −kB Σᵢ [fᵢ log fᵢ − (fᵢ + 1) log(fᵢ + 1)],   (4.19)
which in the WKB and Hartree-Fock approximations yields the entropy density [88]
tΣ = (1/(2π)²) ∫ d²κ ε/(e^(ε/t) − 1) − (t/(2π)²) ∫ d²κ log(1 − e^(−ε/t)).   (4.20)
The first term in (4.20) cancels with the last one in (4.18), and the other term can be integrated in
closed form, yielding
Ãc = µ̃N − ∫ d²x [ γ(ñ² − ½ñ₀²) + (t²/4π) g₂(e^(−(x²+2γñ−µ̃)/t)) ]   (4.21)
for the free energy of the condensed solution. As T → 0, this expression reduces to the correct
value in the homogeneous case [23]. In the trapped case, it tends to the Thomas-Fermi value given
in Eq. (3.36).
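The Bose function g₂ appearing in (4.21) is the dilogarithm, g₂(z) = Σ_{k≥1} z^k/k². A direct-summation sketch, adequate for the fugacities 0 ≤ z ≤ 1 that occur here:

```python
import math

# Bose function g2(z) = sum_{k>=1} z^k / k^2, summed directly; truncating
# at 2*10^5 terms keeps the tail error below ~5e-6 even at z = 1.
def g2(z, terms=200000):
    return sum(z**k / k**2 for k in range(1, terms + 1))

print(abs(g2(1.0) - math.pi**2 / 6) < 1e-4)   # g2(1) = zeta(2) -> True
```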
The free energy of the uncondensed solution is found by simply crossing out the condensate
density in (4.21):
Ãu = µ̃N − ∫ d²x [ γñ² + (t²/4π) g₂(e^(−(x²+2γñ−µ̃)/t)) ].   (4.22)
Equations (4.21) and (4.22), of which more-general versions are derived in Ref. 83 by a different
method [103], can be easily seen to become identical at temperatures high enough that the condensate
density can be neglected. On the other hand, their low-temperature limits differ, since Eq. (4.22)
tends to a higher value than that attained by (4.21). This can be traced back to the fact that the
grand-canonical ensemble has to be changed in order for it to correctly describe the particle-number
fluctuations at low temperatures [83,123]. Now, a 2D Bose system has a condensate at least at T = 0,
so the two free energies should coincide there; however, we must keep in mind that the semiclassical
approximation used to derive Eq. (4.22) requires that kB T À ~ω; thus the method we are using
cannot describe the appearance of the zero-temperature condensate in the uncondensed solution.
At the end of the next section we will study further consequences of this distinction.
4.3 Numerical methods and results
There is a time-honored prescription [74,88] for finding the self-consistent solution of the Hartree-Fock equations in the presence of a condensate: Initially, we assume that only the condensate is present and solve the Gross-Pitaevskii equation (4.5) for ñ′ = 0. The wavefunction and eigenvalue that result are fed into the Hartree-Fock expression for the density (4.13), which, when integrated over all space, also yields a value for the noncondensate fraction N′; one can then readjust the condensate fraction and solve the Gross-Pitaevskii equation that results. The process is then iterated until the chemical potential and the particle fractions stop changing.
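The prescription can be laid out as a schematic loop. The sketch below uses placeholder routines—`solve_gp` here just returns a Gaussian profile and a toy chemical potential, not a real Gross-Pitaevskii solver—so only the control flow, damping, and particle-number bookkeeping reflect the text.

```python
import numpy as np

def solve_gp(N0, x):
    # Placeholder for the Gross-Pitaevskii step of Eq. (4.5).
    n0 = N0 * np.exp(-x**2) / np.pi            # stand-in condensate density
    mu = 2.0 + 0.01 * np.sqrt(N0)              # stand-in eigenvalue
    return n0, mu

def noncondensate(mu, n, x, gamma, t):         # Eq. (4.13)
    arg = np.clip((x**2 + 2 * gamma * n - mu) / t, 1e-12, None)
    return -(t / (4 * np.pi)) * np.log1p(-np.exp(-arg))

def self_consistent(N, gamma, t, x, w, tol=1e-8, iters=500):
    N0, n_prime = float(N), np.zeros_like(x)
    for _ in range(iters):
        n0, mu = solve_gp(N0, x)
        n_prime = noncondensate(mu, n0 + n_prime, x, gamma, t)
        N_prime = 2 * np.pi * np.sum(w * x * n_prime)   # int d^2x n'
        N0_new = max(N - N_prime, 0.0)
        if abs(N0_new - N0) < tol * N:
            break
        N0 = 0.5 * (N0 + N0_new)               # damped update for stability
    return N0, mu

x = np.linspace(1e-3, 10.0, 400)
w = np.full_like(x, x[1] - x[0])               # crude uniform weights
N0, mu = self_consistent(1000.0, 0.1, 5.0, x, w)
print(0.0 <= N0 <= 1000.0)                     # True
```

The damped update is one common way to stabilize such fixed-point iterations; the thesis does not specify the mixing scheme, so this choice is an assumption.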
By far the most difficult part of this process is the solution of the nonlinear eigenvalue problem (4.5). Different methods exist in the literature; we have obtained identical results by solving
it as an initial-value problem [94] and, much more efficiently, by employing the method of spline
minimization [44,69]. This method uses the fact that Eq. (4.5) is the Euler-Lagrange equation that
minimizes the functional
J[Φ̃] = ∫ d²x [ (∇̃Φ̃)² + Φ̃(x² + 2γñ′(x))Φ̃ + (γ/2)Φ̃⁴ ].   (4.23)
After setting a small, nonuniform grid of fixed abscissas that represent the coördinate x, we take
the corresponding ordinates, which represent Φ̃, as the parameters to be varied until (4.23) attains
its smallest possible value. Since we only have information about the value of the function at a
discrete set of points, in order to calculate the necessary derivatives and to integrate we perform
a cubic-spline interpolation; the integral is found using ten-point Gauss-Legendre quadrature. The
minimization is carried out using the Nelder-Mead method.
We checked our code by comparing its predictions in three dimensions to previously published
results. For the case studied in Ref. 69, the condensate fractions we found differed from those in the
paper by less than one part in 10⁴. The code also reproduced previously known results for the ideal
gas, including finite-size effects [28].
Figure 4.1, already discussed above, shows one of the solutions that we have found using the
Hartree-Fock approximation. The gas has N = 104 atoms; the coupling constant γ has been chosen
so that the system has approximately the same radius as the three-dimensional gas studied in Ref. 69,
where parameters resembling those of the original JILA trap [6] are used. The system is shown at
T = 0.7 Tc , where Tc is the condensation temperature for the ideal gas. We have also found solutions
for the Goldman-Silvera-Leggett model, which corresponds to the large-N limit of the GP equation.
72
This was done by treating the problem as a simultaneous system of nonlinear equations on a uniform
grid and solving it with a least-squares method.
When N = 10⁴, as in Fig. 4.1, it is not possible to find a self-consistent solution for the condensed equations beyond T ≈ 0.8 Tc, since ε_HF becomes negative; this had already been noted in Ref. 69 for three dimensions, where it was interpreted as a finite-size effect, and occurs at even lower temperatures for 2D. The limitation becomes more severe as N increases: for N = 10⁶ we cannot find solutions beyond T ≈ 0.5 Tc. The authors of Ref. 116 report predictions at temperatures very
close to the transition by using a finite-size correction to the chemical potential [70]. Our inability to
work above certain temperatures might be a consequence of using the semiclassical approximation,
though we point out that the Popov approximation is expected to break down close to the transition
temperature [54,88].
We also found self-consistent three-dimensional solutions using the more general Bogoliubov
spectrum by applying the same method and using the Hartree-Fock solutions as a starting point
for the iteration. We find that these solutions exhibit an enhanced depletion of the condensate, in
agreement with those found by other authors [70]. It was impossible, however, to find this kind of
solution in two dimensions, even for systems with N as low as 100: after a few iterations, the chemical
potential became too large, the Bogoliubov energies became imaginary, and the noncondensate
density diverged. When the possibility of phononlike excitations is allowed, then, the condensate is
destabilized, just as had been found in the Thomas-Fermi limit [43].
The uncondensed case, on the other hand, does yield solutions under these conditions, and to it we
now turn. Equation (4.11) along with the particle-conservation condition (4.8) for all temperatures
is most easily solved by rewriting (4.11) as
Z e^{−x²/t} = 2 e^{−(π−γ)ν(x)} sinh πν(x),  (4.24)

where we have introduced the fugacity Z = e^{μ̃/t} and ν(x) ≡ 2ñ(x)/t. Given Z, t, and γ it is possible
to find ν at every point using a standard root-finding algorithm. One then wants to find the value
of Z such that the total number of particles is N . It is better, however, to write Eq. (4.24) at the
origin,
Z = 2 e^{−(π−γ)ν₀} sinh πν₀,  (4.25)
eliminate Z between (4.24) and (4.25), and solve for ν0 = ν(0), the density at the center of the trap,
using the same root finder.
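In Python the root-finding step might look as follows; the values of γ, t, and ν₀ here are purely illustrative:

```python
import numpy as np
from scipy.optimize import brentq

gamma, t, nu0 = 0.1, 2.0, 1.5        # illustrative parameters

def rhs(nu):
    """Right-hand side of Eq. (4.24), monotonically increasing in nu."""
    return 2.0 * np.exp(-(np.pi - gamma) * nu) * np.sinh(np.pi * nu)

Z = rhs(nu0)                          # Eq. (4.25): fugacity fixed by the central density

def nu_of_x(x):
    """Solve Eq. (4.24) for nu at the point x."""
    target = Z * np.exp(-x**2 / t)
    return brentq(lambda nu: rhs(nu) - target, 0.0, nu0 + 1.0)

profile = [nu_of_x(x) for x in np.linspace(0.0, 4.0, 9)]
```

Since the right-hand side vanishes at ν = 0 and increases monotonically, the bracket [0, ν₀ + 1] always contains exactly one root, and the resulting profile decreases monotonically away from the center of the trap.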
[Figure 4.2: x₀²n(x) versus x = r/x₀]
Figure 4.2. Total density profiles of a two-dimensional gas with the same parameters as in Fig. 4.1.
In this case we exhibit both the uncondensed (full line) and the condensed (dashed line) solutions;
the latter we have broken once again into its condensate and noncondensate parts (dotted lines).
In Fig. 4.2 we show the (Hartree-Fock) condensed and uncondensed solutions that we have
obtained. They are similar in shape and exhibit identical behavior for large x. The uncondensed
solution has a lower value at the origin and predicts a wider radial density profile.
We also calculated the free energy corresponding to each solution; the results are shown in
Fig. 4.3, which shows Ã/N as a function of temperature for both cases when N = 10³ and N = 10⁴. The free energies appear to coincide at high temperatures; this was to be expected, since Eqs. (4.21) and (4.22) become identical at temperatures high enough for the condensate density ñ₀
to be neglected. As for the low-temperature limit, we have already noted that the free energy of the
condensed solution tends to the value predicted by the zero-temperature Thomas-Fermi limit, while
that of the uncondensed solution tends to a higher value. This value can actually be calculated: it
is easy to show [42] that the low-T limit of Eq. (4.24) is
2γn(x) = μ̃ − x²,  (4.26)
the Thomas-Fermi limit but with γ replaced by 2γ; a larger interaction strength, as we can see
from (3.36), implies a higher free energy. Interestingly, when we compare a given Hartree-Fock
solution to an uncondensed solution with half the interaction strength, we find that the density
[Figure 4.3: Ã/N versus T/Tc for the condensed and uncondensed solutions]
Figure 4.3. Free energy per particle, Ã/N, for the condensed (dotted line) and uncondensed (full line) solutions. The curves with the higher value at T = 0 correspond to N = 10⁴ atoms, while those with the lower value at T = 0 correspond to N = 10³. The coupling constant is γ = 0.1 in both cases. The open circles on the vertical axis are the Thomas-Fermi predictions given by Eq. (3.36) for T = 0. The free energies are seen to concur in the high-temperature limit; the low-temperature limit, on the other hand, shows the discrepancy that results from restricting condensate fluctuations (see discussion after Eq. (4.22)). Note that Tc, the transition temperature for the ideal 2D trapped gas, depends on N; thus the x axis for N = 10³ and that for N = 10⁴ represent different actual temperatures.
profiles become very similar (and are indistinguishable at T = 0), while the free energies coincide
almost exactly for a wide range of temperatures; attractive as this possibility might be, however, it
has a serious flaw: the free energies start to differ as the temperature increases, where they should
coincide by definition. This tells us that the factor of 2, which turns out to be the same one discussed
after Eqs. (4.5) and (4.22), has to be retained; the WKB approximate expression for the uncondensed
state is valid only for kBT ≫ ℏω, so dropping the factor of 2 in order to match the T = 0 condensate
is invalid. (The authors of Ref. 42, in fact, omit this factor from their paper, although they address
this question in a subsequent publication [100]). It is in fact this factor that guarantees that the
uncondensed solution has a higher free energy than the condensed one at all temperatures; this,
despite our inability to find condensed solutions using the Bogoliubov energy spectrum, leads us to
conclude that, at least at this level of approximation, the uncondensed solution is unphysical and the
two-dimensional finite trapped system will exhibit some sort of condensation at finite temperature.
4.4 Conclusion
We have found solutions to the two-dimensional HFB equations in the Hartree-Fock approximation,
both for finite numbers of atoms and in the thermodynamic limit; still, when we try to go beyond this
scheme and consider the whole Bogoliubov excitation spectrum, the low end of which is described by
phonons, we are unable to find self-consistent solutions for finite—even low—values of N . We have
seen that it is possible to describe the system as having no condensate at all, but these solutions,
at least for the parameter combinations that we have studied, have a higher free energy than their
partly-condensed counterparts; this leads us to conclude that the 2D system will have some kind
of condensate at low enough temperatures, insofar as the Hartree-Fock approach can successfully
describe the quasicondensate that we expect to find in a finite system.
One could argue that the whole mean-field approach we have adopted is wrong in two dimensions,
since a condensation into a single state is being assumed from the start. However, an alternative
treatment [124] that does not make this assumption ends up with equations identical to (4.6), so
we are left none the wiser. We have also bypassed the fact that the interaction strength g is not
really constant in 2D, but rather depends logarithmically on the relative momentum [35,81]; this,
however, has been found to be of little consequence [100]. Another possibility is that condensation
occurs into a band of states, forming a “smeared” or generalized condensate [36,38]; this alternative
situation is not ruled out by the standard proof of Hohenberg’s theorem [80].
Monte Carlo simulations do seem to predict the presence of a condensate in two dimensions [32,
125]. Furthermore, two-dimensional condensates appear to have been produced in the laboratory [29,
30]. Presumably the MC simulations show both the effects of a quasicondensation of a finite system
and the BKT transition to the superfluid state. Our attempt here has been to test the possibility of
representing these Bose effects by a relatively simple set of HFB or Hartree-Fock equations. While
no solutions can be found for the HFB set, even for a finite system, the Hartree-Fock equations do
provide a description of a condensed state. We feel there is reason to believe that this description
should be a fair representation of the actual situation.
4.5 Appendix: The Hartree-Fock excess energy
The fact that the interaction energy is overcounted is solely a result of the Hartree-Fock approximation (i.e., it happens independently of the semiclassical approximation), and it occurs even in the
absence of a condensate. To simplify matters, we will prove this assertion at a temperature high
enough that the whole system can be described by the noncondensate field operator ψ̃.
The functions uj and vj are defined as the coefficients that result when the noncondensate field
operator ψ̃ is expanded in creation and annihilation operators. In the Hartree-Fock approximation, the vj are neglected and the uj can be proved to be orthonormal [73]: ∫ d²x u*j uk = δjk.
Equation (4.6) becomes
Λ̃uj + 2γ Σk fk u*k uk uj = εj uj;  (4.27)
on multiplying both sides by fj u*j, summing over j, and integrating over the whole volume we obtain

Σj fj εj = Σj fj ∫ d^σx u*j (Λ̃ + 2γ Σk fk u*k uk) uj.  (4.28)
On the other hand, we can expand ψ̃, insert it directly into the Hamiltonian (4.4), and use Wick’s
theorem [84,90] in the form
⟨α†j αk⟩ = fj δjk,   ⟨α†j α†k αl αm⟩ = fj fk (δjl δkm + δjm δkl)  (4.29)
to obtain
(U − μN)/ℏω = Σj fj ∫ d^σx u*j (Λ̃ + γ Σk fk u*k uk) uj.  (4.30)
The interaction term has clearly been overcounted in Eq. (4.28) and has to be subtracted explicitly.
At low temperatures, only a term corresponding to the noncondensate density has to be subtracted.
4.6 Summary
We apply Hartree-Fock-Bogoliubov mean-field theory to the study of a purely two-dimensional finite
trapped Bose gas at low temperatures and find that, in the Hartree-Fock approximation, the system
can be described either with or without the presence of a condensate; this continues to be true as
the system grows in size enough to have reached in practice the thermodynamic limit. Of the two
solutions, the one that includes a condensate has a lower free energy at all temperatures. However,
the Hartree-Fock scheme neglects the presence of phonons within the system, and when we allow for
the possibility of phonons we are unable to find condensed solutions; the uncondensed solutions, on
the other hand, are valid also in the latter, more general scheme, but are found to have consistency
problems of their own.
CHAPTER 5
PATH-INTEGRAL MONTE CARLO AND THE SQUEEZED INTERACTING BOSE GAS
It is worth noticing that the possibility of making a close comparison
between exact Monte Carlo simulations, experimental data, and mean-field calculations is a rather rare event in the context of interacting
many-body systems and represents a further nice feature of BEC in
traps.
—F. Dalfovo, S. Giorgini, L. P. Pitaevskiĭ, and S. Stringari [54]
5.1 Introduction
We have seen in Chapter 4 that, while the HFB equations are successful in providing a description
of the finite two-dimensional interacting trapped Bose gas, their success is not beyond doubt or
qualification. The semiclassical approximation prescribes an energy spectrum that is linear (i.e.,
phononlike) at low wavenumbers, and these long-wavelength phonons seem to destroy the long-range order of the system: We saw analytically that the low-wavenumber part of the spectrum
contributes an infrared divergence when we attempt to calculate the thermal density of the gas in
the thermodynamic limit [43], and we were unable to find a single instance where the full HFB-WKB scheme worked for a finite system. Furthermore, we confirmed that these same equations are
amenable to solution, quite robustly and at all temperatures, when the condensate density is crossed
out and the whole system is treated as an uncondensed gas [42,93,100].
On the other hand, we found that the Hartree-Fock approximation, which results from neglecting
the “quasihole” part of the Bogoliubov spectrum, provides a quasimomentum cutoff κmin = γñ₀ that
takes care of most of the problems just described and makes the condensate reappear, fortified: the
density profiles start behaving as one would want, closely resembling those in 3D, tending gently to
reasonable limits at low temperatures and high particle numbers, and having a lower free energy than
their uncondensed counterparts. Moreover, the recently discovered [93] exact solutions that do not
resort to the semiclassical approximation (and therefore reflect the true quantum-mechanical behavior of the system) have a discrete energy spectrum that admits no infrared divergences; this bolsters
our confidence in the HF solutions as a reasonable description of the two-dimensional condensed
Bose gas.
We have also calculated (at the end of Chapter 3) the off-diagonal density matrix corresponding
to these Hartree-Fock equations: The largest eigenvalue of the matrix gives a condensate number
consistent with that found directly in the course of the self-consistent procedure, and inspection of
its other eigenvalues reveals that the other states individually have negligible populations. In this
final chapter we want to see if a Bose gas confined in a trap of extreme anisotropy can be described
consistently by these two-dimensional Hartree-Fock solutions.
The ideal trapped Bose gas, as we had the chance to see in Chapter 2, obeys this very well:
We performed the thought experiment of positioning a camera along the squeezed direction and we
looked at the surface density profile that resulted from compressing the system. The ideal trapped
2D gas undergoes a condensation, and, upon compression, the 3D ideal Bose gas experiences a smooth
crossover from three-dimensional to two-dimensional behavior: The surface density evolves smoothly
into the expected 2D isotropic density profile, with the condensate density having exactly the same
shape all through the process but becoming progressively more important, and the condensate
fraction changes from a cubic to a parabolic function of temperature, exactly as one would expect. A close look at that graph
suggested the predictable result that if we squeeze a system we can make it condense at higher
temperatures.
A satisfying feature of the theory of BEC in traps is that interactions can be well accounted for,
in all their complexity, by the introduction of just one additional parameter, the s-wave scattering
length a of the atoms under study, that then enters the dynamics of the system through the coupling
constant g. In Section 3.2 we saw how the coupling constant could be expressed consistently in our
system of units, in which it appears as γ. While in three dimensions γ is clearly interpretable in
terms of a, in 2D the connection is less transparent; our results in Chapter 4, for example, were found
using a feasible but arbitrary coupling constant. However, our study of the zero-temperature gas
in Section 3.5 gave us an expression for the equivalent 2D coupling constant of a highly anisotropic
3D gas that provides the missing piece in our analysis.
In this chapter, then, we will study the compressed 3D gas, and we will do so using path-integral
Monte Carlo simulations [32,33,44,69], a method that has been very useful in the study of liquid
helium [45,126,127] and that has also been used for trapped and homogeneous [128] dilute Bose
systems. We will start by giving a review of the method and its implementation and show some
of the results we obtained for isotropic traps. We then simulate a gas of increasing anisotropy and
study the behavior of its surface density using the Hartree-Fock approximation.
5.2 Path integrals in statistical mechanics
Perhaps the main difference between this method and the mean-field theory we developed and used
in past chapters is the fact that in the Monte Carlo simulation we will not impose Bose symmetry by
invoking the grand canonical ensemble but rather by employing a trick based on our considerations in
Appendix B. In the canonical ensemble, the N -body density matrix of a system of distinguishable
atoms is related to the Hamiltonian that governs the motion through ρ = e^{−β̃H̃}. The inverse temperature β̃ is a scalar parameter, and if we break it up into summands, say β̃ = β̃₁ + β̃₂, the relation

ρ = e^{−β̃H̃} = e^{−(β̃₁+β̃₂)H̃} = e^{−β̃₁H̃} e^{−β̃₂H̃}  (5.1)
holds exactly. We can repeat this process as many times as we want, dividing β̃ into M parts—which
for convenience we will take to be equal, though they need not be—and if we interpose complete sets
of N -body position eigenkets (|Rk i, where Rk ≡ [x1,k , . . . , xN,k ]) between each pair of operators we
can write
ρ(R, R′; β̃) = ∫ d^{Nσ}R2 · · · ∫ d^{Nσ}RM ρ(R, R2; τ) · · · ρ(RM, R′; τ).  (5.2)
We can gain considerable insight into Eq. (5.2) by interpreting β̃ as the total duration of a
process.1 By dividing the time interval [0, β̃] into a series of discrete time steps, each of duration τ , we
can picture each member of the sequence R ≡ R1, R2, . . . , RM, RM+1 ≡ R′ as being an intermediate configuration (or “time slice” [45]) adopted by the system as it evolves from R1 to RM+1. Indeed,
if we make M → ∞ and τ → 0 while keeping β̃ finite, this sequence becomes a continuous path
(hence the name). The density matrix can then be interpreted as a propagator that obeys the usual
composition property [46], and Eq. (5.2) states that in its evolution the system adopts all possible
intermediate configurations; the density matrix results from summing over all the time slices; see
Fig. 5.1.
Besides providing this physical picture, the path integral also constitutes a useful calculational
tool: since β̃ ⇔ T and hence τ ≡ β̃/M ⇔ M T , each of the intermediate density matrices is to
be evaluated at a high temperature, where it is amenable to accurate analytic approximation. The
exact convolution property (5.2) conveniently lowers the temperature at each step and leaves us
at the end with the low-temperature density matrix that we desire; at no point in the process
1
Rigorously speaking, this time is imaginary [46]. Caution must be exercised when interpreting this
variable: For example, as the temperature of a system decreases, so do on average the kinetic energies
of the particles that compose it; but this means that the uncertainty over their position increases,
their wavefunctions are more spread out, and, as a consequence, they have larger displacements
over τ —they move “faster.” [45]
[Figure 5.1: two one-dimensional particle paths in imaginary time, from R1 = R at 0 to R7 = R′ at β̃ ≡ 6τ]
Figure 5.1. The density matrix as a path integral. The one-dimensional two-particle system shown
above is described by a density matrix that can be expanded into paths by means of Eq. (5.2);
the figure shows two possible paths traversed by each particle. The inverse temperature β̃ of the
system is split into M = 6 “time slices”; the density matrix at each of these corresponds to a system
temperature ∝ M T and can therefore be calculated more easily. When this is done, one must “sum”
over all possible paths. In the figure, all paths end at their starting points; these closed paths suffice
to determine the thermodynamics of the system.
do we have to make any essential approximations [45] other than those inherent in our choice of
interatomic potential. The price one pays for this convenience comes in the form of an integral in
N × M × σ dimensions.
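The composition property can be checked numerically for a single free particle, for which ρ(x, x′; τ) is known analytically. In the sketch below (working in units with ℏ = 2m = 1 is an assumption of the example), six copies of the high-temperature density matrix are composed by discretized matrix multiplication and compared with the exact result at β̃ = 6τ:

```python
import numpy as np

def rho_free(x, xp, tau):
    """Free-particle density matrix (units hbar = 2m = 1): a Gaussian in x - x'."""
    return np.exp(-(x - xp)**2 / (4.0 * tau)) / np.sqrt(4.0 * np.pi * tau)

x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
tau, M = 0.1, 6                                  # beta = M * tau
rho_tau = rho_free(x[:, None], x[None, :], tau)

rho_beta = rho_tau
for _ in range(M - 1):                           # Eq. (5.2): M - 1 convolutions
    rho_beta = rho_beta @ rho_tau * dx

exact = rho_free(x[:, None], x[None, :], M * tau)
c = slice(150, 251)                              # compare away from the grid edges
max_error = np.abs(rho_beta[c, c] - exact[c, c]).max()
```

The Riemann-sum convolution reproduces the exact low-temperature density matrix in the interior of the grid; only near the edges, where paths are artificially truncated, does the discretization show.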
The thermodynamics of the distinguishable system depends only on the diagonal (R = R′)
elements of the matrix, or, in the language of path integrals, only on paths that close upon themselves
(frequently called “closed polymers” [45]). This changes when we introduce Bose statistics into the
system, which we do by combining Eqs. (5.2) and (B.2) and rewriting the density matrix as
ρB(R, R′; β̃) = (1/N!) ΣP ρ(R, PR′; β̃)
             = (1/N!) ΣP ∫ d^{Nσ}R2 · · · ∫ d^{Nσ}RM ρ(R, R2; τ) . . . ρ(RM, PR′; τ),  (5.3)
where we have summed over all possible permutations of the indices that label the distinguishable
particles.2 The paths that yield the thermodynamic properties are now more complicated because
they are no longer closed: the system does not have to end in its initial configuration anymore, but
can do so in any other configuration where the atom labels have been permuted.
2
If N = 2, for example, we have to consider the two possibilities P R ≡ P [x1 , x2 ] = [x1 , x2 ] and
P [x1 , x2 ] = [x2 , x1 ], add them together, and divide by 2! = 2; see Appendix B for a more detailed
discussion.
This can be seen by considering a given permutation P , which, as we saw in Appendix B, can
be decomposed into cycles. The closed paths from the distinguishable system will still be trod by
the atoms whose labels are left untouched by the permutation and thus belong to 1-cycles. Any two
particles belonging to an exchange (or 2-) cycle, say 17 and 134, will have their paths intertwined:
x17 will be x134 at the end of the convolution, and x134 will become x17 , regardless of the shape
of their paths in the intervening time slices. Atoms whose labels belong to longer cycles will lie on
progressively “grand[er] ‘ring-around-the-rosy’ exchange loops” [53].
At high temperatures, the identity permutation dominates, and most atoms will be in 1-cycles;
the behavior of the Bose system will then be well described by a Maxwell-Boltzmann distribution
in which the indistinguishability of the atoms is accounted for by the Gibbs factor 1/N !. At lower
temperatures, when the de Broglie wavelength and the σth root of the inverse density become
comparable, the particles’ wavefunctions start to overlap and the longer exchange loops become progressively more important. As we saw in Chapter 2, the appearance of long permutation cycles is a
signature of BEC [44,53].
Thus the additional degree of freedom granted by the permutations gives rise to Bose-Einstein
condensation [129], and Eq. (5.3) incorporates this feature exactly. The price paid to lower the
temperature by means of the composition property is raised even further when we have to perform
an additional sum over N ! possible label reshufflings, and the task of directly evaluating the density
matrix (5.3) is not just daunting but outright impossible. However, every summand and integrand
in (5.3) is positive, and thus it is possible to sample the density matrix using the Monte Carlo
method and use it to calculate thermodynamic averages of other quantities.
5.3 The Monte Carlo method and the Metropolis algorithm
Even if it were feasible to set up and monitor a large-enough uniform grid on which to carry out the
function evaluations required for an integral like (5.3), such an effort would quickly lead nowhere.
The following simple example [130] illustrates why this is so. In a 10-point grid like
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •   (5.4)
most of the evaluations will be performed in the interior of the grid—as it should be, since in order
to be integrable over an unbounded domain the function must fall steeply to zero with increasing
values of the argument. As the dimensionality of the domain grows, the number of interior points
plummets as a fraction of the total: in two dimensions, the grid
• • • • • • • • • •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• ◦ ◦ ◦ ◦ ◦ ◦ ◦ ◦ •
• • • • • • • • • •   (5.5)
has 64 interior points and 36 endpoints. The three-dimensional grid that results from stacking
eight copies of (5.5) on top of each other and adding lids has 512 points in its interior and 488 at its
edges—almost half the total. In four dimensions the fringes already dominate, and in ten dimensions
the interior holds only 10% of the points. In general, a p-point grid has p − 2 interior points per
dimension, and the fraction of points within a σ-dimensional hypercube is
((p − 2)/p)^σ = (1 − 2/p)^σ = e^{σ log(1−2/p)},  (5.6)
a number that decays to zero exponentially as σ grows. Hence any attempt to calculate (5.3)
using a uniform grid is just a wasteful way of calculating the number zero. Gaussian quadrature
methods like those reviewed in Appendix C and used extensively in this thesis improve this situation
somewhat by using nonuniform grids and giving more weight to the interior points, but the sheer
number of dimensions involved in (5.3) renders this method useless as well.
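The point counts quoted above are easy to verify directly:

```python
# Interior and boundary point counts for a hypercubic grid with p points
# per side, reproducing the numbers in the text and the decay of Eq. (5.6).
p = 10
counts = {sigma: ((p - 2)**sigma, p**sigma - (p - 2)**sigma)
          for sigma in (2, 3, 4, 10)}
interior_fraction = {sigma: (1.0 - 2.0 / p)**sigma for sigma in counts}
```

For σ = 2 this gives (64, 36) interior and boundary points, for σ = 3 it gives (512, 488), and for σ = 10 the interior fraction has already fallen to about 10%.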
The Monte Carlo method, in which the function is evaluated at random points, has been developed to deal with this problem. The classic example of a Monte Carlo method is also the simplest:
to calculate π, we only have to enclose the unit circle in a square of side 2 and choose a million
points at random inside the square; the ratio of the number of points that also fall within the circle
to the total number of points will then be approximately π/4. Alternatively, to calculate an integral of the form ∫_a^b f(x) dx we can choose M numbers x1, . . . , xM at random in the interval [a, b]; the mean-value theorem then tells us that

∫_a^b f(x) dx ≡ (b − a)⟨f⟩ ≈ ((b − a)/M) Σ_{i=1}^{M} f(xi),  (5.7)
and we say that the function f has been “sampled” over [a, b]. The error in the approximation falls as 1/√M [131], so for M large enough Eq. (5.7) becomes a good estimate of the integral. Trivial as
it may seem from these two examples, the Monte Carlo method can be used for problems where the
domain of integration is so complicated that other methods are considerably more difficult, or even
impossible, to apply—and it is the only reliable method to calculate integrals in high-dimensional
spaces.
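The classic π estimate fits in a few lines of Python; the seed and the sample size here are arbitrary:

```python
import random

# Sample points uniformly in the square [-1, 1]^2 and count the
# fraction landing inside the unit circle.
random.seed(1)
M = 1_000_000
hits = sum(1 for _ in range(M)
           if random.uniform(-1, 1)**2 + random.uniform(-1, 1)**2 <= 1.0)
pi_estimate = 4.0 * hits / M     # statistical error ~ 1/sqrt(M), about 0.002 here
```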
Note, however, that we managed to calculate these two integrals so easily because we knew the
kind of random numbers that we had to generate: the points in the first example had to lie within
the square, and every xi in the second obeyed a ≤ xi ≤ b. Let us concentrate on this second
example, and, moreover, let us view it not as the calculation of the integral ∫_a^b f(x) dx but as that of the average value ⟨f⟩. We can rewrite Eq. (5.7) as
⟨f⟩ = (1/(b − a)) ∫_a^b f(x) dx = ∫_{−∞}^{∞} f(x) %(x) dx / ∫_{−∞}^{∞} %(x) dx,  (5.8)
where we have introduced the probability distribution
%(x) = { 1  if a ≤ x ≤ b,
       { 0  otherwise.        (5.9)
These equations can be interpreted in one of two ways. We could calculate the integral by generating
random real numbers uniformly distributed along the complete real axis (however that may be
done), sifting these through %(x), and only then calculating the average of f . It should be clear
that this process would be difficult and wasteful. On the other hand, we can go through the
process in the way we described above: We can select random numbers already distributed according
to %(x) and use that new uniform distribution to sample the function. In general, the probability
distribution % will not be uniform, and this process of importance sampling must reflect the shape of
the distribution—in other words, we must generate more random numbers (or, in higher-dimensional
spaces, configurations) lying within the intervals where %(x) is large than within the intervals where
%(x) is small.
At this point we can connect these considerations to those in the preceding section. Equation (5.8), rewritten as
⟨f⟩ = ∫ dx f(x) %(x, x) / ∫ dx %(x, x) = Tr %f / Tr %,  (5.10)
shows that it is possible to identify the average of f with an ensemble average, provided that we take
the probability distribution % to be the density matrix of the system. Thus given a large enough
number of xi we can find the ensemble average of any relevant function f by simply calculating an
arithmetic mean. The problem is reduced to finding a sufficient number of configurations, distributed
according to the density matrix, that i) will be in the region where the distribution—which, of
course, now includes the interactions—is significant and ii) will still be diverse enough to constitute
a representative sample. The high dimensionality makes it impossible to propose configurations “by
eye,” as one can easily do in the examples above, and even if that were possible we would still be
faced with the nontrivial task of normalizing the density matrix.
There is, however, a method, originally introduced by Metropolis et al. [132] to study a classical
system of interacting hard spheres, that takes care of these problems at a single stroke. Instead of
setting up a different configuration every time, we start with a plausible configuration and make
it walk through configuration space by changing it at every step, in what is known in the
literature as a Markov process.3 The Metropolis algorithm, illustrated in Fig. 5.2, consists of the
following steps:
1. Start with a random configuration C_old and calculate its probability %_old.
2. Create a new configuration C_new by displacing one or more of the atoms at random and calculate its probability %_new.
3. If %_new > %_old then assign C_old → C_new.
4. If %_new < %_old generate a random number 0 ≤ ν < 1. If ν < %_new/%_old then once again assign C_old → C_new; otherwise keep C_old but count it as a new configuration.
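The four steps translate almost verbatim into code. The sketch below samples an unnormalized one-dimensional distribution %(x) ∝ e^{−x²}, a stand-in chosen only for illustration (the thesis samples the many-body density matrix instead); note that the normalization of % is never needed:

```python
import math
import random

def rho(x):
    return math.exp(-x * x)          # unnormalized probability distribution

random.seed(0)
x, samples = 0.0, []
for _ in range(200_000):
    x_new = x + random.uniform(-1.0, 1.0)          # step 2: random displacement
    if rho(x_new) > rho(x):                        # step 3: accept uphill moves
        x = x_new
    elif random.random() < rho(x_new) / rho(x):    # step 4: Metropolis test
        x = x_new                                  # (on failure, keep x but
    samples.append(x)                              #  count it again)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean**2
# For rho ~ exp(-x^2) the sample variance should approach 1/2.
```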
To justify the Metropolis algorithm further we introduce the transition probability π(R s → Rt )
that a system will move from the configuration Rs to Rt , and the probability distribution %(Rs )
obeyed by Rs . If the system is visiting Rs at a given instant, it obviously has to have attained its
present configuration by having been in any other (not necessarily different) configuration in the
immediate past and having undergone a transition to where it is now. This can be written as
%(Rs) = Σt π(Rt → Rs) %(Rt).  (5.11)

3
We assume that transitions happen ergodically; in other words, it is possible for the system,
when it starts at a given configuration, to reach any other configuration in a finite (though not
necessarily small) number of steps.
[Figure 5.2: table of the 48 configurations generated from a string of random bits in the two-region game]
Figure 5.2. A simple game that illustrates the Metropolis algorithm. A diamond, confined in the
box shown at the top, is twice as likely to be in region 0 as in region 1 and cannot be anywhere
else; it can move to the left or right with equal probability. The second column of the table displays
a string of random bits: the first one is 0 and prescribes the initial condition; after that, a 0 will
order a move to the left and a 1 will order a move to the right, as long as it is into a region of
nonzero probability. All moves from region 1 to region 0 are accepted, while moves from region 0
to region 1 are subjected to a Metropolis test: the move is accepted if the next bit in the string
is 1 and rejected if it is 0. The third column shows the first 48 configurations: of the 16 decisions
(shown by question marks) that were taken, 9 ordered the diamond to stay in place; at the end,
the diamond is at the left in 32 configurations—exactly 32 of the total. The fourth column shows
the effect of not counting “failed” tests as new configurations: the diamond spends roughly half the
time in each region, as if the distribution were uniform everywhere. Note that these results were
obtained without ever needing to normalize the probability distribution.
In the next possible instant, the system will either stay in its present configuration or undergo a
transition to any other possible configuration; hence
Σt π(Rs → Rt) = 1.  (5.12)
Equations (5.11) and (5.12) imply that
%(Rs) Σt π(Rs → Rt) = Σt π(Rt → Rs) %(Rt),  (5.13)
which restates the definition of % as an equilibrium distribution: the rates at which the system goes
into and out of any state s should be equal [133]. A sufficient condition for (5.13) to be satisfied is
the detailed-balance condition [130,133]:
%(Rs) π(Rs → Rt) = %(Rt) π(Rt → Rs).  (5.14)
The detailed-balance condition also expresses the fact that the microscopic dynamics of the system
obeys the time-reversal invariance demanded by quantum mechanics [133]. The Metropolis algorithm
is equivalent to choosing the transition probability
π(Rs → Rt) = min(1, %(Rt)/%(Rs)).    (5.15)
It is easy enough to verify that (5.15) obeys (5.14), as can be seen from the following table [134]:
                  π(Rs → Rt)      %(Rs) π(Rs → Rt)    π(Rt → Rs)     %(Rt) π(Rt → Rs)
%(Rt) > %(Rs)     1               %(Rs)               %(Rs)/%(Rt)    %(Rs)
%(Rt) < %(Rs)     %(Rt)/%(Rs)     %(Rt)               1              %(Rt)
                                                                     (5.16)
Note that there is a nonzero probability that the system will make a transition from a more-probable
configuration (i.e., one with lower energy) to a less-probable one (with higher energy); this possibility
is, of course, not unphysical, and during the program run it will prevent the system from getting
stuck in a local energy minimum [63]. On the other hand, the inverse transitions occur with 100%
likelihood, and this encourages the system to visit more configurations and hence sample % more
efficiently.
Consider again the game we played in Fig. 5.2. The diamond can be either in region 0 or in
region 1; we do not know beforehand the absolute probability with which the diamond will be in either
of them, but we know that the configuration R0 (with the diamond at 0, that is) is twice as probable
as the configuration R1 and thus %(R0)/%(R1) = 2. At any point in the game, if the diamond is
in region 1 and is told to move to the left, it will do so with probability min(1, 2) = 1; if it is in
region 0 and is told to move to the right, the probability for it to do so is min(1, 1/2) = 1/2; hence
the choice of a random bit to make the decision for us. Equivalently, we could have chosen a real
number 0 ≤ ν < 1 at random and have the decision depend on whether the number was greater or
smaller than 1/2. Had the diamond been three times as likely to be in region 0 as in region 1, we
could have used a three-faced die instead of a coin to help us decide, or, equivalently, we could have
generated a random number and compared it to 1/3.
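The diamond game can be played in a few lines of code. The following is an illustrative Python sketch (the thesis programs were written in Matlab, and the function name and structure here are ours, not the actual routines); it implements the acceptance rule (5.15) with %(R0)/%(R1) = 2.

```python
import random

def metropolis_game(steps, seed=0):
    """Toy version of the diamond game of Fig. 5.2: region 0 is twice
    as probable as region 1, and the diamond cannot leave {0, 1}."""
    rho = {0: 2.0, 1: 1.0}              # unnormalized probabilities
    rng = random.Random(seed)
    pos = 0                             # initial condition: diamond at 0
    counts = {0: 0, 1: 0}
    for _ in range(steps):
        trial = pos + rng.choice([-1, +1])   # propose left or right
        if trial in rho:                     # forbidden regions always fail
            # Metropolis test: accept with probability min(1, rho_t/rho_s)
            if rng.random() < min(1.0, rho[trial] / rho[pos]):
                pos = trial
        counts[pos] += 1                # rejected moves are counted again
    return counts

counts = metropolis_game(100_000)
```

After many steps the diamond spends about two thirds of its time in region 0, as in the figure, even though the probability distribution was never normalized.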
Now, suppose we perform many measurements in parallel on a large number of systems that are
in equilibrium and far away from each other. Consider any two states s and t, with %(Rs) < %(Rt)
for definiteness. In a given move, we will have Ns π(Rs → Rt) = Ns systems going from s to t and
Nt π(Rt → Rs) = Nt %(Rs)/%(Rt) going from t to s, so the net number of systems that undergo the
transition Rt → Rs is

Nt→s = Nt π(Rt → Rs) − Ns π(Rs → Rt) = Nt (%(Rs)/%(Rt) − Ns/Nt).    (5.17)
It is clear that Nt→s will be positive until Ns /Nt exceeds %(Rs )/%(Rt ), at which point it will become
negative; as more moves take place, the ratio Ns /Nt will oscillate about %(Rs )/%(Rt ) until equilibrium
is reached, at which point the ratio Ns /Nt for every s and t will have precisely the value required by
the equilibrium distribution %. We would have reached the same conclusion by looking at the net
number of systems going in the opposite direction:
Ns→t = Ns (%(Rs)/%(Rt)) (%(Rt)/%(Rs) − Nt/Ns).    (5.18)
At this point we should note that, since we are working in the canonical ensemble, and using at every
step a system of distinguishable particles, at equilibrium we should reach the Gibbs distribution,
%(Rs ) ∝ exp(−β̃E(Rs )), where E(Rs ) is the energy associated with the configuration Rs . An
additional important feature of the Metropolis algorithm is that we do not need to calculate the
total energy at every step, but rather the change in energy elicited by the transition:
%(Rs)/%(Rt) = exp(−β̃(E(Rs) − E(Rt))) ≡ exp(−β̃ ΔE).    (5.19)
In our simulations we will not consider energy differences but ratios of density matrices; Eq. (5.19)
tells us that whenever we make a Metropolis decision we need to take into account only the parts
that changed when we generated the trial configuration by displacing the original one. This greatly
reduces the number of computations that we must perform and makes the simulations run significantly faster.
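Equation (5.19) translates into a one-line acceptance test. The sketch below is an illustrative Python version (not the thesis code, which compares ratios of density matrices rather than energies):

```python
import math, random

def metropolis_accept(delta_E, beta, rng=random):
    """Accept a trial move with probability min(1, exp(-beta * delta_E)).
    Only the energy CHANGE enters, per Eq. (5.19); neither the total
    energy nor the normalization of rho is ever needed."""
    if delta_E <= 0.0:
        return True                     # downhill moves are always accepted
    return rng.random() < math.exp(-beta * delta_E)
```

For an uphill move with β ΔE = 1 the long-run acceptance rate approaches e⁻¹ ≈ 0.37, which is why the system can escape local energy minima.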
The reader might have noted that in our previous discussion of the Metropolis game we did not
mention the moves (into “forbidden” regions) that always ended with the diamond staying in place.
These can, of course, be interpreted as moves into regions with zero probability that invariably failed
the Metropolis test. Though we did not include these moves in the calculation, we still obtained the
expected results because the Metropolis algorithm depends only upon probability ratios. We cannot,
however, dismiss these failed tests as a mere waste of bits because moves almost as improbable will
always be present; but we can try to avoid them.
To that end, we observe that the probability that a configuration Rs will undergo a transition into
another configuration Rt is in fact a product of two factors: on the one hand, there is the probability
that a given transition will be attempted; on the other, there is the probability that the transition,
once proposed, will actually take place. While the latter is still fixed by %, we can make use of
everything we know about the system to try to tweak the former in order to enhance the occurrence
of transitions and make the system walk faster through configuration space.
Figure 5.3 on the next page shows the result of playing the simple game of Fig. 5.2 using better
selection probabilities. Region 0 is still twice as likely to be visited by the diamond as region 1,
but now we have an additional mechanism that pushes the diamond away from the fringes and into
the region of interest, guaranteeing a richer sample of relevant configurations. (In fact, we could
have played the game, and obtained the same results more cheaply, with an additional rule that
would make it impossible for the diamond to even attempt moves outside of regions 0 and 1; this,
however, is possible only if we know everything about the system.) The PIMC algorithm that we
now describe makes extensive use of these ideas.
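This splitting of the transition probability into an attempt part and an acceptance part leads to the generalized Metropolis test, in which the a priori selection probabilities enter the acceptance ratio. A minimal Python sketch of this standard rule (the notation A(s → t) for the probability of proposing the move s → t is ours):

```python
import random

def generalized_accept(rho_s, rho_t, a_st, a_ts, rng=random):
    """Generalized Metropolis test: a_st is the a priori probability of
    proposing the move s -> t and a_ts that of proposing t -> s.
    Detailed balance, Eq. (5.14), is preserved if the proposed move is
    accepted with probability min(1, a_ts*rho_t / (a_st*rho_s))."""
    return rng.random() < min(1.0, (a_ts * rho_t) / (a_st * rho_s))
```

With symmetric proposals (a_st = a_ts) this reduces to the plain rule (5.15); a cleverly biased proposal that favors the regions of interest can bring the acceptance ratio close to 1.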
5.4
An algorithm for PIMC simulations of trapped bosons
Research papers on computational methods usually devote most of their allotted space to the fundamental ideas behind the algorithms they discuss and very seldom give insight into the details of
their calculations. It is virtually impossible for the nonexpert to reconstruct one of those programs
just from studying the papers, and most textbooks on Monte Carlo methods deal almost exclusively
Figure 5.3. A variation on the game of Fig. 5.2 that shows the effect of choosing better a priori
probabilities. The rules are the same as before, but now there are two kinds of random numbers.
The integers 0–2 order the diamond to move in such a way that it tends to be pushed away from the
edges and into the regions with finite probability: when it is in region 0, the number 0 will order it to
move to the left, while both 1 and 2 order a move to the right; when it is in region 1, both 0 and 1
will order a move to the left, while 2 will make it go right. The random bits, now called + and −,
prescribe the initial condition and decide the Metropolis tests. The diamond is at the left in 32
configurations once again, but now there are 25% more Metropolis decisions, and the diamond moves
around much more often; had we had more than just two regions, we would have managed to sample
more “admissible” configurations instead of staying at the edges.
with lattice models [130,133] or one-body systems [131]. This section hopes to fill that void in the
literature by providing a “narrative” description of the PIMC simulations of trapped bosons pioneered by W. Krauth [32,33,44,69]; the more technical aspects of the calculation will be relegated
to Appendix D.
The fundamental parameters in the simulation are N, the number of atoms in the gas, a0, the
hard-sphere radius, and σ, the dimensionality of the system. We have seen in Section 5.2 that the
inverse temperature β̃ and the number of slices M are not independent, but are instead related
through β̃ = Mτ. The time step τ is of fundamental importance: it has to be small enough to
guarantee that the high-temperature density matrix is accurate, and at the same time it has to be
large enough to ensure manageable array sizes and reasonable run times. In his paper [44], Krauth
recommends τ = 0.01 for N = 10^4, a value arrived at after “extensive tests.” We shall use that
value for that particle number, and we will increase it to τ = 0.02 for N = 10^3, where the densities
attained by the system are much lower (we will later see that this value is indeed acceptable).
Since τ is fixed, the temperature enters the program in the number of slices. (One of the virtues
of the program is precisely the large size of τ: the critical temperature for 10^4 atoms is reached
with only five slices.) The anisotropy factor λ is the last fundamental parameter we need; it enters
the simulation when we set up the initial configuration, when we generate new configurations, and
during the calculation of interaction corrections. In all of these instances, λ will appear as part
of an ideal-gas density matrix, whose general form in the presence of an anisotropic trap has been
displayed in Section 2.5 and Appendix A; further details are presented in Appendix D.
The high-temperature density matrices will be calculated by means of a pair-product approximation [135],

ρ(R, R′) ≈ ∏_i ρ1(xi, x′i; τ) ∏_{i<j} ρ2(xi, xj, x′i, x′j; τ) / [ρ1(xi, x′i; τ) ρ1(xj, x′j; τ)],    (5.20)
where ρ1 stands for a one-body density matrix and ρ2 is a two-body density matrix. This approximation treats two-body collisions exactly but neglects three- and higher-order processes (which should
be rare in a dilute gas at the high temperature prescribed by a small-enough value of τ [45,136]).
Even this simplified description, however, requires the laborious enumeration of interacting pairs
and the detailed collision analysis that usually constitute the bulk of the work done in an N -body
simulation.
For a short-range potential like the one we consider, it is clear that most collisions will have
negligible effects; in fact, it is possible to find a maximum interparticle separation—an “effective
range”—beyond which the interaction becomes unimportant. This parameter must also be found
Figure 5.4. Boxes and interactions. When given an atom number, the routine Interboxes finds
the box (of side L) to which it belongs and scours the neighboring boxes searching for other atoms
that might be close enough to it for the two to interact. As the figure shows, the search can involve
between one (2121) and four (6068, 8214) boxes in two dimensions, even though each box has eight
neighbors; the variable Range defines the circles’ radii. In general, we need to look at 3^σ boxes, of
which 2^σ will contain potential pairs. In the case depicted here there is only one interacting pair,
with the atoms belonging to different boxes. The box labels are consistent with a 20 × 20 mesh in
two dimensions, while the particle labels are typical of a 10000-atom gas; it should be obvious that
the figure shows only a minute fraction of the atoms that would be present in a realistic simulation.
by hand by studying the two-body density matrix quantitatively. In the end it turns out to have a
value Range = 0.2.
The enumeration of pairs can be streamlined significantly through the introduction of a σ-dimensional
mesh that divides the system into boxes larger than the interaction range; more precisely,
the cube side L must obey L > 2 × Range so that an atom located at the very center of one such
box will have a sphere of radius Range around it that will be completely contained by the cube.
(Krauth’s version has Nbox = 20 boxes per dimension for a total of 8000 boxes; every interior box
has L = 4/9 ≈ 0.44, while those on the edges extend outward to infinity.) That way, instead of going
through the complete list of atoms to see if they interact with a given particle, we only have to search
within the box that contains the particle in question and within the neighboring boxes; in fact, we
don’t even have to look at all the adjoining boxes, as Fig. 5.4 shows. A list of boxes is generated (by
a subroutine called Interboxes) by literally drawing a sphere of radius Range around each atom at
every slice and seeing what boxes are contained within each sphere; an important auxiliary array
keeps tally of the boxes that have to be considered at every step.
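The box bookkeeping can be illustrated with a toy two-dimensional cell list. This Python sketch is ours, not the actual Interboxes routine; it maps coordinates to boxes of side L (clamping to the edge boxes, a stand-in for the boxes that extend to infinity) and restricts the neighbor search to the 3^σ = 9 surrounding boxes.

```python
import math

def box_index(x, L, nbox):
    """Map a coordinate to its box along one axis; points beyond the
    mesh are clamped to the edge boxes."""
    return min(max(int(x // L), 0), nbox - 1)

def build_boxes(positions, L, nbox):
    """Assign every atom to a box of a 2D mesh (a toy stand-in for the
    Ibox array at one time slice); returns {(bx, by): [atom indices]}."""
    boxes = {}
    for i, (x, y) in enumerate(positions):
        key = (box_index(x, L, nbox), box_index(y, L, nbox))
        boxes.setdefault(key, []).append(i)
    return boxes

def neighbors(i, positions, boxes, L, nbox, rng_cut):
    """Find atoms within rng_cut of atom i by scanning only the nine
    surrounding boxes instead of the full atom list."""
    bx, by = (box_index(positions[i][0], L, nbox),
              box_index(positions[i][1], L, nbox))
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in boxes.get((bx + dx, by + dy), []):
                if j != i and math.dist(positions[i], positions[j]) < rng_cut:
                    found.append(j)
    return found
```

Because each box is wider than the interaction range, any pair closer than rng_cut is guaranteed to lie in the same or adjacent boxes, so no interacting pair is missed.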
With these considerations in mind, we are ready to introduce the arrays whose evolution constitutes
the simulation. The first one, Rinner, is an N × σ × (M + 1) array that contains the system
configuration at every time slice at any given point in the run; Ibox, of dimension Nbox^σ × Maxbox ×
(M + 1), where Maxbox is a large number (or can be left to vary if the programming language allows
for dynamic resizing of arrays), keeps track of the atoms contained in each box at each time slice.
Finally, Iperm keeps at every step the permutation that characterizes the system (along with its
inverse), and obviously has dimension N × 2.
At the beginning of a typical run we assign the initial positions of the atoms by sampling the
diagonal elements of the density matrix for distinguishable particles,
ρ(x) ∝ exp(−(ξ² tanh β̃ + λη² tanh λβ̃)).    (5.21)
With β̃ and λ as parameters, and given the Gaussian form of (5.21), it suffices to generate three
normally distributed random numbers per atom, one for each direction, centered on the origin
and with standard deviations (2 tanh β̃)^(−1/2) for x and y and (2λ tanh λβ̃)^(−1/2) for z. This array is
replicated in the slice direction in order to initialize Rinner. A call to the boxing routine Interboxes
assigns a box number to every atom, and a special-purpose routine creates the list of atoms contained
in each box. The permutation of atom labels is initially taken to be the identity.
The program then walks the system through configuration space by generating moves, subjecting
the moved configurations to Metropolis tests, and taking data. There were usually between
10^5 and 10^6 such steps in a run; as in all other Monte Carlo simulations, it is necessary to “equilibrate”
the system to make it lose memory of the initial, unrealistic configuration: we usually did
some 10^5 of these “thermalizing” moves. Each move consists of a configuration shift followed by
several permutation moves. We will start by discussing the former.
To increase the probability of acceptance, we move only particles that belong to the same permutation cycle (they are already located close to each other and possibly quite entangled). The
program chooses an atom at random and finds the cycle to which it belongs and the length of that
cycle; our knowledge of the global permutation makes this straightforward. In order to make moves
that will stand a chance of being accepted while walking the system through its enormous domain
at a brisk enough pace, it is necessary to have a maximum number of atoms that can be moved in a
given step. This maximum number can only be determined by trial and error, and a value Lmax = 5
has been found to be appropriate.
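Finding the cycle that contains a chosen atom is a matter of following the stored permutation until it closes. A toy Python version (a stand-in for the lookup done with Iperm and its inverse):

```python
def cycle_of(atom, perm):
    """Follow the permutation (perm[i] is the atom whose path the path
    of atom i connects to at the last time slice) until the cycle
    closes; returns the members of the cycle containing `atom`."""
    cycle = [atom]
    nxt = perm[atom]
    while nxt != atom:
        cycle.append(nxt)
        nxt = perm[nxt]
    return cycle
```

The length of the returned list is the cycle length Len that decides, against Lmax, which kind of move is attempted.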
Two things can happen: either the cycle length Len ≤ Lmax or Len > Lmax. When Len > Lmax we
move only part of a cycle and leave its endpoints untouched, since they are connected to the atoms
“right before” and “right after” in the closed cycle (this is illustrated in Fig. 5.5). When Len ≤ Lmax
we move the whole cycle to a completely new random position: the endpoints are relocated using
Figure 5.5. The two steps involved in moving part of a cycle. Initially, the atoms in the cycle
segment (in this case just one) are displaced to new random positions obeying the distribution (D.15)
dictated by the endpoints (marked by wedges). In the second step, the intermediate time slices are
displaced using the same distribution (dictated now by the positions just found) and threaded in.
The endpoints are unaffected.
Figure 5.6. Moving a complete cycle. To move the complete 3-cycle shown above we proceed as
in Fig. 5.5, but first we have to move the endpoints to a completely new random position; when
doing this we must take into account that a 3-cycle at inverse temperature β̃ (left) is equivalent to
a 1-cycle at inverse temperature 3β̃ (right).
Figure 5.7. Interactions between particles of changing identity. The figure shows a four-particle
gas in a configuration described by the permutation {1, 2, 3, 4} → {1, 3, 2, 4} or (1)(2, 3)(4); we have
taken the temperature to be so high that only one time slice is necessary. At the beginning of the
path, atom 2 interacts with atom 1 and atom 3 interacts with atom 4; at the end, by virtue of the
exchange cycle, atom 2 interacts with atom 4 instead, and atom 3 with atom 1; this possibility has
to be taken into account during every move.
the ideal-gas routine from the start of the run while keeping in mind that the particle belongs to an
l-cycle; in other words, its probability distribution is now governed by the partition function Z1(lβ̃),
as we saw in Appendix B, and hence we have to evaluate the standard deviations at an inverse
temperature lβ̃. (Figure 5.6 illustrates the equivalence; see also Ref. 136.)⁴
In both instances we thread in the positions of the intermediate particles using the interpolation
algorithm described in Appendix D and illustrated in Fig. D.1. Since the particles have moved, it
is quite likely that a few of them are now located in
new boxes; Interboxes updates this information at the end of every move.
When a move is proposed it is subjected to the Metropolis test. It is at this stage that the
interactions enter the simulation. Recalling Eq. (5.20), we see that at a given time slice k the density
matrix to be calculated is bracketed between the configuration Rk and Rk+1 , the configuration at
the next time slice; the typical term will then look like this:
ρ(Rk, Rk+1; τ) = ∏_i ρ1(xi,k, xi,k+1; τ) × ∏_{i<j} Ξ(xi,k − xj,k) [ρHC,1/2(xi,k − xj,k, xi,k+1 − xj,k+1; τ) / ρHO,1/2(xi,k − xj,k, xi,k+1 − xj,k+1; τ)] Ξ(xi,k+1 − xj,k+1),    (5.22)
where we have used the notation of Appendix D. We need only to calculate the change in energy
as prescribed by Eq. (5.19), so for the calculation of each energy it suffices to consider only the
particles belonging to the cycle that was moved. The problem is then reduced to finding the pairs of
particles that interact with those particles at each time slice, a task simplified and hastened by our
knowledge of the boxes containing each atom, evaluating the pair-product density matrices in (5.22)
using the expressions derived in Appendix D, and calculating the ratios.
There are two subtleties that have to be taken care of during this procedure. The first one is
rather obvious: if both members of the interacting pair are part of the cycle that was moved, their
interaction will be counted twice (in other words, both (i, j) and (j, i) will appear as interacting
pairs at the end of the search), and it is therefore necessary to either discard one of the pairs or
divide both interactions by 2. The second one has to do with the fact that, in the last time slice,
⁴ Note that the higher inverse temperature implies a smaller standard deviation in the diagonal
density matrix; this feature is responsible for the density enhancement at the center of the trap that
characterizes our spatial BEC.
each particle has become the next particle in the cycle, and therefore we have to find the particles
that interact with that one, as Fig. 5.7 illustrates.
At the end of the Metropolis test, the new configuration will either be accepted or rejected;
in the latter case, we have to restore the earlier configuration and count it again. After the
configuration move, the program then tries Nperm = 10 permutation moves, which consist simply of
swapping the positions of two of the atoms (in the last slice, as Eq. (5.3) prescribes) and interpolating
their positions in the intermediate time slices (this is necessary because otherwise the resulting
configurations would be highly improbable and almost invariably rejected [33,45]). To increase the
probability of acceptance, only permutations between atoms located in the same box are attempted.
The Metropolis test happens in two stages: in the first, the program calculates the ideal-gas density
matrices in the old and new configurations and compares them; if the configuration passes this
“ideal” test, then the move is treated exactly as the configuration moves described above.
In this thesis we have been interested in calculating two quantities: the density of the condensed
gas and the condensate fraction. The one-particle density of the system is given by
ñ(x) = ⟨Σ_{i=1}^{N} δ^(3)(x − xi)⟩ = (1/M) Σ_{j=1}^{M} Σ_{i=1}^{N} δ^(3)(x − xi,j),    (5.23)
where M refers, as in Eq. (5.7), to the number of configurations that we sampled, not the number
of slices. To find the density we proceed as in Eq. (5.10): at every step we pick a particle at random
and record its coördinates, and, at the end of the run, we are left with a representative sample of
the particles’ positions, which we can use to calculate the density by generating histograms. Every
certain number of steps (100 in this case), the program also looks at the permutation characterizing
the system, finds its cycle structure, and records the length of the various cycles; the cycle structure
can be used to determine the condensate fraction.
In an isotropic trap, the most direct and accurate way of determining the density is through the
radial number density N (x), defined by
N = ∫ d^σx ñ(x) = ∫ dx N(x),    (5.24)
that we introduced in Fig. 2.1. As we may recall, N(x) = 2πx ñ(x) in 2D, while in 3D N(x) =
4πx² ñ(x). This quantity is extracted from the Monte Carlo simulation simply by counting how many
atoms are at a given distance from the origin; the corresponding histogram yields N(x) directly. For
the anisotropic trap, we instead consider the surface number density, defined in Eq. (2.43), which is
extracted by counting how many atoms are at a given transverse distance from the origin.
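The histogramming step can be sketched as follows; this Python toy (names ours, not the thesis code) bins sampled 2D positions into the radial number density N(x)/N by counting atoms at a given distance from the origin:

```python
import math

def radial_density(samples, bins, xmax):
    """Histogram sampled 2D positions into the radial number density
    N(x)/N of Eq. (5.24): counts per radial shell, normalized so that,
    when every sample falls inside xmax, the histogram times the bin
    width sums to 1."""
    dx = xmax / bins
    hist = [0] * bins
    for (x, y) in samples:
        r = math.hypot(x, y)
        if r < xmax:
            hist[int(r / dx)] += 1
    norm = len(samples) * dx
    return [h / norm for h in hist]
```

The surface number density of the anisotropic trap is obtained the same way, with r replaced by the transverse distance ξ.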
Krauth’s original method of extracting the condensate fraction [44] uses the fact—already mentioned in Chapter 2—that, even though the condensate contains particles in cycles of all lengths,
the longest cycles must contain condensed atoms exclusively. Recalling our identification on page 95
of an l-cycle at inverse temperature β̃ with a single particle at inverse temperature lβ̃, we see that
such a cycle will be distributed, in the ideal gas, according to
ρ(x) ∝ exp(−x² tanh lβ̃),    (5.25)
and for lβ̃ ≫ 1 such a cycle will be in the oscillator ground state given by Eq. (2.12) [44]. Another
method [33,137] uses the cumulative distribution of atoms as a function of the cycle lengths; we will
explain it in more detail when we discuss particular cases in the next section.
5.5
The anisotropic interacting gas at finite temperature
We begin, naturally, by studying the ideal gas, which as usual will provide a check for the programs
and for our choice of parameters.
Figure 5.8 on the next page shows the number densities for a two-dimensional ideal gas of N =
1000 atoms at T ≈ 0.4056 Tc. The solid lines depict the Monte Carlo results obtained with τ = 0.02
and five slices after 8 × 10^5 steps, of which 6.5 × 10^5 were used for equilibration. The dashed lines
show the exact results obtained using the methods of Chapter 2.
In Fig. 5.9 we see how we can extract the condensate fraction from this Monte Carlo simulation.
The figure shows a sampling, averaged over many configurations, of the cumulative distribution of
particles as a function of the lengths of the cycles to which they belong; this function, defined below
and denoted by q(l), is provided directly by the simulation, as we already saw. From Eq. (B.6)
we know that every single configuration of the system may be characterized by a permutation that
can in turn be decomposed into c1 1-cycles, c2 2-cycles, etc.; all cycle lengths and numbers are
constrained by the general relation
(1/N) Σ_{l=1}^{N} l cl = 1 ≡ q(N).    (5.26)
Moreover, as we saw at the end of Section 5.2, the occurrence of BEC is signalled by the appearance
of nonzero cl ’s for high values of l. In the plot we show, averaged over many configurations, the
Figure 5.8. Monte Carlo and exact number densities of an ideal two-dimensional trapped gas of
N = 1000 atoms at T ≈ 0.41 Tc . The solid line shows the result of a Monte Carlo simulation, while
the dashed lines show the exact number density and its condensate and noncondensate components.
Figure 5.9. Extracting the condensate number from a PIMC simulation. The gas shown above has
the same parameters as that of Fig. 5.8. The solid line depicts a configuration average of the function
q(l), which tells us how many atoms (divided by N ) are in permutation cycles of lengths 1, 2, . . . , l.
We can either take N0 to be the highest cycle length with q(l) = 1 or we can perform a linear fit
(dashed line) and extrapolate the line to 1; the inset zooms in on the intersection.
function q(l) = Σ_{l′=1}^{l} l′ cl′ /N, which tells us how many atoms (divided by N) are in permutation
cycles of lengths 1, 2, . . . , l. This cumulative distribution smooths out statistical variations [137] and
is normalized to 1, all of which make it better to work with than the plain cl’s; moreover, it yields
two different estimates for N0 :
Krauth’s method, justified at the end of the last section, assumes that N0 is the smallest value
of l such that q(l) = 1—in other words, the longest cycle containing particles; this is easily found
by inspection. The other estimate for N0 results from fitting the large linear section of the plot and
extrapolating to the value where this line would be 1. The inset shows the intersection point and
its immediate vicinity: Krauth’s method gives N0 = 800 for these parameters, while extrapolation
predicts N0 = 822; the exact value is N0 = 812, and lies almost exactly in the middle between the
two estimates.
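Both estimates start from the cumulative distribution q(l). The following Python sketch (helper names ours) builds q(l) from recorded cycle numbers c_l and applies Krauth's criterion:

```python
def q_of_l(cycle_counts, N):
    """Cumulative cycle distribution q(l) = (1/N) * sum_{l'<=l} l'*c_{l'},
    with cycle_counts[l] = c_l (possibly averaged over configurations).
    By Eq. (5.26), q(N) = 1."""
    q, running = [], 0.0
    for l in range(1, N + 1):
        running += l * cycle_counts.get(l, 0.0) / N
        q.append(running)
    return q

def n0_krauth(q):
    """Krauth's estimate: the smallest l with q(l) = 1, i.e. the length
    of the longest occupied cycle (an upper bound on N0)."""
    for l, ql in enumerate(q, start=1):
        if ql >= 1.0 - 1e-9:
            return l
    return len(q)
```

The extrapolation estimate would instead fit the long linear section of q(l) and read off where the fitted line reaches 1.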
We next study the behavior of the condensed ideal gas with increasing trap anisotropy. Figure 5.10 on page 101 shows the Monte Carlo solutions we obtained and compares them with the
exact results. The figure is similar to Fig. 2.12; as in that one, we have N = 100, though now the
range of compression factors is somewhat smaller; the temperature, T = 1.3 Tc^(3), is a bit lower as
well, but the gas is still uncondensed in the isotropic trap and clearly shows the appearance and
growth of the condensate as a result of the increase in trapping frequency. A total of 3 × 10^5 steps,
of which 2 × 10^5 were required for thermalization, were needed to produce the jagged lines that
represent the Monte Carlo results (the simulation used τ = 0.01762 and attained the given temperature
using ten slices). Note that the equilibration is essentially complete for the smallest and
largest anisotropy ratios, while a few tens of thousands of additional steps would have yielded more
satisfactory profiles for the intermediate values. The smooth lines represent, as usual, the exact
solutions for the surface number densities, resolved into condensate and noncondensate components.
Furthermore, Fig. 5.11 compares the surface number density of the gas with the highest anisotropy to
the exact prediction of Eq. (2.43) and to the number density profile of an isotropic two-dimensional
ideal gas at the temperature T = 0.7279 Tc^(2) predicted by Eq. (2.39); as expected, the two analytic
expressions give indistinguishable results, and agree quite well with the Monte Carlo histogram.
The corresponding condensate fractions can be extracted from the plots shown in Fig. 5.12. This
figure, which corresponds to a progressively compressed gas with the same parameters as Fig. 5.10,
should be interpreted in the same way as Fig. 5.9. The inset zooms in on the region of interest,
and the four numbers on the x-axes represent the exact predictions for the condensate fractions
corresponding to λ = 1, 5, 15, and 50. The number of atoms is much smaller than in that figure,
and it does not make sense to make a linear fit; to find the condensate fraction in this case we
must then content ourselves with the upper bound given by Krauth’s method, though, as the figure
shows, even a crude fit would provide a reasonable lower bound (except for the totally uncondensed
isotropic gas).
Figure 5.13 on page 102 displays, as a function of the increasing anisotropy parameter λ, the
aspect ratio (defined, as we recall from Eq. (2.44), by p ≡ √(⟨ξ²⟩/2⟨η²⟩)) of a gas of N = 1000 atoms at
a temperature T = 0.5316 Tc^(3). The circles show the ideal-gas aspect ratios obtained by PIMC simulation,
while the solid line corresponds to the ideal-gas prediction (2.44). The dotted line displays
the aspect ratio √λ of a pure condensate, and describes the system well at high compression. The
good agreement between the Monte Carlo data and the exact prediction shows that the value τ = 0.02
employed in the former is appropriate for this set of parameters. The diamonds exhibit the aspect
ratio of an interacting gas (discussed below) with the same parameters and the usual coupling constant
corresponding to rubidium; as expected, the aspect ratio of the interacting gas grows much
faster with λ.
We continue our study of the interacting gas by looking at the density profile of the large three-dimensional trapped system at finite temperature that we studied in Chapter 3. Figure 5.14 on
page 103 is in fact two versions of the same plot: the top panel is Fig. 2 of Ref. 69, which we used
as a guide and as a check on most of the methods that we developed during the research reported
in this thesis; by reproducing it on the bottom panel we have obtained a final check that all of
our algorithms are consistent with each other and work correctly.⁵ Both panels show the density
profiles of an ideal and an interacting 3D gas of 10^4 rubidium atoms at a temperature T = 0.705 Tc
calculated using Monte Carlo simulations (solid lines); the dashed lines display the exact solution for
the ideal case and the Hartree-Fock approximation to the interacting system. To avoid clutter, we
have not resolved any density into condensate and noncondensate. As we have seen in the previous
chapters, and now confirm with this new method, the repulsive interactions lower the density at the
center of the trap, deplete the condensate, and widen the particle distribution.
Some authors of competing PIMC simulations of bosons [31,125] report having spent approximately 50 hours of CPU time to obtain a single condensate fraction for N = 1000. Krauth’s
⁵ This reassurance is reinforced by the title of that article: “Precision Monte Carlo test of the
Hartree-Fock approximation for a trapped Bose gas.” Note that the authors of the reference do
not include a Monte Carlo simulation of the ideal gas in their figure; this latter profile is not
completely resolved after sampling 5 × 10^5 configurations, even though the first 3 × 10^5 steps were
used exclusively to make the system lose memory of its initial, distinguishable distribution, and even
though the interacting gas has converged to satisfaction with the same number of steps.
Figure 5.10. Surface number density of a condensed ideal Bose gas of varying anisotropy. The
gas has N = 100 atoms and is at a temperature T = 1.3 Tc . The jagged lines show the Monte
Carlo results, while the smooth lines represent the exact solutions, resolved into condensate and
noncondensate components (dotted lines).
Figure 5.11. Front view of the rightmost plot of Fig. 5.10. The figure displays both the number surface density of the squeezed gas (dash-dotted line) and the density profile of a 2D gas at T = 0.7279 Tc^(2) (dashed line).
Figure 5.12. Finding the condensate fraction of an ideal Bose gas of varying anisotropy using the
procedure first shown in Fig. 5.9. The parameters are the same as in Fig. 5.10; the lines correspond
to (from left to right) λ = 1, 5, 15, and 50. The inset shows the region where the condensate numbers
should lie and displays the corresponding exact predictions.
Figure 5.13. Aspect ratios ⟨ξ²⟩/2⟨η²⟩, obtained by Monte Carlo simulation, of condensed ideal (circles) and interacting (diamonds) Bose gases of increasing anisotropy. Each gas has N = 1000 atoms and is at T = 0.5316 Tc^(3). Also shown are the exact aspect ratio of the ideal gas (solid line) and that of a pure ideal condensate (dotted line).
102
N(r)
0.5
ideal gas
0.4
HF
QMC
0.3
0.2
0.1
0.0
PSfrag replacements
0.0
2.0
4.0
0
2
4
6.0
8.0
10.0
6
8
10
r/a 0
0.5
N (x)/N
0.4
0.3
0.2
0.1
0
x
Figure 5.14. Monte Carlo and mean-field number densities of a three-dimensional trapped gas. The top panel in the figure is Fig. 2 of Ref. 69, which we used as a check on our methods, and has its own legend. The bottom panel displays the total number density of a gas of 10^4 atoms at T = 0.705 Tc. The thin, tall curve corresponds to the ideal-gas profile obtained by both the exact method (dashed line) and a Monte Carlo simulation (solid line). The other curve, which displays the Monte Carlo (solid) and Hartree-Fock (dashed) density profiles for an identical gas with repulsive interactions, shows the expected effects: a smaller density at the center and a larger cloud size. The coupling constant is γ = 8π × 0.0043, corresponding to rubidium atoms.
algorithm is orders of magnitude faster,6 and can handle many more atoms, but suffers from the same limitation. It is designed to cope with up to 10^4 atoms [44] and cannot go beyond that, since any further increase in this parameter will make the computer run out of memory: the array Ibox grows very quickly with N, and the search for interacting pairs, which can involve some Maxbox × 2σ × Lmax × Nslice ≈ 10^5 pairs per step in the present conditions, becomes unmanageable. Moreover, the resulting enhanced density would require a smaller value of τ and (possibly many) more slices. This was a pressing concern in our study of the squeezed 3D gas, the final topic to which we turn.
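The scaling of the pair search can be illustrated in a few lines. The snippet below is purely illustrative Python (the thesis programs were written in Matlab and Fortran), and every numerical value in it is a made-up placeholder chosen only so that the product lands near the quoted 10^5; the actual Maxbox, σ, Lmax, and Nslice of the simulation are not recorded here.

```python
# Purely illustrative Python (the thesis programs were Matlab and Fortran).
# The pair-search cost per Monte Carlo step quoted in the text scales as
# Maxbox * 2*sigma * Lmax * Nslice, so the work and the Ibox array both
# blow up as N (and with it the density) grows. Every number below is a
# made-up placeholder, chosen only so the product lands near the quoted 1e5.

def pairs_per_step(maxbox, sigma, lmax, nslice):
    """Candidate interacting pairs examined in one Monte Carlo step."""
    return maxbox * 2 * sigma * lmax * nslice

estimate = pairs_per_step(maxbox=70, sigma=3, lmax=16, nslice=15)
print(estimate)  # about 1e5 candidate pairs per step
```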
We first made sure that our choice of τ was appropriate, and, as Fig. 5.13 shows, τ = 0.02 turned out to be acceptable for N = 1000. Though we did a few simulations with N = 10^4, the run times were so long (and the data files so large, though this was the case for every set of parameters) that we could not repeat an experiment enough times until we trusted the results. For that reason, we will concentrate on N = 1000 and use the same temperature T = 0.5316 Tc^(3) from Fig. 5.13. In this system, the crossover to two dimensions, as predicted by Eq. (3.39) of Section 3.5, should occur at a compression factor λc ≈ 5.07.
Figure 5.15 on the next page displays the surface number densities obtained for this system with λ ranging between the isotropic value λ = 1 and λ = 11 > 2λc. We can see that the gas loses its noncondensate tail as λ increases and then changes very little for λ ≳ λc, showing, at least qualitatively, that a finite-temperature crossover has indeed occurred. We then proceed to inspect the number surface density profile at the highest anisotropy and try to fit it with a pure-2D isotropic Hartree-Fock number density at the same temperature (which, for the parameters we are using, corresponds to T = 0.2028 Tc^(2)). From the result (3.46) found in Section 3.5 we should expect a coupling constant
coupling constant
γ
(2)
=
µ
λ
2π
¶1/2
√
γ (3) ≈ 0.0043 λ ≈ 0.143
(5.27)
at this compression ratio. The fit was close but not perfect, and we found that we could improve
it by using a smaller effective λ. Figure 5.16 on the following page shows the best fit we obtained
using λeff = 4. The condensate fraction is extracted from the Monte Carlo simulation in Fig. 5.17; the
two methods we have used in previous examples once again bracket the condensate fraction predicted
6. The interacting density profile in Fig. 5.14 was generated by a fast machine in 2 hours and
44 minutes; in ours it took 11 hours and 47 minutes to generate the same profile. The ideal-gas
profile needed 7 minutes in the fast machine and 25 in ours. For completeness, we remark that our
exact ideal-gas program took about a minute to run, while the Hartree-Fock calculation took less
than 15 seconds.
Figure 5.15. Monte Carlo number surface densities of a three-dimensional trapped gas of increasing anisotropy. This system, like the one in Fig. 5.13, comprises N = 1000 rubidium atoms at T = 0.5316 Tc^(3).
Figure 5.16. Monte Carlo number surface density and best-fit two-dimensional profile of an interacting three-dimensional Bose gas in a highly anisotropic trap. The parameters are the same as in
the figure above, and λ = 11 as in the rightmost curve there. The Monte Carlo result is superimposed on the mean-field profile obtained, as explained in the text, by taking an effective anisotropy
parameter λeff = 4. The condensed and thermal components are shown explicitly.
by the two-dimensional isotropic Hartree-Fock gas, with the difference between the mean-field and
the Monte Carlo predictions being at most 5%. The condensate fraction, incidentally, changed
by less than one percent within the range of λ that we considered. We repeated the procedure
for N = 10^4 and found that, for a compression ratio of λ = 30 > 23.5 ≈ λc, a fit with λeff = 15.5
gave a similar agreement in both density and condensate fraction. In neither case did we see any
significant difference when we used any of the other expressions for the coupling constant discussed
after Eq. (3.47).
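The conversion (5.27) from the 3D to the quasi-2D coupling constant is simple enough to check numerically. The following Python snippet is our own illustration, not thesis code; it assumes the rubidium value γ^(3) = 8π × 0.0043 quoted in the caption of Fig. 5.14.

```python
from math import pi, sqrt

# Illustration of Eq. (5.27): the effective 2D coupling constant of a 3D
# gas squeezed with anisotropy lambda. gamma3 = 8*pi*0.0043 is the
# rubidium value quoted in the caption of Fig. 5.14 (an assumption of
# this sketch, taken from the text, not from the thesis programs).

def gamma_2d(lam, gamma3=8 * pi * 0.0043):
    """gamma^(2) = sqrt(lambda/(2*pi)) * gamma^(3) = 0.0043*sqrt(32*pi*lambda)."""
    return sqrt(lam / (2 * pi)) * gamma3

print(round(gamma_2d(11), 3))  # 0.143 at the compression ratio lambda = 11
```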
Finally, we calculated and diagonalized the one-body density matrix corresponding to the best-fit 2D system; the results are exhibited in Fig. 5.18. The eigenfunctions are not very different from
those that appeared in Fig. 2.10 (note, however, that they are displayed at a different scale); on the
other hand, the occupation numbers show, if anything, a greater difference between the population
of the ground state and those of the excited states; this is not too surprising at this very low
temperature, and, at least at this level of description, would seem to indicate that the condensation
that we are witnessing is not smeared.
5.6 Summary
Path-integral Monte Carlo simulations provide an essentially exact description of finite interacting
condensed Bose-Einstein gases; in particular, the method enables us to study how the properties
of a trapped system change as the anisotropy of its confinement increases beyond the crossover
condition and into the two-dimensional régime. We first give a self-contained account of the general
method and of its implementation in the trapped case. Then, after we reproduce the ideal-gas and
isotropic interacting results from previous chapters, we go on to study the interacting anisotropic
trapped gas at finite temperature. We find that the two-dimensional Hartree-Fock solutions derived
in Chapter 3 and studied in detail in Chapter 4 mimic the surface density profile and predict the
condensate fraction of highly anisotropic systems to very good accuracy; the equivalent interaction
parameter is smaller than that dictated by the T = 0 analysis of Section 3.5. Once again, we find
no evidence of smearing.
Figure 5.17. Condensate fraction of a quasi-2D interacting Bose gas with N = 1000 and T = 0.5316 Tc^(3), extracted using the same method as in Fig. 5.9. The inset zooms in on the region of interest and displays the two estimates yielded by the simulation (bottom axis) and the value predicted by the best-fit 2D Hartree-Fock calculation (top axis).
Figure 5.18. Occupation numbers and eigenfunctions of the one-body density matrix for a quasi-2D
Bose gas with the same parameters as in Fig. 5.17. The figure shows the seven highest occupation
numbers and the first four of the corresponding eigenfunctions. The y-axis in the top panel is
logarithmic; the eigenfunctions are spline-interpolated (on a mesh similar to that of Fig. 2.10) and
have not been normalized.
CHAPTER 6
CONCLUSION
From a certain temperature on, the molecules “condense” without attractive forces, that is they accumulate at zero velocity. The theory is
pretty, but is there also some truth to it?
—A. Einstein, in a letter to P. Ehrenfest, 29 November 1924 [16]
At the beginning of this thesis we set out to find whether Bose-Einstein condensation can occur in
two dimensions or not. The obvious answer should be “Yes,” given that two-dimensional condensates
have been produced in the laboratory, though at this point we cannot clearly ascertain if these are,
say, quasicondensates that will become unstable as the size of the system grows—in fact, they could
even be the uncondensed solutions that we considered in Chapter 4, since those also go to a (though
not the) Thomas-Fermi limit as T → 0. On the other hand, and though the experimenters [30]
were explicitly attempting to compress their traps beyond the crossover condition (3.39),1 one can
always argue that those systems are not really two-dimensional, since the interactions between the
atoms are still 3D, and the peculiarities of pure-2D scattering (which we barely touched upon in
Section 3.5) are at least partly responsible for the absence of two-dimensional condensation in the
thermodynamic limit.
As should be clear by now, we do not have a definite answer to our question. The methods we
have used to study the problem have had mixed success, and there is much still left to do if we are
to gain a deeper understanding of the subject. In the following we will summarize our results and
conclusions and mention some open questions that we have left unanswered and to which we might
come back in the future. Perhaps a good way to begin our summary is to retrace in sequence the
steps that we took.
When we began to study the topic, two-dimensional BEC was in no good shape: the Hohenberg
theorem had just been extended to include trapped systems [28]; the semi-ideal model (briefly
discussed on page 64) worked well in three dimensions [112] but gave unphysical predictions for
1. As we saw at the beginning of Chapter 4, the crossover condition is reached in experiments by
varying N , not λ.
two [113]. On the other hand, there were signs that the situation was more complicated: Monte
Carlo simulations had predicted a substantial accumulation of particles in the ground state of a finite
system [32]; moreover, it had been found that, while the HFB equations (in the WKB approximation)
were inconsistent in 2D as the thermodynamic (Thomas-Fermi) limit was approached, the form they
took for a finite system without a condensate also failed below a certain temperature, signalling the
possible presence of a transition in the system [43], probably of the Kosterlitz-Thouless type [35]
or perhaps into a fragmented condensate [37]. These latter findings, plus the fact that the ideal
trapped 2D gas [24,28] was as well understood and as well-behaved as its 3D counterpart, were the
initial motivation for our work.
Since there was no doubt that the 2D Bose gas has a condensate at zero temperature, it seemed
to us that a reasonable starting point for our research would be the simple Gross-Pitaevskiı̆ equation.
Our progress in that subject was discussed in Section 3.4; the wavefunctions we display on Fig. 3.1
were the very first results we obtained (and have been plotted using the original data file); as we
have already remarked, these same results appeared in print [95] shortly afterward. The cumbersome
method of solution that we had at that point was particularly difficult to employ for high values of N
(which make the GP equation highly nonlinear), and we believed that that could be a consequence
of the Hohenberg theorem, since these last curves corresponded to the “standard” 3D parameters
used in the literature [44,87].
As an interlude, we concentrated on the ideal gas. The slowly convergent series present in the
exact theory of Section 2.2 led us to digress for a while into the study of methods to accelerate
convergence, though we abandoned it when we realized a (to us) key point: unlike what happens
in the homogeneous case, in the trapped ideal gas the chemical potential becomes positive at low
temperatures, and is equal to the single-particle zero-point energy. This discovery (which is in fact
well-known [88] but very seldom acknowledged) led directly to a good part of the results presented
in Chapter 2.
We then got back to the interacting problem and tried to find finite-temperature solutions. We
started in 3D, where the results of various flavors of the HFB equations were already available [88].
With great difficulty (since we were still using the initial-value method of solution) we reproduced the
semiclassical Hartree-Fock results of Ref. 69 and, still using the semiclassical Hartree-Fock method,
obtained good agreement with the discrete HFB calculations of Ref. 91. The semiclassical HFB
solutions were more elusive, but we eventually made them converge by using our Hartree-Fock
solutions as starting guesses; the results were essentially the same.
At that point we realized that the method we had been using was unreasonably laborious and
slow, and we then turned to the search for a reliable and accurate scheme for solving the GP equation.
We studied the spectral methods [64,65,105] described in Appendix C, but still could not deal
satisfactorily with the nonlinearity. Eventually we became familiar with the spline-minimization
method [44] that finally made it feasible to calculate density profiles and, in particular, yielded all
of the results of Chapter 4.
We then started to find partial answers to some of our questions. The semiclassical Hartree-Fock equations were soluble in 2D for a fairly wide range of temperatures and atom numbers; when we tried to go one step up in sophistication by using the HFB equations (still in the semiclassical approximation), however, we were never able to find any solutions, even when we iterated the self-consistent procedure with the Hartree-Fock profiles as starting guesses. That has been the situation
to this day. The semiclassical HFB equations fail due to the same pathology, shown in Fig. 3.4, that
ails the Hartree-Fock solutions (and which makes their solution “quite difficult” [69]), exacerbated by
the long-wavelength behavior of the spectrum; this seemed to confirm that phonons are destabilizing
the system.
At that point we made another interesting discovery: by finding solutions to the Goldman-Silvera-Leggett model [96] we proved that the semiclassical Hartree-Fock equations do have a solution in
precisely the same limit where the HFB scheme was known to explicitly fail. These results, along
with their finite-number counterparts and a report on our inability to do the same with HFB, were
summarized in Ref. 117. (Months later we found that many of these had already been discovered
independently [116].)
Shortly thereafter we learned that, in contradiction to the results of Ref. 43, it was indeed possible
to solve the uncondensed equations at any (numerically available) temperature [42]. We reproduced
those results using two different methods and became convinced of their validity; though that did
invalidate one of the questions that we had considered at the beginning, it of course did not rule
out the possibility of a transition occurring in the 2D trapped system. Moreover, we evaluated the
free energy for both solutions and found that the condensed solutions would be preferred over the
uncondensed ones, and that the latter also did not reduce to the correct Thomas-Fermi limit at zero
temperature; this contradicted the fact that the system has a condensate at absolute zero [138] and
made these solutions either unphysical or metastable.
These considerations led to another question: why does the semiclassical Hartree-Fock approach
succeed where other, more sophisticated schemes fail? It turns out that, by eliminating the quasihole excitations from the HFB description of the noncondensate, the Hartree-Fock approximation
effectively imposes an infrared cutoff in the Bogoliubov spectrum. This cutoff is consistent with
others imposed by interaction renormalization in the infinite homogeneous 2D system [40,119,120]
(a similar study for the trapped system [121] has also appeared); since these emerge from more
elaborate theories that take into account the Kosterlitz-Thouless transition and that allow for the
possibility of quasicondensates, we became confident that our Hartree-Fock equations can be a reasonable approximation to what is happening within the finite 2D trapped Bose gas. (The full-blown
HFB equations, successfully implemented in 3D a few years back [91] and in 2D very recently, have
by nature a discrete, quantum-mechanical energy spectrum and thus also feature a lower limit of
quasiparticle energy; the paper where their solution is reported [93], however, does not give an idea
of the magnitude of this “cutoff.”)
By the time we obtained those results [104], the first realization of lower-dimensional condensates
was reported [30]. Those were, naturally, created by squeezing a 3D condensate until it reached the
crossover régime, and that, plus the fact that 2D condensates had already been predicted for finite
systems by Monte Carlo simulations, led us to the study of Krauth’s PIMC algorithm, which works
for traps of any symmetry. As part of that study, we produced a working version of the program
in Matlab, the programming language that we have used almost exclusively since we started our
research; unfortunately, that semester-long endeavor failed: though each of the subroutines we wrote
could compete with (and sometimes beat) Krauth’s in efficiency, the main loop was that: a loop.
Matlab is designed for vectorized computation, and loops in Matlab programs are to be avoided
at any cost (Ref. 139 explains some of the reasons why this is so); and that, we learned the hard
way, is absolutely impossible in a path-integral Monte Carlo simulation.2 This, though we have not
mentioned it before, is one of the important conclusions of this thesis.
We eventually took up Krauth’s Fortran code and, after undoing some approximations that
Krauth himself had made and that became inaccurate only at high anisotropies, used it to obtain
the results exhibited in Chapter 5; as we have seen, our agreement with published results (both
mean-field and Monte Carlo) for isotropic traps was quite good. This led in turn to two separate
research tracks. On one hand, it would be of interest to extract the off-diagonal one-body density
matrix from the simulation results [129,136] and, by diagonalizing it, obtain the populations of a
few excited states; this would allow us to explore the possibility of a smeared condensation in the
2D gas. On the other hand, it would be desirable to have a semiclassical Hartree-Fock program
that, unlike the spline-minimization routine, would work for an anisotropic trap. In the end, we did
2. Other simulation methods (diffusion Monte Carlo, for example) do allow vectorization [131,133].
not find a completely satisfactory answer to either of these challenges, but the partial answers we
obtained account for a good part of the results we have presented here.
In particular, our attempts to calculate the off-diagonal matrix for the ideal gas while avoiding
the enormous matrices that Simpson’s rule requires made us look into spectral methods again. When
the ideal-gas results turned out to be satisfactory, we tried to apply the method—now combined
with the minimization of Eq. (3.63)—to the Hartree-Fock procedure in the anisotropic case. We
were partially successful: the programs gave reasonable results for moderate degrees of anisotropy
but became very inaccurate for compression ratios still within the 3D régime. One day we decided to
check if the inaccuracy was due to the minimization routine, and that is how we found a method that
allowed us to redo two years of research in three hours (and also gave us the chance to explore the
different quasi-2D coupling constants mentioned in Section 3.5 and to find that they had a negligible
influence in the systems we studied). We then implemented the Hartree-Fock off-diagonal density
matrix [73,89] using this method; the results appear in Chapters 3 and 5.
We conclude in this thesis that, in two-dimensional isotropic finite trapped systems, and insofar
as such (admittedly small) systems can be described to any degree of approximation by the semiclassical Hartree-Fock equations, there is a phenomenon resembling a condensation into a single state.
These equations, moreover, provide in at least a few cases a reasonable description of a 3D system
anisotropic enough as to have entered the quasi-2D régime.
There are, of course, a few qualifications to this statement and more than a few questions left
wide open. In the following we enumerate a few of both.
Monte Carlo simulations are closer to being exact than mean-field solutions, and surely include
more physics than we have explored at this point, as long as we can extract it. In particular, and
as we mentioned above, we are aware that it is possible to obtain the off-diagonal density matrix
directly from path-integral Monte Carlo [45,129], which can then be diagonalized to yield excited-state occupations; this has been done in 3D [136], and the results were in complete agreement with
the predictions of the Gross-Pitaevskiı̆ equation. Only very recently, after getting to know the Monte
Carlo method well enough, have we begun to understand how this can be done.
Now, it has to be kept in mind that Monte Carlo simulations are not a panacea. While they excel
at high temperatures3 (precisely where mean-field theory breaks down) and would thus allow the
study of the transition region, PIMC simulations work only for fairly moderate particle numbers,
and the finite-size-scaling ideas that have proved successful in the study of the homogeneous Bose
3. This is not to say that they fail at low temperatures.
gas [128] cannot be readily applied in a trap. Hence we cannot really use this method to approach
the thermodynamic limit, where the consequences of Hohenberg’s theorem would be apparent.
We also ran a few 2D interacting PIMC simulations, based on the code discussed in detail in
Ref. 33 and whose results were published in Ref. 32; this program uses Krauth’s algorithm but implements hard-disk interactions.4 This program does predict a condensation in two dimensions, and
gives reasonable estimates for the superfluid fraction (see below), but predicts condensate fractions
much smaller than those given by the Hartree-Fock method. (This is just in the interacting case; the
ideal gas works exactly as expected, and in fact our figures 5.8 and 5.9 on page 98 were generated
using this program.) We cannot really tell if there is something wrong with it, and we scarcely used
it because we did not gain sufficient control over its inner workings, given that at that point we
were more interested in compressed 3D systems. It might also be possible that there is indeed a
fundamental difference between hard disks in two dimensions and the hard spheres in quasi-2D space
that, as we saw, are reasonably mimicked by a pure two-dimensional Hartree-Fock approximation;
that, if confirmed, would be an interesting result.
Mean-field theory, with all its limitations, should still be considered the method of choice at
high atom numbers and low temperatures. One logical next step would be to work harder on
obtaining solutions to the exact, discrete HFB equations, though we are aware that, just like Monte
Carlo simulations, these programs should become too cumbersome for large systems: the 3D results
reported in [70] correspond to N = 2 × 104 , and we suspect that these are not likely to be superseded
in the near future. (The recent exact solutions have N = 2000.)
Even if we restrict ourselves to the Hartree-Fock approximation, we still need a consistent mean-field treatment of the anisotropic gas. Our current methods do not provide us with enough accuracy
to determine, for example, aspect ratios. As we already said, the method we describe at the end of
Chapter 3, which proved so successful for isotropic traps, was in fact an offshoot of a program we
wrote to deal with the anisotropic case; we only need point out that the necessary matrices (even
assuming radial symmetry) involve 900 × 900 terms as opposed to the 30 × 30 that suffice for the
treatment of the isotropic gas, with consequent losses of both speed and accuracy. On the other
hand, this problem is not insurmountable, and might soon be within our reach.
4. The two-body density matrix for that program results from a derivation almost identical to the one we present in Appendix D, and involves cylindrical Bessel functions instead of the spherical Bessel functions obtained there; however, the actual implementation of the interactions is more difficult, since the resulting two-body density matrix cannot be efficiently calculated using Gauss-Hermite quadrature and it becomes necessary to calculate it to very high precision and store the
results in a table. See Ref. 33 for details.
This desirable mean-field program should also be able to find higher-order eigenstates of the
GP equation, since the best methods we developed gave only the ground state. We do have the higher
eigenstates as given by the off-diagonal density matrix, but we still need to find them directly; and
at this point, the only method that was giving us excited states was the very inefficient initial-value
method from Chapter 3. Access to these excited states will constitute a royal road to comparison
with experiment.
Another open direction would be the study of vortices, superfluidity, and the Kosterlitz-Thouless
transition that we have so far avoided. There exist ways of calculating the superfluid fraction:
in the Hartree-Fock approximation [88], where it is treated as a kinematic effect and is extracted
by comparing the moment of inertia of the gas in a rotating trap to its classical counterpart [140],
in path-integral Monte Carlo simulations, where it is related to the surface area swept by the paths
depicted in Fig. 5.1 when looked at from above [45,141], and through variational calculations [99].
Now, explicit 2D Monte Carlo simulations for 4 He found no significant temperature dependence of
the vorticity correlation function, and attributed this negative result to low data resolution [142].
For the two-dimensional trapped gas, simulations resembling ours [32,33] found no evidence of a
jump in the superfluid density close to the point where the transition should occur. A more in-depth
study of these matters would lead us to better understand the Hartree-Fock cutoff that we have
mentioned so often in this chapter and that has been so crucial for our arguments.
It is possible that what we have found in this research is evidence for a quasicondensate. More
insight into this topic could be gained by calculating the coherence lengths of our systems, a quantity
already (but not too readily) available from the off-diagonal density matrix [61], and studying the
behavior of phase fluctuations, which has been (though not necessarily by that name) behind many
of our considerations in the previous chapters.
Finally, another area where analytic results could join forces with Monte Carlo simulation is
the interpretation of experimental data. It should be within our reach someday to create a purely
mathematical rendering, including the effects of finite size, finite temperature, interactions, and
anisotropy, of Bose-Einstein condensates not unlike that first sodium one that exploded in a screen
at MIT one early morning.
APPENDIX A
MATHEMATICAL VADEMECUM
The Bose-Einstein integrals
The Bose-Einstein integrals are defined by
\[
g_\sigma(e^{-x}) = \frac{1}{\Gamma(\sigma)} \int_0^{\infty} \frac{t^{\sigma-1}\,dt}{e^{t+x} - 1} = \sum_{k=1}^{\infty} \frac{e^{-kx}}{k^{\sigma}},
\tag{A.1}
\]
where the second expression results from expanding the denominator as a geometric series and
integrating term by term. For σ = 1, either the integral or the series can be immediately evaluated
to yield g1(e^{−x}) = −log(1 − e^{−x}). For x = 0 and σ ≠ 1 the integrals become ζ(σ), the Riemann zeta
function; g1(e^{−x}) diverges at that point. For small nonzero values of x, the series in (A.1) converges
very slowly, making it a very inefficient procedure to calculate the integrals; the accumulation of
roundoff errors, moreover, makes it inaccurate as well. Instead we turn to an expression due to
J. E. Robinson [143]:
\[
g_\sigma(e^{-x}) = \Gamma(1-\sigma)\, x^{\sigma-1} + \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\, \zeta(\sigma-k)\, x^k.
\tag{A.2}
\]
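The slow convergence that motivates Robinson's expansion is easy to see numerically. The following is our own Python illustration (not the thesis Matlab code), using the closed form g1(e^{−x}) = −log(1 − e^{−x}) derived above.

```python
from math import exp, log

# Illustration (ours, not thesis code) of the slow convergence noted
# above: count how many terms of the series sum_k e^(-kx)/k are needed
# before the partial sum is within 1e-8 of the closed form
# g1(e^-x) = -log(1 - e^-x); the count blows up as x -> 0.

def terms_needed(x, tol=1e-8, kmax=10**7):
    exact = -log(1.0 - exp(-x))
    partial = 0.0
    for k in range(1, kmax + 1):
        partial += exp(-k * x) / k
        if abs(exact - partial) < tol:
            return k
    return kmax

print(terms_needed(1.0))   # a few dozen terms at most for x = 1
print(terms_needed(0.01))  # on the order of a thousand terms for x = 0.01
```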
When σ is not an integer, this expression can be used directly to calculate the integral, though the
values of the zeta function for negative arguments have to be found by analytic continuation. Most
of the time, however, we will be using integer values of σ, for which the first term and the term in
the sum corresponding to k = σ − 1 diverge. But these divergences turn out to cancel each other,
resulting in a finite expression,
\[
\lim_{k\to\sigma} \Bigl( (-1)^{\sigma-1}\, \Gamma(\sigma)\, \Gamma(1-k)\, x^{k-\sigma} + \zeta(k-\sigma+1) \Bigr) = \sum_{\ell=1}^{\sigma-1} \frac{1}{\ell} - \log x,
\tag{A.3}
\]
which can be obtained using the properties of the polygamma function [62]. Finally, we can use the
following results about the Riemann zeta function [144],
\[
\zeta(0) = -\tfrac{1}{2}, \qquad \zeta(-2m) = 0, \qquad \zeta(1-2m) = -\frac{B_{2m}}{2m},
\tag{A.4}
\]
where Bp is the pth Bernoulli number [62,144], to find
\[
g_\sigma(e^{-x}) = \sum_{n=0}^{\sigma-2} \frac{(-1)^n}{n!}\, \zeta(\sigma-n)\, x^n
+ \frac{(-1)^{\sigma-1}}{(\sigma-1)!} \left( \sum_{\ell=1}^{\sigma-1} \frac{1}{\ell} - \log x \right) x^{\sigma-1}
- \frac{(-1)^{\sigma}}{2\,\sigma!}\, x^{\sigma}
+ \sum_{k=\sigma-1}^{\infty} \frac{B_{2k}\, x^{2k+1}}{2k\,(2k+1)!}.
\tag{A.5}
\]
The function g2 , for example, can be approximated by
\[
g_2(e^{-x}) = \frac{\pi^2}{6} - (1 - \log x)\, x - \frac{x^2}{4} + \frac{x^3}{72} - \frac{x^5}{14400} + \frac{x^7}{1270080} - \frac{x^9}{87091200} + \cdots
\tag{A.6}
\]
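As a sanity check on the expansion (A.6), whose tail signs alternate because ζ(−1) = −1/12, ζ(−3) = 1/120, and so on, one can compare it with the defining series (A.1) for σ = 2. A short Python sketch of ours:

```python
from math import exp, log, pi

# Check (ours) that the truncated Robinson expansion (A.6) matches the
# defining series (A.1) for sigma = 2 at small x; the alternating signs
# in the tail follow from zeta(-1) = -1/12, zeta(-3) = 1/120, ...

def g2_series(x, kmax=5000):
    return sum(exp(-k * x) / k**2 for k in range(1, kmax + 1))

def g2_robinson(x):
    return (pi**2 / 6 - (1 - log(x)) * x - x**2 / 4
            + x**3 / 72 - x**5 / 14400 + x**7 / 1270080)

print(abs(g2_series(0.1) - g2_robinson(0.1)) < 1e-10)  # True
```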
The harmonic-oscillator eigenfunctions
The eigenfunctions of the harmonic-oscillator Hamiltonian are well known and can be found in any
textbook on quantum mechanics:
\[
\langle x \,|\, n \rangle = \frac{1}{\sqrt{2^n\, n!\, \sqrt{\pi}}}\; e^{-x^2/2}\, H_n(x).
\tag{A.7}
\]
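The normalization in (A.7) can be verified numerically; the following Python sketch (ours, not thesis code) builds H_n by the standard three-term recursion and integrates |⟨x|n⟩|² with a plain Riemann sum:

```python
from math import exp, factorial, pi, sqrt

# Numerical check (ours) of the normalization in Eq. (A.7): integrating
# |<x|n>|^2 over the real line should give 1 for every n.

def hermite(n, x):
    """H_n(x) from the recursion H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def psi(n, x):
    """Normalized 1D harmonic-oscillator eigenfunction <x|n> of Eq. (A.7)."""
    return hermite(n, x) * exp(-x * x / 2) / sqrt(2**n * factorial(n) * sqrt(pi))

dx = 0.001
for n in range(4):
    norm = sum(psi(n, i * dx)**2 for i in range(-15000, 15001)) * dx
    print(n, round(norm, 6))  # each norm comes out as 1.0
```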
In three dimensions, we can simply multiply three one-dimensional functions together or we can
solve the Schrödinger equation in spherical coördinates, in which case the angular dependence is
carried by spherical harmonics:
\[
\langle \mathbf{x} \,|\, n, l, m \rangle = \sqrt{\frac{2\, n!}{(n+l+\frac{1}{2})!}}\; e^{-x^2/2}\, x^l\, L_n^{l+\frac{1}{2}}(x^2)\, Y_l^m(\vartheta, \varphi).
\tag{A.8}
\]
To study the effect of anisotropy, we can write them down in cylindrical coördinates [145,146]
\[
\langle \mathbf{x} \,|\, n, m, p \rangle = \frac{1}{\pi} \sqrt{\frac{n!}{(n+m)!}} \left( \frac{\lambda}{\pi} \right)^{1/4} \frac{1}{\sqrt{2^p\, p!}}\; e^{-\frac{1}{2}(\xi^2 + \lambda\eta^2)}\, \xi^m\, L_n^m(\xi^2)\, e^{im\varphi}\, H_p(\eta\sqrt{\lambda});
\tag{A.9}
\]
where we have used the scaling introduced in Section 2.5. In all of the above equations, the L_n^m are associated Laguerre polynomials and the H_p are Hermite polynomials. In a strictly two-dimensional isotropic trap the eigenfunctions are particular cases of (A.9),
1
hx | n, mi = √
π
s
2
n!
2 imϕ
e−x /2 xm Lm
,
n (x )e
(n + m)!
(A.10)
and their corresponding eigenvalues are ²nm = 2(2n + |m| + 1). The first few radially symmetric
functions (that is, hx | n, mi with m = 0) are shown in Fig. 2.10, and their associated eigenvalues
116
appear in Fig. 2.9. We will presently use these to derive an expression for the one-body density
matrix of a distinguishable system. From Eq. (A.10) we have
\[
\begin{aligned}
\tilde{n}(\mathbf{x},\mathbf{x}') = \langle \mathbf{x} | \rho | \mathbf{x}' \rangle
&= \sum_{n=0}^{\infty} \sum_{m=-\infty}^{\infty} \langle \mathbf{x} | e^{-\tilde\beta \tilde{H}_1} | n,m \rangle \langle n,m \,|\, \mathbf{x}' \rangle \\
&= \frac{1}{\pi}\, e^{-2\tilde\beta}\, e^{-(x^2 + x'^2)/2} \sum_{m=-\infty}^{\infty} e^{-2\tilde\beta |m|}\, (x'x)^m\, e^{im(\varphi - \varphi')} \\
&\qquad \times \sum_{n=0}^{\infty} \frac{n!}{(n+m)!}\, e^{-4\tilde\beta n}\, L_n^m(x^2)\, L_n^m(x'^2).
\end{aligned}
\tag{A.11}
\]
The last sum can be evaluated by means of the identity [144]
\[
\sum_{n=0}^{\infty} \frac{n!\, z^n}{\Gamma(n+\alpha+1)}\, L_n^\alpha(x)\, L_n^\alpha(y)
= \frac{(xyz)^{-\alpha/2}}{1-z}\, \exp\!\left( -z\, \frac{x+y}{1-z} \right) I_\alpha\!\left( \frac{2\sqrt{xyz}}{1-z} \right),
\tag{A.12}
\]
where $I_\alpha(x)$ is a modified Bessel function. We can now use the generating function for the modified
Bessel functions,
\[
\sum_{m=-\infty}^{\infty} I_m(x)\, t^m = e^{(x/2)(t + 1/t)},
\tag{A.13}
\]
and the fact that $x'x\cos(\varphi - \varphi') = \mathbf{x} \cdot \mathbf{x}'$ to obtain
\[
\rho(\mathbf{x},\mathbf{x}') = \frac{1}{2\pi}\, \operatorname{csch} 2\tilde\beta\;
e^{-\frac{1}{2} \operatorname{csch} 2\tilde\beta\, \left( (x^2 + x'^2)\cosh 2\tilde\beta - 2\,\mathbf{x}\cdot\mathbf{x}' \right)},
\tag{A.14}
\]
where we have recovered Eq. (2.4). Equation (A.13) can also be used to find an alternative definition
of the modified Bessel function of order zero:
\[
I_0(x) = \frac{1}{2\pi} \int_0^{2\pi} d\varphi\; e^{x\cos\varphi},
\tag{A.15}
\]
which supports our identification of the averaged density matrix in (2.28).
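The integral representation (A.15) is easy to verify numerically; a minimal sketch (plain Python, with names of our own choosing) compares a trapezoidal evaluation of the integral, which is spectrally accurate for periodic integrands, with the standard power series for $I_0$:

```python
import math

def i0_integral(x, n=2000):
    # Eq. (A.15): I_0(x) = (1/2pi) * integral over [0, 2pi] of exp(x cos(phi)),
    # computed with the trapezoidal rule (endpoints coincide for periodic f)
    h = 2 * math.pi / n
    return sum(math.exp(x * math.cos(j * h)) for j in range(n)) * h / (2 * math.pi)

def i0_series(x, kmax=50):
    # Standard power series I_0(x) = sum_k (x/2)^{2k} / (k!)^2
    return sum((x / 2)**(2 * k) / math.factorial(k)**2 for k in range(kmax))

print(i0_integral(1.5), i0_series(1.5))  # the two coincide
```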
APPENDIX B
PERMUTATION CYCLES AND WICK’S THEOREM
Permutation cycles in the ideal gas
Quantum particles are indistinguishable, and the wavefunction of a system of particles should be
independent of the bookkeeping scheme that we adopt to label them. In the particular case of
bosons, the wavefunction must remain unchanged when the particles (or, rather, the indices employed
to pinpoint them) are permuted. Hence the only wavefunction that we can use to describe the
system is the totally symmetric combination of the wavefunctions that would describe a system of
distinguishable particles. For example, if we have three bosons, labelled 1 through 3, we must use
the normalized combination
\[
S|123\rangle = \frac{1}{\sqrt{3!}} \big( |1\rangle|2\rangle|3\rangle + |2\rangle|1\rangle|3\rangle + |1\rangle|3\rangle|2\rangle + |2\rangle|3\rangle|1\rangle + |3\rangle|1\rangle|2\rangle + |3\rangle|2\rangle|1\rangle \big),
\tag{B.1}
\]
where the symmetrization operator $S$ is Hermitian and obeys $S^2 = S$. This result can be generalized
to the case of $N$ particles:
\[
S|1 \ldots N\rangle = \frac{1}{\sqrt{N!}} \sum_{P} |P1\rangle \cdots |PN\rangle,
\tag{B.2}
\]
where $\sum_P$ is the sum over all possible permutations of the numbers $\{1, \ldots, N\}$ and $Pi$ is the effect
of one such permutation on the index $i$.
The fundamental quantity that describes an ensemble of particles is the $N$-body density matrix,
defined by $\rho_N = e^{-\tilde\beta \tilde{H}}/Z_N$, where $\tilde{H}$ is the $N$-body Hamiltonian, $\tilde{H} \equiv \sum_{i=1}^{N} \tilde{H}_{1,i}$; the partition
function is the trace $Z_N = \operatorname{Tr}_{1:N} e^{-\tilde\beta \tilde{H}}$ that normalizes the density matrix and is given by [15,147]
\[
Z_N = \int d^\sigma x_1 \cdots d^\sigma x_N\; \langle x_1 \cdots x_N |\, S\, e^{-\tilde\beta \tilde{H}}\, S\, | x_1 \cdots x_N \rangle,
\tag{B.3}
\]
where, as usual, we evaluate the trace in coördinate space. Since the bosons cannot turn into
fermions, the symmetrization operator is a constant of the motion and commutes with the Hamiltonian; moreover, since $S^2 = S$, only the bra or the ket need be symmetrized. The consequences of
this symmetrization are most easily exhibited by considering systems with small particle numbers.
For $N = 1$, the partition function is the well-known result [14] $Z_1 = (2\sinh\tilde\beta)^{-\sigma}$. For $N = 2$, we
have
\[
Z_2 = \frac{1}{2!} \int d^\sigma x_1\, d^\sigma x_2 \left(
\langle x_1 | e^{-\tilde\beta \tilde{H}_0(1)} | x_1 \rangle \langle x_2 | e^{-\tilde\beta \tilde{H}_0(2)} | x_2 \rangle
+ \langle x_2 | e^{-\tilde\beta \tilde{H}_0(1)} | x_1 \rangle \langle x_1 | e^{-\tilde\beta \tilde{H}_0(2)} | x_2 \rangle \right).
\tag{B.4}
\]
The first integral is just a product of two one-body partition functions and gives $Z_1^2(\tilde\beta)$. To calculate
the second one, we can insert two complete sets of oscillator eigenfunctions $|n\rangle$, $|n'\rangle$:
\[
\begin{aligned}
\int d^\sigma x_1\, d^\sigma x_2\; &\langle x_2 | e^{-\tilde\beta \tilde{H}_0(1)} | x_1 \rangle \langle x_1 | e^{-\tilde\beta \tilde{H}_0(2)} | x_2 \rangle \\
&= \sum_n \sum_{n'} \int d^\sigma x_1\, d^\sigma x_2\; \langle x_2 | e^{-\tilde\beta \tilde{H}_0(1)} | n \rangle \langle n \,|\, x_1 \rangle\,
\langle x_1 | e^{-\tilde\beta \tilde{H}_0(2)} | n' \rangle \langle n' \,|\, x_2 \rangle \\
&= \sum_n \sum_{n'} e^{-\tilde\beta \epsilon_n}\, e^{-\tilde\beta \epsilon_{n'}}
\int d^\sigma x_1\; \langle n \,|\, x_1 \rangle \langle x_1 \,|\, n' \rangle
\int d^\sigma x_2\; \langle n' \,|\, x_2 \rangle \langle x_2 \,|\, n \rangle \\
&= \sum_n \sum_{n'} e^{-\tilde\beta \epsilon_n}\, e^{-\tilde\beta \epsilon_{n'}}\, \langle n \,|\, n' \rangle \langle n' \,|\, n \rangle
= \sum_n e^{-\tilde\beta \epsilon_n} \sum_{n'} e^{-\tilde\beta \epsilon_{n'}}\, \delta_{n'n}
= \sum_n e^{-2\tilde\beta \epsilon_n} = Z_1(2\tilde\beta),
\end{aligned}
\tag{B.5}
\]
where we have used the orthonormality of the oscillator eigenfunctions and the fact that the position
eigenkets also form a complete set. This result is the basis for everything that follows. The important
point here is that the permutation $\{1,2\} \to \{2,1\}$ is a closed cycle, a snake biting its own tail; that
fact allowed us to move the brackets around and exploit the second completeness relation. Had
we had three particles, we could have calculated the term $\langle 231 | \rho | 123 \rangle$ by inserting three sets of
eigenfunctions and using the fact that $\delta_{nn'}\delta_{n'n''}\delta_{n''n} = \delta_{n'n}\delta_{nn''}$; the result would be $Z_1(3\tilde\beta)$. If we
have $k$ particles, any cyclic permutation of the indices will give us a term $Z_1(k\tilde\beta)$.
Now, every permutation of the N particles can be decomposed into cycles. Going back to three
particles, we can see that {1, 2, 3} → {1, 3, 2} can be decomposed into (1)(23): 1 has been left alone
(in its own 1-cycle) while 2 and 3 are now in a 2-cycle (otherwise known as an “exchange”). Another
permutation, {1, 2, 3} → {3, 2, 1}, for example, can be decomposed into (1, 3)(2). In general, any
given permutation of N particles will be broken up into c1 1-cycles, c2 2-cycles, and so on, up to
$c_N$ $N$-cycles. Any combination of these $c_\ell$ that obeys
\[
\sum_{\ell=1}^{N} \ell\, c_\ell = N
\tag{B.6}
\]
is an acceptable permutation. At one end of the spectrum we will have the permutation consisting of
N 1-cycles—the identity permutation—and at the other end we will have N − 1 cyclic permutations,
each of them consisting of one N -cycle. It is easy to see why we have N − 1 of these, since we can
put any number other than 1 at the start of the cycle (1 would, of course, give us the identity).
Going back to the permutation {1, 2, 3} → {3, 2, 1}, we see that it can be put either as (1, 3)(2),
$(3,1)(2)$, $(2)(1,3)$, or $(2)(3,1)$. In general, there are
\[
\frac{N!}{c_1!\, c_2! \cdots c_N!}
\tag{B.7}
\]
ways of placing $N$ particles in $c_1$ 1-boxes, $c_2$ 2-boxes, etc. Moreover, within each $\ell$-box we can put
any one of the $\ell$ particles in the first slot, which gives us an extra factor of $\prod_\ell \ell^{c_\ell}$ in the denominator.
Knowing this, we can rewrite the partition function (B.3) as
\[
Z_N = \frac{1}{N!} \sum_{c_1,\ldots,c_N} N! \prod_{\ell} \frac{Z_1(\ell\tilde\beta)^{c_\ell}}{c_\ell!\, \ell^{c_\ell}},
\tag{B.8}
\]
subject to the restriction (B.6); for example, Eq. (B.4) really is
\[
Z_2(\tilde\beta) = \frac{1}{2!} \Bigg(
\underbrace{\frac{2!}{2!\, 1^2}\, Z_1^2(\tilde\beta)}_{\substack{\{1,2\}\to\{1,2\} = (1)(2) \\ c_1 = 2,\; c_2 = 0}}
+ \underbrace{\frac{2!}{1!\, 2^1}\, Z_1(2\tilde\beta)}_{\substack{\{1,2\}\to\{2,1\} = (1,2) \\ c_1 = 0,\; c_2 = 1}}
\Bigg).
\tag{B.9}
\]
Finally, the constraint (B.6) can also be lifted by summing over all possible values of $N$, in other
words, by invoking the grand canonical ensemble. Using the grand canonical partition function
\[
\mathcal{Z} = \sum_{N=1}^{\infty} e^{\tilde\beta \tilde\mu N}\, Z_N
\tag{B.10}
\]
we obtain
\[
\mathcal{Z} = \prod_{\ell} \exp\!\left( \frac{Z_1(\ell\tilde\beta)\, e^{\ell\tilde\beta\tilde\mu}}{\ell} \right).
\tag{B.11}
\]
We can immediately apply the definition
\[
N = t\, \frac{\partial}{\partial\tilde\mu} \log \mathcal{Z} = \sum_{\ell=1}^{\infty} e^{\ell\tilde\beta\tilde\mu}\, Z_1(\ell\tilde\beta),
\tag{B.12}
\]
along with the expression quoted above for the one-particle partition function, to recover Eq. (2.11).
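The cycle structure behind Eq. (B.8) can be checked numerically. The sketch below (plain Python; all names are ours) evaluates $Z_N$ for a one-dimensional trap, $Z_1(\tilde\beta) = (2\sinh\tilde\beta)^{-1}$, in two ways: by brute-force summation over all $N!$ permutations, one factor of $Z_1(\ell\tilde\beta)$ per $\ell$-cycle, and by the standard recursion $Z_N = (1/N)\sum_{\ell=1}^{N} Z_1(\ell\tilde\beta)\, Z_{N-\ell}$ (a known consequence of (B.8), not derived in the text):

```python
import math
from itertools import permutations

def z1(beta):
    # One-body partition function of a 1-D harmonic trap, Z1 = (2 sinh beta)^(-1)
    return 1.0 / (2.0 * math.sinh(beta))

def cycle_lengths(perm):
    # Lengths of the cycles of a permutation given as a tuple of images of 0..N-1
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return lengths

def zn_brute(N, beta):
    # Average over all N! permutations of the product of one Z1(l*beta) per
    # l-cycle, which is the content of Eqs. (B.3) and (B.8)
    total = 0.0
    for p in permutations(range(N)):
        term = 1.0
        for l in cycle_lengths(p):
            term *= z1(l * beta)
        total += term
    return total / math.factorial(N)

def zn_recursive(N, beta):
    # Standard recursion Z_N = (1/N) sum_{l=1}^N Z1(l*beta) Z_{N-l}, Z_0 = 1
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(z1(l * beta) * Z[n - l] for l in range(1, n + 1)) / n)
    return Z[N]

print(zn_brute(4, 0.3), zn_recursive(4, 0.3))  # the two agree
```

For $N = 2$ the brute-force sum reproduces (B.9) term by term.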
Wick’s theorem

Our treatment of interacting systems in Chapter 3 was based on the formalism of second quantization, in which the field operators are expanded in creation and annihilation operators, and often
we had to calculate thermal averages of combinations of these. What follows is a brief discussion of
a simultaneous simplification and generalization of Wick’s theorem [84,148], which expresses these
combinations in terms of pairs of operators (also called “contractions”). It is a simplification because, just like everywhere else in this work, we will assume that the operators are independent of
time; one has only to look at Wick’s original proof [148], with its painstaking attention to the time
ordering of the operators, to see how much of a simplification this is. On the other hand, it is a
generalization because it describes systems at finite temperature, a case not treated by Wick (this
version of the theorem was in fact first proved by Matsubara [149]). Finally, we will concentrate on
Bose statistics, even though the theorem (in a slightly modified form) applies also to fermions. We
will illustrate the theorem by rederiving some results obtained above for the ideal gas.
The very fact that we can use second quantization to treat a many-body problem is based on
the assumption that the many-body state vectors $|n_1 \cdots n_j \cdots\rangle$, where $n_j$ is the number of atoms in
the $j$th state, form a complete orthonormal set. When we impose Bose statistics on the system, we
know that at a temperature $t = 1/\tilde\beta$ the populations must obey the Bose-Einstein distribution,
\[
n_j = \frac{1}{e^{\tilde\beta\epsilon_j} - 1} \equiv f_j,
\tag{B.13}
\]
and for that reason we must have
\[
\begin{aligned}
\langle b_k^\dagger b_k \rangle &= \langle n_1 \cdots n_k \cdots |\, b_k^\dagger b_k \,| n_1 \cdots n_k \cdots \rangle \\
&= \langle n_1 \cdots n_k - 1 \cdots |\, \sqrt{n_k}\,\sqrt{n_k} \,| n_1 \cdots n_k - 1 \cdots \rangle \\
&= n_k\, \langle n_1 \cdots n_k - 1 \cdots | n_1 \cdots n_k - 1 \cdots \rangle = n_k = f_k,
\end{aligned}
\tag{B.14}
\]
where we used the fundamental definition of the creation and annihilation operators [14,46,54].
Similarly,
\[
\begin{aligned}
\langle b_j^\dagger b_k \rangle &= \langle \cdots n_j \cdots n_k \cdots |\, b_j^\dagger b_k \,| \cdots n_j \cdots n_k \cdots \rangle \\
&= \langle \cdots n_j - 1 \cdots n_k \cdots |\, \sqrt{n_j}\,\sqrt{n_k} \,| \cdots n_j \cdots n_k - 1 \cdots \rangle \\
&= \sqrt{n_j n_k}\, \langle \cdots n_j - 1 \cdots n_k \cdots | \cdots n_j \cdots n_k - 1 \cdots \rangle
= \sqrt{n_j n_k}\, \delta_{jk} = n_j\, \delta_{jk}.
\end{aligned}
\tag{B.15}
\]
This procedure already tells us some of the properties that the combinations we want to average
must have. First of all, all averages involving a single creation or annihilation operator will
vanish, and so will all combinations with an odd number of operators; more precisely, the combination
must contain equal numbers of creation and annihilation operators. With this result we can derive
some properties of the ideal gas, for which the field operator simply becomes
\[
\Psi(\mathbf{x}) = \sum_j b_j\, \phi_j(\mathbf{x}), \qquad
\Psi^\dagger(\mathbf{x}) = \sum_j b_j^\dagger\, \phi_j^*(\mathbf{x}),
\tag{B.16}
\]
where the φj (x) are the harmonic-oscillator eigenfunctions from Appendix A. The total atom
number, for example, is readily found to be
\[
N = \int d^\sigma x\; \Psi^\dagger(\mathbf{x})\Psi(\mathbf{x})
= \sum_{jk} b_j^\dagger b_k \int d^\sigma x\; \phi_j^*(\mathbf{x})\phi_k(\mathbf{x})
= \sum_{jk} b_j^\dagger b_k\, \delta_{jk}
= \sum_j b_j^\dagger b_j,
\tag{B.17}
\]
and thus
\[
\langle N \rangle = \sum_j \langle b_j^\dagger b_j \rangle = \sum_j f_j,
\tag{B.18}
\]
which is also consistent with defining the one-body density as $\tilde{n}(\mathbf{x}) = \langle \Psi^\dagger(\mathbf{x})\Psi(\mathbf{x}) \rangle$. The off-diagonal
one-body density matrix can also be found this way:
\[
\tilde{n}(\mathbf{x},\mathbf{x}') = \langle \Psi^\dagger(\mathbf{x})\Psi(\mathbf{x}') \rangle
= \sum_{jk} \langle b_j^\dagger b_k \rangle\, \phi_j^*(\mathbf{x})\phi_k(\mathbf{x}')
= \sum_j f_j\, \phi_j^*(\mathbf{x})\phi_j(\mathbf{x}'),
\tag{B.19}
\]
as we saw in Eq. (2.25).
For interacting systems we must consider the next possible combination, which involves four
operators:
\[
\begin{aligned}
\langle b_j^\dagger b_k^\dagger b_l b_m \rangle
&= \langle \cdots n_j \cdots n_k \cdots n_l \cdots n_m \cdots |\, b_j^\dagger b_k^\dagger b_l b_m \,| \cdots n_j \cdots n_k \cdots n_l \cdots n_m \cdots \rangle \\
&= \sqrt{n_j n_m}\, \langle \cdots n_j - 1 \cdots n_k \cdots n_l \cdots n_m \cdots |\, b_k^\dagger b_l \,| \cdots n_j \cdots n_k \cdots n_l \cdots n_m - 1 \cdots \rangle \\
&= \sqrt{n_j n_k n_l n_m}\, \langle \cdots n_j - 1 \cdots n_k - 1 \cdots n_l \cdots n_m \cdots | \cdots n_j \cdots n_k \cdots n_l - 1 \cdots n_m - 1 \cdots \rangle;
\end{aligned}
\tag{B.20}
\]
the last bracket will vanish unless either $j = l$ and $k = m$ or $j = m$ and $k = l$. This immediately
leads to
\[
\langle b_j^\dagger b_k^\dagger b_l b_m \rangle = f_j f_k\, (\delta_{jl}\delta_{km} + \delta_{jm}\delta_{kl}),
\tag{B.21}
\]
a result used in Eq. (4.29).
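For a single thermal mode ($j = k = l = m$), Eq. (B.21) predicts $\langle b^\dagger b^\dagger b b \rangle = 2f^2$. The sketch below (plain Python; the function name is ours) checks this against an exact average over the geometric Boltzmann distribution of a truncated Fock space:

```python
import math

def thermal_moments(beta_eps, nmax=2000):
    # Exact averages for one bosonic mode: p_n = (1 - z) z^n with z = exp(-beta*eps)
    z = math.exp(-beta_eps)
    p = [(1.0 - z) * z**n for n in range(nmax)]
    n_avg = sum(n * pn for n, pn in enumerate(p))             # <b+ b>
    nn_avg = sum(n * (n - 1) * pn for n, pn in enumerate(p))  # <b+ b+ b b>
    return n_avg, nn_avg

f = 1.0 / (math.exp(0.5) - 1.0)      # Bose-Einstein occupation, Eq. (B.13)
n_avg, nn_avg = thermal_moments(0.5)
# Eq. (B.21) with all four indices equal predicts nn_avg = 2 f^2
```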
APPENDIX C
SPECTRAL DIFFERENTIATION AND GAUSSIAN QUADRATURE
Interpolating polynomials
Consider a function $f$ whose values we know only at a set of $n$ points $x_j^{(n)}$, $1 \le j \le n$, assumed to
be distinct. If the function is smooth and well-behaved, it should be amenable to approximation by
means of the interpolant
\[
f(x) \approx \sum_{j=1}^{n} f(x_j^{(n)})\, \ell_j^{(n)}(x),
\tag{C.1}
\]
where the dependence on $x$ is carried by the “cardinal function” [105] $\ell_j^{(n)}(x)$, a polynomial of degree
$(n-1)$ that obeys $\ell_j^{(n)}(x_i^{(n)}) = \delta_{ij}$. By construction, the interpolant is exact at the $x_j^{(n)}$. We can
write the cardinal function in terms of the Lagrange interpolating polynomial:
\[
\ell_j^{(n)}(x) = \frac{(x - x_1^{(n)}) \cdots (x - x_{j-1}^{(n)})(x - x_{j+1}^{(n)}) \cdots (x - x_n^{(n)})}
{(x_j^{(n)} - x_1^{(n)}) \cdots (x_j^{(n)} - x_{j-1}^{(n)})(x_j^{(n)} - x_{j+1}^{(n)}) \cdots (x_j^{(n)} - x_n^{(n)})}
= \prod_{\substack{k=1 \\ k \ne j}}^{n} \frac{x - x_k^{(n)}}{x_j^{(n)} - x_k^{(n)}},
\tag{C.2}
\]
which is easily seen to vanish at every $x_i^{(n)}$, $i \ne j$, since for every such $i$ the numerator contains the
vanishing factor $(x_i^{(n)} - x_i^{(n)})$, and which equals unity for $x = x_j^{(n)}$.
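The defining property $\ell_j^{(n)}(x_i^{(n)}) = \delta_{ij}$ is straightforward to confirm from (C.2); a minimal NumPy sketch (with names of our own choosing) builds the cardinal functions on an arbitrary set of nodes and evaluates them at those same nodes:

```python
import numpy as np

def cardinal(x_nodes, j, x):
    # Lagrange cardinal function l_j(x) of Eq. (C.2)
    others = np.delete(x_nodes, j)
    return np.prod((x - others) / (x_nodes[j] - others))

nodes = np.array([0.2, 1.0, 2.5, 4.0])   # arbitrary distinct points
card = np.array([[cardinal(nodes, j, xi) for j in range(len(nodes))]
                 for xi in nodes])
# card is the identity matrix: l_j(x_i) = delta_ij
```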
Now, given a family of orthogonal polynomials $u_0(x), u_1(x), \ldots, u_n(x)$ such that
\[
\int_I u_j(x)\, u_k(x)\, w(x)\, dx = \|u_k\|_w^2\, \delta_{jk},
\tag{C.3}
\]
where we have introduced an interval $I$, a weight function $w(x)$, and the norm $\|u_k\|_w$, we can find
another expression for the cardinal function. If we let $x_1^{(n)}, \ldots, x_n^{(n)}$ be the roots of $u_n(x)$, we can
write the polynomial as
\[
u_n(x) = (x - x_1^{(n)}) \cdots (x - x_n^{(n)}) = \prod_{j=1}^{n} (x - x_j^{(n)})
= (x - x_k^{(n)}) \prod_{\substack{j=1 \\ j \ne k}}^{n} (x - x_j^{(n)});
\tag{C.4}
\]
its derivative will then be
\[
u_n'(x) = (x - x_2^{(n)}) \cdots (x - x_n^{(n)}) + (x - x_1^{(n)})(x - x_3^{(n)}) \cdots (x - x_n^{(n)}) + \cdots
= \sum_{j=1}^{n} \prod_{\substack{i=1 \\ i \ne j}}^{n} (x - x_i^{(n)}),
\tag{C.5}
\]
and, in particular,
\[
u_n'(x_k^{(n)}) = \prod_{\substack{j=1 \\ j \ne k}}^{n} (x_k^{(n)} - x_j^{(n)}).
\tag{C.6}
\]
If we look at the last expression for $u_n(x)$ in (C.4), we can see that
\[
\frac{u_n(x)}{u_n'(x_k^{(n)})} = (x - x_k^{(n)}) \prod_{\substack{j=1 \\ j \ne k}}^{n} \frac{x - x_j^{(n)}}{x_k^{(n)} - x_j^{(n)}}
\tag{C.7}
\]
and hence
\[
\ell_k^{(n)}(x) = \prod_{\substack{j=1 \\ j \ne k}}^{n} \frac{x - x_j^{(n)}}{x_k^{(n)} - x_j^{(n)}}
= \frac{u_n(x)}{u_n'(x_k^{(n)})\, (x - x_k^{(n)})};
\tag{C.8}
\]
by means of L'Hôpital's rule we can verify that $\ell_k^{(n)}(x_k^{(n)}) = 1$.
Calculating orthogonal polynomials

The polynomials themselves and all of their derivatives can be calculated via Eq. (C.4) at any point
if we know their roots. These, in turn, can be calculated by exploiting the fact that, if a family of
orthogonal polynomials obeys a three-term recurrence relation of the form
\[
u_{n+1}(x) = (x - \alpha_n)\, u_n(x) - \beta_n^2\, u_{n-1}(x),
\tag{C.9}
\]
with $u_{-1} = 0$ and $u_0(x) = 1$, then it can be proved by induction that the roots of $u_n(x)$ are the
eigenvalues of the Jacobi matrix [63,64]
\[
J = \begin{pmatrix}
\alpha_0 & \beta_1 & & \\
\beta_1 & \alpha_1 & \ddots & \\
& \ddots & \ddots & \beta_{n-1} \\
& & \beta_{n-1} & \alpha_{n-1}
\end{pmatrix}.
\tag{C.10}
\]
For the particular case of the (monic) Laguerre polynomials, we have $\alpha_n = 2n + 1$ and $\beta_n = n$.
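As a sketch of this procedure (plain NumPy; the comparison against `laggauss` is our own check, not part of the thesis code), the Jacobi matrix (C.10) for the Laguerre family can be diagonalized and its eigenvalues compared with the known Gauss-Laguerre nodes:

```python
import numpy as np

n = 5
# Jacobi matrix (C.10) for the monic Laguerre family: alpha_k = 2k + 1, beta_k = k
alpha = 2.0 * np.arange(n) + 1.0
beta = np.arange(1.0, n)
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
nodes = np.sort(np.linalg.eigvalsh(J))

# The eigenvalues coincide with the Gauss-Laguerre quadrature nodes
ref_nodes, _ = np.polynomial.laguerre.laggauss(n)
print(nodes)
```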
Differentiation matrices

To calculate the derivative of $f$ we do
\[
f'(x) = \sum_{j=1}^{n} f(x_j^{(n)})\, \frac{d}{dx}\, \ell_j^{(n)}(x),
\tag{C.11}
\]
which, if we introduce the derivative matrix $D^{(1)}$, whose elements are defined by [64,65,105]
\[
d_{ij}^{(1)} \equiv \left. \frac{d}{dx}\, \ell_j^{(n)} \right|_{x_i^{(n)}},
\tag{C.12}
\]
can be written as
\[
f'(x_i^{(n)}) = \sum_{j=1}^{n} d_{ij}^{(1)}\, f(x_j^{(n)}).
\tag{C.13}
\]
Straightforward differentiation of $\ell_j^{(n)}(x)$ yields
\[
\frac{d}{dx}\, \ell_j^{(n)}(x) = \frac{u_n'(x)\,(x - x_j^{(n)}) - u_n(x)}{u_n'(x_j^{(n)})\,(x - x_j^{(n)})^2},
\tag{C.14}
\]
and the off-diagonal elements of the matrix can be found immediately:
\[
d_{ij}^{(1)} = \frac{u_n'(x_i^{(n)})}{u_n'(x_j^{(n)})\,(x_i^{(n)} - x_j^{(n)})} \qquad \text{for } i \ne j.
\tag{C.15}
\]
To calculate the diagonal elements, we invoke L'Hôpital's rule once again in order to avoid the
indeterminacy:
\[
d_{jj}^{(1)} = \lim_{x \to x_j^{(n)}} \frac{u_n'(x)\,(x - x_j^{(n)}) - u_n(x)}{u_n'(x_j^{(n)})\,(x - x_j^{(n)})^2}
= \frac{1}{u_n'(x_j^{(n)})} \lim_{x \to x_j^{(n)}} \frac{u_n''(x)\,(x - x_j^{(n)}) + u_n'(x) - u_n'(x)}{2\,(x - x_j^{(n)})}
= \frac{u_n''(x_j^{(n)})}{2\, u_n'(x_j^{(n)})}.
\tag{C.16}
\]
Hence, if we express $f$ as a column vector, $\mathbf{f} = [f(x_1^{(n)}), \ldots, f(x_n^{(n)})]^T$, we obtain
\[
\mathbf{f}' = D^{(1)} \mathbf{f}.
\tag{C.17}
\]
It must be noted that, unlike those encountered in the finite-difference method, these differentiation matrices are not sparse. However, it can be proved [105] that, for smooth functions, the
accuracy attained grows much faster with $n$ than with finite differences. In some cases, the error falls
as $e^{-n}$, while the usual finite-difference algorithm converges at a rate $n^{-2}$. This treatment can be
extended to include higher-order derivatives [65] and many different boundary conditions [64]. We
will not delve into those here because the functions we interpolate only require first derivatives, but
we note that in some cases, and in particular in the case of Laguerre polynomials, the differentiation
matrices obey
\[
D^{(k)} = D^k.
\tag{C.18}
\]
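Eqs. (C.15)-(C.16) can be assembled compactly with the barycentric weights $w_i = 1/u_n'(x_i^{(n)})$; the diagonal then follows from the row-sum identity $\sum_j d_{ij}^{(1)} = 0$ (the interpolant differentiates a constant exactly), which is equivalent to the L'Hôpital result above. A minimal NumPy sketch, with names of our own choosing:

```python
import numpy as np

def diff_matrix(x):
    # Spectral differentiation matrix of Eqs. (C.15)-(C.16), written with the
    # barycentric weights w_i = 1/u_n'(x_i)
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, 1.0)
    w = 1.0 / dx.prod(axis=1)               # w_i = 1/u_n'(x_i)
    D = (w[None, :] / w[:, None]) / dx      # Eq. (C.15) for i != j
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))     # diagonal from the row-sum identity
    return D

x, _ = np.polynomial.laguerre.laggauss(6)   # six interpolation nodes
D = diff_matrix(x)
df = D @ x**3    # exact for polynomials of degree < 6; equals 3 x^2
```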
Gaussian quadrature

Given the interval and weight function introduced in Eq. (C.3), we can integrate our function $f$ by
a similar process [62,65]:
\[
\int_I f(x)\, w(x)\, dx \approx \sum_{j=1}^{n} f(x_j^{(n)}) \int_I \ell_j^{(n)}(x)\, w(x)\, dx
\equiv \sum_{j=1}^{n} f(x_j^{(n)})\, w_j^{(n)},
\tag{C.19}
\]
where we have introduced the weights $w_j^{(n)}$. To find the weights we can use the fact that $\ell_j^{(n)}(x_i^{(n)}) = \delta_{ij}$ and write
\[
\begin{aligned}
w_j^{(n)} &= \frac{1}{u_{n-1}(x_j^{(n)})} \sum_{i=1}^{n} \ell_j^{(n)}(x_i^{(n)})\, u_{n-1}(x_i^{(n)})\, w_i^{(n)} \\
&= \frac{1}{u_{n-1}(x_j^{(n)})} \int_I \ell_j^{(n)}(x)\, u_{n-1}(x)\, w(x)\, dx \\
&= \frac{1}{(u_{n-1} u_n')(x_j^{(n)})} \int_I \frac{u_n(x)}{x - x_j^{(n)}}\, u_{n-1}(x)\, w(x)\, dx.
\end{aligned}
\tag{C.20}
\]
Now, we can determine the leading coefficient of each polynomial in the family. In other words, if
\[
u_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots
\qquad \text{and} \qquad
u_{n-1}(x) = b_{n-1} x^{n-1} + b_{n-2} x^{n-2} + \cdots,
\tag{C.21}
\]
we can find the numbers $a_n$ and $b_{n-1}$; furthermore, we find the first term of the quotient $u_n(x)/(x - x_j^{(n)})$ by long division and obtain
\[
\frac{u_n(x)}{x - x_j^{(n)}} = a_n x^{n-1} + \cdots \equiv \frac{a_n}{b_{n-1}}\, u_{n-1}(x) + q_j(x) \equiv A_j\, u_{n-1}(x) + q_j(x),
\tag{C.22}
\]
and, since $q_j$ is a polynomial of degree $(n-2)$ at most, it can be expanded in terms of $u_0, \ldots, u_{n-2}$.
But all of the resulting terms are orthogonal to $u_{n-1}$, and thus we obtain
\[
w_j^{(n)} = \frac{A_j}{(u_{n-1} u_n')(x_j^{(n)})} \int_I u_{n-1}(x)\, u_{n-1}(x)\, w(x)\, dx
= \frac{A_j}{(u_{n-1} u_n')(x_j^{(n)})}\, \|u_{n-1}\|_w^2.
\tag{C.23}
\]
Most of the time we will be concerned with the Laguerre polynomials, for which $I = [0,\infty)$ and
$w(x) = e^{-x}$. It can be proved that [62]
\[
L_n(x) = \sum_{k=0}^{n} \binom{n}{n-k} \frac{(-1)^k}{k!}\, x^k,
\tag{C.24}
\]
whence $A_j = -(n-1)!/n! = -1/n$, and
\[
\int_0^\infty (L_n(x))^2\, e^{-x}\, dx = 1.
\tag{C.25}
\]
Combining these results we get
\[
w_j^{(n)} = -\frac{1}{n} \left( L_{n-1}(x_j^{(n)})\, L_n'(x_j^{(n)}) \right)^{-1}
= \frac{1}{x_j^{(n)}} \left( L_n'(x_j^{(n)}) \right)^{-2},
\tag{C.26}
\]
where we used the recurrence relation $x L_n' = n(L_n - L_{n-1})$ to arrive at the last expression.
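Formula (C.26) is easy to verify against NumPy's built-in Gauss-Laguerre rule (our own check, with hypothetical names):

```python
import numpy as np
from numpy.polynomial import laguerre

n = 6
x, w_ref = laguerre.laggauss(n)                 # reference nodes and weights
Ln_prime = laguerre.Laguerre.basis(n).deriv()   # the derivative L_n'(x)
w = 1.0 / (x * Ln_prime(x)**2)                  # Eq. (C.26)
```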
APPENDIX D
MATHEMATICAL DETAILS OF PIMC SIMULATIONS
The two-body density matrix
The ratios in Eq. (5.20) can be calculated to a very good approximation because in these two-body collisions
the interaction potential depends only on the relative distance. Let us consider the ratio corresponding to any two particles, which we will call 1 and 2. If we introduce the center-of-mass and relative
coördinates and momenta
\[
\mathbf{X} \equiv \tfrac{1}{2}(\mathbf{x}_1 + \mathbf{x}_2), \qquad
\mathbf{x} \equiv \mathbf{x}_1 - \mathbf{x}_2, \qquad
\mathbf{K} \equiv \boldsymbol{\kappa}_1 + \boldsymbol{\kappa}_2, \qquad
\boldsymbol{\kappa} \equiv \tfrac{1}{2}(\boldsymbol{\kappa}_1 - \boldsymbol{\kappa}_2),
\tag{D.1}
\]
we can rewrite the two-body Hamiltonian as
\[
\tilde{H} = \kappa_1^2 + \kappa_2^2 + x_1^2 + x_2^2 + v(|\mathbf{x}_1 - \mathbf{x}_2|)
= \left( \tfrac{1}{2} K^2 + 2 X^2 \right) + \left( 2\kappa^2 + v(x) \right) + \tfrac{1}{2} x^2
\equiv H + h + \tfrac{1}{2} x^2.
\tag{D.2}
\]
Since the inverse temperature $\tau$ is small, we can use the “Trotter breakup” [44,45,89] (that is, we
can neglect the commutators between momentum and position operators, as we did when we derived
Eq. (3.62)) to turn the numerator of Eq. (5.20) into
\[
\begin{aligned}
\rho_2(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_1', \mathbf{x}_2'; \tau) = \rho_2(\mathbf{X}, \mathbf{x}, \mathbf{X}', \mathbf{x}'; \tau)
&= \langle \mathbf{X}, \mathbf{x} |\, e^{-\tau \tilde{H}} \,| \mathbf{X}', \mathbf{x}' \rangle \\
&\approx \langle \mathbf{X} |\, e^{-\tau H} \,| \mathbf{X}' \rangle\, \langle \mathbf{x} |\, e^{-\frac{1}{4}\tau x^2}\, e^{-\tau h}\, e^{-\frac{1}{4}\tau x^2} \,| \mathbf{x}' \rangle \\
&= \Xi(\mathbf{x})\, \langle \mathbf{X} |\, e^{-\tau H} \,| \mathbf{X}' \rangle\, \langle \mathbf{x} |\, e^{-\tau h} \,| \mathbf{x}' \rangle\, \Xi(\mathbf{x}'),
\end{aligned}
\tag{D.3}
\]
where $\Xi(\mathbf{x}) \equiv e^{-\frac{1}{4}\tau x^2} = e^{-\frac{1}{4}\tau(\xi^2 + \lambda^2\eta^2)}$. (Note that we separated $e^{-\frac{1}{2}\tau x^2}$ into two factors in order to
include contributions from both $\mathbf{x}$ and $\mathbf{x}'$; this we also did when deriving (3.62).) The denominator
of Eq. (5.20) can also be calculated by defining center-of-mass and reduced coördinates [33]. The
center-of-mass terms cancel out, and we are left with
\[
\frac{\rho_2(\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_1', \mathbf{x}_2'; \tau)}{\rho_1(\mathbf{x}_1, \mathbf{x}_1'; \tau)\, \rho_1(\mathbf{x}_2, \mathbf{x}_2'; \tau)}
= \Xi(\mathbf{x})\, \frac{\rho_{\mathrm{HC},1/2}(\mathbf{x}, \mathbf{x}'; \tau)}{\rho_{\mathrm{HO},1/2}(\mathbf{x}, \mathbf{x}'; \tau)}\, \Xi(\mathbf{x}'),
\tag{D.4}
\]
where we have used the subscripts HC and HO to refer to the hard-core and harmonic-oscillator
density matrices respectively; the $1/2$ emphasizes that these are the reduced-mass matrices; in
particular, the ideal-gas element in the denominator will have the form (2.31) with the $\frac{1}{4}$ replaced
by $\frac{1}{8}$.
The hard-core density matrix for the relative coördinate can be found by assuming that the
Hamiltonian has eigenfunctions of the form $\psi_{kl}(x)\, Y_l^m(\vartheta,\varphi)$, where the angular dependence is carried
by spherical harmonics. The principal quantum number $k$ is continuous, since the hard-sphere
potential has no bound states, and thus we can try a partial-wave expansion of the form [127,150]
\[
\rho_{\mathrm{HC}}(\mathbf{x}, \mathbf{x}'; \tau) = \sum_{l=0}^{\infty} \frac{2l+1}{4\pi}\, P_l(\cos\gamma)
\int_0^\infty dk\; e^{-2\tau k^2}\, \psi_{kl}(x)\, \psi_{kl}(x'),
\tag{D.5}
\]
where we used the addition theorem for spherical harmonics to express the angles in terms of the
relative angle $\gamma$ defined by $\cos\gamma = \cos\vartheta\cos\vartheta' + \sin\vartheta\sin\vartheta'\cos(\varphi - \varphi')$. The density matrix (D.5)
has to obey the Bloch equation [15] (see Section 2.2)
\[
h\, \rho_{\mathrm{HC}} = -\frac{\partial \rho_{\mathrm{HC}}}{\partial \tau},
\tag{D.6}
\]
and this requirement forces the eigenfunctions to obey
\[
-\tilde\nabla^2 \psi_{kl} + \frac{1}{2} v(x)\, \psi_{kl}
= -\left( \frac{d^2}{dx^2} + \frac{2}{x}\frac{d}{dx} - \frac{l(l+1)}{x^2} \right) \psi_{kl} + \frac{1}{2} v(x)\, \psi_{kl}
= k^2\, \psi_{kl},
\tag{D.7}
\]
where we used the eigenvalue equation for the spherical harmonics to divide out the angular dependence. The hard-sphere potential v(x) is infinite for x ≤ α, where α ≡ a0 /x0 , and zero otherwise, and this transforms (D.7) into a free-particle Schrödinger equation with the boundary condition ψkl (α) = 0 [151]. In three dimensions, the solution to this Schrödinger equation is a linear
combination of spherical Bessel and Neumann functions [62],
\[
\psi_{kl}(x) = A \left( \cos\delta_l\, j_l(kx) - \sin\delta_l\, n_l(kx) \right),
\tag{D.8}
\]
and the phase shifts $\delta_l$ can be found by incorporating the boundary condition on the wavefunction:
\[
\tan\delta_l = \frac{j_l(k\alpha)}{n_l(k\alpha)}.
\tag{D.9}
\]
The density matrix also has to obey the “initial” condition [15]
\[
\rho_{\mathrm{HC}}(\mathbf{x}, \mathbf{x}'; 0) = \delta^{(3)}(\mathbf{x} - \mathbf{x}')
= \frac{1}{x x'}\, \delta(x - x') \sum_{l=0}^{\infty} \frac{2l+1}{4\pi}\, P_l(\cos\gamma);
\tag{D.10}
\]
comparison with the closure equation for the spherical Bessel functions [62],
\[
\int_0^\infty dk\; k^2\, \sqrt{\frac{2}{\pi}}\, j_l(kx)\, \sqrt{\frac{2}{\pi}}\, j_l(kx') = \frac{1}{x x'}\, \delta(x - x'),
\tag{D.11}
\]
yields $A = k\sqrt{2/\pi}$ for the normalization constant. The Gaussian factor in (D.5) suggests that
the numerical integration be performed using Gauss-Hermite quadrature, a process similar to the
Gauss-Laguerre quadrature we study in Appendix C. (In fact, in order to save time the program
calculates the matrix in a uniform mesh and then interpolates this table of values in subsequent
steps.)
Generating new configurations: The Lévy construction

In their original paper [132], Metropolis et al. generated new configurations by selecting a single
sphere and giving it a random displacement within an optimal range. In simulations like ours,
where we have many more atoms than there were in Ref. 132, and where moreover we have multiple
time slices, the only way to execute a calculation in a reasonable period of time is by generating
multiparticle, multislice configurations, in effect “sending a ripple through the system” instead of
moving just one particle at a time [152,153]; Krauth's algorithm uses a threading procedure,
introduced by Pollock and Ceperley [45,127] and based on previous work by P. Lévy [154], that in
the absence of interactions generates independent configurations that always pass the Metropolis
test.
The integrand of (5.3), we recall, is
\[
I \equiv \rho(\mathbf{R}, \mathbf{R}_2; \tau)\, \rho(\mathbf{R}_2, \mathbf{R}_3; \tau) \cdots \rho(\mathbf{R}_k, \mathbf{R}_{k+1}; \tau) \cdots \rho(\mathbf{R}_{\ell-1}, \mathbf{R}_\ell; \tau)\, \rho(\mathbf{R}_\ell, \mathbf{R}'; \tau),
\tag{D.12}
\]
Figure D.1. The Lévy construction can be used to generate the new positions in a configuration
move. Given a point $x_2$ and the (fixed) end position $x'$, the Lévy construction prescribes the
midpoint $\langle x_3 \rangle$ and the width $\sigma_3$ of the Gaussian distribution obeyed by $x_3$ in the case of an ideal
trapped gas. Each of the intermediate points depicted is permissible, but the one marked with
the larger empty circle and the dashed line will be preferred. The point $x_2$ has been previously
constructed using $x$ (also fixed) and $x'$; $x_4$ will be found based on $x_3$ and $x'$, and so on until the
chain is complete.
and we can rewrite it as [127]
\[
\begin{aligned}
I = \rho(\mathbf{R}, \mathbf{R}'; \tilde\beta)
&\left[ \frac{\rho(\mathbf{R}, \mathbf{R}_2; \tau)\, \rho(\mathbf{R}_2, \mathbf{R}'; \tilde\beta - \tau)}{\rho(\mathbf{R}, \mathbf{R}'; \tilde\beta)} \right]
\left[ \frac{\rho(\mathbf{R}_2, \mathbf{R}_3; \tau)\, \rho(\mathbf{R}_3, \mathbf{R}'; \tilde\beta - 2\tau)}{\rho(\mathbf{R}_2, \mathbf{R}'; \tilde\beta - \tau)} \right] \times \cdots \\
&\times \left[ \frac{\rho(\mathbf{R}_k, \mathbf{R}_{k+1}; \tau)\, \rho(\mathbf{R}_{k+1}, \mathbf{R}'; \tilde\beta - k\tau)}{\rho(\mathbf{R}_k, \mathbf{R}'; \tilde\beta - (k-1)\tau)} \right] \times \cdots
\times \left[ \frac{\rho(\mathbf{R}_\ell, \mathbf{R}'; \tau)\, \rho(\mathbf{R}', \mathbf{R}'; \tilde\beta - \ell\tau)}{\rho(\mathbf{R}_\ell, \mathbf{R}'; \tilde\beta - (\ell-1)\tau)} \right].
\end{aligned}
\tag{D.13}
\]
The terms that we added cancel in pairs, and a glance at the composition property (5.1) in the form
\[
\rho(\mathbf{R}, \mathbf{R}'; \tilde\beta) = \int d^{N\sigma} R''\; \rho(\mathbf{R}, \mathbf{R}''; \tilde\beta - \tilde\beta')\, \rho(\mathbf{R}'', \mathbf{R}'; \tilde\beta')
\tag{D.14}
\]
shows that every bracketed term is normalized to 1. Let us look more closely at one of those terms,
\[
\Gamma_{k+1} \equiv \frac{\rho(\mathbf{R}_k, \mathbf{R}_{k+1}; \tau)\, \rho(\mathbf{R}_{k+1}, \mathbf{R}'; \tilde\beta - k\tau)}{\rho(\mathbf{R}_k, \mathbf{R}'; \tilde\beta - (k-1)\tau)},
\tag{D.15}
\]
say, and think of it as a prescription for locating $\mathbf{R}_{k+1}$ (or, more precisely, as $N\sigma$ prescriptions for
locating each of the points in that time slice). Apart from the index $k$ and the usual parameters
$\tilde\beta$ and $\tau$, the only dependence is on $\mathbf{R}_k$ (the time slice immediately before $\mathbf{R}_{k+1}$) and the final
time slice $\mathbf{R}'$; in principle, then, we should be able to locate $\mathbf{R}_{k+1}$ given those two time slices. In
that sense, and given its unit normalization, we can interpret (D.15) as a conditional probability
distribution.
For the case of the ideal trapped gas, it can be shown after a lot of algebra that Eq. (D.15), upon
substitution of the density matrix (2.31), becomes
\[
\Gamma_{k+1} \propto \exp\!\left[ -\frac{1}{2} \left( \coth 2\tau + \coth 2(\tilde\beta - k\tau) \right)
\left( \mathbf{R}_{k+1} - \frac{\mathbf{R}_k \operatorname{csch} 2\tau + \mathbf{R}' \operatorname{csch} 2(\tilde\beta - k\tau)}{\coth 2\tau + \coth 2(\tilde\beta - k\tau)} \right)^{\!2} \right]
\tag{D.16}
\]
in the $x$- and $y$-directions; in the anisotropic case, for the $z$-direction we must substitute $2 \to 2\lambda$
everywhere except in the exponent at the end. Thus, as would be expected, the distribution (D.15)
for the ideal gas is Gaussian, and if we relocate the time slice to a random configuration obeying the
normal distribution (D.16) we will have a new configuration that will always pass the Metropolis
test.
In other words, it is possible to generate a multiparticle, multislice move that will exactly obey
the probability distribution of the ideal gas by making each $x_i$ in the $k$th slice be a Gaussian random
number with standard deviation
\[
\sigma_k = \frac{1}{\sqrt{\coth 2\tau + \coth 2(\tilde\beta - (k-1)\tau)}}
\tag{D.17}
\]
centered around
\[
\langle x_{i,k} \rangle = \frac{x_{i,k-1} \operatorname{csch} 2\tau + x_i' \operatorname{csch} 2(\tilde\beta - (k-1)\tau)}{\coth 2\tau + \coth 2(\tilde\beta - (k-1)\tau)}.
\tag{D.18}
\]
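A minimal sketch of the resulting sampler (plain Python; the function names and the single-coordinate setup are our own simplification of the multiparticle move):

```python
import math
import random

def levy_step(x_prev, x_end, tau, beta_rem):
    # Mean and width of the Gaussian for the next slice, Eqs. (D.17)-(D.18);
    # beta_rem is the imaginary time still separating us from the fixed end x_end
    denom = 1.0 / math.tanh(2.0 * tau) + 1.0 / math.tanh(2.0 * beta_rem)
    mean = (x_prev / math.sinh(2.0 * tau)
            + x_end / math.sinh(2.0 * beta_rem)) / denom
    return mean, 1.0 / math.sqrt(denom)

def levy_chain(x_start, x_end, tau, n_slices, rng):
    # Thread a chain of intermediate slices between the two fixed ends
    xs = [x_start]
    for k in range(1, n_slices):
        beta_rem = (n_slices - k) * tau
        mean, sigma = levy_step(xs[-1], x_end, tau, beta_rem)
        xs.append(rng.gauss(mean, sigma))
    xs.append(x_end)
    return xs

rng = random.Random(1)
chain = levy_chain(0.0, 0.5, 0.1, 8, rng)   # 7 sampled slices plus the two ends
```

With $x_{\mathrm{prev}} = x_{\mathrm{end}}$ and equal time intervals, (D.18) reduces to $\langle x \rangle = x_{\mathrm{end}}/\cosh 2\tau$, a convenient sanity check.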
BIBLIOGRAPHY
[1] E. A. Cornell and C. E. Wieman, Rev. Mod. Phys. 74, 875 (2002).
[2] K. B. Davis et al., Phys. Rev. Lett. 75, 3969 (1995).
[3] W. Ketterle, Rev. Mod. Phys. 74, 1131 (2002).
[4] D. S. Hall, Am. J. Phys. 71, 649 (2003).
[5] C. Foot, Nature 375, 447 (1995).
[6] M. H. Anderson et al., Science 269, 198 (1995).
[7] C. C. Bradley et al., Phys. Rev. Lett. 75, 1687 (1995).
[8] M. R. Andrews et al., Science 275, 637 (1997).
[9] A. J. Leggett, in The New Physics, edited by P. Davies (Cambridge University Press, Cambridge, 1989).
[10] A. Einstein, Sitzber. Kgl. Preuss. Akad. Wiss. 261 (1924).
[11] A. Einstein, Sitzber. Kgl. Preuss. Akad. Wiss. 3 (1925).
[12] S. N. Bose, Z. Phys. 26, 178 (1924).
[13] A. J. Leggett, Rev. Mod. Phys. 73, 307 (2001).
[14] R. K. Pathria, Statistical Mechanics (Pergamon, New York, 1972).
[15] R. P. Feynman, Statistical Mechanics (W. A. Benjamin, Reading, MA, 1972).
[16] A. Pais, “Subtle is the Lord. . . ”—The Science and the Life of Albert Einstein (Oxford University Press, Oxford, 1982).
[17] J. P. Wolfe, J.-L. Lin, and D. W. Snoke, in Bose-Einstein Condensation, edited by A. Griffin,
D. W. Snoke, and S. Stringari (Cambridge University Press, Cambridge, 1993).
[18] J.-L. Lin and J. P. Wolfe, Phys. Rev. Lett. 71, 1222 (1993).
[19] F. London, Nature 141, 643 (1938).
[20] F. Pereira Dos Santos et al., Phys. Rev. Lett. 86, 3459 (2001).
[21] C. E. Hecht, Physica 25, 1159 (1959).
[22] H. F. Hess et al., Phys. Rev. Lett. 51, 483 (1983).
[23] V. Bagnato, D. E. Pritchard, and D. Kleppner, Phys. Rev. A 35, 4354 (1987).
[24] V. Bagnato and D. Kleppner, Phys. Rev. A 44, 7439 (1991).
[25] D. G. Fried et al., Phys. Rev. Lett. 81, 3811 (1998).
[26] D. Kleppner, Phys. Today 52, 11 (1999).
[27] P. C. Hohenberg, Phys. Rev. 158, 383 (1967).
[28] W. J. Mullin, J. Low Temp. Phys. 106, 615 (1997).
[29] A. I. Safonov, S. A. Vasilyev, I. S. Yasnikov, I. I. Lukashevich, and S. Jaakkola, Phys. Rev.
Lett. 81, 4545 (1998).
[30] A. Görlitz et al., Phys. Rev. Lett. 87, 130402 (2001).
[31] S. Pearson, T. Pang, and C. Chen, Phys. Rev. A 58, 4811 (1998). See the comment following
Ref. 125.
[32] S. D. Heinrichs and W. J. Mullin, J. Low Temp. Phys. 113, 231 (1998); ibid. 114, 571 (1999).
[33] S. D. Heinrichs, Two dimensional Bose systems in a harmonic potential, Master’s thesis,
University of Massachusetts (1997).
[34] J. M. Kosterlitz and D. J. Thouless, J. Phys. C 6, 1181 (1973).
[35] S. I. Shevchenko, Sov. Phys. JETP 73, 1009 (1991).
[36] M. Girardeau, Phys. Fluids 5, 1468 (1962).
[37] M. Van den Berg and J. T. Lewis, Commun. Math. Phys. 81, 475 (1981).
[38] P. Nozières, in Bose-Einstein Condensation, edited by A. Griffin, D. W. Snoke, and S. Stringari
(Cambridge University Press, Cambridge, 1993), p. 15.
[39] Y. Kagan, B. V. Svistunov, and G. V. Shlyapnikov, Sov. Phys. JETP 66, 314 (1987).
[40] Y. Kagan, V. A. Kashurnikov, A. V. Krasavin, N. V. Prokof’ev, and B. V. Svistunov, Phys.
Rev. A 61, 043608 (2000).
[41] D. V. Fil and S. I. Shevchenko, Phys. Rev. A 64, 013607 (2001).
[42] R. K. Bhaduri, S. M. Reimann, S. Viefers, A. Ghose Choudhury, and M. K. Srivastava, J.
Phys. B 33, 3895 (2000).
[43] W. J. Mullin, J. Low Temp. Phys. 110, 167 (1998).
[44] W. Krauth, Phys. Rev. Lett. 77, 3695 (1996).
[45] D. M. Ceperley, Rev. Mod. Phys. 67, 279 (1995).
[46] J. J. Sakurai, Modern Quantum Mechanics (Addison-Wesley, Reading, MA, 1994), revised ed.
[47] R. Kubo, Statistical Mechanics (North-Holland, Amsterdam, 1971).
[48] L. D. Landau and E. M. Lifshitz, Statistical Mechanics (Addison-Wesley, Reading, MA, 1969),
2nd ed.
[49] O. Penrose and L. Onsager, Phys. Rev. 104, 576 (1956).
[50] R. M. Ziff, G. E. Uhlenbeck, and M. Kac, Phys. Rep. 32C, 169 (1977).
[51] W. Ketterle and N. J. van Druten, Phys. Rev. A 54, 656 (1996).
[52] R. K. Pathria, Phys. Rev. A 58, 1490 (1998).
[53] W. J. Mullin, Am. J. Phys. 68, 120 (2000).
[54] F. Dalfovo, S. Giorgini, L. P. Pitaevskiı̆, and S. Stringari, Rev. Mod. Phys. 71, 463 (1999).
[55] S. Grossman and M. Holthaus, Phys. Lett. 208, 188 (1995).
[56] T. Haugset, H. Haugerud, and J. O. Andersen, Phys. Rev. A 55, 2922 (1997).
[57] K. Damle, T. Senthil, S. N. Majumdar, and S. Sachdev, Europhys. Lett. 36, 7 (1996).
[58] N. J. van Druten and W. Ketterle, Phys. Rev. Lett. 79, 549 (1997).
[59] W. J. Mullin and J. P. Fernández, Am. J. Phys. 71, 661 (2003).
[60] M. Naraschewski and R. J. Glauber, Phys. Rev. A 59, 4595 (1999).
[61] B. P. van Zyl, J. Low Temp. Phys. 132, 207 (2003).
[62] G. Arfken, Mathematical Methods for Physicists (Academic Press, San Diego, 1985), 3rd ed.
[63] W. H. Press, S. A. Teukolsky, W. A. Vetterling, and B. P. Flannery, Numerical Recipes in C
(Cambridge University Press, Cambridge, 1992), 2nd ed.
[64] J. A. C. Weideman and S. C. Reddy, ACM Trans. Math. Software 26, 465 (2000).
[65] D. Funaro, Polynomial Approximation of Differential Equations (Springer, Berlin, 1992).
[66] K. K. Das, Phys. Rev. A 66, 053612 (2002).
[67] B. Tanatar, A. Minguzzi, P. Vignolo, and M. P. Tosi, Phys. Lett. A 302, 131 (2002).
[68] L. V. Hau et al., Phys. Rev. A 58, R54 (1998).
[69] M. Holzmann, W. Krauth, and M. Naraschewski, Phys. Rev. A 59, 2956 (1999).
[70] T. Bergeman, D. L. Feder, N. L. Balazs, and B. I. Schneider, Phys. Rev. A 61, 063605 (2000).
[71] N. V. Prokof’ev and B. V. Svistunov, Phys. Rev. A 66, 043608 (2002).
[72] N. Bogoliubov, J. Phys. USSR 11, 23 (1947).
[73] A. L. Fetter, Ann. Phys. (N.Y.) 70, 67 (1972).
[74] A. Griffin, Phys. Rev. B 53, 9341 (1996).
[75] E. P. Gross, Nuovo Cim. 20, 454 (1961).
[76] L. P. Pitaevskiı̆, Sov. Phys. JETP 13, 451 (1961).
[77] G. Baym and C. Pethick, Phys. Rev. Lett. 76, 6 (1996).
[78] A. L. Fetter, in Proceedings of the Enrico Fermi International Summer School on Bose-Einstein
Condensation, edited by M. Inguscio, S. Stringari, and C. E. Wieman (IOS Press, Washington,
1999).
[79] K. Huang and C. N. Yang, Phys. Rev. 105, 767 (1957).
[80] W. J. Mullin, M. Holzmann, and F. Laloë, J. Low Temp. Phys. 121, 263 (2000).
[81] D. S. Petrov, M. Holzmann, and G. V. Shlyapnikov, Phys. Rev. Lett. 84, 2551 (2000).
[82] M. D. Lee and S. A. Morgan, J. Phys. B 35, 3009 (2002).
[83] M. Houbiers and H. T. C. Stoof, Phys. Rev. A 54, 5055 (1996).
[84] A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill,
New York, 1971).
[85] P. Öhberg et al., Phys. Rev. A 56, R3346 (1997).
[86] F. Dalfovo, S. Giorgini, M. Guilleumas, L. P. Pitaevskiı̆, and S. Stringari, Phys. Rev. A 56,
3840 (1997).
[87] F. Dalfovo and S. Stringari, Phys. Rev. A 53, 2477 (1996).
[88] S. Giorgini, L. P. Pitaevskiĭ, and S. Stringari, J. Low Temp. Phys. 109, 309 (1997).
[89] M. Holzmann and Y. Castin, Eur. Phys. J. D 7, 425 (1999).
[90] J. des Cloizeaux, in Problème à N corps (Many-Body Physics), Les Houches 1967, edited by
C. DeWitt and R. Balian (Gordon and Breach, New York, 1968).
[91] D. A. W. Hutchinson, E. Zaremba, and A. Griffin, Phys. Rev. Lett. 78, 1842 (1997).
[92] R. J. Dodd, M. Edwards, C. W. Clark, and K. Burnett, Phys. Rev. A 57, R32 (1998).
[93] C. Gies, B. P. van Zyl, S. A. Morgan, and D. A. W. Hutchinson (2003), cond-mat/0308177.
[94] M. Edwards and K. Burnett, Phys. Rev. A 51, 1382 (1995).
[95] S. K. Adhikari, Phys. Rev. E 62, 2937 (2000).
[96] V. V. Goldman, I. F. Silvera, and A. J. Leggett, Phys. Rev. B 24, 2870 (1981).
[97] T. Haugset and H. Haugerud, Phys. Rev. A 57, 3809 (1998).
[98] L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A 65, 043614 (2002).
[99] Y. Castin and R. Dum, Eur. Phys. J. D 7, 399 (1999).
[100] B. P. van Zyl, R. K. Bhaduri, and J. Sigetich, J. Phys. B 35, 1251 (2002).
[101] C. E. Campbell, J. G. Dash, and M. Schick, Phys. Rev. Lett. 26, 966 (1971).
[102] S. Giorgini, L. P. Pitaevskiĭ, and S. Stringari, Phys. Rev. A 54, R4633 (1996).
[103] D. A. Huse and E. D. Siggia, J. Low Temp. Phys. 46, 137 (1982).
[104] J. P. Fernández and W. J. Mullin, J. Low Temp. Phys. 128, 233 (2002).
[105] L. N. Trefethen, Spectral Methods in Matlab (SIAM, Philadelphia, 2000).
[106] P. Muruganandam and S. K. Adhikari, J. Phys. B 36, 2501 (2003).
[107] H. Shi and W.-M. Zheng, Phys. Rev. A 56, 1046 (1997).
[108] R. V. E. Lovelace and T. J. Tommila, Phys. Rev. A 35, 3597 (1987).
[109] D. J. Thouless, in The New Physics, edited by P. Davies (Cambridge University Press, Cambridge, 1989).
[110] M. Van den Berg, Phys. Lett. 78A, 88 (1980).
[111] M. Naraschewski and D. M. Stamper-Kurn, Phys. Rev. A 58, 2423 (1998).
[112] A. Minguzzi, S. Conti, and M. P. Tosi, J. Phys.: Condens. Matter 9, 33 (1997).
[113] M. Bayindir and B. Tanatar, Phys. Rev. A 58, 3134 (1998).
[114] J. G. Kim and E. K. Lee, J. Phys. B 32, 5575 (1999).
[115] L. Reatto and G. V. Chester, Phys. Rev. 155, 88 (1967).
[116] J. G. Kim, K. S. Kang, B. S. Kim, and E. K. Lee, J. Phys. B 33, 2559 (2000).
[117] J. P. Fernández and W. J. Mullin, The Bose-Einstein condensate in two dimensions (2001),
poster presented at the European Research Conference on Bose-Einstein Condensation.
[118] D. S. Fisher and P. C. Hohenberg, Phys. Rev. B 37, 4936 (1988).
[119] N. V. Prokof’ev, O. A. Ruebenacker, and B. V. Svistunov, Phys. Rev. Lett. 87, 270402 (2001).
[120] V. N. Popov, Functional Integrals in Quantum Field Theory and Statistical Physics (Reidel,
Dordrecht, 1983).
[121] U. A. Khawaja, J. O. Andersen, N. P. Proukakis, and H. T. C. Stoof, Phys. Rev. A 66, 013615
(2002).
[122] P. G. de Gennes, Superconductivity of Metals and Alloys (W. A. Benjamin, New York, 1966).
[123] T. Bergeman, Phys. Rev. A 55, 3658 (1997).
[124] S. I. Shevchenko, Sov. J. Low Temp. Phys. 18, 223 (1992).
[125] S. Pearson, T. Pang, and C. Chen, Phys. Rev. A 58, 4796 (1998). It was later found that this
paper underestimates the effect of interactions; see W. Krauth, cond-mat/9812328.
[126] D. M. Ceperley and E. L. Pollock, Phys. Rev. Lett. 56, 351 (1986).
[127] E. L. Pollock and D. M. Ceperley, Phys. Rev. B 30, 2555 (1984).
[128] M. Holzmann and W. Krauth, Phys. Rev. Lett. 83, 2687 (1999).
[129] D. M. Ceperley and E. L. Pollock, Can. J. Phys. 65, 1416 (1987).
[130] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics (Springer,
Berlin, 1997), 3rd ed.
[131] H. Gould and J. Tobochnik, An Introduction to Computer Simulation Methods (Addison-Wesley, Reading, MA, 1996), 2nd ed.
[132] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem.
Phys. 21, 1087 (1953).
[133] M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics (Clarendon
Press, Oxford, 1999).
[134] W. Krauth, “Introduction to Monte Carlo algorithms”, unpublished (1996).
[135] J. A. Barker, J. Chem. Phys. 70, 2914 (1979).
[136] M. Holzmann, La transition de Bose-Einstein dans un gaz dilué (The Bose-Einstein transition in a dilute gas), Ph.D. thesis, Université Paris
VI (2000).
[137] W. J. Mullin, S. D. Heinrichs, and J. P. Fernández, Physica B 284–288, 9 (2000).
[138] E. Lieb and R. Seiringer, Phys. Rev. Lett. 88, 170409 (2002).
[139] V. Menon and K. Pingali, in Proceedings of the 2nd USENIX Conference on Domain-Specific
Languages, edited by T. Ball (1999).
[140] S. Stringari, Phys. Rev. Lett. 76, 1405 (1996).
[141] P. Sindzingre, M. L. Klein, and D. M. Ceperley, Phys. Rev. Lett. 63, 1601 (1989).
[142] D. M. Ceperley and E. L. Pollock, Phys. Rev. B 39, 2084 (1989).
[143] J. E. Robinson, Phys. Rev. 83, 678 (1951).
[144] W. Magnus, F. Oberhettinger, and R. P. Soni, Formulas and Theorems for the Special Functions of Mathematical Physics (Springer, Berlin, 1966), 3rd ed.
[145] S. Flügge, Practical Quantum Mechanics, vol. 1 (Springer, Berlin, 1971).
[146] M. Linn and A. L. Fetter, Phys. Rev. A 60, 4910 (1999).
[147] T. Matsubara, Prog. Theor. Phys. 6, 714 (1951).
[148] G. C. Wick, Phys. Rev. 80, 268 (1950).
[149] T. Matsubara, Prog. Theor. Phys. 14, 351 (1955).
[150] S. Y. Larsen, K. Witte, and J. E. Kilpatrick, J. Chem. Phys. 44, 213 (1966).
[151] S. Y. Larsen, J. Chem. Phys. 48, 1701 (1968).
[152] M. Takahashi and M. Imada, J. Phys. Soc. Jpn. 53, 963 (1984).
[153] P. K. MacKeown, Am. J. Phys. 53, 880 (1985).
[154] P. Lévy, Compositio Math. 7, 283 (1939).