Cosmology Notes: Steady State vs. the Big Bang.

Expansion: Hubble’s results and Friedmann’s solution to the equations of GTR. Hubble
obtained a nice linear correlation between distance and red shift for a range of relatively
“nearby” galaxies; reading the red shift (plausibly) as speed of recession, we get a view
of the universe according to which it is expanding uniformly—a view which fits perfectly
with Friedmann's expanding-space solution to the GTR equations. The Hubble time is
defined as the time it would take, if we simply ran the present rate of expansion
backwards, for all the galaxies (on average) to end up in the same place (where they
started, on the big bang model).
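A quick back-of-the-envelope sketch of this definition (mine, not from the notes; the Hubble constant of roughly 70 km/s/Mpc is an assumed round value): the Hubble time is just 1/H0 once the units are sorted out.

# Rough Hubble-time estimate, t_H = 1/H0 (assumes H0 ~ 70 km/s/Mpc; illustrative only).
KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0                      # km/s per Mpc (assumed value, not from the notes)
H0_per_second = H0 / KM_PER_MPC              # convert to 1/s
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR

print(f"Hubble time ~ {hubble_time_years:.2e} years")   # ~1.4e10 years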
The Cosmological Principle: This is the principle that says there is nothing special about
our point of view on the universe—the universe will look pretty much the same from
anywhere, according to this principle. This fits with the Hubble data to give us a
universal uniform expansion.
The Perfect Cosmological Principle: This principle (defended by Fred Hoyle and other
steady-state fans) holds that the universe will look pretty much the same from anywhere
and at any time. It’s reminiscent of the strong uniformitarianism defended by Charles
Lyell in geology, in the mid-nineteenth century. According to this view, all the
fundamental cosmological processes are going on somewhere in the universe today, so
we should be able to observe all the processes responsible for the present state of the
universe. (Note how well this has worked for developing and testing our understanding
of stars and their histories—see Ch. 1 in particular.)
The “Big Bang”: Hoyle mockingly labeled Lemaitre and Gamow’s “explosive” account
of the origins of the universe the “big bang” model. According to this view, all the
material of the universe emerged from a single massive “explosion” (which, unlike more
familiar explosions, also produced the space into which it is now expanding). Since then,
over time, galaxies and stars have formed, and the whole present range (and distribution)
of the elements has developed out of the raw materials produced in the explosion.
Steady State: Hoyle, Bondi, Gold defended the view that the universe continues steadily
expanding, with new matter coming into existence in the resulting space, filling it with
new galaxies over time. Their universe is eternal, with no beginning and no end—an
echo of Aristotle’s cosmology.
Evidence: As far as the evidence of expansion goes, both models are on an equal
footing—they are both deliberately designed to account for Hubble’s expansion results.
But since then, two critical bits of evidence favouring the big bang model have emerged.
The abundance of helium: Work by Hoyle and others on nucleosynthesis (the formation
of nuclei for heavier elements) in stars led to a reasonable account of the abundance of
most elements. But it could not account for the fact that helium accounts for about 25%
of the mass of the observable universe—the processes in stars will rapidly “burn up”
helium, converting it to heavier elements. Hoyle and Tayler showed (in a speculative
model of quasars) that giant stars could burn hydrogen to make helium and self-destruct
before the helium was burnt up; but the same process can account for the origin of helium
in the early stages of the big bang (as Hoyle, Fowler and Wagoner showed), and there is
no credible model of how such large stars could arise in the steady state model.
Cosmic background radiation: Gamow had shown in the 1940’s that the big bang would
have left behind a “glow” in the form of a low-temperature black-body spectrum
pervading all space. (The glow dates back to the point at which, as the universe cooled,
protons and electrons combined to produce neutral hydrogen gas, which is transparent.
At that point the light and matter of the universe were “decoupled”, becoming
thermodynamically independent. Since then, the light has cooled as the universe
expanded, while retaining a black-body spectrum—now at a temperature of just under 3
degrees Kelvin.) In 1963 Dicke and his colleagues began an attempt to find this
radiation. But it was Penzias and Wilson at Bell Labs who found the signal, though they
didn't know what they were looking at: there seemed to be a background "noise" they
couldn't eliminate from the signal their antenna was picking up. Dicke realized that he'd
missed his chance, and told Penzias and Wilson what they were hearing. Since then the
measurements have been refined greatly, and we now know (from the COBE
observations) that the background radiation has a black-body temperature of 2.728 K.
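As a small illustration (mine, not the notes'), Wien's displacement law gives the wavelength at which a 2.728 K black-body spectrum peaks; it lands in the millimetre microwave range, which is why this relic glow is the cosmic microwave background.

# Peak wavelength of a black-body spectrum via Wien's displacement law:
#   lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3   # m*K

T_cmb = 2.728       # kelvin, the COBE black-body temperature quoted in the notes
lambda_peak_mm = WIEN_B / T_cmb * 1000.0   # convert metres to millimetres

print(f"peak wavelength ~ {lambda_peak_mm:.2f} mm (microwave region)")  # ~1.06 mm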
Neutron Stars. First proposed by Baade and Zwicky as supernova remnants in 1934.
Confirmed in 1968 when a pulsar was discovered in the Crab Nebula; pulsars, in the
meantime, had been discovered by Hewish and Bell at Cambridge in 1967. Gold then
suggested the rotating neutron star model for pulsars, which was confirmed by the 1968
observations of the Crab Nebula (also confirming Baade and Zwicky's original account of
neutron stars as supernova remnants).
The surface of a Neutron star is a thin, rigid shell surrounding a neutron superfluid,
possibly with a solid core at the centre. Slippage between the shell and the fluid can
cause little irregularities in the pulsing of the radio signal from the star, as can cracks and
shifts in the surface as the star slows and becomes more spherical. The precision of the
measurements we can make on the radio pulses allows us to detect these phenomena &
compare them in detail with theoretical models.
Gravity at a neutron star’s surface is immensely strong, enough to produce deflections of
light rays by 10 to 20 degrees, enough to make a 1 millimeter tall “mountain” require
more energy to climb than the highest mountains on earth, enough to make the escape
velocity at the surface ½ the speed of light.
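To put rough numbers on this (my sketch, not the notes'; the mass of 1.4 solar masses and radius of 10 km are assumed textbook figures for a typical neutron star), the Newtonian formulas for surface gravity and escape velocity already give the flavour.

import math

# Surface gravity and escape velocity for a "typical" neutron star
# (assumed M = 1.4 solar masses, R = 10 km; illustrative figures, not from the notes).
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

M = 1.4 * M_SUN
R = 10e3             # metres

g_surface = G * M / R**2                  # surface gravitational acceleration
v_escape  = math.sqrt(2 * G * M / R)      # Newtonian escape velocity

print(f"surface gravity ~ {g_surface:.1e} m/s^2 (about {g_surface/9.81:.0e} times Earth's)")
print(f"escape velocity ~ {v_escape/C:.2f} c")  # of order half the speed of light (0.64 c here)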
Perhaps more surprisingly, tidal forces (the difference between the attraction felt by parts
closer to the star and that felt by parts further from the star) become very large near
neutron stars, and likewise during the approach to a small black hole (one of stellar mass,
for example); they would be sufficient to rip an incautious explorer apart…
Black Holes. These extreme objects were first theoretically suggested by Chandrasekhar
in his gravitational analyses of white dwarfs: beyond a certain mass (Chandrasekhar's
limit, about 1.4 solar masses), GTR takes over and gravitational collapse is
irresistible. A collapsar (an alternative term now disappearing from use) can be simply
modeled using the spherically symmetric, non-rotating model due to Schwarzschild, or
(more realistically) by the rotating Kerr model. These are very simple objects (as John
Wheeler put it, “Black holes have no hair”). Our theory tells us that the features of the
Kerr model completely characterize the physics of a rotating black hole—nothing matters
to such objects but their mass and their rotation. So the physics of these exotic objects is
probably better understood than the physics of almost anything else we know of, despite
how “out of the ordinary” they are.
A black hole results when the gravitational field around an object becomes so strong that
not even light can escape from it. The Schwarzschild radius is the "distance" (indirectly
arrived at, since the space-time metric actually changes at the surface) from the centre to
the event horizon, the surface at which light is "frozen"; the event horizon is defined as
the surface at which all light cones are tipped to point inside rather than outside the hole.
Anything falling into a black hole looks, from outside, to finish up frozen in
time on the surface; from the point of view of the faller, however, she crosses the event
horizon and proceeds, in a relatively short time, to the singularity, where she is either torn
apart tidally (in a non-rotating hole) or "spaghettified" (in a rotating, Kerr-model black
hole).
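A minimal sketch of the Schwarzschild radius formula, r_s = 2GM/c^2 (the masses chosen below are my own illustrative examples, not figures from the notes):

# Schwarzschild radius r_s = 2GM/c^2 for a few illustrative masses.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius_m(mass_kg):
    """Radius of the event horizon for a non-rotating (Schwarzschild) hole."""
    return 2 * G * mass_kg / C**2

for label, mass in [("1 solar mass", M_SUN),
                    ("10 solar masses", 10 * M_SUN),
                    ("1e6 solar masses", 1e6 * M_SUN)]:
    print(f"{label:>16}: r_s ~ {schwarzschild_radius_m(mass)/1e3:.3g} km")
# One solar mass gives about 3 km; the radius scales linearly with the mass.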
Dark Matter. We first found Neptune because the orbit of Uranus was out of whack, so
we inferred (rather than give up Newton’s theory of gravity) that there had to be another
mass affecting Uranus that we hadn’t accounted for. Similar studies of the dynamics of
galaxies and galaxy clusters show that the visible mass can’t account for the observed
motions. Stars orbiting the galactic core don’t slow with distance from the core, which
only makes sense if there is a “halo” of dark matter between the core and the stars,
speeding their orbits up; galaxy clusters would “come apart” if the visible matter were all
that was holding them together. Hypotheses about the nature of dark matter range from
"brown dwarfs" (near-stars too small to achieve fusion) and black holes to a range of
WIMPs (weakly interacting massive particles). Observations so far suggest there may be
some brown dwarfs out there, but not enough to do the whole job. Further cosmological
constraints on the early production of helium limit the amount of dark matter that can be
composed of “ordinary” matter, i.e. atoms, so we probably need something a little exotic
to fit the bill here.
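A toy illustration of the rotation-curve argument (mine, with assumed masses, not data from the notes): the circular speed is v(r) = sqrt(G*M_enclosed(r)/r), so a centrally concentrated visible mass gives speeds that fall off with radius, while an extended dark halo keeps them up.

import math

# Toy rotation curves. "Visible only" treats the luminous mass as sitting at the centre;
# "with halo" adds a dark halo whose enclosed mass grows linearly with radius (an
# isothermal-sphere-like toy). All numbers are illustrative assumptions.
G = 6.674e-11
M_SUN = 1.989e30
KPC = 3.086e19            # metres per kiloparsec

M_visible = 1e11 * M_SUN  # assumed luminous mass
halo_rate = 1e10 * M_SUN  # assumed halo mass added per kiloparsec of radius

def v_circ_kms(r_kpc, with_halo):
    m = M_visible + (halo_rate * r_kpc if with_halo else 0.0)
    return math.sqrt(G * m / (r_kpc * KPC)) / 1e3

for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: visible only {v_circ_kms(r, False):5.0f} km/s, "
          f"with halo {v_circ_kms(r, True):5.0f} km/s")
# The visible-only curve falls as 1/sqrt(r); the halo-dominated curve levels off
# toward a constant speed at large radii instead of falling away.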
MOND: The main alternative to dark matter as an account of these observations is MOND
(modified Newtonian dynamics). This view adjusts Newtonian gravity for extremely low
accelerations, and has made some successful predictions. But going this route would
require rejecting GTR, a step few physicists are ready to contemplate at this point.
Changing this very successful and elegant theory (when we have no overall take on the
alternative, but only an empirical “fix”) is a tough sell at this point.
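For concreteness, a sketch of the usual deep-MOND limit (my illustration; the acceleration scale a0 ~ 1.2e-10 m/s^2 is the commonly fitted value, and the galaxy mass below is assumed): when accelerations fall far below a0, the Newtonian acceleration a_N is replaced by sqrt(a_N*a0), which predicts a flat rotation speed set only by the visible mass.

# Deep-MOND sketch: setting v^2/r = sqrt(G*M*a0/r^2) gives a radius-independent
# asymptotic speed, v^4 = G*M*a0. Numbers are illustrative assumptions.
G = 6.674e-11
M_SUN = 1.989e30
A0 = 1.2e-10                 # m/s^2, MOND acceleration scale (commonly quoted fit)

M_baryonic = 1e11 * M_SUN    # assumed visible (baryonic) mass of a galaxy

v_flat = (G * M_baryonic * A0) ** 0.25   # m/s, independent of radius
print(f"predicted flat rotation speed ~ {v_flat/1e3:.0f} km/s")   # ~200 km/s here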
Primordial Ripples to Cosmic Structures: If the universe had started out completely
uniform and smooth, it would have expanded smoothly over time, with none of the
structures that characterize the universe we see today; early big bang models worked
with this simplifying assumption to set the basic parameters of expansion, but since then
work has progressed on identifying the small inhomogeneities that were the seeds of
large-scale structures in the early universe. A slight excess of mass in a region of the
early universe will magnify itself over time, as the gravitational pull of the mass draws in
still more mass in a slow, but self-amplifying deviation from the overall average density
of the universe.
Theoretical fluctuations: The early fluctuations that gave rise to present-day structures
have been modeled as “scale-free” fluctuations, i.e. fluctuations that are distributed
independently of the size of the fluctuation (inflationary models of the very early
universe seem to support this sort of model). Computer simulations are being used to test
these hypotheses to see if they predict a universe with the sort and range of structures we
now witness. Recent data suggests both a lack of small galaxies and an absence of
structure on the very largest scales, which may tell us a lot about the early details of the
universe (its size and topology in particular), and constrain our models of inflation. But
of course the joker in the deck is our ignorance of the nature of dark matter.
Computer models: Models of these early density variations and how they develop have
been explored, some based on cold dark matter (i.e. slow moving particles that don’t
interact with normal matter, or do so very rarely) and some on hot dark matter (fast
moving particles, such as slightly massive neutrinos). The key difference between these
models is the order in which structures on different scales arise. For cold dark matter,
structure arises from the “bottom up,” with small structures developing first, and then
combining into larger structures (stars, galaxies, clusters forming in that order). But for
hot dark matter, large-scale structures develop first, with small scale structures arising
within already-formed large-scale structures. At present the evidence seems to lean in
favour of the cold-dark matter models: large scale structures (large galactic clusters) seem
to be a relatively recent development in the history of the universe. And galaxies seem to
have formed uncomfortably early for HDM models.
Rees' Q: One important number we can calculate is the energy that would be required to
"liberate" the parts of large-scale objects from their gravitational bonds to each other,
measured as a fraction of their rest-mass energy. This quantity is stable over time despite
the increasing differences in density as early fluctuations are amplified, because the
kinetic energy of the parts increases as they draw closer together, keeping the masses
involved just as close to the energies needed to escape. It is a measure of the roughness
or unevenness of the mass/energy distribution in the universe, and it is roughly 10^-5.
(Rees calls the number Q.)
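A rough way to see where a number of this size comes from (my illustration, with an assumed typical cluster velocity, not a figure from the notes): the binding energy per unit rest-mass energy of a bound structure is of order (v/c)^2, and internal velocities in big clusters are around 1000 km/s.

# Binding energy per unit rest-mass energy ~ (v/c)^2 for a gravitationally bound system.
C = 2.998e8          # m/s
v_cluster = 1.0e6    # m/s, i.e. 1000 km/s (assumed, typical cluster velocity)

Q_estimate = (v_cluster / C) ** 2
print(f"Q ~ {Q_estimate:.1e}")   # ~1e-5, the order of magnitude quoted in the notes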
Fluctuations and Ripples: The earliest sign of these fluctuations is the unevenness of the
cosmic background radiation (there also we have a difference of 1 part in 10^5). COBE
was the first experiment to reveal this unevenness, but it gives a very crude picture of it
because it looked at the sky in 7° patches, blending together all the finer details of the
background radiation. The WMAP satellite observations have recently provided a much
more detailed map of the background radiation (paper at arXiv.org/abs/astro-ph/0302207).
Omega and Lambda (Ch. 8): Omega (Ω) measures the ratio between the actual mass of
the universe and the mass required to "close" it, i.e. to ensure gravity slows the expansion
to 0 in the long run (i.e. as time goes to infinity). The observation of visible matter contributes
about 0.02 to Ω; the dark matter we can infer from gravitational analysis of the dynamics
of galaxies and galactic clusters brings Ω up to about 0.2.
The behaviour of Ω over time makes this observation a very striking fact. Ω values that
differ from 1 rapidly deviate further and further from 1 as time goes on. For the value of
Ω to be within a factor of 5 or even 10 of 1 today requires that Ω be within 1 part in 10^15
of 1 at the 1 second mark (Gribbin, p. 348). And, as Rees assures us, we understand the
physics from that time on very well. So we can infer with some confidence that Ω was
indeed within one part in 10^15 of 1 at that time. Now, 1 is a special sort of number, and
the fact that the Ω of our universe was (at least once upon a time) that close to a precise
value of 1 suggests that there is something special about how our universe actually came
about that somehow guarantees an Ω value of 1. We'll see later how one new
cosmological theory, the inflationary universe, manages to do this.
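A rough check of that "1 part in 10^15" figure (my sketch, with assumed round numbers): in a radiation-dominated universe |Ω - 1| grows roughly in proportion to t, and once matter dominates it grows roughly as t^(2/3). Multiplying the two growth factors from 1 second to today lands in the 10^15-10^16 range.

# Growth of |Omega - 1| between t = 1 s and today. The epoch of matter-radiation
# equality (~50,000 years) and the present age (~14 billion years) are assumed round
# numbers, not figures from the notes.
SECONDS_PER_YEAR = 3.156e7

t_start = 1.0                              # the 1-second mark discussed above
t_equal = 5.0e4 * SECONDS_PER_YEAR         # assumed matter-radiation equality
t_today = 1.4e10 * SECONDS_PER_YEAR        # assumed present age

growth = (t_equal / t_start) * (t_today / t_equal) ** (2.0 / 3.0)
print(f"|Omega - 1| grows by a factor of ~{growth:.1e}")
# ~1e15-1e16: for Omega to be within a factor of a few of 1 now, it had to be within
# roughly one part in 10^15 of 1 at the 1 second mark.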
Details of the early processes (in the first minute) that produced helium and deuterium
(heavy hydrogen) imply that ordinary matter (atoms made of protons, neutrons, and
electrons) can't account for more than about 0.1 of Ω. Since dark matter observations
already give us Ω of about 0.2, this implies that most of the universe must be made of
something other than ordinary matter (if in fact Ω = 1, only about 1/10 of the mass of the
universe can be ordinary matter).
So cosmologists' inclination to believe that Ω = 1 is connected with three main points.
First, the fact that it must have been so close to 1 early on suggests there's something
special about the value 1 for Ω, creating the suspicion that some deep theoretical basis
guarantees that it will be exactly 1. Second, there are the relative contributions of mass
and (negative) gravitational potential energy to the universe as a whole: it turns out that if
Ω = 1, then the universe as a whole can be a "free lunch," in the sense that it has exactly
zero net energy (p. 143). Finally, inflationary cosmology solves some other problems for
the very early universe (reaching back into the speculative turf before the 1 second mark),
and also implies Ω must be almost exactly 1, because the rapid expansion of space in the
first microseconds would have "stretched" the space we can observe out to almost perfect
flatness.
Lambda (Λ): Einstein invoked Λ as the label for his "cosmological constant", a repulsive
force added to his theory of gravity to ensure the universe could be stable and static.
Einstein later called this addition "the biggest blunder of my life". But recently
cosmologists have found reasons to consider putting Λ back into their gravitational
equations. Vacuum energy provides a motivated physical basis for a non-zero Λ value.
A non-zero Λ would mean that the universe can be older than the Hubble time (without Λ,
the universe can only be younger than the Hubble time). Evidence for the acceleration of
the universe’s expansion has recently come along—some distant supernovae are dimmer
than they should be according to the usual way of calculating their distance from their
redshift. Acceleration accounts for this—they weren’t moving away so fast back then, so
their redshift, projected linearly, underestimates their distance. Dark energy is now a
widely researched cosmological issue, with a very large distant supernovae survey
underway. Einstein’s blunder may turn out to be a very lucky guess.
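A minimal sketch of the age calculation (mine, not from the notes; the parameter values H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7 are assumed illustrative figures): the age comes from integrating the Friedmann equation, and adding Λ pushes the age up toward, or past, the Hubble time.

import math

# Age of the universe: t0 = integral of da / (a * H(a)), with
# H(a)^2 = H0^2 * (Om/a^3 + OL). Parameter values are illustrative assumptions.
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7
H0 = 70.0 / KM_PER_MPC          # in 1/s

def age_years(Om, OL, steps=100000):
    total = 0.0
    for i in range(1, steps + 1):
        a = i / steps                         # scale factor from ~0 up to 1
        H = H0 * math.sqrt(Om / a**3 + OL)
        total += 1.0 / (a * H) * (1.0 / steps)
    return total / SECONDS_PER_YEAR

print(f"matter only (Om=1, OL=0):    ~{age_years(1.0, 0.0):.2e} yr")  # ~2/3 of the Hubble time
print(f"with Lambda (Om=0.3, OL=0.7): ~{age_years(0.3, 0.7):.2e} yr") # close to the Hubble time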
Many unanswered questions remain in cosmology. This brings us face to face with a
difficult choice in science. Unanswered questions can often be dealt with by simply
invoking new ideas. But when these new ideas are simply designed with answering these
questions in mind, we have a difficulty. They resolve our wonder at the unanswered
questions. But they aren’t testable—they simply posit attractive answers, without
providing enough traction for further investigation. How to deal with such issues has
always been a difficult question—Newton invoked “choice” to explain the apparent
oddity that all the planets' orbits are in the same plane. But today we understand that the
rotation of the solar nebula as it contracted to form the sun and planets naturally picks
out a plane in which the planets' orbits would lie. So we need to resist jumping too
quickly to the conclusion that “God did it.”
Rees focuses the issue on the puzzle of initial conditions. We can develop theories, and
apply the scientific study of processes and present conditions to unraveling the history of
the universe we now observe. But the explanations that we get in the end always look
like this: They begin with certain initial conditions, and show how, given those initial
conditions, the present observations can be explained. So, in effect, the explanations
push our questions back, but they don’t fully resolve them—we can still wonder why and
how those initial conditions came to be. At any point we may be tempted to just give up
and hang a “God did it” sign on the initial conditions (and the laws too, for that matter).
But this is bad policy, since this “answer” has all the faults of premature conclusion
jumping, and none of the virtues (we can often correct premature conclusions when later
evidence shows they were mistaken, but there's never evidence against the "God did it"
claim—even if she did do it, God could have done it indirectly, by means of bringing
about still earlier initial conditions…and the history of science is full of "science will
never be able to explain this" claims that turned out to be just plain wrong.) On the other hand,
when the initial conditions seem arbitrary and unconstrained, we may well suspect a good
explanation lies waiting down the road. But that's no reason to help ourselves now to an
"explanation" that makes no predictions and leads to no inferences or conclusions
that advance scientific understanding. Hypotheses non fingo is a better response, at least
until we have some hypotheses that can actually do some work.
The three main numbers: To characterize a universe based on a hot big bang beginning
from the 1 second mark, all we need are three numbers: Ω; the distribution of the total
mass/energy between the various kinds of stuff, that is, ordinary atoms (made almost
entirely of baryons, i.e. protons and neutrons), photons, and any other exotic sorts of
"dark matter" (massive neutrinos, or WIMPs, or what have you); and finally Q, the initial
roughness or unevenness in the distribution of mass/energy (about 10^-5 in our universe).
But there may be some more fundamental hypotheses that will explain these numbers.
From the point of view of such hypotheses, the three numbers are not just arbitrary
parameters that we can use as input to generate “possible universes.” They are numbers
that result from prior goings-on, which (to some degree or other) constrain them to have
the values that we find in the real universe (or at least a range of values compatible with
them). And there's an important point that lurks here. As Rees says, events between
10^-15 seconds and 10^-14 seconds are probably just as rich and complex as the events
between 10^-14 and 10^-13 seconds, even though the last interval is 10 times longer than
the first. The project we're
engaged in here is open-ended; tracing the history of the universe down to the first
second is a huge accomplishment, but pushing the account back to the first tenth of a
second, the first hundredth, and so on, takes an immense amount of work at each step.
As we reach 10^-14 seconds, we pass the range of energies that our most powerful particle
accelerators can achieve. But we still have a huge distance to go; before 10^-14 we have
10^-15, and so on indefinitely—this is not a way (and not intended to be a way) to capture
the first “moment”—in fact, moving back this way precludes even trying to describe a
first moment. What we do instead is push back our history, getting closer and closer to
“0” but never getting there. Cosmology is practiced on the open interval running between
0 and the indefinite future; when we study the past in cosmology, we reach back
indefinitely close to 0 but never get there—it’s always a question of what happened
before the time we’ve got a description of, never a question of a pure and absolute
“starting point.” And everything that happens, happens at times that aren’t 0.
Force unification. Maxwell’s equations unify electricity and magnetism. Late in his
career Einstein prematurely tried to unify these forces with gravity. But any hope of
success with that project will require a unification with the other two forces—the “weak”
and “strong” forces. The weak force is responsible for some modes of beta-decay, while
the strong force binds the nucleus (and, as we now believe, the quarks that make up
protons and neutrons). The unification of electromagnetism with the weak force is a
done deal, due to the (independent) work of Steven Weinberg and Abdus Salam.
Experiments have confirmed this unification, identifying the necessary particles and
showing that at sufficiently high energies (around 100 GeV—energies attainable in our
most powerful accelerators), electromagnetic forces become indistinguishable from the
weak force. But the energies at which the strong force is expected to "blend" with the
electroweak force are far higher, around 10^15 GeV. No accelerator we can build will
achieve such energies—only the big bang itself did, at about the t = 10^-36 second mark.
So, if we are to directly test physical theories of this grand unification, we must either
extract other consequences for events at the terribly low energies we can actually manage
to produce, or examine the "debris" from those drastic early moments for evidence of
what happened in that first 10^-36 seconds—in a sense, the entire universe is a relic of
those times, if we can only learn how to read what it has to tell us. Here cosmology may
turn out to be the final testing ground for ideas in physics that reach far beyond the
conditions now prevailing in the universe.
Fundamental questions: There are some very basic questions about the nature of the
universe that our present physics doesn’t answer; exploring how to answer them may
well lead us to an account of why our three numbers have the values they do. One of
these questions is, why is there so much matter in the universe, and so little anti-matter?
When matter is produced from energy (for example, when a gamma ray of just over
1 MeV passes near an atomic nucleus, which takes up some of its momentum, and
converts into matter), it produces an electron-positron pair, i.e. a pair including a particle
and its anti-particle.
And in general, physics regards particles and their anti-particles as behaving in almost
perfectly symmetrical ways. But the abundance of matter and the scarcity of anti-matter
in the universe clearly constitute a major asymmetry. Andrei Sakharov first posed this
puzzle, and outlined an answer to it: if the basic laws include a slight asymmetry
favouring the production of baryons (protons and neutrons, made of quarks) over
anti-baryons (made of anti-quarks), then there would initially be just slightly more baryons
than anti-baryons. When, in the early stages of the universe, these baryons and
anti-baryons destroyed each other, the result would be a lot of photons, and a few left-over
baryons that failed to find an anti-baryon to react with. A difference of 1 in a billion in
the rate of production of baryons vs. anti-baryons will do the job. But the implications
for physics today are subtle: if baryon number (the difference between the numbers of
baryons and anti-baryons) is not strictly conserved, then at extremely long intervals
protons should decay. So far no one has observed proton decay. But grand unification in
this mode requires it to be possible. Of course building such a small asymmetry into our
physics raises further questions—why should physics be asymmetrical in this way? As
usual, questioning an initial condition in cosmology leads us still further back into the past.
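The "1 in a billion" bookkeeping can be made concrete in a few lines (my illustration, with assumed round numbers): start with a billion anti-baryons and a billion-and-one baryons, let matched pairs annihilate into photons, and the handful of survivors ends up vastly outnumbered by photons, of order a billion photons per baryon.

# Toy bookkeeping for the baryon asymmetry: one extra baryon per billion survives
# the early annihilation. Numbers are purely illustrative.
N_antibaryons = 10**9
N_baryons = N_antibaryons + 1          # one extra baryon per billion

survivors = N_baryons - N_antibaryons  # baryons left with no partner to annihilate
photons = 2 * N_antibaryons            # very roughly two photons per annihilated pair

print(f"surviving baryons: {survivors}")
print(f"photons per surviving baryon: ~{photons / survivors:.1e}")  # of order 10^9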
Unifying gravity with the other forces leads us to still more extreme and speculative
physics. Unlike the other forces, our theory of gravity is not even a quantum theory. So
along the way to this unification, we will need to develop a quantum theory of gravity.
But a quantum theory of gravity will only be applicable to exotic phenomena occurring
under conditions we just can't bring about—conditions in the first 10^-43 seconds (the
Planck time), when all the mass of the universe was concentrated in a region the size of
an atom, or conditions hidden inside large black holes today. Superstring theories, in
which the basic constituents of the universe are tiny vibrating quantum strings (in a
10-dimensional space), are the most promising approach to this issue today—promising
principally because they include right at their foundations the basic materials for a
quantum theory of gravity. But the calculations are immensely difficult, so difficult that
as yet we can’t do any useful physics with these notions. Even the most basic problems,
such as why all but 4 of the dimensions have been “compactified” into tiny, unnoticeable
“loops” attached to each point of our familiar space-time, have yet to be resolved.
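The Planck time mentioned above is built out of the fundamental constants; a quick computation (standard constants, the arithmetic being the only point of the sketch):

import math

# Planck time, t_P = sqrt(hbar * G / c^5): the scale below which quantum gravity
# is expected to matter.
HBAR = 1.055e-34     # J*s
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s

t_planck = math.sqrt(HBAR * G / C**5)
print(f"Planck time ~ {t_planck:.1e} s")   # ~5e-44 s, i.e. the ~10^-43 s quoted above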
The End of Science: A few years back John Horgan wrote a book titled "The End of
Science." He argued that these recent developments in physics were leading to a project
very different from science as we have known it—the calculations are so difficult and
uncertain, and their empirical implications are so remote and unclear that, rather than this
new physics being a scientific endeavour aimed at understanding the world, it was instead
a sort of post-modern science, an idle, isolated and non-empirical “playing” with strange
mathematics. Insofar as science is really about understanding the world, Horgan argued,
it’s a done deal.
But Rees makes an important point here—the science of complex items, of the universe
and all that there is in it, carries on separately and independently. We may all be
reductionists in principle—that is, we may believe that the physical facts about atoms are
what make some atoms combine chemically in the ways they do, and that the chemical
facts about various molecules are what make organisms live and behave as they do, and
so on. But this doesn’t mean that chemistry is just physics, or that biology is just
chemistry. We have to observe how chemicals behave because we can’t provide or do
the calculations to apply purely physical models of molecules and their interactions. And
we have to observe living things to understand them, because we can’t provide or do the
calculations to apply a purely chemical model of life. Even if we never make any further,
testable progress towards a more complete and fundamental physics, there is plenty of
science left to be done.
Flatness and Horizons: We've already noticed that Ω is close to 1, and so must have
been almost precisely 1 early in the history of the universe. This raises the fine-tuning
question: why is it that our universe has an Ω of almost precisely 1? (This is a fine-tuning
question because if Ω were significantly different from 1, nothing like us could
ever have evolved.) We call this the flatness problem, since a universe with Ω = 1 has an
overall spatial geometry that is Euclidean. Another difficult puzzle is why the
cosmic background radiation is so smooth, i.e. why the temperature differences it reveals
are so small. The standard way to even out temperature differences is to let things
exchange energy and come to thermal equilibrium. But in the standard big-bang models,
the various regions in the early cosmic fireball never have enough time to exchange
energy and reach equilibrium. We have to put in the near-equality of these temperatures
by hand, as another very special initial condition. This is called the horizon problem,
since the difficulty is that various regions of the universe were “over the horizon” from
each other and couldn’t be seen or exchange energy during the early moments of the
bang. But an early, accelerated expansion of the universe can solve this problem, since it
implies that before the accelerated expansion things would have been close enough
together to exchange energy & “homogenize” after all. Better yet, such an early and
rapid expansion would "flatten" space-time out, producing an Ω value almost precisely 1.
So the so-called “inflationary universe” models actually kill two birds with one stone.
This inflation is supposedly the result of the attraction of gravity (which will always tend
to slow the expansion rather than accelerate it) being overwhelmed, under the extreme
conditions that held at around 10^-36 seconds, by a cosmic repulsive force. The early
vacuum, according to this theoretical treatment, would have contained a vast amount of
dark energy whose gravitational effect was negative, putting space itself under immense
tension, and accelerating the expansion of the early universe. A little later, the vacuum
decayed to its now normal state, and a more leisurely expansion continued; the energy
released by this phase change in the early universe (like the latent heat of condensation)
became part of the energy of the universe we now see. The details remain tentative, but
the idea is widely seen as very attractive because of how neatly it resolves both the
flatness and horizon problems. Finally (as we’ve noted above) so long as the universe is
flat, the net energy of everything taken altogether is 0. This makes the emergence of the
universe more or less ex nihilo (out of nothing) a little easier to swallow (though it
certainly doesn’t resolve all the philosophical puzzles we can raise about this).
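A small numerical gloss on the flattening claim (my sketch; the 60 e-folds of inflation used below is an assumed, commonly quoted figure, not from the notes): the deviation |Ω - 1| scales as 1/(aH)^2, and with H roughly constant during inflation while the scale factor grows by e^N, any initial deviation is suppressed by e^(-2N).

import math

# Flattening by inflation: an initial deviation of Omega from 1 is suppressed by
# e^(-2N) after N e-folds of accelerated expansion. N = 60 is an assumed figure.
N_efolds = 60
suppression = math.exp(-2 * N_efolds)

initial_deviation = 0.5                 # an arbitrary, sizeable pre-inflation |Omega - 1|
final_deviation = initial_deviation * suppression
print(f"suppression factor ~ {suppression:.1e}")       # ~1e-52
print(f"|Omega - 1| after inflation ~ {final_deviation:.1e}")  # fantastically close to 0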
Andrei Linde has proposed an account of inflation according to which it happens
spontaneously in various regions of a “multiverse,” giving rise to different universes,
some like our own and some very different. Most cosmologists today accept some form
of inflation as part of the early history of the universe, and there are real efforts to test
particular theories, turning on how well they do in predicting large-scale features (the
hierarchy of structures we observe in the universe). But the door remains open to
alternatives for now. It’s possible that the changes that have distinguished the various
forces in our universe are not built into the laws of nature, but instead reflect chance
features of how our universe “froze out” from the higher temperatures at which all the
forces are unified. Then the particular relations and differences between the forces that
we observe would be analogous to the direction in which ice-crystals become oriented as
water freezes. While we can predict that water, cooled sufficiently, will freeze, nothing
but chance variation determines the orientation of the ice-crystals. This “spontaneous
symmetry breaking,” in which a body of water that had no such preferred direction
becomes a piece of ice that does, is an important feature of most contemporary
cosmological theories. Combining Linde’s multiverse (the term is Rees’) with this
picture of the origins of our particular physics suggests that the multiverse will not only
include universes with very different initial numbers (recall our 3 numbers above) but
also universes with very different physics. This provides Rees with his alternative to the
"design" explanations of fine-tuning.
A little reflection on how far we’ve come, and where we may be going. In the nineteenth
century the periodic table, based on recurring patterns in the chemical behaviour of
various elements, organized our understanding of the elements for the first time. Today,
of course, we understand the table itself in a very fundamental way: it reflects the basic
structure of the various kinds of atoms, themselves built up from protons, neutrons and
electrons. And the variety of the periodic table emerged, as we now know, by fusion in
the hearts of stars, from the basic materials of hydrogen and helium produced in the first
second. Tracing the variety of things around us back to their origins and down to their
fundamental components has been a fundamental and very successful scientific project.
Because the pursuit of more and more fundamental principles of physics has led us to
theories that make their main predictions only under extreme conditions that we can no
longer witness directly, cosmology and the universe itself have become the laboratories
in which empirical guidance for these new theories is being sought. One important
avenue for such investigations lies in a very simple question: are there any strange relics
or “fossils” of the exotic early conditions in the universe these theories attempt to model
that we can hope to find and investigate today?
Miniholes: One such relic would be black holes the size of atoms (and with the mass of
mountains); given Hawking’s prediction of black-hole radiation (a major theoretical
advance linking black holes with thermodynamics), such holes would radiate (by trapping
just one member of some of the virtual particle pairs always present for brief moments in
the vacuum) immense amounts of energy, rising continuously until they disappear in a
flash (and a bang).
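To put a rough scale on such mini-holes (my sketch; the mass of ~10^12 kg, often described as "mountain-scale", is an assumed illustrative figure): Hawking's temperature formula, T = hbar*c^3/(8*pi*G*M*k_B), says that smaller holes are hotter, which is why a shrinking mini-hole radiates ever more fiercely.

import math

# Hawking temperature and horizon size for an assumed mountain-mass mini-hole.
HBAR = 1.055e-34
G = 6.674e-11
C = 2.998e8
K_B = 1.381e-23

M = 1e12   # kg (assumed "mountain-scale" mass)

T_hawking = HBAR * C**3 / (8 * math.pi * G * M * K_B)
r_s = 2 * G * M / C**2
print(f"horizon radius ~ {r_s:.1e} m")          # ~1.5e-15 m for this mass
print(f"Hawking temperature ~ {T_hawking:.1e} K")  # extremely hot: ~1e11 K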
Magnetic Monopoles: Unlike electrical fields, with the separate particles carrying
positive and negative charges, magnetic fields don’t have separate “charge carriers” for
north and south poles. In contemporary particle theory, monopoles would form in the
early universe as "knots" frozen into space, within which the initial conditions of the
universe (around 10^-36 seconds) would still hold. Since the presence of many magnetic
monopoles would short out the galactic magnetic field, and the presence of a still smaller
number would show up in the form of “hot” neutron stars, powered by mass-conversion
of protons into energy inside monopoles, we know there aren’t many of these—but they
may still exist. Cosmic strings (these are not the ‘strings’ of string theory) are similar
“topological defects” in the structure of space. They would arise at the transition
between inflation and normal expansion of the universe, and form either loops or
unending lines crossing the entire universe. Strings can be immensely massive objects;
they have been proposed as the sources of some of the gravity binding galaxy clusters
together, but their high velocity may rule that out—though they still might have some
role as part of the dark matter, contributing to the distribution of matter. Finally, gravity
waves from massive strings may be detectable through their effects (delaying & speeding
up) on the pulses we receive from spinning neutron stars.
The future of the universe. In the next 5 billion years, the Sun will exhaust its hydrogen
fuel and inflate into a red giant as it shifts to helium and heavier elements; at about the
same time the Andromeda galaxy will merge with the Milky Way, producing a nondescript, massive "blob" of stars. Depending on the value of Ω, in the next 100 billion
years the universe may recollapse, or still be expanding, as the hydrogen and lighter
elements that power stars are steadily consumed. Since only closed universes are
required to satisfy Mach’s principle (that rotating reference frames really must rotate with
respect to the overall mass of the universe), many favour a closed universe over an open
one. But for now our hard evidence only supports a value of Ω of about 0.2, not high
enough for recollapse—and the inflationary models imply Ω is almost exactly 1, which
may give a recollapse (if it's just barely over 1) but only after an immensely long time. If
there is a crunch, it will be a messy business—galaxies merging, stars overheating and
“puffing up” into huge clouds of gas, and everything falling down the rabbit hole of a
singularity in the end. It will be a little like the big bang in reverse, but only a little: the
non-uniformity of the universe, developed over billions of years, will maintain a
significant difference between the “forward” and “backward” time directions. If the
universe goes on expanding forever, gravity will continue to pull galaxies together into
clusters, clusters into superclusters, and so on; fuel for fusion will gradually run out, as
will other sources of energy, and the universe will become a dark sea with widely dispersed but immense islands of stars surrounding massive black holes. In the very long
run, even the protons and neutrons that matter is made of will probably decay; and as the
universe grows darker and colder, the black holes will slowly radiate their mass away.
Nothing would remain but radiation, and electrons and positrons. Life (or at least
information processing) can still go on for an immensely long time, well beyond the
conditions where anything like us could survive (as Freeman Dyson argues). But how
much processing can go on may be limited, even if it can go on indefinitely: the process
gets slower and slower as the universe expands and cools towards absolute 0.
Conversely, Rees points out that if, during a big crunch, information processing can
actually accelerate indefinitely, a collapsing universe might allow as much time for
thought as the ever-expanding universe does. Barrow and Tipler have taken this notion
to heart and explored its implications and potential in their optimistic eschatology; in a
sufficiently asymmetrical collapse (depending on the details of quantum gravity!) an
immense, and even unlimited, amount of information processing may be possible even in
a universe with a finite history. For those who want to witness the distant future, a close
orbit around a massive black hole seems to be the ticket—deep in such a gravitational
well, time would pass very slowly indeed, while the universe outside blue-shifted (and
sped up) immensely.
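For a sense of how strong that effect is, here is a minimal sketch (mine, not the notes') of the gravitational time dilation for a clock held at rest at radius r outside a Schwarzschild hole, d(tau)/dt = sqrt(1 - r_s/r); an orbiting clock runs slower still, but the static case shows the trend, and the radii chosen are arbitrary illustrative values.

import math

# Rate of a static clock at radius r (in units of the Schwarzschild radius r_s),
# relative to a clock far from the hole.
def static_time_rate(r_over_rs):
    """Proper time elapsed per unit of far-away coordinate time."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r_over_rs in (10.0, 2.0, 1.1, 1.01):
    rate = static_time_rate(r_over_rs)
    print(f"r = {r_over_rs:5.2f} r_s: local clock runs at {rate:.3f} of the distant rate")
# The rate goes to zero at the horizon: deep in the well, a short stretch of local time
# corresponds to a very long stretch of time outside.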
Eggs in one basket? Some have argued that the earth is too frail and dangerous a place to
keep all of us safely. Space travel may be defended as the key to an insurance policy—
just in case the earth ends up destroyed in a cosmic accident (or just in case we ourselves
manage to render it uninhabitable through war or unhappy experiments with dangerous
organisms). But, speaking of unhappy experiments, what about the risks of fundamental
physics? Could the present “vacuum” state of the universe really be only “meta-stable”?
Could it be supercooled, and ready to shift to a more stable state with just a gentle (and
presumably unintentional) prodding? If so, and if our accelerators are up to the job, then
a natural event should already have done the work: natural events involving cosmic ray
collisions still far exceed the energies produced by our most powerful accelerators.
The Nature of Time: As we’ve seen, the “ticking” of clocks, even the most regular and
well-behaved clocks, is something that varies depending on the state of the clocks’
motions relative to us, and how strong a gravitational field they are in. But the nature of
time is more puzzling than even these oddities suggest. Even the direction of time seems
more than a little arbitrary (almost all the laws of physics look exactly the same forward
and backwards in time—and reversing charge and parity (mirror reflection) makes even the slight
difference disappear). And if the direction of time seems arbitrary, what shall we say
about causation? Doesn’t the future constrain the present just as much as the past does?
If not, why not? Stranger still, if that’s possible, time itself may be quantized: if we
measure short enough intervals, we need extremely high frequencies. But high
frequencies demand high energies—and beyond a certain energy a black hole will form,
which gives us a time period (about 10^-43 seconds) below which time itself
must behave oddly, if at all. Time and space as we understand them apply well enough to
the everyday world, and even to the world of physics as we see it. But what happens to
these concepts (and what they refer to) when conditions are really and truly extreme is
difficult to say…So we can sensibly ask, is there a large-scale difference between the past
and the future in the universe? (Or, as a friend of mine inquires in a very interesting paper,
how much does the fact that we remember the past and not the future really tell us about
time and physics? The answer, it seems, is “not much!”) Basic, contingent facts about
the universe—such as that the universe is expanding—may be responsible for the fact
that time really does have a “direction” for us. Gravity, too, plays a role here, in
changing a nearly structureless early universe into the increasingly structured universe of
today. It may even be that a closed universe that collapses only after an extremely long
period of time looks the same to those watching what we call the collapse as it does to us,
i.e. that in effect the arrow of time gets reversed as the collapse proceeds. But right now
it looks as though time will continue, locally, to “point” the same way it does for us.
Faster-than-light travel or signaling raises the puzzles of causal loops and the paradoxes
of time travel. But what's possible is not much of a worry, so long as things actually
work out consistently. As Gödel showed long ago, there are solutions to the GTR
equations that would allow us, by traveling into the future in the right way, to wind up in
the past. And cosmic strings could produce similar effects in their neighbourhoods. Do
we need time police to ensure that no contradictions arise? Or is the logical impossibility
of a contradiction enough to ensure that even in a world with time travel, no paradoxes
will actually occur? Poincaré's recurrence theorem tells us that a bounded system will,
given enough time, return arbitrarily close to any state it has been in.
Physical Constants: At this point our physics includes numbers, such as Planck’s
constant, the charge on the electron, the relative strengths of the various forces, and so
on, that are very important to how things work for us, but that seem arbitrary, and can
only be determined by measurement. Theoretical work suggests that these numbers may
have become fixed in the very early stages of the big bang. But are they really constant?
So far no measurements suggest changes have occurred—including very precise
measurements of pulsar orbits that show gravity is not changing at any rate we can
measure (well below 1 part in 10^11 per year). Atoms and their energy states behaved
exactly as they do now in the far distant past, as spectra from remote galaxies show. And
an ancient natural nuclear reactor discovered in Africa confirms that the strong force
worked 2 billion years ago just as it does today—a resonance effect sensitive to changes
of a few parts per billion operated then just as it does now. Dirac’s suggestion of
changing gravity was motivated by a coincidence linking the ratio of the size of the
universe to the size of a proton with the ratio of the strength of gravity to the strength of
the electrical force. To make the approximate equality of these a law would require the
strength of gravity to change over time. But Dicke pointed out that things like us could
only exist at a time when these ratios had become approximately equal. So Dicke
proposed an “observational selection” effect to explain the equality that Dirac had noticed
& proposed as a law. This is the first bit of “anthropic” reasoning we’ve considered
here—and it’s the modest, weak form of the principle that does the work this time: that
is, the universe, given that we’re observing it, must be such that (and must be at a time in
its history such that) we can exist. Brandon Carter distinguishes this “weak” anthropic
principle from stronger forms, which treat the development of intelligent observers as
somehow required, rather than working strictly with the effects of observational
selection. Moreover, an interesting “Bayesian” argument in favour of the big-bang
theory results from these considerations—in a steady-state universe, the “Hubble time”
need not bear any relation to the normal lifespan of a star, but in a big-bang universe the
two should be of the same order.
Fine tuning. We've already raised the issue of fine-tuning with respect to the value of Ω,
and the resonance in carbon nuclei that makes nucleosynthesis in stars work out as our
existence requires. But in fact there are a lot of other considerations that reveal similar
“brute matters of fact” that seem to be essential for our existence. Continuing in the vein
first explored by Henderson, who commented on the special (and unusual) properties of
water and other compounds essential to life, modern cosmologist Brandon Carter
assembled a large paper discussing various apparently unrelated and (by our best
understanding) brute facts that are each essential parts of the conditions that make life
like our own possible. These properties include the strength of the strong force, the low
mass of electrons, the strength of the weak but important interaction between neutrinos
and matter, which both shapes the initial level of helium and makes it possible for
supernovae to distribute their synthesized atomic nuclei throughout the galaxy, the small
(1 part in 10^5) initial "ripples" in the early big bang, the weakness of gravity compared to
the other forces, the dimensionality of space (which makes gravity work as an inverse
square force, which in turn has uniquely stabilizing effects on orbits), and more. But
Rees proposes that accepting an ensemble of universes, each realizing different detailed
conditions with regard to these quantities, gives us a background against which our
finding a universe with these qualities is unsurprising—after all, we can only exist in
such a universe, and the existence of some such universe amongst all the multiverse is
effectively guaranteed.
Sciama’s reductio: Suppose you walk into a room full of one million cards, and find that
they are actually numbered in order. Could this arrangement be reasonably attributed to
chance ordering of the cards? Sciama (and Rees) say this ordering is an objective one in
a very clear sense: the information required to specify it is very simple—starting at 0,
when you reach card n, determine the number of the next card along by adding one. But
does this show that we should reject a chance ordering? Hume had some doubts: without
some actual knowledge of cards and how they come to be ordered (and that they come to
be ordered deliberately) it’s very hard to be confident that some hypothesis about this
order is justified—even the very vague one that says, well, something must have acted to
bring this special ordering into being. Further, why couldn’t an observer selection effect
do the job? (If you couldn’t get into the room unless the cards were ordered that way,
then it’s obvious once you’re in that they must be ordered that way.) And, as Rees
recognizes, the pattern that characterizes our universe is in fact a lot harder to specify
independently of our own determination of various special conditions that are required
for our existence. John Leslie’s execution by firing squad example also makes a dubious
link between a familiar example where many reasonable hypotheses come to mind and an
example—the universe as a whole—where it’s hard to even see what sorts of hypotheses
would be worth entertaining, in any objective, testable sense.
Strong Anthropic “principles” or reasoning: The universe requires observers in order to
really exist. As part of the quantum mechanical, observer-created reality view, this goes
some distance towards explaining fine tuning. But only a little. After all, there is another
philosophical question behind this one that is left entirely unanswered here: why is there
something (why does anything exist) rather than nothing? (This question was a big
favourite with Heidegger.) John Wheeler has long been fond of this sort of quantum circle of existence; Roger Penrose feels similarly that these two issues are connected.
But the many-worlds approach is a little less teleological in tone, and so more palatable to
the empirically minded. If this view is correct, then observer selection effects explain
why we find ourselves in this sort of universe, while the multiple universes all existing
explains why there is such a universe at all. There are many particular versions of
multiverse views, some involving branching (e.g. at quantum measurement points), some
involving independent universes existing separately, and some involving a kind of growth
and spread, with new universes being produced (under various, extreme conditions) from
old. Smolin even proposes a selection effect at this level, with more and more universes
of the sort that produce more universes developing as the multiverse grows. If a kind of
heredity ensures that universe-progeny resemble their parents, then the multiverse can
come to be dominated by universes with fertile physics, i.e. the sort of physics that tends
to produce a lot of “baby” universes. Since black holes are where new universes are born
on his hypothesis, if our universe is a typical result of a process of selection like that, it
should be extremely fine-tuned for the production of black holes. But whether it is so
fine-tuned remains an open question.
The alternative: Heinz Pagels proposes keeping our minds open and carrying on with
finding real explanations of these fine-tuning “coincidences.” And this is clearly
constructive advice—cutting off the search too soon is a bad idea, and many
“unexplainables” have come to be explained in the course of science. But the weak form
of anthropic reasoning clearly does do real work. The heart of the debate is over the
strong form. Maybe a final theory will show that all these “brute facts” (esp. the values
of various constants, strengths of forces, etc.) really have an explanation in basic physical
principles; Steven Weinberg hopes for such a theory. But it’s not clear that the final
theory has to do this. Some things may really be due to chance (recall our talk of phase
transitions and “spontaneous symmetry breaking” above). Our vision of this universe,
and the hints it may even provide us regarding other universes, make a fine view of
things already, as Rees’ final quote from Chandrasekhar suggests.
Here’s a final, recent report from New Scientist, just to hint at how much may be
observable when we learn how to identify what we’re looking at and diagnose the signals
we receive…:
X-ray effect reveals black hole's event horizon
17:10 01 April 03
Will Knight
A black hole's event horizon - beyond which not even light can escape the extreme pull
of gravity - creates a unique X-ray signature as matter is sucked in, new research
demonstrates. Event horizons are characteristic of black holes but, by definition, cannot
be seen. So Christine Done and Marek Gierlinski at the University of Durham, UK,
compared the X-ray emissions from suspected black holes with emissions from known
neutron stars.
Neutron stars are only slightly less dense than black holes and, in both cases, the
immense gravitational force they exert is transformed into X-ray radiation as material
spirals inwards and is torn apart.
According to theoretical models, material should crash into the solid surface of a neutron
star, releasing any remaining energy back out. But this energy will be swallowed
completely by a black hole, producing a characteristic difference in the X-ray spectra
generated by each object. And the new survey has now shown that the difference does
indeed exist.
Bewildering variety
"The idea is simple in theory and has been known for a long time," says Done. "But until
now it has been hard to put into practice, because the X-ray emission even from a single
type of object can show a bewildering variety of properties."
To simplify the hundreds of X-ray spectra they examined, each was reduced to just two
characteristic numbers describing the shape of the X-ray emission. This enabled them to
identify the two classes of spectra, one for black holes and one for neutron stars. The
shape of the spectra also matches theoretical predictions, says Done.
Black holes and neutron stars are both created when a star's inner fuel runs out and it
collapses in upon itself. Stars with a mass between 1.4 and 3.0 times that of our Sun
become neutron stars, while more massive stars become black holes.
Until now, astronomers could distinguish the two objects when their X-ray emissions
were very faint, due to very little material falling into them. But these findings give a new
way to separate black holes from neutron stars when they are bright, and much easier to
see.
Done and Gierlinski's paper has been accepted for publication in the Monthly Notices of
the Royal Astronomical Society.