Pikeville HS Academic Team's
Big Science Packet
Astronomy
Astronomy, the most ancient science, began with the study of the Sun, the Moon, and the visible planets.
The modern astronomer is still centrally concerned with recording position, brightness, motion, and other
directly observable features of celestial objects and with predicting their motion according to the laws of
celestial mechanics. Astrophysics, a 19th- and 20th-century outgrowth of classical astronomy, uses quantum
mechanics, relativity theory, and molecular, atomic, nuclear, and elementary-particle physics to explain
observed celestial phenomena as the logical result of predictable physical processes. The astrophysicist
seeks to characterize the constituents of the universe in terms of temperatures, pressures, densities, and
chemical compositions. Although the term astronomer is still used, virtually all astronomers have been
trained in astrophysics. The broad aim of modern astronomy is to develop encompassing theories of the
origin, evolution, and possible destiny of the universe as a whole, a field of endeavor that is known as
cosmology.
NEW WINDOWS ON THE UNIVERSE
In the 20th century advances in astronomy have been so rapid that the second half of the century can be
considered a golden age. Traditional optical astronomy has been revolutionized by the development of new
techniques of faint object detection, including more sensitive photographic emulsions and a plethora of
electronic imaging devices. Using standard telescopes, the optical astronomer can now see fainter and more
distant objects than ever before. In addition the astronomer is no longer limited to observing the visible light
from celestial bodies. New instruments now allow the study of the heavens in entirely new regions of the
electromagnetic spectrum.
Radio Astronomy
In 1931, Karl G. Jansky of the Bell Telephone Laboratories discovered extraterrestrial radiation at radio
wavelengths and launched the field of radio astronomy. During the 1930s, Grote Reber, an American radio
engineer, further investigated celestial radio radiation and single-handedly brought radio astronomy to the
attention of professional astronomers.
As a result of theoretical investigations by astronomers in the Netherlands during World War II, an
observable radio line, emitted by neutral hydrogen atoms in space, was predicted at a wavelength of 21 cm.
Detection of this line caused radio astronomy to advance rapidly after the war. Today, radio telescopes and
radio interferometer systems worldwide study radio emission from the stars, the planets, the interstellar
medium in the Galaxy, and extragalactic sources.
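For reference, the frequency of the 21-cm hydrogen line follows directly from the relation frequency = speed of light / wavelength. The short Python check below is only a rough sketch using rounded values; the measured line actually lies at 21.106 cm, or about 1,420 MHz.

c = 2.998e8                 # speed of light, in meters per second
wavelength = 0.21           # the 21-cm hydrogen line, in meters
frequency = c / wavelength  # roughly 1.4e9 Hz, i.e. about 1.4 GHz
print(frequency)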
Achievements in radio astronomy include the mapping of galactic structure and the discovery of quasars,
pulsars, and a large number of complex organic molecules in interstellar space (see interstellar matter).
Radar astronomy has also been used within the solar system to determine, for example, the rotational
periods of Venus and Mercury.
Infrared Astronomy
Although scientists have known since the time of William Herschel in the late 18th century that infrared
radiation from celestial objects can be detected, it was not until the late 1950s and early 1960s that infrared
astronomy became the subject of intensive research. Sensitive detectors were developed that allowed
astronomers to explore the infrared region of the spectrum. Infrared astronomy has been helpful in studying
the very young or evolved stars that are commonly associated with dense clouds of dust observed in
interstellar space.
Ultraviolet, X-Ray, and Gamma-Ray Astronomy
In 1957 the USSR launched the first satellite, thus beginning the space age. Few other disciplines have
benefited from artificial satellites to the extent that astronomy has. (See space exploration.) For the
astronomer, the atmosphere presents a murky or opaque barrier through which observations of the far
infrared, ultraviolet, X-ray, and gamma-ray spectral regions are difficult or impossible. Satellites and, to a
limited extent, high-altitude balloons and rockets have become platforms from which to observe these
spectral regions. Since 1962 the United States and other nations have launched a wide range of orbiting
observatories that are devoted to observing the ultraviolet, infrared, and X-ray regions (see OAO; OSO; High
Energy Astronomical Observatory; Uhuru). These studies have resulted in better understanding of very hot
stars and have produced evidence of the existence of black holes. The impact of extraterrestrial astronomy
in all parts of the spectrum is being extended in the 1990s by a continuing program of space astronomy
supported by the Space Shuttle (see gamma-ray astronomy; Space Telescope; ultraviolet astronomy; X-ray
astronomy).
THE SOLAR SYSTEM
The achievements of astronomy and astrophysics are evident in the rapidly growing knowledge of the
extraterrestrial environment, from the solar system to the most remote galaxies. The solar system, as it is
known today, comprises the Sun and nine planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto) along with numerous asteroids, comets, and smaller objects.
Planets, Asteroids, and Comets
Except for Mercury and Venus, each planet has from 1 to more than 20 natural satellites, including Pluto,
whose moon was not discovered until 1978. The four gas-giant planets Jupiter, Saturn, Uranus, and
Neptune all have ring systems made up of vast swarms of small icy and rocky fragments circling those
planets in the plane of their equators. By far the most spectacular of these ring systems is that of Saturn.
The others are less developed.
Between the orbits of Mars and Jupiter lies a belt containing thousands of minor planets, or asteroids. The
orbits of most of the asteroids restrict them to the region between Mars and Jupiter, but exceptions of
various kinds exist, including orbits that cross the Earth's orbit or lie still closer to the Sun.
Comets can attain distances 150,000 times greater than that from the Earth to the Sun. In 1950 the
astronomer Jan Oort speculated that the solar system is surrounded at a vast distance by a cloud of comets,
and the astronomer Gerard Kuiper later suggested a nearer cloud, as well. (Possible members of this latter
cloud began to be sighted in the early 1990s.) The orbits of only a few comets are disturbed sufficiently to
bring them near the Earth. Halley's comet, known since 240 BC, swings around the Sun once every 76 years
and was visited by probes in 1985-86.
Flyby space probes involving most of the planets, and surface landings on the Moon, Venus, and Mars,
have transformed planetary astronomy. No longer must observations be made at great distances. On-site
measurement of numerous physical properties is now possible. In studying the planets, the astronomer must
also enlist the aid of the chemist, the geologist, and the meteorologist. The Venera and Magellan missions to
Venus and the Viking landers on Mars indicate that life as it is known on Earth does not exist on either
planet, nor is life possible on the Moon. In spite of such great increases in knowledge, however, the probes
and landings have raised as many questions as they have answered. The origin of the solar system, for
example, remains unknown.
The Sun
The Sun is a star with a surface temperature of 5,800 K and an interior temperature of about 15,000,000 K.
Because the Sun is the nearest star and is easily observed, its chemical composition and surface activity
have been intensely investigated. Among the surface features of the Sun are sunspots, prominences, and
flares. It is now known that the maximum number of sunspots occurs approximately every 11 years, that
their temperature is approximately 4,300 K, and that they are related to solar magnetic activity in a cycle
taking about 22 years to complete. Studies of historical records have also shown long-scale variations in
sunspot numbers.
Predictions based on theory indicate that energy-generating processes deep within the Sun and other stars
should produce a certain number of chargeless, weightless particles called neutrinos. Efforts to detect solar
neutrinos have thus far indicated a far lower rate of neutrino production than current theory seems to
require, and revisions of theory may in time prove necessary. On the other hand, physicists and
cosmologists are equally interested in the concept that at least some forms of neutrino have mass and
undergo transformations inside the Sun.
THE STARS
The accumulation of precise data on some of the nearer stars early in the 20th century enabled Ejnar
Hertzsprung and Henry Norris Russell, working independently, to plot a graph of brightness and color, two
basic stellar properties. When they plotted intrinsic stellar brightness on one axis and stellar color
(equivalent to surface temperature) on the other axis, Hertzsprung and Russell found that, instead of being
scattered over the graph, the stars fell into distinct regions: a heavily populated, diagonal band, known as
the main sequence, that varies from bright, hot, blue stars to faint, cool, red ones; a horizontal band
containing bright, cool, red stars (the giants); and a sparsely populated, horizontal band containing very
luminous stars of all colors (the supergiants). In honor of these scientists, graphs of the type they plotted are
called Hertzsprung-Russell diagrams, or simply H-R diagrams.
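As an illustration of how such a diagram is constructed, the short Python sketch below plots a purely synthetic set of points (not data from any real catalog, and assuming the numpy and matplotlib libraries are available), with temperature on one axis and luminosity on the other; reversing the temperature axis follows the usual H-R convention of placing hot, blue stars on the left.

import numpy as np
import matplotlib.pyplot as plt

# Purely illustrative stand-in for a main-sequence trend: hotter stars are
# simply made brighter here; this is not a physical model of real stars.
temperatures = np.linspace(3000, 30000, 200)     # surface temperature in K
luminosities = (temperatures / 5800.0) ** 4      # brightness relative to the Sun

fig, ax = plt.subplots()
ax.scatter(temperatures, luminosities, s=4)
ax.set_xlabel("Surface temperature (K)")
ax.set_ylabel("Luminosity (Sun = 1)")
ax.set_yscale("log")        # stellar luminosities span many powers of ten
ax.invert_xaxis()           # H-R convention: hot, blue stars on the left
plt.show()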
The features found on the H-R diagram are a key to modern astrophysics because they are basic to an understanding of stellar evolution. A star's initial mass determines its exact position on the main sequence. The star gradually changes, however, thus changing its position on the H-R diagram. As the hydrogen that fuels the star's fusion reactions becomes depleted, the outer layers of the star expand, and it enters the giant phase. Eventually such stars become unstable and begin to lose mass, some smoothly, others catastrophically, depending on their masses. Most stars pulsate smoothly; some may brighten rapidly in
older age, blowing material off into space to form a planetary nebula. A few giant, unstable stars explode as
supernovas. In any case, evolution proceeds to the stellar graveyard. The most common result of evolution
is the white dwarf; large stars end up as neutron stars (pulsars) and, possibly, black holes.
THE GALAXIES
The solar system is located in the outer regions of our Galaxy. From the Earth, the visible part of the Galaxy
is seen in the night sky as the Milky Way. This visible part is actually a flattened disk about 100,000 light-years wide and with a central bulge. Around it lies a spherical halo of star clusters (see cluster, star) about
200,000 light-years wide, which is surrounded in turn by a much larger corona of dust and gas. The entire
system contains matter in quantities equivalent to more than 1,000 billion solar masses. The Sun takes
about 200 million years to orbit the galactic center, which lies about 30,000 light-years from the Sun in the
direction of the constellation Sagittarius. Prior to the pioneering work of Harlow Shapley in 1917, the Sun
was thought to lie near the galactic center. Shapley also demonstrated the galactic halo of star clusters.
The structure of the Galaxy has been mapped by now, using the distances of extremely luminous stars as
well as radio observations of the 21-cm line of the hydrogen spectrum. It has been shown to take the form of
a typical spiral galaxy (see extragalactic systems). Three basic galactic types exist: spirals, such as the
Milky Way and the Andromeda galaxy; irregulars, such as the Magellanic Clouds; and ellipticals. The last
exist at both extremes of galactic size; a dwarf elliptical may contain only a few million stars, whereas a giant
elliptical may contain trillions of stars. The fact that extragalactic systems are vast, remote collections of
stars was not understood until the mid-1920s, when Edwin P. Hubble identified a variable star in the Andromeda
galaxy and determined its distance.
The Milky Way, Andromeda galaxy, and Magellanic Clouds are all members of a gravitationally bound
cluster of galaxies known as the Local Group, which contains some 20 members in all. Other clusters, such
as one in the direction of the constellation Virgo, may contain more than 1,000 galaxies, and much larger
superclusters also exist; the Local Group may itself be part of a local supercluster.
COSMOLOGY
Beginning in 1912, Vesto M. Slipher discovered that galaxies beyond the Local Group are receding from the Earth at high velocities. With a knowledge of the distances of these objects, Hubble was able to demonstrate a relationship between a galaxy's red shift (a Doppler effect observed in the galaxy's spectrum,
indicating its recessional velocity) and its distance: the farther away a galaxy is, the greater is its red shift
(see Hubble's constant). This is considered good evidence for the big bang theory mentioned below.
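In its simplest form the relationship is a direct proportionality, velocity = constant x distance. The sketch below is a minimal illustration; the value used for the constant is an assumed round figure, not Hubble's own determination or a modern measurement.

H0 = 70.0    # assumed Hubble constant, km/s per megaparsec, for illustration only

def recession_velocity(distance_mpc):
    """Hubble's law: recession velocity grows linearly with distance."""
    return H0 * distance_mpc   # km/s

# A galaxy assumed to lie 100 megaparsecs away would recede at about 7,000 km/s.
print(recession_velocity(100.0))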
Since Hubble's computations, a continuous effort has been made to extend the boundaries of the
observable universe to more remote, higher red-shift objects. Observation of such distant galaxies
contributes to greater understanding of the origin and possible fate of the universe. The search for higher
red shifts took an unexpected turn when, in 1960, two radio sources were identified with what appeared to
be stars. Stars are not expected to be such strong radio sources, however, and the spectra of these objects
could not be associated with any type of star. It was not until 1963 that Maarten Schmidt correctly
interpreted the spectra as having enormous red shifts. These "quasi-stellar" objects, or quasars, may be the
violently active nuclei of very distant galaxies, but they are still the subject of great controversy.
Other evidence has given astronomers a good idea of the origin of the universe--the concern of cosmology.
In 1965, R. Wilson and A. A. Penzias discovered an isotropic microwave background radiation characteristic
of that emitted by a blackbody at 3 K. This blackbody radiation, whose existence has since been confirmed
by numerous observations, is believed to be an artifact of the big bang with which most astronomers believe
the universe began. According to the big bang theory, a single, cataclysmic event occurred about 20 billion
years ago that disrupted the dense mass composed of all matter and radiation in the universe. The matter
then dispersed, cooled, and condensed in the form of stars and galaxies. Most cosmologists now accept the
big bang theory rather than the rival steady-state theory and are attempting to account for the exotic physical
events that would have been involved in the universe's first moments of rapid inflation after the big bang.
Some theorists go on to propose that our universe is only one among an infinity of other universes, all of
them coming into being out of virtual nothingness. In the meantime, however, basic questions about the size
and age of the one known universe remain to be resolved.
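A quick way to see why radiation from a 3 K blackbody appears in the microwave region is Wien's displacement law, which gives the wavelength at which a blackbody radiates most strongly. The short sketch below uses the standard displacement constant; the temperature is the approximate value quoted above.

WIEN_CONSTANT = 2.898e-3    # Wien displacement constant, in meter-kelvins

def peak_wavelength(temperature_k):
    """Wavelength at which a blackbody of the given temperature is brightest."""
    return WIEN_CONSTANT / temperature_k

# For a 3 K background the peak is near 1 millimeter, squarely in the microwave region.
print(peak_wavelength(3.0))   # about 9.7e-4 m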
History of Astronomy
The history of astronomy comprises three broadly defined areas that have characterized the science of the
heavens since its beginnings. With varying degrees of emphasis among particular civilizations and during
particular historical periods, astronomers have sought to understand the motions of celestial bodies, to determine their physical characteristics, and to study the size and structure of the universe. The last of these pursuits is known as cosmology.
MOTIONS OF SUN, MOON, AND PLANETS
From the dawn of civilization until the time of Copernicus astronomy was dominated by the study of the
motions of celestial bodies. Such work was essential for astrology, for the determination of the calendar, and
for the prediction of eclipses, and it was also fueled by the desire to reduce irregularity to order and to
predict positions of celestial bodies with ever-increasing accuracy. The connection between the calendar
and the motions of the celestial bodies is especially important, because it meant that astronomy was
essential to determining the times for the most basic functions of early societies, including the planting and
harvesting of crops and the celebration of religious feasts.
The celestial phenomena observed by the ancients were the same as those of today. The Sun progressed
steadily westward in the course of a day, and the stars and the five visible planets did the same at night. The
Sun could be observed at sunset to have moved eastward about one degree a day against the background
of the stars, until in the course of a year it had completely traversed the 360° path of constellations that came to be known as the zodiac. The planets generally also moved eastward along the zodiac, within 8° of
the Sun's apparent annual path (the ecliptic), but at times they made puzzling reversals in the sky before
resuming their normal eastward motion. By comparison, the Moon moved across the ecliptic in about 27 1/3
days and went through several phases. The earliest civilizations did not realize that these phenomena were
in part a product of the motion of the Earth itself; they merely wanted to predict the apparent motions of the
celestial bodies.
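These rates are simply a full circle divided by the relevant period; as a rough check, in modern notation and with rounded values,

\frac{360^\circ}{365.25\ \text{days}} \approx 0.99^\circ/\text{day} for the Sun's eastward drift, and \frac{360^\circ}{27.3\ \text{days}} \approx 13.2^\circ/\text{day} for the Moon's motion along the zodiac.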
Although the Egyptians must have been familiar with these general phenomena, their systematic study of
celestial motions was limited to the connection of the flooding of the Nile with the first visible rising of the star
Sirius. An early attempt to develop a calendar based on the Moon's phases was abandoned as too complex,
and as a result astronomy played a lesser role in Egyptian civilization than it otherwise might have. Similarly,
the Chinese did not systematically attempt to determine celestial motions. Surprising evidence of a more
substantial interest in astronomy is found in the presence of ancient stone alignments and stone circles (see
megalith) found throughout Europe and Great Britain, the most notable of which is Stonehenge in England.
As early as 3000 BC, the collection of massive stones at Stonehenge functioned as an ancient observatory, where priests followed the annual motion of the Sun each morning along the horizon in order to determine the beginning of the seasons. By about 2500 BC, Stonehenge may have been used to predict eclipses of the Moon. Not until AD 1000 were similar activities undertaken by New World cultures (see archaeoastronomy).
Babylonian Tables
Astronomy reached its first great heights among the Babylonians. In the period from about 1800 to 400 BC,
the Babylonians developed a calendar based on the motion of the Sun and the phases of the Moon. During
the 400 years that followed, they focused their attention on the prediction of the precise time the new
crescent Moon first became visible and defined the beginning of the month according to this event.
Cuneiform tablets deciphered only within the last century demonstrate that the Babylonians solved the
problem within an accuracy of a few minutes of time; this was achieved by compiling precise observational
tables that revealed smaller variations in the velocity of the Sun and of the Moon than ever before
measured. These variations, and others such as changes in the Moon's latitude, were analyzed numerically
by noting how the variations fluctuated with time in a regular way. They used the same numerical method,
utilizing the same variations, to predict lunar and solar eclipses.
Greek Spheres and Circles
The Greeks used a geometrical rather than a numerical approach to understand the same celestial motions.
Influenced by Plato's metaphysical concept of the perfection of circular motion, the Greeks sought to
represent the motion of the divine celestial bodies by using spheres and circles. This explanatory method
was not upset until Kepler replaced the circle with the ellipse in 1609.
Plato's student Eudoxus of Cnidus, c.408-c.355 BC, was the first to offer a solution along these lines. He assumed that each planet is attached to one of a group of connected concentric spheres centered on the Earth, and that these spheres rotate about differently oriented axes to produce the observed motion. With this scheme of crystalline spheres he failed to account for the variation in brightness of the planets; the scheme was incorporated, however, into Aristotle's cosmology during the 4th century BC. Thus the Hellenic civilization that culminated with Aristotle attempted to describe a physical cosmology. In contrast, the Hellenistic civilization that followed the conquests of Alexander the Great developed, over the next four centuries, predominantly mathematical mechanisms to explain celestial phenomena. The basis for this approach was a
variety of circles known as eccentrics, deferents, and epicycles. The Hellenistic mathematician Apollonius of
Perga, c.262-c.190 BC, noted that the annual motion of the Sun can be approximated by a circle with the
Earth slightly off-center, or eccentric, thus accounting for the observed variation in speed over a year.
Similarly, the Moon traces an eccentric circle in a period of 27 1/3 days. The periodic reverse, or retrograde,
motion of the planets across the sky required a new theoretical device. Each planet was assumed to move
with uniform velocity around a small circle (the epicycle) that moved around a larger circle (the deferent),
with a uniform velocity appropriate for each particular planet. Hipparchus, c.190-120 BC, the most outstanding
astronomer of ancient times, made refinements to the theory of the Sun and Moon based on observations
from Nicaea and the island of Rhodes, and he gave solar theory essentially its final form. It was left for
Ptolemy, c.100-c.165, to compile all the knowledge of Greek astronomy in the Almagest and to develop the
final lunar and planetary theories.
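To make the deferent-and-epicycle construction concrete, the short Python sketch below computes the apparent, Earth-centered position of a planet carried on an epicycle whose center rides around a deferent. The radii and periods are illustrative, loosely Mars-like placeholders, not values used by Apollonius, Hipparchus, or Ptolemy; the point is only that the combined circular motions periodically reverse the planet's apparent direction, reproducing retrograde loops.

import math

# Illustrative parameters only (arbitrary units, not Ptolemaic values).
R_DEFERENT, T_DEFERENT = 10.0, 687.0   # epicycle center circling the Earth
R_EPICYCLE, T_EPICYCLE = 6.6, 365.0    # planet circling the epicycle center

def apparent_position(t):
    """Geocentric x, y of the planet at time t (in days)."""
    a = 2 * math.pi * t / T_DEFERENT   # angle of the epicycle center on the deferent
    b = 2 * math.pi * t / T_EPICYCLE   # angle of the planet on the epicycle
    x = R_DEFERENT * math.cos(a) + R_EPICYCLE * math.cos(b)
    y = R_DEFERENT * math.sin(a) + R_EPICYCLE * math.sin(b)
    return x, y

def longitude(t):
    """Apparent ecliptic longitude (degrees) of the planet as seen from the Earth."""
    x, y = apparent_position(t)
    return math.degrees(math.atan2(y, x))

# Intervals where successive values decrease (apart from the 360-degree
# wrap-around) are the retrograde loops; in between, the planet drifts eastward.
lons = [longitude(t) for t in range(0, 800, 5)]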
With Ptolemy the immense power and versatility of these combinations of circles as explanatory
mechanisms reached new heights. In the case of the Moon, Ptolemy not only accounted for the chief
irregularity, called the equation of the center, which allowed for the prediction of eclipses, but also discovered and corrected another irregularity, evection, at other points of the Moon's orbit by using an epicycle on a movable eccentric deferent, whose center revolved around the Earth. When Ptolemy made a further refinement known as prosneusis, he was able to predict the place of the Moon within 10 minutes, or 1/6 degree,
of arc in the sky; these predictions were in good agreement with the accuracy of observations made with the
instruments used at that time. Similarly, Ptolemy described the motion of each planet in the Almagest, which
passed, with a few notable elaborations, through Islamic civilization and on to the Renaissance European
civilization that nurtured Nicolaus Copernicus.
The revolution associated with the name of Copernicus was not a revolution in the technical astronomy of
explaining motions, but rather belongs to the realm of cosmology. Prodded especially by an intense dislike
of one of Ptolemy's explanatory devices, known as the equant, which compromised the principle of uniform
circular motions, Copernicus placed not the Earth but the Sun at the center of the universe; this view was
put forth in his De revolutionibus orbium caelestium (On the Revolutions of the Heavenly Spheres, 1543). In
that work, however, he merely adapted the Greek system of epicycles and eccentrics to the new
arrangement. The result was an initial simplification and harmony as the diurnal and annual motions of the
Earth assumed their true meaning, but no overall simplification in the numbers of epicycles needed to
achieve the same accuracy of prediction as had Ptolemy. It was therefore not at all clear that this new
cosmological system held the key to the true mathematical system that could accurately explain planetary
motions.
Keplerian Ellipses and Newtonian Gravitation
The German astronomer Johannes Kepler provided a daring solution to the problem of planetary motions
and demonstrated the validity of the heliocentric theory of Copernicus, directly associating the Sun with the
physical cause of planetary motions. At issue for Kepler was a discrepancy of a mere 8 minutes of arc between theory and
observation for the position of the planet Mars. This degree of accuracy would have delighted Ptolemy or
Copernicus, but it was unacceptable in light of the observations of the Danish astronomer Tycho Brahe,
made from Uraniborg Observatory with a variety of newly constructed sextants and quadrants and accurate
to within 1 to 4 minutes of arc. This new scale of accuracy revolutionized astronomy, for in his Astronomia nova (New
Astronomy, 1609), Kepler announced that Mars and the other planets must move in elliptical orbits, readily
predictable by the laws of planetary motion that he proceeded to expound in this work and in the
Harmonices mundi (Harmonies of the World, 1619). Only by abandoning the circle could the heavens be
reduced to an order comparable to the most accurate observations.
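For reference, the laws Kepler announced in these two works are usually stated today as follows (a modern paraphrase, not Kepler's own wording): each planet moves on an ellipse with the Sun at one focus; the line from the Sun to a planet sweeps out equal areas in equal times; and the square of a planet's orbital period is proportional to the cube of its mean distance from the Sun, $P^{2} \propto a^{3}$.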
Kepler's laws and the Copernican theory reached their ultimate verification with Sir Isaac Newton's
enunciation of the laws of universal gravitation in the Principia (1687). In these laws, the Sun was assigned
as the physical cause of planetary motion. The laws also served as the theoretical basis for deriving Kepler's
laws. During the 18th century, the implications of gravitational astronomy were recognized and analyzed by
able mathematicians, notably Jean d'Alembert, Alexis Clairaut, Leonhard Euler, Joseph Lagrange, and
Pierre Laplace. The science of celestial mechanics was born and the goal of accurate prediction was finally
realized.
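One standard way to see the connection between gravitation and Kepler's third law is the circular-orbit argument (a sketch assuming a planet of negligible mass m on a circular orbit of radius a about the Sun of mass M, which is not how Newton presented it in the Principia). Setting the gravitational force equal to the centripetal force and using the orbital speed v = 2\pi a / P gives

\frac{G M m}{a^{2}} = \frac{m v^{2}}{a}, \qquad v = \frac{2\pi a}{P} \quad\Longrightarrow\quad P^{2} = \frac{4\pi^{2}}{G M}\, a^{3},

which is Kepler's third law with the constant of proportionality made explicit.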
During all of this discussion the stars had been regarded as fixed. While working on his catalog of 850 stars,
however, Hipparchus had already recognized the phenomenon known as the precession of the equinoxes,
an apparent slight change in the positions of stars over a period of hundreds of years caused by a wobble in
the Earth's motion. In the 18th century Edmond Halley determined that the stars had their own motion,
known as proper motion, that was detectable even over a period of a few years. The observations of stellar
positions, made with transit instruments through the monumental labors of such scientists as John
Flamsteed, laid the groundwork for solving a cosmological problem of another era: the distribution of the
stars and the structure of the universe.
PHYSICAL CHARACTERISTICS OF CELESTIAL BODIES
The study of the motions of the celestial bodies required only that they be regarded as mere points of light.
But already in the 4th century BC, Aristotle proposed in his De caelo (On the Heavens) a theory of the
physical nature of these bodies, which conferred on them the properties of perfection and unchangeability
thought appropriate to the divine celestial regions, in contrast to the ever-changing Earth. In the next
century, Aristarchus of Samos, again exhibiting the mathematical penchant of the Hellenistic era, offered an
observationally based estimate, although about ten times too low, of the sizes of the Sun and Moon. These
size estimates were universally admired, and Aristotle's physical hypothesis only sporadically challenged, by
medieval commentators.
Moon and Planets
The physical similarity of the Earth and planets became a matter of significant inquiry only after Copernicus
showed that the Earth and all of the planets are in motion around the central Sun. The question might have
remained forever unresolved had not Galileo Galilei constructed a telescope (not the first in Europe) and turned it toward the heavens in 1609. The results, announced in the Sidereus nuncius
(Sidereal Messenger, 1610), were shattering to the Aristotelian view. The Moon was found to be a
mountainous body "not unlike the face of the Earth." Galileo's further discovery of the moons of Jupiter and
the phases of Venus was more evidence that the planets had Earthlike characteristics.
Such discoveries, extremely important as verification for the physical reality of the Copernican theory, slowly accumulated throughout the 17th and 18th centuries. The nature of the Moon was discussed
in increasing detail in Kepler's Somnium (1634), Johannes Hevelius's Selenographia (1647), and G. B.
Riccioli's Almagestum novum (1651). Christiaan Huygens, the greatest observational astronomer of the 17th
century, first correctly interpreted the rings of Saturn in his Systema Saturnium (1659), observed dark
markings on Mars and belts of clouds on Jupiter, and speculated that Venus was shrouded in clouds. With
more refined telescopes, such as those built by Sir William Herschel in England, the details of the solar
system became better known. Herschel himself made the spectacular discovery of the planet Uranus in
1781. In 1846, the presence of yet another planet (Neptune), predicted by J. C. Adams and U. J. J.
Leverrier, was confirmed observationally, a triumph for both theory and observation.
With the invention of spectroscopy in the 1860s, and with the work of Sir William Huggins and Joseph Norman Lockyer in England, P. A. Secchi in Rome, César Janssen in Paris, Lewis Rutherfurd (1816-92) in
the United States, and Hermann Vogel (1842-1907) in Germany, the science of astrophysics was born. By
spreading the light of the celestial bodies into the constituent colors of the spectrum, each interspersed with
lines characteristic of the elements present, a powerful new tool was given to the astronomer. The ability to
determine the chemical composition of planetary atmospheres and even of the stars, a task which the
positivist philosopher Auguste Comte had offered less than 30 years before as the paradigm of what science
could never achieve, now became possible.
Sun and Stars
Astrophysics yielded its most substantial results in the study of the Sun and stars, where the myriads of
observed spectral lines were gradually interpreted as a precise set of chemical fingerprints. With
spectroscopy and the almost simultaneous invention of photography, astronomers compiled great catalogs
mapping the solar spectrum. Knowledge of the Sun, which had not advanced substantially since Galileo's
discovery of sunspots, now outstripped planetary astronomy. The means were at hand to determine the
temperature, composition, age, and structure of the Sun and to compare this data with that of the other
stars, which were now for the first time proved to be other suns. In conjunction with advances in nuclear
physics the investigations of the latter half of the 19th century led to the building of a firm foundation for a
discussion in the first quarter of the 20th century of the internal constitution and evolution of stars. This
program was pioneered by A. S. Eddington and Karl Schwarzschild. By providing a wealth of data previously
unavailable, astrophysics fueled the controversy over the possible existence of extraterrestrial life not only in
our solar system, but throughout a universe of other possible solar systems.
COSMOLOGY: THE STRUCTURE OF THE UNIVERSE
The study of the motions and physical characteristics of celestial bodies could be undertaken on a case-by-case basis. Cosmology, on the other hand, by definition had to mold from the whole body of observations a
coherent theory of the structure of the universe. From the time of Aristotle until the Copernican revolution
this structure had been conceived firmly as Earth-centered, in spite of Aristarchus' heliocentric views.
Between the inner Earth and the outer sphere of fixed stars each planet was considered embedded in a
crystalline sphere that was in uniform circular motion. Within this neat compact structure the whole history of
humankind and the universe unfolded. The work of Kepler and Newton on planetary motion and of Galileo
and others on the physical nature of the planets resulted in the gradual acceptance during the 17th century
of the Copernican heliocentric hypothesis as a physically valid system. This acceptance had profound
implications for cosmology. Not only did Kepler's theory of ellipses make obsolete the crystalline spheres of Aristotle, but the absence of any detectable stellar parallax (an apparent change in the direction of a star), even when measured from opposite sides of the Earth's orbit, also demonstrated the necessity of acknowledging the enormous size of the universe. The subsequent shift from the closed, tightly structured
world to an infinite, homogeneous universe was one of the landmarks in the history of astronomy. Giordano Bruno and Thomas Digges were among the new concept's earliest exponents, and René Descartes and
Newton incorporated it as a standard part of the new view of the universe.
The Galaxy
The downfall of the concept of the sphere of fixed stars opened the way for investigations into the
distribution of the stars. The phenomenon of the Milky Way, shown by Galileo to consist of myriads of stars,
hinted that some previously unsuspected structure might exist among the stars. In 1750 the English
theologian and astronomer Thomas Wright sought to explain the brilliantly luminous band of the Milky Way
as a collection of stars that extended further in the direction of the band than in other directions. In 1780,
William Herschel initiated an observational program of star counts, or gauges, of selected regions of the sky.
This program, which assumed that the brightness of a star is a measure of its distance from the Earth, resulted in a picture of a flattened, disk-shaped system with the Sun near the center. In spite of the fact that his
assumptions were incorrect, Herschel's research marked the beginnings of an understanding of the structure
of the stellar system; his research also earned him the title of founder of stellar astronomy.
Because of the lack of a direct method for determining stellar distances, progress in cosmology lagged for
more than a century after Herschel's star gauges. Only with improved instrumentation did Friedrich Bessel,
Wilhelm Struve, and Thomas Henderson (1798-1844) in 1838 succeed in measuring the first stellar
distances. An annual parallactic shift of 0.31 second of arc in the position of the star 61 Cygni implied a distance
equivalent to 590,000 times that of the Earth from the Sun. Difficulties innate to the method, however,
resulted in fewer than 300 known stellar distances by the end of the century, far too few to solve the
problem of determining the structure of the stellar system.
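The distance follows from the small-angle parallax relation; with the parallax angle p expressed in seconds of arc, the distance in astronomical units (Earth-Sun distances) is approximately

d \approx \frac{206{,}265}{p}\ \text{AU},

where 206,265 is the number of seconds of arc in one radian, so a parallax of roughly a third of a second of arc corresponds to several hundred thousand astronomical units. (This is the textbook relation, not the particular reduction Bessel used.)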
Along with continuous attempts at parallax measurements, a major task of 19th-century astronomers was
the compilation of astronomical catalogs and atlases containing the precise magnitudes, positions, and
motions of stars. The work of Friedrich Argelander, David Gill, and J. C. Kapteyn is especially notable in this
regard, with the latter two astronomers utilizing new photographic techniques. Such surveys yielded the two-dimensional distribution of stars over the celestial sphere. Heroic efforts were made, especially by Hugo von
Seeliger, to extrapolate from this data to an understanding of three-dimensional structure.
Real progress was finally made only through an analysis of the extremely small motions of stars, known as
proper motion. Building on the astronomical catalogs of James Bradley, G. F. Arthur von Auwers (1838-1915), and Lewis Boss (1846-1912), and on his own observations, Kapteyn, exploiting the new field of
statistical astronomy, applied statistical methods of distance determination to find an ellipsoidal shape for the
system of stars. In 1904 he found that the stars streamed in two directions. Only in 1927 did J. H. Oort,
working on the basis of Bertil Lindblad's studies, determine that Kapteyn's observational data could be
accounted for if the galaxy were assumed to be rotating. In a cosmological shift comparable to the
Copernican revolution four centuries before, the Earth's solar system was found to lie not at the center of the
Galaxy but, rather, many thousands of light-years from the Galaxy's center.
Other Galaxies
The question of whether or not the Galaxy constituted the entirety of the universe came to a head in the
1920s with the debate between H. D. Curtis and Harlow Shapley. Curtis argued that nebulae are island
universes similar to but separate from our own galaxy; Shapley included the nebulae in our galaxy. The
controversy was settled when E. P. Hubble detected Cepheid stars in the Andromeda nebula and used them
in a new method of distance determination, demonstrating that Andromeda and many other nebulae are far
outside the Milky Way. Thus the universe was found to consist of a large number of galaxies, spread like
islands through infinite space (see extragalactic systems).
Such was the progress of astronomy from the time of the Babylonian observations of planetary motions to
within a few degrees' accuracy, to the Greek determinations of positions within a few minutes of arc, to the
19th-century measurements of parallax and proper motions in fractions of a second of arc. The concern of
astronomers evolved from the determination of apparent motions to the observation of planetary surfaces
and ultimately to the measurement of the motions of the stars and galaxies.
Biology
Biology is the science of living systems. It is inherently interdisciplinary, requiring knowledge of the physical
sciences and mathematics, although specialties may be oriented toward a group of organisms or a level of
organization. Botany is concerned with plant life, zoology with animal life, algology with algae, mycology with
fungi, microbiology with microorganisms such as protozoans and bacteria, cytology with cells, and so on. All
biological specialties, however, are concerned with life and its characteristics. These characteristics include
cellular organization, metabolism, response to stimuli, development and growth, and reproduction.
Furthermore, the information needed to control the expression of such characteristics is contained within
each organism.
FUNDAMENTAL DISCIPLINES
Life is divided into many levels of organization: atoms, molecules, cells, tissues, organs, organ systems,
organisms, and populations. The basic disciplines of biology may study life at one or more of these levels.
Taxonomy attempts to arrange organisms in natural groups based on common features. It is concerned with
the identification, naming, and classification of organisms (see classification, biological). The seven major
taxonomic categories, or taxa, used in classification are kingdom, phylum, class, order, family, genus, and
species. Early systems used only two kingdoms, plant and animal, whereas most modern systems use five:
Monera (bacteria and blue-green algae), Protista (Protozoa and the other algae), fungi, plant, and animal.
The discipline of ecology is concerned with the interrelationships of organisms, both among themselves and
between them and their environment. Studies of the energy flow through communities of organisms and the
environment (the ecosystem approach) are especially valuable in assessing the effects of human activities.
An ecologist must be knowledgeable in other disciplines of biology.
Organisms respond to stimuli from other organisms and from the environment; behaviorists are concerned
with these responses. Most of them study animals (as individuals, groups, or entire species) in describing
animal behavior patterns. These patterns include animal migration, courtship and mating, social
organization, territoriality, instinct, and learning. When humans are included, biology overlaps with
psychology and sociology. Growth and orientation responses of plants can also be studied in the discipline
of behavior, although they are traditionally considered as belonging under development and physiology,
respectively.
Descriptive and comparative embryology are the classic areas of development studies, although
postembryological development, particularly the aging process, is also examined. The biochemical and
biophysical mechanisms that control normal development are of particular interest when they are related to
birth defects, cancer, and other abnormalities.
Inheritance of physical and biochemical characteristics, and the variations that appear from generation to
generation, are the general subjects of genetics. The emphasis may be on improving domestic plants and
animals through controlled breeding, or it may be on the more fundamental questions of molecular and
cellular mechanisms of heredity.
A branch of biology growing in importance since the 1940s, molecular biology essentially developed out of
genetics and biochemistry. It seeks to explain biological events by studying the molecules within cells, with a
special emphasis on the molecular basis of genetics (nucleic acids in particular) and its relationship to
energy cycling and replication. Evolution, including the appearance of new species, the modification of
existing species, and the characteristics of extinct ones, is based on genetic principles. Information about
the structure and distribution of fossils that is provided by paleontologists is essential to understanding these
changes.
Morphology (from the Greek, meaning "form study") traditionally has examined the anatomy of all
organisms. The middle levels of biological organization (cells, tissues, and organs) are the usual topics, with
comparisons drawn among organisms to help establish taxonomic and evolutionary relationships.
As important as the form of an organism are its functions. Physiology is concerned with the life processes of
entire organisms as well as those of cells, tissues, and organs. Metabolism and hormonal controls are some
of the special interests of this discipline.
HISTORY OF BIOLOGY
The oldest surviving archaeological records that indicate some rudimentary human knowledge of biological
principles date from the Mesolithic Period. During the Neolithic Period, which began about 10,000 years ago,
various human groups developed agriculture and the medicinal use of plants. In ancient Egypt, for example,
a number of herbs were being used medicinally and for embalming.
Early Development
As a science, however, biology did not develop until the last few centuries BC. Although Hippocrates, known
as the father of medicine, influenced the development of medicine apart from its role in religion, it was
Aristotle, a student of Plato, who established observation and analysis as the basic tools of biology. Of
particular importance were Aristotle's observations of reproduction and his concepts for a classification
system.
As the center of learning shifted from Greece to Rome and then to Alexandria, so did the study of biology.
From the 3d century BC to the 2d century AD, studies focused primarily on agriculture and medicine. The Arabs
dominated the study of biology during the Middle Ages and applied their knowledge of the Greeks'
discoveries to medicine.
The Renaissance was a period of rapid advances, especially in Italy, France, and Spain, where Greek
culture was being rediscovered. In the 15th and 16th centuries, Leonardo da Vinci and Michelangelo
became skilled anatomists through their search for perfection in art. Andreas Vesalius initiated the use of
dissection as a teaching aid. His De humani corporis fabrica (1543; 2d ed., 1550) contained detailed
anatomical illustrations that became standards. In the 17th century William Harvey introduced the use of
experimentation in his studies of the human circulatory system. His work marked the beginning of
mammalian physiology.
Scientific Societies and Journals
Lack of communication was a problem for early biologists. To overcome this, scientific societies were
organized. The first were in Europe, beginning with the Accademia dei Lincei (Rome, 1603).
The Boston Philosophical Society, founded in 1683, was probably the first such society to be organized in
colonial America. Later, specialized groups, principally of physicians, organized themselves, among them
the American Association for the Advancement of Science (AAAS), founded in 1848. Much later, in 1951,
the American Institute of Biological Science (AIBS) was formed as an alliance of the major biological
societies in the United States.
The first journals to present scientific discoveries were published in Europe starting in 1665; they were the
Journal des Savants, in France, and Philosophical Transactions of the Royal Society, in London. Over the
years numerous other journals have been established, so that today nearly all societies record their
transactions and discoveries.
Development and Early Use of the Microscope
Before 1300, optical lenses were unknown, except for crude spectacles used for reading. Modern optics
began with the invention of the microscope by Galileo Galilei about 1610. Microscopy originated in 1625
when the Italian Francesco Stelluti published his drawings of a honeybee magnified 10 times.
The 17th century produced five microscopists whose works are considered classics: Marcello Malpighi
(Italy), Antoni van Leeuwenhoek and Jan Swammerdam (Holland), and Robert Hooke and Nehemiah Grew
(England). Notable among their achievements were Malpighi's description of lung capillaries and kidney
corpuscles and Hooke's Micrographia, in which the term cell was first used.
Basis for Modern Systematics
Consistent terminology and nomenclature were unknown in early biological studies, although Aristotle
regularly described organisms by genos and eidos (genus and species). Sir Isaac Newton's Principia
(1687) describes a rigid universe with an equally rigid classification system. This was a typical approach of
the period. The leading botanical classification was that used in describing the medicinal values of plants.
Modern nomenclature based on a practical binomial system originated with Karl von Linné (Latinized to
Carolus Linnaeus). In addition to arranging plants and animals into genus and species based on structure,
he introduced the categories of class and order. Jean Baptiste Lamarck based his system on function, since
this accommodated his view of the inheritance of acquired characteristics. In 1817, Georges, Baron Cuvier,
became the first to divide the entire animal kingdom into subgroups, for example, Vertebrata, Mollusca,
Articulata, and Radiata.
Explorations and Explorers
During the 18th and 19th centuries numerous important biological expeditions were organized. Three of
these, all British, made outstanding contributions to biology. Sir Joseph Banks, on Captain Cook's ship
Endeavor, explored (1768-71) the South Seas, collecting plants and animals of Australia. Robert Brown, a
student of Banks's, visited Australia from 1801 to 1805 on the Investigator and returned with more than
4,000 plant specimens. On perhaps the most famous voyage, Charles Darwin circumnavigated (1831-36) the globe on the Beagle. His observations of birds, reptiles, and flowering plants in the Galápagos Islands in
1835 laid the foundation for his theories on evolution, later published in On the Origin of Species (1859).
The Discovery of Microorganisms
Arguments about the spontaneous generation of organisms had been going on since the time of Aristotle,
and various inconclusive experiments had been conducted. Louis Pasteur clearly demonstrated in 1864 that
no organisms emerged from his heat-sterilized growth medium as long as the medium remained in sealed
flasks, thereby disproving spontaneous generation. Based on Edward Jenner's studies of smallpox, Pasteur
later developed a vaccine for anthrax and in 1885 became the first to successfully treat a human bitten by a
rabid dog.
Beginning in 1876, Robert Koch developed pure-culture techniques for microorganisms. His work verified
the germ theory of disease. One of his students, Paul Ehrlich, developed chemotherapy and in 1909 devised
a chemical cure for syphilis.
The value of antibiotics became evident when Sir Alexander Fleming discovered penicillin in 1928. An
intensive search, between 1940 and 1960, for other antibiotics resulted in the development of several dozen
that were used extensively. Although antibiotics have not been the panacea once anticipated, their use has
resulted in a decreased incidence of most infectious diseases.
The Role of the Cell
Following Hooke's use of the term cell, biologists gradually came to recognize this unit as common
throughout living systems. The cell theory was published in 1839 by Matthias Schleiden (see Schwann,
Theodor, and Schleiden, Matthias Jakob), a plant anatomist. Schleiden saw cells as the basic unit of
organization and perceived each as having a double life, one "pertaining to its own development" and the
other "as an integral part of a plant. " Schwann, an animal histologist, noticed that not all parts of an
organism consist of cells. He added to the theory in 1840 by establishing that these parts are at least "cell
products." Between 1868 and 1898 the cell theory was enlarged as substructures of the cellÑfor example,
plastids and mitochondriaÑwere observed and described.
Basic Life Functions
Until the 17th century it was believed that plants took in food, preformed, from the soil. In about 1640, Jan
Baptista van Helmont, the first experimental physiologist, concluded that water is the only soil component
required for plant growth. Stephen Hales showed (1727) that air held the additional ingredient for food
synthesis. In 1779, Jan Ingenhousz identified this as carbon dioxide.
The study of photosynthesis began with a demonstration by Julius von Sachs and Nathanael Pringsheim in
the mid-19th century that light is the energy source of green plants. Vernon Herbert Blackman showed
(1905) that not all parts of this process require sunlight. Results of work done during the 1920s and '30s
proved that chloroplasts produce oxygen. Subsequently, it was shown that the light-dependent reactions use the energy of light to form two types of high-energy molecules.
The route of carbon dioxide in photosynthesis was worked out by Melvin Calvin in the early 1950s, using the
radioisotope carbon-14. His results proved Blackman correct: there exist two distinct but closely coordinated
sets of chloroplast reactions, one light-dependent and the other light-independent. High-energy products of
the light-dependent reactions are required for incorporation of carbon dioxide into sugars in the light-independent reactions.
The earliest demonstration of ferments (the word enzyme was not coined until 1878) in pancreatic juice was
made by Claude Bernard in France. Bernard also experimentally determined numerous functions of the liver
as well as the influence of vasomotor nerves on blood pressure.
In the 1930s, Otto Warburg discovered a series of cellular enzymes that start the process of glucose
breakdown to produce energy for biological activity. When Hans Krebs demonstrated (1937) an additional
series of enzyme reactions (the citric acid cycle) that completes the oxidation process, the general
respiration scheme of cells became known.
Chemical synchronization of body functions without direct control by the nervous system was discovered in
1905 by Sir William M. Bayliss and Ernest Henry Starling (the first to use the term hormone). Steroids were
discovered in 1935.
Continuity in Living Systems
The early biologists known as preformationists believed that animals existed preformed, either in sperm (the
animalculist's view) or in the egg (the ovist's belief). Embryology actually began when Karl Ernst von Baer,
using the microscope, observed that no preformed embryos exist. Modern interpretations of developmental
control in embryogenesis can be traced to Hans Spemann's discovery in 1915 of an "organizer" area in frog
embryos. More recent research has shown the importance of other factors, such as chemical gradients.
Genetics, the study of heredity, began with the work of Gregor Johann Mendel, who published his findings in
1866. Mendel's extensive experiments with garden peas led him to conclude that the inheritance of each
characteristic is controlled by a pair of physical units, or genes. These units, one from each parent (the law
of segregation), were passed on to offspring, apparently independent of the distribution of any other pairs
(the law of independent assortment). The gene concept was amplified by the rediscovery and confirmation
of Mendel's work in 1900 by Hugo De Vries in Holland, Karl Erich Correns in Germany, and Erich
Tschermak von Seysenegg in Austria. From these studies De Vries developed his theory of mutation,
introducing it in 1903.
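The law of segregation lends itself to a short simulation. The Python sketch below (a minimal illustration with hypothetical allele symbols, not Mendel's own notation) crosses two heterozygous parents many times; offspring showing the dominant trait should approach the familiar 3:1 ratio.

import random

def cross(parent1, parent2):
    """Each parent passes on one randomly chosen allele (the law of segregation)."""
    return random.choice(parent1) + random.choice(parent2)

# Cross two heterozygous parents ("Aa" x "Aa") repeatedly; offspring carrying at
# least one dominant allele "A" should make up about three-quarters of the total.
offspring = [cross("Aa", "Aa") for _ in range(10000)]
dominant_fraction = sum("A" in child for child in offspring) / len(offspring)
print(dominant_fraction)   # close to 0.75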
The chromosome theory is based on the speculations of Wilhelm Roux in 1883 that cell nuclei contain
linear arrangements of beadlike components that replicate (produce exact copies) during cell division. Many
important contributions were made early in the 20th century by the American Thomas Hunt Morgan. These
included the demonstration of sex-linked inheritance and the incorporation of chromosomal crossing over into gene theory.
The discovery by Godfrey H. Hardy and Wilhelm Weinberg of the equilibrium relationship that exists between
frequency of alleles (a term originated by William Bateson in 1909 for alternate forms of a gene) in a
population led to formulation of the law bearing their names. The role of genetics in evolution was publicized
in 1937 by Theodosius Dobzhansky's Genetics and the Origin of Species.
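The Hardy-Weinberg relationship itself is a short piece of algebra: if a gene has two alleles with frequencies p and q (so that p + q = 1), then under random mating, and in the absence of selection, mutation, migration, and drift, the genotype frequencies settle at

p^{2} + 2pq + q^{2} = 1,

where p^2 and q^2 are the frequencies of the two homozygotes and 2pq the frequency of the heterozygote, and these frequencies remain constant from one generation to the next.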
Molecular biology, the most recent branch of biology, began early in the 20th century with Archibald Garrod's
work on the biochemical genetics of various diseases. The concept of one gene producing one enzyme was
established in 1941 by George W. Beadle and Edward L. Tatum. The work on protein synthesis by Jacques
Monod, François Jacob, and others in 1961 modified the one gene-one enzyme concept to one gene-one protein. Essential to the understanding of protein synthesis were the advances made in the
1940s and '50s in understanding the role and structure of nucleic acids. The structural model proposed in
1953 by James D. Watson and F. H. C. Crick is a landmark in biology. It has given biologists a feasible way
to explain the storage and precise transmission of genetic information from one generation to the next (see
genetic code). Knowledge of biological processes at the molecular level has also enabled scientists to
develop techniques for the direct manipulation of genetic information, a field now called genetic engineering.
UNITY OF LIVING SYSTEMS
Despite the astounding diversity of organisms that have been discovered, an equally astounding degree of
unity of structure and function has been discerned. The structure of flagella is essentially the same in all
cells having nuclei. The molecules involved in growth and metabolism are remarkably similar, and often they
are constructed of identical subunits. Furthermore, enzymes, the catalysts of biological chemistry, are now
known to act similarly in all organisms. Phenomena such as cell division and the transmission of the genetic
code also appear to be universal.
Botany
In simplest terms, botany may be defined as the field of science that is concerned with the study of plants.
This definition is not completely inclusive, however, because botany is only one of three fields into which the broader life science of biology is divided, the other two being microbiology and zoology.
Biology recognizes various life kingdoms. According to the classification system used in this encyclopedia,
these kingdoms are the Monera (bacteria and blue-green algae), Protista (one-celled organisms), fungi,
plants, and animals (see classification, biological). With some variations, the fields of biology divide these life
forms among themselves. Thus, of the five kingdoms, botany generally considers the monerans, the protists
that contain chlorophyll, the fungi, and the plants as its subject matter.
Subdivisions of Botany
The study of plants may be approached from many directions, each of which is a specialization involving an
aspect of plant life: for example, classification, form and structure, life processes and functions, diseases,
fossils, and heredity and variation.
Plant taxonomy is the study of plant classification, that is, the grouping of plants into species, genera,
families, orders, classes, and divisions reflecting evolutionary, or family tree, relationships. Taxonomists
provide internationally recognized scientific names for plants. As the field of taxonomy has become more
specialized, various branches of the discipline have appeared, including biosystematics, numerical
taxonomy, cytotaxonomy, and chemical taxonomy, which reflect different approaches to problems of
classification.
Plant morphology is the study of the form and structure of plants. Morphologists may investigate plant life
cycles or internal structure (plant anatomy). Specialized branches of morphology are cytology (study of cell
structure and function), palynology (study of pollen and spores), and morphogenesis (study of how plants
develop their form).
Plant physiology is the study of life processes of plants (for example, photosynthesis and respiration) and of
the functions of different tissues and organs. Many physiologists, working mainly with chemical processes in
plants, might well be called biochemists.
Plant ecology is the study of how plants affect, and are affected by, their environment and of the structure
and distribution of vegetation, or communities of plants (see ecology).
Plant pathology is the study of plant diseases (see diseases, plant). Paleobotany is the study of the fossil
record of plants. A developing science is paleoecology, which considers all fossil life and its environmental
relationships. Genetics is the study of heredity and variation. The basic principles and processes of genetics
are so similar in plants and animals that little distinction is made between them.
Some additional branches of botany are devoted to particular groups of plants. For example, bacteriology is
the study of bacteria; mycology, the study of fungi; and phycology, the study of algae. Workers in these
fields may be concerned with any aspect (taxonomy, morphology, physiology, ecology) of the group.
Economic botany is concerned with those plants that are of economic importance, because of their
usefulness (food, fiber, medicine) or because of the harm they do (weeds, poisonous plants). Ethnobotany is
concerned with how primitive societies used plants.
These many facets of botany are not mutually exclusive. Each relies on one or more of the others. Plant
pathologists, for example, are interested in taxonomy, morphology, physiology, ecology, and genetics of the
disease-causing organisms they study.
History
People have always been interested in plants as sources of useful products or as troublemakers. In
prehistoric societies some persons probably were more interested in plants than others, and perhaps it was
these "primitive botanists" who discovered that seeds produce plants, that pieces of plants (cuttings) can
grow into new plants, and that certain plant parts have healing properties.
The first written records of scientific botany are from the time of the ancient Greeks and Romans. It was
among these people that some of the methods of science (observation, description, deduction, and organization of knowledge) first appeared. Theophrastus, the "father of botany," wrote extensively about plants (their form, classification, and natural history), but only two of his works, Historia plantarum and De
causis plantarum, survive. Pedanius Dioscorides wrote De materia medica, a popular herbal that described
plants, especially those useful in medicine. It combined fact with superstition. Several Roman authors also
contributed to the literature. Despite their imperfections, for 1,500 years these early works, especially those
by Theophrastus and Dioscorides, were accepted in Europe without serious question.
During the 16th century, after the invention of the printing press, a number of herbals appeared that
contained original rather than borrowed observations and that critically evaluated knowledge and authority
from earlier times. Exploration of various parts of the world was making Europeans aware of the enormous
variety of plants that existed, and plants were again studied carefully.
Exploration of the invisible world began in the late 16th century when the compound microscope was
invented. The pioneer microscopic studies that took place in the 17th century were the start of plant
anatomy.
The age of botanical experimentation, a fundamental activity in science, began during the 17th century.
Experimental plant physiology had its beginnings when the growth of a willow tree supplied only with water was measured by the chemist Johannes Baptista van Helmont; the study was published in 1648. The work of Stephen Hales, considered
the founder of plant physiology, led to studies of basic processes such as photosynthesis.
Other early physiologists studied the movement of water in plants, and in 1779 Jan Ingenhousz showed that plants produce oxygen only in sunlight.
In 1753, Carolus Linnaeus published Species Plantarum, which described 6,000 species of plants and
introduced the consistent use of binomials, two-word names of plants (for example, Quercus alba for white
oak), a basis for the classification system in use today. The history of taxonomy since then has involved,
primarily, efforts to make plant classification more nearly reflect evolutionary relationships and, secondarily,
refining the rules for scientific names of plants.
Vast herbaria (collections of pressed, dried plants) have been built up as research facilities of importance in
plant taxonomy. Charles Darwin's exposition of evolution in On the Origin of Species (1859) encouraged
taxonomists to build evolutionary classifications, and it profoundly influenced botanical research.
Although plant taxonomy and morphology dominated botany during the 18th century and part of the 19th
century, other botanical disciplines eventually matured or developed as the necessary tools and techniques
appeared and were improved. Plant diseases have been known since ancient times, but the modern science
of plant pathology did not begin to develop until the mid-19th century. The devastating potato blight in
Ireland in the 1840s greatly stimulated the study of plant diseases.
Until 1900, plant genetics was concerned largely with practical hybridization and selection in plants,
especially crops and ornamentals. Work in genetics had become important in agriculture. The mechanism of
heredity and variation, however, was not understood until the late 19th and early 20th centuries. The
rediscovery in 1900 of Gregor Johann Mendel's then three-decade-old work on plant breeding completely
reoriented research in genetics and led to the remarkable development of modern genetics.
Plant fossils have been known since early times, and some ancient Greek writers recognized that fossils are
evidences of past life. This view was replaced in succeeding centuries, however, by fantastic or mystical
explanations. Starting in the 15th century, fossils were directly observed and reasonably interpreted, but not
until the late 19th century was the debate about the true nature of fossils essentially over. By the early 19th
century, as the richness of the fossil record was gradually revealed and the great age of the Earth began to
be understood, paleobotany had become well established as a science.
The beginnings of ecology were apparent in Theophrastus's writings on natural history, but ecology did not
emerge as a unified science until the late 19th and early 20th centuries. Concern for the environment
increased after 1950 and has made ecology one of the most talked-about specialties in biology. Although
plant ecology is a logical subdivision of ecology and has its own history, philosophy, and methods, the
concept of ecosystem (or total environment) has done much to bring plant and animal ecologies together.
Recent milestones in botany and biology include the following: the development of electron-microscopic
techniques that allow scientists to observe the three-dimensional structure of living cells; the discovery of
fossils of prokaryotes about 3.5 billion years old and the remains of unicellular eukaryotic algae about 1.4
billion years old; the development of radioactive isotopes for dating and tracing materials as they move in
biological systems; the rapid expansion of genetic engineering and other areas of biotechnology; the
elucidation of the structures of DNA and RNA and their role in protein synthesis; and the appearance of new
scientific ideas and theories on the origin of life.
Botany and Other Sciences
Botany relies heavily on the physical sciences (chemistry, physics), the earth sciences (geology), and
mathematics. It contributes to and borrows from zoology, especially entomology (the study of insects) as
related to pollination and to transmission of plant diseases, and from anthropology. Knowledge of botany is
basic to serious work in agriculture, horticulture, forestry, conservation, and other areas.
Chemistry
Chemistry is the physical science that deals with the composition, structure, and properties of substances
and also the transformations that these substances undergo. Because the study of chemistry encompasses
the entire material universe, it is central to the understanding of other sciences.
A basic chemical theory has been formulated as the result of centuries of observation and measurement of
the various elements and compounds (see chemistry, history of). According to this theory, matter is
composed of minute particles called atoms. The more than 100 different kinds of atoms that are known are
called chemical elements. Atoms of the same element or of different elements can combine together to form
molecules and chemical compounds. The atoms are held together by forces, primarily electrostatic, called
chemical bonds. In a chemical reaction two or more molecules can undergo various changes to form
different molecules by means of breaking and making the chemical bonds.
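To make the idea of atoms combining into molecules concrete, the following short Python sketch (added here for illustration; the atomic weights are approximate modern values and the function name is invented) computes the mass of a molecule by summing the masses of its atoms:

# Approximate atomic weights in atomic mass units; rounded modern values for illustration.
ATOMIC_WEIGHTS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}

def molecular_weight(formula):
    """Sum the atomic weights of a molecule given as a {symbol: atom count} mapping."""
    return sum(ATOMIC_WEIGHTS[symbol] * count for symbol, count in formula.items())

print(molecular_weight({"H": 2, "O": 1}))  # water, H2O: about 18.0
print(molecular_weight({"C": 1, "O": 2}))  # carbon dioxide, CO2: about 44.0

A chemical reaction such as the burning of carbon in oxygen then appears simply as a rearrangement of the same atoms into different molecules.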
BRANCHES OF CHEMISTRY
Five subdivisions traditionally are used to classify various aspects of chemistry. The study of organic
chemistry originally was limited to compounds that were obtained from living organisms, but now the field
deals with hydrocarbons (compounds of carbon and hydrogen) and their derivatives. The study of inorganic
chemistry includes compounds of all of the elements other than the hydrocarbons and their derivatives. Biochemistry is the
subdivision in which the compounds and chemical reactions involved in processes of living systems are
studied.
Physical chemistry deals with the structure of matter and the energy changes that occur during physical and
chemical changes of matter. This field provides a theoretical basis for the chemical observations of the other
subdivisions. Analytical chemistry is concerned with the identification of chemical substances, the
determination of the amounts of substances present in a mixture, and the separation of mixtures into their
individual components.
Special subdivisions of chemistry are now recognized that account for knowledge at the interface of
chemistry and other physical sciences. For example, recent research has involved the chemical origin of
life: reactions between simple molecules at low pressures to form such complex organic molecules as
proteins found in living organisms.
Astrochemistry is the interdisciplinary physical science that studies the origin and interaction of the chemical
constituents, especially interstellar matter, in the universe. Geochemistry is concerned with the chemical
aspects of geology (for instance, the improvement of ore processing, coal utilization, and shale oil recovery) and
the use of chemicals to extract oil from wells that are considered dry by ordinary standards.
Nuclear chemistry deals with natural and induced transformations of the atomic nucleus. Studies in this field
now center on the safe and efficient use of nuclear power and the disposal of nuclear wastes.
Radiochemistry deals with radioactive isotopes of chemical elements and the utilization of those isotopes to
further the understanding of chemical and biochemical systems. Environmental chemistry is a subdivision
that has as its subject the impact of various elements and compounds on the ecosphere.
TOOLS OF CHEMISTRY
Chemistry is a precise laboratory science, and the equipment of a chemical laboratory is usually involved
with measurement. Balances are used to measure mass, pipettes and burettes to measure volume,
colorimeters to measure color intensities, and thermometers to measure temperature changes. Advances in
electronics and computer technology have enabled the development of scientific instruments that determine
the chemical properties, structure, and content of substances accurately and precisely.
Most modern chemical instrumentation has three primary components: a source of energy, a sample
compartment within which a substance is subjected to the energy, and some sort of detector to determine
the effect of the energy on the sample. An X-ray diffractometer, for instance, enables the chemist to
determine the arrangement of atoms, ions, and molecules that constitute crystals by means of scattering X
rays (see X-ray diffraction). Most modern laboratories contain ultraviolet, visible, and infrared
spectrophotometers, which use light of various wavelengths on gaseous or liquid samples. By such a means
the chemist can determine the electron configuration and the arrangement of atoms in molecules. A nuclear
magnetic resonance spectrophotometer subjects a sample in a strong magnetic field to radio frequency
radiation. The absorption of this energy by the sample gives the chemist information concerning the bonding
within molecules. Other instruments include mass spectrometers, which use electrons as an energy source,
and differential thermal analyzers, which use heat.
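The source-sample-detector pattern described above can also be sketched as a simple data structure; the Python outline below is purely illustrative, and all of its names and numbers are invented for this example rather than taken from any real instrument:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Instrument:
    """A generic instrument: an energy source, a sample that modifies the energy, a detector."""
    source: Callable[[], float]          # supplies energy, e.g. light of one wavelength
    sample: Callable[[float], float]     # how the sample compartment alters that energy
    detector: Callable[[float], float]   # turns the emerging energy into a numerical reading

    def measure(self) -> float:
        return self.detector(self.sample(self.source()))

# Hypothetical spectrophotometer: unit-intensity source, a sample absorbing 40 percent,
# and a detector reporting the transmitted intensity.
reading = Instrument(lambda: 1.0, lambda e: 0.6 * e, lambda e: e).measure()
print(reading)  # 0.6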
An entirely different class of instruments are those which use chromatography to separate complex mixtures
into their components. Chemists are also using extremely short pulses of laser light to investigate the atomic
and molecular processes taking place in chemical reactions at the microsecond level. These and other
devices generate so much data that chemists frequently must use computers to help analyze the results.
IMPACT ON SOCIETY
Chemistry is closely associated with four basic needs of humans: food, clothing, shelter, and medical
services. The applications of chemistry usually bring to mind industries engaged in the production of
chemicals. A significant portion of the chemical industry is engaged in the production of inorganic and
organic chemicals, which are then used by other industries as reactants for their chemical processes. In the
United States the great majority of the leading chemicals being produced are inorganic, and their
manufacture is a multibillion-dollar industry.
The chemistry of polymers, large molecules made up of simple repeating units linked together by chemical bonds (see polymerization), includes plastics, resins, natural and synthetic rubber, synthetic fibers, and
protective coatings. The growth of this segment of chemistry has been phenomenal since the late 1930s.
The fabrication of natural rubber and coatings (paints, varnishes, lacquers, and enamels) derived from
natural agricultural products has been a mainstay of the chemical industry for more than 150 years.
The search for new energy sources and the improvement of existing ones are in many ways chemical
problems (see fuel). At the heart of the petroleum industry are the processes of refining crude hydrocarbons
into such products as gasoline and petrochemicals. The utilization of nuclear power depends heavily on the
chemical preparation and reprocessing of fuel, the treatment and disposal of nuclear waste, and the
problems of corrosion and heat transfer. The conversion and storage of solar energy as electrical energy is
primarily a chemical process, and the development of fuel cells is based on either chemical or
electrochemical technology (see electrochemistry).
Chemical research has been the basis of the pharmaceutical industry's production of drugs. The controlled
introduction of specific chemicals into the body assists in the diagnosis, treatment, and often the cure of
illness. Chemotherapy is a prime treatment in combating cancer.
Tremendous agricultural gains have been achieved since about 1940 as a result mainly of farmers' use of
chemical fertilizers and pesticides. Other chemical industries include soap and detergent production; food
processing; and the production of glass, paper, metals, and photographic supplies.
Specialized Uses
Outside the mainstream of what is traditionally considered chemistry is research that supports other
professions. Chemistry is used by museums in art conservation and restoration, the dating of objects (see
radiometric age-dating), and the uncovering of frauds. Forensic chemists work in crime laboratories, carrying
out tests to assist law-enforcement agencies (see forensic science; forensic genetics). Toxicologists study
the potentially adverse effects of chemicals on biological systems (see toxicology), as do those involved in
industrial hygiene. The chemistry involved in sanitary engineering and sewage treatment has come to be of
major importance to society as populations increase and environmental concerns intensify.
Problems
Through the use of chemistry and related technology, chemical substances have been produced that either
immediately or eventually are harmful to humans, animals, and the environment. Pollution is not a new
problem, but the combination of a rapidly growing chemical industry and the use of sophisticated detection
devices has brought the extent of pollution to public attention.
The discharge and disposal of industrial waste products into the atmosphere and water supply, for example,
at Love Canal, have caused grave concern about environmental deterioration (see pollution, air; pollution,
water). The repeated exposure of workers to some toxic chemicals at their jobs has caused long-range
health problems (see diseases, occupational). In addition, the use of some pesticides and herbicides can
cause long-term toxicity, the effects of which are still only partially understood. The safe storage and
disposal of chemical and biological warfare agents and nuclear waste continue to be a serious problem. An
advance in chemical technology almost always involves some trade-off with regard to an alteration of the
environment.
Challenges and Trends
Much of the future of chemistry will lie in providing answers to such technological problems as the creation
of new sources of energy and the eradication of disease, famine, and environmental pollution. The
improvement of the safety of existing chemical products, for example, pesticides, is another challenge.
Research into the chemical complexities of the human body may reveal new insights into a variety of
diseases and dysfunctions. The improvement of industrial processes will serve to minimize the use of
energy and raw materials, thereby diminishing negative environmental effects.
History of Chemistry
Humans began to practice chemistry, the transformation of material things, in prehistoric times, beginning
with the use of fire. Primitive humans used fire to produce such chemical transformations as the burning of
wood, the cooking of food, and the firing of pottery and bricks, and later to work with such ores as copper,
silver, and gold. As civilization developed in China, Mesopotamia, and Egypt, artisans performed further
transformations to produce a variety of dyes, drugs, glazes, glasses, perfumes, and metals.
Early theoretical explanations of chemical phenomena were generally magical and mythological in
character. The ancient Greeks added little to the chemical practice that they inherited from older and
neighboring civilizations, but they did refine the theoretical explanations of transformations observed in the
artisans' shops and in the environment. They recognized change as a universal phenomenon, to such a
degree that Heraclitus, in the 6th century BC, asked whether there was anything visible or invisible that did
not change.
After considerable discussion of this question by many Greek philosophers, Aristotle, in the 4th century BC,
formulated a theory that dominated scientific thinking for almost 2,000 years. He postulated the existence of
a primeval matter and four qualities: heat, cold, wetness, and dryness. As these qualities were impressed on
the primeval matter, four elements were produced: fire (hot and dry), air (hot and wet), earth (cold and dry),
and water (cold and wet). All material things were viewed as different combinations of these four elements.
Greek philosophers also introduced the theory of atomicity. Anaxagoras and Empedocles held that all matter
was composed of infinitely small seeds; Leucippus and Democritus proposed that all matter coalesced out of
indivisible atoms moving rapidly and at random in a void.
Alchemy, the next major phase of the history of chemistry, developed in Alexandria, Egypt, and combined
aspects of Greek philosophy, Middle Eastern artisanship, and religious mysticism. Its main objective was the
transformation of base metals into gold. In the 4th and 5th centuries AD the emigrant Nestorians brought the
craftsmanship of Egyptian artisans to the Arabs in Anatolia. During the golden age of Islamic science
(8th-11th century) the ideas of Aristotle were modified, and a number of important substances, such as
sodium hydroxide and ammonium chloride, were introduced into chemical practice. This Arabic alchemy
came into western Europe between the 11th and 16th centuries through Sicily and Spain. The mystical ideas
so introduced were accompanied by practical advances in chemical procedures, such as distillation, and by
the discovery of new metals and compounds. The art of metallurgy became more sophisticated, and
chemicals were introduced into medical practice by Paracelsus in the 16th century.
SEVENTEENTH AND EIGHTEENTH CENTURIES
At the beginning of the 17th century chemistry became recognized as a science. The first methodical
chemical textbook, Alchemia, by Andreas Libavius, was published in 1597. Alchemia was defined as the art
of producing reagents and extracting pure essences from mixtures. During this century many new
compounds were prepared by distilling animal and vegetable materials, and the phlogiston theory was
proposed by Georg Ernst Stahl as a unified explanation of combustion and calcination (rusting): when a
substance burned or a metal was converted into calx (rusted), a proposed substance known as phlogiston
was lost. Easily combustible materials such as charcoal were viewed as containing large amounts of
phlogiston, which could be transferred directly to calx, thereby regenerating the metal.
Refined Techniques
The invention of the pneumatic trough and of the balance stimulated the development of chemistry in the
18th century. During the previous centuries, liquids and solids could be handled in the laboratory without
difficulty, but gases produced by heating substances could be manipulated only by using animal bladders.
The pneumatic trough, filled with water or mercury and containing an inverted vessel also filled with water or
mercury, permitted easy collection, transfer, and study of gases. Joseph Black discovered carbon dioxide
(1756), and Carl Scheele (1772) and Joseph Priestley (1774) discovered oxygen, using the pneumatic
trough.
The balance was used effectively by Antoine Lavoisier, at the end of the 18th century, to disprove the
phlogiston theory and establish the true nature of combustion, calcination, and biological respiration.
Lavoisier showed that heating the calx of mercury produced oxygen, with a loss of weight that was regained
when oxygen and mercury were heated at a lower temperature.
Refined Theory
Through these experiments Lavoisier demonstrated that the process of combustion was the reaction of
oxygen with carbonaceous material to form carbon dioxide and water and that respiration was biological
combustion. More generally, Lavoisier defined a chemical element as a substance that could not be
decomposed into simpler substances by heat or chemical reaction. A compound was defined as a
combination of two or more elements in a definite proportion by weight. This innovative concept placed
chemistry on a quantitative basis. Each element could be assigned a number, or combining weight, such
that this number or any integral multiple of it represented the weight in which the element combined.
Lavoisier can be considered the father of modern chemistry.
The atomic theory of John Dalton, at the beginning of the 19th century, further extended Lavoisier's theories.
Dalton assumed that each element was composed of very small particles, called atoms, which have a
characteristic weight, and that chemical reactions resulted from the combination or reshuffling of atoms. For
almost 50 years, however, there was no clear way of distinguishing combining weights, which were variable,
from atomic weights. Furthermore, there was confusion concerning molecular weight, the weight of a
standard number of molecules of any given compound.
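A worked illustration of that confusion (added here, using the familiar case of water): one gram of hydrogen combines with about eight grams of oxygen, so the combining weight of oxygen is 8, but the atomic weight depends on the formula assumed for water:

\[
\text{formula HO assumed:}\quad A_{\mathrm{O}} = 8 \times 1 = 8,
\qquad
\text{formula } \mathrm{H_2O} \text{ assumed:}\quad A_{\mathrm{O}} = 8 \times 2 = 16 .
\]

Until the formula could be settled independently, the atomic weight remained ambiguous.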
NINETEENTH AND EARLY TWENTIETH CENTURY
Discovery of New Elements and Their Systematization
During the first half of the 19th century new elements were discovered at an increasing rate.
Electrolysis, the decomposition of compounds by an electric current, broke up compounds that hitherto
could not be decomposed by heat or chemical reactions. The alkali metals, alkaline earth metals, silicon,
and the halogens were isolated and studied. Michael Faraday showed that the amount of current necessary
to liberate an equivalent weight (combining weight) of an element was the same for all elements. Electrolysis
deeply influenced the thinking of chemists of the time, including Jöns Jakob Berzelius, who formulated the
dualistic, or electrochemical, theory of atomic combination. According to this theory, all atoms are either
positively or negatively charged, and molecules are formed by the electrostatic attraction of oppositely
charged atoms.
A clearer insight into the distinction between combining weights and atomic weights was offered by Pierre
Dulong and A. T. Petit (1791-1820), who stated (1819) that the product of the atomic weight and the
specific heat was a constant. Thus, once the specific heat of an element was determined, an approximate
atomic weight could be obtained. The exact atomic weight could then be obtained by multiplying the
combining weight by an integer. The combining weight, measured relative to oxygen as a standard, could be determined precisely. The integer used
in this calculation came to be known as the valence of the element. Using this method, Berzelius was able to
draw up a table of atomic weights.
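As a worked example of the procedure (added here, using rounded modern values and taking the Dulong-Petit constant to be about 6.2 calories per degree per gram-atom): copper has a specific heat of about 0.092 cal/(g·°C) and a combining weight of about 31.8 in cupric compounds, so

\[
\frac{6.2}{0.092} \approx 67 \ \text{(approximate atomic weight)},
\qquad
31.8 \times 2 = 63.6 \ \text{(combining weight} \times \text{valence, the exact value)} .
\]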
The distinction between combining weights and atomic weights was further clarified at the First International
Chemical Congress (1860), held in Karlsruhe, Germany. Stanislao Cannizzaro showed how the hypothesis
of Amedeo Avogadro (that equal volumes of gases at the same temperature and pressure contain the same number of molecules) could be used to determine molecular and atomic weights. The culmination of the
atomic-weight program was the formulation in 1869 by Dmitry Mendeleyev and in 1871 by Lothar Meyer of
the periodic table, which systematized the large number of elements according to their atomic weights and
correlated their physical and chemical properties. Mendeleyev carried this systematization still further and
used his table to predict the properties of three elements not known at that time. These elements (gallium,
scandium, and germanium) were discovered in 1875, 1879, and 1886, respectively. Their properties were
found to be strikingly similar to those predicted by Mendeleyev. The periodic table was subsequently
extended to accommodate the inert gases, such as helium, argon, and neon, discovered by Lord Rayleigh
and Sir William Ramsay.
Theories of Chemical Bonding
During the first half of the 19th century several carbon compounds were isolated, purified, and
characterized. The dualistic electrochemical theory of Berzelius was inadequate to explain the chemical
bonding found in most of these compounds. This was strikingly shown in the ability of chlorine, an
electronegative element, to replace hydrogen, an electropositive element, in methane (CH₄). A theory of
radicals and types was formulated to explain this situation. It was postulated that certain groups of atoms
(radicals), such as CH₃, could act as a unit in chemical reactions, replacing hydrogen, for example, in HCl. The HCl represented what was called a type. Other types were water (H₂O), ammonia (NH₃), hydrogen (H₂), and methane. Thus a radical could replace one hydrogen in water to form an alcohol and two to form an
ether.
Although the radical and type theories systematized many organic compounds, they could not systematize
all the compounds without the invention of more and more types. The confusion was compounded by the
failure at this time to distinguish between atomic, equivalent, and molecular weights. The theory of types and
the study of metal organic compounds, however, suggested that each atom could bind only a fixed number
of atoms or radicals: hydrogen, one; oxygen, two; and nitrogen, three. Friedrich Kekulé von Stradonitz and A. S. Couper (1831-92) proposed that carbon not only could bind four atoms but also could bond with other
carbon atoms to form chains and rings. The number of such bonds was called the valence of the element.
Simplistically it could be visualized as the number of "hooks" that the element possessed.
Valence theory led to the structural theory of organic chemistry expounded by Aleksandr Butlerov. This
theory explained the differences between two compounds, such as dimethyl ether and ethyl alcohol, having
the same composition, molecular weight, and molecular formula (in this case, C₂H₆O) but markedly different chemical properties, by assigning them different structural formulas (here, CH₃-O-CH₃ and C₂H₅-OH, the lines in the formulas indicating valence bonds).
In 1874, Jacobus van't Hoff and Joseph Le Bel (1847-1930) extended these formulas into three
dimensions, opening up the new field of stereochemistry. Stereochemistry not only explained the puzzling
difference between compounds that had the same structural formula but different properties (stereoisomers) but also
explained the optical activity of certain compounds important in life processes. The ideas of stereochemistry
were soon applied to complex compounds of the transition metals by Alfred Werner. By the end of the 19th
century, organic chemistry had not only acquired a comprehensive theory of structure but had also
developed new methods for the synthesis of dyes, perfumes, explosives, and medicines. The starting
materials for these syntheses were obtained from the coal-tar industry.
Contributions of Physics
During the 19th century chemistry developed for the most part independently of physics, where progress
was being made in mechanics, electricity, magnetism, thermodynamics, and optics. Nevertheless, there
were interactions between the two fields. Electrolysis and chemical batteries involved electricity. Some new
elements were detected by spectroscopy, notably by Robert Bunsen. Electric conductivity of aqueous salt
solutions showed that on dissolution salts break up into charged particles. In 1884, Svante Arrhenius
formulated the theory of electrolytic dissociation. Thermodynamics was applied to chemistry during the
middle of the century in the measurement of heats of reaction by Germain Hess (1802-50), Marcelin Berthelot, and Julius Thomsen (1826-1909). J. Willard Gibbs, Jr., developed the thermodynamics of
heterogeneous equilibria and formulated (1876) the phase rule (see phase equilibrium). He also formulated
the discipline of statistical mechanics.
Chemical Analysis through Spectroscopy
The ability to analyze gases spectroscopically by passing electricity through them proved to be a major new
analytical tool. The use of chemical electrical batteries to pass electricity through molten solids and liquids
had led to the isolation of new elements and the formulation of the dualistic theory of molecular structure.
Electrolysis of gases, however, was not possible until the invention of an efficient vacuum pump to provide
low-pressure gases in a glass tube. When an electric potential is applied to such a gas, it becomes
conductive and produces visible radiation that can be analyzed with a spectroscope.
The development of this discharge tube was essential to the development of modern physics and chemistry.
The electric-charge carriers inside the discharge tube were found to include both positive (canal) rays and
negative (cathode) rays. The positive rays were composed of different ionized atoms when different gases
were in the discharge tube, but the cathode rays were always the same no matter what residual gas was in
the tube. The cathode rays were identified as electrons by Sir Joseph John Thomson in 1897. Using an
appropriate arrangement of magnetic and electric fields, Francis Aston constructed a mass spectrograph,
which he used to separate ions of the positive rays according to their atomic weight. In this way, not only
were atoms of different species separated from each other, but also some elements were found to consist of
atoms with differing weights (isotopes).
Radiation emitted from the discharge tube consisted of a visible glow that, when analyzed by a
spectrograph, showed discrete spectral lines characteristic of traces of gas remaining in the evacuated
discharge tube. The surprising result was that even the spectrum of hydrogen, the simplest atom, appeared
complex, consisting of a series of discrete lines whose wavelengths could be determined with a high degree
of precision. Almost four decades elapsed before this complexity was explained. In 1885, Johann J. Balmer
showed that there was a simple mathematical relationship between the lines of hydrogen in the visible
spectrum. A similar relationship was found in the extreme ultraviolet and in the infrared. Thus each spectral
line of the hydrogen atom, at wavelength λ, could be represented by the formula 1/λ = R(1/n₁² - 1/n₂²), where n₁ and n₂ are integers, with n₂ larger than n₁, and n₁ constant for a given series. R is the Rydberg constant, a universal constant named in honor of the Swedish physicist Johannes Robert Rydberg. (Its value for hydrogen is 109,678.76 cm⁻¹.) A similar formula was discovered for spectral lines of alkali metals and alkaline earth elements.
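As a worked check of the formula (added here for illustration, using the value of R quoted above), the red hydrogen line Hα of the Balmer series corresponds to n₁ = 2 and n₂ = 3:

\[
\frac{1}{\lambda} = 109{,}678\left(\frac{1}{2^{2}} - \frac{1}{3^{2}}\right) \approx 15{,}233\ \mathrm{cm^{-1}},
\qquad
\lambda \approx 6.56 \times 10^{-5}\ \mathrm{cm} \approx 656\ \mathrm{nm},
\]

in agreement with the measured wavelength of the line.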
Structure of the Atom
In 1913, Niels Bohr proposed his atomic theory for the hydrogen atom. This theory postulated that the
hydrogen atom consisted of a massive, positively charged nucleus and an electron traveling in definite discrete orbits
around it. These orbits are characterized by integers called quantum numbers, represented by the letter n.
Bohr also related the energy of the atom to the orbit of the electron, the energy of the nth orbit being proportional to -1/n² (while the radius of the orbit grows as n²). He further maintained that emission and absorption of light were
characterized by a "quantum jump" between two orbits in the atom. Bohr was able to derive the Rydberg
constant in terms of known physical constants and to calculate its value to within a few percent.
The simple Bohr theory was extended to other atoms and made more sophisticated for hydrogen by
introducing a set of quantum numbers.
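A minimal numerical sketch (added here; it assumes the modern value of 13.6 electron volts for the ground-state binding energy of hydrogen, and the function names are invented) shows how Bohr's 1/n² energy levels reproduce the visible Balmer lines:

# Bohr model of hydrogen: the energy of the nth orbit is E_n = -13.6 eV / n^2.
GROUND_STATE_EV = 13.6      # assumed modern value of the hydrogen binding energy
HC_EV_NM = 1239.84          # Planck's constant times the speed of light, in eV·nm

def orbit_energy(n):
    """Energy of the nth Bohr orbit, in electron volts (negative: the electron is bound)."""
    return -GROUND_STATE_EV / n ** 2

def emitted_wavelength(n_upper, n_lower):
    """Wavelength in nanometers of the photon emitted in a jump between two orbits."""
    return HC_EV_NM / (orbit_energy(n_upper) - orbit_energy(n_lower))

for n in (3, 4, 5):  # Balmer series: quantum jumps that end on the n = 2 orbit
    print(f"{n} -> 2: {emitted_wavelength(n, 2):.0f} nm")
# Prints roughly 656, 486, and 434 nm, the red, blue-green, and violet hydrogen lines.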
Despite its many virtues, the Bohr theory had several shortcomings, particularly a lack of self-consistency. In
1925, Louis de Broglie proposed that electrons have wave properties; Clinton Davisson and Lester Germer
(1896-1972) in the United States and Sir George Thomson in England confirmed this by showing diffraction of electrons by crystals. Erwin Schrödinger developed this concept into wave mechanics, which was
modified by Paul Dirac, Werner Heisenberg, and John von Neumann. The discovery of electron spin by
George Uhlenbeck (1900-88) and Samuel Goudsmit (1902-78) in 1925 represented a major advance in
the understanding of atomic and molecular structure and had important repercussions in the theory of
magnetism and chemical bonding. Other important developments were the formulation of the exclusion
principle (1925) by Wolfgang Pauli and the formulation of the uncertainty principle (1927) by Heisenberg.
These developments constituted the new quantum theory.
The gas discharge tube also led to new knowledge about the structure of matter. In 1895, Wilhelm Roentgen
discovered a penetrating kind of invisible radiation, now known as X rays, that was emitted from the
discharge tube. The characteristics of this radiation were determined by the electrode in the discharge tube.
In 1913, Henry Moseley used X-ray spectroscopy to show that each element could be assigned a
characteristic integer, the atomic number, which was equal to the positive charge on the nucleus and also
corresponded to the element's position in the periodic table. Thus the simple Bohr theory was extended to
complex atoms. In the period from 1916 to 1920, these ideas of atomic structure were used by Gilbert Lewis,
Walter Kossel (1882-1956), Irving Langmuir, and Nevil Sidgwick to formulate a quantum theory of valence
(see quantum mechanics). A distinction was made between an ionic, a covalent, and a coordinate bond. In
1927, Walter Heitler (1904-81) and Fritz London (1907-76) formulated a quantum mechanical theory of bonding. Their ideas were further developed by Linus Pauling, Erich Hückel (1896-1980), and Henry Eyring (1901-81).
The first half of the 20th century was marked by far-reaching discoveries concerning the nucleus. In
experiments indirectly associated with the penetrating radiation from the discharge tube, Henri Becquerel
discovered that all uranium salts emitted penetrating radiation. This discovery ushered in the age of
radioactivity. Marie and Pierre Curie discovered the radioactive elements polonium and radium. Sir Ernest
Rutherford formulated the spontaneous nuclear disintegration theory of natural radioactivity. In 1919,
Rutherford produced the first artificial transmutation of elements, realizing the dream of alchemists. By
bombarding nitrogen with alpha particles, he transformed the nitrogen into oxygen and the alpha particles
into protons. This opened up the new field of nuclear chemistry. In 1932, Sir James Chadwick discovered
the neutron, which in turn led to the discovery of artificial radioactivity by Irène and Frédéric Joliot-Curie, to
the synthesis of transuranium elements, and to the realization of nuclear fission. The periodic table was
extended to atomic number 103 by Glenn Seaborg and to negative atomic numbers by the discovery of
antimatter. All gaps in the periodic table were filled by an increasing number of isotopes.
RECENT ADVANCES
Inorganic Chemistry
World War II research spurred important advances in inorganic chemistry. The atomic weapon and nuclear
power projects intensified studies of uranium and the transuranium elements, the chemistry of fluorine
compounds, and the metallurgy of fuel element components such as zirconium. Rare earths, produced by
nuclear fission, were separated in pure state by chromatography and were made available for chemical
study. Neil Bartlett (b. 1932) prepared compounds of inert gases.
Modern electronics has become highly dependent on inorganic chemistry. Vacuum tubes have been
replaced by solid-state devices, with the ultrapure solid matrix replacing the vacuum. Growth of single
crystals of germanium, silicon, and other semiconductors has become an industry. Compound
semiconductors are used in color television screens, solar batteries, lasers, photoconductors, photocopying
processes, and thermoelectric devices. Modern computers and audio and video recorders use magnetic
materials for information storage. The field of microelectronics depends on inorganic chemical techniques for
producing high-purity films on single crystal chips.
A group of inorganic compounds known as coordination compounds was found to contain atoms or groups
of atoms united by bonds supplementary to classical valence bonds. In 1891, Alfred Werner (1866-1919)
had classified such amines, hydrates, double cyanides, and double salts and had postulated the existence
of secondary bonds uniting these groupings of atoms. In the 1920s, Nevil Sidgwick reformulated the Werner
theory in terms of a central acceptor atom (usually a transition element such as copper) and a definite
number of donor molecules such as ammonia and the chloride ion. The secondary valence bond was
formulated in terms of a coordinate bond involving pairs of electrons. Stereochemistry, isomerism, and the
optical properties of coordinate compounds were widely studied. Organometallic compounds in which the
donor was an organic radical were extensively investigated during the middle of the 20th century. A new
chemistry linking both inorganic and organic compounds was developed. The organometallic compounds
have found extensive use as polymers, plastics (silicones), antioxidants, insecticides, and herbicides. They
have also been used as catalysts for hydrogenation and polymerization.
During the 20th century several techniques were developed for structure determination. The most important
of these is X-ray diffraction. Max von Laue obtained (1912) the first diffraction pattern of a single crystal, one
of zinc sulfide. Von Laue, William H. Bragg, and his son, William L. Bragg, showed how these diffraction
patterns could be used to determine the arrangement of atoms in crystals. The structure of gaseous
molecules was determined by electron diffraction and by infrared and microwave spectroscopy. Electric-charge distribution within molecules was deduced from the dielectric properties of materials. Important
structural information was obtained from magnetic measurements. Electron paramagnetic resonance,
discovered by E. K. Zavoiski in 1945, has proved to be an important research tool in the inorganic chemistry
of transition metals and in studying radicals.
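The Braggs' method mentioned above rests on a simple geometric condition (stated here in its standard form, for illustration): X rays of wavelength λ reflected from parallel planes of atoms spaced a distance d apart reinforce one another only at glancing angles θ for which

\[
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots,
\]

so the angles at which strong diffracted beams appear reveal the spacings, and hence the arrangement, of the atoms in the crystal.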
Organic Chemistry
Organic chemistry grew rapidly during the 20th century. Before World War II organic chemistry was based
on the coal-tar industry, but after the war petroleum became the major source of organic compounds.
Petrochemicals were produced in thousand-ton lots for the plastics, fiber, and solvent industries. Valuable
new pharmaceuticals were synthesized as well: Salvarsan by Paul Ehrlich in 1909; tropinone (1917) and
anthocyanins (1931) by Sir Robert Robinson; vitamin C (1933) by Sir Walter Haworth, E. L. Hirst, and Tadeus Reichstein; complex alkaloids by R. B. Woodward; and nucleotides and coenzymes by Lord Alexander Todd. The first sulfa drug, Prontosil, was synthesized in 1932 by Gerhard Domagk; sulfanilamide, synthesized by J. Tréfouël, followed in 1936. The antibiotic penicillin was made in the laboratory in 1957. In
the late 20th century the search for useful medicines was aided by the development of a technique called
combinatorial chemistry, in which many related compounds are produced together simultaneously. The
chemicals are then separated for individual study.
Structure determination and analysis in organic chemistry have been facilitated by modern techniques,
including ultraviolet, visible, and infrared spectroscopy; X-ray crystallography; mass spectrometry; and
magnetic resonance spectrometry. Separation of complex mixtures has been simplified by a variety of
chromatographic methods.
Organic chemistry took an unexpected turn in the last third of the 20th century when it was determined that
another, long-conjectured form of carbon does in fact exist: carbon atoms linked to form more or less
spherical molecules. Such molecules, as a group, are now known as fullerenes or, in the form of nanometer-wide tubes of graphitelike carbon, as carbon nanotubes. They have unusual properties that may offer a wide
range of new applications.
Physical Chemistry
Physical chemistry, a discipline on the border between physics and chemistry, is concerned with the
macroproperties of chemical substances and the changes they undergo when subjected to pressure,
temperature, light, and electric and magnetic forces. It also investigates changes produced by dissolution in
a solvent or chemical reactivity. Wilhelm Ostwald was instrumental in identifying this area of knowledge as a
distinct science.
Subjects of particular study beginning in the 20th century have included adsorption of gases, by Irving
Langmuir; catalysis, by Giulio Natta and Karl Ziegler; and the kinetics of chemical reactions, by Sir Cyril
Hinshelwood and Nikolai Semenov. The kinetic theory of gases, the liquid state, solution theory,
electrochemistry, and thermodynamics were studied by Lars Onsager. Ilya Prigogine researched the areas
of phase equilibrium, photochemistry, and the electric and magnetic properties of substances. The low-temperature phenomenon called superconductivity, first observed in 1911, was found toward the end of the
century to occur as well in certain materials at relatively high temperatures.
Chemical Physics
During the second third of the 20th century chemical physics was identified as a new discipline. Chemical
physics differs from physical chemistry in that it deals with the microscopic properties of different chemical
substances (spectra, X-ray structures, magnetic resonance) and interprets them in terms of atomic and
molecular theories. Such interpretations are based on quantum mechanics.
Analytical Chemistry
Since the mid-20th century, instrumental methods have replaced the standard gravimetric and volumetric
procedures in analysis. Instruction in these classical analytical techniques occupies a very small part of
today's chemical curriculum. The microtechniques introduced in 1917 by Fritz Pregl to enable analysis of
several milligrams of a sample were the first stage in the development of microchemistry. Scientists can now
analyze a small number of atoms of a chemical substance by means of neutron activation, mass
spectrography, and fluorescence spectrometry. X-ray absorption spectrometry also enables researchers to
elucidate the atomic and molecular structure of such substances. Scientists use the scanning tunneling
microscope, a type of electron microscope, to manipulate matter at the atomic level. To observe the details
of rapid chemical reactions, researchers employ lasers that emit extremely short pulses of light.
Thus chemistry has become a highly sophisticated branch of science. It involves complex apparatuses for
experimental work and a highly refined theoretical approach in interpreting results, including the prediction of
reaction results by the use of computers. Modern chemistry in turn has an impact on all segments of the
world economy.
Physics
Physics studies the different forms of matter, their properties, and the transformations that they undergo.
From the beginnings of physics among the ancient Greeks, through its revival in late Renaissance Europe
and its flowering in the 19th and 20th centuries, there have been continual increases in the breadth of
phenomena studied by physicists and in the depth of understanding of these phenomena (see physics,
history of). During this growth of the science, physicists have discovered general laws, such as the
conservation of energy, that apply throughout space and time.
MATTER AND ITS TRANSFORMATIONS
Originally, the term matter referred to anything evident to the senses, such as solids and liquids, and
possessing properties such as weight. As the scope of physics has expanded, however, so has the range of
the concept of matter. Gases are now included, as are the near-vacuums of outer space and those attained
in scientific laboratories. Individual subatomic particles are now considered to be the ultimate constituents of
matter, even though some of them lack properties of mass and specific location (see fundamental particles).
Today matter is broadly identified as anything that interacts with the familiar types of matter, by exchanging
such qualities as energy and momentum.
The earliest transformation of matter to be studied by physicists was motion, which is treated in the branch
of physics called mechanics. The laws of motion were codified in the 17th century by Isaac Newton, who
provided a physical explanation of the motions of celestial bodies. In the 19th century, physics was extended
to study changes in physical form that take place, as, for example, when a liquid freezes and becomes a
solid. Such changes of state are studied in the branch of physics called thermodynamics. Other changes in
the form of matter (for instance, those which occur when oxygen and hydrogen combine to form water) are
usually considered to be part of chemistry rather than physics. This distinction is somewhat arbitrary,
however, since ideas from physics are routinely used in chemistry.
New transformations have been discovered among subatomic particles. One type of particle can change into
another, and particles can be created and destroyed. In descriptions of subatomic particles using the
physical theory called quantum field theory (see quantum mechanics), such particle creations and
destructions are taken as the fundamental events out of which all other transformations are built.
MICROPHYSICS AND MACROPHYSICS
Physics is divided into subdisciplines, as already noted, according to subject matter. The broadest division is
between microphysics, which studies subatomic particles and their combinations in atoms and molecules,
and macrophysics, which studies large collections of subatomic particles such as the solid bodies of
everyday experience.
Different experimental methods are used in these two divisions. In microphysics the objects under study
usually are observed indirectly, as in the use of particle detectors to observe the record of the passage of
subatomic particles. Consequently, much theoretical analysis stands between the observations and their
interpretation. Most individual subatomic systems can be studied only for short periods of time. It is therefore
very difficult to follow a microscopic phenomenon in detail over time, even to the extent that physical laws
allow this. In macrophysics, however, phenomena are usually directly observable, and less theoretical
analysis is needed to determine what is happening. Furthermore, because individual systems can normally
be observed over long periods of time, their evolution can be analyzed and can often be predicted.
Differences between microphysics and macrophysics also exist in the laws that apply. In microphysics the
fundamental laws are those of quantum mechanics, whose descriptions are fundamentally statistical. They
only allow probabilities to be predicted for individual events, such as radioactive decays. In macrophysics,
fundamental laws such as Newton's laws of motion are deterministic, and, in principle, precise predictions
can be made about individual events. For some macroscopic systems, however, it is necessary to use
statistical methods because of difficulties in treating large numbers of objects individually.
THE METHODS OF PHYSICS
Physics, like other sciences, uses diverse intellectual methods. Because of the long history of physics, the
distinctions among these methods have become pronounced.
Experimental Physics
When physics was revived in 16th-century Europe, Galileo and other workers in the field added an important
new element. They insisted that observation and experiment are the ultimate sources of knowledge about
nature. Whereas such early physicists as William Gilbert could rely on observations of natural phenomena, it
soon became clear that more rapid progress could be made through experiments. These involve setting up
situations that accentuate certain aspects of the phenomena under investigation. Most discoveries
concerning electricity and magnetism, for example, were made through experiments.
Often the purpose of experiments is to obtain specific numerical data, such as the temperature at which a
material becomes a superconductor. Sometimes the result is more qualitative and the experiment results in
the recognition of novel phenomena. Physicists use a wide variety of techniques to probe specific aspects of
nature. Many such methods involve instruments that themselves are outgrowths of important discoveries in
physics, including such 20th-century inventions as particle accelerators, lasers, and nuclear magnetic
resonance detectors.
Theoretical Physics
Until the late 19th century, there was no clear distinction between experimental and theoretical physics.
Around the beginning of the 20th century, however, a sort of division of labor arose. Scientists such as Max
Planck and Albert Einstein, for instance, were purely theoretical physicists. They made no serious attempts
to carry out experiments but instead confined their work to seeking general principles illuminating wide areas
of nature.
One aim of theoretical physics is to determine implications of well-known laws, such as those of quantum
mechanics, for specific physical systems. On a higher level, theoretical physicists look for general laws that
encompass a variety of phenomena and predict new phenomena based on such laws. Triumphs of
theoretical physics along these lines include James Clerk Maxwell's theory of electromagnetism, which
predicted radio waves, and Paul Dirac's relativistic quantum theory, which predicted antiparticles (see
antimatter). Theoretical physicists often make use of advanced mathematics in their work as a help in
determining the consequences of their theories or sometimes, as with general relativity, to guide them to the
proper form of the theories.
Simulations
More recently, a new technique has been introduced in some areas of physics: computer simulation (see
computer modeling), which shares attributes of both theory and experiment. In simulations of complex
experimental phenomena, computers are used to infer what should be observed when certain theories
apply. The results are compared with the actual data to see whether the expectations are correct. In
simulations of complex theories, a simplified version of the theory is analyzed through computer
calculations. The result is used to gain insights into what the full theory implies. The increased use of
simulations is blurring the standard distinction between theory and experiment in physics.
BASIC IDEAS OF PHYSICS
Discoveries in physics have shown that most natural phenomena can be understood in terms of a few basic
concepts and laws. Some of these are apparent in everyday macroscopic phenomena, others only in the
microscopic world.
Particles
The idea that the world is made up of small objects in motion goes back to ancient Greece and the atomic
theories of Leucippus and Democritus. Such objects are called particles. Particles carry some fixed
properties, such as charge and mass, and variable properties such as location and energy. Complex objects
are combinations of one or more types of particles. Changes in the properties or behavior of complex
objects are due to the motions of their component particles through space, or other changes in their variable
properties. In quantum theory the fixed properties of different examples of the same particle, such as two
electrons, are identical, so that they cannot be distinguished. Quantum theory also assigns novel wavelike
behavior to particles.
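The distinction between fixed and variable properties can be pictured with a small data structure; the Python sketch below is purely illustrative, and its class and field names are invented:

from dataclasses import dataclass

@dataclass(frozen=True)
class ParticleType:
    """Fixed properties shared by every particle of one kind, e.g. every electron."""
    name: str
    charge: float   # in units of the elementary charge
    mass_kg: float

@dataclass
class Particle:
    """One particle: its kind never changes, but its location and energy can."""
    kind: ParticleType
    position: tuple   # variable property
    energy: float     # variable property

ELECTRON = ParticleType("electron", -1.0, 9.109e-31)
a = Particle(ELECTRON, (0.0, 0.0, 0.0), 1.0)
b = Particle(ELECTRON, (2.0, 0.0, 0.0), 5.0)
print(a.kind == b.kind)  # True: any two electrons share identical fixed properties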
Space and Time
The concept of space as an arena in which physical objects move is an outgrowth of the ancient idea of a
void. Early physicists thought of space as passive, without its own properties. Developments in the
Renaissance and thereafter led to the realization that space, as conceived in physics, has properties,
including those of continuity, geometry, and three-dimensionality. At first these properties were taken as
fixed, but Einstein's work made physicists recognize that the properties of space can vary depending on its
matter content.
Time, regarded as a flow that exists independent of human perception, seems necessary to allow for
physical changes. Early physicists did not attribute properties to time, but it was later realized that such
familiar facts as the apparent one-dimensionality of time are statements about an actual entity. Again, in
general relativity some properties of time vary with the matter content. Einstein recognized an important
connection between time and space in his special theory of relativity, where the measurement of space and
time intervals depends on the motion of the observer. The idea was used by Hermann Minkowski to
substitute the idea of a four-dimensional space-time continuum for the separate ideas of three-dimensional
space and one-dimensional time.
Laws of Motion and Evolution
Many physical laws describe how systems change with time. These laws usually are differential equations
relating the rate of change of one quantity with respect to other quantities. By solving the equations, the
changing quantity at a later time can be calculated in terms of its value and that of other quantities at an
earlier time. These are known as initial conditions. The evolution of the system over time can be predicted if
the initial conditions are known precisely. This evolution can be extremely sensitive to the initial conditions,
and small uncertainty there can rapidly lead to great uncertainty in the system's evolution.
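A small numerical sketch (added here; the oscillator, the step size, and the starting values are invented for the example) shows how such a differential equation is stepped forward from its initial conditions:

# Step a simple oscillator forward in time: dx/dt = v, dv/dt = -(k/m) * x.
def evolve(x0, v0, k=1.0, m=1.0, dt=0.001, steps=10000):
    """Return the position at time steps*dt, computed from the initial conditions x0, v0."""
    x, v = x0, v0                      # the initial conditions
    for _ in range(steps):
        a = -(k / m) * x               # the law of motion gives the rate of change of v
        x, v = x + v * dt, v + a * dt  # values at the later time from values at the earlier time
    return x

print(evolve(1.0, 0.0))    # position after 10 time units, starting at rest at x = 1
print(evolve(1.001, 0.0))  # a slightly different start; for chaotic systems (unlike this
                           # simple oscillator) such tiny differences grow very rapidly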
Laws of Conservation and Symmetry
It is as important to know what remains the same as to know what changes. The answer is given by laws
that state what quantities remain constant in time as a system evolves. Examples are the laws of
conservation of angular momentum and of electric charge. Most conserved quantities, other than energy,
have only one form, but a conserved quantity may move from one constituent of a system to another.
Conservation laws follow from the fact that the laws describing physical systems remain unchanged when
the systems are viewed from different perspectives. Such symmetry considerations play major roles in
relativity theory and quantum physics, where they are used to constrain laws and infer their consequences.
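The standard correspondences, which follow from Noether's theorem (not named in the text), illustrate the point:

\text{invariance under translation in time} \;\longleftrightarrow\; \text{conservation of energy}
\text{invariance under translation in space} \;\longleftrightarrow\; \text{conservation of linear momentum}
\text{invariance under rotation} \;\longleftrightarrow\; \text{conservation of angular momentum}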
Fields
The work of 19th-century physicists showed that it is useful to think of electric and magnetic forces between
two objects some distance apart as being generated by a two-step process. Each object influences the
surrounding space, and this altered condition of space produces a force on the other object. The space in
which this influence resides is said to contain an electric or magnetic field. Maxwell was able to summarize
all of the theory of electromagnetism in terms of four equations describing the mutual relations of electric
and magnetic fields and the charges and currents that produce them. One implication of his work was that fields can become detached from the
charges that produce them and travel through space (see electromagnetic radiation). In the 20th century a
union of field theory with quantum mechanics led to quantum field theory, the most fundamental description
of nature now available.
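In the modern differential (SI) form, which postdates Maxwell's own notation, the four equations are commonly written

\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_{0}}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_{0}\mathbf{J} + \mu_{0}\varepsilon_{0}\,\frac{\partial \mathbf{E}}{\partial t},

where \mathbf{E} and \mathbf{B} are the electric and magnetic fields, \rho the charge density, and \mathbf{J} the current density.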
Statistical Averages
For systems containing many elementary objects, the mathematical problems of solving the equations of
motion are prohibitive. An alternative method often employed involves calculating only the average behavior
of the constituents. This approach is used in the branch of physics known as statistical mechanics,
developed in the late 1800s by Maxwell, Ludwig Boltzmann, and Josiah Willard Gibbs. They showed that
many of the results of thermodynamics, previously an independent subject, could be inferred by applying
statistical reasoning to the Newtonian mechanics of gases. Statistical mechanics has also been applied to
systems described by quantum theory, where the type of statistics used must recognize the
indistinguishability of identical subatomic particles.
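As a minimal sketch of the statistical approach (not taken from the text), the Python fragment below samples a large number of molecular velocities from the equilibrium Gaussian distribution, instead of following each molecule, and compares the average kinetic energy per molecule with the equipartition value (3/2)kT; the molecular mass is roughly that of a nitrogen molecule and is an illustrative choice:

import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # temperature, K
m = 4.65e-26            # approximate mass of an N2 molecule, kg
N = 1_000_000           # number of sampled molecules

rng = np.random.default_rng(0)
# At equilibrium each velocity component is Gaussian with variance kT/m.
v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
mean_kinetic_energy = 0.5 * m * (v**2).sum(axis=1).mean()

print(mean_kinetic_energy, 1.5 * k_B * T)   # the two values agree closely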
Quantization
In prequantum physics, physical quantities were assumed to have continuously variable magnitudes. For
many quantities this is now recognized to be an illusion based on the large size of ordinary bodies compared
with subatomic particles. In quantum theory, some quantities can only take on certain discrete values, often
described by simple integers. This discreteness, as well as the mathematical rules enforcing it, is known as
quantization. Usually a quantum description of a system can be obtained from the corresponding Newtonian
description by adding suitable rules of quantization.
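A familiar instance, added here for illustration, is the set of allowed electron energies in the hydrogen atom, labeled by the integer quantum number n:

E_{n} = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots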
CURRENT AND FUTURE PHYSICS
Physics continues to be extended in many directions. The evolution of complex systems and the
development of order are areas of current concern. The relation between the early universe and the
properties of subatomic particles is another active field of study (see cosmology). Attempts to replace
quantum field theory with some new description are under serious consideration, in the hope that they will
allow general relativity to be merged with quantum theory (see grand unification theories). At the same time,
new experimental techniques such as the scanning tunneling microscope (see electron microscope) are
allowing physicists today to observe matter at the atomic level.
History of Physics
The growth of physics has brought not only fundamental changes in ideas about the material world but also,
through technology based on laboratory discoveries, a transformation of society. Physics will be considered
in this article both as a body of knowledge and as the practice that makes and transmits it. Physics acquired
its classical form between the late Renaissance and the end of the 19th century. The year 1900 is a
convenient boundary between classical and modern physics.
PRACTICE OF PHYSICS
In the Aristotelian tradition, physics signified the study of nature in general (see Aristotle). It was literary and
qualitative as well as encompassing; it did not recommend experiment, and it did not rely on mathematics.
Geometrical optics, mechanics, and hydrostatics belonged to applied mathematics.
The Aristotelian conception of the scope of physics prevailed in the universities into the 18th century.
Meanwhile, a different conception developed outside the schools, exemplified in William Gilbert's De
magnete (1600), the work of a practicing physician. Gilbert's book on magnetism was the first report of
connected, sustained, and reconfirmed experiments in the history of physics. Magnetism became a popular
study and not only for practical application: it could also be used in "natural magic," the production of
perplexing effects by concealed mechanisms. Such magic figured prominently in the academies and
museums formed in the 17th century by gentlemen virtuosos interested in the unusual. Their play often
amounted to experiment. Many of their practices (for example, cooperative work and showy
demonstrations) recurred in the first national academies of science, which were established in London and
Paris in the 1660s (see scientific associations).
Many virtuosos explained their magic or experiments via the principles of the new mechanical philosophy, of
which René Descartes's Principia philosophiae (Principles of Philosophy, 1644) was the chief guide and
apology. His reader did not need to accept Descartes's peculiar models or metaphysics to see the
advantages of his system. A leading mechanical philosopher, Robert Boyle, explained these advantages:
compared with Aristotle's scheme, corpuscularism offers simple, comprehensive, useful, and intelligible
accounts of physical phenomena and a basis for further advance. Descartes stressed another advantage as
well: his physics, built on extension and motion, was implicitly mathematical.
Like Galileo Galilei before him, Descartes called for a physics patterned after mixed mathematics. Each was
able to fashion bits of such a physics, but neither managed to quantify a large domain of phenomena. Sir
Isaac Newton was the first to do so, in his Philosophiae naturalis principia mathematica (Mathematical
Principles of Natural Philosophy, 1687). The reach of his principles, from Saturn's satellites to pendulums
swung on Earth, and the precision of his results astounded his contemporaries. They did not see in
Newton's masterpiece, however, a model for a comprehensive exact physics. As successful as the Principia
was as a mathematical description, it failed as physics for those faithful to Descartes's goal. Either one took
the principle of gravity (the mutual acceleration of all particles in the universe) as mathematical description,
as Newton usually did, and had no physics, or one postulated that bodies could act on one another at a
distance (see gravitation). In the latter case, according to the up-to-date 17th-century physicist, one returned
to the unintelligible, magical explanations from which Descartes had rescued natural philosophy.
During the 18th century physicists lost the scruples that had caused Newton as well as his adversaries to
worry about admitting action-at-a-distance forces into physics. Even if the physicist knew nothing of the true
nature of these forces, they were nonetheless useful in calculation and reasoning; a little experience, as one
physicist said, "domesticates them." This instrumentalism began to spread after 1750. It set the precedent of
choosing theories in physics on the basis not of conformity to general, qualitative, intelligible principles but of
quantitative agreement with measurements of isolated phenomena.
Institutionalization of Physics
During the first half of the 18th century the demonstration experiment, long the practice of the academies,
entered university physics courses. For practical reasons, most of these demonstrations concerned physics
in the modern sense. By 1750 the physicist no longer had responsibilities for biology and chemistry; instead
he had the upkeep and usually the expense of a collection of instruments. By the 1780s he was winning
institutional support for his instructional hardware: a budget for apparatus, a mechanic, storage and
maintenance facilities, and lecture rooms. By the end of the century advanced professors had apparatus and
research assistance from their institutions.
The initiative in changing the physics curriculum came from both inside and outside the discipline. Inside,
modernizing professors strove to master Newton's ideas and to enrich the teaching of physics (and
themselves) through paid lecture demonstrations. Outside, enlightened government ministers, believing that
experimental physics together with modern languages might be helpful in the running of states, pressed the
universities to discard scholastic principles and "everything useless."
Meanwhile, physics became important in schools where mining, engineering, and artillery were taught. The
most advanced of these schools, the École Polytechnique, established in Paris in 1793, inspired upgrading
of curricula in universities as well as in technical schools. Among its descendants are the German-language
Technische Hochschulen, of which 15 existed in 1900.
During the 19th century the old university cabinets de physique were transformed into institutes of physics.
The transformation occurred in two steps. First, the university accepted the obligation to give beginning
students laboratory instruction. Second, it provided research facilities for advanced students and faculty. The
first step began about 1850; the second, toward the end of the century. By 1900 the principle was accepted
in the chief physics-producing countries (Great Britain, France, Germany, and the United States) that the
university physics institute should give theoretical and practical instruction to aspiring secondary school
teachers, physicians, and engineers and also provide space, instruments, and supplies for the research of
advanced students of physics.
By 1900 about 160 academic physics institutes, staffed by 1,100 physicists, existed worldwide. Expenditures
reckoned as percentages of gross national product were about the same in the four leading countries.
Academic physicists maintained pressure on government and private donors by pointing to the increasing
demand for technically trained people from industry, government, and the military. This pressure was
brought not only by individual spokespersons but also by a characteristic novelty of the 19th century: the
professional society. The first of the national associations for the advancement of science, in which
physicists always played prominent parts, was established in Germany in the 1820s. Beginning in 1845
physicists also formed national professional societies.
While physics prospered outwardly, it grew internally with the definitive capture of the physical branches of
applied mathematics. The acquisition proved more than physics could handle. Beginning about 1860, it
recognized a specialty, theoretical physics, that treated the more mathematical branches with an emphasis
on their interconnections and unity. By 1900 roughly 50 chairs in theoretical physics existed, most of them in
Germany. During the same period important new fields grew up around the borders between physics and
astronomy, biology, geology, and chemistry.
Content and Goal of Physics
The unity that the theoretician referred to was mechanical reduction. The goal of the late 18th century, to
trace physical phenomena to forces carried by special substances such as electrical fluids, gave way to a
revised corpuscularism about 1850. The new doctrine of the conservation of energy and the
interconvertibility of forces promised that all physical transactions could be reduced to the same basis (see
conservation, laws of). Physicists took the concepts of mechanics as basic, for much the same reasons that
Boyle had given, and they strove to explain the phenomena of light, heat, electricity, and magnetism in terms
of the stresses and strains of a hypothetical ether supposed to operate as a mechanism.
The program had as its remote goal a model such as the vortex atom, which forms matter of permanent, tiny
vortices in the same ether that mediates electromagnetic interactions (see electromagnetic radiation) and
propagates light. The president of the French Physical Society may have had the vortex atom in mind when
he opened the International Congress of Physics in 1900 with the words, "The spirit of Descartes hovers
over modern physics, or better, he is its beacon."
THE DEVELOPMENT OF MAIN BRANCHES
The main branches of classical physics are mechanics, electricity and magnetism, light, and heat and
thermodynamics.
Mechanics
The first branch of physics to yield to mathematical description was mechanics. Although the ancients had
quantified a few problems concerning the balance and hydrostatics (see fluid mechanics), and medieval
philosophers had discussed possible mathematical descriptions of free fall, not until the beginning of the
17th century was the desideratum of quantification brought into confrontation with received principles of
physics. The chief challenger was Galileo Galilei, who began with a medieval explanation of motion, the so-called impetus theory, and ended by doing without an explicit dynamics. To him it was enough that, as a first
step, the physicist should describe quantitatively how objects fall and projectiles fly.
Galileo's kinematical approach did not please Descartes, who insisted that the physicist attack received
principles from a knowledge of the nature of bodies. Descartes gave out this knowledge as laws of motion,
almost all incorrect but including a strong statement of the principle of rectilinear inertia, which was to
become Newton's first axiom of motion. Another Cartesian principle important for Newton was the
universalizing of mechanics. In Aristotelian physics the heavens consist of material not found on Earth. The
progress of astronomy had undermined Aristotle's distinction, and Newton, like Descartes, explicitly unified
celestial and terrestrial mechanics.
In Descartes's system bodies interact only by pushing, and space devoid of body is a contradiction in terms.
Hence the motion of any one object must set up a vortex involving others. The planets are swept around by
such a whirlpool; another carries the Moon, creates the tides, and causes heavy bodies to fall; still others
mediate the interactions of objects at or near the Earth's surface. Newton tried to build a quantitative
celestial vortical mechanics but could not; book 2 of the Principia records his proof that vortices that obey
the mechanical axioms posited for terrestrial matter cannot transport planets according to Kepler's laws. On
the assumption of universal gravitation, however, Newton could derive Kepler's laws and tie together
planetary motions, the tides, and the precession of the equinoxes. As one essential step in the derivation,
Newton used Galileo's rule about distance traversed under constant acceleration. He also required the
assumption of "absolute space," a preferred system of reference against which accelerations could be
defined.
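In modern notation, added here for reference (neither Galileo nor Newton wrote them this way), Galileo's rule and the law of universal gravitation read

d = \tfrac{1}{2}\,a t^{2} \qquad \text{and} \qquad F = G\,\frac{m_{1} m_{2}}{r^{2}},

where d is the distance traversed from rest in time t under constant acceleration a, and F is the attractive force between masses m_{1} and m_{2} separated by a distance r.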
After receiving their definitive analytic form from Leonhard Euler, Newton's axioms of motion were reworked
by Joseph Louis de Lagrange, William Rowan Hamilton, and Carl Gustav Jacobi into very powerful and
general methods, which employed new analytic quantities, such as potential (see potential energy), related
to force but remote from immediate experience. Despite these triumphs, some physicists nonetheless
retained scruples against the concept of force. Several schemes for doing without it were proposed, notably
by Joseph John Thomson and Heinrich Hertz, but nothing very useful came from them.
Electricity and Magnetism
As an apparent action at a distance, magnetism challenged the ingenuity of corpuscular philosophers.
Descartes explained that the terrestrial vortex, which carried the Moon, contained particles shaped so they
could find easy passage through the threaded pores that defined the internal structure of magnets and the
Earth. The special particles, he theorized, accumulated in vortices around magnets and close to the Earth,
oriented compass needles, and mediated magnetic attraction and repulsion. This quaint picture dominated
Continental theorizing about magnetism until 1750. Meanwhile, Newton's disciples tried to find a law of
magnetic force analogous to the law of gravity. They failed because they did not follow Newton's procedure
of integrating hypothetical microscopic forces to obtain a macroscopic acceleration. In 1785, Charles A.
Coulomb demonstrated the laws of magnetic force between elements of the supposed magnetic fluids. He
benefited from the domestication of forces, from a developed understanding of Newton's procedures, and
from a technique of making artificial magnets with well-defined poles.
Gilbert established the study of electricity in the course of distinguishing electrical attraction from magnetism.
The subject progressed desultorily until Francis Hauksbee introduced a new and more powerful generator,
the glass tube, in 1706. With this instrument Stephen Gray and C. F. Dufay discovered electrical conduction
and the rules of vitreous and resinous electrifications. In the 1740s electricity began to attract wider attention
because of the inventions of the electrostatic machine and the Leyden jar and their application to parlor
tricks. Benjamin Franklin's demonstration in 1752 that lightning is nature's electrical game further enhanced
the reputation of electricity.
Up to 1750, physicists accepted a theory of electricity little different from Gilbert's: the rubbing of electric
bodies forces them to emit an electrical matter or ether that causes attractions and repulsions either directly
or by mobilizing the air. The theory confused the roles of charges and their field. The invention of the Leyden
jar (1745) made clear the confusion, if not its source; only Franklin's theory of plus and minus electricity,
probably developed without reference to the Leyden jar, proved able to account for it. Franklin asserted that
the accumulation of electric matter within the Leyden jar (the plus charge) acted at a distance across the
bottom to expel other electrical matter to ground, giving rise to the minus charge. Distance forces thus
entered the theory of electricity. Their action was quantified by F. U. T. Aepinus (1759), by Henry Cavendish
(1771), and by Coulomb, who in 1785 showed that the force between elements of the hypothetical electrical
matter(s) or fluid(s) diminished as the square of the distance. (The uncertainty regarding the number of fluids
arises because many physicists then preferred the theory introduced by Robert Symmer in 1759, which
replaced Franklin's absence of electrical matter, negative electricity, with the presence of a second electrical
fluid.) Since the elementary electrical force followed the same law as the gravitational, the mathematics of
the potential theory lay ready for exploitation by the electrician. The quantification of electrostatics was
accomplished early in the 19th century, principally by Siméon Denis Poisson.
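In modern notation, added here for reference, the inverse-square law that Coulomb established is written

F = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1} q_{2}}{r^{2}},

where q_{1} and q_{2} are the charges and r the distance between them.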
In 1800, Alessandro Volta announced the invention of a continuous generator of electricity, a "pile" of disks
of silver, zinc, and moist cardboard. This invention, the first battery, opened two extensive new fields:
electrochemistry, of which the first dramatic results were Humphry Davy's isolation of the alkali metals, and
electromagnetism, based on the healing of the breach opened by Gilbert in 1600.
The discovery in 1820 by Hans Christian Oersted that the wire connecting the poles of a Voltaic cell could
exert a force on a magnetic needle was followed in 1831 by Michael Faraday's discovery that a magnet
could cause a current to flow in a closed loop of wire. The facts that the electromagnetic force depends on
motion and does not lie along the line between current elements made it difficult to bring the new discoveries
within the scheme of distance forces. Certain Continental physicists, at first André Marie Ampère, then
Wilhelm Eduard Weber, Rudolf Clausius, and others, admitted forces dependent on relative velocities and
accelerations.
The hope that electric and magnetic interactions might be elucidated without recourse to forces acting over
macroscopic distances persisted after the work of Coulomb. In this tradition, Faraday placed the seat of
electromagnetic forces in the medium between bodies interacting electrically. His usage remained obscure
to all but himself until William Thomson (Lord Kelvin) and James Clerk Maxwell expressed his insights in the
language of Cambridge mathematics. Maxwell's synthesis of electricity, magnetism, and light resulted. Many
British physicists and, after Heinrich Hertz's detection of electromagnetic waves (1887), several Continental
ones tried to devise an ether obedient to the usual mechanical laws whose stresses and strains could
account for the phenomena covered by Maxwell's equations.
In the early 1890s, Hendrik Antoon Lorentz worked out a successful compromise. From the British he took
the idea of a mediating ether, or field, through which electromagnetic disturbances propagate in time. From
Continental theory he took the concept of electrical charges, which he made the sources of the field. He
dismissed the presupposition that the field should be treated as a mechanical system, holding instead that it
could be assigned any properties needed to account for the phenomena. For example, to explain the result of the
Michelson-Morley experiment, Lorentz supposed that objects moving through the ether contract along their
line of motion. Among the unanalyzed and perhaps unanalyzable properties of the ether is the ability to
shorten bodies moving through it.
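The contraction Lorentz proposed is expressed, in modern notation added here for reference, as

L = L_{0}\sqrt{1 - v^{2}/c^{2}},

where L_{0} is the length of the body at rest in the ether, v its speed through the ether, and c the speed of light.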
In 1896-97, Lorentz's approach received support from the Zeeman effect, which confirmed the presence of
electrical charges in neutral atoms, and from the isolation of the electron, which could be identified as the
source of the field. The electron pulled together many loose ends of 19th-century physics and suggested
that the appearances of matter itself, including its inertia, might arise from moving drops of electric fluid. But
the electron did not save the ether. Continuing failure to find effects arising from motion against it and, above
all, certain asymmetries in the electrodynamics of moving bodies, caused Albert Einstein to reject the ether
and, with it, the last vestige of Newton's absolute space.
Light
During the 17th century the study of optics was closely associated with problems of astronomy, such as
correcting observations for atmospheric refraction and improving the design of telescopes. Johannes Kepler
obtained a good approximation to refraction and explained the geometry of the eye, the operation of the
lens, and the inversion of the image. Descartes computed the best form for telescope lenses and found, or
transmitted, the law of refraction first formulated by Willebrord Snell in 1621. While trying to correct
telescopes for chromatic aberration, Newton discovered that rays of different colors are bent by different but
characteristic amounts by a prism. The discovery upset the physics of light.
Traditional theory took white light to be homogeneous and colors to be impurities or modifications. Newton
inferred from his discovery that colors are primitive and homogeneous, and he portrayed their constituents
as particles. This model also conflicted with the ordinary one. For example, Christiaan Huygens, who did not
bother about colors, gave a beautiful account of the propagation of light, including an explanation of
birefringence (see polarized light), on the supposition that light consists of longitudinal waves in a pervasive
medium.
Newton also required an optical ether to explain phenomena now referred to as interference between light
waves. The emission of particles sets the ether vibrating, and the vibrations impose periodic properties on
the particles. Although many 18th-century physicists preferred a wave theory in the style of Huygens's, none
succeeded in devising one competitive with Newton's. Progress in optics took place mainly in fields Newton
had not investigated, such as photometry, and in the correction of lenses for chromatic aberration, which he
had not thought possible.
In the first years of the 19th century Thomas Young, a close student of Newton's work and an expert on the
theory of vibrations, showed how to quantify Huygens's theory. Young succeeded in explaining certain cases
of interference; Augustin Jean Fresnel soon built an extensive analytical theory based on Young's principle
of superposition. Newton's light particles, which fit well with the special fluids assumed in theories of heat
and electricity, found vigorous defenders, who emphasized the problem of polarization. In Newton's theory,
polarization could be accommodated by ascribing different properties to the different "sides" of the particles,
whereas Young's waves could be characterized only by amplitude (associated with intensity), period (color),
phase (interference), and velocity (refraction). About 1820, Young and Fresnel independently found the
missing degree of freedom in the assumption that the disturbances in light waves act at right angles to their
direction of motion; polarization effects arise from the orientation of the disturbance to the optic axis of the
polarizing body.
With the stipulation that light's vibrations are transverse, the wave theorists could describe simply and
precisely a wide range of phenomena. They had trouble, however, in developing a model of the
"luminiferous ether," the vibrations of which they supposed to constitute light. Many models were proposed
likening the ether to an elastic solid. None fully succeeded. After Maxwell linked light and electromagnetism,
the duties of the ether became more burdensome and ambiguous, until Lorentz and Einstein, in their
different ways, removed it from subjection to mechanics.
Heat and Thermodynamics
In Aristotelian physics heat was associated with the presence of a nonmechanical quality, "hotness,"
conveyed by the element fire. The corpuscular philosophers rejected some or all of this representation; they
agreed that heat arose from a rapid motion of the parts of bodies but divided over the existence of a special
fire element. The first theory after Aristotle's to command wide assent was developed by Hermann
Boerhaave during the second decade of the 18th century; it incorporated a peculiar, omnipresent, expansive
"matter of fire," the agitation of which caused heat and flame.
Physicists examined the properties of this fire with the help of thermometers, which improved greatly during
the 18th century. With Fahrenheit thermometers, G. W. Richmann established (1747-48) the calorimetric
mixing formula (see calorimeter), which expresses how the fire in different bodies at different temperatures
comes to equilibrium at an intermediate temperature when the bodies are brought into contact. By following
up discrepancies between experimental values and results expected from Richmann's formula, Joseph
Black and J. C. Wilcke independently discovered phenomena that led them to the concepts of latent and
specific heat.
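In modern notation, the mixing rule gives the equilibrium temperature of two bodies brought into thermal contact as

T_{f} = \frac{m_{1} c_{1} T_{1} + m_{2} c_{2} T_{2}}{m_{1} c_{1} + m_{2} c_{2}},

where m, c, and T are the mass, specific heat, and initial temperature of each body. Richmann's own formula, for mixing quantities of the same liquid, is the special case c_{1} = c_{2}; the general form incorporates the later concept of specific heat and is given here only for reference.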
About 1790, physicists began to consider the analytic consequences of the assumption that the material
base of heat, which they called caloric, was conserved. The caloric theory gave a satisfactory quantitative
account of adiabatic processes in gases, including the propagation of sound, which physicists had vainly
sought to understand on mechanical principles alone. Another mathematical theory of caloric was Sadi
Carnot's (see Carnot, family) analysis (1824) of the efficiency of an ideal, reversible heat engine, which
seemed to rest on the assumption of a conserved material of heat.
In 1822, Joseph Fourier published his theory of heat conduction, developed using the trigonometrical series
that bear his name (see Fourier analysis), and without specifying the nature of heat. He thereby escaped the
attack on the caloric theory by those who thought the arguments of Count Benjamin Rumford persuasive.
Rumford had inferred from the continuous high temperatures of cannon barrels undergoing grinding that
heat is created by friction and cannot be a conserved substance. His qualitative arguments could not carry
the day against the caloric theory, but they gave grounds for doubt, to Carnot among others.
During the late 18th century physicists had speculated about the interrelations of the fluids they associated
with light, heat, and electricity. When the undulatory theory indicated that light and radiant heat consisted of
motion rather than substance, the caloric theory was undermined. Experiments by James Prescott Joule in
the 1840s showed that an electric current could produce either heat or, through an electric motor,
mechanical work; he inferred that heat, like light, was a state of motion, and he succeeded in measuring the
heat generated by mechanical work. Joule had trouble gaining a hearing because his experiments were
delicate and his results seemed to menace Carnot's.
In the early 1850s the conflict was resolved independently by Kelvin and by Clausius, who recognized that
two distinct principles had been confounded. Joule correctly asserted that heat could be created and
destroyed, and always in the same proportion to the amount of mechanical, electrical, or chemical force (or,
to use the new term, "energy") consumed or developed. This assertion is the first law of
thermodynamics: the conservation of energy. Carnot's results, however, also hold; they rest not on
conservation of heat but on that of entropy, the quotient of heat by the temperature at which it is exchanged.
The second law of thermodynamics declares that in all natural processes entropy either remains constant or
increases.
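In modern symbols, a later formulation added here for reference, the two laws read

\Delta U = Q - W \qquad \text{and} \qquad \Delta S \geq 0 \;\text{ (isolated system)}, \quad \text{with} \quad dS = \frac{\delta Q_{\text{rev}}}{T},

where U is the internal energy, Q the heat added to the system, W the work done by it, and S the entropy.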
Encouraged by the reasoning of Hermann Helmholtz and others, physicists took mechanical energy, the
form of energy with which they were most familiar, as fundamental and tried to represent other forms in
terms of it. Maxwell and Ludwig Boltzmann set the foundations of a new branch of physics, the mechanical
theory of heat, which included statistical considerations as an integral part of physical analysis for the first
time. After striking initial successes, the theory foundered over the mechanical representation of entropy.
The apparent opposition of the equations of mechanics (which have no direction in time) and the demands
of the second law (which prohibits entropy from decreasing in the future) caused some physicists to doubt
that mechanical reduction could ever be accomplished. A small, radical group led in the 1890s by the
physical chemist Wilhelm Ostwald went so far as to demand the rejection of all mechanical pictures,
including the concept of atoms.
Although Ostwald's program of energetics had few followers and soon collapsed, the attack on mechanical
models proved prescient. Other work about 1900, including the discoveries of X rays and radioactivity, the
development of the quantum theory (see quantum mechanics), and the theory of relativity, did eventually
force physicists to relinquish in principle, if not in practice, reliance on the clear representations in space and
time on which classical physics had been built.
J. L. Heilbron
MODERN PHYSICS
About 1900 the understanding of the physical universe as a congeries of mechanical parts was shattered forever.
In the decades before the outbreak of World War I came new experimental phenomena. The initial
discoveries of radioactivity and X rays were made by, respectively, Antoine Henri Becquerel and Wilhelm
Conrad Roentgen. These new phenomena were studied extensively, but only with Niels Bohr's first atomic
theory in 1913 did a general, theoretical picture for the generation of X rays emerge. Radioactive decay was
gradually clarified with the emergence of quantum mechanics; with the discovery of new fundamental
particles, such as the neutron and the neutrino; and with countless experiments conducted using particle
accelerators.
The central theoretical syntheses of 20th-century physics, the theories of relativity and quantum
mechanics, were only indirectly associated with the experimental research of most physicists. Albert
Einstein and Wolfgang Pauli, for example, believed that experiment had to be the final arbiter of theory but
that theories were far more imaginative than any inductivist assemblage of experimental data. During the
first third of the century it became clear that the new ideas of physics required that physicists reexamine the
philosophical foundations of their work. For this reason physicists came to be seen by the public as
intellectual Brahmins who probed the dark mysteries of the universe. Excitement over reorganizing physical
knowledge persisted through the 1920s, a decade in which quantum mechanics and a new, indeterminist
epistemology were formulated by Pauli, Werner Karl Heisenberg, Max Born, Erwin Schrödinger, and Paul
Dirac.
The early-20th-century vision of the universe issued principally from German-speaking Europe, in university
environments where daily patterns of activity had been fixed since the 1880s. During the interwar period
first-rate physics research operations flourished in such non-European environments as the United States,
Japan, India, and Argentina. New patterns of activity, intimated in the pre-1914 world, finally crystallized.
Physics research, such as that of Clinton Davisson and William Thomas Astbury (1898-1961), came to be
supported heavily by optical, electrical, and textile industries. The National Research Council and private
foundations in the United States, notably the Rockefeller and Carnegie trusts, sponsored expensive and
time-consuming experiments. European governments encouraged special research installations, including
the Kaiser Wilhelm institutes and the Einstein Observatory in Potsdam, Germany. What has been called big
physics emerged in the 1930s. Scores of physicists labored over complicated apparatus in special
laboratories indirectly affiliated with universities. As one of the most significant consequences of the new
institutional arrangements, it became increasingly difficult for a physicist to imitate scientists such as Enrico
Fermi, who had mastered both the theoretical and the experimental sides of the discipline. Following the
models provided in the careers of J. Robert Oppenheimer and Luis Walter Alvarez, the successful physicist
became a manager who spent most of his or her time persuading scientifically untutored people to finance
arcane research projects.
The awesome respect accorded to physicists by governments in the 1950s, when the United States and the
Soviet Union carried out extensive research into thermonuclear weapons and launched artificial
satellites, has been tempered in recent years. In part this new development is the result of a continuing
emergence of new specialties; applied electronics and nuclear engineering, for example, until recently part
of the physicist's domain, have become independent fields of study, just as physical chemistry, geophysics,
and astrophysics split off from the mother discipline about 1900. At the same time, a number of physicists,
such as Richard Phillips Feynman, have come to emphasize the aesthetic value of their research more than
its practical application.
In recent years, physicists have been at the center of major interdisciplinary syntheses in biophysics, solid-state physics, and astrophysics. The identification of the double-helix structure of DNA, the synthesis of
complex protein molecules, and developments in genetic engineering all rest on advances in spectroscopy,
X-ray crystallography, and electron microscopy. Semiconductor technology, at the base of the revolution in
information processing, has been pioneered by solid-state physicists. Fundamental insights into the large-scale structure of the universe and its constituent parts have depended on harmonies previously revealed by
theoretical physicists. This cross-fertilization has had an impact on physics itself; it has produced new
understanding of basic physical laws ranging from those governing elementary particles to those in
irreversible thermodynamic processes. Among all modern scientific disciplines, physics has been the most
successful in maintaining a high public profile while adapting to new scientific and social circumstances.
History of Science
In the 19th century the term science, which hitherto was applied to any body of systematic knowledge, came
to denote an organized inquiry into the natural and physical universe. This article will be confined to this
more recent and restrictive definition.
SCIENCE IN THE ANCIENT WORLD
Out of small beginnings, that human enterprise called science emerged some five millennia ago among the
evolving civilizations of the Near East, in Mesopotamia and along the Nile River (see Egypt, ancient).
Undoubtedly, the original impulse for scientific activity was the need for technologies (see technology;
technology, history of) to satisfy material necessities. Thus, elementary forms of arithmetic, geometry, and
astronomy developed in order to supply the growing needs of engineering, time reckoning, accounting, land
measurement, and agriculture. The relatively sophisticated techniques of surgery and the extensive use of
medicaments filled the needs of early medicine (see medicine, history of). Astronomical observations and
numerical calculations became intricately involved with the emerging mystical and religious systems, thus
resulting in the growth of astrology and numerology, particularly in Babylonia and Assyria.
It would be simplistic, however, to claim that the need for technologies, both secular and religious, was the
sole incentive for scientific activity, at least with regard to the later phases in the development of Near
Eastern civilizations. By the middle of the 2d millennium BC, the accumulation of wealth and leisure brought
about the introduction of curiosity as an important contributing factor in the domain of technology. Thus,
attempts were made to solve a variety of numerical and algebraic equations (see algebra) that could have
had no immediate practical uses and to discern the underlying patterns of empirical calculations.
The Greeks
It was the Greeks, however, who introduced high-powered geometry and rigorous reasoning, as well as
speculations about the nature of the universe (see Greece, ancient). The impetus for this scientific leap is
usually attributed to Thales of Miletus, a merchant who, by the beginning of the 6th century BC, had
abandoned his vocation in favor of science. Thales' travels had no doubt acquainted him with the wide-ranging mathematical and astronomical achievements of the Egyptians and Babylonians, which he helped
introduce into the Greek world. In addition, he initiated the practice of speculating freely on cosmology,
positing water as the fundamental element out of which the universe was fashioned.
For the next two centuries the pre-Socratic philosophers continued to speculate about the nature of the
physical world (see philosophy). Perhaps the most famous of their ideas involved the dichotomy between
"Being" and "Becoming" associated with Parmenides and Heraclitus: the world as eternally changeless and
the world as a place of perpetual motion. An alternative cosmology was offered by Democritus, whose
atomistic theory viewed the universe as motionless void, interrupted only by islands of matter. Such matter
was the product of the chance configuration of atoms (solid, indivisible, and eternal), which, owing to their
different shapes and properties, produced the variety of existing substances as well as all sensations. The
most important speculation for the future development of science was the number theory of the
Pythagoreans, which viewed numbers as the principle of all things (see Pythagoras of Samos). The
Pythagorean number theory not only had enormous implications for the development of mathematics (see
also mathematics, history of); its assumption of an orderly, symmetric universe would also inform future
cosmology as scientists struggled to discover the shape of the Earth and the laws governing the motion of
the heavenly bodies.
The pre-Socratic predilection for speculative physical hypotheses eventually resulted in a reaction against
science in the 4th century BC. Socrates blamed sterile speculation for the neglect of man as the important
focus of nature; instead of the universe, the proper goal of human inquiry should be truth, justice, and virtue
in human conduct. In his search for a program to facilitate this inquiry, Socrates developed a powerful
dialectical reasoning as a method of attaining truth (see dialectic). Despite the antiscientific impetus behind
Socrates' humanism, his dialectical method could be employed as a tool in the realm of nature as well. Such
an application was demonstrated by Socrates' most distinguished pupil, Plato, who also had been greatly
influenced by Pythagorean number theory, an influence that can be detected in his cosmology in the
Timaeus. The fusion of these two traditions resulted in the creation of Plato's doctrine of Forms, which
postulated a realm of pure ideas, or essences, that existed above and beyond the illusory sensory world.
Although Plato considered observations and experiments worthless, even harmful, to this domain of abstract
ideas, he nevertheless believed that mathematics, which promised certitude and embodied pure ideas, was
the proper pedagogical device for training the mind in the abstract reasoning necessary to comprehend the
realm of Forms. This elevation of mathematics to the pinnacle of scientific activity extended Pythagorean
influence over Greek mathematics and eventually contributed to the role played by mathematics in modern
science.
One of the many talented mathematicians and astronomers to pass through Plato's Athenian academy was
Eudoxus of Cnidus, whose theory of homocentric spheres contributed to the concept of planetary motion,
and whose theories of magnitude and exhaustion helped to advance geometry. The most distinguished
student to emerge from the academy, however, was Aristotle, who also turned out to be the most severe
critic of Plato's doctrine of Forms as well as of his ideas about mathematics. Like Socrates and Plato before
him, Aristotle stressed the importance of correct reasoning in the attainment of true knowledge. However,
unlike his two predecessors, he replaced dialectic with the logic of the syllogism (the drawing of conclusions
from assumed postulates), which became the core of the Aristotelian deductive method. The science that
emerged from this method was qualitative and strongly grounded in common sense, and its physics was
purged of mathematics. Equally important, it distinguished sharply between the celestial and the terrestrial
domains. The former was placed beyond the grasp of human experience, while the latter was organized into
a comprehensive system that encompassed the entire gamut of human knowledge, from biology, zoology,
and cosmology to ethics, politics, and metaphysics.
Aristotelian cosmology was based on the notion of an enclosed cosmos comprising a series of concentric,
crystalline spheres revolving around a stationary Earth. Motion was supposedly provided by the prime mover
and, once initiated, would remain circular, uniform, and eternal. Aristotle's biological ideas were perhaps the
most original facet of his corpus, for they were based on his own profound firsthand observations and
experiments.
Hellenistic and Roman Science
With the spread of Greek culture into the Near East in the 4th century BC, Alexandria in Egypt replaced
Athens as the center of science. This shift was facilitated by the liberal patronage of learning and the
erection of the magnificent Alexandrian Library by the Ptolemaic rulers of Egypt. In Alexandria the golden
age of Greek geometry reached its zenith. Both Euclid's Elements of Geometry and the work on conic
sections by his younger contemporary Apollonius of Perga were carried out there. Even science outside
Alexandria, such as Archimedean geometry and mechanics in Syracuse (see Archimedes), was the product
of men who had studied in, or were influenced by, Alexandrian science. Greek astronomy underwent a
similarly fruitful period in Alexandria, commencing with the work of Aristarchus of Samos, whose heliocentric
world system later became the basis for a similar one by Copernicus, and culminating in the 2d century AD
with a magnificent astronomical synthesis, the Almagest of Ptolemy.
Although the Romans incorporated much of Greek culture, they remained oblivious, if not outright
disdainful, of Greek science. Essentially utilitarian in outlook, Roman civilization cared little for cosmological
speculations, still less for Greek mathematical sciences, save for their application to engineering. What the
Romans lacked in scientific curiosity and originality, however, they somewhat compensated for by a
fascination with encyclopedic knowledge and technology. The two best examples are the massive
compendiums of knowledge of Varro, which would lay the foundation for the medieval classification of
knowledge and system of education, and the colossal, and sometimes highly entertaining, compilation of
natural phenomena called Natural History, by Pliny the Elder. Although synthetic and for the most part
derivative, these works conveniently amalgamated existing scientific knowledge.
Another scientific writer of the Roman period was the Greek Galen of Pergamum (AD 130-200), whose
survey of ancient anatomy, physiology, and medicine dominated European science until the 17th century.
Strabo of Pontus (c.63 BC-AD c.21) and the Spaniard Pomponius Mela (1st century AD), both of whom worked
in Rome, helped develop geography.
THE MIDDLE AGES AND THE RENAISSANCE
The Christian church in the early Middle Ages was essentially ambivalent to Greek and pagan science and
philosophy. The dilemma facing the early church fathers was both how to amalgamate the knowledge of
"heathens" and how to define the boundaries between reason and faith. However, the chaos that resulted in
Europe from the Germanic invasions and the collapse of the Western Roman Empire in the 5th century AD
postponed the real debate about the role of pagan rationalistic science in a Christian society for at least
another seven centuries.
China and India
In many areas of science during the Middle Ages the Chinese achieved advances well ahead of their
European counterparts. Nevertheless, the Chinese were predominantly a technological, rather than a
scientific, society. The reason may be found in Chinese philosophy. Daoism (Taoism), and especially
Confucianism, did not differentiate between the domains of human beings and nature. Instead, the world
was conceived of as a vast organism in which the five "phases" (water, fire, metal, wood, and earth) and the
two principal forces, yin and yang, were in constant interaction as they sought their affinities. The result was
a predilection for mystical thinking, especially among Daoists. Conversely, the Confucians, who tended to
dominate the scientific domain, embraced a utilitarian view of life, thus informing Chinese science with its
predominantly technological character. Therefore the Chinese had virtually no geometry (the forte of the
Greeks), yet they possessed a well-developed arithmetic and algebra, which apparently included even an
acquaintance with the binomial theorem. In a similar manner, the Chinese demonstrated a flair for inventing
calculating devices, such as the counting rod and the abacus, but showed no interest in developing a
general theory of equations.
This utilitarian bias is evident in other scientific domains as well. In astronomy the Chinese were diligent in
their observations of the heavens, in reckoning time, and in developing instruments. Nevertheless, there
occurred little sustained and independent discussion of cosmologies. Although the fields of chemistry (see
chemistry, history of) and physics (see physics, history of) attracted much attention, virtually every discovery
had a practical application. Finally, the development of a sophisticated corpus of medicine, much in advance
of medieval European medicine, is yet another indication of their practical bent.
Science in India followed a somewhat different development. Before the 5th century BC, when Mesopotamian
and Greek ideas began to infiltrate into the subcontinent, the astronomy of Hinduism was little more than
primitive calendrical computation. Following a short period of flirtation with cosmologies, however, Hindu
astronomy rapidly deteriorated again into practical astrology. In mathematics the Hindus, even more than
the Chinese, developed a powerful arithmetic and algebra; they also showed a more than passing interest in
geometry. Some significant work by Hindu mathematicians, including the so-called "Arabic" numerals, was
later incorporated into Islamic science and eventually transmitted to Europe. (See Indian science and
technology.)
The Islamic World
In contrast to Chinese and Hindu science, Islamic science came to exert an immense influence on the West,
partly because of the geographical proximity of the two cultures and partly because both cultures shared a
common Greek heritage. Following a period of rapid conquest in the 7th and 8th centuries, an Arab Muslim
empire was established in western Asia, north Africa, and Spain. By the second half of the 8th century, its
rulers, the Abbasid caliphs in Baghdad, became munificent patrons of learning. In this role they both
encouraged the collection and translation into Arabic of the huge corpus of Greek knowledge and
generously supported scientific activity.
One of the most notable scientists to benefit from their patronage was al-Kindi, the driving force behind the
creation of a new Islamic philosophy and the author of an important work in optics. A younger contemporary,
al-Battani, was an astronomer who carried out his work in Baghdad, where an observatory had been
constructed as early as AD 829. The Persian al-Khwarizmi (780-850) introduced into Islamic science both
Hindu numerals and algebra, as well as the Hindu model of astronomical tables. Other eminent scientists
included the mathematicians Thabit ibn Qurra (c.836-901) and Abu'l Wafa (940-97) and a most original
man of optics, Ibn al-Haytham (Alhazen, c.965-1039). In medicine the practical work and encyclopedic
compilations of al-Razi (Rhazes, c.860-930) and Ibn Sina (Avicenna) became enormously influential in
Europe during the late Middle Ages and the Renaissance. Also important was the alchemical corpus of Jabir
ibn Hayyan (Geber), which introduced the discipline into Europe.
Islamic science and philosophy emerged, in large part, in response to the Islamic theological assumption
that nature contains "signs," and the unearthing of such signs would bring the believer closer to God. Thus a
philosophical attempt to reconcile faith and reason that is reminiscent of the attempts of Christianity also
characterized Islamic scientific endeavors. However, a strong rationalistic trend, such as can be detected in
al-Razi in the 10th century, or Ibn Rushd (Averroës) in the 12th, heralded a religious reaction against
philosophy and science. This reaction, accompanied by the decay of western Islam from the 12th century,
resulted in stagnation and the eventual decline of Islamic science and philosophy. In eastern Islam,
however, the vitality continued into the 15th century.
The Medieval West
At the very time when a decline began in Islamic science, the Latin West was exhibiting a renewed interest
in philosophical and scientific matters. Undoubtedly, the Christian reconquest of Spain and Sicily from the
Muslims in the 11th century contributed to this revival, for the Christians were now able and willing to absorb
the vast repository of Greek knowledge preserved in Arabic as well as to incorporate the original work of
Muslim scientists during the previous three centuries. The result was a major movement involving the
collection of manuscripts, their translation into Latin, and the addition of commentaries. The West thus
regained not only the entire Aristotelian corpus, but also the works of Euclid and Ptolemy (see
scholasticism).
For the next three centuries Europe's newly established universities would serve as centers for scientific
studies, thereby helping to establish the undisputed authority of Aristotle. By the middle of the 13th century,
Thomas Aquinas produced a synthesis between Aristotelian philosophy and Christian doctrine. He stressed
the harmony between reason and faith, thus establishing the foundation of natural theology. But the Thomist
synthesis did not go unchallenged. In 1277, shortly after Aquinas's death, the bishop of Paris
condemned some 219 propositions (mainly of Aristotle) contained in his writings. As a result of this
condemnation, the nominalist alternative associated with William of Occam was developed; nominalism,
which tended to separate science from theology, would become a cornerstone in the redefinition of the
spheres of science and religion in the 17th century.
During the 13th and 14th centuries, European scholars seriously undermined certain fundamental aspects of
Aristotle's methodology and physics. The English Franciscans Robert Grosseteste and Roger Bacon
introduced mathematics and the experimental method into the domain of science and contributed to the
discussion of vision and the nature of light and color. Their successors at Merton College, Oxford,
introduced quantitative reasoning and physics via their novel treatment of accelerated motion. On the other
side of the Channel, in Paris, Jean Buridan and others elaborated on the concept of impetus, and Nicole
Oresme (1330?-1382) introduced some bold views into astronomy that would open the door for the more
speculative ideas of Nicholas of Cusa concerning the motion of the Earth and the concept of an infinite
universe.
Another two centuries would pass, however, before this still largely scholastic discussion would be resumed.
In the meantime, the Renaissance opened up new opportunities. Large numbers of Greek manuscripts were
brought to the West by Byzantine refugees fleeing the Turks in the 15th century. The invention of printing
made these books available in a new wave of editions and translations that revolutionized science. In
addition, Renaissance art and architecture began to utilize, in new ways, the growing knowledge of
perspective and anatomy, thus heralding a new breed of "scientists" who appreciated both the theoretical
knowledge of the schools and the technologies to be learned from craftspersons and artisans. Indeed, it was
the interaction between the scholarly tradition and the scientists' interest in practical technologies that set the
stage for the scientific revolution.
THE RISE OF MODERN SCIENCE
The publication of Nicolaus Copernicus's De revolutionibus orbium coelestium (On the Revolutions of the
Heavenly Spheres) in 1543 is traditionally considered the inauguration of the scientific revolution. Ironically,
Copernicus had no intention of introducing radical ideas into cosmology. His aim was only to restore the
purity of ancient Greek astronomy by eliminating novelties introduced by Ptolemy. And with such an aim in
mind he modeled his own book, which would turn astronomy upside down, on Ptolemy's Almagest. At the
core of the Copernican system, as with that of Aristarchus before him, is the concept of the stationary Sun at
the center of the universe, and the revolution of the planets, Earth included, around the Sun. The Earth was
ascribed, in addition to an annual revolution around the Sun, a daily rotation about its axis.
Copernicus's greatest achievement is his legacy. By introducing mathematical reasoning into cosmology, he
dealt a severe blow to Aristotelian common-sense physics. His concept of an Earth in motion launched the
notion of the Earth as a planet. And his explanation that he had been unable to detect stellar parallax
because of the enormous distance of the sphere of the fixed stars opened the way for future speculation
about an infinite universe. Nevertheless, Copernicus still clung to many traditional features of Aristotelian
cosmology. He continued to advocate the entrenched view of the universe as a closed world and to see the
motion of the planets as uniform and circular. Thus, in evaluating Copernicus's legacy, it should be noted
that he set the stage for far more daring speculations than he himself could make.
Brahe and Kepler
Most of Copernicus's contemporaries regarded his system as bold and unwarranted speculation; it gained
few converts in the scientific community before the early 17th century. A major impediment to its reception
was the existence of yet another planetary theory that took a stance halfway between the Ptolemaic and
Copernican systems. This rival theory was devised by the Danish nobleman Tycho Brahe, who was the
greatest observational astronomer since Ptolemy. Brahe's detailed observations of the appearance of a
"new star" (nova) in 1572, and a comet five years later, called into question the notion of immutability of the
heavens. His observations convincingly demonstrated not only that both events occurred well above the
allegedly perfect and incorruptible lunar region, but that crystalline spheres could not exist since the path of
the comet would have cut through them. Despite this rejection of certain elements of traditional astronomy
and physics, Brahe nevertheless refused to accept the reality of heliocentrism. Instead, he devised an
alternative system according to which the five planets indeed revolved around the Sun, but in turn the entire
system revolved around the stationary Earth and Moon. The fame of Brahe, combined with his construction
of a system capable of accommodating all observable phenomena, caused many who were wary of
Copernicanism, yet equally dissatisfied with the Ptolemaic system, to adopt Brahe's alternative theory.
Johannes Kepler, who succeeded Brahe as mathematician to the Holy Roman emperor Rudolf II, used the
data amassed by his predecessor to vindicate Copernicanism and transform modern astronomy. A deeply
pious man who believed that the glory of God was manifest in his creation, Kepler found in astronomy a
religious vocation. His piety was matched by his commitment to Copernicanism and Renaissance
Neoplatonism, both of which spurred his indefatigable search for harmonious patterns he knew must exist in
the heavens. The immediate result of his quest was the discovery of his three planetary laws (see Kepler's
laws). The first two, formulated in his Astronomia nova (The New Astronomy) of 1609, destroyed the theory
of circular and uniform motion. The first law stated that all planets move in elliptical orbits with the Sun at
one focus of the ellipse. The second stated that the velocity of a planet drops as its distance from the Sun
increases, so that if a line were to be drawn from the planet to the Sun, equal areas in space would be swept
in equal times. Ten years later Kepler published his third law in the Harmonice mundi (The Harmony of the
World), announcing the symmetry he so longed for: the squares of the times it takes any two planets to
complete their revolution around the Sun are proportional to the cubes of their average distance from the
Sun.
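In the compact notation later adopted by astronomers (an illustrative restatement, not Kepler's own wording), the second and third laws can be written

\[
\frac{dA}{dt} = \text{constant}, \qquad \frac{T_1^2}{T_2^2} = \frac{a_1^3}{a_2^3},
\]

where A is the area swept out by the Sun-planet line, T is a planet's orbital period, and a is its average distance from the Sun. As a rough check of the third law, Jupiter lies about 5.2 times farther from the Sun than the Earth does and takes about 11.9 years to complete an orbit, and indeed 5.2 cubed and 11.9 squared both come to roughly 140.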
Galileo
The heavy metaphysical underpinning of Kepler's laws, combined with an obscure style and a demanding
mathematics, caused most contemporaries to ignore his discoveries. Even his Italian contemporary Galileo
Galilei, who corresponded with Kepler and possessed his books, never referred to the three laws. Instead,
Galileo provided the two important elements missing from Kepler's work: a new science of dynamics that
could be employed in an explanation of planetary motion, and a staggering new body of astronomical
observations. The observations were made possible by the invention (c.1608) of the telescope in Holland
and by Galileo's ability to improve on this instrument without ever having seen the original. Thus equipped,
he turned his telescope skyward and saw some spectacular sights.
The results of his discoveries were immediately published in the Sidereus nuncius (The Starry Messenger)
of 1610. Galileo observed that the Moon was very similar to the Earth, with mountains, valleys, and oceans,
and not at all that perfect, smooth spherical body it was claimed to be. He also discovered four moons
orbiting Jupiter. As for the Milky Way, instead of being a stream of light, it was, rather, a large aggregate of
stars. Later observations resulted in the discovery of sunspots, the phases of Venus, and that strange
phenomenon which would later be designated as the rings of Saturn.
Having announced these sensational astronomical discoveries, which reinforced his conviction of the reality
of the heliocentric theory, Galileo resumed his earlier studies of motion. He now attempted to construct a
comprehensive new science of mechanics necessary in a Copernican world, and the results of his labors
were published in Italian in two epoch-making books: Dialogue Concerning the Two Chief World Systems
(1632) and Discourses and Mathematical Demonstrations Concerning the Two New Sciences (1638). His
studies of projectiles and free-falling bodies brought him very close to the full formulation of the laws of
inertia and acceleration (the first two laws of Isaac Newton). Galileo's legacy includes both the modern
notion of "laws of nature" and the idea of mathematics as nature's true language. He contributed to the
mathematization of nature and the geometrization of space, as well as to the mechanical philosophy that
would dominate the 17th and 18th centuries. Perhaps most important, it is due largely to Galileo that
experiments and observations serve as the cornerstone of scientific reasoning.
Today, Galileo is remembered equally well because of his conflict with the Roman Catholic church. His
uncompromising advocacy of Copernicanism after 1610 was responsible, in part, for the placement of
Copernicus's De revolutionibus on the Index of Forbidden Books in 1616. At the same time, Galileo was
warned not to teach or defend Copernicanism in public. The election of Galileo's friend Maffeo Barberini as
Pope Urban VIII in 1624 filled Galileo with the hope that such a verdict could be revoked. With perhaps
some unwarranted optimism, Galileo set to work to complete his Dialogue (1632). However, Galileo
underestimated the power of the enemies he had made during the previous two decades, particularly some
Jesuits who had been the target of his acerbic tongue. The outcome was that Galileo was summoned to
Rome and there forced to abjure, on his knees, the views he had expressed in his book. Ever since, Galileo
has been portrayed as a victim of a repressive church and a martyr in the cause of freedom of thought; as
such, he has become a powerful symbol.
Newton
Despite his passionate advocacy of Copernicanism and his fundamental work in mechanics, Galileo
continued to accept the age-old views that planetary orbits were circular and the cosmos an enclosed world.
These beliefs, as well as a reluctance to rigorously apply mathematics to astronomy as he had previously
applied it to terrestrial mechanics, prevented him from arriving at the correct law of inertia. Thus it remained
for Isaac Newton to unite heaven and Earth in his immense intellectual achievement, the Philosophiae
naturalis principia mathematica (Mathematical Principles of Natural Philosophy), which was published in
1687. The first book of the Principia contained Newton's three laws of motion. The first expounds the law of
inertia: every body persists in a state of rest or uniform motion in a straight line unless compelled to change
such a state by an impressing force. The second is the law of acceleration, according to which the change of
motion of a body is proportional to the force acting upon it and takes place in the direction of the straight line
along which that force is impressed. The third, and most original, law ascribes to every action an opposite
and equal reaction. These laws governing terrestrial motion were extended to include celestial motion in
book 3 of the Principia, where Newton formulated his most famous law, the law of gravitation: every body in
the universe attracts every other body with a force directly proportional to the product of their masses and
inversely proportional to the square of the distance between them.
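In the algebraic form in which they are usually taught today (a later restatement; Newton himself argued geometrically), the second law and the law of gravitation read

\[
F = ma, \qquad F = G\,\frac{m_1 m_2}{r^2},
\]

where m denotes mass, a acceleration, r the distance between two bodies, and G the universal gravitational constant. Applying both equations to a planet circling the Sun recovers Kepler's third law, which is precisely the unification of celestial and terrestrial motion described above.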
The Principia is deservedly considered one of the greatest scientific masterpieces of all time. But in 1704,
Newton published his second great work, the Opticks, in which he formulated his corpuscular theory of light
and his theory of colors. In later editions Newton appended a series of "queries" concerning various related
topics in natural philosophy. These speculative, and sometimes metaphysical, statements on such issues as
light, heat, ether, and matter became most productive during the 18th century, when the book and the
experimental method it propagated became immensely popular.
The Life Sciences
Although not as spectacular as the developments in the mathematical and physical sciences, the biological
and life sciences also produced some important developments in this period. In 1543, the very year in which
Copernicus's masterpiece appeared, Andreas Vesalius published his De humani corporis fabrica (On the
Fabric of the Human Body). The book was seminal to the development of anatomy, for it both provided a far
more accurate and detailed account of the organs and structure of the human body (based on Vesalius's
own dissections) and included superb illustrations that were a vast improvement on any other available text.
It remained for the Englishman William Harvey, however, to use this anatomical knowledge and respect for
detail to bring about a major discovery concerning the human body. In 1628, Harvey, in his De motu cordis
et sanguinis (On Motion of the Heart and Blood), set out his discovery of the circulation of blood. Basing his
theory on the fact that the heart pumps more blood in one hour than is contained in the entire human body,
Harvey also introduced quantitative reasoning into biology.
Additional discoveries had to await the invention of the microscope in mid-century, for even Harvey could not
prove the existence of the passages between the arteries and veins he had predicted. The capillaries were
first seen in 1661 by the Italian Marcello Malpighi. In the following decades there were a series of
microscopical observations by English and Dutch scientists, notably Robert Hooke, Nehemiah Grew, Antoni
van Leeuwenhoek, and Jan Swammerdam. Such observations brought about important discoveries in the
study of blood, insects, embryology, and plant physiology. Despite these advances, however, there was still
little attempt to transcend the level of empirical observations and provide a theoretical framework. Modern
biology was to be born only in the middle of the 19th century.
The Effects of the Scientific Revolution
It must be stressed that the scientific revolution was more than a series of spectacular accomplishments by
individual scientists. The invention of printing in the second half of the 15th century transformed science, as
it did every other scholarly discipline. It enabled the rapid diffusion and dissemination of new scientific ideas
to a rapidly growing literate public. In a similar way, the mathematization of nature and the emergence of
experimental science would have been inconceivable without the invention of a variety of
instruments, including the telescope, the microscope, time-keeping devices, thermometers, and the air
pump, which allowed large numbers of people to carry out the observations, measurements, and
experiments upon which scientific theories were constructed. Finally, the 17th century marked the beginning
of the institutionalization and professionalization of science, which continued until the 19th century. The
foundation of national scientific societies (London, 1660; Paris, 1666; Berlin, 1700; Saint Petersburg, 1724)
was crucial for the formation of group identity and solidarity (see scientific associations). Most important,
these societies served as clearing houses for ideas and experiments, perfected rules and regulations for
their evaluation, and generated interaction and collaboration between like-minded researchers. To such
societies is also owed the development of scientific periodicals, the wide distribution of which was an
important factor in the diffusion and propagation of scientific knowledge.
The impact of the Newtonian accomplishment was enormous. Newton's two great books resulted in the
establishment of two traditions that, though often mutually exclusive, nevertheless permeated into every
area of science. The first was the mathematical and reductionist tradition of the Principia, which, like RenŽ
Descartes's mechanical philosophy, propagated a rational, well-regulated image of the universe. The second
was the experimental tradition of the Opticks, somewhat less demanding than the mathematical tradition
and, owing to the speculative and suggestive queries appended to the Opticks, highly applicable to
chemistry, biology, and the other new scientific disciplines that began to flourish in the 18th century. This is
not to imply that everyone in the scientific establishment was, or would be, a Newtonian. Newtonianism had
its share of detractors. Rather, the Newtonian achievement was so great, and its applicability to other
disciplines so strong, that although Newtonian science could be argued against, it could not be ignored. In
fact, in the physical sciences an initial reaction against universal gravitation occurred. For many, the concept
of action at a distance seemed to hark back to those occult qualities which the mechanical philosophy of the
17th century had done away with. By the second half of the 18th century, however, universal gravitation
would be proved correct, thanks to the work of Leonhard Euler, A. C. Clairaut, and Pierre Simon de Laplace,
the last of whom announced the stability of the solar system in his masterpiece, Celestial Mechanics
(1799-1825).
Newton's influence was not confined to the domain of the natural sciences. The philosophes of the 18th-century Enlightenment sought to apply scientific methods to the study of human society. To them, the
empiricist philosopher John Locke had been the first to attempt this. They believed that in his Essay
Concerning Human Understanding (1690) Locke did for the human mind what Newton had done for the physical world.
Although Locke's psychology and epistemology were to come under increasing attack as the 18th century
advanced, other thinkers such as Adam Smith, David Hume, and Abbé de Condillac would aspire to become
the Newtons of the mind or the moral realm. These confident, optimistic men of the Enlightenment argued
that there must exist universal human laws that transcend differences of human behavior and the variety of
social and cultural institutions. Laboring under such an assumption, they sought to uncover these laws and
apply them to the new society they hoped to bring about.
As the 18th century progressed, the optimism of the philosophes waned and a reaction began to set in. Its
first manifestation occurred in the religious realm. The mechanistic interpretation of the world, shared by
Newton and Descartes, had, in the hands of the philosophes, led to materialism and atheism. Thus by mid-century the stage was set for a revivalist movement, which took the form of Methodism in England and
pietism in Germany. By the end of the 18th century the romantic reaction had begun (see romanticism).
Fueled in part by religious revivalism, the romantics attacked the extreme rationalism of the Enlightenment,
the impersonalization of the mechanistic universe, and the contemptuous attitude of "mathematicians"
toward imagination, emotions, and religion.
The romantic reaction, however, was not antiscientific; its adherents rejected a specific type of the
mathematical sciences, not the entire enterprise. In fact, the romantic reaction, particularly in Germany,
would give rise to a creative movementÑthe NaturphilosophieÑthat in turn would be crucial for the
development of the biological and life sciences in the 19th century and would nourish the metaphysical
foundation necessary for the emergence of the concepts of energy, forces, and conservation.
THE NINETEENTH AND TWENTIETH CENTURIES
The theory of evolution owes its emergence to developments in a variety of fields, including the previously
mentioned Naturphilosophie. Owing in large part to the cosmological theories of Immanuel Kant, Thomas
Wright (1711-86), and Laplace in the second half of the 18th century, people now began to realize that the
universe had to be far older than the 6,000 years computed by chronologers on the basis of the Book of
Genesis (see chronology). Some wished to resolve this problem by separating the history of the Earth from
the history of the universe, and thus exempt the story of creation from scientific scrutiny. Another
conservative view was promoted by Abraham Werner, a renowned German geologist and mineralogist, who
argued that the Earth once had been covered by a primordial ocean and that all rocks were formed, either
as sediments or by precipitation, following its regression. This "Neptunist" theory fit in with the biblical story
of the Deluge (although Werner did concede that the Earth might be a million years old), but it failed to
account for the disappearance of this ocean. A more radical stance was taken by the Scot James Hutton in
his Theory of the Earth (1795). In his "Vulcanist" theory, Hutton insisted that only forces operative in nature
today could account for the shape of the Earth and the formation of rocks, and therefore volcanic activity in
the crust of the Earth must have been the shaping mechanism at work. This "uniformitarian" theory
demanded an even longer period for the Earth's creation and omitted altogether any catastrophic flood. (See
also catastrophism; uniformitarianism.)
Theorizing of this sort gave way to more sophisticated concepts of Earth time as the science of geology
advanced. The development of stratigraphy made possible the identification of the types of rock strata
according to fossil deposits, thereby allowing for a closer estimation of the age of the rocks. Further strides
in geology and paleontology would allow closer investigation of these deposited fossils, their age, and their
evolution.
The Darwinian Theory
In many ways Newton's relationship to the scientific revolution paralleled the relationship of Charles Darwin
to 19th-century science. Like Newton, Darwin amalgamated ideas and concepts from a variety of disciplines,
synthesizing them into a coherent and powerful theory. In Darwin's case, this was the theory of evolution.
Evolutionary ideas had been in the air for some time. Charles's own grandfather, Erasmus Darwin, had
already argued in the late 18th century that competition for resources could explain evolution, while the
Frenchman Jean Baptiste Lamarck had developed the alternative view that the inheritance of acquired
characteristics was the determining factor in the evolution of species. Equally important for Darwin's
formative ideas was the powerful model of evolution provided by geology. Hence, when Darwin embarked
on his epoch-making voyage aboard the Beagle, his mind was primed to appreciate the botanical (see
botany), zoological, and geological structures he was to encounter in South America. In 1838, two years
after he returned from his voyage, Darwin read Thomas Malthus's An Essay on the Principle of Population
(1798), which provided him with the mechanism necessary to explain evolution: natural selection.
Twenty years later Darwin published his theory in On the Origin of Species (1859). He emphasized the
geographical distribution of species in time and space, pointing out the enormous rate of extinction of
species unable to adapt themselves to environmental changes. Survival in nature, he claimed, was
determined by natural selection, which caused selective adaptation over long periods of time, thereby
ensuring that favorable modifications were transmitted to future generations. (Herbert Spencer called this
idea "the survival of the fittest.") Those species which survived evolved into higher forms of life. Eventually,
humans took their place in this evolutionary framework: in his Descent of Man (1871) Darwin argued that
they too had evolved from a lower form. He was unable, however, to account for what had permitted the
transmission of the favorable characteristics. Ironically, in 1865, Gregor Mendel announced his discovery of
the laws of heredity, which provided the needed explanation, but his work was allowed to sink into obscurity until 1900.
19th-Century Physics
Despite the overriding significance of Darwinian theory, other important developments also took place during
the 19th century. Fundamental work was carried out in physics, thus bringing classical physics to a
pinnacle, only to be toppled at the turn of the 20th century. Early in the 19th century the study of light had
undergone a major transformation when the research of Thomas Young and Augustin Fresnel replaced
Newton's corpuscular theory with a wave theory of light. The study of heat also generated much interest
during the first half of the century, culminating in the formulation of the principle of the conservation of
energy (see conservation, laws of) by Hermann Helmholtz in 1847 and the second law of thermodynamics
by Rudolf Clausius in 1859. Equally revolutionary was the work of Michael Faraday and James Clerk
Maxwell in generating the theory of electromagnetism (see electromagnetic radiation); the discovery of X
rays by Wilhelm Roentgen in 1895; and the discovery of the electron by Joseph John Thomson about two
years later. All of these developments contributed to the revolution in physics that took place in the following
decade.
Biology and Chemistry
Rapid advances were taking place in other fields as well. The combined effect of the metaphysical
foundations that had been laid by the romantic movement, and the perfection of microscopy, resulted in the
birth of modern biology. Most important was the realization that the cell constitutes the fundamental unit in
organic bodies. It was Theodor Schwann who announced the new cell theory in 1839. Then in rapid
succession came the discovery of protoplasm as the constituent material of the cell, of the production of new
cells as the result of division of existing cells, and of the structure and composition of the cell nucleus; in
addition, chromosomes were identified within this nucleus. The research of Karl Ernst von Baer revived
epigenesis, the gradual development of the embryo from the female egg fertilized by the male sperm (see
reproduction), a theory originated by William Harvey in 1651 that, during the 18th century, lost
ground to the rival preformationist theory, which viewed embryonic development as the unfolding of already
existing forms.
In chemistry, following the overthrow by Antoine Lavoisier of the phlogiston theory, John Dalton developed
his atomic theory in the early years of the 19th century, thus enabling chemists to calculate the relative
weight of atoms. It was also during these early years of the century that Amedeo Avogadro formulated his
hypothesis concerning the binding together of atoms into molecules, a theory not accepted, however, until
the 1860s. In 1865, F. A. Kekulé von Stradonitz elucidated the ring structure of the organic compound
benzene; Dmitry Mendeleyev's discovery of the periodic law followed in 1869 (see periodic
table).
The Professionalization of Science
The major scientific developments that took place during the 19th and 20th centuries would have been
inconceivable without the total transformation of the way science was conducted. One of the most dramatic
changes involved the shift of scientific activity away from the domain of the secluded, and sometimes
amateur, mathematician or natural philosopher to the sphere of professional science as it is known today.
As mentioned earlier, this process began in the 17th century when science was just gaining its autonomy
and the scientific community began to band together into institutions that would cultivate and promote
science, develop codes and procedures regulating research, and generally oversee conduct and
publications. However, what began as an association of like-minded private individuals had become a large-scale industry by the beginning of the 19th century; science would no longer be the disinterested study it had
hitherto been, but a matter of national importance, glory, and profit.
Hand in hand with the increasing institutionalization and professionalization of science came the inevitable
formation of disciplinary boundaries. The "natural philosophers" that dominated the early modern period
gave way to scientific specialists in the 19th century. Thus, by the end of the 18th century chemistry and
geology had been added to the ancient disciplines of mathematics and astronomy; physics and biology
followed in the 19th. This emergence of disciplines certainly reflected the specialization of scientific
knowledge that took place during these centuries; it also reflected, however, the changes that were
occurring within the universities, where scientific inquiry was to be increasingly concentrated, such as the
establishment of departments and professorships and the increasing competition for resources. This
fragmentation of the scientific establishment accelerated rapidly during the 20th century, fueled by the
attempts of various practitioners who shared a special body of knowledge, methodology, or technique to
differentiate themselves from other practitioners. And while this search for a distinct scientific identity was in
part justified by differing attitudes toward scientific issues, the intensified scramble for funds, positions, and
students cannot be ignored as an important motive.
The French Revolution (1789-99) in large part triggered the dramatic transformation that occurred in
scientific and technological education. In an attempt to harness the fruits of revolution to the service of the
state, the French republican government initiated the establishment of technical and engineering
colleges, in which both professors and students would be salaried by the state, that were intended for the
instruction of pure as well as applied science. The lead in this development, however, shifted rapidly to
Germany where, following the humiliating German defeats by Napoleon I, a major reorganization took place.
Owing largely to the work of Wilhelm von Humboldt, a new type of university emerged, controlled and
financed by the state but allowing its professors complete freedom. The result was to transform the German
universities into pure research centers and eventually into the site of major research laboratories. This
phenomenon was accompanied by the foundation of dozens of technical schools for such subjects as
mining, textiles, and engineering. By 1900, German universities and scientific laboratories had become the
best in the world, drawing students from all over Europe as well as from the United States. Germany's
example prompted other governments to emulate its scientific educational establishment, especially as it
became increasingly clear that such academic and technical excellence was giving Germany an important
industrial and military advantage. Thus, by the end of the 19th century, science had once again become a
university-based activity.
Science and Technology
These organizational developments were closely tied to the growing role of science in developing
technologies. While science per se had relatively little to do with bringing about the Industrial Revolution, by
the 19th century science rather than empiricism was at the core of technological advance. In particular, the
emergence of chemistry created new opportunities for industry via the discovery of a variety of synthetic
dyestuffs, which resulted in the mass production of textiles. Electromagnetism was the next field to generate
major industrial opportunities: the dynamo, the telegraph, and the electric light are only a few early examples
in a field that with increasing rapidity would provide new and better sources of power.
The relationship between science and technology was not one-sided. Science not only assisted technology;
it became a major beneficiary of technology, which made available a variety of instruments and apparatuses
without which many of the discoveries of the late 19th and 20th centuries would have been impossible.
Optical instruments were crucial for both the biological sciences and astronomy, as was photographic
equipment. By the 20th century there had evolved an intricate, mutually dependent, relationship between the
domains of science and technology. In addition, the very fact that science was now operating on such a
large scale meant that no individual could afford to engage in science without major financial support from
state or industry. Thus, the development of science as the product of laboratories and its reliance on
expensive equipment necessitated the placement of scientific activity on a new, more secure footing than
had previously existed.
Science in the 20th Century and Beyond
It is within the context of this major transformation of the scientific enterprise into a rich, confident industry
involving researchers and students that 20th-century science should be viewed. Some commentators have
even tended to regard the last century or so as the real period of the scientific revolution. Not only has the
output of science doubled every 15 years or so, but 90 percent of all the scientists who have ever lived
are still alive today. It is certainly undeniable that 20th-century science was vastly different from the science
of previous centuries. The 20th century opened with a new revolution in physics; it continued with a radical
alteration of the knowledge of the origins and shape of the universe and a new image of the planet Earth.
And perhaps its culmination will be the complete understanding of the constituents and workings of the
human mind and body.
In 1900, Max Planck promulgated his revolutionary idea that energy is not emitted continuously, but in
discrete quanta, or packets, proportional to the frequency of radiation. Coming only three years after J. J.
Thomson's discovery of the electron, Planck's idea would become crucial in the future study of the atom. But
first an even greater revolution had to unfold: Albert Einstein's demonstration of the special theory of
relativity in 1905. The theory stated that all physical laws are the same for all inertial observers, and that the
speed of light is the same for all observers, irrespective of the light source or the observers' motion. Thus the
notion of absolute space and time was abolished and the interconvertibility of mass and energy
demonstrated, as expressed in the famous equation E = mc². Ten years later Einstein proceeded with the
general theory of relativity, and Werner Heisenberg introduced his uncertainty principle in 1927.
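Stated as formulas (a brief illustrative summary of the relations just named), Planck's hypothesis and Einstein's mass-energy equivalence read

\[
E = h\nu, \qquad E = mc^2,
\]

where h is Planck's constant, \nu the frequency of the radiation, m the mass, and c the speed of light. The second relation implies that even a tiny amount of matter stores enormous energy: since c is about 3 × 10^8 meters per second, one gram of matter corresponds to roughly 9 × 10^13 joules.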
In astronomy the work of Edwin Powell Hubble in the 1920s also dramatically shifted concepts of the
universe. He demonstrated that the universe is not only composed of innumerable galaxies (see
extragalactic systems), but that it is expanding and that the velocity of the receding galaxies is proportional
to their distances. Such a theory sparked renewed interest in the issue of how the universe began. By
reversing the calculation of the receding galaxies, it was possible to ascribe to the universe an age of some
20 billion years; at the beginning of that time it existed in a state of extreme density and immense
temperature. Then, following a gigantic explosion, the so-called big bang, a process of cooling ensued,
atoms were created and distributed in space, and the universe began its expansion.
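The relationship Hubble found, and the age estimate obtained by running the expansion backward, can be summarized (in a simplified sketch that ignores how the expansion rate changes over time) as

\[
v = H_0\, d, \qquad t_{\text{universe}} \approx \frac{1}{H_0},
\]

where v is a galaxy's recession velocity, d its distance, and H_0 the Hubble constant. The figure of some 20 billion years quoted above reflects the relatively low values of H_0 favored at the time; larger modern values yield a somewhat younger universe.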
During the 1910s, Alfred Lothar Wegener focused on the question of Earth history in his continental drift
theory. According to Wegener, up until 200 million years ago the land on Earth was in the shape of a
supercontinent that he called Pangea (Greek, "all land"); the supercontinent broke into several pieces, which
then drifted apart to form the present continents. Wegener, however, was unable to provide any satisfactory
mechanism to explain this phenomenon, and it was not until after his death in 1930 that the results of
experimental work introduced the new science of plate tectonics, which was capable of explaining such
changes in the Earth's structure.
Finally, the study of life took a leap forward in the first decade of the 20th century when the results of
research into the cell nucleus and the rediscovery of Mendel's heredity theory brought about the
development of genetics. Once the chromosomes had been established as housing the hereditary material,
intensive research was directed at identifying the chemical material of the genes. The subsequent discovery
that the chromosomes contained protein and nucleic acid set in motion a new race to discover the exact
genetic structure. The first to penetrate this mystery were Francis Crick and James Watson, who, with the
aid of knowledge garnered from research done by Maurice Wilkins and Rosalind Franklin, worked out the
"double helix" structure of DNA. Efforts to further break down the genetic code have continued to the
present, helping not only to refine the knowledge of heredity and evolution but to create yet another point of
contact between science and technology: genetic engineering.
20th Century Science
The 20th century may well be labeled the "century of science." Phenomenal scientific developments
unfolded across three broad areas during the century: conceptual reformulations that fundamentally
changed the way in which the natural world is understood and portrayed; changes in the status of science in
society; and the establishment of powerful new relations between science and technology.
CONCEPTUAL REFORMULATIONS
Most of the basic ideas held by scientists at the end of the 19th century were overturned in the course of the
20th century. As a result, a host of theoretical novelties emerged across a broad range of disciplines to
radically transform our present understanding of the world.
The Einsteinian Revolution in Physics
In the last decade of the 19th century a nagging series of problems began to undermine the intellectual
edifice of contemporary physics (see physics, history of), and by the dawn of the 20th century a serious
crisis had developed. The discovery of radioactivity in 1896 by French physicist Antoine Henri Becquerel
challenged the accepted principle of the immutability of atoms. The discovery of the electron in 1897 by
British physicist Joseph John Thomson called into question the traditional, indivisible atom as the smallest
unit of matter. Beginning in 1887, the repeated failure of American physicist Albert A. Michelson and his
collaborators to detect the motion of the Earth relative to the ether, an all-pervasive substance posited as the
physical substrate for the propagation of light, likewise proved problematical.
Well educated technically in the central dogmas of the field, yet young and professionally marginal enough
not to be locked into established beliefs, Albert Einstein was perfectly positioned to effect a revolution in
contemporary physics. In 1905 he published an extraordinary series of papers that redirected the discipline.
The most earth-shaking dealt with special relativity, the modern physics of moving bodies. In essence, by
positing that nothing can move faster than light, Einstein reformulated the mechanics of Isaac Newton,
in which speeds greater than that of light are possible. The upshot was an
interpretation of motion that dispensed with the traditional reference frames of absolute space and absolute
time (see space-time continuum). All observations (such as when an event takes place, how long a ruler is,
or how heavy an object is) become relative; they depend on the position and speed of the observer. As a feature
of his new physics, Einstein posited his celebrated formula, E = mc², which equates energy and mass (via a
constant speed of light, c) and makes matter a kind of frozen energy.
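The relativity of such measurements can be quantified with the Lorentz factor (a standard textbook illustration rather than a quotation from the 1905 paper),

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\]

which tells how much a clock moving at speed v runs slow and a moving rod is shortened. At everyday speeds \gamma is indistinguishable from 1, which is why these effects escaped notice before Einstein.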
In 1915, Einstein published on general relativity, the modern physics of accelerated motions, wherein he
equated gravity (see gravitation) to acceleration. Observations of a total solar eclipse in 1919 seemed to
confirm Einstein's prediction that the mass of the Sun should bend starlight; similar, very precise calculations
of Mercury's orbit around the Sun (see orbital elements) brought about like agreement with general relativity.
Particle Physics and the Quantum Revolution
With the discovery of the electron and radioactivity, the atom became a prime focus of research. In 1897, J.
J. Thomson proposed a model of the atom having negatively charged electrons sprinkled about like raisins
in a cake. Using radioactive emissions to investigate the interior structure of the atom, British physicist
Ernest Rutherford in 1911 announced that atoms were composed mostly of empty space. Rutherford
proposed a model that had electrons orbiting a solid nucleus, much like planets orbit the Sun. In the 1920s,
problems with this solar system model (Why do electrons maintain stable orbits? Why do atoms
radiate energy in a discontinuous manner?) led to the formulation of the so-called and seemingly
paradoxical "new theory" of quantum mechanics.
The principles of quantum mechanics were difficult to accept. Empirical studies supported the theory,
however, as did the social network that arose around Niels Bohr and his Institute for Theoretical Physics in
Copenhagen. Essentially, quantum theory replaced the deterministic mechanical model of the atom with one
that sees atoms (and, indeed, all material objects) not as sharply delineated entities in the world but, rather,
as having a dual wave-particle nature, the existence of which can be understood as a "probability wave."
That is, quantum-mechanical "waves" predict the likelihood of finding an object, whether an electron or an
automobile, at a particular place within specified limits. The power of this counterintuitive analysis was
greatly strengthened in 1926 when two distinct mathematical means of expressing these ideas (the matrix
mechanics developed by Werner Heisenberg and the wave equations developed by Erwin
Schrödinger) were shown to be formally equivalent. In 1927, Heisenberg proposed his famous uncertainty
principle, which states that an observer cannot simultaneously and with absolute accuracy determine both
the position and the speed (or momentum) of a body. In other words, quantum mechanics reveals an
inherent indeterminacy built into nature.
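In its usual mathematical form, the uncertainty principle states

\[
\Delta x \, \Delta p \geq \frac{\hbar}{2},
\]

where \Delta x and \Delta p are the uncertainties in a particle's position and momentum and \hbar is Planck's constant divided by 2\pi: the more sharply one quantity is fixed, the more uncertain the other becomes.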
Knowledge of fundamental particles grew rapidly thereafter. In 1930, Wolfgang Pauli postulated the
existence of a virtually massless uncharged particle that he christened the neutrino. In 1932 the discovery of
the neutron, a neutral body otherwise similar to the positively charged proton, complemented the existence
of electrons and protons. In that same year, detection of the positron, a positively charged electron, revealed
the existence of antimatter, a special kind of matter that annihilates regular matter in bursts of pure energy
(see annihilation). Building on the work of the flamboyant American physicist Richard Feynman and others, quantum
theory was extended into quantum electrodynamics, a form of quantum field theory. Hand in hand with experimental
high-energy physics, it has revealed an unexpectedly complex world of fundamental particles. Using particle
accelerators of ever greater energy, nuclear physicists have created and identified more than 200 such
particles, most of them very short-lived.
Active research continues in contemporary particle physics. For example, elusive particles known as the
Higgs boson (see Higgs particle) and the graviton (posited to mediate the force of gravity) are now prime
quarry. Theorists succeeded in the 1970s in conceptually unifying electromagnetism (see electromagnetic
radiation) and the weak nuclear force (see fundamental interactions) into the so-called electroweak force
(see electroweak theory); an as-yet-unrealized grand unification theory will perhaps add the strong nuclear
force. However, the Holy Grail of an ultimate theory uniting all the forces of the universe, including quantum
gravity, remains a distant goal.
Cosmology and Space Science
In the 18th century, astronomers broached the idea that our Galaxy (the Milky Way) is an "island universe."
Observations of nebulous bodies in the 19th century (see nebula) further suggested that additional galaxies
may populate the cosmos outside the Milky Way. Only in the 1920s, however, in particular because of the
work of American astronomer Edwin Hubble, did the extragalactic nature of some nebulous bodies, the
immense distances that they involve, and the apparent expansion of the universe as a whole become
established in cosmology (see extragalactic systems).
The question of how to account for the apparent expansion of the universe puzzled theorists in the 1950s.
Essentially, two mutually exclusive views divided their allegiance. One, the steady-state theory, held that
new matter appears as space expands. The alternative theory proposed that the universe originated in an
incredibly hot and dense "big bang," resulting in a universe that continues to expand (see big bang theory).
Both camps had their supporters and arguments, although many cosmologists seemed predisposed to
steady-state models through the 1950s. The debate was ultimately decided only
after 1965 with the almost accidental discovery by two Bell Laboratories scientists (Robert W. Wilson and
Arno W. Penzias) of the so-called 3 K background radiation (K denoting kelvins, with 0 K being absolute zero). (The idea
that explains their discovery is elegant. If the universe began as a hot fireball with a big bang, then it should
have "cooled" over time, and calculations could predict what the residual temperature of the universe should
be today.) It was this relic heat, measured by an omnipresent background radiation at roughly three degrees
above absolute zero, that Penzias and Wilson stumbled upon. Their discovery won them the Nobel Prize in
1978 and sounded the death knell for steady-state cosmology.
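The underlying argument can be sketched in a single relation (a simplified statement of the standard reasoning rather than the discoverers' own calculation): as the universe expands, the radiation filling it cools in inverse proportion to the expansion factor a,

\[
T \propto \frac{1}{a},
\]

so an initially enormous temperature falls, after billions of years of expansion, to the few degrees above absolute zero (about 2.7 K in modern measurements) that Penzias and Wilson detected.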
Research in cosmology continues in a number of areas. Precisely how old is the universe? What will be its
fate? Work in quantum cosmology concerning such esoteric concepts as "imaginary" time, "virtual" particles,
quantum "tunneling," and the chance coming-into-being of matter out of the quantum vacuum may prove
useful in addressing these issues. Today, theorists also entertain such esoteric entities as superstrings (see
superstring theory), multiple universes, and equally provocative but perhaps intellectually productive
concepts. Research into black holes, those extraordinarily dense celestial objects whose gravity prevents
even the escape of light but that "evaporate" energy, may also illuminate these ultimate questions.
More proximate celestial matters also drew attention in the 20th century. For example, a notable landmark
concerns the evidence (uncovered in 1980) suggesting that a catastrophic meteorite or cometary impact
caused a mass extinction, including the extinction of the dinosaurs, at the end of the Cretaceous Period 65
million years ago. The significance of this finding lies in what it implies about the role of accident in the
history of life, including the origin and fate of humans. Scientists are now surveying the orbits of comets and
asteroids to see whether any of them pose a threat of future collisions.
Planetary space science has similarly transformed the way in which humans visualize the Earth and its near
neighbors. The century had begun with some remaining hope that life (and in some cases, perhaps even
Earthlike life) might yet be found on other planets of the solar system. Long before the century's end these
hopes had almost wholly been dashed, most spectacularly in the case of cloud-hidden Venus, which space
probes revealed to be a place too hellishly hot to support any life form whatsoever. Mars, the other
potentially Earthlike planet, did provide suggestions of possible life in the geological past, but the planet as it
now exists is an arid desert. Some hope for solar-system life still remains at Europa, a satellite of Jupiter that
may contain a liquid sea locked beneath its frozen surface. In the meantime, however, space science
beyond the Sun's local system opened up dramatically in the later 20th century with the affirmed discovery
of objects circling other stars (see planets and planetary systems). The bodies found thus far are notably un-Earthlike, but their mere existence offers the chance for vistas yet unguessed.
The Life Sciences
The 19th century bequeathed many fundamental biological principles: the concept of the cell, the germ
theory of disease (see medicine, history of), rejection of the possibility of spontaneous generation,
experimental methods in physiology, and a mature field of organic chemistry. However, at the turn of the
20th century the scientific community was far from accepting one notable axiom, that of evolution by natural
selection, first proposed by Charles Darwin in 1859.
Around 1900, rediscovery of the work on heredity of the Austrian monk Gregor Mendel set the stage for the
20th-century synthesis of Darwin's findings. Mendel had shown that individual traits maintained themselves
over the course of generations, thus providing a basis for understanding variation and selection from a
Darwinian point of view. Thomas Hunt Morgan, experimenting with the Drosophila fruit fly at Columbia
University in the 1910s and '20s, showed that chromosomes within cells contain the information for heredity
and that genetic change produces variation (see genetics). Mathematicians specializing in population
genetics soon demonstrated that a single variation could come to characterize a whole new species, and by
the later 1940s DNA (deoxyribonucleic acid) looked to be the material basis of life and inheritance (see also
gene; genetic code). The uncovering of the double helical structure of DNA in 1953 by James D. Watson
and Francis Crick provided the key conceptual breakthrough for understanding reproduction, the nature of
heredity, and the molecular basis of evolution.
Although Darwin had postulated the evolution of humans from simian ancestors, a coherent story of human
evolution (see prehistoric humans) was the product of 20th-century scientific thought. The discoveries of the
Neandertal variety of Homo sapiens and of Paleolithic cave art (produced by anatomically modern humans;
see prehistoric art) were not generally accepted until the turn of the 20th century. The first Homo erectus
fossil was unearthed in 1895, and the first Australopithecus surfaced only in 1925. The "discovery" of the
notorious Piltdown man in 1908 argued against human evolution from more primitive ancestors, and only in
1950 was the Piltdown artifact proven to be a fraud. Since that time, an increasingly clearer picture has
emerged of the stages of human evolution, from Australopithecus afarensis ("Lucy" and her kin, first
uncovered by Donald Johanson in 1974), through Homo habilis, H. erectus, and then varieties of H. sapiens.
The scientific study of the human mind likewise expanded in the period. Several divergent strands
characterize psychology in the 20th century. One of those strands, the work of Sigmund Freud, was of signal
theoretical importance into the 1960s. In such works as The Interpretation of Dreams (1900), Freud made an
indelible impression on the century through his emphasis on the unconscious and on infantile sexuality as
operative not only in psychological illness but also in everyday life (see sexual development), and Freudian
psychoanalysis provided a new avenue for self-awareness and improved mental health. By the same token,
20th-century psychology was characterized by a wholly reductionist and nonsubjective strand. The Russian
experimentalist I. P. Pavlov pioneered this line of work by showing that conditioning can determine behavior
(see behavior modification). The American psychologists John B. Watson and, later, B. F. Skinner
formalized these notions as behaviorism, a psychology that emphasized learned behaviors and habit and
that rejected the study of mental processes as unscientific. Psychology has since moved in the direction of
cognitive explorations (see cognitive psychology; cognitive science) and is currently under challenge by
increasingly sophisticated biochemical and physiological accounts of mental experience. Sociobiology and
evolutionary psychology, fields that postulate that patterns of culture may be explained by genetics, are
likewise affecting the science of psychology.
Striking theoretical novelties that arose in geology and the Earth sciences in the 20th century also suggested
interesting new chapters in the development and evolution of life forms. Most notably, the idea that the
continents may actually be "plates" riding on the Earth's mantle was generally rejected when it was first put
forward in 1915 by German geologist Alfred Wegener, but it gained virtually universal acceptance in the
1960s for a variety of technical (and social) reasons. Understanding the theories of plate tectonics and
continental drift has since allowed scientists to "run the film backwards" (that is, to recapitulate the
geological history of the Earth), with all the implications that this process holds for both geology and biology.
THE SCIENTIFIC ENTERPRISE IN SOCIETY
Changes in the social character of science in the 20th century evolved hand in hand with theoretical
reformulations.
Technical Careers
In the 20th century the pursuit of science became a professional career and the full-time occupational
concern of nearly a million people in the United States who engaged in research and development. Although
wide latitude exists in scientific employment, the social pathways available for scientific training are quite
narrow. Whether an individual will become a scientist is in many respects determined during his or her
middle school years. Almost universally, the individual has to graduate from high school with the requisite
science courses, complete an undergraduate major in a field of science, pursue graduate work, obtain a
Ph.D., and then (usually) continue training for a period of years after the Ph.D. in temporary positions known
as "post-docs."
The Ph.D. constitutes the scientist's license, but from there on career paths diverge. Traditionally, the norm
has been to pursue careers of research and teaching in universities. Increasingly, young men and women of
science find productive lives for themselves in government and private industry. Also, what scientists do in
the present day spans a broad range of practice: from disinterested, pure-science research in universities or
specialized research facilities, to more mundane, applied-science work in industry. Generally speaking,
whether in academe, industry, or government, younger scientists tend to be the more active producers of
scientific knowledge.
The status of women in science changed dramatically in the 20th century. Women have never been
completely absent from the social or intellectual histories of science; and, as they gained admission to
universities in the 19th century, gradually increasing numbers of women became scientists. Heroic accounts
often single out Marie Curie, who was the first female professor at the Sorbonne (see Paris, Universities of)
and the only woman ever to win two Nobel Prizes (in 1903 and 1911). The Austrian physicist Lise Meitner
held a professorship at the University of Berlin before escaping Nazi Germany; she played a seminal role in
articulating the theory behind the atomic bomb, and her case exemplifies the increasing presence and
professionalism of women in modern science. With her mastery of X-ray diffraction techniques, the physical
chemist Rosalind Franklin was instrumental in the discovery of the double helix of DNA. Franklin's story
presents cautionary lessons as well, however, because the social system of British science in the 1950s did
little to welcome her or give her credit for her work. It remains difficult for women to pursue science-based
careers, but, mirroring other shifts in Western societies over the last several decades, opportunities have
improved for women; the idea of a female scientist is no longer exceptional.
Scientists are expected to publish research results in scientific journals, and the old adage of "publish or
perish" continues to apply to contemporary science. Participation in professional societies forms another
requisite part of scientific life today (see scientific associations), as does getting grants. Grants typically
include funds for investigators' salaries, graduate and post-doc support, and equipment. In what amounts to
subsidies of institutions, budgets usually include so-called "indirect" costs paid not to support research but to
underwrite the grantee's institution.
Honorary organizations such as the National Academy of Sciences (founded in 1863) cap the elaborate
social organization of contemporary science. International organizations such as the International Council of
Scientific Unions (1931) govern science on a world level. Finally, at least in the public mind, at the
very top of the social and institutional pyramid of contemporary science stand the Nobel Prizes, given
annually since 1901 in the fields of physics, chemistry, and physiology or medicine.
Explosive Growth
More was involved in 20th-century science than simply the linear evolution of scientific ideas or successive
stages in the social and professional development of science. The exponential growth of knowledge
represented another characteristic feature of science in the 20th century. By every indication, science has
grown at a geometric rate since the 17th century. Several paradoxical consequences follow from this
exponential growth. For example, a high proportion of all the scientists who ever lived, some 80 to 90
percent, are said to be alive today. Given its exponential growth, the scientific enterprise clearly cannot
expand indefinitely, for such growth would sooner or later consume all resources. Indeed (and as predicted),
an exponential growth rate for science has not been maintained since the 1970s. In any particular area of
science, however, especially "hot" newer ones such as superconductivity or AIDS research, exponential
growth and an ultimate plateau remain the typical pattern.
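Growth of this kind is conveniently described by a simple doubling model (an illustrative formula, using the doubling time cited earlier in this article rather than any additional data):

\[
N(t) = N_0 \, 2^{\,t/\tau},
\]

where N is any measure of scientific output, N_0 its value at some starting date, and \tau the doubling time. With \tau of roughly 15 years, output grows about sixteenfold over 60 years, which makes clear why such growth cannot continue indefinitely.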
Other methods of growth measurement, notably citation studies, add to these basic understandings.
Scientists back up their results by referencing other papers, and much can be learned from studying these
citation patterns. For example, a significant percentage of published work is never cited or actively used. In
essence, much of this production disappears into "black holes" in the scientific literature (a fraction,
however, that is considerably less than in the humanities). Citation studies also show a drastically skewed
inequality of scientific productivity among researchers. That is, for any collection of scientists, a handful will
be big producers while the vast majority will produce little, if any, work. A corollary of this differential
productivity affects institutions, too, in that a handful of elite universities dominates the production of science.
Citation studies also show an unusual "half-life" of scientific information. That is, older scientific results prove
less useful to practicing scientists than more recent science, and therefore citations fall off with time. In
contrast with the traditional humanities, the scientific enterprise thus displays a characteristic ahistorical
present-mindedness.
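The half-life metaphor can be made precise with an equally simple decay model (again an illustrative formula, not a result reported in the citation studies themselves):

\[
C(t) = C_0 \, 2^{-t/t_{1/2}},
\]

where C(t) is the rate at which a paper is still cited t years after publication and t_{1/2} is the field's citation half-life; the shorter the half-life, the more pronounced the present-mindedness.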
Big Science and the Bomb
The development of the atomic bomb by the United States during World War II marks a watershed in the
history of modern science and technology. The reasons are twofold. First, the case dramatically revealed the
practical potential of science and what could emerge from turning theory toward useful ends. Second, it
demonstrated what might be forthcoming when government supported large-scale scientific research and
development with abundant resources. Neither of these considerations was entirely new in the 1940s. The
novelty of the atomic bomb case, and what it portended for the future of science and technology in the
second half of the 20th century, stemmed from the combination of these two factors: large-scale
government initiatives, used to exploit scientific theory for practical ends.
The scientific theory behind the bomb emerged only in the late 1930s. In 1938 the German physicist Otto
Hahn demonstrated that certain heavy elements (such as uranium) could undergo nuclear fission and be
"split" into more simple components. Then, in 1939, Lise Meitner calculated the immense amounts of energy
that could be released from an explosive nuclear chain reaction. On Aug. 2, 1939, with war nearly underway
in Europe, Albert Einstein wrote a historic letter to President Franklin Roosevelt about the matter. The result
was the largest science-based research-and-development venture to date in history. The Manhattan Project,
as it was called, involved 43,000 people working in 37 installations across the country, and it ended up
costing (in contemporary money) $2.2 billion. On July 16, 1945, directed by American physicist J. Robert
Oppenheimer, the team set off the world's first atomic explosion in New Mexico.
The atomic bomb brought World War II to a dramatic end. It also launched the cold war. The military-industrial establishments of the United States and the Soviet Union raced to develop larger atomic weapons
and then, from 1952, even more powerful thermonuclear bombs that derived their energy from the nuclear
fusion of hydrogen into helium (see hydrogen bomb). World War II gave a push to a number of other
government-funded applied-science projects, such as developing radar, penicillin production, jet propulsion,
and the earliest electronic computers. The war established a new paradigm for science, government, and
university relations that has endured to this day. After World War II it became hard to think of technology
other than as applied science.
The Manhattan Project typified a new way of doing science: that of the industrialization of scientific
production, what has been called "big science." In the 20th century, research came to necessitate large
installations and expensive equipment, increasingly beyond the resources of individual experimenters or
universities. Teams of researchers began to replace the labor of individuals. Each member of such a team
was a specialized worker, and papers issuing from team-based science were sometimes authored by
hundreds of individuals. For example, the discovery of the entity known as the top quark in 1995 was
made at the Fermi National Accelerator Laboratory by two separate teams, each numbering some 450
scientists and technicians and each staffing a detector that cost $100 million. Individual or small-group
research continues in some fields, such as botany, mathematics, and paleontology, but in other areas, such
as particle physics, biomedical engineering, or space exploration, big science represented an important
novelty of the 20th century.
SCIENCE AND TECHNOLOGY
The modern merger of science and technology, or "technoscience," continued in the second half of the 20th
century. In 1995, $73 billion in federal money went to scientific research and development, a figure that
represented 2.6 percent of U.S. gross domestic product. The nation's total public and private investment in
basic research hovered at $160 billion at the century's end.
Military defense overwhelmingly drives government patronage for science in the United States. Indeed, the
science budget for the U.S. Department of Defense is three times that of the next largest consumer of
federal science dollars, the National Institutes of Health (NIH), and defense-related science expenditures
continue to receive more than half of all federal science spending. Similarly, federal dollars overwhelmingly
go to support applied science. The descending order of funding from the Defense Department, to the NIH, to
the National Aeronautics and Space Administration (NASA), and to the National Science Foundation (NSF),
is also revealing in this regard, in that funding for defense, health, and NASA space projects (which have
always entailed significant political and employment goals) come well before support of the agency
nominally charged with the promotion of pure scientific research, the NSF. Even there, only $2.3 billion of
the NSF's overall 1995 budget of $3.4 billion went to research.
For many years following the start of the cold war, the big applied science of space exploration claimed its
impressive share of U.S. appropriations. NASA itself came into being in the late 1950s, at a time when the
Soviet Union was alarming the United States by its launch of the world's first successful artificial satellites
(see Sputnik). However, after a few early disasters the American manned space program forged ahead with
an effort that resulted in the 1969 landing of the first humans on a celestial body (the Moon) other than the
Earth. The Soviet Union also reached the Moon by means of automated vehicles, but its space ventures
were far surpassed by U.S. probes that explored the far limits of the solar system and returned important
images and data on its bodies. Many nations have since joined in the space enterprise to different degrees
and for a wide range of purposes (see space programs, national). The use of satellite data in various
scientific, technological, meteorological, and military programs has long since become a staple of human
life, and Russia, the United States, and a number of other countries are currently engaged in plans for
building a permanent space station.
Examples of applied science in the 20th century were almost too numerous to mention: electric power, radio,
television, satellite communications, computers, huge areas of scientific medicine and pharmaceuticals,
genetics, agriculture, materials science, and so on. They all testified to the power of modern
scientific research and development. The historical union of science and technology in the 20th century
produced a significant new force for change. Marxist thinkers in the former Soviet Union recognized the
novelty of this development, labeling it the 20th century's "scientific-technological revolution." The term has
not been taken up generally, but it captures much of what is at issue here: the effective merger of science
and technology in the 20th century and the increasing importance of science-based technologies and
technology-based sciences for the economic and social well-being of humanity today.
In the period following World War II, science enjoyed an unquestioned moral, intellectual, and technical
authority. Through the operations of its apparently unique "scientific method," theoretical science seemed to
offer an infallible path to knowledge, while applied science promised to transform human existence.
Paradoxically, or perhaps not, just as the effects of a fused science-technology culture began to refashion
advanced societies (with the bomb, television, interstate highways, computers, the contraceptive pill, and so
on), so, too, beginning in the 1960s, antiscience and antitechnological reactions began to manifest
themselves. The counterculture of the 1960s and '70s represents one example. Failures of nuclear reactors
at Three Mile Island (1979) and Chernobyl (1986), perceived threats from depletion of the ozone layer (see
global warming), and the spread of new diseases such as AIDS and the Ebola virus made many people
suspicious of the material benefits of science and technology. Increasing societal concerns with ecology,
recycling, "appropriate technologies," and "green" politics derive from such suspicions.
Work in the philosophy of science and the sociology of knowledge has challenged any uniquely privileged
position for science as a means of knowing or staking claims about any ultimate truth. Most thinkers now
recognize that scientific-knowledge claims are relative, inevitably fallible human creations and not final
statements about an objective nature. Although some would seize on this conclusion to promote intellectual
anarchy or the primacy of theology, paradoxically, no better mechanism exists for understanding the natural
world than the human enterprise of science and research.
Zoology
Zoology is the study of animals. For a long time animals were simply classified as any living organism that
could not be included in the plant kingdom. Today they are considered to be those organisms which cannot
be classified into any of the other three kingdoms: the Monera, Protista, or Fungi. An appreciation of this
concept, and clear insight into the definition of an animal, requires a description of each of the five kingdoms
of living organisms; for this, see classification, biological.
EARLY DEVELOPMENT
Although most animals that were amenable to domestication, such as cattle and poultry, were already
domesticated by 7000 BC, early knowledge, attitudes, and studies in regard to zoology can be characterized,
respectively, as elementary, superstitious, and mystical. Nonetheless, based on pictorial bas-reliefs in blocks
of clay and stones of the Assyrians and Babylonians, the hieroglyphics in papyri of the Egyptians, and the
writings of the Chinese on strips of bamboo, it is known that these ancient civilizations were acquainted with
such medical topics as blood circulation, functions of human organs, headaches, and disease. Almost all the
work prior to 500 BC, however, was confined to the medical and domestication-related aspects of the animal sciences.
The medical aspects were brought to a pinnacle of thought in the 5th and 4th centuries BC by Hippocrates,
who is credited with writing nearly 60 books on the subject, but the science of zoology itself can be said to
have had its origin in the 4th century BC in the works of Aristotle, who has had about 146 books attributed to
him.
So all-encompassing were Aristotle's contributions that it is perhaps correct to say he was the shaper of four
biological cornerstones of zoology: taxonomy, anatomy, physiology, and genetics. Taxonomy is the science
of classification, anatomy deals with the structure of organisms, physiology with their function, and genetics
with heredity. On these cornerstones have been built all of the later zoological sciences. One may also see
how the eclectic, or all-encompassing, sciences, such as ecology and evolution, became possible.
TAXONOMY
Historically, taxonomy is rooted in the breakdown or assignment of animals into eight classes by Aristotle, in
the binomial classification system of the 18th-century naturalist Carolus Linnaeus, in the classification
scheme by function rather than structure of Jean Baptiste Lamarck, and in the extensions of the Linnaeus
system to higher taxonomic categories by Georges Cuvier. Because the numbers and kinds of animals now
known are so vast, modern taxonomists find it necessary to specialize in manageably small groups of
organisms, such as beetles or rodents. Taxonomists also contribute to evolutionary thought, and some of
them become deeply engaged in what might be considered the only major directly derivative science,
numerical taxonomy. Numerical taxonomy uses statistics and sophisticated computerized approaches in the
determination of animal classification.
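As a rough illustration of what such computerized approaches involve, the sketch below scores pairs of taxa by the fraction of character states they share and picks the most similar pair to cluster first. The taxa, the coded characters, and the simple matching coefficient used here are illustrative assumptions, not details taken from the text.

    # Illustrative sketch of a numerical-taxonomy step: build a similarity
    # matrix from coded character states, then find the closest pair of taxa.
    from itertools import combinations

    characters = {                      # 1 = character state present, 0 = absent (invented data)
        "beetle A": (1, 1, 0, 1, 0),
        "beetle B": (1, 1, 0, 0, 0),
        "rodent A": (0, 0, 1, 0, 1),
    }

    def similarity(a, b):
        """Simple matching coefficient: shared states divided by total characters."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    scores = {pair: similarity(characters[pair[0]], characters[pair[1]])
              for pair in combinations(characters, 2)}
    print(max(scores, key=scores.get))  # the pair clustered first: ('beetle A', 'beetle B')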
ANATOMY
Anatomy is the study of the structure of a single animal species, such as humans. The study of the internal
structures of groups of species is referred to as comparative anatomy. The roots of anatomy lie in the works
of numerous individuals, notably Galen, who, during the 2nd century AD, wrote numerous medical treatises,
beginning with his observations as a surgeon and physician to gladiators; Leonardo da Vinci and
Michelangelo Buonarroti, who, during the Renaissance, performed dissections and illustrated their
observations; and Andreas Vesalius, who correctly described, depicted, and delineated the anatomy of
bones, muscles, blood vessels, nerves, internal organs, and the brain.
The main derivatives of anatomy are morphology, histology, cytology, and embryology. Morphology is an
aspect of anatomy in which emphasis is placed on the structure, form, and function of an organism and its
parts. The terms morphology and anatomy are often used interchangeably.
Histology is concerned with the interrelations of the four basic animal tissues: epithelial, connective, nerve,
and muscle. The development of this science required the invention of the microscope and was given
impetus in the 17th century by scientists such as Marcello Malpighi, who discovered capillary circulation and
described the inner layer of skin, the papillae of the tongue, and the structure of the cerebral cortex of the
brain; Antoni van Leeuwenhoek, who recognized the significance of human sperm (1677), observed redblood-cell circulation, described the compound eye of insects, and observed muscle fibers in the iris of the
eye; Jan Swammerdam, who observed and described blood corpuscles and identified histological changes
associated with insect metamorphosis; Marie François Xavier Bichat, who in 1797 identified 21 different
tissues, including bone, muscle, and blood; and Horace Darwin (son of the author of On the Origin of
Species), who in 1885 designed the microtome, an apparatus to make thin slices of tissue for microscopic
examination.
Cytology is the study of cells. The term cell was coined during the 17th century by Robert Hooke, from the
Latin root cella ("a little room"). The fathers of this science include Robert Brown (1773-1858), who
described the nucleus; Theodor Schwann and Matthias Jakob Schleiden (see Schwann, Theodor, and
Schleiden, Matthias Jakob), who generalized that all animals and plants are composed of cells; Eduard
Strasburger (1844-1912), who described somatic cell division (in plants); Edouard van Beneden, who
discovered (1887) meiosis and understood its significance in reproduction; and Rudolf Virchow, who
augmented the understanding of cytology by virtue of his studies of cellular pathology.
Embryology was treated at some length by Aristotle, who used the term embryon in his writings. In the
absence of microscopy, however, Aristotle was uncertain whether the embryo was preformed at its onset
and merely had to enlarge during development or whether it began as a formless structure and took form in
the course of its growth. More than 20 centuries were to elapse before this issue could be resolved. In the
interim, certain scientists, such as Swammerdam and Leeuwenhoek, argued in favor of the preformation
theory. Others, such as William Harvey, who described the closed circulatory system present in all
vertebrate and in some invertebrate animals, and Caspar Friedrich Wolff, who made significant contributions
to the knowledge of vertebrate urinary and genital systems, favored the latter, or epigenetic, theory of
embryogenesis. It was not until the 19th century, when Karl Ernst von Baer discovered (1828) and studied
the development of the true egg of mammals, that the preformation theory was abandoned.
Many scientists confirmed von Baer's findings, which were extended to other animal groups and ultimately
led Ernst Haeckel to proclaim a connection between the development of an individual organism and animal
evolution. This connection was popularized by the phrase "ontogeny recapitulates phylogeny" (development
of an individual retraces the evolutionary development of animals). Haeckel's thesis was oversimplified, as
was later shown by others, but it has served to emphasize the numerous common factors and important
changes that are seen in tracing the evolutionary development from lower to higher or from unspecialized to
specialized forms.
PHYSIOLOGY
Physiology also had its scientific beginnings in studies conducted by Aristotle. He observed and described
the functions of numerous structures in marine animals. Most of the contributions made by Aristotle to
physiology lie in the domain of classical functional anatomy. That is, by observing certain structures in
animals he was able to deduce their functions. Similar kinds of contributions were made by numerous
investigators until about the middle of the 17th century. At this time a number of significant philosophical
theories were advanced. They were important because they set the stage for a physical-chemical approach
to physiology. From this viewpoint, René Descartes, in the 17th century, offered a mechanistic explanation
for the operation of the nervous system, as did G. A. Borelli (1608-79) for muscular action. Neither of these
workers, however, obtained verification of their theories through direct observations on living systems, and it
was left to other scientists to carry out this aspect of the work. The principal pioneers included Luigi Galvani,
who in the 18th century showed that a nerve could be stimulated by electricity; Emil du Bois-Reymond
(1818-96), who found that muscles and nerves produce currents of bioelectricity when they are activated;
Hermann Helmholtz, who demonstrated that both chemical and electrical activity were involved in the
propagation of the nerve impulse; Marshall Hall (1790-1857), who described the reflex arc; and Carl F. W.
Ludwig (1816-95), the inventor of the kymograph, which is used to this day to record contractions of
skeletal and heart muscles and movements involved in breathing.
In the area of metabolism, the 18th-century French scientist Antoine Lavoisier demonstrated the significance
of oxygen, carbon dioxide, and water in breathing. Claude Bernard described pancreatic enzymes that
reacted with fats, carbohydrates, and proteins; he showed that glycogen was formed in the liver from
carbohydrates, and he demonstrated that the bore, or caliber, of arterial blood vessels was regulated by
nerve fibers. During the period 1902-05, William M. Bayliss and Ernest H. Starling described hormonal
secretory activity. At this point all the basic physiologic processes now known were established in principle
but in need of physics and chemistry if they were to be advanced toward greater understanding. Edward
Kendall obtained pure thyroxine from the thyroid gland of swine in 1914; Frederick Banting and Charles Best
isolated insulin in 1921; in 1927, S. Aschheim and B. Zondek discovered the sex hormone estrone and the
hormone called chorionic gonadotropin, which is a telltale sign of human pregnancy when present in the
urine; Frederick Sanger revealed the chemical structure of insulin in 1957; and B. Gutte and R. B. Merrifield
synthesized the enzyme ribonuclease in 1969.
GENETICS
Genetics is a subject to which Hippocrates, Aristotle, and other Greek philosophers and scientists could
contribute only by speculation. The roots of modern genetics were first laid down in the 19th century by the
botanist Gregor Johann Mendel; Hugo de Vries later observed and studied genetic mutations. William
Bateson (1861-1926) coined the word genetics in 1906 and demonstrated the principle of genetic linkage,
in which certain traits of inheritance are more often transmitted together as a group than are other traits.
Thomas Hunt Morgan hypothesized that genes were discrete units of inheritance and were located at
specific points on chromosomes; he also explained sex-linked inheritance. George W. Beadle and Edward
L. Tatum demonstrated in 1941 the one gene-one enzyme hypothesis, namely, that a distinct gene is
required for the synthesis of a single type of enzyme.
In 1944, Oswald T. Avery, Colin M. MacLeod, and Maclyn McCarty showed that DNA (deoxyribonucleic
acid) was responsible for transmitting hereditary traits. Erwin Chargaff, working primarily during the period
1950-55, determined the base composition of nucleic acids. In 1953, James D. Watson and Francis Crick
proposed a model for the structure of DNA, answering the question of how hereditary material could
reproduce itself. M. W. Nirenberg began deciphering the genetic code in 1961.
ZOOLOGY TODAY
As a separate discipline, zoology is increasingly subordinated to the view that general biology as a core,
together with specializations within it, provides a more realistic framework for advancing the boundaries of
zoology. The following examples illustrate some of these approaches.
In the first approach, which could be called a Theory of Everything Biological, a program was initiated in
1963 through which everything might become known about Caenorhabditis elegans, a 1-mm-long (0.04-in)
transparent worm. This work is now ongoing in nearly 100 labs worldwide. By 1983 the exact lineage of
each of this animal's cells, beginning from a single cell and heading toward final growth in three days, was
known. By 1986 a complete wiring diagram of the animal's 302 nerve cells was known. By 1990 the exact
way in which the animal's approximately 2,000 essential genes are linked to one another in chromosomes
was known. It was also determined by then that the function of some of these genes is to program cell
death. By the year 2000 the entire human genetic sequence was mapped, opening up whole new avenues
for research, including how many of the genes possessed by the worm were inherited by humans in the
course of evolution.
The second example is verbalism, the act of converting thoughts on what is seen, heard, tasted, smelled,
felt, or emotionalized into phonetic sounds and written symbols. In what could be called a Theory of
Preverbalism, zoologists are attempting to determine to what extent animals other than humans are able to
use words and syntax and to create language (see animal communication).
Special attention is being given to the pygmy chimpanzee, or bonobo, Pan paniscus. Found naturally only in
Congo (Zaire), bonobos give scientists the impression that they are more sociable, more intelligent, and
better able to communicate nonverbally than the common chimpanzee, Pan troglodytes. Currently some
zoologists are entertaining the idea that bonobos might be able to learn the English language. Whether this
will be possible anatomically, however, is another question.
The third approach deals with a mechanism through which any kind of decaying matter, such as biological
sewage sludges and feedlot manures, ceases to decay and becomes a stable form of organic matter. Any
biologic sewage and any animal body waste, when first formed, contains millions of live microbes within a
space about the size of a period.
Working with earthworms beginning in 1976, zoologists learned that earthworms can and must derive nearly
all their nutrients by feeding on live microorganisms. In the course of this finding, the zoologists also learned
that a time is reached in the process of decay when neither the sludge nor manure is suitable as a source of
nutrients. At this point no microbial species within the decaying matter can increase its population size by
feeding upon any or even all of the other microbial species also present. The decaying material ceases to
decay and becomes a chemically stable form of what may be called "fossil fuel." The zoologists worked out
the chemistry of this fossil fuel and showed that it was able to destroy bacteria, fungi, viruses, protists, and
other organisms; it could destroy polychlorinated biphenyls (PCBs), a family of molecules known to be
environmentally toxic.
In 1985 the zoologists began a study to determine whether earthworms can be deployed in the field for
managing sewage sludges year in, year out. Based on the results, known by 1987, a procedure is now
available through which three classes of earthworms, each with a different function, can be used anywhere
in the tropic and temperate zones of the world for managing the world's decaying organic wastes of any kind
in an economically and environmentally beneficial way.
Diagrams: Grasshopper Anatomy (External; Digestive and Reproductive); Bird Anatomy (External; Internal); Snake Anatomy (External; Tongue)
Geologic Time
One of the most important discoveries of modern science has been the age of the Earth and the vast length
of time encompassed by its history. The scale of this history, in the millions and billions of years, is
recognized as geologic time.
Most cultures incorporate some form of creation mythology, for example, the biblical Book of Genesis. In the
mid-17th century an Irish churchman, Bishop James Ussher, added the years in the biblical genealogies and
concluded that the Earth was created in 4004 BC. This idea persisted for a long while, although the 18th-century French scientist Georges Louis Leclerc, comte de Buffon, reasoned that the Earth cooled from an
originally molten body and that this would have required at least 75,000 years.
Buffon had to recant, but development of the principle of uniformitarianism in the late 1700s and early 1800s
provided geologists with new grounds for arguing that the Earth is far older than anyone had imagined.
Similarly, in 1859, Charles Darwin recognized that millions of years were necessary for small evolutionary
changes to accumulate and produce the variety of life seen today. Because they lacked definitive, precise
data, however, 19th-century geologists could only guess at the age of the Earth. In the meantime, an
accurate, relative geologic time scale had been developed, which placed the main events of geologic history
in their proper sequence.
THE RELATIVE TIME SCALE
The relative time scale comprises four major intervals, called eras. The oldest, or Precambrian, includes
ancient rocks whose only fossils are microorganisms and the layered mounds (stromatolites) built by blue-green algae. The next era, the Paleozoic, was dominated by marine invertebrate life, although arthropods,
mollusks, vertebrates, and plants that invaded land later in the era began rapidly to expand.
After many forms of marine life had become extinct, the Mesozoic Era began with a new radiation of marine
life and the dominance of reptiles on land. This era also closed with a number of extinctions, particularly
among the reptiles, such as the dinosaurs, and some marine groups.
Finally, the Cenozoic, or Present, Era is characterized by the dominance of mammals, insects, and flowering
plants on land and still another radiation of marine life.
During the late 19th and early 20th centuries, scientists, in an attempt to determine the Earth's age,
measured the present rates of physical processes and tried to extrapolate these rates to the past. Estimates
based on the yearly addition of salt to the oceans yielded ages of about 90 million years, whereas others,
based on the rate of accumulation of sedimentary rock, ranged from 3 million to 500 million years. These
methods were fraught with uncertainties and unproved assumptions. Thus the English physicist Lord Kelvin
resurrected Buffon's methods and, using better data, calculated that the Earth had existed for 25 million to
100 million years.
Kelvin also reasoned that the Earth and Sun had been formed at the same time, and that, given
conventional energy sources, the Sun could have emitted energy at its present rate for only about 40 million
years. Geologists and biologists familiar with the geologic and fossil records regarded these figures as being
far too low, but they lacked the quantitative data necessary to refute Kelvin.
ABSOLUTE AGE DATING
In 1896 the French physicist Henri Becquerel discovered radioactivity, the spontaneous decay of certain
elements and forms of elements. In 1906, R. J. Strutt (later 4th Baron Rayleigh) demonstrated that the heat
generated by this process refuted Kelvin's arguments.
Then an American chemist, B. B. Boltwood, found (1905-07) that in the decay of uranium (U) to lead (Pb),
the ratio Pb/U was consistently greater the older the rock containing these elements. Thus if the rate of
radioactive decay could be determined, it could be used to calculate absolute ages for the geologic time
scale.
The radioactive decay rate is unaffected by normal chemical and physical processes and is exponential. The
decay rate for each radioactive isotope is characterized by a particular half-life, the time required for half of
the remaining radioactive "parent" to decay to its stable "daughter." For example, suppose a certain
radioactive substance has a half-life of 1 million years. After a million years half of it will remain, after
another million years a quarter will remain, after still another million years an eighth will remain, and so on,
virtually to infinity. The geologic age of a rock can be calculated by estimating the amounts of parent and
daughter elements that were originally present in the rock, measuring the amounts that are now present,
and calculating the number of half-lives that must have passed to bring parent and daughter to current
levels.
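The calculation described above reduces to a single formula: if all of the daughter element measured today came from decay of the parent, the age equals the half-life multiplied by log2(1 + daughter/parent). The snippet below is a minimal sketch of that arithmetic under that assumption; the function name and the numbers are illustrative.

    # Illustrative sketch: age from the measured parent/daughter ratio,
    # assuming no daughter atoms were present when the rock formed.
    import math

    def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
        """Elapsed half-lives equal log2(1 + D/P); multiply by the half-life."""
        return half_life_years * math.log2(1 + daughter_atoms / parent_atoms)

    # Using the hypothetical 1-million-year half-life from the text: after three
    # half-lives only 1/8 of the parent remains, i.e., 7 daughter atoms per parent.
    print(radiometric_age(1, 7, 1_000_000))   # -> 3000000.0 years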
The classic techniques of dating rocks by utilizing radioactivity, called radiometric age-dating, involve the
decay of isotopes of uranium to lead, rubidium to strontium, potassium to argon, or carbon to nitrogen. With
the exception of the carbon isotope, carbon-14, all of these have half-lives long enough so that they can be used to
determine the Earth's great age. Carbon-14, which has a short half-life, is useful only for dating substances in the
late Quaternary Period. Several techniques devised in recent years substantiate the earlier dates, as well as
increase the variety of datable materials.
CALIBRATING THE RELATIVE SCALE
Radiometric dating techniques have shown that the oldest rocks are approximately 3.9 billion years old, that
the Paleozoic Era began about 543 million years ago and the Mesozoic Era about 250 million years ago,
and that the Cenozoic Era has occupied the last 65 million years. In addition, meteorites (which are probably
material left over from the formation of the solar system) and rocks from the Moon (the surface of which has
not been altered by atmospheric weathering and erosion or by tectonic processes) indicate that the Earth
and the remainder of the solar system originated 4.65 billion years ago. (See also Earth, geological history
of.)
Most radioactive elements occur in igneous and metamorphic rocks, whereas nearly all fossils, on which the
relative time scale is based, occur in sedimentary rocks. If igneous rock such as a lava flow or an ash bed is
intercalated in a sequence of sedimentary strata, the assignment of a radiometric date to the sediments is
relatively easy. In other instances, geologists must analyze regional relationships in an attempt to relate
radiometrically dated crystalline rocks and sedimentary strata. For example, if a sedimentary formation is cut
by an igneous intrusion that is 250 million years old, the geologist knows that the strata are older than this,
but not how much older. If the intrusion is truncated by an overlying sedimentary formation, the geologist
knows that these strata are less than 250 million years old but not how much less. But if this second
sedimentary formation is intruded by a second igneous body dated at 200 million years, the second set of
strata must be between 200 million and 250 million years old. Thus, geologists can continue to refine the
calibration of radiometric and relative time scales.
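The bracketing logic of this example can be written out explicitly: a stratum must be older than any igneous body that cuts it and younger than any igneous body it truncates or overlies. The short sketch below applies that rule to the hypothetical 200- and 250-million-year intrusions described above; the function and variable names are illustrative.

    # Illustrative sketch of age bracketing from cross-cutting relationships.
    # Ages are in millions of years.
    def bracket_age(cut_by, lies_above):
        """A stratum is older than intrusions that cut it, younger than bodies it overlies."""
        minimum_age = max(cut_by) if cut_by else 0.0                   # lower bound on its age
        maximum_age = min(lies_above) if lies_above else float("inf")  # upper bound on its age
        return minimum_age, maximum_age

    # The second sedimentary formation in the text: cut by a 200-Ma intrusion and
    # deposited on top of a truncated 250-Ma intrusion.
    print(bracket_age(cut_by=[200], lies_above=[250]))   # -> (200, 250)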
Geologic Time Chart
Cenozoic Era
The Cenozoic (Greek, "recent life") Era includes geologic time from the end of the Mesozoic Era 65 million
years ago to the present. It began with a cataclysm that ended the Cretaceous Period, probably when an
asteroid hit the north end of the Yucatán Peninsula of Mexico, forming the Chicxulub Crater. The dinosaurs
and many other groups of animals perished suddenly, while surviving groups, notably the small mammals,
underwent a rapid spread, producing the Earth's present life-forms. American geologists divide the
Cenozoic into the Tertiary Period, which lasted from 65 million to 1.8 million years ago, and the Quaternary
Period, from 1.8 million years ago to the present. European geologists divide the Cenozoic into the
Paleogene Period, which ended 23.8 million years ago; the Neogene Period, extending to 1.8 million years
ago; and the Quaternary Period. These periods are subdivided into the following epochs: Paleocene,
Eocene, Oligocene, Miocene, Pliocene, Pleistocene, and Holocene (see Recent Epoch). The Holocene
Epoch includes events of only the past 10,000 years.
Distribution of Cenozoic Rocks
Cenozoic rocks cover much of the Earth's surface, because they are the most-recent materials deposited
and have had less time to erode away. A blanket of Quaternary and Holocene materials, including the
Earth's soils and alluvium, covers virtually all other rocks on the surface of the Earth. Many older, Tertiary
rocks are well-consolidated sedimentary rocks formed of materials eroded from the land or continental
shelves and deposited in the seas or on land. Many are fossiliferous. Much recent Cenozoic material
comprises unconsolidated masses of gravel, sand, clay, mud, and soil. Volcanic deposits with thin overlying
carbonate and other oozes cover the ocean floor. Basalt from massive fissure eruptions covers Iceland and
the Columbia and Snake river plateaus. The Miocene Epoch was a time of especially active volcanoes
worldwide, and volcanism continues to be an important process on the ocean floor, around the Pacific basin,
along the Mid-Atlantic Ridge, and elsewhere.
Major Events
The Atlantic Ocean continued to open during the Cenozoic Era, and movement of tectonic plates created the
configuration of today's continents (see continental drift). This process involved collisions, still ongoing, that
created the Alps, Andes, Himalayas, Rockies and Pacific-coast mountain ranges and led to the formation of
valuable deposits. During the Pleistocene Epoch great ice sheets extended over the northern areas of
Europe and North America, and glaciers formed in high-mountain areas elsewhere (see ice ages).
Life
Angiosperms became the dominant plants during the Cenozoic Era. Mammals and birds evolved rapidly.
Primates and eventually hominids appeared in late Tertiary and early Quaternary time.
Mesozoic Era
The Mesozoic Era, covering an interval of Earth history from about 248 million to 65 million years ago,
comprises three geologic time periods: Triassic, Jurassic, and Cretaceous. The term, meaning "middle life,"
was introduced in 1841 by the English geologist John Phillips. In most places the rock layers deposited
during the three periods are separated from one another by unconformities, breaks in the sequence of
deposition of the geologic record.
Paleogeography and Tectonics
The Mesozoic was a time of transition in the history of life and in the evolution of the Earth (see Earth,
geological history of). By the close of the Paleozoic Era, geosynclines were confined to the Tethys Seas
(modern Mediterranean Sea and Middle East) and circum-Pacific region, the others having undergone the
final phases of mountain-building orogenies that transformed them into ranges. According to the theory of
plate tectonics, the supercontinent of Pangaea, created by the merging of the ancestral continents during
the Paleozoic Era, was slowly torn apart during the Mesozoic (see continental drift). In the Jurassic, shallow
seas spread northward out of the Tethys and southward from the Arctic onto western Europe. This also
happened in North America, where an Arctic sea spread over the present-day Rocky Mountains south to
Utah; the area adjacent to the Gulf of Mexico and Atlantic coastal plain was also inundated. These seas
retreated by the end of the Jurassic but returned in the Early Cretaceous to roughly their former extent: one
reached from the Gulf of Mexico to the Arctic, another covered much of western Europe, and a broad
channel spread across the Sahara, joining the Gulf of Guinea and the Tethys. At the close of the Cretaceous
Period the seas retreated from the continents. Deformation of the Earth's crust, minor during the Triassic,
intensified in the Jurassic and reached a peak in the Cretaceous, during which time our present Alpine-Himalayan chain, Rocky Mountains, and Andes began forming.
Carbonates are the major sedimentary rock of the Mesozoic in the Tethyan belt; outside this region, detrital
rocks predominate. Desert deposits and red bed facies are characteristic of the Triassic, as chalk is of the
Cretaceous.
Life
Marine invertebrate life during the Mesozoic was dominated by the Mollusca, of which the ammonites were
the most important. Other important invertebrate groups were the bivalves, brachiopods, crinoids, and
corals.
The vertebrates of this era were dominated by reptiles, especially the dinosaurs. All the major groups were
established by the Triassic, and this period marked the spread of these reptiles into almost every major
habitat. The dinosaurs underwent extinction at the end of the Cretaceous. Mammals evolved from therapsid
reptiles during the Triassic, and the first birds appeared during the Jurassic (see Archaeopteryx).
Mesozoic plants consisted of the ferns and the gymnosperm orders of cycads, ginkgos, and conifers.
Angiosperms, which may have first appeared in the Triassic Period, became well established by the end of
the Mesozoic.
Paleozoic Era
The Paleozoic Era, the interval of geologic time from 543 million to 248 million years ago, is the first and
longest era for which an abundant fossil record exists, hence its name, meaning "ancient life" in Greek.
During this time, which covers the Cambrian through the Permian periods, the continents were subjected to
repeated and widespread flooding by shallow marine incursions called epicontinental seas.
GEOGRAPHY AND GEOLOGY
For most of the era, the northern continents consisted of at least three separate masses (ancestral North
America, Europe, and Siberia), none of which had its modern dimensions. The continental masses of South
America, Africa, Arabia, Madagascar, India, Australia, Antarctica, and probably at least part of China were
united into a vast supercontinent, called Gondwanaland, during the entire Paleozoic Era. The northern
continents united at the close of the era to form a supercontinent called Laurasia, and Laurasia in turn joined
Gondwanaland in the present Caribbean-western Mediterranean area to form an immense world continent,
Pangea. In this process of consolidation, at least three oceans (the Iapetus, or proto-Atlantic; the Theic,
between Laurasia and Gondwanaland; and the Uralian, between Europe and Siberia) were consumed,
and three great ancient mountain ranges (the Acadian-Caledonian, the Hercynian, and the Uralian) were
formed. Other mountain ranges formed above subduction zones at the junctions of continents and ocean
basins.
The first three periods of the Paleozoic Era (the Cambrian, Ordovician, and Silurian) were initially
recognized in the sequence of deformed sandstones, shales, limestones, and volcanic rocks of Wales.
Similar deformed strata in Devon and Cornwall, England, gave the name to the succeeding Devonian
Period. The overlying sandstones, limestones, and coal measures of Great Britain were the basis for the
Carboniferous Period. In North America the lower marine rocks were named Mississippian after the upper
Mississippi River Valley, and the overlying coal beds were named the Pennsylvanian after the state of
Pennsylvania. The Permian Period, which closes the era, was named for marine limestones west of the Ural
Mountains in the Perm province of Russia.
Notable economic resources found in Paleozoic rocks include coal in Upper Carboniferous (Pennsylvanian)
strata of eastern North America and western Europe and in Permian strata in the continents that formed
Gondwanaland, petroleum in the midcontinent area of North America, and building stone from the numerous
pure limestones and sandstones of the American Midwest.
For most of the Paleozoic Era ancestral North America and Europe were in the tropics, as evidenced by
their extensive carbonate and salt deposits, abundant fossil reefs, paleomagnetic data, and, in the latter
periods, great tropical swamp forests. Gondwanaland was located in south polar areas throughout the
Paleozoic Era, although its northern fringe (present Australia) projected into the tropics, and extensive
portions of the supercontinent were in the south temperate zone. As Gondwanaland moved, the pole shifted
from northwest Africa early in the era to southern Africa in the Carboniferous Period and to Antarctica in the
Permian Period. Extensive evidence of continental glaciation occurs in the former polar areas, particularly in
the Ordovician strata of northwest Africa and in the Permo-Carboniferous strata of most of the present
southern continents (see ice ages). Little land was located north of the tropics throughout the Paleozoic Era.
The North Pole lay then within a vast proto-Pacific Ocean called Panthalassa.
LIFE
Marine invertebrates left abundant fossils during all Paleozoic periods but were particularly important in the
Cambrian, Ordovician, and Silurian periods, because life only began to appear on land during the late
Ordovician. These early Paleozoic seas teemed with sponges, corals, bryozoans (moss-animals),
brachiopods (lamp shells), primitive mollusks such as the shelled nautiloids, primitive arthropods such as the
trilobites, stalked echinoderms, and the enigmatic graptolites and conodonts. Many groups continued
throughout the era, but most major subgroups and all trilobites became extinct near the close of the
Permian, representing one of the most significant changes in the fossil record. Most geologists attribute the
great extinctions to the continental consolidation and ensuing species competition or to the withdrawal of
seas that occurred during the Late Permian Period and the Early Mesozoic Era. Some researchers have
proposed theories that these organisms died out as a result of environmental changes caused by the impact
of one or more asteroids, similar to what is believed to have happened to the dinosaurs at the end of the
Cretaceous Period. Invertebrates invaded the land at the close of the Ordovician. Ancestors of arachnid and
insect arthropods, as well as snails among the mollusks, eventually made the transition to land and were
especially common in the Carboniferous coal swamps.
Fish appeared sometime in the Cambrian Period, became abundant in the Devonian Period (the so-called
Age of Fishes), and continued to thrive thereafter. In the late Devonian Period, however, lobe-finned, air-breathing fish evolved into the earliest amphibians, the first land-dwelling vertebrates. Bulky, alligatorlike
labyrinthodont amphibians dominated the Carboniferous coal swamps, but reptiles, which developed the
land-laid egg, had already appeared. Reptiles became dominant in the Permian Period, and many types
evolved, including the lineage that was ancestral to mammals and that also contained the finbacks (see
Dimetrodon; Edaphosaurus).
During the early part of the Paleozoic Era, aquatic algae were the dominant plants, but in the Late
Silurian-Early Devonian interval, plants invaded the land. Primitive, leafless psilophytes were significant at
first, but soon a variety of their spore-bearing descendants evolved and diversified. By Late Devonian time,
ancestors of the modern club mosses, horsetails (scouring rushes), and ferns had evolved into large trees
forming the first forests. The great swamp forests of the Carboniferous Period, which formed the world's
most extensive and most important coal deposits, included these forms plus gymnospermous trees called
the cordaites. In the cooler, drier Permian Period, conifers came to dominate the land, and other
gymnosperms appeared. On land, the transition from the Paleozoic to the Mesozoic Era was marked by
extinctions that were not as massive as those which occurred in the marine realm.
Precambrian Time
Precambrian time includes all of geological history preceding the Cambrian Period, all time prior to 543
million years ago. Most Precambrian rocks are highly deformed and metamorphosed so that evidence for
unconformities and other indicators of relative age are difficult to recognize. In addition these rocks contain
few fossils that could be used to create the kinds of fossil-based subdivisions that have been made in the
Phanerozoic Eon. Thus it has been difficult to come up with a scheme to represent even the relative ages of
the various recognizable units within rocks of Precambrian age. The oldest rocks on Earth are now
estimated to be nearly 4 billion years old, based on analyses of radioactive elements in these rocks. The
Earth itself is estimated, on the basis of analogies with the ages of meteorites, to have formed about 4.6
billion years ago. Precambrian time thus extended over more than 4 billion years, covering over 88% of the
Earth's history.
Precambrian rocks were originally subdivided into an older group of more highly metamorphosed rocks,
called the Archean, and a younger group of less metamorphosed rocks, called the Proterozoic. This criterion
was abandoned when it became clear that some very ancient rocks are much less metamorphosed than
some much more recent rocks. Subdivisions of Precambrian rocks were made on various continents and in
various regions, but these were not easily correlated with subdivisions in other areas, so no standard
subdivisions could be accepted. Eventually fossils of various simple organisms were found in some
Precambrian rocks that were relatively undeformed and unmetamorphosed. These included the Ediacaran
fauna, first discovered in Australia and since found on several other continents as well. These fossils led to
the creation of an upper Precambrian time and rock unit called the Eocambrian Period. Ancient algal
organisms called stromatolites are now known to have existed in some of the world's oldest known rocks.
The development of sophisticated radiometric age-dating methods has made it possible to assign well-accepted ages for many Precambrian rocks, permitting the development of a fuller sense of what happened
on the Earth at various times within this vast expanse of time.
Many geologists now divide the vast Precambrian time into shorter eons. The boundary between the
Proterozoic Eon, the generally accepted upper subdivision, and the older Archean Eon is now set at 2.5
billion years ago. The United States Geological Survey divides the Proterozoic into three subunits:
Precambrian X (2.5 to 1.6 billion years ago), Precambrian Y (1.6 to 0.8 billion years ago), and Precambrian
Z (0.8 to 0.543 billion years ago). Other scientists call these subdivisions, from oldest to youngest, the
Paleoproterozoic, Mesoproterozoic, and Neoproterozoic. Canadian geologists identify three eras called the Hadrynian
(youngest), Helikian, and Aphebian (oldest) in the Proterozoic Eon. Some geologists call the earliest interval
of Earth time, before about 4 billion years ago, the Hadean Eon. However, no rocks of this age have yet
been identified on the Earth.
THE ROCK RECORD
Precambrian rocks are widely distributed on all continents. They form wide, flat, domal masses of rock in
central continental areas such as the Canadian, Brazilian, African, Indian, Baltic, Siberian, Australian, and
Antarctic Shields, named for their resemblance to the domed surface of an overturned warrior's shield. In
recently glaciated northern shield areas of Canada, Scandinavia, and Siberia, Precambrian rocks are
spectacularly well-exposed and subject to extensive detailed studies. Deep clefts in the Earth's crust such as
the Grand Canyon provide windows through which Precambrian rocks can be observed beneath thick layers
of later rocks. In places such as the Rocky Mountains, erosion following the uplift that created these
mountains has exposed cores of Precambrian rocks that were once buried many kilometers below the
surface. Outcrops of Precambrian cores surrounded by collars of Paleozoic Era rocks can be seen in
mountain ranges such as the Black Hills, Sangre de Cristo, San Juan, Uinta, Wind River, Sandia, Big Horn,
Grand Tetons (see Teton Range), and many others. Although Precambrian rocks of Greenland and
Antarctica are largely covered with glacial ice, they can be observed in shoreline outcrops, hills surrounded
by glacial ice, and drill cores that have penetrated the ice. The forests and savanna that cover the enormous
African and Brazilian shields conceal most of the underlying Precambrian rocks, but isolated rocky outcrops
rise above the vegetation in many places. Deep drilling and seismic studies indicate that Precambrian rocks
underlie later rocks in many areas of the continents; thus they are often called the basement rocks. In many
areas younger rocks have been accreted onto the central Precambrian masses in the shields, gradually
increasing the size of the continents over time. Precambrian rocks have not been found in the ocean basins,
however.
Precambrian rocks of the North American continent can be divided into provinces of different ages and
lithologic characteristics. The oldest Archean rocks occur in the Superior Province, centered on Lake
Superior, where the ages of the rocks are greater than 2.5 billion years. The Churchill Province, with rocks
1.8 to 1.9 billion years old, forms a concentric collar surrounding the Superior Province. A northeast-trending
province just west of the Appalachian Mountains and extending from Labrador southwest to Texas (where it
is known to underlie a thick pile of younger rocks) is called the Grenville Province, represented by rocks from
0.9 to 1.2 billion years in age. Other smaller provinces lie within several areas of the Canadian Shield as
well.
In southern Africa, Precambrian groups that appear only slightly metamorphosed and contain many
sedimentary features include the Swaziland Supergroup (3.5 to 3.3 billion years old); Pongolo Group (3.2 to
2.9 billion years old); Witwatersrand Supergroup (2.8 to 2.6 billion years old); Transvaal Supergroup (2.5 to
2.0 billion years old); and Waterberg Group (1.9 to 1.5 billion years old). Divisions into similar kinds of
provinces have been made for other Precambrian areas as well. Several of these subdivisions reflect ages
when mountain-building events produced high enough temperatures and pressures to set or reset the
radiometric "clocks" within the rocks. A relatively young radiometric age may thus mask a much older
original age of the rock.
Although a large percentage of Precambrian rocks are crystalline metamorphic rocks or igneous rocks, they
include virtually all kinds of rocks, recording the same kinds of processes as rocks formed by modern
processes. Precambrian rocks of the Grand Canyon Series include limestone, sandstone, shale, and
volcanic rocks, similar to many later rocks. The boundary between Precambrian and Cambrian sandstone in
California is so gradational that the upper boundary of Precambrian rocks is arbitrarily fixed at the first
appearance of certain fossils. On the other hand, certain kinds of rocks such as the iron formations of the
Canadian Shield and some of the rocks in greenstone belts are found only in early Precambrian rocks or are
so rare among Phanerozoic rocks that they may represent conditions that no longer exist on Earth. Volcanic
rocks are very abundant in the Precambrian section. At least four episodes of glaciation occurred during the
Proterozoic Eon, producing glacial deposits between about 2.4 billion and 700 million years ago (see
glaciers and glaciation).
MAJOR EVENTS
It is believed that in the earliest stages of Precambrian time cosmic material accumulated to form the Earth.
Processes of chemical and physical differentiation led to formation of an inner metallic core and an outer
silicate mantle and crust. The entire surface of the Earth may have been volcanic, much of it molten. To
date, the oldest rocks observed on the Earth probably formed by sedimentary and volcanic processes.
Some crustal continental material formed very early in the Earth's history, but many geologists believe that
the continents expanded and thickened progressively throughout Precambrian time.
The current model of plate tectonics, involving extensive subduction zones such as those operating around
the Pacific margin, was probably fully operative at least 800 million years ago. Analysis of the patterns of
ancient magnetic directions, combined with structural and lithologic information on the continents, now
suggests that continental masses were moving about on the Earth's surface during much of the Proterozoic
and even the Archean eons. Much of the present volume of crustal material may have formed in late
Archean time as many minicontinents came together. North America and the southern continent of
Gondwanaland would have been together between 2.75 and 2.4 billion years ago and drifted apart, only to
rejoin again 1.85 billion years ago until another separation 1.7 billion years ago. Again they joined from 1.2
to 0.9 billion years ago and separated. They rejoined to form a supercontinent labeled Rodinia, which began
to break apart 750 to 700 million years ago. Northern North America (Laurentia) rifted apart from Baltica
about 600 million years ago. Western South America also separated from Laurentia at about the same time.
The end of the Proterozoic was thus a time of continental fragmentation as the Rodinian supercontinent split
into smaller plates. These fragments all came together again during the Paleozoic to form the
supercontinent Pangea. These separations and rejoinings of continents were marked by periods of
metamorphism and igneous activity.
LIFE
The 3.8 billion-year-old Archean rocks of southwestern Greenland, among the oldest on the planet, contain
structures that may be the remains of primitive algae or bacteria. Other rocks over 3 billion years old in
South Africa contain fossils of prokaryotic algae and bacteria (without nuclei; see prokaryote). Stromatolites,
algal structures that exist even today, occur in 3 billion-year-old rocks of Zimbabwe's Bulawayo Formation.
By the beginning of the Proterozoic Eon stromatolitic reefs were common. The Gunflint Chert of Ontario
contains fossils of a dozen species of prokaryotic blue-green algae ranging in age from 1.7 to 1.9 billion
years. Stromatolitic reefs are extensively developed in the Belt and Grand Canyon Supergroups of the
western United States. These 1.3 to 1.5 billion-year-old rocks also contain possible worm burrows and
fossils of jellyfish. Clearly identified worm burrows in southern Africa date from a billion years ago.
The oldest well-authenticated occurrence of eukaryotic green algae (those with nuclei; see eukaryote) is in
the Beck Springs Dolomite, 1.3 billion years old, of eastern California. (Some possible eukaryotes have been
reported in rocks such as the Gunflint Chert. In addition, chemical indications of the presence of eukaryotes
were found in 1999 in northwestern Australian shales that are 2.7 billion years old, which would push back
the appearance of eukaryotes by up to a billion years.) Fossils of the first really complex multicellular
organisms, called the Ediacaran fauna, were discovered in the Ediacara Hills of Australia in the early 1960s.
A separate Precambrian time unit called the Eocambrian Period has been established to accommodate
these 700 to 680 million-year-old organisms, which have now been found in several other parts of the world,
including southern England and eastern Newfoundland.
ECONOMIC DEPOSITS
Because of the vast extent of Precambrian time and the wide distribution of Precambrian rocks, the extent of
economic deposits should come as no surprise. Proterozoic rocks contain large deposits of copper, lead,
and zinc in Canada, Scandinavia, the United States, Africa, and South America. Gold and silver are also
found in the Canadian Shield, which contains the greatest gold belt in North America. The Witwatersrand
gold deposits of the Transvaal in South Africa are legendary, and Siberia boasts extensive deposits of both
lode and placer gold from the Precambrian. For many years most of the iron produced in the United States
came from Precambrian iron formation rocks around Lake Superior. A large percentage of the world's nickel
comes from Precambrian rocks around Sudbury, Ontario. Gemstones (see gems) are also widely distributed
in Precambrian rocks. Because there was no extensive development of organic material during the
Precambrian, however, there are no coal or oil deposits associated with these rocks.
Spiders
Spiders comprise a large, widespread group of carnivorous arthropods. They have eight legs, can produce a
filament resembling silk, and most have poison glands associated with fangs. More than 30,000 species of
spiders are found on every continent except Antarctica in almost every kind of terrestrial habitat and a few
aquatic ones as well. Spiders range in body size from about 0.5 mm (0.02 in) to 9 cm (3.5 in). The term
spider is derived from the Old English spinnan ("to spin") referring to the group's use of silk. Spiders make
up the order Araneae in the class Arachnida (see arachnid), which takes its name from the mythological
character Arachne, a peasant girl who challenged the weaving skill of the goddess Athena. Arachne equaled
Athena's skill in a contest, and in response to Athena's anger she hanged herself. Athena changed the body
of Arachne into that of a spider, so she retained her weaving skill.
Classification
The order Araneae is usually divided into two suborders with classification determined primarily by the
structure of the chelicerae (anterior appendages below the eyes). The suborder Orthognatha, or
mygalomorph spiders, comprises about 12 families and includes the trap door spiders of the family
Ctenizidae and the large tarantulas of the family Theraphosidae, often kept as pets. The suborder
Labidognatha, or araneomorphs, with about 60 families, has chelicerae with fangs that open sideways and
work against one another. These araneomorphs are the true spiders and include most of the familiar ones. Among the
major families are those that capture prey in webs (Araneidae, or orb weavers; Theridiidae, which include
the notorious black widow; and Agelenidae, which capture prey that lands on their sheet webs) and those
that hunt (Lycosidae, or wolf spiders, mostly nocturnal; Salticidae, or jumping spiders, diurnal with
excellent eyesight; and Thomisidae, or crab spiders).
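The family groupings above can be hard to keep straight from prose alone. The short Python sketch below is a study aid added here (it is not part of the source entry); it simply restates the classification as a nested dictionary, with the suborder labels and example families taken from the paragraph above.

# Study-aid sketch: the spider classification described above as a data structure.
ARANEAE = {
    "Orthognatha (mygalomorphs, about 12 families)": {
        "Ctenizidae": "trap door spiders",
        "Theraphosidae": "large tarantulas, often kept as pets",
    },
    "Labidognatha (araneomorphs, about 60 families)": {
        "Araneidae": "orb weavers (capture prey in webs)",
        "Theridiidae": "includes the notorious black widow",
        "Agelenidae": "capture prey that lands on their sheet webs",
        "Lycosidae": "wolf spiders, mostly nocturnal hunters",
        "Salticidae": "jumping spiders, diurnal with excellent eyesight",
        "Thomisidae": "crab spiders, ambush hunters",
    },
}

# Print the hierarchy: suborder, then each example family with its habit.
for suborder, families in ARANEAE.items():
    print(suborder)
    for family, note in families.items():
        print(f"  {family}: {note}")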
Structure and Function
Spiders have two major body parts, the cephalothorax and the abdomen, which are connected by a thin
pedicel. The cephalothorax bears six pairs of appendages. A pair of chelicerae with fangs associated with
food capture is followed by a pair of leglike pedipalps, which are modified in adult males as intromittent
organs for the transfer of sperm to the female. The pedipalps are followed by four pairs of walking legs.
Pedipalps and legs often have many sensory hairs. The legs have seven segments and usually bear two or
three claws, which may be obscured by the hairs, at their apex. The top of the cephalothorax is covered by a
carapace with usually four pairs of simple eyes on the anterior end. Some spiders have fewer eyes, and
certain cave species have lost their eyes entirely. The abdomen is usually unsegmented, and its ventral
surface has the genital openings.
The hemolymph (blood) of the circulatory system both distributes food and oxygen (some spiders have the
bluish oxygen-carrying pigment hemocyanin) and serves as the hydraulic system by which the legs are
extended under pressure. Much of the nervous system is condensed into a complex brain (a group of
ganglia) in the prosoma, or cephalothorax.
Spiders use two systems for respiration: thin tubes, or tracheae, that ramify from abdominal openings
(spiracles), and book lungs, which are groups of thin-walled invaginations that open on either side of the
genital region at the anterior end of the abdomen. Most true spiders have both tracheae and a pair of book
lungs, but some of the smallest have lost their book lungs, and some have two pairs of book lungs and no
tracheae.
The spinnerets are fingerlike appendages usually located at the spider's posterior end and supplied with
liquid protein silk by several silk glands within the abdomen. The protein polymerizes into dry silk as it is
pulled out of the thousands of tiny spigots covering the four to eight spinnerets. Various types of silk are
spun by spiders and are used for building traps, lining tubes and cavities, wrapping eggs, wrapping prey, and
serving as safety lines; small spiders also use silk to "balloon," letting themselves be carried by the wind.
All spiders have movable fangs at the ends of their chelicerae and most have poison glands that open at the
tips of the fangs. The poison is effective on arthropods, and a few spiders have poisons toxic to vertebrates,
including humans. In the United States the black widow, Latrodectus mactans, has a potent neurotoxin, and
the brown recluse spider, Loxosceles reclusa, possesses an ulcer-producing poison. Most spiders, however,
are harmless to humans, although the bite of some large species can be painful. Spiders are predaceous
animals that feed primarily on insects and other arthropods. They usually kill prey with their poison. Because spiders can
swallow only liquids, digestive juices are pumped out onto the prey, where digestion occurs externally. The
spider then swallows the resulting nutritive soup.
Spiders use several strategies to capture prey: active hunting, waiting in ambush, and making and using
traps of silk. The most distinctive strategy is the use of a silk orb web. This aerial net uses a minimum of silk
threads to "strain" the air for insect prey of the proper size. Most orb webs are made up of strong support
threads for the frame and radii, which radiate from the hub, and a spiral sticky thread that makes up the
catching surface. The spider often sits at the hub with each of its legs on a different radiating thread. When a
prey is caught, the spider can feel its vibrations; it runs rapidly to the prey and quickly bites it or wraps it in
silk. Some webs reflect ultraviolet light to lure insects to the web. Each species of orb weaver makes a
distinctive form of web.
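To make the web architecture described above more concrete, the following short Python sketch is an added illustration (not from the source entry) of an idealized orb web: straight support radii fanning out from the hub, overlaid by a capture spiral. The number of radii, the web radius, and the spiral spacing are arbitrary assumptions chosen only for the example.

import math

def orb_web_points(num_radii=20, radius=1.0, spiral_turns=15, points_per_turn=36):
    # Endpoints of the dry support radii, spaced at equal angles around the hub.
    radii_ends = [(radius * math.cos(2 * math.pi * i / num_radii),
                   radius * math.sin(2 * math.pi * i / num_radii))
                  for i in range(num_radii)]
    # The sticky capture spiral, modeled here as an Archimedean spiral whose
    # distance from the hub grows steadily with the angle turned.
    spiral = []
    total = spiral_turns * points_per_turn
    for k in range(total):
        theta = 2 * math.pi * k / points_per_turn
        r = radius * k / total
        spiral.append((r * math.cos(theta), r * math.sin(theta)))
    return radii_ends, spiral

radii, spiral = orb_web_points()
print(len(radii), "radii and", len(spiral), "spiral points")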
Reproduction
Reproduction is sexual, with the male often displaying complex courtship behavior (see animal courtship and
mating). The male makes a simple sperm web and deposits a drop of sperm on it from his genital pore. The
sperm is then taken up by the copulatory organs at the ends of his pedipalps. The male finds a female,
courts her, and then mates, inserting the pedipalps into special openings on the underside of the female's
abdomen, where the sperm is deposited. Within days the female will construct a silken egg case; the eggs
are fertilized as they are laid into the egg case. A single egg sac may have from a few to 3,000 eggs.
Development begins in the egg sac, and the first molt usually takes place there. After emerging, the
spiderlings undergo from 5 to 15 molts in direct development to the mature form. In some spiders the mother
protects her young for a few days after emergence. A few species exhibit colonial and subsocial systems
atypical of most spiders. Most spiders live about a year, but some live up to 20 years.
Importance
Spiders play an important role in natural ecosystems as insect predators. Their silk, which weight for weight
is several times stronger than steel, is used commercially in the preparation of crosshairs for optical
instruments, and research scientists are seeking ways to manufacture the material artificially. Spiders are used to test
certain drugs, because drugs given to spiders can affect their web building. They have also been taken on
space flights to study the effects of a gravity-free environment on web building.