PAPER 2000

Question: 1

(a) Al-Beruni

Al-Biruni (973-1050?), Persian scientist, who wrote on a wide variety of scientific subjects. His most important contributions as a scientist were his keen observations of natural phenomena, rather than theories. Sometimes called “the master,” he became one of the best-known Muslim scientists of his time. Al-Biruni was born in what is now Uzbekistan. At the time, it was part of a vast region called Persia. Al-Biruni's records show that he wrote 113 works, but most of them have been lost. His subjects included astronomy, astrology, chronology, geography, mathematics, mechanics, medicine, pharmacology, meteorology, mineralogy, history, religion, philosophy, literature, and magic. One or more books on most of these subjects have survived. Al-Biruni's important works include Canon, his most comprehensive study of astronomy; Densities, which records specific gravities of various metals, liquids, and gems; Astrolabe, one of the most valuable descriptions of that astronomical instrument; Pharmacology, which contains more than 700 descriptions of drugs; and India, his best-known work, in which he used his knowledge of Sanskrit to describe Indian customs, languages, science, and geography.

(b) Water Pollution

I INTRODUCTION

Water Pollution, contamination of streams, lakes, underground water, bays, or oceans by substances harmful to living things. Water is necessary to life on earth. All organisms contain it; some live in it; some drink it. Plants and animals require water that is moderately pure, and they cannot survive if their water is loaded with toxic chemicals or harmful microorganisms. If severe, water pollution can kill large numbers of fish, birds, and other animals, in some cases killing all members of a species in an affected area. Pollution makes streams, lakes, and coastal waters unpleasant to look at, to smell, and to swim in. Fish and shellfish harvested from polluted waters may be unsafe to eat.
People who ingest polluted water can become ill, and, with prolonged exposure, may develop cancers or bear children with birth defects.

II MAJOR TYPES OF POLLUTANTS

The major water pollutants are chemical, biological, or physical materials that degrade water quality. Pollutants can be classed into eight categories, each of which presents its own set of hazards.

A Petroleum Products

Oil and chemicals derived from oil are used for fuel, lubrication, plastics manufacturing, and many other purposes. These petroleum products get into water mainly by means of accidental spills from ships, tanker trucks, pipelines, and leaky underground storage tanks. Many petroleum products are poisonous if ingested by animals, and spilled oil damages the feathers of birds or the fur of animals, often causing death. In addition, spilled oil may be contaminated with other harmful substances, such as polychlorinated biphenyls (PCBs).

B Pesticides and Herbicides

Chemicals used to kill unwanted animals and plants, for instance on farms or in suburban yards, may be collected by rainwater runoff and carried into streams, especially if these substances are applied too lavishly. Some of these chemicals are biodegradable and quickly decay into harmless or less harmful forms, while others are nonbiodegradable and remain dangerous for a long time. When animals consume plants that have been treated with certain nonbiodegradable chemicals, such as chlordane and dichlorodiphenyltrichloroethane (DDT), these chemicals are absorbed into the tissues or organs of the animals. When other animals feed on these contaminated animals, the chemicals are passed up the food chain. With each step up the food chain, the concentration of the pollutant increases. In one study, DDT levels in ospreys (a family of fish-eating birds) were found to be 10 to 50 times higher than in the fish that they ate, 600 times the level in the plankton that the fish ate, and 10 million times higher than in the water.
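The food-chain multipliers quoted in that study can be turned into a short arithmetic sketch. This is only an illustration of the biomagnification idea; the starting water concentration of 1.0 is an arbitrary unit, and the level names are labels chosen here, not data from the study:

```python
# Biomagnification sketch: relative DDT concentration at each trophic level,
# built from the multipliers quoted above (all relative to the water).
# The water concentration of 1.0 is an arbitrary illustrative unit.

def biomagnify(water_concentration=1.0):
    levels = {"water": water_concentration}
    # Ospreys carried 10 million times the water concentration.
    levels["osprey"] = water_concentration * 10_000_000
    # Ospreys were 10 to 50 times higher than the fish they ate,
    # so the fish fall somewhere in this range:
    levels["fish_high"] = levels["osprey"] / 10
    levels["fish_low"] = levels["osprey"] / 50
    # The fish carried 600 times the plankton level.
    levels["plankton"] = levels["fish_high"] / 600
    return levels

for name, conc in biomagnify().items():
    print(f"{name:>10}: {conc:,.1f}")
```

Even with the arbitrary starting unit, the sketch makes the key point visible: each step up the chain multiplies the concentration, so the top predator ends up many orders of magnitude above the water itself.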
Animals at the top of food chains may, as a result of these chemical concentrations, suffer cancers, reproductive problems, and death. Many drinking water supplies are contaminated with pesticides from widespread agricultural use. More than 14 million Americans drink water contaminated with pesticides, and the Environmental Protection Agency (EPA) estimates that 10 percent of wells contain pesticides. Nitrates, a pollutant often derived from fertilizer runoff, can cause methemoglobinemia in infants, a potentially lethal form of anemia that is also called blue baby syndrome.

C Heavy Metals

Heavy metals, such as copper, lead, mercury, and selenium, get into water from many sources, including industries, automobile exhaust, mines, and even natural soil. Like pesticides, heavy metals become more concentrated as animals feed on plants and are consumed in turn by other animals. When they reach high levels in the body, heavy metals can be immediately poisonous, or can result in long-term health problems similar to those caused by pesticides and herbicides. For example, cadmium in fertilizer derived from sewage sludge can be absorbed by crops. If these crops are eaten by humans in sufficient amounts, the metal can cause diarrhea and, over time, liver and kidney damage. Lead can get into water from lead pipes and solder in older water systems; children exposed to lead in water can suffer mental retardation.

D Hazardous Wastes

Hazardous wastes are chemical wastes that are either toxic (poisonous), reactive (capable of producing explosive or toxic gases), corrosive (capable of corroding steel), or ignitable (flammable). If improperly treated or stored, hazardous wastes can pollute water supplies. In 1969 the Cuyahoga River in Cleveland, Ohio, was so polluted with hazardous wastes that it caught fire and burned.
PCBs, a class of chemicals once widely used in electrical equipment such as transformers, can get into the environment through oil spills and can reach toxic levels as organisms eat one another.

E Excess Organic Matter

Fertilizers and other nutrients used to promote plant growth on farms and in gardens may find their way into water. At first, these nutrients encourage the growth of plants and algae in water. However, when the plant matter and algae die and settle underwater, microorganisms decompose them. In the process of decomposition, these microorganisms consume oxygen that is dissolved in the water. Oxygen levels in the water may drop to such dangerously low levels that oxygen-dependent animals in the water, such as fish, die. This process of nutrient enrichment and the resulting depletion of oxygen to deadly levels is called eutrophication.

F Sediment

Sediment, soil particles carried to a streambed, lake, or ocean, can also be a pollutant if it is present in large enough amounts. Soil eroded after the removal of soil-trapping trees near waterways, or carried by rainwater and floodwater from croplands, strip mines, and roads, can damage a stream or lake by introducing too much nutrient matter. This leads to eutrophication. Sedimentation can also cover streambed gravel in which many fish, such as salmon and trout, lay their eggs.

G Infectious Organisms

A 1994 study by the Centers for Disease Control and Prevention (CDC) estimated that about 900,000 people get sick annually in the United States because of organisms in their drinking water, and around 900 people die. Many disease-causing organisms that are present in small numbers in most natural waters are considered pollutants when found in drinking water. Such parasites as Giardia lamblia and Cryptosporidium parvum occasionally turn up in urban water supplies. These parasites can cause illness, especially in people who are very old or very young, and in people who are already suffering from other diseases.
In 1993 an outbreak of Cryptosporidium in the water supply of Milwaukee, Wisconsin, sickened more than 400,000 people and killed more than 100.

H Thermal Pollution

Water is often drawn from rivers, lakes, or the ocean for use as a coolant in factories and power plants. The water is usually returned to the source warmer than when it was taken. Even small temperature changes in a body of water can drive away the fish and other species that were originally present, and attract other species in place of them. Thermal pollution can accelerate biological processes in plants and animals or deplete oxygen levels in water. The result may be fish and other wildlife deaths near the discharge source. Thermal pollution can also be caused by the removal of trees and vegetation that shade and cool streams.

III SOURCES OF WATER POLLUTANTS

Water pollutants result from many human activities. Pollutants from industrial sources may pour out from the outfall pipes of factories or may leak from pipelines and underground storage tanks. Polluted water may flow from mines where the water has leached through mineral-rich rocks or has been contaminated by the chemicals used in processing the ores. Cities and other residential communities contribute mostly sewage, with traces of household chemicals mixed in. Sometimes industries discharge pollutants into city sewers, increasing the variety of pollutants in municipal areas. Pollutants from such agricultural sources as farms, pastures, feedlots, and ranches contribute animal wastes, agricultural chemicals, and sediment from erosion. The oceans, vast as they are, are not invulnerable to pollution. Pollutants reach the sea from adjacent shorelines, from ships, and from offshore oil platforms. Sewage and food waste discarded from ships on the open sea do little harm, but plastics thrown overboard can kill birds or marine animals by entangling them, choking them, or blocking their digestive tracts if swallowed.
Oil spills often occur through accidents, such as the wrecks of the tanker Amoco Cadiz off the French coast in 1978 and the Exxon Valdez in Alaska in 1989. Routine and deliberate discharges, when tanks are flushed out with seawater, also add a lot of oil to the oceans. Offshore oil platforms also produce spills: The second largest oil spill on record was in the Gulf of Mexico in 1979 when the Ixtoc 1 well spilled 530 million liters (140 million gallons). The largest oil spill ever was the result of an act of war. During the Gulf War of 1991, Iraqi forces destroyed eight tankers and onshore terminals in Kuwait, releasing a record 910 million liters (240 million gallons). An oil spill has its worst effects when the oil slick encounters a shoreline. Oil in coastal waters kills tidepool life and harms birds and marine mammals by causing feathers and fur to lose their natural waterproof quality, which causes the animals to drown or die of cold. Additionally, these animals can become sick or poisoned when they swallow the oil while preening (grooming their feathers or fur). Water pollution can also be caused by other types of pollution. For example, sulfur dioxide from a power plant’s chimney begins as air pollution. The polluted air mixes with atmospheric moisture to produce airborne sulfuric acid, which falls to the earth as acid rain. In turn, the acid rain can be carried into a stream or lake, becoming a form of water pollution that can harm or even eliminate wildlife. Similarly, the garbage in a landfill can create water pollution if rainwater percolating through the garbage absorbs toxins before it sinks into the soil and contaminates the underlying groundwater (water that is naturally stored underground in beds of gravel and sand, called aquifers). Pollution may reach natural waters at spots we can easily identify, known as point sources, such as waste pipes or mine shafts. Nonpoint sources are more difficult to recognize.
Pollutants from these sources may appear a little at a time from large areas, carried along by rainfall or snowmelt. For instance, the small oil leaks from automobiles that produce discolored spots on the asphalt of parking lots become nonpoint sources of water pollution when rain carries the oil into local waters. Most agricultural pollution is nonpoint since it typically originates from many fields.

IV CONTROLS

In the United States, the serious campaign against water pollution began in 1972, when Congress passed the Clean Water Act. This law initiated a national goal to end all pollution discharges into surface waters, such as lakes, rivers, streams, wetlands, and coastal waters. The law required those who discharge pollutants into waterways to apply for federal permits and to be responsible for reducing the amount of pollution over time. The law also authorized generous federal grants to help states build water treatment plants that remove pollutants, principally sewage, from wastewater before it is discharged. Since the passage of the Clean Water Act in 1972, most of the obvious point sources of pollution in the United States have been substantially cleaned up. Municipal sewage plants in many areas are now yielding water so clean that it can be used again. Industries are treating their waste and also changing their manufacturing processes so that less waste is produced. As a result, surface waters are far cleaner than they were in 1972. In 1990 a survey of rivers and streams found that three-quarters of these waters were clean enough for swimming and fishing. Cleaning up the remainder of these rivers and streams will require tackling the more difficult problems of diffuse, nonpoint source pollution. Congress first took up the nonpoint source problem in 1987, requiring the states to develop programs to combat this kind of pollution. Since interception and treatment of nonpoint pollution is very difficult, the prime strategy is to prevent it.
In urban areas, one obvious sign of the campaign against nonpoint pollution is the presence of stenciled notices often seen beside storm drains: Drains To Bay, Drains To Creek, or Drains To Lake. These signs discourage people from dumping contaminants, such as used engine oil, down grates because the material will likely pollute nearby waterways. Householders are urged to be sparing in their use of garden pesticides and fertilizers in order to reduce contaminated runoff and eutrophication. At construction sites, builders are required to fight soil erosion by laying down tarps, building sediment traps, and seeding grasses. In the countryside, efforts are underway to reduce pollution from agricultural wastes, fertilizers, and pesticides, and from erosion caused by logging and farming. Farmers and foresters are encouraged to protect streams by leaving streamside trees and vegetation undisturbed; this practice stabilizes banks and traps sediment coming down the slope, preventing sediment buildup in water. Hillside fields are commonly plowed on the contour of the land, rather than up and down the incline, to reduce erosion and to discourage the formation of gullies. Cows are kept away from streamsides and housed in barns where their waste can be gathered and treated. Increasingly, governments are protecting wetlands, which are valuable pollution traps because their plants absorb excess nutrients and their fine sediments absorb other pollutants. In some places, lost wetlands are being restored. Despite these steps, a great deal remains to be done. In the United States, the EPA is in overall charge of antipollution efforts. The EPA sets standards, approves state control plans, and steps in (if necessary) to enforce its own rules. Under the Safe Drinking Water Act (SDWA), passed in 1974 and amended in 1986 and 1996, the EPA sets standards for drinking water. 
Among other provisions, the SDWA requires that all water drawn from surface water supplies must be filtered to remove the parasite Cryptosporidium by the year 2000. The law also requires that states map the watersheds from which drinking water comes and identify sources of pollution within those watersheds. While America’s drinking water is among the safest in the world, and has been improving since passage of the SDWA, many water utilities that serve millions of Americans provide tap water that fails to meet the EPA standards. The EPA has equivalents in many countries, although details of responsibilities vary. For instance, the federal government may have a larger role in pollution control, as in France, or more of this responsibility may be shifted to the state and provincial governments, as in Canada. Because many rivers, lakes, and ocean shorelines are shared by several nations, many international treaties also address water pollution. For example, the governments of Canada and the United States have negotiated at least nine treaties or agreements, starting with the Canada-U.S. Boundary Waters Treaty of 1909, governing water pollution of the many rivers and lakes that flow along or across their common border. Several major treaties deal with oceanic pollution, including the 1972 Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter and the 1973 International Convention for the Prevention of Pollution from Ships (known as MARPOL). International controls and enforcement, however, are generally weak.

Contributed By: John Hart
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.

(c) Semiconductors

I INTRODUCTION

Semiconductor, solid or liquid material, able to conduct electricity at room temperature more readily than an insulator, but less easily than a metal.
Electrical conductivity, which is the ability to conduct electrical current under the application of a voltage, has one of the widest ranges of values of any physical property of matter. Such metals as copper, silver, and aluminum are excellent conductors, but such insulators as diamond and glass are very poor conductors (see Conductor, electrical; Insulation; Metals). At low temperatures, pure semiconductors behave like insulators. Under higher temperatures or light or with the addition of impurities, however, the conductivity of semiconductors can be increased dramatically, reaching levels that may approach those of metals. The physical properties of semiconductors are studied in solid-state physics. See Physics.

II CONDUCTION ELECTRONS AND HOLES

The common semiconductors include chemical elements and compounds such as silicon, germanium, selenium, gallium arsenide, zinc selenide, and lead telluride. The increase in conductivity with temperature, light, or impurities arises from an increase in the number of conduction electrons, which are the carriers of the electrical current (see Electricity; Electron). In a pure, or intrinsic, semiconductor such as silicon, the valence electrons, or outer electrons, of an atom are paired and shared between atoms to make a covalent bond that holds the crystal together (see Chemical Reaction; Crystal). These valence electrons are not free to carry electrical current. To produce conduction electrons, temperature or light is used to excite the valence electrons out of their bonds, leaving them free to conduct current. Deficiencies, or “holes,” are left behind that contribute to the flow of electricity. (These holes are said to be carriers of positive electricity.) This is the physical origin of the increase in the electrical conductivity of semiconductors with temperature. The energy required to excite the electron and hole is called the energy gap.
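The article states that conductivity rises sharply with temperature because more valence electrons are excited across the energy gap. In standard semiconductor physics (not derived in the article itself), the thermally excited carrier population scales roughly as exp(-Eg / 2kT). A minimal sketch of that relationship, assuming silicon's textbook energy gap of about 1.1 eV, which is not a figure quoted in the text:

```python
import math

# Rough sketch of how the thermally excited carrier population grows with
# temperature, using the standard Boltzmann-factor estimate exp(-Eg / 2kT).
# Eg = 1.1 eV for silicon is an assumed textbook value, not from the article.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, in eV per kelvin

def relative_carriers(gap_ev, temp_k):
    """Relative number of electrons excited across the gap at temp_k."""
    return math.exp(-gap_ev / (2 * K_BOLTZMANN_EV * temp_k))

SI_GAP_EV = 1.1  # assumed energy gap of silicon

ratio = relative_carriers(SI_GAP_EV, 400) / relative_carriers(SI_GAP_EV, 300)
print(f"Warming silicon from 300 K to 400 K multiplies its carriers ~{ratio:.0f}x")
```

A modest 100-kelvin warming multiplies the carrier count by a factor in the hundreds, which is why the article can say the conductivity of a pure semiconductor "can be increased dramatically" by temperature alone.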
III DOPING

Another method to produce free carriers of electricity is to add impurities to, or to “dope,” the semiconductor. The difference in the number of valence electrons between the doping material, or dopant (either donors or acceptors of electrons), and host gives rise to negative (n-type) or positive (p-type) carriers of electricity. This concept is illustrated in the accompanying diagram of a doped silicon (Si) crystal. Each silicon atom has four valence electrons (represented by dots); two are required to form a covalent bond. In n-type silicon, atoms such as phosphorus (P) with five valence electrons replace some silicon and provide extra negative electrons. In p-type silicon, atoms with three valence electrons such as aluminum (Al) lead to a deficiency of electrons, or to holes, which act as positive electrons. The extra electrons or holes can conduct electricity. When p-type and n-type semiconductor regions are adjacent to each other, they form a semiconductor diode, and the region of contact is called a p-n junction. (A diode is a two-terminal device that has a high resistance to electric current in one direction but a low resistance in the other direction.) The conductance properties of the p-n junction depend on the direction of the voltage, which can, in turn, be used to control the electrical nature of the device. Series of such junctions are used to make transistors and other semiconductor devices such as solar cells, p-n junction lasers, rectifiers, and many others. Semiconductor devices have many varied applications in electrical engineering. Recent engineering developments have yielded small semiconductor chips containing hundreds of thousands of transistors. These chips have made possible great miniaturization of electronic devices.
More efficient use of such chips has been developed through what is called complementary metal-oxide semiconductor circuitry, or CMOS, which consists of pairs of p- and n-channel transistors controlled by a single circuit. In addition, extremely small devices are being made using the technique of molecular-beam epitaxy.

Contributed By: Marvin L. Cohen
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.

Question: 2 Movements of Earth

Earth (planet)

I INTRODUCTION

Earth (planet), one of nine planets in the solar system, the only planet known to harbor life, and the “home” of human beings. From space Earth resembles a big blue marble with swirling white clouds floating above blue oceans. About 71 percent of Earth’s surface is covered by water, which is essential to life. The rest is land, mostly in the form of continents that rise above the oceans. Earth’s surface is surrounded by a layer of gases known as the atmosphere, which extends upward from the surface, slowly thinning out into space. Below the surface is a hot interior of rocky material and two core layers composed of the metals nickel and iron in solid and liquid form. Unlike the other planets, Earth has a unique set of characteristics ideally suited to supporting life as we know it. It is neither too hot, like Mercury, the closest planet to the Sun, nor too cold, like distant Mars and the even more distant outer planets—Jupiter, Saturn, Uranus, Neptune, and tiny Pluto. Earth’s atmosphere includes just the right amount of gases that trap heat from the Sun, resulting in a moderate climate suitable for water to exist in liquid form. The atmosphere also helps block radiation from the Sun that would be harmful to life. Earth’s atmosphere distinguishes it from the planet Venus, which is otherwise much like Earth. Venus is about the same size and mass as Earth and is also neither too near nor too far from the Sun.
But because Venus has too much heat-trapping carbon dioxide in its atmosphere, its surface is extremely hot—462°C (864°F)—hot enough to melt lead and too hot for life to exist. Although Earth is the only planet known to have life, scientists do not rule out the possibility that life may once have existed on other planets or their moons, or may exist today in primitive form. Mars, for example, has many features that resemble river channels, indicating that liquid water once flowed on its surface. If so, life may also have evolved there, and evidence for it may one day be found in fossil form. Water still exists on Mars, but it is frozen in polar ice caps, in permafrost, and possibly in rocks below the surface. For thousands of years, human beings could only wonder about Earth and the other observable planets in the solar system. Many early ideas—for example, that the Earth was a sphere and that it traveled around the Sun—were based on brilliant reasoning. However, it was only with the development of the scientific method and scientific instruments, especially in the 18th and 19th centuries, that humans began to gather data that could be used to verify theories about Earth and the rest of the solar system. By studying fossils found in rock layers, for example, scientists realized that the Earth was much older than previously believed. And with the use of telescopes, new planets such as Uranus, Neptune, and Pluto were discovered. In the second half of the 20th century, more advances in the study of Earth and the solar system occurred due to the development of rockets that could send spacecraft beyond Earth. Human beings were able to study and observe Earth from space with satellites equipped with scientific instruments. Astronauts landed on the Moon and gathered ancient rocks that revealed much about the early solar system. During this remarkable advancement in human history, humans also sent unmanned spacecraft to the other planets and their moons. 
Spacecraft have now visited all of the planets except Pluto. The study of other planets and moons has provided new insights about Earth, just as the study of the Sun and other stars like it has helped shape new theories about how Earth and the rest of the solar system formed. As a result of this recent space exploration, we now know that Earth is one of the most geologically active of all the planets and moons in the solar system. Earth is constantly changing. Over long periods of time land is built up and worn away, oceans are formed and re-formed, and continents move around, break up, and merge. Life itself contributes to changes on Earth, especially in the way living things can alter Earth’s atmosphere. For example, Earth at one time had the same amount of carbon dioxide in its atmosphere as Venus now has, but early forms of life helped remove this carbon dioxide over millions of years. These life forms also added oxygen to Earth’s atmosphere and made it possible for animal life to evolve on land. A variety of scientific fields have broadened our knowledge about Earth, including biogeography, climatology, geology, geophysics, hydrology, meteorology, oceanography, and zoogeography. Collectively, these fields are known as Earth science. By studying Earth’s atmosphere, its surface, and its interior and by studying the Sun and the rest of the solar system, scientists have learned much about how Earth came into existence, how it changed, and why it continues to change.

II EARTH, THE SOLAR SYSTEM, AND THE GALAXY

Earth is the third planet from the Sun, after Mercury and Venus. The average distance between Earth and the Sun is 150 million km (93 million mi). Earth and all the other planets in the solar system revolve, or orbit, around the Sun due to the force of gravitation. The Earth travels at a velocity of about 107,000 km/h (about 67,000 mph) as it orbits the Sun.
All but one of the planets orbit the Sun in the same plane—that is, if an imaginary line were extended from the center of the Sun to the outer regions of the solar system, the orbital paths of the planets would intersect that line. The exception is Pluto, which has an eccentric (unusual) orbit. Earth’s orbital path is not quite a perfect circle but instead is slightly elliptical (oval-shaped). For example, at maximum distance Earth is about 152 million km (about 95 million mi) from the Sun; at minimum distance Earth is about 147 million km (about 91 million mi) from the Sun. If Earth orbited the Sun in a perfect circle, it would always be the same distance from the Sun. The solar system, in turn, is part of the Milky Way Galaxy, a collection of billions of stars bound together by gravity. The Milky Way has armlike discs of stars that spiral out from its center. The solar system is located in one of these spiral arms, known as the Orion arm, which is about two-thirds of the way from the center of the Galaxy. In most parts of the Northern Hemisphere, this disc of stars is visible on a summer night as a dense band of light known as the Milky Way. Earth is the fifth largest planet in the solar system. Its diameter, measured around the equator, is 12,756 km (7,926 mi). Earth is not a perfect sphere but is slightly flattened at the poles. Its polar diameter, measured from the North Pole to the South Pole, is somewhat less than the equatorial diameter because of this flattening. Although Earth is the largest of the four planets—Mercury, Venus, Earth, and Mars—that make up the inner solar system (the planets closest to the Sun), it is small compared with the giant planets of the outer solar system—Jupiter, Saturn, Uranus, and Neptune. For example, the largest planet, Jupiter, has a diameter at its equator of 143,000 km (89,000 mi), 11 times greater than that of Earth. 
A famous atmospheric feature on Jupiter, the Great Red Spot, is so large that three Earths would fit inside it. Earth has one natural satellite, the Moon. The Moon orbits the Earth, completing one revolution in an elliptical path in 27 days 7 hr 43 min 11.5 sec. The Moon orbits the Earth because of the force of Earth’s gravity. However, the Moon also exerts a gravitational force on the Earth. Evidence for the Moon’s gravitational influence can be seen in the ocean tides. A popular theory suggests that the Moon split off from Earth more than 4 billion years ago when a large meteorite or small planet struck the Earth. As Earth revolves around the Sun, it rotates, or spins, on its axis, an imaginary line that runs between the North and South poles. The period of one complete rotation is defined as a day and takes 23 hr 56 min 4.1 sec. The period of one revolution around the Sun is defined as a year, or 365.2422 solar days, or 365 days 5 hr 48 min 46 sec. Earth also moves along with the Milky Way Galaxy as the Galaxy rotates and moves through space. It takes more than 200 million years for the stars in the Milky Way to complete one revolution around the Galaxy’s center. Earth’s axis of rotation is inclined (tilted) 23.5° relative to its plane of revolution around the Sun. This inclination of the axis creates the seasons and causes the height of the Sun in the sky at noon to increase and decrease as the seasons change. The Northern Hemisphere receives the most energy from the Sun when it is tilted toward the Sun. This orientation corresponds to summer in the Northern Hemisphere and winter in the Southern Hemisphere. The Southern Hemisphere receives maximum energy when it is tilted toward the Sun, corresponding to summer in the Southern Hemisphere and winter in the Northern Hemisphere. Fall and spring occur in between these orientations. 
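Several of the figures quoted above can be cross-checked with a few lines of arithmetic: the maximum and minimum Sun-Earth distances give the orbit's eccentricity, the quoted orbital speed roughly recovers the length of a year, and the quoted day and year durations are mutually consistent. A sketch using only numbers from the text:

```python
import math

# Orbit figures quoted above, in km and km/h.
APHELION_KM = 152_000_000    # maximum Earth-Sun distance
PERIHELION_KM = 147_000_000  # minimum Earth-Sun distance
SPEED_KM_H = 107_000         # Earth's orbital speed

# Eccentricity of an ellipse from its extreme distances:
# e = (max - min) / (max + min); zero would be a perfect circle.
eccentricity = (APHELION_KM - PERIHELION_KM) / (APHELION_KM + PERIHELION_KM)

# Treat the orbit as a circle of the mean radius and time one lap.
mean_radius_km = (APHELION_KM + PERIHELION_KM) / 2
days_per_lap = 2 * math.pi * mean_radius_km / SPEED_KM_H / 24

# Year length (365 d 5 hr 48 min 46 s) in solar days, and the number of
# sidereal rotations (23 hr 56 min 4.1 s each) that fit into one year.
year_s = 365 * 86_400 + 5 * 3600 + 48 * 60 + 46
sidereal_day_s = 23 * 3600 + 56 * 60 + 4.1

solar_days_per_year = year_s / 86_400         # matches the quoted 365.2422
rotations_per_year = year_s / sidereal_day_s  # about one extra spin per year

print(f"eccentricity ~ {eccentricity:.4f}")
print(f"one lap ~ {days_per_lap:.0f} days")
print(f"{solar_days_per_year:.4f} solar days; {rotations_per_year:.2f} rotations")
```

The eccentricity comes out near 0.017, which is why the text can call the orbit only slightly elliptical; the circular-lap estimate lands within about a day of the quoted year; and the rotation count is one more than the number of solar days, reflecting the extra spin Earth picks up by going once around the Sun.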
III EARTH’S ATMOSPHERE

The atmosphere is a layer of different gases that extends from Earth’s surface to the exosphere, the outer limit of the atmosphere, about 9,600 km (6,000 mi) above the surface. Near Earth’s surface, the atmosphere consists almost entirely of nitrogen (78 percent) and oxygen (21 percent). The remaining 1 percent of atmospheric gases consists of argon (0.9 percent); carbon dioxide (0.03 percent); varying amounts of water vapor; and trace amounts of hydrogen, nitrous oxide, ozone, methane, carbon monoxide, helium, neon, krypton, and xenon.

A Layers of the Atmosphere

The layers of the atmosphere are the troposphere, the stratosphere, the mesosphere, the thermosphere, and the exosphere. The troposphere is the layer in which weather occurs and extends from the surface to about 16 km (about 10 mi) above sea level at the equator. Above the troposphere is the stratosphere, which has an upper boundary of about 50 km (about 30 mi) above sea level. The layer from 50 to 90 km (30 to 60 mi) is called the mesosphere. At an altitude of about 90 km, temperatures begin to rise. The layer that begins at this altitude is called the thermosphere because of the high temperatures that can be reached in this layer (about 1200°C, or about 2200°F). The region beyond the thermosphere is called the exosphere. The thermosphere and the exosphere overlap with another region of the atmosphere known as the ionosphere, a layer or layers of ionized air extending from almost 60 km (about 37 mi) above Earth’s surface to altitudes of 1,000 km (600 mi) and more. Earth’s atmosphere and the way it interacts with the oceans and radiation from the Sun are responsible for the planet’s climate and weather. The atmosphere plays a key role in supporting life. Almost all life on Earth uses atmospheric oxygen for energy in a process known as cellular respiration, which is essential to life.
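The layer boundaries listed above can be written as a simple lookup table. This is only an illustrative sketch: the boundary altitudes below are the approximate figures from the text, and the 500 km thermosphere/exosphere cutoff is an assumed value, since the text gives no single altitude for that transition:

```python
# Rough altitude-to-layer lookup for the boundaries quoted above.
# Each entry is (approximate top of layer in km, layer name).
LAYER_TOPS_KM = [
    (16, "troposphere"),    # weather occurs here; ~16 km at the equator
    (50, "stratosphere"),
    (90, "mesosphere"),
    (500, "thermosphere"),  # assumed cutoff, not stated in the text
]

def layer_at(altitude_km):
    """Name the atmospheric layer containing the given altitude."""
    for top_km, name in LAYER_TOPS_KM:
        if altitude_km < top_km:
            return name
    return "exosphere"

print(layer_at(10))   # a typical airliner cruises in the troposphere
print(layer_at(60))   # inside the mesosphere
```

Note that the ionosphere is deliberately left out of the table: as the text says, it overlaps the thermosphere and exosphere rather than forming a separate band in this sequence.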
The atmosphere also helps moderate Earth’s climate by trapping heat radiated from Earth’s sun-warmed surface. Water vapor, carbon dioxide, methane, and nitrous oxide in the atmosphere act as “greenhouse gases.” Like the glass in a greenhouse, they trap infrared, or heat, radiation emitted by Earth’s surface in the lower atmosphere and thereby help warm the planet. Without this greenhouse effect, heat radiation would escape into space, and Earth would be too cold to support most forms of life. Other gases in the atmosphere are also essential to life. The trace amount of ozone found in Earth’s stratosphere blocks harmful ultraviolet radiation from the Sun. Without the ozone layer, life as we know it could not survive on land. Earth’s atmosphere is also an important part of a phenomenon known as the water cycle or the hydrologic cycle. See also Atmosphere. B The Atmosphere and the Water Cycle The water cycle is the continual recycling of Earth’s water between the oceans, the atmosphere, and the land. All of the water that exists on Earth today has been used and reused for billions of years. Very little water has been created or lost during this period of time. Water is constantly moving on Earth’s surface and changing back and forth between ice, liquid water, and water vapor. The water cycle begins when the Sun heats the water in the oceans and causes it to evaporate and enter the atmosphere as water vapor. Some of this water vapor falls as precipitation directly back into the oceans, completing a short cycle. Some of the water vapor, however, reaches land, where it may fall as snow or rain. Melted snow or rain enters rivers or lakes on the land. Due to the force of gravity, the water in the rivers eventually empties back into the oceans. Melted snow or rain also may enter the ground. Groundwater may be stored for hundreds or thousands of years, but it will eventually reach the surface as springs or small pools known as seeps.
Even snow that forms glacial ice or becomes part of the polar caps and is kept out of the cycle for thousands of years eventually melts or is warmed by the Sun and turned into water vapor, entering the atmosphere and falling again as precipitation. All water that falls on land eventually returns to the ocean, completing the water cycle. IV EARTH’S SURFACE Earth’s surface is the outermost layer of the planet. It includes the hydrosphere, the crust, and the biosphere. A Hydrosphere The hydrosphere consists of the bodies of water that cover 71 percent of Earth’s surface. The largest of these are the oceans, which contain over 97 percent of all water on Earth. Glaciers and the polar ice caps contain just over 2 percent of Earth’s water in the form of solid ice. Only about 0.6 percent is under the surface as groundwater. Nevertheless, groundwater is 36 times more plentiful than water found in lakes, inland seas, rivers, and the atmosphere. Only 0.017 percent of all the water on Earth is found in lakes and rivers, and a mere 0.001 percent is found in the atmosphere as water vapor. Most of the water in glaciers, lakes, inland seas, rivers, and groundwater is fresh and can be used for drinking and agriculture. Dissolved salts, however, compose about 3.5 percent of ocean water, making it unsuitable for drinking or agriculture unless it is treated to remove the salts. B Crust The crust consists of the continents, other land areas, and the basins, or floors, of the oceans. The dry land of Earth’s surface is called the continental crust. It is about 15 to 75 km (9 to 47 mi) thick. The oceanic crust is thinner than the continental crust. Its average thickness is 5 to 10 km (3 to 6 mi). The crust has a definite lower boundary called the Mohorovičić discontinuity, or simply the Moho. This boundary separates the crust from the underlying mantle, which is much thicker and is part of Earth’s interior.
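The hydrosphere percentages quoted above can be cross-checked with a little arithmetic. This sketch uses the rounded figures from the text, which is why the computed ratio lands near, but not exactly at, the quoted "36 times":

```python
# Distribution of Earth's water by reservoir, as percentages from the text.
water = {
    "oceans": 97.0,                        # "over 97 percent"
    "ice (glaciers and polar caps)": 2.0,  # "just over 2 percent"
    "groundwater": 0.6,
    "lakes and rivers": 0.017,
    "atmosphere (water vapor)": 0.001,
}

# Groundwater compared with lakes, rivers, and atmospheric vapor combined:
ratio = water["groundwater"] / (
    water["lakes and rivers"] + water["atmosphere (water vapor)"]
)
print(round(ratio))  # roughly 33 -- close to the text's "36 times";
                     # the gap reflects the rounded percentages used here
```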
Oceanic crust and continental crust differ in the type of rocks they contain. There are three main types of rocks: igneous, sedimentary, and metamorphic. Igneous rocks form when molten rock, called magma, cools and solidifies. Sedimentary rocks are usually created by the breakdown of igneous rocks. They tend to form in layers as small particles of other rocks or the mineralized remains of dead animals and plants fuse together over time. These mineralized remains are often recognizable as fossils. Metamorphic rocks form when sedimentary or igneous rocks are altered by heat and pressure deep underground. Oceanic crust consists of dark, dense igneous rocks, such as basalt and gabbro. Continental crust consists of lighter-colored, less dense igneous rocks, such as granite and diorite. Continental crust also includes metamorphic rocks and sedimentary rocks. C Biosphere The biosphere includes all the areas of Earth capable of supporting life. The biosphere ranges from about 10 km (about 6 mi) into the atmosphere to the deepest ocean floor. For a long time, scientists believed that all life depended on energy from the Sun and consequently could only exist where sunlight penetrated. In the 1970s, however, scientists discovered various forms of life around hydrothermal vents on the floor of the Pacific Ocean where no sunlight penetrated. They learned that primitive bacteria formed the basis of this living community and that the bacteria derived their energy from a process called chemosynthesis that did not depend on sunlight. Some scientists believe that the biosphere may extend relatively deep into Earth’s crust. They have recovered what they believe are primitive bacteria from deeply drilled holes below the surface. D Changes to Earth’s Surface Earth’s surface has been constantly changing ever since the planet formed. Most of these changes have been gradual, taking place over millions of years.
Nevertheless, these gradual changes have resulted in radical modifications, involving the formation, erosion, and reformation of mountain ranges, the movement of continents, the creation of huge supercontinents, and the breakup of supercontinents into smaller continents. The weathering and erosion that result from the water cycle are among the principal factors responsible for changes to Earth’s surface. Another principal factor is the movement of Earth’s continents and seafloors and the buildup of mountain ranges due to a phenomenon known as plate tectonics. Heat is the basis for all of these changes. Heat in Earth’s interior is believed to be responsible for continental movement, mountain building, and the creation of new seafloor in ocean basins. Heat from the Sun is responsible for the evaporation of ocean water and the resulting precipitation that causes weathering and erosion. In effect, heat in Earth’s interior helps build up Earth’s surface while heat from the Sun helps wear down the surface. D1 Weathering Weathering is the breakdown of rock at and near the surface of Earth. Most rocks originally formed in a hot, high-pressure environment below the surface where there was little exposure to water. Once the rocks reached Earth’s surface, however, they were subjected to temperature changes and exposed to water. When rocks are subjected to these kinds of surface conditions, the minerals they contain tend to change. These changes constitute the process of weathering. There are two types of weathering: physical weathering and chemical weathering. Physical weathering involves a decrease in the size of rock material. Freezing and thawing of water in rock cavities, for example, splits rock into small pieces because water expands when it freezes. Chemical weathering involves a chemical change in the composition of rock. 
For example, feldspar, a common mineral in granite and other rocks, reacts with water to form clay minerals, resulting in a new substance with totally different properties than the parent feldspar. Chemical weathering is of significance to humans because it creates the clay minerals that are important components of soil, the basis of agriculture. Chemical weathering also causes the release of dissolved forms of sodium, calcium, potassium, magnesium, and other chemical elements into surface water and groundwater. These elements are carried by surface water and groundwater to the sea and are the sources of dissolved salts in the sea. D2 Erosion Erosion is the process that removes loose and weathered rock and carries it to a new site. Water, wind, and glacial ice combined with the force of gravity can cause erosion. Erosion by running water is by far the most common process of erosion. It takes place over a longer period of time than other forms of erosion. When water from rain or melted snow moves downhill, it can carry loose rock or soil with it. Erosion by running water forms the familiar gullies and V-shaped valleys that cut into most landscapes. The force of the running water removes loose particles formed by weathering. In the process, gullies and valleys are lengthened, widened, and deepened. Often, water overflows the banks of the gullies or river channels, resulting in floods. Each new flood carries more material away to increase the size of the valley. Meanwhile, weathering loosens more and more material so the process continues. Erosion by glacial ice is less common, but it can cause the greatest landscape changes in the shortest amount of time. Glacial ice forms in a region where snow fails to melt in the spring and summer and instead builds up as ice. For major glaciers to form, this lack of snowmelt has to occur for a number of years in areas with high precipitation. As ice accumulates and thickens, it flows as a solid mass. 
As it flows, it has a tremendous capacity to erode soil and even solid rock. Ice is a major factor in shaping some landscapes, especially mountainous regions. Glacial ice provides much of the spectacular scenery in these regions. Features such as horns (sharp mountain peaks), arêtes (sharp ridges), glacially formed lakes, and U-shaped valleys are all the result of glacial erosion. Wind is an important cause of erosion only in arid (dry) regions. Wind carries sand and dust, which can scour even solid rock. Many factors determine the rate and kind of erosion that occurs in a given area. The climate of an area determines the distribution, amount, and kind of precipitation that the area receives and thus the type and rate of weathering. An area with an arid climate erodes differently than an area with a humid climate. The elevation of an area also plays a role by determining the potential energy of running water. The higher the elevation the more energetically water will flow due to the force of gravity. The type of bedrock in an area (sandstone, granite, or shale) can determine the shapes of valleys and slopes, and the depth of streams. A landscape’s geologic age—that is, how long current conditions of weathering and erosion have affected the area—determines its overall appearance. Relatively young landscapes tend to be more rugged and angular in appearance. Older landscapes tend to have more rounded slopes and hills. The oldest landscapes tend to be low-lying with broad, open river valleys and low, rounded hills. The overall effect of the wearing down of an area is to level the land; the tendency is toward the reduction of all land surfaces to sea level. D3 Plate Tectonics Opposing this tendency toward leveling is a force responsible for raising mountains and plateaus and for creating new landmasses. These changes to Earth’s surface occur in the outermost solid portion of Earth, known as the lithosphere. 
The lithosphere consists of the crust and another region known as the upper mantle and is approximately 65 to 100 km (40 to 60 mi) thick. Compared with the interior of the Earth, however, this region is relatively thin. The lithosphere is thinner in proportion to the whole Earth than the skin of an apple is to the whole apple. Scientists believe that the lithosphere is broken into a series of plates, or segments. According to the theory of plate tectonics, these plates move around on Earth’s surface over long periods of time. Tectonics comes from the Greek word tektonikos, which means “builder.” According to the theory, the lithosphere is divided into large and small plates. The largest plates include the Pacific plate, the North American plate, the Eurasian plate, the Antarctic plate, the Indo-Australian plate, and the African plate. Smaller plates include the Cocos plate, the Nazca plate, the Philippine plate, and the Caribbean plate. Plate sizes vary a great deal. The Cocos plate is 2,000 km (about 1,200 mi) wide, while the Pacific plate is nearly 14,000 km (nearly 9,000 mi) wide. These plates move in three different ways in relation to each other. They pull apart or move away from each other, they collide or move against each other, or they slide past each other as they move sideways. The movement of these plates helps explain many geological events, such as earthquakes and volcanic eruptions as well as mountain building and the formation of the oceans and continents. D3a When Plates Pull Apart When the plates pull apart, two types of phenomena occur depending on whether the movement takes place in the oceans or on land. When plates pull apart on land, deep valleys known as rift valleys form. An example of a rift valley is the Great Rift Valley that extends from Syria in the Middle East to Mozambique in Africa.
When plates pull apart in the oceans, long, sinuous chains of volcanic mountains called mid-ocean ridges form, and new seafloor is created at the site of these ridges. Rift valleys are also present along the crests of the mid-ocean ridges. Most scientists believe that gravity and heat from the interior of the Earth cause the plates to move apart and to create new seafloor. According to this explanation, molten rock known as magma rises from Earth’s interior to form hot spots beneath the ocean floor. As two oceanic plates pull apart from each other in the middle of the oceans, a crack, or rupture, appears and forms the mid-ocean ridges. These ridges exist in all the world’s ocean basins and resemble the seams of a baseball. The molten rock rises through these cracks and creates new seafloor. D3b When Plates Collide When plates collide or push against each other, regions called convergent plate margins form. Along these margins, one plate is usually forced to dive below the other. As that plate dives, it triggers the melting of the surrounding lithosphere and a region just below it known as the asthenosphere. These pockets of molten crust rise behind the margin through the overlying plate, creating curved chains of volcanoes known as arcs. This process is called subduction. If one plate consists of oceanic crust and the other consists of continental crust, the denser oceanic crust will dive below the continental crust. If both plates are oceanic crust, then either may be subducted. If both are continental crust, subduction can continue for a while but will eventually end because continental crust is not dense enough to be forced very far into the upper mantle. The results of this subduction process are readily visible on a map showing that 80 percent of the world’s volcanoes rim the Pacific Ocean where plates are colliding against each other. 
The subduction zone created by the collision of two oceanic plates—the Pacific plate and the Philippine plate—can also create a trench. Such a trench resulted in the formation of the deepest point on Earth, the Mariana Trench, which is estimated to be 11,033 m (36,198 ft) below sea level. On the other hand, when two continental plates collide, mountain building occurs. The collision of the Indo-Australian plate with the Eurasian plate has produced the Himalayan Mountains. This collision resulted in the highest point of Earth, Mount Everest, which is 8,850 m (29,035 ft) above sea level. D3c When Plates Slide Past Each Other Finally, some of Earth’s plates neither collide nor pull apart but instead slide past each other. These regions are called transform margins. Few volcanoes occur in these areas because neither plate is forced down into Earth’s interior and little melting occurs. Earthquakes, however, are abundant as the two rigid plates slide past each other. The San Andreas Fault in California is a well-known example of a transform margin. The movement of plates occurs at a slow pace, at an average rate of only 2.5 cm (1 in) per year. But over millions of years this gradual movement results in radical changes. Current plate movement is making the Pacific Ocean and Mediterranean Sea smaller, the Atlantic Ocean larger, and the Himalayan Mountains higher. V EARTH’S INTERIOR The interior of Earth plays an important role in plate tectonics. Scientists believe it is also responsible for Earth’s magnetic field. This field is vital to life because it shields the planet’s surface from harmful cosmic rays and from a steady stream of energetic particles from the Sun known as the solar wind. A Composition of the Interior Earth’s interior consists of the mantle and the core. The mantle and core make up by far the largest part of Earth’s mass. The distance from the base of the crust to the center of the core is about 6,400 km (about 4,000 mi). 
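The plate speeds mentioned above are easy to translate into geologic-scale distances, which shows how a rate of only centimeters per year can reshape oceans and raise mountain ranges. A short sketch (the function name is illustrative):

```python
# Average plate speed from the text: about 2.5 cm (1 in) per year.
speed_cm_per_year = 2.5

def drift_km(years):
    """Total drift in kilometres after the given number of years."""
    return speed_cm_per_year * years / 100 / 1000  # cm -> m -> km

print(drift_km(1_000_000))    # 25 km after one million years
print(drift_km(100_000_000))  # 2,500 km after 100 million years
```

Over 100 million years, that steady creep adds up to thousands of kilometers, enough to open or close an ocean basin.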
Scientists have learned about Earth’s interior by studying rocks that formed in the interior and rose to the surface. The study of meteorites, which are believed to be made of the same material that formed the Earth and its interior, has also offered clues about Earth’s interior. Finally, seismic waves generated by earthquakes provide geophysicists with information about the composition of the interior. The sudden movement of rocks during an earthquake causes vibrations that transmit energy through the Earth in the form of waves. The way these waves travel through the interior of Earth reveals the nature of materials inside the planet. The mantle consists of three parts: the lower part of the lithosphere, the region below it known as the asthenosphere, and the region below the asthenosphere called the lower mantle. The entire mantle extends from the base of the crust to a depth of about 2,900 km (about 1,800 mi). Scientists believe the asthenosphere is made up of mushy plastic-like rock with pockets of molten rock. The term asthenosphere is derived from Greek and means “weak layer.” The asthenosphere’s soft, plastic quality allows plates in the lithosphere above it to shift and slide on top of the asthenosphere. This shifting of the lithosphere’s plates is the source of most tectonic activity. The asthenosphere is also the source of the basaltic magma that makes up much of the oceanic crust and rises through volcanic vents on the ocean floor. The mantle consists of mostly solid iron-magnesium silicate rock mixed with many other minor components including radioactive elements. However, even this solid rock can flow like a “sticky” liquid when it is subjected to enough heat and pressure. The core is divided into two parts, the outer core and the inner core. The outer core is about 2,260 km (about 1,404 mi) thick. The outer core is a liquid region composed mostly of iron, with smaller amounts of nickel and sulfur in liquid form. 
The inner core is about 1,220 km (about 758 mi) thick. The inner core is solid and is composed of iron, nickel, and sulfur in solid form. The inner core and the outer core also contain a small percentage of radioactive material. The existence of radioactive material is one of the sources of heat in Earth’s interior because as radioactive material decays, it gives off heat. Temperatures in the inner core may be as high as 6650°C (12,000°F). B The Core and Earth’s Magnetism Scientists believe that Earth’s liquid iron core is instrumental in creating a magnetic field that surrounds Earth and shields the planet from harmful cosmic rays and the Sun’s solar wind. The idea that Earth is like a giant magnet was first proposed in 1600 by English physician and natural philosopher William Gilbert. Gilbert proposed the idea to explain why the magnetized needle in a compass points north. According to Gilbert, Earth’s magnetic field creates a magnetic north pole and a magnetic south pole. The magnetic poles do not correspond to the geographic North and South poles, however. Moreover, the magnetic poles wander and are not always in the same place. The north magnetic pole is currently close to Ellef Ringnes Island in the Queen Elizabeth Islands near the boundary of Canada’s Northwest Territories with Nunavut. The south magnetic pole lies just off the coast of Wilkes Land, Antarctica. Not only do the magnetic poles wander, but they also reverse their polarity—that is, the north magnetic pole becomes the south magnetic pole and vice versa. Magnetic reversals have occurred at least 170 times over the past 100 million years. The reversals occur on average about every 200,000 years and take place gradually over a period of several thousand years. Scientists still do not understand why these magnetic reversals occur but think they may be related to Earth’s rotation and changes in the flow of liquid iron in the outer core. 
Some scientists theorize that the flow of liquid iron in the outer core sets up electrical currents that produce Earth’s magnetic field. Known as the dynamo theory, this theory appears to be the best explanation yet for the origin of the magnetic field. Earth’s magnetic field operates in a region above Earth’s surface known as the magnetosphere. The magnetosphere is shaped somewhat like a teardrop with a long tail that trails away from the Earth due to the force of the solar wind. Inside the magnetosphere are the Van Allen radiation belts, named for the American physicist James A. Van Allen who discovered them in 1958. The Van Allen belts are regions where charged particles from the Sun and from cosmic rays are trapped and sent into spiral paths along the lines of Earth’s magnetic field. The radiation belts thereby shield Earth’s surface from these highly energetic particles. Occasionally, however, due to extremely strong magnetic fields on the Sun’s surface, which are visible as sunspots, a brief burst of highly energetic particles streams along with the solar wind. Because Earth’s magnetic field lines converge and are closest to the surface at the poles, some of these energetic particles sneak through and interact with Earth’s atmosphere, creating the phenomenon known as an aurora. VI EARTH’S PAST A Origin of Earth Most scientists believe that the Earth, Sun, and all of the other planets and moons in the solar system formed about 4.6 billion years ago from a giant cloud of gas and dust known as the solar nebula. The gas and dust in this solar nebula originated in a star that ended its life in a violent explosion known as a supernova. The solar nebula consisted principally of hydrogen, the lightest element, but the nebula was also seeded with a smaller percentage of heavier elements, such as carbon and oxygen. All of the chemical elements we know were originally made in the star that became a supernova. Our bodies are made of these same chemical elements. 
Therefore, all of the elements in our solar system, including all of the elements in our bodies, originally came from this star-seeded solar nebula. Due to the force of gravity, tiny clumps of gas and dust began to form in the early solar nebula. As these clumps came together and grew larger, they caused the solar nebula to contract in on itself. The contraction caused the cloud of gas and dust to flatten in the shape of a disc. As the clumps continued to contract, they became very dense and hot. Eventually the atoms of hydrogen became so dense that they began to fuse in the innermost part of the cloud, and these nuclear reactions gave birth to the Sun. The fusion of hydrogen atoms in the Sun is the source of its energy. Many scientists favor the planetesimal theory for how the Earth and other planets formed out of this solar nebula. This theory helps explain why the inner planets became rocky while the outer planets, except for Pluto, are made up mostly of gases. The theory also explains why all of the planets orbit the Sun in the same plane. According to this theory, temperatures decreased with increasing distance from the center of the solar nebula. In the inner region, where Mercury, Venus, Earth, and Mars formed, temperatures were low enough that certain heavier elements, such as iron and the other heavy compounds that make up rock, could condense out—that is, could change from a gas to a solid or liquid. Due to the force of gravity, small clumps of this rocky material eventually came together with the dust in the original solar nebula to form protoplanets or planetesimals (small rocky bodies). These planetesimals collided, broke apart, and reformed until they became the four inner rocky planets. The inner region, however, was still too hot for other light elements, such as hydrogen and helium, to be retained. These elements could only exist in the outermost part of the disc, where temperatures were lower.
As a result two of the outer planets—Jupiter and Saturn—are mostly made of hydrogen and helium, which are also the dominant elements in the atmospheres of Uranus and Neptune. B The Early Earth Within the planetesimal Earth, heavier matter sank to the center and lighter matter rose toward the surface. Most scientists believe that Earth was never truly molten and that this transfer of matter took place in the solid state. Much of the matter that went toward the center contained radioactive material, an important source of Earth’s internal heat. As heavier material moved inward, lighter material moved outward, the planet became layered, and the layers of the core and mantle were formed. This process is called differentiation. Not long after they formed, more than 4 billion years ago, the Earth and the Moon underwent a period when they were bombarded by meteorites, the rocky debris left over from the formation of the solar system. The impact craters created during this period of heavy bombardment are still visible on the Moon’s surface: because the Moon has no atmosphere, its surface has not been subjected to weathering or erosion, and the evidence of the bombardment remains. Earth’s craters, however, were long ago erased by weathering, erosion, and mountain building. Energy released from the meteorite impacts created extremely high temperatures on Earth that melted the outer part of the planet and created the crust. By 4 billion years ago, both the oceanic and continental crust had formed, and the oldest rocks were created. These rocks are known as the Acasta Gneiss and are found in the Canadian territory of Nunavut. Due to the meteorite bombardment, the early Earth was too hot for liquid water to exist, and so it was impossible for life to exist.
C Geologic Time Geologists divide the history of the Earth into three eons: the Archean Eon, which lasted from around 4 billion to 2.5 billion years ago; the Proterozoic Eon, which lasted from 2.5 billion to 543 million years ago; and the Phanerozoic Eon, which lasted from 543 million years ago to the present. Each eon is subdivided into different eras. For example, the Phanerozoic Eon includes the Paleozoic Era, the Mesozoic Era, and the Cenozoic Era. In turn, eras are further divided into periods. For example, the Paleozoic Era includes the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian Periods. The Archean Eon is subdivided into four eras, the Eoarchean, the Paleoarchean, the Mesoarchean, and the Neoarchean. The beginning of the Archean is generally dated as the age of the oldest terrestrial rocks, which are about 4 billion years old. The Archean Eon ended 2.5 billion years ago when the Proterozoic Eon began. The Proterozoic Eon is subdivided into three eras: the Paleoproterozoic Era, the Mesoproterozoic Era, and the Neoproterozoic Era. The Proterozoic Eon lasted from 2.5 billion years ago to 543 million years ago when the Phanerozoic Eon began. The Phanerozoic Eon is subdivided into three eras: the Paleozoic Era from 543 million to 248 million years ago, the Mesozoic Era from 248 million to 65 million years ago, and the Cenozoic Era from 65 million years ago to the present. Geologists base these divisions on the study and dating of rock layers or strata, including the fossilized remains of plants and animals found in those layers. Until the late 1800s scientists could only determine the relative ages of rock strata. They knew that in general the top layers of rock were the youngest and formed most recently, while deeper layers of rock were older. The field of stratigraphy shed much light on the relative ages of rock layers. The study of fossils also enabled geologists to determine the relative ages of different rock layers. 
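The eons, eras, and boundary dates described above can be arranged as a small data structure for quick lookup. The sketch below uses the ages quoted in the text; note that modern charts shift some boundaries slightly (the start of the Phanerozoic is now usually given as 541 million years ago):

```python
# Geologic time scale as described in the text (ages in millions of years ago).
TIME_SCALE = {
    "Archean":     {"start": 4000, "end": 2500},
    "Proterozoic": {"start": 2500, "end": 543},
    "Phanerozoic": {
        "start": 543, "end": 0,
        "eras": {
            "Paleozoic": (543, 248),
            "Mesozoic":  (248, 65),
            "Cenozoic":  (65, 0),
        },
    },
}

def eon_of(age_ma):
    """Return the eon containing an age given in millions of years ago."""
    for eon, span in TIME_SCALE.items():
        if span["end"] <= age_ma <= span["start"]:
            return eon
    return None  # older than the oldest terrestrial rocks

print(eon_of(3000))  # Archean
print(eon_of(1000))  # Proterozoic
print(eon_of(300))   # Phanerozoic (within the Paleozoic Era)
```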
The fossil record helped scientists determine how organisms evolved or when they became extinct. By studying rock layers around the world, geologists and paleontologists saw that the remains of certain animal and plant species occurred in the same layers, but were absent or altered in other layers. They soon developed a fossil index that also helped determine the relative ages of rock layers. Beginning in the 1890s, scientists learned that radioactive elements in rock decay at a known rate. By studying this radioactive decay, they could determine an absolute age for rock layers. This type of dating, known as radiometric dating, confirmed the relative ages determined through stratigraphy and the fossil index and assigned absolute ages to the various strata. As a result scientists were able to assemble Earth’s geologic time scale from the Archean Eon to the present. See also Geologic Time. C1 Precambrian The Precambrian is a time span that includes the Archean and Proterozoic eons and began about 4 billion years ago. The Precambrian marks the first formation of continents, the oceans, the atmosphere, and life. The Precambrian represents the oldest chapter in Earth’s history that can still be studied. Very little remains of Earth from the period of 4.6 billion to about 4 billion years ago due to the melting of rock caused by the early period of meteorite bombardment. Rocks dating from the Precambrian, however, have been found in Africa, Antarctica, Australia, Brazil, Canada, and Scandinavia. Some zircon mineral grains deposited in Australian rock layers have been dated to 4.2 billion years. The Precambrian is also the longest chapter in Earth’s history, spanning a period of about 3.5 billion years. During this timeframe, the atmosphere and the oceans formed from gases that escaped from the hot interior of the planet as a result of widespread volcanic eruptions. The early atmosphere consisted primarily of nitrogen, carbon dioxide, and water vapor. 
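The radiometric dating described above rests on a simple relation: after time t, the fraction of a radioactive parent isotope remaining is (1/2)^(t / half-life), so measuring that fraction gives the rock's age. A sketch, using potassium-40's half-life of roughly 1.25 billion years as an illustrative value:

```python
import math

def radiometric_age(parent_fraction, half_life):
    """Age implied by the fraction of the original parent isotope remaining.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    return half_life * math.log2(1.0 / parent_fraction)

# Illustrative isotope: potassium-40, half-life about 1.25 billion years.
half_life_k40 = 1.25e9

# A rock retaining 25% of its original K-40 is two half-lives old:
print(radiometric_age(0.25, half_life_k40) / 1e9)  # 2.5 (billion years)
```

In practice geologists compare several isotope systems in the same rock, which is how radiometric ages came to confirm the relative ordering established by stratigraphy and the fossil index.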
As Earth continued to cool, the water vapor condensed out and fell as precipitation to form the oceans. Some scientists believe that much of Earth’s water vapor originally came from comets containing frozen water that struck Earth during the period of meteorite bombardment. By studying 2-billion-year-old rocks found in northwestern Canada, as well as 2.5-billion-year-old rocks in China, scientists have found evidence that plate tectonics began shaping Earth’s surface as early as the middle Precambrian. About a billion years ago, the Earth’s plates were centered around the South Pole and formed a supercontinent called Rodinia. Slowly, pieces of this supercontinent broke away from the central continent and traveled north, forming smaller continents. Life originated during the Precambrian. The earliest fossil evidence of life consists of prokaryotes, one-celled organisms that lacked a nucleus and reproduced by dividing, a process known as asexual reproduction. Asexual division meant that a prokaryote’s hereditary material was copied unchanged. The first prokaryotes were bacteria known as archaebacteria. Scientists believe they came into existence perhaps as early as 3.8 billion years ago, but certainly by about 3.5 billion years ago, and were anaerobic—that is, they did not require oxygen to produce energy. Free oxygen barely existed in the atmosphere of the early Earth. Archaebacteria were followed about 3.46 billion years ago by another type of prokaryote known as cyanobacteria, or blue-green algae. These cyanobacteria gradually introduced oxygen into the atmosphere as a result of photosynthesis. In shallow tropical waters, cyanobacteria formed mats that grew into humps called stromatolites. Fossilized stromatolites have been found in rocks in the Pilbara region of western Australia that are more than 3.4 billion years old and in rocks of the Gunflint Chert region of northwest Lake Superior that are about 2.1 billion years old.
For billions of years, life existed only in the simple form of prokaryotes. Prokaryotes were followed by the relatively more advanced eukaryotes, organisms that have a nucleus in their cells and that reproduce by combining or sharing their hereditary makeup rather than by simply dividing. Sexual reproduction marked a milestone in life on Earth because it created the possibility of hereditary variation and enabled organisms to adapt more easily to a changing environment. The very latest part of Precambrian time, some 560 million to 545 million years ago, saw the appearance of an intriguing group of fossil organisms known as the Ediacaran fauna. First discovered in the northern Flinders Ranges region of Australia in the mid-1940s and subsequently found in many locations throughout the world, these strange fossils appear to be the precursors of many of the fossil groups that were to explode in Earth's oceans in the Paleozoic Era. See also Evolution; Natural Selection.

C2 Paleozoic Era

At the start of the Paleozoic Era about 543 million years ago, an enormous expansion in the diversity and complexity of life occurred. This event took place in the Cambrian Period and is called the Cambrian explosion. Nothing like it has happened since. Almost all of the major groups of animals we know today made their first appearance during the Cambrian explosion. Almost all of the different “body plans” found in animals today—that is, the way an animal’s body is designed, with heads, legs, rear ends, claws, tentacles, or antennae—also originated during this period. Fishes first appeared during the Paleozoic Era, and multicellular plants began growing on the land. Other land animals, such as scorpions, insects, and amphibians, also originated during this time. Just as new forms of life were being created, however, other forms of life were going out of existence. Natural selection meant that some species were able to flourish, while others failed.
In fact, mass extinctions of animal and plant species were commonplace. Most of the early complex life forms of the Cambrian explosion lived in the sea. The creation of warm, shallow seas, along with the buildup of oxygen in the atmosphere, may have aided this explosion of life forms. The shallow seas were created by the breakup of the supercontinent Rodinia. During the Ordovician, Silurian, and Devonian periods, which followed the Cambrian Period and lasted from 490 million to 354 million years ago, some of the continental pieces that had broken off Rodinia collided. These collisions resulted in larger continental masses in equatorial regions and in the Northern Hemisphere. The collisions built a number of mountain ranges, including parts of the Appalachian Mountains in North America and the Caledonian Mountains of northern Europe. Toward the close of the Paleozoic Era, two large continental masses, Gondwanaland to the south and Laurasia to the north, faced each other across the equator. Their slow but eventful collision during the Permian Period of the Paleozoic Era, which lasted from 290 million to 248 million years ago, assembled the supercontinent Pangaea and resulted in some of the grandest mountains in the history of Earth. These mountains included other parts of the Appalachians and the Ural Mountains of Asia. At the close of the Paleozoic Era, Pangaea represented over 90 percent of all the continental landmasses. Pangaea straddled the equator with a huge mouthlike opening that faced east. This opening was the Tethys Ocean, which later closed as India moved northward, creating the Himalayas. The last remnants of the Tethys Ocean can be seen in today’s Mediterranean Sea. The Paleozoic came to an end with a major extinction event, when perhaps as many as 90 percent of all plant and animal species died out.
The reason is not known for sure, but many scientists believe that huge volcanic outpourings of lava in central Siberia, coupled with an asteroid impact, were joint contributing factors.

C3 Mesozoic Era

The Mesozoic Era, beginning 248 million years ago, is often characterized as the Age of Reptiles because reptiles were the dominant life forms during this era. Reptiles dominated not only on land, as dinosaurs, but also in the sea, in the form of the plesiosaurs and ichthyosaurs, and in the air, as pterosaurs, which were flying reptiles. See also Dinosaur; Plesiosaur; Ichthyosaur; Pterosaur. The Mesozoic Era is divided into three geological periods: the Triassic, which lasted from 248 million to 206 million years ago; the Jurassic, from 206 million to 144 million years ago; and the Cretaceous, from 144 million to 65 million years ago. The dinosaurs emerged during the Triassic Period and were among the most successful animals in Earth’s history, lasting for about 180 million years before going extinct at the end of the Cretaceous Period. The first birds and mammals and the first flowering plants also appeared during the Mesozoic Era. Before flowering plants emerged, plants with seed-bearing cones known as conifers were the dominant form of plants. Flowering plants soon replaced conifers as the dominant form of vegetation during the Mesozoic Era. The Mesozoic was an eventful era geologically, with many changes to Earth’s surface. Pangaea continued to exist for another 50 million years during the early Mesozoic Era. By the early Jurassic Period, Pangaea began to break up. What is now South America began splitting from what is now Africa, and in the process the South Atlantic Ocean formed. As the landmass that became North America drifted away from Pangaea and moved westward, a long subduction zone extended along North America’s western margin.
This subduction zone and the accompanying arc of volcanoes extended from what is now Alaska to the southern tip of South America. Much of this feature, called the American Cordillera, exists today as the eastern margin of the Pacific Ring of Fire. During the Cretaceous Period, heat continued to be released from the margins of the drifting continents, and as they slowly sank, vast inland seas formed in much of the continental interiors. The fossilized remains of fishes and marine mollusks called ammonites can be found today in the middle of the North American continent because these areas were once underwater. Large continental masses broke off the northern part of southern Gondwanaland during this period and began to narrow the Tethys Ocean. The largest of these continental masses, present-day India, moved northward toward its collision with southern Asia. As both the North Atlantic Ocean and South Atlantic Ocean continued to open, North and South America became isolated continents for the first time in 450 million years. Their westward journey resulted in mountains along their western margins, including the Andes of South America.

C4 Cenozoic Era

The Cenozoic Era, beginning about 65 million years ago, is the period when mammals became the dominant form of life on land. Human beings first appeared in the later stages of the Cenozoic Era. In short, the modern world as we know it, with its characteristic geographical features and its animals and plants, came into being. All of the continents that we know today took shape during this era. A single catastrophic event may have been responsible for this relatively abrupt change from the Age of Reptiles to the Age of Mammals. Most scientists now believe that a huge asteroid or comet struck the Earth at the end of the Mesozoic and the beginning of the Cenozoic eras, causing the extinction of many forms of life, including the dinosaurs.
Evidence of this collision came with the discovery of a large impact crater off the coast of Mexico’s Yucatán Peninsula and the worldwide finding of iridium, a metallic element rare on Earth but abundant in meteorites, in rock layers dated from the end of the Cretaceous Period. The extinction of the dinosaurs opened the way for mammals to become the dominant land animals. The Cenozoic Era is divided into the Tertiary and the Quaternary periods. The Tertiary Period lasted from about 65 million to about 1.8 million years ago. The Quaternary Period began about 1.8 million years ago and continues to the present day. These periods are further subdivided into epochs, such as the Pleistocene, from 1.8 million to 10,000 years ago, and the Holocene, from 10,000 years ago to the present. Early in the Tertiary Period, Pangaea was completely disassembled, and the modern continents were all clearly outlined. India and other continental masses began colliding with southern Asia to form the Himalayas. Africa and a series of smaller microcontinents began colliding with southern Europe to form the Alps. The Tethys Ocean was nearly closed and began to resemble today’s Mediterranean Sea. As the Tethys continued to narrow, the Atlantic continued to open, becoming an ever-wider ocean. Iceland appeared as a new island in later Tertiary time, and its active volcanism today indicates that seafloor spreading is still causing the country to grow. Late in the Tertiary Period, about 6 million years ago, humans began to evolve in Africa. These early humans began to migrate to other parts of the world between 2 million and 1.7 million years ago. The Quaternary Period marks the onset of the great ice ages. Many times, perhaps at least once every 100,000 years on average, vast glaciers 3 km (2 mi) thick invaded much of North America, Europe, and parts of Asia. The glaciers eroded considerable amounts of material that stood in their paths, gouging out U-shaped valleys. 
Anatomically modern human beings, known as Homo sapiens, became the dominant form of life in the Quaternary Period. Most anthropologists (scientists who study human life and culture) believe that anatomically modern humans originated only recently in Earth’s 4.6-billion-year history, within the past 200,000 years. See also Human Evolution.

VII EARTH’S FUTURE

With the rise of human civilization about 8,000 years ago, and especially since the Industrial Revolution in the mid-1700s, human beings have altered the surface, water, and atmosphere of Earth. In doing so, they have become active geological agents, not unlike other forces of change that influence the planet. As a result, Earth’s immediate future depends to a great extent on the behavior of humans. For example, the widespread use of fossil fuels is releasing carbon dioxide and other greenhouse gases into the atmosphere and threatens to warm the planet’s surface. This global warming could melt glaciers and the polar ice caps, which could flood coastlines around the world and many island nations. In effect, the carbon dioxide that was removed from Earth’s early atmosphere by the oceans and by primitive plant and animal life, and subsequently buried as fossilized remains in sedimentary rock, is being released back into the atmosphere and is threatening the existence of living things. See also Global Warming. Even without human intervention, Earth will continue to change because it is geologically active. Many scientists believe that some of these changes can be predicted. For example, based on studies of the rate at which the seafloor is spreading in the Red Sea, some geologists predict that in 200 million years the Red Sea will be the same size as the Atlantic Ocean is today. Other scientists predict that the continent of Asia will break apart millions of years from now, and as it does, Lake Baikal in Siberia will become a vast ocean, separating two landmasses that once made up the Asian continent.
In the far distant future, however, scientists believe that Earth will become an uninhabitable planet, scorched by the Sun. Knowing the rate at which nuclear fusion occurs in the Sun and knowing the Sun’s mass, astrophysicists (scientists who study stars) have calculated that the Sun will become brighter and hotter about 3 billion years from now, when it will be hot enough to boil Earth’s oceans away. Based on studies of how other Sun-like stars have evolved, scientists predict that the Sun will become a red giant, a star with a very large, hot atmosphere, about 7 billion years from now. As a red giant, the Sun will expand its outer atmosphere until it engulfs the planet Mercury. The Sun will then be 2,000 times brighter than it is now and so hot that it will melt Earth’s rocks. Earth will end its existence as a burnt cinder. See also Sun. Three billion years, however, spans the lifetimes of millions of human generations. Perhaps by then, humans will have learned how to journey beyond the solar system to colonize other planets in the Milky Way Galaxy and find another place to call “home.”

Reviewed By: Alan V. Morgan
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.

Question No:3 (a) Endocrine System

I INTRODUCTION

Endocrine System, group of specialized organs and body tissues that produce, store, and secrete chemical substances known as hormones. As the body's chemical messengers, hormones transfer information and instructions from one set of cells to another. Because of the hormones they produce, endocrine organs have a great deal of influence over the body. Among their many jobs are regulating the body's growth and development, controlling the function of various tissues, supporting pregnancy and other reproductive functions, and regulating metabolism. Endocrine organs are sometimes called ductless glands because they have no ducts connecting them to specific body parts.
The hormones they secrete are released directly into the bloodstream. In contrast, the exocrine glands, such as the sweat glands or the salivary glands, release their secretions directly to target areas—for example, the skin or the inside of the mouth. Some of the body's glands are described as endo-exocrine glands because they secrete hormones as well as other types of substances. Even some nonglandular tissues produce hormone-like substances—nerve cells produce chemical messengers called neurotransmitters, for example. The earliest reference to the endocrine system comes from ancient Greece, in about 400 BC. However, it was not until the 16th century that accurate anatomical descriptions of many of the endocrine organs were published. Research during the 20th century vastly improved our understanding of hormones and how they function in the body. Today, endocrinology, the study of the endocrine glands, is an important branch of modern medicine. Endocrinologists are medical doctors who specialize in researching and treating disorders and diseases of the endocrine system.

II COMPONENTS OF THE ENDOCRINE SYSTEM

The primary glands that make up the human endocrine system are the hypothalamus, pituitary, thyroid, parathyroid, adrenal, pineal body, and reproductive glands—the ovaries and testes. The pancreas, an organ often associated with the digestive system, is also considered part of the endocrine system. In addition, some nonendocrine organs are known to actively secrete hormones. These include the brain, heart, lungs, kidneys, liver, thymus, skin, and placenta. Almost all body cells can either produce or convert hormones, and some secrete hormones. For example, glucagon, a hormone that raises glucose levels in the blood when the body needs extra energy, is made in the pancreas but also in the wall of the gastrointestinal tract. However, it is the endocrine glands that are specialized for hormone production.
They efficiently manufacture chemically complex hormones from simple chemical substances—for example, amino acids and carbohydrates—and they regulate their secretion more efficiently than any other tissues. The hypothalamus, found deep within the brain, directly controls the pituitary gland. It is sometimes described as the coordinator of the endocrine system. When information reaching the brain indicates that changes are needed somewhere in the body, nerve cells in the hypothalamus secrete body chemicals that either stimulate or suppress hormone secretions from the pituitary gland. Acting as a liaison between the brain and the pituitary gland, the hypothalamus is the primary link between the endocrine and nervous systems. Located in a bony cavity just below the base of the brain is one of the endocrine system's most important members: the pituitary gland. Often described as the body’s master gland, the pituitary secretes several hormones that regulate the function of the other endocrine glands. Structurally, the pituitary gland is divided into two parts, the anterior and posterior lobes, each having separate functions. The anterior lobe regulates the activity of the thyroid and adrenal glands as well as the reproductive glands. It also regulates the body's growth and stimulates milk production in women who are breast-feeding. Hormones secreted by the anterior lobe include adrenocorticotropic hormone (ACTH), thyroid-stimulating hormone (TSH), luteinizing hormone (LH), follicle-stimulating hormone (FSH), growth hormone (GH), and prolactin. The anterior lobe also secretes endorphins, chemicals that act on the nervous system to reduce sensitivity to pain. The posterior lobe of the pituitary gland contains the nerve endings (axons) from the hypothalamus, which stimulate or suppress hormone production. This lobe secretes antidiuretic hormone (ADH), which controls water balance in the body, and oxytocin, which controls muscle contractions in the uterus.
The thyroid gland, located in the neck, secretes hormones in response to stimulation by TSH from the pituitary gland. The thyroid secretes hormones—for example, thyroxine and triiodothyronine—that regulate growth and metabolism and play a role in brain development during childhood. The parathyroid glands are four small glands located at the four corners of the thyroid gland. The hormone they secrete, parathyroid hormone, regulates the level of calcium in the blood. Located on top of the kidneys, the adrenal glands have two distinct parts. The outer part, called the adrenal cortex, produces a variety of hormones called corticosteroids, which include cortisol. These hormones regulate salt and water balance in the body, prepare the body for stress, regulate metabolism, interact with the immune system, and influence sexual function. The inner part, the adrenal medulla, produces catecholamines, such as epinephrine, also called adrenaline, which increase the blood pressure and heart rate during times of stress. The reproductive components of the endocrine system, called the gonads, secrete sex hormones in response to stimulation from the pituitary gland. Located in the pelvis, the female gonads, the ovaries, produce eggs. They also secrete a number of female sex hormones, including estrogen and progesterone, which control development of the reproductive organs, stimulate the appearance of female secondary sex characteristics, and regulate menstruation and pregnancy. Located in the scrotum, the male gonads, the testes, produce sperm and also secrete a number of male sex hormones, or androgens. The androgens, the most important of which is testosterone, regulate development of the reproductive organs, stimulate male secondary sex characteristics, and stimulate muscle growth. The pancreas is positioned in the upper abdomen, just under the stomach.
The major part of the pancreas, called the exocrine pancreas, functions as an exocrine gland, secreting digestive enzymes into the gastrointestinal tract. Distributed through the pancreas are clusters of endocrine cells that secrete insulin, glucagon, and somatostatin. These hormones all participate in regulating energy and metabolism in the body. The pineal body, also called the pineal gland, is located in the middle of the brain. It secretes melatonin, a hormone that may help regulate the wake-sleep cycle. Research has shown that disturbances in the secretion of melatonin are responsible, in part, for the jet lag associated with long-distance air travel.

III HOW THE ENDOCRINE SYSTEM WORKS

Hormones from the endocrine organs are secreted directly into the bloodstream, where special proteins usually bind to them, helping to keep the hormones intact as they travel throughout the body. The proteins also act as a reservoir, allowing only a small fraction of the hormone circulating in the blood to affect the target tissue. Specialized proteins in the target tissue, called receptors, bind with the hormones in the bloodstream, inducing chemical changes in response to the body’s needs. Typically, only minute concentrations of a hormone are needed to achieve the desired effect. Too much or too little hormone can be harmful to the body, so hormone levels are regulated by a feedback mechanism. Feedback works something like a household thermostat. When the heat in a house falls, the thermostat responds by switching the furnace on, and when the temperature is too warm, the thermostat switches the furnace off. Usually, the change that a hormone produces also serves to regulate that hormone's secretion. For example, parathyroid hormone causes the body to increase the level of calcium in the blood. As calcium levels rise, the secretion of parathyroid hormone then decreases.
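The thermostat-like parathyroid example above can be sketched as a tiny simulation. All of the constants below are invented purely for illustration; the only point is the feedback shape: secretion is proportional to how far calcium sits below the set point, and the secreted hormone in turn raises calcium, so the level converges instead of running away.

```python
def parathyroid_feedback(calcium, set_point=9.5, steps=50):
    """Toy negative-feedback loop (all constants are hypothetical):
    PTH secretion rises when blood calcium is below the set point,
    and the secreted PTH in turn raises blood calcium."""
    for _ in range(steps):
        error = set_point - calcium
        pth = max(0.0, 0.5 * error)            # secretion falls as calcium rises
        calcium += 0.8 * pth - 0.1 * (calcium - set_point)  # PTH raises calcium
    return calcium

print(round(parathyroid_feedback(calcium=7.0), 2))  # prints 9.5
```

Starting from a low calcium level, each pass through the loop halves the remaining error, mirroring how the real feedback mechanism holds hormone-controlled quantities near a fixed operating point.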
This feedback mechanism allows for tight control over hormone levels, which is essential for ideal body function. Other mechanisms may also influence feedback relationships. For example, if an individual becomes ill, the adrenal glands increase the secretions of certain hormones that help the body deal with the stress of illness. The adrenal glands work in concert with the pituitary gland and the brain to increase the body’s tolerance of these hormones in the blood, preventing the normal feedback mechanism from decreasing secretion levels until the illness is gone. Long-term changes in hormone levels can influence the endocrine glands themselves. For example, if hormone secretion is chronically low, the increased stimulation by the feedback mechanism leads to growth of the gland. This can occur in the thyroid if a person's diet has insufficient iodine, which is essential for thyroid hormone production. Constant stimulation from the pituitary gland to produce the needed hormone causes the thyroid to grow, eventually producing a medical condition known as goiter.

IV DISEASES OF THE ENDOCRINE SYSTEM

Endocrine disorders are classified in two ways: disturbances in the production of hormones, and the inability of tissues to respond to hormones. The first type, called production disorders, is divided into hypofunction (insufficient activity) and hyperfunction (excess activity). Hypofunction disorders can have a variety of causes, including malformations in the gland itself. Sometimes one of the enzymes essential for hormone production is missing, or the hormone produced is abnormal. More commonly, hypofunction is caused by disease or injury. Tuberculosis can appear in the adrenal glands, autoimmune diseases can affect the thyroid, and treatments for cancer—such as radiation therapy and chemotherapy—can damage any of the endocrine organs. Hypofunction can also result when target tissue is unable to respond to hormones.
In many cases, the cause of a hypofunction disorder is unknown. Hyperfunction can be caused by glandular tumors that secrete hormone without responding to feedback controls. In addition, some autoimmune conditions create antibodies that have the side effect of stimulating hormone production. Infection of an endocrine gland can have the same result. Accurately diagnosing an endocrine disorder can be extremely challenging, even for an astute physician. Many diseases of the endocrine system develop over time, and clear, identifying symptoms may not appear for many months or even years. An endocrinologist evaluating a patient for a possible endocrine disorder relies on the patient's history of signs and symptoms, a physical examination, and the family history—that is, whether any endocrine disorders have been diagnosed in other relatives. A variety of laboratory tests—for example, a radioimmunoassay—are used to measure hormone levels. Tests that directly stimulate or suppress hormone production are also sometimes used, and genetic testing for deoxyribonucleic acid (DNA) mutations affecting endocrine function can be helpful in making a diagnosis. Tests based on diagnostic radiology show anatomical pictures of the gland in question. A functional image of the gland can be obtained with radioactive labeling techniques used in nuclear medicine. One of the most common diseases of the endocrine system is diabetes mellitus, which occurs in two forms. The first, called diabetes mellitus Type 1, is caused by inadequate secretion of insulin by the pancreas. Diabetes mellitus Type 2 is caused by the body's inability to respond to insulin. Both types have similar symptoms, including excessive thirst, hunger, and urination as well as weight loss. Laboratory tests that detect glucose in the urine and elevated levels of glucose in the blood usually confirm the diagnosis.
Treatment of diabetes mellitus Type 1 requires regular injections of insulin; some patients with Type 2 can be treated with diet, exercise, or oral medication. Diabetes can cause a variety of complications, including kidney problems, pain due to nerve damage, blindness, and coronary heart disease. Recent studies have shown that controlling blood sugar levels considerably reduces the risk of developing diabetes complications. Diabetes insipidus is caused by a deficiency of vasopressin, one of the antidiuretic hormones (ADH) secreted by the posterior lobe of the pituitary gland. Patients often experience increased thirst and urination. Treatment is with drugs, such as synthetic vasopressin, that help the body maintain water and electrolyte balance. Hypothyroidism is caused by an underactive thyroid gland, which results in a deficiency of thyroid hormone. Hypothyroid disorders include myxedema and cretinism, more properly known as congenital hypothyroidism. Myxedema develops in older adults, usually after age 40, and causes lethargy, fatigue, and mental sluggishness. Congenital hypothyroidism, which is present at birth, can cause more serious complications, including mental retardation, if left untreated. Screening programs exist in most countries to test newborns for this disorder. By providing the body with replacement thyroid hormones, almost all of the complications are completely avoidable. Addison's disease is caused by decreased function of the adrenal cortex. Weakness, fatigue, abdominal pains, nausea, dehydration, fever, and hyperpigmentation (tanning without sun exposure) are among the many possible symptoms. Treatment involves providing the body with replacement corticosteroid hormones as well as dietary salt. Cushing's syndrome is caused by excessive secretion of glucocorticoids, the subgroup of corticosteroid hormones that includes hydrocortisone, by the adrenal glands.
Symptoms may develop over many years prior to diagnosis and may include obesity, physical weakness, easily bruised skin, acne, hypertension, and psychological changes. Treatment may include surgery, radiation therapy, chemotherapy, or blockage of hormone production with drugs. Thyrotoxicosis is due to excess production of thyroid hormones. Its most common cause is Graves' disease, an autoimmune disorder in which specific antibodies are produced that stimulate the thyroid gland. Thyrotoxicosis is eight to ten times more common in women than in men. Symptoms include nervousness, sensitivity to heat, heart palpitations, and weight loss. Many patients experience protruding eyes and tremors. Drugs that inhibit thyroid activity, surgery to remove the thyroid gland, and radioactive iodine that destroys the gland are common treatments. Acromegaly and gigantism are both caused by a pituitary tumor that stimulates production of excessive growth hormone, causing abnormal growth in particular parts of the body. Acromegaly is rare and usually develops over many years in adults. Gigantism occurs when the excess of growth hormone begins in childhood.

Contributed By: Gad B. Kletter

(b) Ecosystem

I INTRODUCTION

Ecosystem, organisms living in a particular environment, such as a forest or a coral reef, and the physical parts of the environment that affect them. The term ecosystem was coined in 1935 by the British ecologist Sir Arthur George Tansley, who described natural systems in “constant interchange” among their living and nonliving parts. The ecosystem concept fits into an ordered view of nature that was developed by scientists to simplify the study of the relationships between organisms and their physical environment, a field known as ecology. At the top of the hierarchy is the planet’s entire living environment, known as the biosphere.
Within this biosphere are several large categories of living communities known as biomes that are usually characterized by their dominant vegetation, such as grasslands, tropical forests, or deserts. The biomes are in turn made up of ecosystems. The living, or biotic, parts of an ecosystem, such as the plants, animals, and bacteria found in soil, are known as a community. The physical surroundings, or abiotic components, such as the minerals found in the soil, are known as the environment or habitat. Any given place may have several different ecosystems that vary in size and complexity. A tropical island, for example, may have a rain forest ecosystem that covers hundreds of square miles, a mangrove swamp ecosystem along the coast, and an underwater coral reef ecosystem. No matter how the size or complexity of an ecosystem is characterized, all ecosystems exhibit a constant exchange of matter and energy between their biotic and abiotic components. Ecosystem components are so interconnected that a change in any one component of an ecosystem will cause subsequent changes throughout the system.

II HOW ECOSYSTEMS WORK

The living portion of an ecosystem is best described in terms of feeding levels known as trophic levels. Green plants make up the first trophic level and are known as primary producers. Plants are able to convert energy from the sun into food in a process known as photosynthesis. In the second trophic level, the primary consumers—known as herbivores—are animals and insects that obtain their energy solely by eating the green plants. The third trophic level is composed of the secondary consumers, flesh-eating or carnivorous animals that feed on herbivores. At the fourth level are the tertiary consumers, carnivores that feed on other carnivores. Finally, the fifth trophic level consists of the decomposers, organisms such as fungi and bacteria that break down dead or dying matter into nutrients that can be used again.
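The flow of energy up these trophic levels can be sketched numerically. The loop below applies the textbook rule of thumb that only roughly 10 percent of the energy captured at one level reaches the next; both that figure and the starting energy value are illustrative assumptions, not numbers from this article.

```python
# Energy flow up the consumer trophic levels described above.
# Assumption (not from the article): the common "10 percent rule" of
# ecology, with an arbitrary 10,000 units fixed by the producers.
levels = ["primary producers", "herbivores",
          "secondary consumers", "tertiary consumers"]
energy = 10000.0  # arbitrary energy units captured by green plants
for level in levels:
    print(f"{level:20s} {energy:10.1f}")
    energy *= 0.10  # ~90% is lost to heat and metabolism at each transfer
```

The steep drop at each step is why food chains rarely support more than a handful of trophic levels, and it is the same compounding effect that concentrates nonbiodegradable pollutants such as DDT as they pass up the food chain.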
Some or all of these trophic levels combine to form what is known as a food web, the ecosystem’s mechanism for circulating and recycling energy and materials. For example, in an aquatic ecosystem algae and other aquatic plants use sunlight to produce energy in the form of carbohydrates. Primary consumers such as insects and small fish may feed on some of this plant matter, and are in turn eaten by secondary consumers, such as salmon. A brown bear may play the role of the tertiary consumer by catching and eating salmon. Bacteria and fungi may then feed upon and decompose the salmon carcass left behind by the bear, enabling the valuable nonliving components of the ecosystem, such as chemical nutrients, to leach back into the soil and water, where they can be absorbed by the roots of plants. In this way nutrients and the energy that green plants derive from sunlight are efficiently transferred and recycled throughout the ecosystem. In addition to the exchange of energy, ecosystems are characterized by several other cycles. Elements such as carbon and nitrogen travel throughout the biotic and abiotic components of an ecosystem in processes known as nutrient cycles. For example, nitrogen traveling in the air may be snatched by a tree-dwelling, or epiphytic, lichen that converts it to a form useful to plants. When rain drips through the lichen and falls to the ground, or the lichen itself falls to the forest floor, the nitrogen from the raindrops or the lichen is leached into the soil to be used by plants and trees. Another process important to ecosystems is the water cycle, the movement of water from ocean to atmosphere to land and eventually back to the ocean. An ecosystem such as a forest or wetland plays a significant role in this cycle by storing, releasing, or filtering the water as it passes through the system. 
Every ecosystem is also characterized by a disturbance cycle, a regular cycle of events such as fires, storms, floods, and landslides that keeps the ecosystem in a constant state of change and adaptation. Some species even depend on the disturbance cycle for survival or reproduction. For example, longleaf pine forests depend on frequent low-intensity fires for reproduction. The cones of the trees, which contain the reproductive structures, are sealed shut with a resin that melts away to release the seeds only under high heat. III ECOSYSTEM MANAGEMENT Humans benefit from these smooth-functioning ecosystems in many ways. Healthy forests, streams, and wetlands contribute to clean air and clean water by trapping fast-moving air and water, enabling impurities to settle out or be converted to harmless compounds by plants or soil. The diversity of organisms, or biodiversity, in an ecosystem provides essential foods, medicines, and other materials. But as human populations increase and their encroachment on natural habitats expands, humans are having detrimental effects on the very ecosystems on which they depend. The survival of natural ecosystems around the world is threatened by many human activities: bulldozing wetlands and clear-cutting forests—the systematic cutting of all trees in a specific area—to make room for new housing and agricultural land; damming rivers to harness the energy for electricity and water for irrigation; and polluting the air, soil, and water. Many organizations and government agencies have adopted a new approach to managing natural resources—naturally occurring materials that have economic or cultural value, such as commercial fisheries, timber, and water—in order to prevent their catastrophic depletion. This strategy, known as ecosystem management, treats resources as interdependent ecosystems rather than simply commodities to be extracted. 
Using advances in the study of ecology to protect the biodiversity of an ecosystem, ecosystem management encourages practices that enable humans to obtain necessary resources using methods that protect the whole ecosystem. Because regional economic prosperity may be linked to ecosystem health, the needs of the human community are also considered. Ecosystem management often requires special measures to protect threatened or endangered species that play key roles in the ecosystem. In the commercial shrimp trawling industry, for example, ecosystem management techniques protect loggerhead sea turtles. In the last thirty years, populations of loggerhead turtles on the southeastern coasts of the United States have been declining at alarming rates due to beach development and the ensuing erosion, bright lights, and traffic, which make it nearly impossible for female turtles to build nests on beaches. At sea, loggerheads are threatened by oil spills and plastic debris, offshore dredging, injury from boat propellers, and getting caught in fishing nets and equipment. In 1978 the species was listed as threatened under the Endangered Species Act. When scientists learned that commercial shrimp trawling nets were trapping and killing between 5,000 and 50,000 loggerhead sea turtles a year, they developed a large metal grid called a Turtle Excluder Device (TED) that fits into the trawl net, preventing 97 percent of trawl-related loggerhead turtle deaths while only minimally reducing the commercial shrimp harvest. In 1992 the National Marine Fisheries Service (NMFS) implemented regulations requiring commercial shrimp trawlers to use TEDs, effectively balancing the commercial demand for shrimp with the health and vitality of the loggerhead sea turtle population. Contributed By: Joel P. Clement Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. 
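A quick arithmetic check on the Turtle Excluder Device figures quoted above (a rough sketch using only the numbers from the passage):

```python
# TEDs are said above to prevent about 97 percent of trawl-related
# loggerhead deaths, which were estimated at 5,000 to 50,000 a year.
TED_EFFECTIVENESS = 0.97

def deaths_with_teds(annual_trawl_deaths):
    """Estimated trawl-related deaths remaining if TEDs are in use."""
    return annual_trawl_deaths * (1 - TED_EFFECTIVENESS)
```

At the upper estimate of 50,000 deaths a year, universal TED use would leave on the order of 1,500.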
(c) Troposphere Troposphere, lowest layer of the earth's atmosphere and site of all weather on the earth. The troposphere is bounded on the top by a layer of air called the tropopause, which separates the troposphere from the stratosphere, and on the bottom by the surface of the earth. The troposphere is thicker at the equator (16 km/10 mi) than at the poles (8 km/5 mi). The temperature of the troposphere is warmest in the tropical (latitude 0º to about 30º north and south) and subtropical (latitude about 30º to about 40º north and south) climatic zones (see climate) and coldest at the polar climatic zones (latitude about 70º to 90º north and south). Observations from weather balloons have shown that temperature decreases with height at an average of 6.5º C per 1000 m (3.6º F per 1000 ft), reaching about -80º C (about -110º F) above the tropical regions and about -50º C (about -60º F) above the polar regions. The troposphere contains 75 percent of the atmosphere's mass—at sea level the overlying air exerts an average pressure (see Pressure) of 1.03 kg/sq cm (14.7 lb/sq in)—and most of the atmosphere's water vapor. Water vapor concentration varies from trace amounts in polar regions to nearly 4 percent in the tropics. The most prevalent gases are nitrogen (78 percent) and oxygen (21 percent), with the remaining 1 percent consisting of argon (0.9 percent) and traces of hydrogen, ozone (a form of oxygen), methane, and other constituents. Carbon dioxide is present in small amounts, but its concentration has risen substantially since 1900. Like water vapor, carbon dioxide is a greenhouse gas (see Greenhouse Effect), which traps some of the earth's heat close to the surface and prevents its release into space. Scientists fear that the increasing amounts of carbon dioxide could raise the earth's surface temperature during the next century, bringing significant changes to worldwide weather patterns. 
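The average lapse rate quoted above (6.5º C per 1000 m) can be turned into a back-of-envelope estimate. This is an illustrative sketch, not a meteorological model:

```python
# Back-of-envelope sketch of the average tropospheric lapse rate
# quoted above: temperature falls about 6.5 deg C per 1,000 m.
LAPSE_RATE_C_PER_KM = 6.5

def temp_at_altitude(surface_temp_c, altitude_m):
    """Rough air temperature (deg C) at a given altitude; valid only
    within the troposphere, where the average lapse rate applies."""
    return surface_temp_c - LAPSE_RATE_C_PER_KM * altitude_m / 1000.0
```

With an assumed 30º C tropical surface temperature and the 16 km tropopause height quoted above, this gives roughly -74º C, in the neighborhood of the -80º C the article reports for the air above the tropics.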
These changes in weather patterns may include a shift in climatic zones and the melting of the polar ice caps, which could raise the level of the world's oceans. The uneven heating of the regions of the troposphere by the sun (the sun warms the air at the equator more than the air at the poles) causes convection currents (see Heat Transfer), large-scale patterns of winds that move heat and moisture around the globe. In the Northern and Southern hemispheres, air rises along the equator and subpolar (latitude about 50º to about 70º north and south) climatic regions and sinks in the polar and subtropical regions. Air is deflected by the earth's rotation as it moves between the poles and equator, creating belts of surface winds moving from east to west (easterly winds) in tropical and polar regions, and winds moving from west to east (westerly winds) in the middle latitudes. This global circulation is disrupted by the circular wind patterns of migrating high and low air pressure areas, plus locally abrupt changes in wind speed and direction known as turbulence. A common feature of the troposphere of densely populated areas is smog, which restricts visibility and is irritating to the eyes and throat. Smog is produced when pollutants accumulate close to the surface beneath an inversion layer (a layer of air in which the usual rule that temperature of air decreases with altitude does not apply), and undergo a series of chemical reactions in the presence of sunlight. Inversions suppress convection, or the normal expansion and rise of warm air, and prevent pollutants from escaping into the upper atmosphere. Convection is the mechanism responsible for the vertical transport of heat in the troposphere, while horizontal heat transfer is accomplished through advection. The exchange and movement of water between the earth and atmosphere is called the water cycle. 
The cycle, which occurs in the troposphere, begins as the sun evaporates large amounts of water from the earth's surface and the moisture is transported to other regions by the wind. As air rises, expands, and cools, water vapor condenses and clouds develop. Clouds cover large portions of the earth at any given time and vary from fair-weather cirrus to towering cumulus clouds (see Cloud). When liquid or solid water particles grow large enough in size, they fall toward the earth as precipitation. The type of precipitation that reaches the ground, be it rain, snow, sleet, or freezing rain, depends upon the temperature of the air through which it falls. As sunlight enters the atmosphere, a portion is immediately reflected back to space, but the rest penetrates the atmosphere and is absorbed by the earth's surface. This energy is then reemitted by the earth back into the atmosphere as long-wave radiation. Carbon dioxide and water molecules absorb this energy and emit much of it back toward the earth again. This delicate exchange of energy between the earth's surface and atmosphere keeps the average global temperature from changing drastically from year to year. Contributed By: Frank Christopher Hawthorne (d) Carbon Cycle Carbon Cycle (ecology) I INTRODUCTION Carbon Cycle (ecology), in ecology, the cycle of carbon usage by which energy flows through the earth's ecosystem. The basic cycle begins when photosynthesizing plants (see Photosynthesis) use carbon dioxide (CO2) found in the atmosphere or dissolved in water. Some of this carbon is incorporated in plant tissue as carbohydrates, fats, and protein; the rest is returned to the atmosphere or water primarily by aerobic respiration. Carbon is thus passed on to herbivores that eat the plants and thereby use, rearrange, and degrade the carbon compounds. 
Much of it is given off as CO2, primarily as a by-product of aerobic respiration, but some is stored in animal tissue and is passed on to carnivores feeding on the herbivores. Ultimately, all the carbon compounds are broken down by decomposition, and the carbon is released as CO2 to be used again by plants. II AIR-WATER EXCHANGES On a global scale the carbon cycle involves an exchange of CO2 between two great reservoirs: the atmosphere and the earth's waters. Atmospheric CO2 enters water by diffusion across the air-water surface. If the CO2 concentration in the water is less than that in the atmosphere, it diffuses into water, but if the CO2 concentration is greater in the water than in the atmosphere, CO2 enters the atmosphere. Additional exchanges take place within aquatic ecosystems. Excess carbon may combine with water to form carbonates and bicarbonates. Carbonates may precipitate out and become deposited in bottom sediments. Some carbon is incorporated in the forest-vegetation biomass (living matter) and may remain out of circulation for hundreds of years. Incomplete decomposition of organic matter in wet areas results in the accumulation of peat. Such accumulation during the Carboniferous period created great stores of fossil fuels: coal, oil, and gas. III TOTAL CARBON POOL The total carbon pool, estimated at about 49,000 metric gigatons (1 metric gigaton equals 10⁹ metric tons), is distributed among organic and inorganic forms. Fossil carbon accounts for 22 percent of the total pool. The oceans contain 71 percent of the world's carbon, mostly in the form of bicarbonate and carbonate ions. An additional 3 percent is in dead organic matter and phytoplankton. Terrestrial ecosystems, in which forests are the main reservoir, hold about 3 percent of the total carbon. The remaining 1 percent is held in the atmosphere, circulated, and used in photosynthesis. 
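The distribution just described can be checked with simple arithmetic. This sketch uses only the figures quoted above:

```python
# Arithmetic check on the carbon-pool percentages quoted above,
# out of a total of about 49,000 metric gigatons.
TOTAL_GT = 49_000
SHARES = {
    "oceans": 0.71,
    "fossil carbon": 0.22,
    "dead organic matter and phytoplankton": 0.03,
    "terrestrial ecosystems": 0.03,
    "atmosphere": 0.01,
}

# Absolute pool sizes in metric gigatons of carbon.
pool_sizes = {name: share * TOTAL_GT for name, share in SHARES.items()}
```

The shares sum to 100 percent, and the atmospheric pool works out to about 490 gigatons.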
IV ADDITIONS TO ATMOSPHERE Because of the burning of fossil fuels, the clearing of forests, and other such practices, the amount of CO2 in the atmosphere has been increasing since the Industrial Revolution. Atmospheric concentrations have risen from an estimated 260 to 300 parts per million (ppm) in preindustrial times to more than 350 ppm today. This increase accounts for only half of the estimated amount of carbon dioxide poured into the atmosphere. The other 50 percent has probably been taken up by and stored in the oceans. Although terrestrial vegetation may take up considerable quantities of carbon, it is also an additional source of CO2. Atmospheric CO2 acts as a shield over the earth. It is penetrated by short-wave radiation from outer space but blocks the escape of long-wave radiation. As increased quantities of CO2 are added to the atmosphere, the shield thickens and more heat is retained, increasing global temperatures. Although such increases have not yet been great enough to cancel out natural climatic variability, projected increases in CO2 from the burning of fossil fuels suggest that global temperatures could rise some 2° to 6° C (about 4° to 11° F) by early in the 21st century. This increase would be significant enough to alter global climates and thereby affect human welfare. See also Air Pollution; Greenhouse Effect. Contributed By: Robert Leo Smith (e) Meningitis Meningitis I INTRODUCTION Meningitis, inflammation of the meninges, the membranes that surround the brain and spinal cord. Meningitis may be caused by a physical injury, a reaction to certain drugs, or more commonly, infection by certain viruses, bacteria, fungi, or parasites. This article focuses on meningitis caused by viral or bacterial infection. 
In the United States viral meningitis is the most common form of the disease, while bacterial meningitis, which affects an estimated 17,500 people each year, is the most serious form of the disease. Most cases of both viral and bacterial meningitis occur in the first five years of life. II CAUSE The most common causes of viral meningitis are coxsackie viruses and echoviruses, although herpesviruses, the mumps virus, and many other viruses can also cause the disease. Viral meningitis is rarely fatal, and most patients recover from the disease completely. Most cases of bacterial meningitis are caused by one of three species of bacteria— Haemophilus influenzae, Streptococcus pneumoniae, and Neisseria meningitidis. Many other bacteria, including Escherichia coli and the bacteria that are responsible for tuberculosis and syphilis, can also cause the disease. Bacterial meningitis can be fatal if not treated promptly. Some children who survive the infection are left with permanent neurological impairments, such as hearing loss or learning disabilities. Many of the microorganisms that cause meningitis are quite common in the environment and are usually harmless. The microorganisms typically enter the body through the respiratory system or, sometimes, through the middle ear or nasal sinuses. Many people carry these bacteria or viruses without having any symptoms at all, while others experience minor, coldlike symptoms. Meningitis only develops if these microorganisms enter a patient’s bloodstream and then the cerebrospinal fluid (CSF), which surrounds the brain and spinal cord. The CSF contains no protective white blood cells to fight infection, so once the microorganisms enter the CSF, they multiply rapidly and make a person sick. Although the viruses and bacteria that cause meningitis are contagious, not everyone who comes in contact with someone with meningitis will develop the disease. In fact, meningitis typically occurs in isolated cases. 
Occasionally outbreaks of meningitis caused by Neisseria meningitidis, also known as meningococcal meningitis, occur in group living situations, such as day-care centers, college dormitories, or military barracks. A child whose immune system is weakened—due to a disease or genetic disorder, for instance—is at increased risk for developing meningitis. In general, however, scientists do not know why microorganisms that are usually harmless are able to cross into the CSF and cause meningitis in some people but not others. III SYMPTOMS AND DIAGNOSIS No matter what the cause, the symptoms of meningitis are always similar and usually develop rapidly, often over the course of a few hours. Nearly all patients with meningitis experience vomiting, high fever, and a stiff neck. Meningitis may also cause severe headache, back pain, muscle aches, sensitivity of the eyes to light, drowsiness, confusion, and even loss of consciousness. Some children have convulsions. In infants, the symptoms of meningitis are often more difficult to detect and may include irritability, lethargy, and loss of appetite. Most patients with meningococcal meningitis develop a rash of red, pinprick spots on the skin. The spots do not turn white when pressed, and they quickly grow to look like purple bruises. Meningitis is diagnosed by a lumbar puncture, or spinal tap, in which a doctor inserts a needle into the lower back to obtain a sample of CSF. The fluid is then tested for the presence of bacteria and other cells, as well as certain chemical changes that are characteristic of meningitis. IV TREATMENT AND PREVENTION It is imperative to seek immediate medical attention if the symptoms of meningitis develop in order to determine whether the meningitis is viral or bacterial. Any delays in treating bacterial meningitis can lead to stroke, severe brain damage, and even death. Patients with bacterial meningitis are usually hospitalized and given large doses of intravenous antibiotics. 
The specific antibiotic used depends on the bacterium responsible for the infection. Antibiotic therapy is very effective, and if treatment begins in time, the risk of dying from bacterial meningitis today is less than 15 percent. No specific treatment is available for viral meningitis. With bed rest, plenty of fluids, and medicine to reduce fever and control headache, most patients recover from viral meningitis within a week or two and suffer no lasting effects. Good hygiene to prevent the spread of viruses is the only method of preventing viral meningitis. To help prevent the spread of bacterial meningitis, antibiotics are sometimes given to family members and other people who have had close contact with patients who develop the disease. Vaccines are also available against some of the bacteria that can cause meningitis. A vaccine against one strain of Haemophilus influenzae, once the most common cause of bacterial meningitis, was introduced during the 1980s and has been a part of routine childhood immunization in the United States since 1990. This vaccine has dramatically reduced the number of cases of bacterial meningitis. Vaccines also exist for certain strains of Neisseria meningitidis and Streptococcus pneumoniae but are not a part of routine immunization. The Neisseria meningitidis vaccine is given to military recruits and people who are planning travel to areas of the world where outbreaks of meningococcal meningitis are common. The Streptococcus pneumoniae vaccine is recommended for people over age 65. Contributed By: David Spilker Question NO:4 Excretion The energy required for maintenance and proper functioning of the human body is supplied by food. After it is broken into fragments by chewing (see Teeth) and mixed with saliva, digestion begins. The food passes down the gullet into the stomach, where the process is continued by the gastric and intestinal juices. 
Thereafter, the mixture of food and secretions, called chyme, is pushed down the alimentary canal by peristalsis, rhythmic contractions of the smooth muscle of the gastrointestinal system. The contractions are initiated by the parasympathetic nervous system; such muscular activity can be inhibited by the sympathetic nervous system. Absorption of nutrients from chyme occurs mainly in the small intestine; unabsorbed food and secretions and waste substances from the liver pass to the large intestine and are expelled as feces. Water and water-soluble substances travel via the bloodstream from the intestines to the kidneys, which absorb all the constituents of the blood plasma except its proteins. The kidneys return most of the water and salts to the body, while excreting other salts and waste products, along with excess water, as urine. Blood enters the kidney through the renal artery. The artery divides into smaller and smaller blood vessels, called arterioles, eventually ending in the tiny capillaries of the glomerulus. The capillary walls here are quite thin, and the blood pressure within the capillaries is high. The result is that water, along with any substances that may be dissolved in it—typically salts, glucose or sugar, amino acids, and the waste products urea and uric acid—are pushed out through the thin capillary walls, where they are collected in Bowman's capsule. Larger particles in the blood, such as red blood cells and protein molecules, are too bulky to pass through the capillary walls and they remain in the bloodstream. The blood, which is now filtered, leaves the glomerulus through another arteriole, which branches into the meshlike network of blood vessels around the renal tubule. The blood then exits the kidney through the renal vein. Approximately 180 liters (about 50 gallons) of blood moves through the two kidneys every day. 
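As a rough arithmetic sketch, the filtration figure above, combined with the roughly 99 percent reabsorption the article goes on to describe, yields the daily urine volume directly:

```python
# The kidneys filter about 180 liters of fluid a day; roughly 99
# percent of the water and solutes is reabsorbed into the blood.
FILTERED_L_PER_DAY = 180.0
REABSORBED_FRACTION = 0.99

# Fluid not reabsorbed leaves the body as urine.
urine_l_per_day = FILTERED_L_PER_DAY * (1.0 - REABSORBED_FRACTION)
```

This gives about 1.8 liters, in line with the roughly 1.5 liters of urine the article reports.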
Urine production begins with the substances that the blood leaves behind during its passage through the kidney—the water, salts, and other substances collected from the glomerulus in Bowman’s capsule. This liquid, called glomerular filtrate, moves from Bowman’s capsule through the renal tubule. As the filtrate flows through the renal tubule, the network of blood vessels surrounding the tubule reabsorbs much of the water, salt, and virtually all of the nutrients, especially glucose and amino acids, that were removed in the glomerulus. This important process, called tubular reabsorption, enables the body to selectively keep the substances it needs while ridding itself of wastes. Eventually, about 99 percent of the water, salt, and other nutrients is reabsorbed. At the same time that the kidney reabsorbs valuable nutrients from the glomerular filtrate, it carries out an opposing task, called tubular secretion. In this process, unwanted substances from the capillaries surrounding the nephron are added to the glomerular filtrate. These substances include various charged particles called ions, including ammonium, hydrogen, and potassium ions. Together, glomerular filtration, tubular reabsorption, and tubular secretion produce urine, which flows into collecting ducts, which guide it into the microtubules of the pyramids. The urine is then stored in the renal cavity and eventually drained into the ureters, which are long, narrow tubes leading to the bladder. From the roughly 180 liters (about 50 gallons) of blood that the kidneys filter each day, about 1.5 liters (1.6 qt) of urine are produced. IV. OTHER FUNCTIONS OF THE KIDNEYS In addition to cleaning the blood, the kidneys perform several other essential functions. One such activity is regulation of the amount of water contained in the blood. 
This process is influenced by antidiuretic hormone (ADH), also called vasopressin, which is produced in the hypothalamus (a part of the brain that regulates many internal functions) and stored in the nearby pituitary gland. Receptors in the brain monitor the blood’s water concentration. When the amount of salt and other substances in the blood becomes too high, the pituitary gland releases ADH into the bloodstream. When it enters the kidney, ADH makes the walls of the renal tubules and collecting ducts more permeable to water, so that more water is reabsorbed into the bloodstream. The hormone aldosterone, produced by the adrenal glands, interacts with the kidneys to regulate the blood’s sodium and potassium content. High amounts of aldosterone cause the nephrons to reabsorb more sodium ions, more water, and fewer potassium ions; low levels of aldosterone have the reverse effect. The kidney’s responses to aldosterone help keep the blood’s salt levels within the narrow range that is best for crucial physiological activities. Aldosterone also helps regulate blood pressure. When blood pressure starts to fall, the kidney releases an enzyme (a specialized protein) called renin, which converts a blood protein into the hormone angiotensin. This hormone causes blood vessels to constrict, resulting in a rise in blood pressure. Angiotensin then induces the adrenal glands to release aldosterone, which promotes the reabsorption of sodium and water, further increasing blood volume and blood pressure. The kidney also adjusts the body's acid-base balance to prevent such blood disorders as acidosis and alkalosis, both of which impair the functioning of the central nervous system. If the blood is too acidic, meaning that there is an excess of hydrogen ions, the kidney moves these ions to the urine through the process of tubular secretion. 
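The ADH mechanism described above behaves like a simple negative-feedback loop. The following sketch is illustrative only; the threshold and permeability values are invented for the example:

```python
# Hedged control-loop sketch of the ADH mechanism described above:
# when the blood grows too concentrated, ADH is released and water
# reabsorption in the renal tubules rises. All numbers are invented.
SET_POINT = 1.0  # normalized blood solute concentration (hypothetical)

def adh_released(solute_concentration):
    """The pituitary releases ADH when the blood is too concentrated."""
    return solute_concentration > SET_POINT

def water_reabsorption(solute_concentration, base=0.99, boost=0.005):
    """Fraction of filtered water reabsorbed: ADH makes the tubule
    walls more permeable, nudging reabsorption upward. The base and
    boost values are illustrative assumptions, not data."""
    return base + (boost if adh_released(solute_concentration) else 0.0)
```

The point of the sketch is only the direction of the response: a more concentrated blood plasma triggers ADH release, which raises water reabsorption and so dilutes the blood again.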
An additional function of the kidney is the processing of vitamin D; the kidney converts this vitamin to an active form that stimulates bone development. Several hormones are produced in the kidney. One of these, erythropoietin, influences the production of red blood cells in the bone marrow. When the kidney detects that the number of red blood cells in the body is declining, it secretes erythropoietin. This hormone travels in the bloodstream to the bone marrow, stimulating the production and release of more red cells. V. KIDNEY DISEASE AND TREATMENT Urine, pale yellow fluid produced by the kidneys, composed of dissolved wastes and excess water or chemical substances from the body. It is produced when blood filters through the kidneys, which remove about 110 liters (230 pints) of watery fluid from the blood every day. Most of this fluid is reabsorbed into the blood, but the remainder is passed from the body as urine. Urine leaves the kidneys, passes to the bladder through two slender tubes, the ureters, and exits the body through the urethra. A healthy adult can produce between 0.5 and 2 liters (1 to 4 pints) of urine a day, but the quantity varies considerably, depending on fluid intake and loss of fluid from sweating, vomiting, or diarrhea. Water accounts for about 96 percent, by volume, of the urine excreted by a healthy person. Urine also contains small amounts of urea, chloride, sodium, potassium, ammonia, and calcium. Other substances, such as sugar, are sometimes excreted in the urine if their concentration in the body becomes too great. The volume, acidity, and salt concentration of urine are controlled by hormones. Measurements of the composition of urine are useful in the diagnosis of a wide variety of conditions, including kidney disease, diabetes, and pregnancy. 
Question No:5 Telephone Telephone I INTRODUCTION Telephone, instrument that sends and receives voice messages and data. Telephones convert speech and data to electrical signals that can be sent great distances. All telephones are linked by complex switching systems called central offices or exchanges, which establish the pathway for information to travel. Telephones are used for casual conversations, to conduct business, and to summon help in an emergency (as in the 911 service in the United States). The telephone has other uses that do not involve one person talking to another, including paying bills (the caller uses the telephone to communicate with a bank’s distant computer) and retrieving messages from an answering machine. In 2003 there were 621 main telephone lines per 1,000 people in the United States and 629 main telephone lines per 1,000 people in Canada. About half of the information passing through telephone lines travels entirely between special-purpose telephones, such as computers with modems. A modem converts the digital bits of a computer’s output to an audio tone, which is then converted to an electrical signal and passed over telephone lines to be decoded by a modem attached to a computer at the receiving end. Another special-purpose telephone is a facsimile machine, or fax machine, which produces a duplicate of a document at a distant point. II PARTS OF A TELEPHONE A basic telephone set contains a transmitter that converts the caller’s voice into electrical signals; a receiver that amplifies sound from an incoming call; a rotary or push-button dial; a ringer or alerter; and a small assembly of electrical parts, called the antisidetone network, that keeps the caller’s voice from sounding too loud through the receiver. If it is a two-piece telephone set, the transmitter and receiver are mounted in the handset, the ringer is typically in the base, and the dial may be in either the base or handset. 
The handset cord connects the base to the handset, and the line cord connects the telephone to the telephone line. More sophisticated telephones may vary from this pattern. A speakerphone has a microphone and speaker in the base in addition to the transmitter and receiver in the handset. Speakerphones allow callers’ hands to be free, and allow more than two people to listen and speak during a call. In a cordless phone, the handset cord is replaced by a radio link between the handset and base, but a line cord is still used. This allows a caller to move about in a limited area while on the telephone. A cellular phone has extremely miniaturized components that make it possible to combine the base and handset into one handheld unit. No line or handset cords are needed with a cellular phone. A cellular phone permits more mobility than a cordless phone. A Transmitter There are two common kinds of telephone transmitters: the carbon transmitter and the electret transmitter. The carbon transmitter is constructed by placing carbon granules between metal plates called electrodes. One of the metal plates is a thin diaphragm that takes variations in pressure caused by sound waves and transmits these variations to the carbon granules. The electrodes conduct electricity that flows through the carbon. Variations in pressure caused by sound waves hitting the diaphragm cause the electrical resistance of the carbon to vary—when the grains are squeezed together, they conduct electricity more easily; and when they are far apart, they conduct electricity less efficiently. The resultant current varies with the sound-wave pressure applied to the transmitter. The electret transmitter is composed of a thin disk of metal-coated plastic and a thicker, hollow metal disk. In the handset, the plastic disk is held slightly above most of the metal disk. The plastic disk is electrically charged, and an electric field is created in the space where the disks do not touch. 
Sound waves from the caller’s voice cause the plastic disk to vibrate, which changes the distance between the disks, and so changes the intensity of the electric field between them. The variations in the electric field are translated into variations of electric current, which travels across telephone lines. An amplifier using transistors is needed with an electret transmitter to obtain sufficiently strong variations of electric current. B Receiver The receiver of a telephone set is made from a flat ring of magnetic material with a short cuff of the same material attached to the ring’s outer rim. Underneath the magnetic ring and inside the magnetic cuff is a coil of wire through which electric current, representing the sounds from the distant telephone, flows. A thin diaphragm of magnetic material is suspended from the inside edges of the magnetic ring so it is positioned between the magnet and the coil. The magnetic field created by the magnet changes with the current in the coil and makes the diaphragm vibrate. The vibrating diaphragm creates sound waves that replicate the sounds that were transformed into electricity by the other person’s transmitter. C Alerter The alerter in a telephone is usually called the ringer, because for most of the telephone’s history, a bell was used to indicate a call. The alerter responds only to a special frequency of electricity that is sent by the exchange in response to the request for that telephone number. Creating an electronic replacement for the bell that can provide a pleasing yet attention-getting sound at a reasonable cost was a surprisingly difficult task. For many people, the sound of a bell is still preferable to the sound of an electronic alerter. However, since a mechanical bell requires a certain amount of space in the telephone to be effective, smaller telephones mandate the use of electronic alerters. D Dial The telephone dial has undergone major changes in its history. 
Two forms of dialing still exist within the telephone system: dial pulse from a rotary dial, and multifrequency tone, which is commonly called by its original trade name of Touch-Tone, from a push-button dial. In a rotary dial, the numerals one to nine, followed by zero, are placed in a circle behind round holes in a movable plate. The user places a finger in the hole corresponding to the desired digit and rotates the movable plate clockwise until the user’s finger hits the finger stop; then the user removes the finger. A spring mechanism causes the plate to return to its starting position, and, while the plate is turning, the mechanism opens an electrical switch the number of times equal to the dial digit. Zero receives ten switch openings since it is the last digit on the dial. The result is a number of "dial pulses" in the electrical current flowing between the telephone set and the exchange. Equipment at the exchange counts these pulses to determine the number being called. The rotary dial has been used since the 1920s. But mechanical dials are expensive to repair and the rotary-dialing process itself is slow, especially if a long string of digits is dialed. The development of inexpensive and reliable amplification provided by the introduction of the transistor in the 1960s made practical the design of a dialing system based on the transmission of relatively low power tones instead of the higher-power dial pulses. Today most telephones have push buttons instead of a rotary dial. Touch-Tone is an optional service, and telephone companies still maintain the ability to receive pulse dialing. Push-button telephones usually have a switch on the base that the customer can set to determine whether the telephone will send pulses or tones. E Business Telephones A large business will usually have its own switching machine called a Private Branch Exchange (PBX), with hundreds or possibly thousands of lines, all of which can be reached by dialing one number. 
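The two dialing schemes described above lend themselves to a short sketch. The pulse rule (each digit produces that many switch openings, with zero producing ten) and the Touch-Tone frequency pairs below follow the standard North American design; the function and variable names are invented for illustration.

```python
# Sketch of the two dialing schemes: dial pulse and multifrequency tone.

def dial_pulses(digit):
    """Number of switch openings a rotary dial produces for one digit.
    Zero, the last position on the dial, sends ten pulses."""
    return 10 if digit == 0 else digit

# Touch-Tone (DTMF) sends one low-group and one high-group tone per key.
DTMF_HZ = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

# Dialing 555-0199 as rotary pulses:
pulses = [dial_pulses(int(d)) for d in "5550199"]  # [5, 5, 5, 10, 1, 9, 9]
```

Note how the pulse scheme is slow (zero alone takes ten interruptions) while a tone pair is sent in one burst, which is the speed advantage described above.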
The extension telephones connected to the large business’s PBX are often identical to the simple single-line instruments used in residences. The telephones used by small businesses, which do not have their own PBX, must incorporate the capability of accessing several telephone lines and are called multiline sets. The small-business environment usually requires the capability of transferring calls from one set to another as well as intercom calls, which allow one employee to call another without using an outside telephone line. F Cellular Telephones A cellular telephone is designed to give the user maximum freedom of movement while using a telephone. A cellular telephone uses radio signals to communicate between the set and an antenna. The served area is divided into cells something like a honeycomb, and an antenna is placed within each cell and connected by telephone lines to one exchange devoted to cellular-telephone calls. This exchange connects cellular telephones to one another or transfers the call to a regular exchange if the call is between a cellular telephone and a noncellular telephone. The special cellular exchange, through computer control, selects the antenna closest to the telephone when service is requested. As the telephone roams, the exchange automatically determines when to change the serving cell based on the power of the radio signal received simultaneously at adjacent sites. This change occurs without interrupting conversation. Practical power considerations limit the distance between the telephone and the nearest cellular antenna, and since cellular phones use radio signals, it is very easy for unauthorized people to access communications carried out over cellular phones. Currently, digital cellular phones are gaining in popularity because the radio signals are harder to intercept and decode. III MAKING A TELEPHONE CALL A telephone call starts when the caller lifts a handset off the base. 
This closes an electrical switch that initiates the flow of a steady electric current over the line between the user’s location and the exchange. The exchange detects the current and returns a dial tone, a precise combination of two notes that lets a caller know the line is ready. Once the dial tone is heard, the caller uses a rotary or push-button dial mounted either on the handset or base to enter a sequence of digits, the telephone number of the called party. The switching equipment in the exchange removes the dial tone from the line after the first digit is received and, after receiving the last digit, determines whether the called party is in the same exchange or a different exchange. If the called party is in the same exchange, bursts of ringing current are applied to the called party’s line. Each telephone contains a ringer that responds to a specific electric frequency. When the called party answers the telephone by picking up the handset, steady current starts to flow in the called party’s line and is detected by the exchange. The exchange then stops applying ringing and sets up a connection between the caller and the called party. If the called party is in a different exchange from the caller, the caller’s exchange sets up a connection over the telephone network to the called party’s exchange. The called exchange then handles the process of ringing, detecting an answer, and notifying the calling exchange and billing machinery when the call is completed (in telephone terminology, a call is completed when the called party answers, not when the conversation is over). When the conversation is over, one or both parties hang up by replacing their handset on the base, stopping the flow of current. The exchange then initiates the process of taking down the connection, including notifying billing equipment of the duration of the call if appropriate. 
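The call-setup sequence just described can be summarized as a tiny state machine. The state and event names here are invented for illustration and are not telephone-industry terminology.

```python
# Minimal sketch of the exchange's view of one call, from off-hook to
# hang-up, following the sequence described in the text.

def exchange_step(state, event):
    """Advance the exchange's call state in response to one subscriber event."""
    transitions = {
        ("idle", "off_hook"): "dial_tone",          # steady current detected
        ("dial_tone", "first_digit"): "collecting",  # dial tone removed
        ("collecting", "last_digit"): "ringing",     # ringing current applied
        ("ringing", "answer"): "connected",          # call is "completed"
        ("connected", "hang_up"): "idle",            # connection taken down
    }
    return transitions[(state, event)]

state = "idle"
for event in ["off_hook", "first_digit", "last_digit", "answer", "hang_up"]:
    state = exchange_step(state, event)
# The exchange ends back in the idle state, ready for the next call.
```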
Billing equipment may or may not be involved because calls within the local calling area, which includes several nearby exchanges, may be either flat rate or message rate. In flat-rate service, the subscriber is allowed an unlimited number of calls for a fixed fee each month. For message-rate subscribers, each call involves a charge that depends on the distance between the calling and called parties and the duration of the call. A long-distance call is a call out of the local calling area and is always billed as a message-rate call. A Switching Telephone switching equipment interprets the number dialed and then completes a path through the network to the called subscriber. For long-distance calls with complicated paths through the network, several levels of switching equipment may be needed. The automatic exchange to which the subscriber’s telephone is connected is the lowest level of switching equipment and is called by various names, including local exchange, local office, central-office switch, or, simply, switch. Higher levels of switching equipment include tandem and toll switches, and are not needed when both caller and called subscribers are within the same local exchange. Before automatic exchanges were invented, all calls were placed through manual exchanges in which a small light on a switchboard alerted an operator that a subscriber wanted service. The operator inserted an insulated electrical cable into a jack corresponding to the subscriber requesting service. This allowed the operator and the subscriber to converse. The caller told the operator the called party’s name, and the operator used another cord adjacent to the first to plug into the called party’s jack and then operated a key (another type of electrical switch) that connected ringing current to the called party’s telephone. The operator listened for the called party to answer, and then disconnected to ensure the privacy of the call. 
Today there are no telephones served by manual exchanges in the United States. All telephone subscribers are served by automatic exchanges, which perform the functions of the human operator. The number being dialed is stored and then passed to the exchange’s central computer, which in turn operates the switch to complete the call or routes it to a higher-level switch for further processing. Today’s automatic exchanges use a pair of computers, one running the program that provides service, and the second monitoring the operation of the first, ready to take over in a few seconds in the event of an equipment failure. Early telephone exchanges, each a grouping of 10,000 individual subscriber numbers, were originally given names corresponding to their town or location within a city, such as Murray Hill or Market. When the dialing area grew to cover more than one exchange, there was a need for the dial to transmit letters as well as numbers. This problem was solved by equating three letters to each digit on the dial except for the one and the zero. Each number from two to nine represented three letters, so there was room for only 24 letters. Q and Z were left off the dial because these letters rarely appear in place-names. In dialing, the first two letters of each exchange name were used ahead of the rest of the subscriber’s number, and all exchange names were standardized as two letters and a digit. Eventually the place-names were replaced with their equivalent digits, giving us our current U.S. and Canadian seven-digit telephone numbers. In other parts of the world, a number may consist of more or fewer than seven digits. The greatly expanded information-processing capability of modern computers permits Direct Distance Dialing, with which a subscriber can automatically place a call to a distant city without needing the services of a human operator to determine the appropriate routing path through the network. 
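The letter-to-digit scheme described above is simple to sketch. The digit groupings follow the standard North American dial, with Q and Z omitted; the helper function is a hypothetical illustration.

```python
# Letters carried by each dial digit (one and zero carry none; Q and Z
# were left off because they rarely appear in place-names).
LETTERS = {
    2: "ABC", 3: "DEF", 4: "GHI", 5: "JKL",
    6: "MNO", 7: "PRS", 8: "TUV", 9: "WXY",
}
LETTER_TO_DIGIT = {c: str(d) for d, s in LETTERS.items() for c in s}

def to_digits(number):
    """Translate an old-style number such as 'MU5-1234' (for the
    MUrray Hill exchange) into its all-digit equivalent."""
    return "".join(LETTER_TO_DIGIT.get(c, c)
                   for c in number.upper() if c.isalnum())
```

For example, the first two letters of "Murray Hill" map to 6 and 8, so MU5-1234 becomes the seven-digit number 6851234.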
Computers in the switching machines used for long-distance calls store the routing information in their electronic memory. A toll-switching machine may store several different possible routes for a call. As telephone traffic becomes heavier during the day, some routes may become unavailable. The toll switch will then select a less direct alternate route to permit the completion of the call. B Transmission Calling from New York City to Hong Kong involves using a path that transmits electrical energy halfway around the world. During the conversation, it is the task of the transmission system to deliver that energy so that the speech or data is transmitted clearly and free from noise. Since the telephone in New York City does not know whether it is connected to a telephone next door or to one in Hong Kong, the amount of energy put on the line is the same in either case. However, it requires much more energy to converse with Hong Kong than with next door because energy is lost in the transmission. The transmission path must provide amplification of the signal as well as transport. Analog transmission, in which speech or data is converted directly into a varying electrical current, is suitable for local calls. But once the call involves any significant distance, the necessary amplification of the analog signal can add so much noise that the received signal becomes unintelligible. For long-distance calls, the signal is digitized, or converted to a series of pulses that encodes the information. When an analog electrical signal is digitized, samples of the signal’s strength are taken at regular intervals, usually about 8,000 samples per second. Each sample is converted into a binary form, a number made up of a series of 1s and 0s. This number is easily and swiftly passed through the switching system. Digital transmission systems are much less subject to interfering noise than are analog systems. 
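The digitizing step described above can be sketched as follows. The rate of 8,000 samples per second matches the figure in the text; the 8-bit sample size is an assumption for illustration, and the function names are invented.

```python
import math

SAMPLE_RATE = 8000   # samples per second, as stated in the text
BITS = 8             # bits per sample (assumed for illustration)

def digitize(signal, duration):
    """Sample signal(t), with values in [-1, 1], at regular intervals and
    quantize each sample to an 8-bit binary code (a string of 1s and 0s)."""
    codes = []
    for n in range(int(duration * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        level = int((signal(t) + 1) / 2 * (2**BITS - 1))  # map to 0..255
        codes.append(format(level, "08b"))
    return codes

# One millisecond of a 1 kHz tone yields eight binary-coded samples.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = digitize(tone, 0.001)
```

Each sample is now a short binary number that the switching and transmission systems can pass along and regenerate without the cumulative noise of analog amplification.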
The digitized signal can then be passed through a digital-to-analog converter (DAC) at a point close to the receiving party, and converted to a form that the ear cannot distinguish from the original signal. There are several ways a digital or analog signal may be transmitted, including coaxial and fiber-optic cables and microwave and longwave radio signals sent along the ground or bounced off satellites in orbit around the earth. A coaxial wire, like the wire between a videocassette recorder, or VCR (see Video Recording), and a television set, is an efficient transmission system. A coaxial wire has a conducting tube surrounding another conductor. A coaxial cable contains several coaxial wires in a common outer covering. The important benefit of a coaxial cable over a cable composed of simple wires is that the coaxial cable is more efficient at carrying very high frequency currents. This is important because in providing transmission over long distances, many telephone conversations are combined using frequency-modulation (FM) techniques similar to the combining of many channels in the television system. The combined signal containing hundreds of individual telephone conversations is sent over one pair of wires in a coaxial cable, so the signal has to be very clear. Coaxial cable is expensive to install and maintain, especially when it is lying on the ocean floor. Two methods exist for controlling this expense. The first consists of increasing the capacity of the cable and so spreading the expense over more users. The installation of the first transatlantic submarine coaxial telephone cable in 1956 provided only about 30 channels, but the number of submarine cable channels across the ocean has grown to thousands with the addition of only a few more cables because of the greatly expanded capacity of each new coaxial cable. 
Another telephone-transmission method uses fiber-optic cable, which is made of bundles of optical fibers (see Fiber Optics), long strands of specially made glass encased in a protective coating. Optical fibers transmit energy in the form of light pulses. The technology is similar to that of the coaxial cable, except that the optical fibers can handle tens of thousands of conversations simultaneously. Another approach to long-distance transmission is the use of radio. Before coaxial cables were invented, very powerful longwave (low frequency) radio stations were used for intercontinental calls. Only a few calls could be in progress at one time, however, and such calls were very expensive. Microwave radio uses very high frequency radio waves and has the ability to handle a large number of simultaneous conversations over the same microwave link. Because cable does not have to be installed between microwave towers, this system is usually cheaper than coaxial cable. On land, the coaxial-cable systems are often supplemented with microwave-radio systems. The technology of microwave radio is carried one step further by the use of communications satellites. Most communications satellites are in geosynchronous orbit—that is, they orbit the earth once a day over the equator, so the satellite is always above the same place on the earth’s surface. That way, only a single satellite is needed for continuous service between two points on the surface, provided both points can be seen from the satellite. Even considering the expense of a satellite, this method is cheaper to install and maintain per channel than using coaxial cables on the ocean floor. Consequently, satellite links are used regularly in long-distance calling. Since radio waves, while very fast, take time to travel from one point to another, satellite communication does have one serious shortcoming: Because of the satellite’s distance from the earth, there is a noticeable lag in conversational responses. 
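The conversational lag mentioned above can be checked with simple arithmetic. The geosynchronous altitude of roughly 35,786 km above the equator is a standard figure; this sketch ignores ground-station geometry, which only lengthens the path.

```python
# Back-of-the-envelope estimate of satellite-link delay.
ALTITUDE_KM = 35_786        # approximate geosynchronous altitude
LIGHT_KM_PER_S = 299_792    # speed of radio waves (speed of light)

# One direction of speech travels ground -> satellite -> ground.
one_way_delay = 2 * ALTITUDE_KM / LIGHT_KM_PER_S   # about 0.24 second

# A question and its reply each make the trip, so the pause a speaker
# perceives before hearing a response is roughly half a second.
round_trip_delay = 2 * one_way_delay
```

A quarter-second each way is small in absolute terms but easily noticeable in conversation, which is why the text describes routing the return path over a ground link instead.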
As a result, many calls use a satellite for only one direction of transmission, such as from the caller to the receiver, and use a ground microwave or coaxial link for receiver-to-caller transmission. A combination of microwave, coaxial-cable, optical-fiber, and satellite paths now link the major cities of the world. The capacity of each type of system depends on its age and the territory covered, but capacities generally fall into the following ranges: Frequency modulation over a simple pair of wires like the earliest telephone lines yields tens of circuits (a circuit can transmit one telephone conversation) per pair; coaxial cable yields hundreds of circuits per pair of conductors, and thousands per cable; microwave and satellite transmissions yield thousands of circuits per link; and optical fiber has the potential for tens of thousands of circuits per fiber. IV TELEPHONE SERVICES In the United States and Canada, universal service was a stated goal of the telephone industry during the first half of the 20th century—every household was to have its own telephone. This goal has now been essentially reached, but before it became a reality, the only access many people had to the telephone was through pay (or public) telephones, usually placed in a neighborhood store. A pay telephone is a telephone that may have special hardware to count and safeguard coins or, more recently, to read the information off credit cards or calling cards. Additional equipment at the exchange responds to signals from the pay phone to indicate to the operator or automatic exchange how much money has been deposited or to which account the call will be charged. Today the pay phone still exists, but it usually serves as a convenience rather than as primary access to the telephone network. Computer-controlled exchange switches make it possible to offer a variety of extra services to both the residential and the business customer. 
Some services to which users may subscribe at extra cost are call waiting, in which a second incoming caller, instead of receiving a busy signal, hears normal ringing while the subscriber hears a beep superimposed on the conversation in progress; and three-way calling, in which a second outgoing call may be placed while one is already in progress so that three subscribers can then talk to each other. Some services available to users within exchanges with the most modern transmission systems are: caller ID, in which the calling party’s number is displayed to the receiver (with the calling party’s permission—subscribers can elect to make their telephone number hidden from caller-ID services) on special equipment before the call is answered; and repeat dialing, in which a called number, if busy, will be automatically redialed for a certain amount of time. For residential service, voice mail can either be purchased from the telephone company or can be obtained by purchasing an answering machine. An answering machine usually contains a regular telephone set along with the ability to detect incoming calls and to record and play back messages, with either an audiotape or a digital system. After a preset number of rings, the answering machine plays a prerecorded message inviting the caller to leave a message to be recorded. Toll-free 800 numbers are a very popular service. Calls made to a telephone number that has an 800 area code are billed to the called party rather than to the caller. This is very useful to any business that uses mail-order sales, because it encourages potential customers to call to place orders. A less expensive form of 800-number service is now available for residential subscribers. In calling telephone numbers with area codes of 900, the caller is billed an extra charge, often on a per-minute basis. 
The use of these numbers has ranged from collecting contributions for charitable organizations to providing information for which the caller must pay. While the United States and Canada are the most advanced countries in the world in telephone-service technologies, most other industrialized nations are not far behind. An organization based in Geneva, Switzerland, called the International Telecommunication Union (ITU), works to standardize telephone service throughout the world. Without its coordinating activities, International Direct Distance Dialing (a service that provides the ability to place international calls without the assistance of an operator) would have been extremely difficult to implement. Among its other services, the ITU creates an environment in which a special service introduced in one country can be quickly duplicated elsewhere. V THE HISTORY OF THE TELEPHONE The history of the invention of the telephone is a stormy one. A number of inventors contributed to carrying a voice signal over wires. In 1854 the French inventor Charles Bourseul suggested that vibrations caused by speaking into a flexible disc or diaphragm might be used to connect and disconnect an electric circuit, thereby producing similar vibrations in a diaphragm at another location, where the original sound would be reproduced. A few years later, the German physicist Johann Philipp Reis invented an instrument that transmitted musical tones, but it could not reproduce speech. An acoustic communication device that could transmit speech was developed around 1860 by an Italian American inventor, Antonio Meucci. The first to achieve commercial success and inaugurate widespread use of the telephone, however, was a Scottish-born American inventor, Alexander Graham Bell, a speech teacher in Boston, Massachusetts. Bell had built an experimental telegraph, which began to function strangely one day because a part had come loose. 
The accident gave Bell insight into how voices could be reproduced at a distance, and he constructed a transmitter and a receiver, for which he received a patent on March 7, 1876. On March 10, 1876, as he and his assistant, Thomas A. Watson, were preparing to test the mechanism, Bell spilled some acid on himself. In another room, Watson, next to the receiver, heard clearly the first telephone message: “Mr. Watson, come here; I want you.” A few hours after Bell had filed his patent application, another American inventor, Elisha Gray, filed a document called a caveat with the U.S. Patent Office, announcing that he was well on his way to inventing a telephone. Other inventors, including Meucci and Amos E. Dolbear, also made claim to having invented the telephone. Lawsuits were filed by various individuals, and Bell’s claim to being the inventor of the first telephone had to be defended in court some 600 times. Gray’s case was decided in Bell’s favor. Meucci’s case was never resolved because Meucci died before it reached the Supreme Court of the United States. A Advances in Technology After the invention of the telephone instrument itself, the second greatest technological advance in the industry may have been the invention of automatic switching. The first automatic exchanges were called Strowger switches, after Almon Brown Strowger, an undertaker in Kansas City, Missouri, who invented the system because he thought his town’s human operators were steering prospective business to his competitors. Strowger received a patent for the switches in 1891. Long-distance telephony was established in small steps. The first step was the introduction of the long-distance telephone, originally a special highly efficient instrument permanently installed in a telephone company building and used for calling between cities. 
The invention at the end of the 19th century of the loading coil (a coil of copper wire wound on an iron core and connected to the cable every mile or so) increased the speaking range to approximately 1,000 miles. Until the 1910s long-distance service used repeaters, electromechanical devices spaced along the route of the call that amplified and repeated conversations into another long-distance instrument. The obvious shortcomings of this arrangement were overcome with the invention of the triode vacuum tube, which amplified electrical signals. In 1915 vacuum-tube repeaters were used to initiate service from New York City to San Francisco, California. The vacuum tube also made possible the development of longwave radio circuits that could span oceans. Sound quality on early radio circuits was poor, and transmission subject to unpredictable interruption. In the 1950s the technology of the coaxial-cable system was combined with high-reliability vacuum-tube circuits in an undersea cable linking North America and Europe, greatly improving transmission quality. Unlike the first transatlantic telegraph cable placed in service in 1858, which failed after two months, the first telephone cable (laid in 1956) served many years before becoming obsolete. The application of digital techniques to transmission, along with undersea cable and satellites, finally made it possible to link points halfway around the earth with a circuit that had speech quality almost as good as that between next-door neighbors. Improved automatic-switching systems followed the gradual improvement in transmission technology. Until Direct Distance Dialing became available, all long-distance calls still required the assistance of an operator to complete. By adding a three-digit area code in front of the subscriber’s old number and developing more sophisticated common-control switching machines, it became possible for subscribers to complete their own long-distance calls. 
Today customer-controlled international dialing is available between many countries. B Evolution of the Telephone Industry In the late 1800s, the Bell Telephone Company (established in 1877 by Alexander Graham Bell and financial backers Gardiner Greene Hubbard, a lawyer, and Thomas Sanders, a leather merchant) strongly defended its patents in order to exclude others from the telephone business. After these patents expired in 1893 and 1894, independent telephone companies were started in many cities and most small towns. A period of consolidation followed in the early 1900s, and eventually about 80 percent of the customers in the United States and many of those in Canada were served by the American Telephone and Telegraph Company (AT&T), which had bought the Bell Telephone Company in 1900. AT&T sold off its Canadian interests in 1908. From 1885 to 1887 and from 1907 to 1919 AT&T was headed by Theodore Vail, whose vision shaped the industry for most of the 20th century. At that time, AT&T included 22 regional operating companies, each providing telephone service to an area comprising a large city, state, or group of states. In addition to owning virtually all of the long-distance circuits in use in the United States, AT&T owned the Western Electric Company, which manufactured most of the equipment. Such a corporate combination is called a vertically integrated monopoly because it dominates all facets of a business. Both the long-distance part of AT&T and the operating companies were considered to be “natural monopolies,” and by law were decreed to be the sole provider of telephone service within a designated area. More than 5,000 independent companies remained, but each independent was also a monopoly with an exclusive service region. 
This arrangement reduced the costs associated with more than one company stringing wires in an area, and eliminated the early problems that had arisen when customers of one company serving a region wished to call customers of another company serving the same area. In exchange for the absence of competition, the companies were regulated by various levels of government, which told them what services they must provide and what prices they could charge. During this time, telephone sets were never sold to the customer—they were leased as part of an overall service package that included the telephone, the connecting lines to the exchange, and the capability of calling other customers. In this way, the telephone company was responsible for any problems, whether they arose from equipment failures, damage to exposed wires, or even the conduct of operators on their job. If a telephone set broke, it was fixed or replaced at no charge. Since stringing wires between exchanges and users was a major part of the cost of providing telephone service, especially in rural environments, early residential subscribers often shared the same line. These were called party lines—as opposed to private, or single-party, lines. When one subscriber on a party line was making a telephone call, the other parties on the line could not use the line. Unfortunately, they could listen to the conversation, thereby compromising its privacy. Such arrangements also meant that, unless special equipment was used, all the telephones on the line would ring whenever there was a call for any of the parties. Each party had a distinct combination of short and long rings to indicate whether the call was for that house or another party. Business telephones were usually private lines. A business could not afford to have its service blocked by another user. This meant that business service was more expensive than residential service. 
Businesses continued to be charged more for their private lines than were subscribers with private lines in homes. This subsidization of telephones in homes permeated the government-regulated rate structure of the telephone industry until about 1980. Long-distance service was priced artificially high, and the consequent extra revenues to the telephone company were used to keep the price of residential service artificially low. While most consumers were happy with the control of all equipment by the telephone companies, some were not. Also, because of strong vertical integration within AT&T, the purchase of equipment from independent manufacturers was tightly controlled. AT&T initially refused to allow the independently manufactured Carterfone, a device that linked two-way-radio equipment to a telephone, to be connected to its network. After protracted lawsuits, AT&T agreed in 1968 to allow the connection of independently manufactured telephones to its network, provided they met legal standards set by the Federal Communications Commission (FCC). While the AT&T agreement did not directly involve the other telephone companies in the country, over time the entire industry followed AT&T’s lead. In 1974 MCI Communications Corporation challenged AT&T about its right to maintain a monopoly over long-distance service. Antitrust proceedings were brought, and eventually settled in 1982 in a consent decree that brought about the breakup of AT&T. In a consent decree, the federal government agrees to stop proceedings against a company in return for restrictions on or changes in the company. The antitrust proceedings were dropped when AT&T agreed to sell off its local operating companies, retaining the long-distance network and manufacturing companies. The former AT&T operating companies were regrouped into seven Regional Holding Companies (RHCs), which were initially restricted from engaging in any business other than telephone service within their assigned service area. 
The RHCs promptly began sidestepping these restrictions by setting up subsidiaries to operate in the unregulated environment and seeking legislation to further remove restrictions. At the same time, alternate long-distance carriers, such as MCI and Sprint, sought legislation to keep AT&T under as much regulation as possible while freeing themselves from any regulation. C The Telephone Industry Today In 1996 the U.S. government enacted the Telecommunications Reform Act, which removed government rules preventing local and long-distance phone companies, cable television operators, broadcasters, and wireless services from directly competing with one another. The act spurred consolidation in the industry, as regional companies joined forces to create telecommunications giants that provided telephone, wireless, cable, and Internet services. In other countries, until the 1990s, most of the telephone companies were owned by each nation’s central government and operated as part of the post office, an arrangement that inevitably led to tight control. Many countries are now privatizing telephone service. In order to escape government regulation at home, U.S. companies are investing heavily in the phone systems of other countries. For example, in 1995 AT&T announced it would attempt to gain a share of the market for telephone services in India. In a reverse trend, European companies are investing in U.S. long-distance carriers. Other major markets for telephone companies are opening up around the globe as the developing world becomes more technologically advanced. Nonindustrial countries are now trying to leapfrog their development by encouraging private companies to install only the latest technology. In remote places in India and Africa, the use of solar cells is now making it possible to introduce telephones in areas still without electricity. 
VI RECENT DEVELOPMENTS The introduction of radio into the telephone set has been the most important recent development in telephone technology, permitting first the cordless phone and now the cellular phone. In addition to regular telephone service, modern cellular phones also provide wireless Internet connections, enabling users to send and receive electronic mail and search the World Wide Web. Answering machines and phones with dials that remember several stored numbers (repertory dials) have been available for decades, but because of their expense and unreliability were never as popular as they are today. Multifunctional telephones that use microprocessors and integrated circuits have overcome both these barriers to make repertory dials a standard feature in most phones sold today. Many multifunctional telephones also include automatic answering and message-recording capability. Videophones are devices that use a miniature video camera to send images as well as voice communication. Videophones can be connected to regular telephone lines or their messages can be sent via wireless technology. Since the transmission of a picture requires much more bandwidth (a measure of the amount of data a system can transmit per period of time) than the transmission of voice, the high cost of transmission facilities has limited the use of videophone service. This problem is being overcome by technologies that compress the video information, and by the steadily declining cost of transmission and video-terminal equipment. Video service is now used to hold business “teleconferences” between groups in distant cities using high-capacity transmission paths with wide bandwidth. Videophones suitable for conversations between individuals over the normal network are commercially available, but because they provide a picture inferior to that of a television set, have not proven very popular. 
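The bandwidth gap between voice and video described above can be made concrete with rough arithmetic. The figures below (a 64 kbps digital voice channel, a 640 × 480 color picture at 30 frames per second) are illustrative assumptions for this sketch, not numbers taken from the source:

```python
# Rough comparison of voice vs. uncompressed video bit rates.
# All figures are illustrative assumptions, not taken from the article.

VOICE_BPS = 64_000  # conventional digital telephony channel (64 kbps)

def video_bps(width, height, bits_per_pixel, frames_per_second):
    """Bit rate of uncompressed video, in bits per second."""
    return width * height * bits_per_pixel * frames_per_second

# A modest 640 x 480 color (24-bit) picture at 30 frames per second:
uncompressed = video_bps(640, 480, 24, 30)
print(uncompressed)               # about 221 million bits per second
print(uncompressed // VOICE_BPS)  # thousands of voice channels' worth
```

Even allowing for the video compression the passage mentions, a picture remains orders of magnitude more demanding than voice, which is why transmission cost long limited videophone service.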
Television news organizations adopted the use of videophones to cover breaking news stories in remote areas. Their use escalated in 2001 during the U.S. war against terrorists and the Taliban regime in Afghanistan. Telecommunications companies are rapidly expanding their use of digital technology, such as Digital Subscriber Line (DSL) or Integrated Services Digital Network (ISDN), to allow users to get more information faster over the telephone. Telecommunications companies are also investing heavily in fiber optic cable to meet the ever-increasing demand for bandwidth. As bandwidth continues to improve, an instrument that functions as a telephone, computer, and television becomes more commercially viable. Such a device is now available, but its cost will likely limit its widespread use in the early part of the 21st century. Contributed By: Richard M. Rickert Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Question:6 Latitude and Longitude Latitude and Longitude, system of geometrical coordinates used in designating the location of places on the surface of the earth. (For the use of these terms in astronomy, see Coordinate System; Ecliptic.) Latitude, which gives the location of a place north or south of the equator, is expressed by angular measurements ranging from 0° at the equator to 90° at the poles. Longitude, the location of a place east or west of a north-south line called the prime meridian, is measured in angles ranging from 0° at the prime meridian to 180° at the International Date Line. Midway between the poles, the equator, a great circle, divides the earth into northern and southern hemispheres. Parallel to the equator and north and south of it are a succession of imaginary circles that become smaller and smaller the closer they are to the poles. 
This series of east-west-running circles, known as the parallels of latitude, is crossed at right angles by a series of half-circles extending north and south from one pole to the other, called the meridians of longitude. Although the equator was an obvious choice as the prime parallel, being the largest, no one meridian was uniquely qualified as prime. Until a single prime meridian could be agreed upon, each nation was free to choose its own, with the result that many 19th-century maps of the world lacked a standardized grid. The problem was resolved in 1884, when an international prime meridian, passing through London's Greenwich Observatory, was officially designated. A metallic marker there indicates its exact location. Degrees of latitude are equally spaced, but the slight flattening at the poles causes the length of a degree of latitude to vary from 110.57 km (68.70 mi) at the equator to 111.70 km (69.41 mi) at the poles. At the equator, meridians of longitude 1 degree apart are separated by a distance of 111.32 km (69.17 mi); at the poles, meridians converge. Each degree of latitude and longitude is divided into 60 minutes, and each minute is divided into 60 seconds, thereby allowing the assignment of a precise numerical location to any place on earth. Contributed By: Geoffrey J. Martin Question:7 (a) Cardiac Muscles and Skeletal Muscles Skeletal Muscle Skeletal muscle enables the voluntary movement of bones. Skeletal muscle consists of densely packed groups of elongated cells known as muscle fibers. This type of muscle is composed of long fibers surrounded by a membranous sheath, the sarcolemma. The fibers are elongated, sausage-shaped cells containing many nuclei and clearly display longitudinal and cross striations. Skeletal muscle is supplied with nerves from the central nervous system, and because it is partly under conscious control, it is also called voluntary muscle. 
Most skeletal muscle is attached to portions of the skeleton by connective-tissue attachments called tendons. Contractions of skeletal muscle serve to move the various bones and cartilages of the skeleton. Skeletal muscle forms most of the underlying flesh of vertebrates. Cardiac Muscle Cardiac muscle, found only in the heart, drives blood through the circulatory system. Cardiac muscle cells connect to each other by specialized junctions called intercalated disks. Without a constant supply of oxygen, cardiac muscle will die, and heart attacks occur from the damage caused by insufficient blood supply to cardiac muscle. This muscle tissue composes most of the vertebrate heart. The cells, which show both longitudinal and imperfect cross striations, differ from skeletal muscle primarily in having centrally placed nuclei and in the branching and interconnecting of fibers. Cardiac muscle is not under voluntary control. It is supplied with nerves from the autonomic nervous system, but autonomic impulses merely speed or slow its action and are not responsible for the continuous rhythmic contraction characteristic of living cardiac muscle. The mechanism of cardiac contraction is not yet fully understood. (b) Haze and Smog haze [hayz] noun (plural haz•es) 1. particles in atmosphere: mist, cloud, or smoke suspended in the atmosphere and obscuring or obstructing the view 2. vague obscuring factor: something that is vague and serves to obscure something 3. 
disoriented mental or physical state: a mental or physical state or condition when feelings and perceptions are vague, disorienting, or obscured intransitive verb (past and past participle hazed, present participle haz•ing, 3rd person present singular haz•es) become filled with particles: to become saturated with suspended particles • As the temperatures rose, the sky began to haze over. [Early 18th century. Probably back-formation < hazy ] Smog Smog, mixture of solid and liquid fog and smoke particles formed when humidity is high and the air so calm that smoke and fumes accumulate near their source. Smog reduces natural visibility and often irritates the eyes and respiratory tract. In dense urban areas, the death rate usually goes up considerably during prolonged periods of smog, particularly when a process of heat inversion creates a smog-trapping ceiling over a city. Smog occurs most often in and near coastal cities and is an especially severe problem in Los Angeles and Tokyo. Smog prevention requires control of smoke from furnaces; reduction of fumes from metalworking and other industrial plants; and control of noxious emissions from automobiles, trucks, and incinerators. In the U.S., internal-combustion engines are regarded as the largest contributors to the smog problem, emitting large amounts of contaminants, including unburned hydrocarbons and oxides of nitrogen. The number of undesirable components in smog, however, is considerable, and the proportions highly variable. They include ozone, sulfur dioxide, hydrogen cyanide, and hydrocarbons and their products formed by partial oxidation. Fuel obtained from fractionation of coal and petroleum produces sulfur dioxide, which is oxidized by atmospheric oxygen, forming sulfur trioxide (SO3). Sulfur trioxide is in turn hydrated by the water vapor in the atmosphere to form sulfuric acid (H2SO4). 
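The two-step sulfur chemistry described above can be written out explicitly (standard textbook stoichiometry, added here for clarity):

```latex
2\,\mathrm{SO_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{SO_3}
  \qquad \text{(atmospheric oxidation)}

\mathrm{SO_3} + \mathrm{H_2O} \longrightarrow \mathrm{H_2SO_4}
  \qquad \text{(hydration by water vapor)}
```

These are the oxidation and hydration steps the paragraph describes in prose.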
The so-called photochemical smog, which irritates sensitive membranes and damages plants, is formed when nitrogen oxides in the atmosphere undergo reactions with the hydrocarbons energized by ultraviolet and other radiations from the sun. See Air Pollution. (c) Enzyme and Hormone Enzyme I INTRODUCTION Enzyme, any one of many specialized organic substances, composed of polymers of amino acids, that act as catalysts to regulate the speed of the many chemical reactions involved in the metabolism of living organisms, such as digestion. The name enzyme was suggested in 1867 by the German physiologist Wilhelm Kühne (1837-1900); it is derived from the Greek phrase en zymē, meaning “in leaven.” Those enzymes identified now number more than 700. Enzymes are classified into several broad categories, such as hydrolytic, oxidizing, and reducing, depending on the type of reaction they control. Hydrolytic enzymes accelerate reactions in which a substance is broken down into simpler compounds through reaction with water molecules. Oxidizing enzymes, known as oxidases, accelerate oxidation reactions; reducing enzymes speed up reduction reactions, in which oxygen is removed. Many other enzymes catalyze other types of reactions. Individual enzymes are named by adding ase to the name of the substrate with which they react. The enzyme that controls urea decomposition is called urease; those that control protein hydrolyses are known as proteinases. Some enzymes, such as the proteinases trypsin and pepsin, retain the names used before this nomenclature was adopted. II PROPERTIES OF ENZYMES As the Swedish chemist Jöns Jakob Berzelius suggested in 1823, enzymes are typical catalysts: they are capable of increasing the rate of reaction without being consumed in the process. See Catalysis. 
Some enzymes, such as pepsin and trypsin, which bring about the digestion of meat, control many different reactions, whereas others, such as urease, are extremely specific and may accelerate only one reaction. Still others release energy to make the heart beat and the lungs expand and contract. Many facilitate the conversion of sugar and foods into the various substances the body requires for tissue-building, the replacement of blood cells, and the release of chemical energy to move muscles. Pepsin, trypsin, and some other enzymes possess, in addition, the peculiar property known as autocatalysis, which permits them to cause their own formation from an inert precursor called zymogen. As a consequence, these enzymes may be reproduced in a test tube. As a class, enzymes are extraordinarily efficient. Minute quantities of an enzyme can accomplish at low temperatures what would require violent reagents and high temperatures by ordinary chemical means. About 30 g (about 1 oz) of pure crystalline pepsin, for example, would be capable of digesting nearly 2 metric tons of egg white in a few hours. The kinetics of enzyme reactions differ somewhat from those of simple inorganic reactions. Each enzyme is selectively specific for the substance in which it causes a reaction and is most effective at a temperature peculiar to it. Although an increase in temperature may accelerate a reaction, enzymes are unstable when heated. The catalytic activity of an enzyme is determined primarily by the enzyme's amino-acid sequence and by the tertiary structure—that is, the three-dimensional folded structure—of the macromolecule. Many enzymes require the presence of another ion or a molecule, called a cofactor, in order to function. As a rule, enzymes do not attack living cells. As soon as a cell dies, however, it is rapidly digested by enzymes that break down protein. 
The resistance of the living cell is due to the enzyme's inability to pass through the membrane of the cell as long as the cell lives. When the cell dies, its membrane becomes permeable, and the enzyme can then enter the cell and destroy the protein within it. Some cells also contain enzyme inhibitors, known as antienzymes, which prevent the action of an enzyme upon a substrate. III PRACTICAL USES OF ENZYMES Alcoholic fermentation and other important industrial processes depend on the action of enzymes that are synthesized by the yeasts and bacteria used in the production process. A number of enzymes are used for medical purposes. Some have been useful in treating areas of local inflammation; trypsin is employed in removing foreign matter and dead tissue from wounds and burns. IV HISTORICAL REVIEW Alcoholic fermentation is undoubtedly the oldest known enzyme reaction. This and similar phenomena were believed to be spontaneous reactions until 1857, when the French chemist Louis Pasteur proved that fermentation occurs only in the presence of living cells (see Spontaneous Generation). Subsequently, however, the German chemist Eduard Buchner discovered (1897) that a cell-free extract of yeast can cause alcoholic fermentation. The ancient puzzle was then solved; the yeast cell produces the enzyme, and the enzyme brings about the fermentation. As early as 1783 the Italian biologist Lazzaro Spallanzani had observed that meat could be digested by gastric juices extracted from hawks. This experiment was probably the first in which a vital reaction was performed outside the living organism. After Buchner's discovery scientists assumed that fermentations and vital reactions in general were caused by enzymes. Nevertheless, all attempts to isolate and identify their chemical nature were unsuccessful. In 1926, however, the American biochemist James B. Sumner succeeded in isolating and crystallizing urease. 
Four years later pepsin and trypsin were isolated and crystallized by the American biochemist John H. Northrop. Enzymes were found to be proteins, and Northrop proved that the protein was actually the enzyme and not simply a carrier for another compound. Research in enzyme chemistry in recent years has shed new light on some of the most basic functions of life. Ribonuclease, a simple three-dimensional enzyme discovered in 1938 by the American bacteriologist René Dubos and isolated in 1946 by the American chemist Moses Kunitz, was synthesized by American researchers in 1969. The synthesis involves hooking together 124 molecules in a very specific sequence to form the macromolecule. Such syntheses made it possible to identify those areas of the molecule that carry out its chemical functions, and opened up the possibility of creating specialized enzymes with properties not possessed by the natural substances. This potential has been greatly expanded in recent years by genetic engineering techniques that have made it possible to produce some enzymes in great quantity (see Biochemistry). The medical uses of enzymes are illustrated by research into L-asparaginase, which is thought to be a potent weapon for treatment of leukemia; into dextrinases, which may prevent tooth decay; and into the malfunctions of enzymes that may be linked to such diseases as phenylketonuria, diabetes, and anemia and other blood disorders. Contributed By: John H. Northrop Hormone I INTRODUCTION Hormone, chemical that transfers information and instructions between cells in animals and plants. Often described as the body’s chemical messengers, hormones regulate growth and development, control the function of various tissues, support reproductive functions, and regulate metabolism (the process used to break down food to create energy). 
Unlike information sent by the nervous system, which is transmitted via electrical impulses that travel quickly and have an almost immediate and short-term effect, hormones act more slowly, and their effects typically are maintained over a longer period of time. Hormones were first identified in 1902 by British physiologists William Bayliss and Ernest Starling. These researchers showed that a substance taken from the lining of the intestine could be injected into a dog to stimulate the pancreas to secrete fluid. They called the substance secretin and coined the term hormone from the Greek word hormon, which means to set in motion. Today more than 100 hormones have been identified. Hormones are made by specialized glands or tissues that manufacture and secrete these chemicals as the body needs them. The majority of hormones are produced by the glands of the endocrine system, such as the pituitary, thyroid, adrenal glands, and the ovaries or testes. These endocrine glands produce and secrete hormones directly into the bloodstream. However, not all hormones are produced by endocrine glands. The mucous membranes of the small intestine secrete hormones that stimulate secretion of digestive juices from the pancreas. Other hormones are produced in the placenta, an organ formed during pregnancy, to regulate some aspects of fetal development. Hormones are classified into two basic types based on their chemical makeup. The majority of hormones are peptides, or amino acid derivatives that include the hormones produced by the anterior pituitary, thyroid, parathyroid, placenta, and pancreas. Peptide hormones are typically produced as larger proteins. When they are called into action, these peptides are broken down into biologically active hormones and secreted into the blood to be circulated throughout the body. The second type of hormones are steroid hormones, which include those hormones secreted by the adrenal glands and ovaries or testes. 
Steroid hormones are synthesized from cholesterol (a fatty substance produced by the body) and modified by a series of chemical reactions to form a hormone ready for immediate action. II HOW HORMONES WORK Most hormones are released directly into the bloodstream, where they circulate throughout the body in very low concentrations. Some hormones travel intact in the bloodstream. Others require a carrier substance, such as a protein molecule, to keep them dissolved in the blood. These carriers also serve as a hormone reservoir, keeping hormone concentrations constant and protecting the bound hormone from chemical breakdown over time. Hormones travel in the bloodstream until they reach their target tissue, where they activate a series of chemical changes. To achieve its intended result, a hormone must be recognized by a specialized protein in the cells of the target tissue called a receptor. Typically, hormones that are water-soluble use a receptor located on the cell membrane surface of the target tissues. A series of special molecules within the cell, known as second messengers, transport the hormone’s information into the cell. Fat-soluble hormones, such as steroid hormones, pass through the cell membrane and bind to receptors found in the cytoplasm. When a receptor and a hormone bind together, both the receptor and hormone molecules undergo structural changes that activate mechanisms within the cell. These mechanisms produce the special effects induced by the hormone. Receptors on the cell membrane surface are in constant turnover. New receptors are produced by the cell and inserted into the cell membrane, and receptors that have reacted with hormones are broken down or recycled. The cell can respond, if necessary, to irregular hormone concentrations in the blood by decreasing or increasing the number of receptors on its surface. 
If the concentration of a hormone in the blood increases, the number of receptors in the cell membrane may go down to maintain the same level of hormonal interaction in the cell. This is known as downregulation. If concentrations of hormones in the blood decrease, upregulation increases the number of receptors in the cell membrane. Some hormones are delivered directly to the target tissues instead of circulating throughout the entire bloodstream. For example, hormones from the hypothalamus, a portion of the brain that controls the endocrine system, are delivered directly to the adjacent pituitary gland, where their concentrations are several hundred times higher than in the circulatory system. III HORMONAL EFFECTS Hormonal effects are complex, but their functions can be divided into three broad categories. Some hormones change the permeability of the cell membrane. Other hormones can alter enzyme activity, and some hormones stimulate the release of other hormones. Recent studies have shown that the more lasting effects of hormones ultimately result in the activation of specific genes. For example, when a steroid hormone enters a cell, it binds to a receptor in the cell’s cytoplasm. The receptor becomes activated and enters the cell’s nucleus, where it binds to specific sites in the deoxyribonucleic acid (DNA), the long molecules that contain individual genes. This activates some genes and inactivates others, altering the cell’s activity. Hormones have also been shown to regulate ribonucleic acids (RNA) in protein synthesis. A single hormone may affect one tissue in a different way than it affects another tissue, because tissue cells are programmed to respond differently to the same hormone. A single hormone may also have different effects on the same tissue at different times in life. To add to this complexity, some hormone-induced effects require the action of more than one hormone. 
This complex control system provides safety controls so that if one hormone is deficient, others will compensate. IV TYPES OF HORMONES Hormones exist in mammals, including humans, as well as in invertebrates and plants. The hormones of humans, mammals, and other vertebrates are nearly identical in chemical structure and function in the body. They are generally characterized by their effect on specific tissues. A Human Hormones Human hormones significantly affect the activity of every cell in the body. They influence mental acuity, physical agility, and body build and stature. Growth hormone is a hormone produced by the pituitary gland. It regulates growth by stimulating the formation of bone and the uptake of amino acids, molecules vital to building muscle and other tissue. Sex hormones regulate the development of sexual organs, sexual behavior, reproduction, and pregnancy. For example, gonadotropins, also secreted by the pituitary gland, are sex hormones that stimulate egg and sperm production. The gonadotropin that stimulates production of sperm in men and formation of ovary follicles in women is called a follicle-stimulating hormone. When a follicle-stimulating hormone binds to an ovary cell, it stimulates the enzymes needed for the synthesis of estradiol, a female sex hormone. Another gonadotropin called luteinizing hormone regulates the production of eggs in women and the production of the male sex hormone testosterone. Produced in the male gonads, or testes, testosterone regulates changes to the male body during puberty, influences sexual behavior, and plays a role in growth. The female sex hormones, called estrogens, regulate female sexual development and behavior as well as some aspects of pregnancy. Progesterone, a female hormone secreted in the ovaries, regulates menstruation and stimulates lactation in humans and other mammals. Other hormones regulate metabolism. 
For example, thyroxine, a hormone secreted by the thyroid gland, regulates rates of body metabolism. Glucagon and insulin, secreted in the pancreas, control levels of glucose in the blood and the availability of energy for the muscles. A number of hormones, including insulin, glucagon, cortisol, growth hormone, epinephrine, and norepinephrine, maintain glucose levels in the blood. While insulin lowers the blood glucose, all the other hormones raise it. In addition, several other hormones participate indirectly in the regulation. A protein called somatostatin blocks the release of insulin, glucagon, and growth hormone, while another hormone, gastric inhibitory polypeptide, enhances insulin release in response to glucose absorption. This complex system permits blood glucose concentration to remain within a very narrow range, despite external conditions that may vary to extremes. Hormones also regulate blood pressure and other involuntary body functions. Epinephrine, also called adrenaline, is a hormone secreted in the adrenal gland. During periods of stress, epinephrine prepares the body for physical exertion by increasing the heart rate, raising the blood pressure, and releasing sugar stored in the liver for quick energy. Hormones are sometimes used to treat medical problems, particularly diseases of the endocrine system. In people with diabetes mellitus type 1, for example, the pancreas secretes little or no insulin. Regular injections of insulin help maintain normal blood glucose levels. Sometimes, an illness or injury not directly related to the endocrine system can be helped by a dose of a particular hormone. Steroid hormones are often used as anti-inflammatory agents to treat the symptoms of various diseases, including cancer, asthma, and rheumatoid arthritis. Oral contraceptives, or birth control pills, use small, regular doses of female sex hormones to prevent pregnancy. 
Initially, hormones used in medicine were collected from extracts of glands taken from humans or animals. For example, pituitary growth hormone was collected from the pituitary glands of dead human bodies, or cadavers, and insulin was extracted from cattle and hogs. As technology advanced, insulin molecules collected from animals were altered to produce the human form of insulin. With improvements in biochemical technology, many hormones are now made in laboratories from basic chemical compounds. This eliminates the risk of transferring contaminating agents sometimes found in the human and animal sources. Advances in genetic engineering even enable scientists to introduce a gene of a specific protein hormone into a living cell, such as a bacterium, which causes the cell to secrete excess amounts of a desired hormone. This technique, known as recombinant DNA technology, has vastly improved the availability of hormones. Recombinant DNA has been especially useful in producing growth hormone, once only available in limited supply from the pituitary glands of human cadavers. Treatments using the hormone were far from ideal because the cadaver hormone was often in short supply. Moreover, some of the pituitary glands used to make growth hormone were contaminated with particles called prions, which could cause diseases such as Creutzfeldt-Jakob disease, a fatal brain disorder. The advent of recombinant technology made growth hormone widely available for safe and effective therapy. B Invertebrate Hormones In invertebrates, hormones regulate metamorphosis (the process in which many insects, crustaceans, and mollusks transform from egg, to larva, to pupa, and finally to mature adult). A hormone called ecdysone triggers the insect molting process, in which these animals periodically shed their outer coverings, or exoskeletons, and grow new ones. The molting process is delayed by juvenile hormone, which inhibits secretion of ecdysone. 
As an insect larva grows, secretion of juvenile hormone declines steadily until its concentrations are too low to prevent the secretion of ecdysone. When this happens, ecdysone concentrations increase until they are high enough to trigger the metamorphic molt. In insects that migrate long distances, such as the locust, a hormone called octopamine increases the efficiency of glucose utilization by the muscles, while adipokinetic hormone increases the burning of fat as an energy source. In these insects, octopamine levels build up in the first five minutes of flight and then level off as adipokinetic hormone takes over, triggering the metabolism of fat reserves during long distance flights. Hormones also trigger color changes in invertebrates. Squids, octopuses, and other mollusks, for example, have hormonally controlled pigment cells that enable the animals to change color to blend in with their surroundings. C Plant Hormones Hormones in plants are called phytohormones. They regulate most of the life cycle events in plants, such as germination, cell division and extension, flowering, fruit ripening, seed and bud dormancy, and death (see Plant: Growth and Differentiation). Plant biologists believe that hormones exert their effects via specific receptor sites in target cells, similar to the mechanism found in animals. Five plant hormones have long been identified: auxin, cytokinin, gibberellin, abscisic acid, and ethylene. Recent discoveries of other plant hormones include brassinosteroids, salicylates, and jasmonates. Auxins are primarily responsible for protein synthesis and promote elongation of the plant. The most common auxin, indoleacetic acid (IAA), is usually formed near the growing top shoots and flows downward, causing newly formed leaves to grow longer. Auxins stimulate growth toward light and root growth. 
Gibberellins, which form in the seeds, young leaves, and roots, are also responsible for protein synthesis, especially in the main stem of the plant. Unlike auxins, gibberellins move upward from the roots. Cytokinins form in the roots and move up to the leaves and fruit to maintain growth, cell differentiation, and cell division. Among the growth inhibitors is abscisic acid, which promotes abscission, or leaf fall; dormancy in buds; and the formation of bulbs or tubers, possibly by preventing the synthesis of protein. Ethylene, another inhibitor, also causes abscission, perhaps by its destructive effect on auxins, and it also stimulates the ripening of fruit. Brassinosteroids act with auxins to encourage leaf elongation and inhibit root growth. Brassinosteroids also protect plants from some insects because they work against some of the hormones that regulate insect molting. Salicylates stimulate flowering and cause disease resistance in some plants. Jasmonates regulate growth, germination, and flower bud formation. They also stimulate the formation of proteins that protect the plant against environmental stresses, such as temperature changes or droughts. V COMMERCIAL USE OF HORMONES Hormones are used for a variety of commercial purposes. In the livestock industry, for example, growth hormones increase the amount of lean (non-fatty) meat in both cattle and hogs to produce bigger, less fatty animals. The cattle hormone bovine somatotropin increases milk production in dairy cows. Hormones are also used in animal husbandry to increase the success rates of artificial insemination and speed maturation of eggs. In plants, auxins are used as herbicides, to induce fruit development without pollination, and to induce root formation in cuttings. Cytokinins are used to maintain the greenness of plant parts, such as cut flowers. 
Gibberellins are used to increase fruit size, increase cluster size in grapes, delay ripening of citrus fruits, speed up flowering of strawberries, and stimulate starch breakdown in barley used in beer making. In addition, ethylene is used to control fruit ripening, which allows hard fruit to be transported without much bruising. The fruit is allowed to ripen after it is delivered to market. Genetic engineering has also produced fruits unable to form ethylene naturally. These fruits will ripen only if exposed to ethylene, allowing for extended shipping and storage of produce. Contributed By: Gad B. Kletter Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. (d) Sedimentary Rock Sedimentary Rock, in geology, rock composed of geologically reworked materials, formed by the accumulation and consolidation of mineral and particulate matter deposited by the action of water or, less frequently, wind or glacial ice. Most sedimentary rocks are characterized by parallel or discordant bedding that reflects variations in either the rate of deposition of the material or the nature of the matter that is deposited. Sedimentary rocks are classified according to their manner of origin into mechanical or chemical sedimentary rocks. Mechanical rocks, or fragmental rocks, are composed of mineral particles produced by the mechanical disintegration of other rocks and transported, without chemical deterioration, by flowing water. They are carried into larger bodies of water, where they are deposited in layers. Shale, sandstone, and conglomerate are common sedimentary rocks of mechanical origin. The materials making up chemical sedimentary rocks may consist of the remains of microscopic marine organisms precipitated on the ocean floor, as in the case of limestone. They may also have been dissolved in water circulating through the parent rock formation and then deposited in a sea or lake by precipitation from the solution. 
Halite, gypsum, and anhydrite are formed by the evaporation of salt solutions and the consequent precipitation of the salts. See also Geology; Igneous Rock. Igneous Rock I INTRODUCTION Igneous Rock, rock formed when molten or partially molten material, called magma, cools and solidifies. Igneous rocks are one of the three main types of rocks; the other types are sedimentary rocks and metamorphic rocks. Of the three types of rocks, only igneous rocks are formed from melted material. The two most common types of igneous rocks are granite and basalt. Granite is light colored and is composed of large crystals of the minerals quartz, feldspar, and mica. Basalt is dark and contains minute crystals of the minerals olivine, pyroxene, and feldspar. II TYPES OF IGNEOUS ROCKS Geologists classify igneous rocks according to the depth at which they formed in the earth’s crust. Using this principle, they divide igneous rocks into two broad categories: those that formed beneath the earth’s surface, and those that formed at the surface. Igneous rocks may also be classified according to the minerals they contain. A Classification by Depth of Formation Rocks formed within the earth are called intrusive or plutonic rocks because the magma from which they form often intrudes into the neighboring rock. Rocks formed at the surface of the earth are called extrusive rocks. In extrusive rocks, the magma has extruded, or erupted, through a volcano or fissure. Geologists can tell the difference between intrusive and extrusive rocks by the size of their crystals: crystals in intrusive rocks are larger than those in extrusive rocks. The crystals in intrusive rocks are larger because the magma that forms them is insulated by the surrounding rock and therefore cools slowly. This slow cooling gives the crystals time to grow larger. Extrusive rocks cool rapidly, so the crystals are very small. 
In some cases, the magma cools so rapidly that crystals have no time to form, and the magma hardens into an amorphous glass, such as obsidian. One special type of rock, called porphyry, is partly intrusive and partly extrusive. Porphyry has large crystals embedded in a mass of much smaller crystals. The large crystals formed slowly underground and, because they melt only at extremely high temperatures, remained solid when they were carried along in the lava as it erupted. The mass of much smaller crystals formed around the large crystals when the lava cooled quickly above ground. B Classification by Composition Geologists also classify igneous rocks based on the minerals the rocks contain. If the mineral grains in the rocks are large enough, geologists can identify specific minerals by eye and easily classify the rocks by their mineral composition. However, extrusive rocks are generally too fine-grained to identify their minerals by eye. Geologists must classify these rocks by determining their chemical composition in the laboratory. Most magmas are composed primarily of the same elements that make up the crust and the mantle of the earth: oxygen (O), silicon (Si), aluminum (Al), iron (Fe), magnesium (Mg), calcium (Ca), sodium (Na), and potassium (K). These elements make up the rock-forming minerals quartz, feldspar, mica, amphibole, pyroxene, and olivine. Rocks and minerals rich in silicon are called silica-rich or felsic (rich in feldspar and silica). Rocks and minerals low in silicon are rich in magnesium and iron. They are called mafic (rich in magnesium and ferrum, the Latin term for iron). Rocks very low in silicon are called ultramafic. Rocks with a composition between felsic and mafic are called intermediate. B1 Felsic Rocks The most felsic, or silicon-rich, mineral is quartz. It is pure silicon dioxide and contains no aluminum, iron, magnesium, calcium, sodium, or potassium. The other important felsic mineral is feldspar. In feldspar, a quarter or a half of the silicon has been replaced by aluminum. 
Feldspar also contains potassium, sodium, or calcium but no magnesium or iron. Felsic intrusive rocks are classified as either granite or granodiorite, depending on how much potassium they contain. Both are light-colored rocks that have large crystals of quartz and feldspar. Extrusive rocks that have the same chemical composition as granite are called rhyolite and those with the same chemical composition as granodiorite are called dacite. Both rhyolite and dacite are fine-grained light-colored rocks. B2 Intermediate Rocks Rocks intermediate in composition between felsic and mafic rocks are termed syenite, monzonite, or monzodiorite if they are intrusive and trachyte, latite, or andesite if they are extrusive. Syenite and trachyte are rich in potassium while monzodiorite and andesite contain little potassium. B3 Mafic Rocks The mafic rock-forming minerals are olivine, pyroxene, and amphibole. All three contain silicon and a lot of either magnesium or iron or both. All three of these minerals are often dark colored. Mafic intrusive rocks are termed diorite or gabbro. Both are dark rocks with large, dark, mafic crystals as well as crystals of light-colored feldspar. Neither contains quartz. Diorite contains amphibole and pyroxene, while gabbro contains pyroxene and olivine. The feldspar in diorite tends to be sodium-rich, while the feldspar in gabbro is calcium-rich. Extrusive rocks that have the same chemical composition as diorite or gabbro are called basalt. Basalt is a fine-grained dark rock. Ultramafic rocks are composed almost exclusively of mafic minerals. Dunite is composed of more than 90 percent olivine; peridotites have between 40 and 90 percent olivine with pyroxene and amphibole as the other two principal minerals. Pyroxenite is composed primarily of pyroxene, and hornblendite is composed primarily of hornblende, which is a type of amphibole. 
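The classification above pairs each coarse-grained intrusive rock with a fine-grained extrusive rock of the same chemistry (granite with rhyolite, granodiorite with dacite, and so on). As a quick sketch of that mapping, the rock names below are taken directly from the text, while the dictionary and helper function are illustrative conveniences, not any standard geological software:

```python
# Intrusive -> extrusive equivalents as described in the text.
# The data structure and lookup helper are hypothetical, for illustration only.

EXTRUSIVE_EQUIVALENT = {
    # felsic
    "granite": "rhyolite",
    "granodiorite": "dacite",
    # intermediate
    "syenite": "trachyte",
    "monzonite": "latite",
    "monzodiorite": "andesite",
    # mafic (as the text classifies them)
    "diorite": "basalt",
    "gabbro": "basalt",
}

def extrusive_equivalent(intrusive_rock: str) -> str:
    """Return the fine-grained extrusive rock with the same chemical composition."""
    return EXTRUSIVE_EQUIVALENT[intrusive_rock.lower()]

print(extrusive_equivalent("Granite"))       # rhyolite
print(extrusive_equivalent("monzodiorite"))  # andesite
```

The same table could be extended with the ultramafic rocks (dunite, peridotite, pyroxenite, hornblendite), which the text describes by mineral percentage rather than by an extrusive counterpart.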
III FORMATION OF IGNEOUS ROCKS The magmas that form igneous rock are hot, chemical soups containing a complex mixture of many different elements. As they cool, many different minerals could form. Indeed, two magmas with identical composition could form quite distinct sets of minerals, depending on the conditions of crystallization. As a magma cools, the first crystals to form will be of minerals that become solid at relatively high temperatures (usually olivine and a type of feldspar known as anorthite). The composition of these early-formed mineral crystals will be different from the initial composition of the magma. Consequently, as these growing crystals take certain elements out of the magma in certain proportions, the composition of the remaining liquid changes. This process is known as magmatic differentiation. Sometimes, the early-formed crystals are separated from the rest of the magma, either by settling to the floor of the magma chamber, or by compression that expels the liquid, leaving the crystals behind. As the magma cools to temperatures below the point where other minerals begin to crystallize (such as pyroxene and another type of feldspar known as bytownite), their crystals will start to form as well. However, early-formed minerals often cannot coexist in magma with the later-formed mineral crystals. If the early-formed minerals are not separated from the magma, they will react with or dissolve back into the magma over time. This process repeats through several cycles as the temperature of the magma continues to cool to the point where the remaining minerals become solid. The final mix of minerals formed from a cooling magma depends on three factors: the initial composition of the magma, the degree to which already-formed crystals separate from the magma, and the speed of cooling. IV INTRUSIONS When magma intrudes a region of the crust and cools, the resulting mass of igneous rock is called an intrusion. 
Geologists describe intrusions by their size, their shape, and whether they are concordant, meaning they run parallel to the structure of neighboring rocks, or discordant, meaning they cut across the structure of neighboring rocks. An example of a concordant intrusion is a horizontal bed formed when magma flows between horizontal beds of neighboring rock. A discordant intrusion would form when magma flows into cracks in neighboring rock, and the cracks lie at an angle to the neighboring beds of rock. A batholith is an intrusion with a cross-sectional area of more than 100 sq km (39 sq mi), usually consisting of granite, granodiorite, and diorite. Deep batholiths are often concordant, while shallow batholiths are usually discordant. Deep batholiths can be extremely large; the Coast Range batholith of North America is 100 to 200 km (60 to 120 mi) wide and extends 600 km (370 mi) through Alaska and British Columbia, Canada. Lopoliths are saucer-shaped concordant intrusions. They may be up to 100 km (60 mi) in diameter and 8 km (5 mi) thick. Lopoliths, which are usually basaltic in composition, are frequently called layered intrusions because they are strongly layered. Well-known examples are the Bushveld complex in South Africa and the Muskox intrusion in the Northwest Territories, Canada. Laccoliths have a flat base and a domed ceiling, and are concordant with the neighboring rocks; they are usually small. The classic area from which they were first described is the Henry Mountains in the state of Utah. Dikes and sills are sheetlike intrusions that are very thin relative to their length; sills are concordant and dikes are discordant. They are commonly fairly small features (a few meters thick) but can be larger. The Palisades Sill in the state of New York is 300 m (1000 ft) thick and 80 km (50 mi) long. V EXTRUSIVE BODIES Many different types of extrusive bodies occur throughout the world. 
The physical characteristics of these bodies depend on their chemical composition and on how the magma from which they formed erupted. The chemical composition of the parent magma affects its viscosity, or its resistance to flow, which in turn affects how the magma erupts. Felsic magma tends to be thick and viscous, while mafic magma tends to be fluid. (See also Volcano.) Flood basalts are the most common type of extrusive rock. They form when highly fluid basaltic lava erupts from long fissures and many vents. The lava coalesces and floods large areas to considerable depths (up to 100 m/300 ft). Repeated eruptions can result in accumulated deposits up to 5 km (3 mi) thick. Typical examples are the Columbia River basalts in Washington and the Deccan trap of western India; the latter covers an area of more than 500,000 sq km (200,000 sq mi). When basalt erupts underwater, the rapid cooling causes it to form a characteristic texture known as pillow basalt. Pillow basalts are lava flows made up of interconnected pillow-shaped and pillow-sized rocks. Much of the ocean floor is made up of pillow basalt. Extrusive rocks that erupt from a main central vent form volcanoes, and these are classified according to their physical form and the type of volcanic activity. Mafic, or basaltic, lava is highly fluid and erupts nonexplosively. The fluid lava quickly spreads out, forming large volcanoes with shallow slopes called shield volcanoes. Mauna Loa (Hawaii) is the best-known example. Intermediate, or andesitic, magmas have a higher viscosity and so they erupt more explosively. They form steep-sided composite volcanoes. A composite volcano, or stratovolcano, is made up of layers of lava and volcanic ash. Well-known examples of composite volcanoes include Mount Rainier (Washington), Mount Vesuvius (Italy), and Mount Fuji (Japan). Felsic (rhyolitic) magmas are so viscous that they do not flow very far at all; instead, they form a dome above their central vent. 
This dome can give rise to very explosive eruptions when pressure builds up in a blocked vent, as happened with Mount Saint Helens (Washington) in 1980, Krakatau (Indonesia) in 1883, and Vesuvius (Italy) in AD 79. This type of explosive behavior can eject enormous amounts of ash and rock fragments, referred to as pyroclastic material, which form pyroclastic deposits (See also Pyroclastic Flow). VI PLATE TECTONICS AND IGNEOUS ROCKS The advent of the theory of plate tectonics in the 1960s provided a theoretical framework for understanding the worldwide distribution of different types of igneous rocks. According to the theory of plate tectonics, the surface of the earth is covered by about a dozen large plates. Some of these plates are composed primarily of basalt and are called oceanic plates, since most of the ocean floor is covered with basalt. Other plates, called continental plates because they contain the continents, are composed of a wide range of rocks, including sedimentary and metamorphic rocks, and large amounts of granite. Where two plates diverge (move apart), such as along a mid-ocean ridge, magma rises from the mantle to fill the gap. This material is mafic in composition and forms basalt. Where this divergence occurs on land, such as in Iceland, flood basalts are formed. When an oceanic plate collides with a continental plate, the heavier oceanic plate subducts, or slides, under the lighter continental plate. Some of the subducted material melts and rises. As it travels through the overriding continental plate, it melts and mixes with the continental material. Since continental material, on average, is more felsic than the mafic basalt of the oceanic plate, this mixing causes the composition of the magma to become more felsic. The magma may become intermediate in composition and form andesitic volcanoes. 
The Andes Mountains of South America are a long chain of andesitic volcanoes formed from the subduction of the Pacific Plate under the South American plate. If the magma becomes felsic, it may form rhyolitic volcanoes like Mount Saint Helens. Magma that is too viscous to rise to the surface may instead form granitic batholiths. VII ECONOMIC IMPORTANCE OF IGNEOUS ROCKS Many types of igneous rocks are used as building stone, facing stone, and decorative material, such as that used for tabletops, cutting boards, and carved figures. For example, polished granite facing stone is exported all over the world from countries such as Italy, Brazil, and India. Igneous rocks may also contain many important ores as accessory or trace minerals. Certain mafic intrusives are sources of chromium, titanium, platinum, and palladium. Some felsic rocks, called granitic pegmatites, contain a wealth of rare elements, such as lithium, tantalum, tin, and niobium, which are of economic importance. Kimberlites, formed from magmas from deep within the earth, are the primary source of diamonds. Many magmas release large amounts of metal-rich hot fluids that migrate through nearby rock, forming veins rich in metallic ores. Newly formed igneous rocks are also hot and can be an important source of geothermal energy. Contributed By: Frank Christopher Hawthorne (e) producers A more useful way of looking at the terrestrial and aquatic landscapes is to view them as ecosystems, a word coined in 1935 by the British plant ecologist Sir Arthur George Tansley to stress the concept of each locale or habitat as an integrated whole. A system is a collection of interdependent parts that function as a unit and involve inputs and outputs. 
The major parts of an ecosystem are the producers (green plants), the consumers (herbivores and carnivores), the decomposers (fungi and bacteria), and the nonliving, or abiotic, component, consisting of dead organic matter and nutrients in the soil and water. Inputs into the ecosystem are solar energy, water, oxygen, carbon dioxide, nitrogen, and other elements and compounds. Outputs from the ecosystem include water, oxygen, carbon dioxide, nutrient losses, and the heat released in cellular respiration, or heat of respiration. The major driving force is solar energy. Plants are primary producers. All life in an ecosystem depends on primary producers to capture energy from the Sun, convert it to food that is stored in plant cells, and pass this energy on to organisms that eat plants. Consumers Primary Primary consumers are animals that feed on plants. This group includes some insects, seed- and fruit-eating birds, rodents, and larger animals that graze on vegetation, such as deer. When primary consumers eat primary producers (plants), the energy in plant cells changes into a form that can be stored in animal cells. Secondary Secondary consumers are a diverse group of animals—some eat primary consumers and some eat other secondary consumers. Those animals that eat smaller primary consumers include frogs, snakes, foxes, and spiders. Animals that eat secondary consumers include hawks, wolves, and lions. Decomposers Decomposers include worms, mushrooms, and microscopic bacteria. These organisms break down dead plants and animals into the nutrients needed by plants to survive. Question:8 Control Unit A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. 
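Looking back at the producer-consumer chain in section (e): each step up the food chain passes along only a fraction of the energy stored at the level below. A minimal sketch, assuming the common textbook figure of roughly 10 percent transfer efficiency per trophic level (a rule of thumb that is not stated in the passage itself):

```python
# Sketch of energy flow up a food chain (producers -> primary consumers -> ...).
# The 10% transfer efficiency per trophic level is an assumed ecology rule of
# thumb used for illustration; it does not appear in the passage above.

TRANSFER_EFFICIENCY = 0.10  # assumed fraction of energy passed up each level

def energy_at_levels(producer_energy, n_levels):
    """Energy (arbitrary units) available at each of n_levels trophic levels."""
    energies = [producer_energy]
    for _ in range(n_levels - 1):
        energies.append(energies[-1] * TRANSFER_EFFICIENCY)
    return energies

levels = ["producers", "primary consumers", "secondary consumers"]
for name, e in zip(levels, energy_at_levels(10_000.0, len(levels))):
    print(f"{name}: {e:g} units")
```

Under this assumption, the steep drop in available energy at each level is one way to see why top predators such as hawks, wolves, and lions are far less numerous than the plants and herbivores below them.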
The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants. As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers where it may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order. Question:9 Cell (biology) I INTRODUCTION Cell (biology), basic unit of life. Cells are the smallest structures capable of basic life processes, such as taking in nutrients, expelling waste, and reproducing. All living things are composed of cells. Some microscopic organisms, such as bacteria and protozoa, are unicellular, meaning they consist of a single cell. Plants, animals, and fungi are multicellular; that is, they are composed of a great many cells working in concert. 
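Returning to the CPU description in Question 8: the cycle it describes, in which the program counter steps through instructions while the decoder dispatches work to the ALU, registers, and RAM, can be sketched in a few lines. The three-field instruction format and opcode names below are invented for illustration; real CPUs use binary encodings and far richer instruction sets.

```python
# Minimal sketch of the fetch-decode-execute cycle from Question 8.
# Instructions are hypothetical (opcode, operand1, operand2) tuples.

def run(program, memory):
    """Execute a tiny program against a list-backed 'RAM'; return final memory."""
    registers = [0] * 4        # small set of fast temporary storage locations
    pc = 0                     # program counter: index of the next instruction
    while pc < len(program):
        opcode, a, b = program[pc]          # fetch the next instruction
        if opcode == "LOAD":                # decode + execute
            registers[a] = memory[b]        # RAM -> register
        elif opcode == "ADD":               # an ALU operation on registers
            registers[a] = registers[a] + registers[b]
        elif opcode == "STORE":
            memory[b] = registers[a]        # register -> RAM
        pc += 1                             # advance to the next instruction
    return memory

# Add the values at addresses 0 and 1, store the sum at address 2.
ram = [7, 35, 0]
program = [
    ("LOAD", 0, 0),   # r0 <- RAM[0]
    ("LOAD", 1, 1),   # r1 <- RAM[1]
    ("ADD", 0, 1),    # r0 <- r0 + r1
    ("STORE", 0, 2),  # RAM[2] <- r0
]
print(run(program, ram))  # [7, 35, 42]
```

The simulator mirrors the prose: the `while` loop plays the role of the program counter advancing in order, the `if`/`elif` chain stands in for the instruction decoder, and the register arithmetic stands in for the ALU.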
But whether it makes up an entire bacterium or is just one of trillions in a human being, the cell is a marvel of design and efficiency. Cells carry out thousands of biochemical reactions each minute and reproduce new cells that perpetuate life. Cells vary considerably in size. The smallest cell, a type of bacterium known as a mycoplasma, measures 0.0001 mm (0.000004 in) in diameter; 10,000 mycoplasmas in a row are only as wide as the diameter of a human hair. Among the largest cells are the nerve cells that run down a giraffe’s neck; these cells can exceed 3 m (9.7 ft) in length. Human cells also display a variety of sizes, from small red blood cells that measure 0.00076 mm (0.00003 in) to liver cells that may be ten times larger. About 10,000 average-sized human cells can fit on the head of a pin. Along with their differences in size, cells present an array of shapes. Some, such as the bacterium Escherichia coli, resemble rods. The paramecium, a type of protozoan, is slipper shaped; and the amoeba, another protozoan, has an irregular form that changes shape as it moves around. Plant cells typically resemble boxes or cubes. In humans, the outermost layers of skin cells are flat, while muscle cells are long and thin. Some nerve cells, with their elongated, tentacle-like extensions, suggest an octopus. In multicellular organisms, shape is typically tailored to the cell’s job. For example, flat skin cells pack tightly into a layer that protects the underlying tissues from invasion by bacteria. Long, thin muscle cells contract readily to move bones. The numerous extensions from a nerve cell enable it to connect to several other nerve cells in order to send and receive messages rapidly and efficiently. By itself, each cell is a model of independence and self-containment. Like some miniature, walled city in perpetual rush hour, the cell constantly bustles with traffic, shuttling essential molecules from place to place to carry out the business of living. 
Despite their individuality, however, cells also display a remarkable ability to join, communicate, and coordinate with other cells. The human body, for example, consists of an estimated 20 to 30 trillion cells. Dozens of different kinds of cells are organized into specialized groups called tissues. Tendons and bones, for example, are composed of connective tissue, whereas skin and mucous membranes are built from epithelial tissue. Different tissue types are assembled into organs, which are structures specialized to perform particular functions. Examples of organs include the heart, stomach, and brain. Organs, in turn, are organized into systems such as the circulatory, digestive, or nervous systems. All together, these assembled organ systems form the human body. The components of cells are molecules, nonliving structures formed by the union of atoms. Small molecules serve as building blocks for larger molecules. Proteins, nucleic acids, carbohydrates, and lipids, which include fats and oils, are the four major molecules that underlie cell structure and also participate in cell functions. For example, a tightly organized arrangement of lipids, proteins, and protein-sugar compounds forms the plasma membrane, or outer boundary, of certain cells. The organelles, membrane-bound compartments in cells, are built largely from proteins. Biochemical reactions in cells are guided by enzymes, specialized proteins that speed up chemical reactions. The nucleic acid deoxyribonucleic acid (DNA) contains the hereditary information for cells, and another nucleic acid, ribonucleic acid (RNA), works with DNA to build the thousands of proteins the cell needs. II CELL STRUCTURE Cells fall into one of two categories: prokaryotic or eukaryotic (see Prokaryote). In a prokaryotic cell, found only in bacteria and archaebacteria, all the components, including the DNA, mingle freely in the cell’s interior, a single compartment. 
Eukaryotic cells, which make up plants, animals, fungi, and all other life forms, contain numerous compartments, or organelles, within each cell. The DNA in eukaryotic cells is enclosed in a special organelle called the nucleus, which serves as the cell’s command center and information library. The term prokaryote comes from Greek words that mean “before nucleus” or “prenucleus,” while eukaryote means “true nucleus.” A Prokaryotic Cells Prokaryotic cells are among the tiniest of all cells, ranging in size from 0.0001 to 0.003 mm (0.000004 to 0.0001 in) in diameter. About a hundred typical prokaryotic cells lined up in a row would match the thickness of a book page. These cells, which can be rodlike, spherical, or spiral in shape, are surrounded by a protective cell wall. Like most cells, prokaryotic cells live in a watery environment, whether it is soil moisture, a pond, or the fluid surrounding cells in the human body. Tiny pores in the cell wall enable water and the substances dissolved in it, such as oxygen, to flow into the cell; these pores also allow wastes to flow out. Pushed up against the inner surface of the prokaryotic cell wall is a thin membrane called the plasma membrane. The plasma membrane, composed of two layers of flexible lipid molecules and interspersed with durable proteins, is both supple and strong. Unlike the cell wall, whose open pores allow the unregulated traffic of materials in and out of the cell, the plasma membrane is selectively permeable, meaning it allows only certain substances to pass through. Thus, the plasma membrane actively separates the cell’s contents from its surrounding fluids. While small molecules such as water, oxygen, and carbon dioxide diffuse freely across the plasma membrane, the passage of many larger molecules, including amino acids (the building blocks of proteins) and sugars, is carefully regulated. Specialized transport proteins accomplish this task. 
The transport proteins span the plasma membrane, forming an intricate system of pumps and channels through which traffic is conducted. Some substances swirling in the fluid around the cell can enter it only if they bind to and are escorted in by specific transport proteins. In this way, the cell fine-tunes its internal environment. The plasma membrane encloses the cytoplasm, the semifluid that fills the cell. Composed of about 65 percent water, the cytoplasm is packed with up to a billion molecules per cell, a rich storehouse that includes enzymes and dissolved nutrients, such as sugars and amino acids. The water provides a favorable environment for the thousands of biochemical reactions that take place in the cell. Within the cytoplasm of all prokaryotes is deoxyribonucleic acid (DNA), a complex molecule in the form of a double helix, a shape similar to a spiral staircase. The DNA is about 1,000 times the length of the cell, and to fit inside, it repeatedly twists and folds to form a compact structure called a chromosome. The chromosome in prokaryotes is circular, and is located in a region of the cell called the nucleoid. Often, smaller chromosomes called plasmids are located in the cytoplasm. The DNA is divided into units called genes, just like a long train is divided into separate cars. Depending on the species, the DNA contains several hundred or even thousands of genes. Typically, one gene contains coded instructions for building all or part of a single protein. Enzymes, which are specialized proteins, determine virtually all the biochemical reactions that support and sustain the cell. Also immersed in the cytoplasm are the only organelles in prokaryotic cells—tiny bead-like structures called ribosomes. These are the cell’s protein factories. 
Following the instructions encoded in the DNA, ribosomes churn out proteins by the hundreds every minute, providing needed enzymes, the replacements for worn-out transport proteins, or other proteins required by the cell. While relatively simple in construction, prokaryotic cells display extremely complex activity. They have a greater range of biochemical reactions than those found in their larger relatives, the eukaryotic cells. The extraordinary biochemical diversity of prokaryotic cells is manifested in the wide-ranging lifestyles of the archaebacteria and the bacteria, whose habitats include polar ice, deserts, and hydrothermal vents—deep regions of the ocean under great pressure where hot water geysers erupt from cracks in the ocean floor. B Eukaryotic Animal Cells Eukaryotic cells are typically about ten times larger than prokaryotic cells. In animal cells, the plasma membrane, rather than a cell wall, forms the cell’s outer boundary. With a design similar to the plasma membrane of prokaryotic cells, it separates the cell from its surroundings and regulates the traffic across the membrane. The eukaryotic cell cytoplasm is similar to that of the prokaryote cell except for one major difference: Eukaryotic cells house a nucleus and numerous other membrane-enclosed organelles. Like separate rooms of a house, these organelles enable specialized functions to be carried out efficiently. The building of proteins and lipids, for example, takes place in separate organelles where specialized enzymes geared for each job are located. The nucleus is the largest organelle in an animal cell. It contains numerous strands of DNA, the length of each strand being many times the diameter of the cell. Unlike the circular prokaryotic DNA, long sections of eukaryotic DNA pack into the nucleus by wrapping around proteins. As a cell begins to divide, each DNA strand folds over onto itself several times, forming a rod-shaped chromosome. 
The nucleus is surrounded by a double-layered membrane that protects the DNA from potentially damaging chemical reactions that occur in the cytoplasm. Messages pass between the cytoplasm and the nucleus through nuclear pores, which are holes in the membrane of the nucleus. In each nuclear pore, molecular signals flash back and forth as often as ten times per second. For example, a signal to activate a specific gene comes in to the nucleus and instructions for production of the necessary protein go out to the cytoplasm. Attached to the nuclear membrane is an elongated membranous sac called the endoplasmic reticulum. This organelle tunnels through the cytoplasm, folding back and forth on itself to form a series of membranous stacks. Endoplasmic reticulum takes two forms: rough and smooth. Rough endoplasmic reticulum (RER) is so called because it appears bumpy under a microscope. The bumps are actually thousands of ribosomes attached to the membrane’s surface. The ribosomes in eukaryotic cells have the same function as those in prokaryotic cells—protein synthesis—but they differ slightly in structure. Eukaryote ribosomes bound to the endoplasmic reticulum help assemble proteins that typically are exported from the cell. The ribosomes work with other molecules to link amino acids to partially completed proteins. These incomplete proteins then travel to the inner chamber of the endoplasmic reticulum, where chemical modifications, such as the addition of a sugar, are carried out. Chemical modifications of lipids are also carried out in the endoplasmic reticulum. The endoplasmic reticulum and its bound ribosomes are particularly dense in cells that produce many proteins for export, such as the white blood cells of the immune system, which produce and secrete antibodies. Some ribosomes that manufacture proteins are not attached to the endoplasmic reticulum. 
These so-called free ribosomes are dispersed in the cytoplasm and typically make proteins—many of them enzymes—that remain in the cell. The second form of endoplasmic reticulum, the smooth endoplasmic reticulum (SER), lacks ribosomes and has an even surface. Within the winding channels of the smooth endoplasmic reticulum are the enzymes needed for the construction of molecules such as carbohydrates and lipids. The smooth endoplasmic reticulum is prominent in liver cells, where it also serves to detoxify substances such as alcohol, drugs, and other poisons. Proteins are transported from free and bound ribosomes to the Golgi apparatus, an organelle that resembles a stack of deflated balloons. It is packed with enzymes that complete the processing of proteins. These enzymes add sulfur or phosphorus atoms to certain regions of the protein, for example, or chop off tiny pieces from the ends of the proteins. The completed protein then leaves the Golgi apparatus for its final destination inside or outside the cell. During its assembly on the ribosome, each protein has acquired a group of 4 to 100 amino acids called a signal. The signal works as a molecular shipping label to direct the protein to its proper location. Lysosomes are small, often spherical organelles that function as the cell’s recycling center and garbage disposal. Powerful digestive enzymes concentrated in the lysosome break down worn-out organelles and ship their building blocks to the cytoplasm, where they are used to construct new organelles. Lysosomes also dismantle and recycle proteins, lipids, and other molecules. The mitochondria are the powerhouses of the cell. Within these long, slender organelles, which can appear oval or bean-shaped under the electron microscope, enzymes convert the sugar glucose and other nutrients into adenosine triphosphate (ATP). 
This molecule, in turn, serves as an energy battery for countless cellular processes, including the shuttling of substances across the plasma membrane, the building and transport of proteins and lipids, the recycling of molecules and organelles, and the dividing of cells. Muscle and liver cells are particularly active and require dozens and sometimes up to a hundred mitochondria per cell to meet their energy needs. Mitochondria are unusual in that they contain their own DNA in the form of a prokaryote-like circular chromosome; have their own ribosomes, which resemble prokaryotic ribosomes; and divide independently of the cell. Unlike the tiny prokaryotic cell, the relatively large eukaryotic cell requires structural support. The cytoskeleton, a dynamic network of protein tubes, filaments, and fibers, crisscrosses the cytoplasm, anchoring the organelles in place and providing shape and structure to the cell. Many components of the cytoskeleton are assembled and disassembled by the cell as needed. During cell division, for example, a special structure called a spindle is built to move chromosomes around. After cell division, the spindle, no longer needed, is dismantled. Some components of the cytoskeleton serve as microscopic tracks along which proteins and other molecules travel like miniature trains. Recent research suggests that the cytoskeleton also may be a mechanical communication structure that converses with the nucleus to help organize events in the cell. C Eukaryotic Plant Cells Plant cells have all the components of animal cells and boast several added features, including chloroplasts, a central vacuole, and a cell wall. Chloroplasts convert light energy— typically from the Sun—into the sugar glucose, a form of chemical energy, in a process known as photosynthesis. Chloroplasts, like mitochondria, possess a circular chromosome and prokaryote-like ribosomes, which manufacture the proteins that the chloroplasts typically need. 
The central vacuole of a mature plant cell typically takes up most of the room in the cell. The vacuole, a membranous bag, crowds the cytoplasm and organelles to the edges of the cell. The central vacuole stores water, salts, sugars, proteins, and other nutrients. In addition, it stores the blue, red, and purple pigments that give certain flowers their colors. The central vacuole also contains plant wastes that taste bitter to certain insects, thus discouraging the insects from feasting on the plant. In plant cells, a sturdy cell wall surrounds and protects the plasma membrane. Its pores enable materials to pass freely into and out of the cell. The strength of the wall also enables a cell to absorb water into the central vacuole and swell without bursting. The resulting pressure in the cells provides plants with rigidity and support for stems, leaves, and flowers. Without sufficient water pressure, the cells collapse and the plant wilts. III CELL FUNCTIONS To stay alive, cells must be able to carry out a variety of functions. Some cells must be able to move, and most cells must be able to divide. All cells must maintain the right concentration of chemicals in their cytoplasm, ingest food and use it for energy, recycle molecules, expel wastes, and construct proteins. Cells must also be able to respond to changes in their environment. A Movement Many unicellular organisms swim, glide, thrash, or crawl to search for food and escape enemies. Swimming organisms often move by means of a flagellum, a long tail-like structure made of protein. Many bacteria, for example, have one, two, or many flagella that rotate like propellers to drive the organism along. Some single-celled eukaryotic organisms, such as euglena, also have a flagellum, but it is longer and thicker than the prokaryotic flagellum. The eukaryotic flagellum works by waving up and down like a whip. In higher animals, the sperm cell uses a flagellum to swim toward the female egg for fertilization. 
Movement in eukaryotes is also accomplished with cilia, short, hairlike proteins built by centrioles, which are barrel-shaped structures located in the cytoplasm that assemble and break down protein filaments. Typically, thousands of cilia extend through the plasma membrane and cover the surface of the cell, giving it a dense, hairy appearance. By beating its cilia as if they were oars, an organism such as the paramecium propels itself through its watery environment. In cells that do not move, cilia are used for other purposes. In the respiratory tract of humans, for example, millions of ciliated cells prevent inhaled dust, smog, and microorganisms from entering the lungs by sweeping them up on a current of mucus into the throat, where they are swallowed. Eukaryotic flagella and cilia are formed from basal bodies, small protein structures located just inside the plasma membrane. Basal bodies also help to anchor flagella and cilia. Still other eukaryotic cells, such as amoebas and white blood cells, move by amoeboid motion, or crawling. They extrude their cytoplasm to form temporary pseudopodia, or false feet, which actually are placed in front of the cell, rather like extended arms. They then drag the trailing end of their cytoplasm up to the pseudopodia. A cell using amoeboid motion would lose a race to a euglena or paramecium. But while it is slow, amoeboid motion is strong enough to move cells against a current, enabling water-dwelling organisms to pursue and devour prey, for example, or white blood cells roaming the blood stream to stalk and engulf a bacterium or virus. B Nutrition All cells require nutrients for energy, and they display a variety of methods for ingesting them. Simple nutrients dissolved in pond water, for example, can be carried through the plasma membrane of pond-dwelling organisms via a series of molecular pumps. 
In humans, the cavity of the small intestine contains the nutrients from digested food, and cells that form the walls of the intestine use similar pumps to pull amino acids and other nutrients from the cavity into the bloodstream. Certain unicellular organisms, such as amoebas, are also capable of reaching out and grabbing food. They use a process known as endocytosis, in which the plasma membrane surrounds and engulfs the food particle, enclosing it in a sac, called a vesicle, that is within the amoeba’s interior. C Energy Cells require energy for a variety of functions, including moving, building up and breaking down molecules, and transporting substances across the plasma membrane. Nutrients contain energy, but cells must convert the energy locked in nutrients to another form—specifically, the ATP molecule, the cell’s energy battery—before it is useful. In single-celled eukaryotic organisms, such as the paramecium, and in multicellular eukaryotic organisms, such as plants, animals, and fungi, mitochondria are responsible for this task. The interior of each mitochondrion consists of an inner membrane that is folded into a mazelike arrangement of separate compartments called cristae. Within the cristae, enzymes form an assembly line where the energy in glucose and other energy-rich nutrients is harnessed to build ATP; thousands of ATP molecules are constructed each second in a typical cell. In most eukaryotic cells, this process requires oxygen and is known as aerobic respiration. Some prokaryotic organisms also carry out aerobic respiration. They lack mitochondria, however, and carry out aerobic respiration in the cytoplasm with the help of enzymes sequestered there. Many prokaryotic species live in environments where there is little or no oxygen, environments such as mud, stagnant ponds, or within the intestines of animals. 
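For reference, the overall reaction that aerobic respiration carries out—a standard textbook summary, not spelled out explicitly in the text above—can be written as:

```latex
\underbrace{C_6H_{12}O_6}_{\text{glucose}} + 6\,O_2 \;\longrightarrow\; 6\,CO_2 + 6\,H_2O + \text{energy captured as ATP}
```

In eukaryotes the enzymes that drive this reaction line the cristae of the mitochondria; in aerobic prokaryotes, as noted above, the same chemistry runs in the cytoplasm.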
Some of these organisms produce ATP without oxygen in a process known as anaerobic respiration, where sulfur or other substances take the place of oxygen. Still other prokaryotes, and yeast, a single-celled eukaryote, build ATP without oxygen in a process known as fermentation. Almost all organisms rely on the sugar glucose to produce ATP. Glucose is made by the process of photosynthesis, in which light energy is transformed to the chemical energy of glucose. Animals and fungi cannot carry out photosynthesis and depend on plants and other photosynthetic organisms for this task. In plants, as we have seen, photosynthesis takes place in organelles called chloroplasts. Chloroplasts contain numerous internal compartments called thylakoids where enzymes aid in the energy conversion process. A single leaf cell contains 40 to 50 chloroplasts. With sufficient sunlight, one large tree is capable of producing upwards of two tons of sugar in a single day. Photosynthesis in prokaryotic organisms—typically aquatic bacteria—is carried out with enzymes clustered in plasma membrane folds called chromatophores. Aquatic bacteria produce the food consumed by tiny organisms living in ponds, rivers, lakes, and seas. D Protein Synthesis A typical cell must have on hand about 30,000 proteins at any one time. Many of these proteins are enzymes needed to construct the major molecules used by cells— carbohydrates, lipids, proteins, and nucleic acids—or to aid in the breakdown of such molecules after they have worn out. Other proteins are part of the cell’s structure—the plasma membrane and ribosomes, for example. In animals, proteins also function as hormones and antibodies, and they function like delivery trucks to transport other molecules around the body. Hemoglobin, for example, is a protein that transports oxygen in red blood cells. The cell’s demand for proteins never ceases. 
Before a protein can be made, however, the molecular directions to build it must be extracted from one or more genes. In humans, for example, one gene holds the information for the protein insulin, the hormone that cells need to import glucose from the bloodstream, while at least two genes hold the information for collagen, the protein that imparts strength to skin, tendons, and ligaments. The process of building proteins begins when enzymes, in response to a signal from the cell, bind to the gene that carries the code for the required protein, or part of the protein. The enzymes transfer the code to a new molecule called messenger RNA, which carries the code from the nucleus to the cytoplasm. This enables the original genetic code to remain safe in the nucleus, with messenger RNA delivering small bits and pieces of information from the DNA to the cytoplasm as needed. Depending on the cell type, hundreds or even thousands of molecules of messenger RNA are produced each minute. Once in the cytoplasm, the messenger RNA molecule links up with a ribosome. The ribosome moves along the messenger RNA like a monorail car along a track, stimulating another form of RNA—transfer RNA—to gather and link the necessary amino acids, pooled in the cytoplasm, to form the specific protein, or section of protein. The protein is modified as necessary by the endoplasmic reticulum and Golgi apparatus before embarking on its mission. Cells teem with activity as they forge the numerous, diverse proteins that are indispensable for life. For a more detailed discussion about protein synthesis, see Genetics: The Genetic Code. E Cell Division Most cells divide at some time during their life cycle, and some divide dozens of times before they die. Organisms rely on cell division for reproduction, growth, and repair and replacement of damaged or worn out cells. Three types of cell division occur: binary fission, mitosis, and meiosis. 
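The ribosome-along-the-messenger-RNA process described under Protein Synthesis above can be sketched in code. This is an illustrative toy, not a biological tool: the dictionary below holds only five of the 64 real codon assignments (AUG, UUU, GGU, AAA, and the UAA stop codon), and the function name `translate` is our own invention.

```python
# Toy model of ribosomal translation: read an mRNA string three bases
# (one codon) at a time and look up each amino acid, the way transfer
# RNA matches codons to amino acids pooled in the cytoplasm.
CODON_TABLE = {
    "AUG": "Met",   # methionine (also the usual start codon)
    "UUU": "Phe",   # phenylalanine
    "GGU": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "UAA": "STOP",  # stop codon: release the finished chain
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):     # step codon by codon
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None or amino_acid == "STOP":
            break                            # stop signal ends translation
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGUAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

The real machinery differs in scale rather than in principle: a ribosome reads the full 64-codon code, and the growing chain is then modified by the endoplasmic reticulum and Golgi apparatus as the text describes.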
Binary fission, the method used by prokaryotes, produces two identical cells from one cell. The more complex process of mitosis, which also produces two genetically identical cells from a single cell, is used by many unicellular eukaryotic organisms for reproduction. Multicellular organisms use mitosis for growth, cell repair, and cell replacement. In the human body, for example, an estimated 25 million mitotic cell divisions occur every second in order to replace cells that have completed their normal life cycles. Cells of the liver, intestine, and skin may be replaced every few days. Recent research indicates that even brain cells, once thought to be incapable of mitosis, undergo cell division in the part of the brain associated with memory. The type of cell division required for sexual reproduction is meiosis. Sexually reproducing organisms include seaweeds, fungi, plants, and animals—including, of course, human beings. Meiosis differs from mitosis in that cell division begins with a cell that has a full complement of chromosomes and ends with gamete cells, such as sperm and eggs, that have only half the complement of chromosomes. When a sperm and egg unite during fertilization, the cell resulting from the union, called a zygote, contains the full number of chromosomes. IV ORIGIN OF CELLS The story of how cells evolved remains an open and actively investigated question in science (see Life). The combined expertise of physicists, geologists, chemists, and evolutionary biologists has been required to shed light on the evolution of cells from the nonliving matter of early Earth. The planet formed about 4.5 billion years ago, and for millions of years, violent volcanic eruptions blasted substances such as carbon dioxide, nitrogen, water, and other small molecules into the air. 
These small molecules, bombarded by ultraviolet radiation and lightning from intense storms, collided to form the stable chemical bonds of larger molecules, such as amino acids and nucleotides—the building blocks of proteins and nucleic acids. Experiments indicate that these larger molecules form spontaneously under laboratory conditions that simulate the probable early environment of Earth. Scientists speculate that rain may have carried these molecules into lakes to create a primordial soup—a breeding ground for the assembly of proteins, the nucleic acid RNA, and lipids. Some scientists postulate that these more complex molecules formed in hydrothermal vents rather than in lakes. Other scientists propose that these key substances may have reached Earth on meteorites from outer space. Regardless of the origin or environment, however, scientists do agree that proteins, nucleic acids, and lipids provided the raw materials for the first cells. In the laboratory, scientists have observed lipid molecules joining to form spheres that resemble a cell’s plasma membrane. As a result of these observations, scientists postulate that millions of years of molecular collisions resulted in lipid spheres enclosing RNA, the simplest molecule capable of self-replication. These primitive aggregations would have been the ancestors of the first prokaryotic cells. Fossil studies indicate that cyanobacteria, bacteria capable of photosynthesis, were among the earliest bacteria to evolve, an estimated 3.4 billion to 3.5 billion years ago. In the environment of the early Earth, there was no oxygen, and cyanobacteria probably used fermentation to produce ATP. Over the eons, cyanobacteria performed photosynthesis, which produces oxygen as a byproduct; the result was the gradual accumulation of oxygen in the atmosphere. The presence of oxygen set the stage for the evolution of bacteria that used oxygen in aerobic respiration, a more efficient ATP-producing process than fermentation. 
Some molecular studies of the evolution of genes in archaebacteria suggest that these organisms may have evolved in the hot waters of hydrothermal vents or hot springs slightly earlier than cyanobacteria, around 3.5 billion years ago. Like cyanobacteria, archaebacteria probably relied on fermentation to synthesize ATP. Eukaryotic cells may have evolved from primitive prokaryotes about 2 billion years ago. One hypothesis suggests that some prokaryotic cells lost their cell walls, permitting the cell’s plasma membrane to expand and fold. These folds, ultimately, may have given rise to separate compartments within the cell—the forerunners of the nucleus and other organelles now found in eukaryotic cells. Another key hypothesis is known as endosymbiosis. Molecular studies of the bacteria-like DNA and ribosomes in mitochondria and chloroplasts indicate that mitochondrion and chloroplast ancestors were once free-living bacteria. Scientists propose that these free-living bacteria were engulfed and maintained by other prokaryotic cells for their ability to produce ATP efficiently and to provide a steady supply of glucose. Over generations, eukaryotic cells complete with mitochondria—the ancestors of animals—or with both mitochondria and chloroplasts—the ancestors of plants—evolved (see Evolution). V THE DISCOVERY AND STUDY OF CELLS The first observations of cells were made in 1665 by English scientist Robert Hooke, who used a crude microscope of his own invention to examine a variety of objects, including a thin piece of cork. Noting the rows of tiny boxes that made up the dead wood’s tissue, Hooke coined the term cell because the boxes reminded him of the small cells occupied by monks in a monastery. While Hooke was the first to observe and describe cells, he did not comprehend their significance. At about the same time, Dutch microscope maker Antoni van Leeuwenhoek built some of the best microscopes of the time. 
Using his invention, Leeuwenhoek was the first to observe, draw, and describe a variety of living organisms, including bacteria gliding in saliva, one-celled organisms cavorting in pond water, and sperm swimming in semen. Two centuries passed, however, before scientists grasped the true importance of cells. Modern ideas about cells appeared in the 1800s, when improved light microscopes enabled scientists to observe more details of cells. Working together, German botanist Matthias Jakob Schleiden and German zoologist Theodor Schwann recognized the fundamental similarities between plant and animal cells. In 1839 they proposed the revolutionary idea that all living things are made up of cells. Their theory gave rise to modern biology: a whole new way of seeing and investigating the natural world. By the late 1800s, as light microscopes improved still further, scientists were able to observe chromosomes within the cell. Their research was aided by new techniques for staining parts of the cell, which made possible the first detailed observations of cell division, including observations of the differences between mitosis and meiosis in the 1880s. In the first few decades of the 20th century, many scientists focused on the behavior of chromosomes during cell division. At that time, it was generally held that mitochondria transmitted the hereditary information. By 1920, however, scientists determined that chromosomes carry genes and that genes transmit hereditary information from generation to generation. During the same period, scientists began to understand some of the chemical processes in cells. In the 1920s, the ultracentrifuge was developed. The ultracentrifuge is an instrument that spins cells or other substances in test tubes at high speeds, which causes the heavier parts of the substance to fall to the bottom of the test tube. 
This instrument enabled scientists to separate the relatively abundant and heavy mitochondria from the rest of the cell and study their chemical reactions. By the late 1940s, scientists were able to explain the role of mitochondria in the cell. Using refined techniques with the ultracentrifuge, scientists subsequently isolated the smaller organelles and gained an understanding of their functions. While some scientists were studying the functions of cells, others were examining details of their structure. They were aided by a crucial technological development in the 1940s: the invention of the electron microscope, which uses high-energy electrons instead of light waves to view specimens. New generations of electron microscopes have provided resolution, or the differentiation of separate objects, thousands of times more powerful than that available in light microscopes. This powerful resolution revealed organelles such as the endoplasmic reticulum, lysosomes, the Golgi apparatus, and the cytoskeleton. The scientific fields of cell structure and function continue to complement each other as scientists explore the enormous complexity of cells. The discovery of the structure of DNA in 1953 by American biochemist James D. Watson and British biophysicist Francis Crick ushered in the era of molecular biology. Today, investigation inside the world of cells—of genes and proteins at the molecular level— constitutes one of the largest and fastest moving areas in all of science. One particularly active field in recent years has been the investigation of cell signaling, the process by which molecular messages find their way into the cell via a series of complex protein pathways in the cell. Another busy area in cell biology concerns programmed cell death, or apoptosis. Millions of times per second in the human body, cells commit suicide as an essential part of the normal cycle of cellular replacement. 
This also seems to be a check against disease: When mutations build up within a cell, the cell will usually self-destruct. If this fails to occur, the cell may divide and give rise to mutated daughter cells, which continue to divide and spread, gradually forming a growth called a tumor. This unregulated growth by rogue cells can be benign, or harmless, or it can be cancerous, threatening healthy tissue. The study of apoptosis is one avenue that scientists explore in an effort to understand how cells become cancerous. Scientists are also discovering exciting aspects of the physical forces within cells. Cells employ a form of architecture called tensegrity, which enables them to withstand battering by a variety of mechanical stresses, such as the pressure of blood flowing around cells or the movement of organelles within the cell. Tensegrity stabilizes cells by evenly distributing mechanical stresses to the cytoskeleton and other cell components. Tensegrity also may explain how a change in the cytoskeleton, where certain enzymes are anchored, initiates biochemical reactions within the cell, and can even influence the action of genes. The mechanical rules of tensegrity may also account for the assembly of molecules into the first cells. Such new insights—made some 300 years after the tiny universe of cells was first glimpsed—show that cells continue to yield fascinating new worlds of discovery. Animal Cell An animal cell typically contains several types of membrane-bound compartments, or organelles. The nucleus directs activities of the cell and carries genetic information from generation to generation. The mitochondria generate energy for the cell. Proteins are manufactured by ribosomes, which are bound to the rough endoplasmic reticulum or float free in the cytoplasm. The Golgi apparatus modifies, packages, and distributes proteins while lysosomes store enzymes for digesting food. 
The entire cell is wrapped in a lipid membrane that selectively permits materials to pass in and out of the cytoplasm. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Question: 11 Plastics I INTRODUCTION Plastics, materials made up of large, organic (carbon-containing) molecules that can be formed into a variety of products. The molecules that compose plastics are long carbon chains that give plastics many of their useful properties. In general, materials that are made up of long, chainlike molecules are called polymers. The word plastic is derived from the words plasticus (Latin for “capable of molding”) and plastikos (Greek “to mold,” or “fit for molding”). Plastics can be made hard as stone, strong as steel, transparent as glass, light as wood, and elastic as rubber. Plastics are also lightweight, waterproof, chemical-resistant, and produced in almost any color. More than 50 families of plastics have been produced, and new types are currently under development. Like metals, plastics come in a variety of grades. For instance, nylons are plastics that are distinguished by different properties, costs, and the manufacturing processes used to produce them. Also like metals, some plastics can be alloyed, or blended, to combine the advantages possessed by several different plastics. For example, some types of impact-resistant (shatterproof) plastics and heat-resistant plastics are made by blending different plastics together. Plastics are moldable, synthetic (chemically fabricated) materials derived mostly from fossil fuels, such as oil, coal, or natural gas. The raw forms of other materials, such as glass, metals, and clay, are also moldable. The key difference between these materials and plastics is that plastics consist of long molecules that give plastics many of their unique properties, while glass, metals, and clay consist of short molecules. II USES OF PLASTICS Plastics are indispensable to our modern way of life. 
Many people sleep on pillows and mattresses filled with a type of plastic—either cellular polyurethane or polyester. At night, people sleep under blankets and bedspreads made of acrylic plastics, and in the morning, they step out of bed onto polyester and nylon carpets. The cars we drive, the computers we use, the utensils we cook with, the recreational equipment we play with, and the houses and buildings we live and work in all include important plastic components. The average car contains almost 136 kg (almost 300 lb) of plastics—nearly 12 percent of the vehicle’s overall weight. Telephones, textiles, compact discs, paints, plumbing fixtures, boats, and furniture are other domestic products made of plastics. In 1979 the volume of plastics produced in the United States surpassed the volume of domestically produced steel. Plastics are used extensively by many key industries, including the automobile, aerospace, construction, packaging, and electrical industries. The aerospace industry uses plastics to make strategic military parts for missiles, rockets, and aircraft. Plastics are also used in specialized fields, such as the health industry, to make medical instruments, dental fillings, optical lenses, and biocompatible joints. III GENERAL PROPERTIES OF PLASTICS Plastics possess a wide variety of useful properties and are relatively inexpensive to produce. They are lighter than many materials of comparable strength, and unlike metals and wood, plastics do not rust or rot. Most plastics can be produced in any color. They can also be manufactured as clear as glass, translucent (transmitting small amounts of light), or opaque (impenetrable to light). Plastics have a lower density than that of metals, so plastics are lighter. Most plastics vary in density from 0.9 to 2.2 g/cm3 (0.45 to 1.5 oz/cu in), compared to steel’s density of 7.85 g/cm3 (5.29 oz/cu in). Plastic can also be reinforced with glass and other fibers to form incredibly strong materials. 
For example, nylon reinforced with glass can have a tensile strength (resistance of a material to being elongated or pulled apart) of up to 165 megapascals (24,000 psi). Plastics have some disadvantages. When burned, some plastics produce poisonous fumes. Although certain plastics are specifically designed to withstand temperatures as high as 288° C (550° F), in general plastics are not used when high heat resistance is needed. Because of their molecular stability, plastics do not easily break down into simpler components. As a result, disposal of plastics creates a solid waste problem (see Plastics and the Environment below). IV CHEMISTRY OF PLASTICS Plastics consist of very long molecules each composed of carbon atoms linked into chains. One type of plastic is composed of extremely long molecules that each contain over 200,000 carbon atoms. These long, chainlike molecules give plastics unique properties and distinguish plastics from materials, such as metals, that have short, crystalline molecular structures. Although some plastics are made from plant oils, the majority are made from fossil fuels. Fossil fuels contain hydrocarbons (compounds containing hydrogen and carbon), which provide the building blocks for long polymer molecules. These small building blocks, called monomers, link together to form long carbon chains called polymers. The process of forming these long molecules from hydrocarbons is known as polymerization. The molecules typically form viscous, sticky substances known as resins, which are used to make plastic products. Ethylene, for example, is a gaseous hydrocarbon. When it is subjected to heat, pressure, and certain catalysts (substances used to enable faster chemical reactions), the ethylene molecules join together into long, repeating carbon chains. These joined molecules form a plastic resin known as polyethylene. 
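The figures quoted above—plastic densities of 0.9 to 2.2 g/cm³ against 7.85 g/cm³ for steel, and a tensile strength of 165 MPa quoted as 24,000 psi—can be checked with a little arithmetic. A minimal sketch: the conversion factor (1 MPa ≈ 145.04 psi) is standard, but the helper names are our own.

```python
PSI_PER_MPA = 145.038  # standard conversion: 1 megapascal ~ 145.04 psi

def mpa_to_psi(mpa):
    """Convert a stress in megapascals to pounds per square inch."""
    return mpa * PSI_PER_MPA

# Tensile strength of glass-reinforced nylon, as quoted in the text:
print(round(mpa_to_psi(165)))  # 23931, i.e. roughly 24,000 psi

# At equal volume, weight scales with density, so even the densest
# common plastic (2.2 g/cm3) is far lighter than steel (7.85 g/cm3):
steel_density, heaviest_plastic_density = 7.85, 2.2
print(round(steel_density / heaviest_plastic_density, 1))  # 3.6
```

So a steel part weighs roughly 3.6 times as much as the same part molded from the densest common plastic, which is consistent with the article's claim that plastics are lighter than metals of comparable strength.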
Joining identical monomers to make carbon chains is called addition polymerization, because the process is similar to stringing many identical beads on a string. Plastics made by addition polymerization include polyethylene and polyvinyl chloride, among others. Joining two or more different monomers of varying lengths is known as condensation polymerization, because water or other by-products are eliminated as the polymer forms. Condensation polymers include nylon (polyamide), polyester, and polyurethane. The properties of a plastic are determined by the length of the plastic’s molecules and the specific monomer present. For example, elastomers are plastics composed of long, tightly twisted molecules. These coiled molecules allow the plastic to stretch and recoil like a spring. Rubber bands and flexible silicone caulking are examples of elastomers. The carbon backbone of polymer molecules often bonds with smaller side chains consisting of other elements, including chlorine, fluorine, nitrogen, and silicon. These side chains give plastics some distinguishing characteristics. For example, when chlorine atoms substitute for hydrogen atoms along the carbon chain, the result is polyvinyl chloride, one of the most versatile and widely used plastics in the world. The addition of chlorine makes this plastic harder and more heat resistant. Different plastics have advantages and disadvantages associated with the unique chemistry of each plastic. For example, longer polymer molecules become more entangled (like spaghetti noodles), which gives plastics containing these longer polymers high tensile strength and high impact resistance. However, plastics made from longer molecules are more difficult to mold. V THERMOPLASTICS AND THERMOSETTING PLASTICS All plastics, whether made by addition or condensation polymerization, can be divided into two groups: thermoplastics and thermosetting plastics. These terms refer to the different ways these types of plastics respond to heat. 
Thermoplastics can be repeatedly softened by heating and hardened by cooling. Thermosetting plastics, on the other hand, harden permanently after being heated once. The reason for the difference in response to heat between thermoplastics and thermosetting plastics lies in the chemical structures of the plastics. Thermoplastic molecules, which are linear or slightly branched, do not chemically bond with each other when heated. Instead, thermoplastic chains are held together by weak van der Waals forces (weak attractions between the molecules) that cause the long molecular chains to clump together like piles of entangled spaghetti. Thermoplastics can be heated and cooled, and consequently softened and hardened, repeatedly, like candle wax. For this reason, thermoplastics can be remolded and reused almost indefinitely. Thermosetting plastics consist of chain molecules that chemically bond, or cross-link, with each other when heated. When thermosetting plastics cross-link, the molecules create a permanent, three-dimensional network that can be considered one giant molecule. Once cured, thermosetting plastics cannot be remelted, in the same way that cured concrete cannot be reset. Consequently, thermosetting plastics are often used to make heat-resistant products, because these plastics can be heated to temperatures of 260° C (500° F) without melting. The different molecular structures of thermoplastics and thermosetting plastics allow manufacturers to customize the properties of commercial plastics for specific applications. Because thermoplastic materials consist of individual molecules, properties of thermoplastics are largely influenced by molecular weight. For instance, increasing the molecular weight of a thermoplastic material increases its tensile strength, impact strength, and fatigue strength (ability of a material to withstand constant stress).
Conversely, because thermosetting plastics consist of a single molecular network, molecular weight does not significantly influence the properties of these plastics. Instead, many properties of thermosetting plastics are determined by adding different types and amounts of fillers and reinforcements, such as glass fibers (see Materials Science and Technology). Thermoplastics may be grouped according to the arrangement of their molecules. Highly aligned molecules arrange themselves more compactly, resulting in a stronger plastic. For example, molecules in nylon are highly aligned, making this thermoplastic extremely strong. The degree of alignment of the molecules also determines how transparent a plastic is. Thermoplastics with highly aligned molecules scatter light, which makes these plastics appear opaque. Thermoplastics with semialigned molecules scatter some light, which makes most of these plastics appear translucent. Thermoplastics with random (amorphous) molecular arrangement do not scatter light and are clear. Amorphous thermoplastics are used to make optical lenses, windshields, and other clear products. VI MANUFACTURING PLASTIC PRODUCTS The process of forming plastic resins into plastic products is the basis of the plastics industry. Many different processes are used to make plastic products, and in each process, the plastic resin must be softened or sufficiently liquefied to be shaped. A Forming Thermoplastics Although some processes are used to manufacture both thermoplastics and thermosetting plastics, certain processes are specific to forming thermoplastics. (For more information, see the Casting and Expansion Processes section of this article.) A1 Injection Molding Injection molding uses a piston or screw to force plastic resin through a heated tube into a mold, where the plastic cools and hardens to the shape of the mold. The mold is then opened and the plastic cast removed. 
Thermoplastic items made by injection molding include toys, combs, car grills, and various containers. A2 Extrusion Extrusion is a continuous process, unlike most other plastic production processes, which must start over after each new part is removed from the mold. In the extrusion process, plastic pellets are first heated in a long barrel. In a manner similar to that of a pasta-making or sausage-stuffing machine, a rotating screw then forces the heated plastic through a die (device used for forming material) opening of the desired shape. As the continuous plastic form emerges from the die opening, it is cooled and solidified, and then cut to the desired length. Plastic products made by extrusion include garden hoses, drinking straws, pipes, and ropes. Melted thermoplastic forced through extremely fine die holes can be cooled and woven into fabrics for clothes, curtains, and carpets. A3 Blow Molding Blow molding is used to form bottles and other containers from soft, hollow thermoplastic tubes. First a mold is fitted around the outside of the softened thermoplastic tube, and then the tube is heated. Next, air is blown into the softened tube (similar to inflating a balloon), which forces the outside of the softened tube to conform to the inside walls of the mold. Once the plastic cools, the mold is opened and the newly molded container is removed. Blow molding is used to make many plastic containers, including soft-drink bottles, jars, detergent bottles, and storage drums. A4 Blow Film Extrusion Blow film extrusion is the process used to make plastic garbage bags and continuous sheets. This process works by extruding a hollow, sealed-end thermoplastic tube through a die opening. As the flattened plastic tube emerges from the die opening, air is blown inside the hollow tube to stretch and thin the tube (like a balloon being inflated) to the desired size and wall thickness.
The plastic is then air-cooled and pulled away on take-up rollers to a heat-sealing operation. The heat-sealer cuts and seals one end of the thinned, flattened thermoplastic tube, creating various bag lengths for products such as plastic grocery and garbage bags. For sheeting (flat film), the thinned plastic tube is slit along one side and opened to form a continuous sheet. A5 Calendering The calendering process forms continuous plastic sheets that are used to make flooring, wall siding, tape, and other products. These plastic sheets are made by forcing hot thermoplastic resin between heated rollers called calenders. A series of secondary calenders further thins the plastic sheets. Paper, cloth, and other plastics may be pressed between layers of calendered plastic to make items such as credit cards, playing cards, and wallpaper. A6 Thermoforming Thermoforming is a term used to describe several techniques for making products from plastic sheets. Products made from thermoformed sheets include trays, signs, briefcase shells, refrigerator door liners, and packages. In a vacuum-forming process, hot thermoplastic sheets are draped over a mold. Air is removed from between the mold and the hot plastic, which creates a vacuum that draws the plastic into the cavities of the mold. When the plastic cools, the molded product is removed. In the pressure-forming process, compressed air is used to drive a hot plastic sheet into the cavities and depressions of a concave, or female, mold. Vent holes in the bottom of the mold allow trapped air to escape. B Forming Thermosetting Plastics Thermosetting plastics are manufactured by several methods that use heat or pressure to induce polymer molecules to bond, or cross-link, into typically hard and durable products. B1 Compression Molding Compression molding forms plastics through a technique that is similar to the way a waffle iron forms waffles from batter. First, thermosetting resin is placed into a steel mold. 
The application of heat and pressure, which accelerates cross-linking of the resin, softens the material and squeezes it into all parts of the mold to form the desired shape. Once the material has cooled and hardened, the newly formed object is removed from the mold. This process creates hard, heat-resistant plastic products, including dinnerware, telephones, television set frames, and electrical parts. B2 Laminating The laminating process binds layers of materials, such as textiles and paper, together in a plastic matrix. This process is similar to the process of joining sheets of wood to make plywood. Resin-impregnated layers of textiles or paper are stacked on hot plates, then squeezed and fused together by heat and pressure, which causes the polymer molecules to cross-link. The best-known laminate trade name is Formica, a product consisting of resin-impregnated layers of paper with decorative patterns such as wood grain, marble, and colored designs. Formica is often used as a surface finish for furniture and for kitchen and bathroom countertops. Thermosetting resins known as melamine and phenolic resins form the plastic matrix for Formica and other laminates. Electric circuit boards are also laminated from resin-impregnated paper, fabric, and glass fibers. B3 Reaction Injection Molding (RIM) Strong, sizable, and durable plastic products such as automobile body panels, skis, and business machine housings are formed by reaction injection molding. In this process, liquid thermosetting resin is combined with a curing agent (a chemical that causes the polymer molecules to cross-link) and injected into a mold. Most products made by reaction injection molding are made from polyurethane. C Forming Both Types of Plastics Certain plastic fabrication processes can be used to form either thermoplastics or thermosetting plastics. C1 Casting The casting process is similar to that of molding plaster or cement.
Fluid thermosetting or thermoplastic resin is poured into a mold, and additives cause the resin to solidify. Photographic film is made by pouring a fluid solution of resin onto a highly polished metal belt. A thin plastic film remains as the solution evaporates. The casting process is also used to make furniture parts, tabletops, sinks, and acrylic window sheets. C2 Expansion Processes Thermosetting and thermoplastic resins can be expanded by injecting gases (often nitrogen or methyl chloride) into the plastic melt. As the resin cools, tiny bubbles of gas are trapped inside, forming a cellular plastic structure. This process is used to make foam products such as cushions, pillows, sponges, egg cartons, and polystyrene cups. Foam plastics can be classified according to their bubble, or cell, structure. Sponges and carpet pads are examples of open-celled foam plastics, in which the bubbles are interconnected. Flotation devices are examples of closed-celled foam plastics, in which the bubbles are sealed like tiny balloons. Foam plastics can also be classified by density (ratio of plastic to cells), by the type of plastic resin used, and by flexibility (rigid or flexible foam). For example, rigid, closed-celled polyurethane plastics make excellent insulation for refrigerators and freezers. VII IMPORTANT TYPES OF PLASTICS A wide variety of both thermoplastics and thermosetting plastics are manufactured. These plastics have a spectrum of properties that are derived from their chemical compositions. As a result, manufactured plastics can be used in applications ranging from contact lenses to jet body components. A Thermoplastics Thermoplastic materials are in high demand because they can be repeatedly softened and remolded. The most commonly manufactured thermoplastics are presented in this section in order of decreasing volume of production. 
A1 Polyethylene Polyethylene (PE), with the chemical formula [CH2CH2]n (where n denotes that the chemical formula inside the brackets repeats itself to form the plastic molecule), is made in low- and high-density forms. PE resins are milky white, translucent substances derived from ethylene (CH2=CH2). Low-density polyethylene (LDPE) has a density ranging from 0.91 to 0.93 g/cm3 (0.60 to 0.61 oz/cu in). The molecules of LDPE have a carbon backbone with side groups of four to six carbon atoms attached randomly along the main backbone. LDPE is the most widely used of all plastics, because it is inexpensive, flexible, extremely tough, and chemical-resistant. LDPE is molded into bottles, garment bags, frozen food packages, and plastic toys. High-density polyethylene (HDPE) has a density that ranges from 0.94 to 0.97 g/cm3 (0.62 to 0.64 oz/cu in). Its molecules have an extremely long carbon backbone with no side groups. As a result, these molecules align into more compact arrangements, accounting for the higher density of HDPE. HDPE is stiffer, stronger, and less translucent than low-density polyethylene. HDPE is formed into grocery bags, car fuel tanks, packaging, and piping. A2 Polyvinyl Chloride Polyvinyl chloride (PVC) is prepared from the organic compound vinyl chloride (CH2=CHCl). PVC is the most widely used of the amorphous plastics. PVC is lightweight, durable, and waterproof. Chlorine atoms bonded to the carbon backbone of its molecules give PVC its hard and flame-resistant properties. In its rigid form, PVC is weather-resistant and is extruded into pipe, house siding, and gutters. Rigid PVC is also blow molded into clear bottles and is used to form other consumer products, including compact discs and computer casings. PVC can be softened with certain chemicals. This softened form of PVC is used to make shrink-wrap, food packaging, rainwear, shoe soles, shampoo containers, floor tile, gloves, upholstery, and other products.
Most softened PVC plastic products are manufactured by extrusion, injection molding, or casting. A3 Polypropylene Polypropylene is polymerized from the organic compound propylene (CH3CH=CH2) and has a methyl group (CH3) branching off of every other carbon along the molecular backbone. Because the most common form of polypropylene has the methyl groups all on one side of the carbon backbone, polypropylene molecules tend to be highly aligned and compact, giving this thermoplastic the properties of durability and chemical resistance. Many polypropylene products, such as rope, fiber, luggage, carpet, and packaging film, are formed by injection molding. A4 Polystyrene Polystyrene, produced from styrene (C6H5CH=CH2), has phenyl groups (six-member carbon rings) attached in random locations along the carbon backbone of the molecule. The random attachment of these rings prevents the molecules from becoming highly aligned. As a result, polystyrene is an amorphous, transparent, and somewhat brittle plastic. Polystyrene is widely used because of its rigidity and superior insulation properties. Polystyrene can undergo all thermoplastic processes to form products such as toys, utensils, display boxes, model aircraft kits, and ballpoint pen barrels. Polystyrene is also expanded into foam plastics such as packaging materials, egg cartons, flotation devices, and styrofoam. (For more information, see the Expansion Processes section of this article.) A5 Polyethylene Terephthalate Polyethylene terephthalate (PET) is formed from the reaction of terephthalic acid (HOOCC6H4COOH) and ethylene glycol (HOCH2CH2OH), which produces the PET monomer [CH2CH2OOCC6H4COO]n. PET molecules are highly aligned, creating a strong and abrasion-resistant material that is used to produce films and polyester fibers. PET is injection molded into windshield wiper arms, sunroof frames, gears, pulleys, and food trays. This plastic is used to make the trademarked textiles Dacron, Fibre V, Fortrel, and Kodel. Tough, transparent PET films (marketed under the brand name Mylar) are magnetically coated to make both audio and video recording tape. A6 Acrylonitrile Butadiene Styrene Acrylonitrile butadiene styrene (ABS) is made by copolymerizing (combining two or more monomers) the monomers acrylonitrile (CH2=CHCN) and styrene (C6H5CH=CH2). Acrylonitrile and styrene are dissolved in polybutadiene rubber [CH2CH=CHCH2]n, which allows these monomers to form chains by attaching to the rubber molecules. The advantage of ABS is that this material combines the strength and rigidity of the acrylonitrile and styrene polymers with the toughness of the polybutadiene rubber. Although the cost of producing ABS is roughly twice the cost of producing polystyrene, ABS is considered superior for its hardness, gloss, toughness, and electrical insulation properties. ABS plastic is injection molded to make telephones, helmets, washing machine agitators, and pipe joints. This plastic is thermoformed to make luggage, golf carts, toys, and car grills. ABS is also extruded to make piping, to which pipe joints are easily solvent-cemented. A7 Polymethyl Methacrylate Polymethyl methacrylate (PMMA), more commonly known by the generic name acrylic, is polymerized from the organic compound methyl methacrylate (C5H8O2). PMMA is a hard material and is extremely clear because of the amorphous arrangement of its molecules. As a result, this thermoplastic is used to make optical lenses, watch crystals, aircraft windshields, skylights, and outdoor signs. These PMMA products are marketed under familiar trade names, including Plexiglas, Lucite, and Acrylite. Because PMMA can be cast to resemble marble, it is also used to make sinks, countertops, and other fixtures.
A8 Polyamide Polyamides (PA), known by the trade name Nylon, consist of highly ordered molecules, which give polyamides high tensile strength. Some polyamides are made by reacting dicarboxylic acids with diamines (carbon molecules with the amine group –NH2 on each end), as in nylon-6,6 and nylon-6,10. (The two numbers in each type of nylon represent the number of carbon atoms in the diamine and the dicarboxylic acid, respectively.) Other types of nylon are synthesized by the condensation of amino acids. Polyamides have mechanical properties such as high abrasion resistance, low coefficients of friction (meaning they are slippery), and tensile strengths comparable to the softer of the aluminum alloys. Therefore, nylons are commonly used for mechanical applications, such as gears, bearings, and bushings (see Engineering: Mechanical Engineering). Nylons are also extruded into millions of tons of synthetic fibers every year. The most commonly used nylon fibers, nylon-6,6 and nylon-6 (single number because this nylon forms by the self-condensation of an amino acid), are made into textiles, ropes, fishing lines, brushes, and other items. B Thermosetting Materials Because thermosetting plastics cure, or cross-link, after being heated, these plastics can be made into durable and heat-resistant materials. The most commonly manufactured thermosetting plastics are presented below in order of decreasing volume of production. B1 Polyurethane Polyurethane is a polymer consisting of the repeating unit [R’OOCNHR]n, where R may represent a different alkyl group than R’. Alkyl groups are chemical groups obtained by removing a hydrogen atom from an alkane, a hydrocarbon containing all carbon-carbon single bonds. Most types of polyurethane resin cross-link and become thermosetting plastics. However, some polyurethane resins have a linear molecular arrangement that does not cross-link, resulting in thermoplastics. Thermosetting polyurethane molecules cross-link into a single giant molecule.
Thermosetting polyurethane is widely used in various forms, including soft and hard foams. Soft, open-celled polyurethane foams are used to make seat cushions, mattresses, and packaging. Hard polyurethane foams are used as insulation in refrigerators, freezers, and homes. Thermoplastic polyurethane molecules have linear, highly crystalline molecular structures that form an abrasion-resistant material. Thermoplastic polyurethanes are molded into shoe soles, car fenders, door panels, and other products. B2 Phenolics Phenolic (phenol-formaldehyde) resins, first commercially available in 1910, were some of the first polymers made. Today phenolics are some of the most widely produced thermosetting plastics. They are produced by reacting phenol (C6H5OH) with formaldehyde (HCHO). Phenolic plastics are hard, strong, inexpensive to produce, and possess excellent electrical resistance. Phenolic resins cure (cross-link) when heat and pressure are applied during the molding process. Phenolic resin-impregnated paper or cloth can be laminated into numerous products, such as electrical circuit boards. Phenolic resins are also compression molded into electrical switches, pan and iron handles, radio and television casings, and toaster knobs and bases. B3 Melamine-Formaldehyde and Urea-Formaldehyde Urea-formaldehyde (UF) and melamine-formaldehyde (MF) resins are composed of molecules that cross-link into clear, hard plastics. Properties of UF and MF resins are similar to the properties of phenolic resins. As their names imply, these resins are formed by condensation reactions between urea (H2NCONH2) or melamine (C3H6N6) and formaldehyde (CH2O). Melamine-formaldehyde resins are easily molded in compression and special injection molding machines. MF plastics are more heat-resistant, scratch-proof, and stain-resistant than urea-formaldehyde plastics are.
MF resins are used to manufacture dishware and electrical components, to laminate furniture veneers, and to bond wood layers into plywood. Urea-formaldehyde resins form products such as appliance knobs, knife handles, and plates. UF resins are used to give drip-dry properties to wash-and-wear clothes as well as to bond wood chips and wood sheets into chipboard and plywood. B4 Unsaturated Polyesters Unsaturated polyesters (UP) belong to the polyester group of plastics. Polyesters are composed of long carbon chains containing [CH2COOC6H4OOCCH2]n. Unsaturated polyesters (an unsaturated compound contains multiple bonds) cross-link when the long molecules are joined (copolymerized) by the aromatic organic compound styrene (see Aromatic Compounds). Unsaturated polyester resins are often premixed with glass fibers for additional strength. Two types of premixed resins are bulk molding compounds (BMC) and sheet molding compounds (SMC). Both types of compounds are doughlike in consistency and may contain short fiber reinforcements and other additives. Sheet molding compounds are preformed into large sheets or rolls that can be molded into products such as shower floors, small boat hulls, and roofing materials. Bulk molding compounds are also preformed to be compression molded into car body panels and other automobile components. B5 Epoxy Epoxy (EP) resins are named for the epoxide groups (cycl-CH2OCH; cycl or cyclic refers to the triangle formed by this group) that terminate the molecules. The oxygen along epoxy’s carbon chain and the epoxide groups at the ends of the carbon chain give epoxy resins some useful properties. Epoxies are tough, extremely weather-resistant, and do not shrink as they cure (dry). Epoxies cross-link when a catalyzing agent (hardener) is added, forming a three-dimensional molecular network. Because of their outstanding bonding strength, epoxy resins are used to make coatings, adhesives, and composite laminates.
Epoxy has important applications in the aerospace industry. All composite aircraft are made of epoxy. Epoxy is used to make the wing skins for the F-18 and F-22 fighters, as well as the horizontal stabilizer for the F-16 fighter and the B-1 bomber. In addition, almost 20 percent of the Harrier jet’s total weight is composed of reinforcements bound with an epoxy matrix (see Airplane). Because of epoxy’s chemical resistance and excellent electrical insulation properties, electrical parts such as relays, coils, and transformers are insulated with epoxy. B6 Reinforced Plastics Reinforced plastics, called composites, are plastics strengthened with fibers, strands, cloth, or other materials. Thermosetting epoxy and polyester resins are commonly used as the polymer matrix (binding material) in reinforced plastics. Due to a combination of strength and affordability, glass fibers, which are woven into the product, are the most common reinforcing material. Organic synthetic fibers such as aramid (an aromatic polyamide with the commercial name Kevlar) offer greater strength and stiffness than glass fibers, but these synthetic fibers are considerably more expensive. The Boeing 777 aircraft makes extensive use of lightweight reinforced plastics. Other products made from reinforced plastics include boat hulls and automobile body panels, as well as recreation equipment, such as tennis rackets, golf clubs, and jet skis. VIII HISTORY OF PLASTICS Humankind has been using natural plastics for thousands of years. For example, the early Egyptians soaked burial wrappings in natural resins to help preserve their dead. People have been using animal horns and turtle shells (which contain natural resins) for centuries to make items such as spoons, combs, and buttons. 
During the mid-19th century, shellac (resinous substance secreted by the lac insect) was gathered in southern Asia and transported to the United States to be molded into buttons, small cases, knobs, phonograph records, and hand-mirror frames. During that time period, gutta-percha (rubberlike sap taken from certain trees in Malaya) was used as the first insulating coating for electrical wires. In order to find more efficient ways to produce plastics and rubbers, scientists began trying to produce these materials in the laboratory. In 1839 American inventor Charles Goodyear vulcanized rubber by accidentally dropping a piece of sulfur-treated rubber onto a hot stove. Goodyear discovered that heating sulfur and rubber together improved the properties of natural rubber so that it would no longer become brittle when cold and soft when hot. In 1862 British chemist Alexander Parkes synthesized a plastic known as pyroxylin, which was used as a coating film on photographic plates. The following year, American inventor John W. Hyatt began working on a substitute for ivory billiard balls. Hyatt added camphor to nitrated cellulose and formed a modified natural plastic called celluloid, which became the basis of the early plastics industry. Celluloid was used to make products such as umbrella handles, dental plates, toys, photographic film, and billiard balls. These early plastics based on natural products shared numerous drawbacks. For example, many of the necessary natural materials were in short supply, and all proved difficult to mold. Finished products were inconsistent from batch to batch, and most products darkened and cracked with age. Furthermore, celluloid proved to be a very flammable material. Due to these shortcomings, scientists attempted to find more reliable plastic source materials. 
In 1909 American chemist Leo Hendrik Baekeland made a breakthrough when he created the first commercially successful thermosetting synthetic resin, which was called Bakelite (known today as phenolic resin). Use of Bakelite quickly grew. It has been used to make products such as telephones and pot handles. The chemistry of joining small molecules into macromolecules became the foundation of an emerging plastics industry. Between 1920 and 1932, the I.G. Farben Company of Germany synthesized polystyrene and polyvinyl chloride, as well as a synthetic rubber called Buna-S. In 1934 Du Pont made a breakthrough when it introduced nylon—a material finer, stronger, and more elastic than silk. By 1936 acrylics were being produced by German, British, and U.S. companies. That same year, the British company Imperial Chemical Industries developed polyethylene. In 1937 polyurethane was invented by the German company Friedrich Bayer & Co. (see Bayer AG), but this plastic was not available to consumers until it was commercialized by U.S. companies in the 1950s. In 1939 the German company I.G. Farbenindustrie filed a patent for polyepoxide (epoxy), which was not sold commercially until a U.S. firm made epoxy resins available to the consumer market almost four years later. After World War II (1939-1945), the pace of new polymer discoveries accelerated. In 1941 a small English company developed polyethylene terephthalate (PET). Although Du Pont and Imperial Chemical Industries produced PET fibers (marketed under the names Dacron and Terylene, respectively) during the postwar era, the use of PET as a material for making bottles, films, and coatings did not become widespread until the 1970s. In the postwar era, research by Bayer and by General Electric resulted in production of plastics such as polycarbonates, which are used to make small appliances, aircraft parts, and safety helmets. 
In 1965 a linear, heat-resistant thermoplastic known as polysulfone was introduced; it is used to make face shields for astronauts and hospital equipment that can be sterilized in an autoclave (a device that uses high-pressure steam for sterilization). Today, scientists can tailor the properties of plastics to numerous design specifications. Modern plastics are used to make products such as artificial joints, contact lenses, space suits, and other specialized materials. As plastics have become more versatile, use of plastics has grown as well. By the year 2005, annual global demand for plastics is projected to exceed 200 million metric tons (441 billion lb). IX PLASTICS AND THE ENVIRONMENT Every year in the United States, consumers throw millions of tons of plastic away: of the estimated 210 million metric tons (232 million short tons) of municipal waste produced annually in the United States, 10.7 percent is plastics. As municipal landfills reach capacity and additional landfill space diminishes across the United States, alternative methods for reducing and disposing of wastes, including plastics, are being explored. Some of these options include reducing consumption of plastics, using biodegradable plastics, and incinerating or recycling plastic waste. A Source Reduction Source reduction is the practice of using less material to manufacture a product. For example, the wall thickness of many plastic and metal containers has been reduced in recent years, and some European countries have proposed to eliminate packaging that cannot be easily recycled. B Biodegradable Plastics Due to their molecular stability, plastics do not easily break down into simpler components. Plastics are therefore not considered biodegradable (see Solid Waste Disposal). However, researchers are working to develop biodegradable plastics that will disintegrate due to bacterial action or exposure to sunlight.
For example, scientists are incorporating starch molecules into some plastic resins during the manufacturing process. When these plastics are discarded, bacteria eat the starch molecules. This causes the polymer molecules to break apart, allowing the plastic to decompose. Researchers are also investigating ways to make plastics more biodegradable from exposure to sunlight. Prolonged exposure to ultraviolet radiation from the sun causes many plastic molecules to become brittle and slowly break apart. Researchers are working to create plastics that will degrade faster in sunlight, but not so fast that the plastic begins to degrade while still in use. C Incineration Some wastes, such as paper, plastics, wood, and other flammable materials, can be burned in incinerators. The resulting ash requires much less space for disposal than the original waste would. Because incineration of plastics can produce hazardous air emissions and other pollutants, this process is strictly regulated. D Recycling Plastics All plastics can be recycled. Thermoplastics can be remelted and made into new products. Thermosetting plastics can be ground, commingled (mixed), and then used as filler in moldable thermoplastic materials. Highly filled and reinforced thermosetting plastics can be pulverized and used in new composite formulations. Chemical recycling is a depolymerization process that uses heat and chemicals to break plastic molecules down into more basic components, which can then be reused. Another process, called pyrolysis, vaporizes and condenses both thermoplastics and thermosetting plastics into hydrocarbon liquids. Collecting and sorting used plastics is an expensive and time-consuming process. While about 27 percent of aluminum products, 45 percent of paper products, and 23 percent of glass products are recycled in the United States, only about 5 percent of plastics are currently recovered and recycled.
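As a quick arithmetic check on the waste figures quoted in this article (a sketch; the inputs are the article's own numbers, and the variable names are illustrative):

```python
# Figures from the text: ~210 million metric tons of U.S. municipal waste
# per year, 10.7 percent of it plastics, and only ~5 percent of plastics
# recovered and recycled.
municipal_waste_t = 210e6   # metric tons per year
plastic_share = 0.107
recycling_rate = 0.05

plastic_waste_t = municipal_waste_t * plastic_share
recycled_t = plastic_waste_t * recycling_rate

print(round(plastic_waste_t / 1e6, 1))  # 22.5 (million metric tons of plastic waste)
print(round(recycled_t / 1e6, 1))       # 1.1 (million metric tons actually recycled)
```

So of roughly 22.5 million metric tons of plastic discarded each year, only on the order of a million tons is recovered.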
Once plastic products are thrown away, they must be collected and then separated by plastic type. Most modern automated plastic sorting systems are not capable of differentiating between many different types of plastics. However, some advances are being made in these sorting systems to separate plastics by color, density, and chemical composition. For example, x-ray sensors can distinguish PET from PVC by sensing the presence of chlorine atoms in the polyvinyl chloride material. If plastic types are not segregated, the recycled plastic cannot achieve high remolding performance, which results in decreased market value of the recycled plastic. Other factors can adversely affect the quality of recycled plastics. These factors include the possible degradation of the plastic during its original life cycle and the possible addition of foreign materials to the scrap recycled plastic during the recycling process. For health reasons, recycled plastics are rarely made into food containers. Instead, most recycled plastics are typically made into items such as carpet fibers, motor oil bottles, trash carts, soap packages, and textile fibers. To promote the conservation and recycling of materials, the U.S. federal government passed the Resource Conservation and Recovery Act (RCRA) in 1976. In 1988 the Plastic Bottle Institute of the Society of the Plastics Industry established a system for identifying plastic containers by plastic type. The purpose of the "chasing arrows" symbol that appears on the bottom of many plastic containers is to promote plastics recycling. The chasing arrows enclose a number (such as a 1 indicating PET, a 2 indicating high density polyethylene (HDPE), and a 3 indicating PVC), which aids in the plastics sorting process. By 1994, 40 states had legislative mandates for litter control and recycling. 
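The resin-code scheme described above lends itself to a small lookup sketch. Codes 1–3 are quoted in the passage; codes 4–7 are the remaining standard identification codes, added here only as an assumption to round out the set:

```python
# Resin identification codes from the "chasing arrows" symbol.
# 1-3 appear in the text; 4-7 complete the standard set (assumed here).
RESIN_CODES = {1: "PET", 2: "HDPE", 3: "PVC",
               4: "LDPE", 5: "PP", 6: "PS", 7: "OTHER"}

def sort_containers(codes):
    """Bin a stream of container codes by resin type, as a sorting line
    must do before recycled plastic can be remolded with good quality."""
    bins = {}
    for code in codes:
        bins.setdefault(RESIN_CODES.get(code, "UNKNOWN"), []).append(code)
    return bins

print(sort_containers([1, 2, 1, 3]))  # {'PET': [1, 1], 'HDPE': [2], 'PVC': [3]}
```

Mixing bins, as the text notes, degrades remolding performance, which is why the per-type grouping matters.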
Today, a growing number of communities have collection centers for recyclable materials, and some larger municipalities have implemented curbside pickup for recyclable materials, including plastics, paper, metal, and glass. Contributed By: Terry L. Richardson Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Q14: Earthquake I INTRODUCTION Earthquake, shaking of the Earth’s surface caused by rapid movement of the Earth’s rocky outer layer. Earthquakes occur when energy stored within the Earth, usually in the form of strain in rocks, suddenly releases. This energy is transmitted to the surface of the Earth by earthquake waves. The study of earthquakes and the waves they create is called seismology (from the Greek seismos, “to shake”). Scientists who study earthquakes are called seismologists. The destruction an earthquake causes depends on its magnitude and duration, or the amount of shaking that occurs. A structure’s design and the materials used in its construction also affect the amount of damage the structure incurs. Earthquakes vary from small, imperceptible shaking to large shocks felt over thousands of kilometers. Earthquakes can deform the ground, make buildings and other structures collapse, and create tsunamis (large sea waves). Lives may be lost in the resulting destruction. Earthquakes, or seismic tremors, occur at a rate of several hundred per day around the world. A worldwide network of seismographs (machines that record movements of the Earth) detects about 1 million small earthquakes per year. Very large earthquakes, such as the 1964 Alaskan earthquake, which caused millions of dollars in damage, occur worldwide once every few years. Moderate earthquakes, such as the 1989 tremor in Loma Prieta, California, and the 1995 tremor in Kōbe, Japan, occur about 20 times a year. Moderate earthquakes also cause millions of dollars in damage and can harm many people. 
In the last 500 years, several million people have been killed by earthquakes around the world, including over 240,000 in the 1976 T’ang-Shan, China, earthquake. Worldwide, earthquakes have also caused severe property and structural damage. Adequate precautions, such as education, emergency planning, and constructing stronger, more flexible, safely designed structures, can limit the loss of life and decrease the damage caused by earthquakes. II ANATOMY OF AN EARTHQUAKE Seismologists examine the parts of an earthquake, such as what happens to the Earth’s surface during an earthquake, how the energy of an earthquake moves from inside the Earth to the surface, how this energy causes damage, and the slip of the fault that causes the earthquake. Faults are cracks in Earth’s crust where rocks on either side of the crack have moved. By studying the different parts and actions of earthquakes, seismologists learn more about their effects and how to predict and prepare for their ground shaking in order to reduce damage. A Focus and Epicenter The point within the Earth along the rupturing geological fault where an earthquake originates is called the focus, or hypocenter. The point on the Earth’s surface directly above the focus is called the epicenter. Earthquake waves begin to radiate out from the focus and subsequently form along the fault rupture. If the focus is near the surface—between 0 and 70 km (0 and 40 mi) deep—shallow-focus earthquakes are produced. If the focus is intermediate or deep, below the crust—between 70 and 700 km (40 and 430 mi) deep—a deep-focus earthquake is produced. Shallow-focus earthquakes tend to be larger, and therefore more damaging, earthquakes. This is because they are closer to the surface, where the rocks are stronger and build up more strain.
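The focal-depth ranges above can be captured in a tiny classifier; this is a sketch, and the cutoff values are simply the ones quoted in the text:

```python
def classify_focus(depth_km):
    """Classify an earthquake by focal depth, using the text's ranges:
    0-70 km is shallow-focus, 70-700 km is deep-focus."""
    if not 0 <= depth_km <= 700:
        raise ValueError("earthquake foci are observed only down to ~700 km")
    return "shallow-focus" if depth_km <= 70 else "deep-focus"

print(classify_focus(10))   # shallow-focus
print(classify_focus(300))  # deep-focus
```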
Seismologists know from observations that most earthquakes originate as shallow-focus earthquakes and most of them occur near plate boundaries—areas where the Earth’s crustal plates move against each other (see Plate Tectonics). Other earthquakes, including deep-focus earthquakes, can originate in subduction zones, where one tectonic plate subducts, or moves under, another plate. See also Geology; Earth. B Faults Stress in the Earth’s crust creates faults, resulting in earthquakes. The properties of an earthquake depend strongly on the type of fault slip, or movement along the fault, that causes the earthquake. Geologists categorize faults according to the direction of the fault slip. The surface between the two sides of a fault lies in a plane, and the direction of the plane is usually not vertical; rather, it dips at an angle into the Earth. When the rock hanging over the dipping fault plane slips downward into the ground, the fault is called a normal fault. When the hanging wall slips upward in relation to the footwall, the fault is called a reverse fault. Both normal and reverse faults produce vertical displacements, or the upward movement of one side of the fault above the other side, that appear at the surface as fault scarps. Strike-slip faults are another type of fault; they produce horizontal displacements, or the side-by-side sliding movement of the fault, such as is seen along the San Andreas fault in California. Strike-slip faults are usually found along boundaries between two plates that are sliding past each other. C Waves The sudden movement of rocks along a fault causes vibrations that transmit energy through the Earth in the form of waves. Waves that travel in the rocks below the surface of the Earth are called body waves, and there are two types of body waves: primary, or P, waves, and secondary, or S, waves. The S waves, also known as shearing waves, move the ground back and forth.
Earthquakes also contain surface waves that travel out from the epicenter along the surface of the Earth. Two types of these surface waves occur: Rayleigh waves, named after British physicist Lord Rayleigh, and Love waves, named after British geophysicist A. E. H. Love. Surface waves also cause damage to structures, as they shake the ground underneath the foundations of buildings and other structures. Body waves, or P and S waves, radiate out from the rupturing fault starting at the focus of the earthquake. P waves are compression waves because the rocky material in their path moves back and forth in the same direction as the wave travels, alternately compressing and expanding the rock. P waves are the fastest seismic waves; they travel in strong rock at about 6 to 7 km (about 4 mi) per second. P waves are followed by S waves, which shear, or twist, rather than compress the rock they travel through. S waves travel at about 3.5 km (about 2 mi) per second. S waves cause rocky material to move either side to side or up and down perpendicular to the direction the waves are traveling, thus shearing the rocks. Both P and S waves help seismologists to locate the focus and epicenter of an earthquake. As P and S waves move through the interior of the Earth, they are reflected and refracted, or bent, just as light waves are reflected and bent by glass. Seismologists examine this bending to determine where the earthquake originated. On the surface of the Earth, Rayleigh waves cause rock particles to move forward, up, backward, and down in a path that contains the direction of the wave travel. This circular movement is somewhat like a piece of seaweed caught in an ocean wave, rolling in a circular path onto a beach. The second type of surface wave, the Love wave, causes rock to move horizontally, or side to side at right angles to the direction of the traveling wave, with no vertical displacements.
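The P- and S-wave speeds above give seismologists a classic trick: because S waves lag behind P waves, the size of the lag reveals the distance to the source, and distances from at least three stations can be intersected to locate the epicenter. A hedged sketch, using illustrative velocities of 6.5 and 3.5 km/s and simple flat-earth geometry:

```python
import math

def distance_from_sp_lag(lag_s, vp=6.5, vs=3.5):
    """Distance (km) to the source from the S-minus-P arrival lag.
    The velocities are illustrative averages drawn from the text
    (P ~6-7 km/s, S ~3.5 km/s); real crustal values vary."""
    return lag_s / (1.0 / vs - 1.0 / vp)

def locate_epicenter(stations, distances):
    """Flat-earth trilateration sketch: given three stations (x, y in km)
    and a distance from each, subtracting the circle equations pairwise
    yields a 2x2 linear system for the epicenter."""
    (x1, y1), (x2, y2), (x3, y3) = stations
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# A 10 s S-P lag puts the source about 76 km away; three such distances
# pin down the epicenter.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
event = (30.0, 40.0)
dists = [math.dist(event, s) for s in stations]
print(round(distance_from_sp_lag(10.0)))   # 76
print(locate_epicenter(stations, dists))   # ~(30.0, 40.0)
```

Real location uses travel-time charts and many stations, as described later in the article; this sketch only shows the geometric core of the idea.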
Rayleigh and Love waves always travel slower than P waves and usually travel slower than S waves. III CAUSES Most earthquakes are caused by the sudden slip along geologic faults. The faults slip because of movement of the Earth’s tectonic plates. This concept is called the elastic rebound theory. The rocky tectonic plates move very slowly, floating on top of a weaker rocky layer. As the plates collide with each other or slide past each other, pressure builds up within the rocky crust. Earthquakes occur when pressure within the crust increases slowly over hundreds of years and finally exceeds the strength of the rocks. Earthquakes also occur when human activities, such as the filling of reservoirs, increase stress in the Earth’s crust. A Elastic Rebound Theory In 1911 American seismologist Harry Fielding Reid studied the effects of the April 1906 California earthquake. He proposed the elastic rebound theory to explain the generation of certain earthquakes that scientists now know occur in tectonic areas, usually near plate boundaries. This theory states that during an earthquake, the rocks under strain suddenly break, creating a fracture along a fault. When a fault slips, movement in the crustal rock causes vibrations. The slip also redistributes the local strain into the surrounding rock. The change in strain leads to aftershocks (smaller earthquakes that occur after the initial earthquake), which are produced by further slips of the main fault or adjacent faults in the strained region. The slip begins at the focus and travels along the plane of the fault, radiating waves out along the rupture surface. On each side of the fault, the rock shifts in opposite directions. The fault rupture travels in irregular steps along the fault; these sudden stops and starts of the moving rupture give rise to the vibrations that propagate as seismic waves.
After the earthquake, strain begins to build again until it is greater than the forces holding the rocks together; then the fault snaps again and causes another earthquake. B Human Activities Fault rupture is not the only cause of earthquakes; human activities can also be the direct or indirect cause of significant earthquakes. Injecting fluid into deep wells for waste disposal, filling reservoirs with water, and firing underground nuclear test blasts can, in limited circumstances, lead to earthquakes. These activities increase the strain within the rock near the location of the activity, so that rock slips and slides along pre-existing faults more easily. While earthquakes caused by human activities may be harmful, they can also provide useful information. Prior to the Nuclear Test Ban Treaty, scientists were able to analyze the travel and arrival times of P waves from known earthquakes caused by underground nuclear test blasts. Scientists used this information to study earthquake waves and determine the interior structure of the Earth. Scientists have determined that as the water level in a reservoir increases, water pressure in pores inside the rocks along local faults also increases. The increased pressure may cause the rocks to slip, generating earthquakes. Beginning in 1935, the first detailed evidence of reservoir-induced earthquakes came from the filling of Lake Mead behind Hoover Dam on the Nevada-Arizona state border. Earthquakes were rare in the area prior to construction of the dam, but seismographs registered at least 600 shallow-focus earthquakes between 1936 and 1946. Most reservoirs, however, do not cause earthquakes. IV DISTRIBUTION Seismologists have been monitoring the frequency and locations of earthquakes for most of the 20th century. Seismologists generally classify naturally occurring earthquakes into one of two categories: interplate and intraplate. Interplate earthquakes are the most common; they occur primarily along plate boundaries.
Intraplate earthquakes occur where the crust is fracturing within a plate. Both interplate and intraplate earthquakes may be caused by tectonic or volcanic forces. A Tectonic Earthquakes Tectonic earthquakes are caused by the sudden release of energy stored within the rocks along a fault. The released energy is produced by the strain on the rocks due to movement within the Earth, called tectonic deformation. The effect is like the sudden breaking and snapping back of a stretched elastic band. B Volcanic Earthquakes Volcanic earthquakes occur near active volcanoes but have the same fault slip mechanism as tectonic earthquakes. Volcanic earthquakes are caused by the upward movement of magma under the volcano, which strains the rock locally and leads to an earthquake. As the fluid magma rises to the surface of the volcano, it moves and fractures rock masses and causes continuous tremors that can last up to several hours or days. Volcanic earthquakes occur in areas that are associated with volcanic eruptions, such as in the Cascade Mountain Range of the Pacific Northwest, Japan, Iceland, and at isolated hot spots such as Hawaii. V LOCATIONS Seismologists use global networks of seismographic stations to accurately map the focuses of earthquakes around the world. After studying the worldwide distribution of earthquakes, the pattern of earthquake types, and the movement of the Earth’s rocky crust, scientists proposed that plate tectonics, or the shifting of the plates as they move over another weaker rocky layer, was the main underlying cause of earthquakes. The theory of plate tectonics arose from several previous geologic theories and discoveries. Scientists now use the plate tectonics theory to describe the movement of the Earth’s plates and how this movement causes earthquakes. 
They also use the knowledge of plate tectonics to explain the locations of earthquakes, mountain formation, and deep ocean trenches, and to predict which areas will be damaged the most by earthquakes. It is clear that major earthquakes occur most frequently in areas with features that are found at plate boundaries: high mountain ranges and deep ocean trenches. Earthquakes within plates, or intraplate tremors, are rare compared with the thousands of earthquakes that occur at plate boundaries each year, but they can be very large and damaging. Earthquakes that occur in the area surrounding the Pacific Ocean, at the edges of the Pacific plate, are responsible for an average of 80 percent of the energy released in earthquakes worldwide. Japan is shaken by more than 1,000 tremors greater than 3.5 in magnitude each year. The western coasts of North and South America are also very active earthquake zones, with several thousand small to moderate earthquakes each year. Intraplate earthquakes are less frequent than plate boundary earthquakes, but they are still caused by the internal fracturing of rock masses. The New Madrid, Missouri, earthquakes of 1811 and 1812 were extreme examples of intraplate seismic events. Scientists estimate that the three main earthquakes of this series were about magnitude 8.0 and that there were at least 1,500 aftershocks. VI EFFECTS Ground shaking leads to landslides and other soil movement. These are the main damage-causing events that occur during an earthquake. Primary effects that can accompany an earthquake include property damage, loss of lives, fire, and tsunami waves. Secondary effects, such as economic loss, disease, and lack of food and clean water, also occur after a large earthquake. A Ground Shaking and Landslides Earthquake waves make the ground move, shaking buildings and causing poorly designed or weak structures to partially or totally collapse.
The ground shaking weakens soils and foundation materials under structures and causes dramatic changes in fine-grained soils. During an earthquake, water-saturated sandy soil becomes like liquid mud, an effect called liquefaction. Liquefaction causes damage as the foundation soil beneath structures and buildings weakens. Shaking may also dislodge large earth and rock masses, producing dangerous landslides, mudslides, and rock avalanches that may lead to loss of lives or further property damage. B Fire Another post-earthquake threat is fire, such as the fires that broke out in San Francisco after the 1906 earthquake and after the devastating 1923 Tokyo earthquake. In the 1923 earthquake, about 130,000 lives were lost in Tokyo, Yokohama, and other cities, many in firestorms fanned by high winds. The amount of damage caused by post-earthquake fire depends on the types of building materials used, whether water lines are intact, and whether natural gas mains have been broken. Ruptured gas mains may lead to numerous fires, and fire fighting cannot be effective if the water mains are not intact to transport water to the fires. Fires can be significantly reduced with pre-earthquake planning, fire-resistant building materials, enforced fire codes, and public fire drills. C Tsunami Waves and Flooding Along the coasts, sea waves called tsunamis that accompany some large earthquakes centered under the ocean can cause more death and damage than ground shaking. Tsunamis are usually made up of several oceanic waves that travel out from the slipped fault and arrive one after the other on shore. They can strike without warning, often in places very distant from the epicenter of the earthquake. Tsunami waves are sometimes inaccurately referred to as tidal waves, but tidal forces do not cause them. Rather, tsunamis occur when a major fault under the ocean floor suddenly slips.
The displaced rock pushes water above it like a giant paddle, producing powerful water waves at the ocean surface. The ocean waves spread out from the vicinity of the earthquake source and move across the ocean until they reach the coastline, where their height increases as they reach the continental shelf, the part of the Earth’s crust that slopes, or rises, from the ocean floor up to the land. Tsunamis wash ashore with often disastrous effects such as severe flooding, loss of lives due to drowning, and damage to property. Earthquakes can also cause water in lakes and reservoirs to oscillate, or slosh back and forth. The water oscillations are called seiches (pronounced saysh). Seiches can cause retaining walls and dams to collapse and lead to flooding and damage downstream. D Disease Catastrophic earthquakes can create a risk of widespread disease outbreaks, especially in underdeveloped countries. Damage to water supply lines, sewage lines, and hospital facilities as well as lack of housing may lead to conditions that contribute to the spread of contagious diseases, such as influenza (the flu) and other viral infections. In some instances, lack of food supplies, clean water, and heating can create serious health problems as well. VII REDUCING DAMAGE Earthquakes cannot be prevented, but the damage they cause can be greatly reduced with communication strategies, proper structural design, emergency preparedness planning, education, and safer building standards. In response to the tragic loss of life and great cost of rebuilding after past earthquakes, many countries have established earthquake safety and regulatory agencies. These agencies require codes for engineers to use in order to regulate development and construction. Buildings built according to these codes survive earthquakes better and ensure that earthquake risk is reduced. Tsunami early warning systems can prevent some damage because tsunami waves travel at a very slow speed. 
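The claim that tsunamis travel far more slowly than seismic waves, which is what makes early warning possible, can be made concrete with the standard shallow-water wave-speed approximation. The formula and the average ocean depth used here are not given in the text and are supplied as assumptions:

```python
import math

def tsunami_travel_hours(distance_km, ocean_depth_m=4000.0):
    """Rough open-ocean tsunami travel time in hours.
    Uses the shallow-water wave speed sqrt(g * depth), a standard
    oceanographic approximation; the 4,000 m depth is an assumed average."""
    speed_ms = math.sqrt(9.81 * ocean_depth_m)  # ~198 m/s at 4,000 m depth
    return distance_km * 1000.0 / speed_ms / 3600.0

# An earthquake 3,000 km offshore leaves roughly four hours of warning,
# since the seismic waves themselves reach observatories within minutes.
print(round(tsunami_travel_hours(3000.0), 1))  # 4.2
```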
Seismologists immediately send out a warning when evidence of a large undersea earthquake appears on seismographs. Tsunami waves travel slower than seismic P and S waves—in the open ocean, they move about ten times slower than the speed of seismic waves in the rocks below. This gives seismologists time to issue tsunami alerts so that people at risk can evacuate the coastal area as a preventative measure to reduce related injuries or deaths. Scientists radio or telephone the information to the Tsunami Warning Center in Honolulu and other stations. Engineers minimize earthquake damage to buildings by using flexible, reinforced materials that can withstand shaking in buildings. Since the 1960s, scientists and engineers have greatly improved earthquake-resistant designs for buildings that are compatible with modern architecture and building materials. They use computer models to predict the response of the building to ground shaking patterns and compare these patterns to actual seismic events, such as in the 1994 Northridge, California, earthquake and the 1995 Kōbe, Japan, earthquake. They also analyze computer models of the motions of buildings in the most hazardous earthquake zones to predict possible damage and to suggest what reinforcement is needed. See also Engineering: Civil Engineering. A Structural Design Geologists and engineers use risk assessment maps, such as geologic hazard and seismic hazard zoning maps, to understand where faults are located and how to build near them safely. Engineers use geologic hazard maps to predict the average ground motions in a particular area and apply these predicted motions during engineering design phases of major construction projects. Engineers also use risk assessment maps to avoid building on major faults or to make sure that proper earthquake bracing is added to buildings constructed in zones that are prone to strong tremors. 
They can also use risk assessment maps to aid in the retrofit, or reinforcement, of older structures. In urban areas of the world, the seismic risk is greater in nonreinforced buildings made of brick, stone, or concrete blocks because they cannot resist the horizontal forces produced by large seismic waves. Fortunately, single-family timber-frame homes built under modern construction codes resist strong earthquake shaking very well. Such houses have laterally braced frames bolted to their foundations to prevent separation. Although they may suffer some damage, they are unlikely to collapse because the strength of the strongly jointed timber-frame can easily support the light loads of the roof and the upper stories even in the event of strong vertical and horizontal ground motions. B Emergency Preparedness Plans Earthquake education and preparedness plans can help significantly reduce death and injury caused by earthquakes. People can take several preventative measures within their homes and at the office to reduce risk. Supports and bracing for shelves reduce the likelihood of items falling and potentially causing harm. Maintaining an earthquake survival kit in the home and at the office is also an important part of being prepared. In the home, earthquake preparedness includes maintaining an earthquake kit and making sure that the house is structurally stable. The local chapter of the American Red Cross is a good source of information for how to assemble an earthquake kit. During an earthquake, people indoors should protect themselves from falling objects and flying glass by taking refuge under a heavy table. After an earthquake, people should move outside of buildings, assemble in open spaces, and prepare themselves for aftershocks. They should also listen for emergency bulletins on the radio, stay out of severely damaged buildings, and avoid coastal areas in the event of a tsunami. 
In many countries, government emergency agencies have developed extensive earthquake response plans. In some earthquake hazardous regions, such as California, Japan, and Mexico City, modern strong motion seismographs in urban areas are now linked to a central office. Within a few minutes of an earthquake, the magnitude can be determined, the epicenter mapped, and intensity of shaking information can be distributed via radio to aid in response efforts. VIII STUDYING EARTHQUAKES Seismologists measure earthquakes to learn more about them and to use them for geological discovery. They measure the pattern of an earthquake with a machine called a seismograph. Using multiple seismographs around the world, they can accurately locate the epicenter of the earthquake, as well as determine its magnitude, or size, and fault slip properties. A Measuring Earthquakes An analog seismograph consists of a base that is anchored into the ground so that it moves with the ground during an earthquake, and a spring or wire that suspends a weight, which remains stationary during an earthquake. In older models, the base includes a rotating roll of paper, and the stationary weight is attached to a stylus, or writing utensil, that rests on the roll of paper. During the passage of a seismic wave, the stationary weight and stylus record the motion of the jostling base and attached roll of paper. The stylus records the information of the shaking seismograph onto the paper as a seismogram. Scientists also use digital seismographs, computerized seismic monitoring systems that record seismic events. Digital seismographs use rewriteable, or multiple-use, disks to record data. They usually incorporate a clock to accurately record seismic arrival times, a printer to print out digital seismograms of the information recorded, and a power supply. 
Some digital seismographs are portable; seismologists can transport these devices with them to study aftershocks of a catastrophic earthquake when the networks upon which seismic monitoring stations depend have been damaged. There are more than 1,000 seismograph stations in the world. One way that seismologists measure the size of an earthquake is by measuring the earthquake’s seismic magnitude, or the amplitude of ground shaking that occurs. Seismologists compare the measurements taken at various stations to identify the earthquake’s epicenter and determine the magnitude of the earthquake. This information is important in order to determine whether the earthquake occurred on land or in the ocean. It also helps people prepare for resulting damage or hazards such as tsunamis. When readings from a number of observatories around the world are available, the integrated system allows for rapid location of the epicenter. At least three stations are required in order to triangulate, or calculate, the epicenter. Seismologists find the epicenter by comparing the arrival times of seismic waves at the stations, thus determining the distance the waves have traveled. Seismologists then apply travel-time charts to determine the epicenter. With the present number of worldwide seismographic stations, many now providing digital signals by satellite, distant earthquakes can be located within about 10 km (6 mi) of the epicenter and about 10 to 20 km (6 to 12 mi) in focal depth. Special regional networks of seismographs can locate the local epicenters within a few kilometers. All magnitude scales give relative numbers that have no physical units. The first widely used seismic magnitude scale was developed by the American seismologist Charles Richter in 1935. The Richter scale measures the amplitude, or height, of seismic surface waves. The scale is logarithmic, so that each successive unit of magnitude measure represents a tenfold increase in amplitude of the seismogram patterns. 
This is because ground displacement of earthquake waves can range from less than a millimeter to many meters. Richter adjusted for this huge range in measurements by taking the logarithm of the recorded wave heights. So, a magnitude 5 Richter measurement is ten times greater than a magnitude 4; while it is 10 x 10, or 100 times greater than a magnitude 3 measurement. Today, seismologists prefer to use a different kind of magnitude scale, called the moment magnitude scale, to measure earthquakes. Seismologists calculate moment magnitude by measuring the seismic moment of an earthquake, or the earthquake’s strength based on a calculation of the area and the amount of displacement in the slip. The moment magnitude is obtained by multiplying these two measurements. It is more reliable for earthquakes that measure above magnitude 7 on other scales that refer only to part of the seismic waves, whereas the moment magnitude scale measures the total size. The moment magnitude of the 1906 San Francisco earthquake was 7.6; the Alaskan earthquake of 1964, about 9.0; and the 1995 Kōbe, Japan, earthquake was a 7.0 moment magnitude; in comparison, the Richter magnitudes were 8.3, 9.2, and 6.8, respectively for these tremors. Earthquake size can be measured by seismic intensity as well, a measure of the effects of an earthquake. Before the advent of seismographs, people could only judge the size of an earthquake by its effects on humans or on geological or human-made structures. Such observations are the basis of earthquake intensity scales first set up in 1873 by Italian seismologist M. S. Rossi and Swiss scientist F. A. Forel. These scales were later superseded by the Mercalli scale, created in 1902 by Italian seismologist Giuseppe Mercalli. In 1931 American seismologists H. O. Wood and Frank Neumann adapted the standards set up by Giuseppe Mercalli to California conditions and created the Modified Mercalli scale. 
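The tenfold-per-unit amplitude scaling of the Richter scale, and the idea of a magnitude computed from the seismic moment, can both be sketched briefly. The Hanks–Kanamori constants below are the standard ones, but the text only describes the concept, so they are supplied here as an assumption:

```python
import math

def richter_amplitude_ratio(m_big, m_small):
    """Each whole Richter unit is a tenfold increase in seismogram amplitude."""
    return 10.0 ** (m_big - m_small)

def moment_magnitude(moment_newton_meters):
    """Moment magnitude from the seismic moment M0 (in newton-meters),
    via the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1).
    The exact constants are an assumption beyond the text."""
    return (2.0 / 3.0) * (math.log10(moment_newton_meters) - 9.1)

print(richter_amplitude_ratio(5, 3))     # 100.0 (two units = 10 x 10)
print(round(moment_magnitude(1e20), 1))  # 7.3
```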
Many seismologists around the world still use the Modified Mercalli scale to measure the size of an earthquake based on its effects. The Modified Mercalli scale rates the ground shaking by a general description of human reactions to the shaking and of structural damage that occurs during a tremor. This information is gathered from local reports, damage to specific structures, landslides, and people’s descriptions of the damage. B Predicting Earthquakes Seismologists try to predict how likely it is that an earthquake will occur, with a specified time, place, and size. Earthquake prediction also includes calculating how strong ground motion will affect a certain area if an earthquake does occur. Scientists can use the growing catalogue of recorded earthquakes to estimate when and where strong seismic motions may occur. They map past earthquakes to help determine expected rates of repetition. Seismologists can also measure movement along major faults using global positioning satellites (GPS), which track the relative motion of the rocky crust, amounting to a few centimeters each year along faults. This information may help predict earthquakes. Even with precise instrumental measurement of past earthquakes, however, conclusions about future tremors always involve uncertainty. This means that any useful earthquake prediction must estimate the likelihood of the earthquake occurring in a particular area in a specific time interval compared with its occurrence as a chance event. The elastic rebound theory gives a generalized way of predicting earthquakes because it states that a large earthquake cannot occur until the strain along a fault exceeds the strength holding the rock masses together. Seismologists can calculate an estimated time when the strain along the fault would be great enough to cause an earthquake.
As an example, after the 1906 San Francisco earthquake, measurements showed that in the 50 years prior to 1906 the San Andreas fault had accumulated about 3.2 meters (10 feet) of displacement, or movement, at points across the fault. The maximum fault slip in 1906 was 6.5 meters (21 feet), so it was suggested that about 50 years x 6.5 m / 3.2 m (21 ft / 10 ft), or roughly 100 years, would elapse before sufficient energy would again accumulate to produce a comparable earthquake. Scientists have measured other changes along active faults to try to predict future activity. These measurements have included changes in the ability of rocks to conduct electricity, changes in ground water levels, and changes in the speed at which seismic waves pass through the region of interest. None of these methods, however, has yet been successful in predicting earthquakes. Seismologists have also developed field methods to date the years in which past earthquakes occurred. In addition to information from recorded earthquakes, scientists look into geologic history for information about earthquakes that occurred before people had instruments to measure them. This research field is called paleoseismology (paleo is Greek for “ancient”). Using such evidence, seismologists can determine when ancient earthquakes occurred. C The Earth’s Interior Seismologists also study earthquakes to learn more about the structure of the Earth’s interior. Earthquakes provide a rare opportunity for scientists to observe how the Earth’s interior responds when an earthquake wave passes through it. Measuring depths and geologic structures within the Earth using earthquake waves is more difficult for scientists than measuring distances on the Earth’s surface. However, seismologists have used earthquake waves to determine that the interior of the Earth is made up of four main regions: the crust, the mantle, and the outer and inner core. 
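The San Andreas recurrence arithmetic from the 1906 example above can be reproduced in a short script. This is a sketch only; the function name and the rounding are my own, and the figures are the ones quoted in the text.

```python
def recurrence_estimate(years_observed, strain_accumulated_m, fault_slip_m):
    """Estimate the years needed to re-accumulate the strain released in a
    quake, assuming strain builds at the rate observed before the event."""
    strain_rate = strain_accumulated_m / years_observed  # meters of strain per year
    return fault_slip_m / strain_rate                    # years to rebuild the full slip

# Figures from the text: 3.2 m accumulated in the 50 years before 1906,
# and 6.5 m of slip released in the earthquake itself.
years = recurrence_estimate(50, 3.2, 6.5)
# years is roughly 101.6, matching the text's "about 100 years"
```

The same elastic rebound logic applies to any fault for which both a strain-accumulation rate and a characteristic slip are known.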
The intense study of earthquake waves began during the last decades of the 19th century, when people began placing seismographs at observatories around the world. By 1897 scientists had gathered enough seismograms from distant earthquakes to identify that P and S waves had traveled through the deep Earth. Studying these seismograms in the late 19th and early 20th centuries, seismologists discovered P wave and S wave shadow zones—areas on the opposite side of the Earth from the earthquake focus that P waves and S waves do not reach. These shadow zones showed that the waves were bouncing off large geologic structures in the planet's interior. Seismologists used these measurements to begin interpreting the paths along which earthquake waves travel inside the Earth. In 1909 Croatian seismologist Andrija Mohorovičić found that the paths of P and S waves indicated a rocky surface layer, or crust, overlying more rigid rocks below. He proposed that inside the Earth the waves are reflected by discontinuities, chemical or structural changes in the rock. Because of his discovery, the interface between the crust and the underlying mantle is called the Mohorovičić Discontinuity, or Moho Discontinuity. In 1906 Richard Dixon Oldham of the Geological Survey of India used the arrival times of seismic P and S waves to deduce that the Earth must have a large and distinct central core. He interpreted the interior structure by comparing the speeds of the faster P waves and the slower S waves, and by noting that P waves were bent by discontinuities such as the Moho Discontinuity. In 1914 German American seismologist Beno Gutenberg used travel times of seismic waves reflected at the boundary between the mantle and the core to determine that the radius of the core is about 3,500 km (about 2,200 mi). In 1936 Danish seismologist Inge Lehmann discovered a smaller central structure, the inner core of the Earth. She estimated it to have a radius of 1,216 km (755 mi) by measuring the travel times of waves produced by South Pacific earthquakes. 
As the waves passed through the Earth and arrived at the Danish observatory, she determined that their speed and arrival times indicated that they must have been deflected by an inner core structure. In further studies of earthquake waves, seismologists found that the outer core is liquid and the inner core is solid. IX EXTRATERRESTRIAL QUAKES Seismic events similar to earthquakes also occur on other planets and on their satellites. Scientific missions to Earth’s moon and to Mars have provided some information related to extraterrestrial quakes. The current Galileo mission to Jupiter’s moons may provide evidence of quakes on the moons of Jupiter. Between 1969 and 1977, scientists conducted the Passive Seismic Experiment as part of the United States Apollo Program. Astronauts set up seismograph stations at five lunar sites. Each lunar seismograph detected between 600 and 3,000 moonquakes every year, a surprising result because the Moon has no tectonic plates, active volcanoes, or ocean trench systems. Most moonquakes had magnitudes less than about 2.0 on the Richter scale. Scientists used this information to determine the interior structure of the Moon and to examine the frequency of moonquakes. Besides the Moon and the Earth, Mars is the only other planetary body on which seismographs have been placed. The Viking 1 and 2 spacecraft carried two seismographs to Mars in 1976. Unfortunately, the instrument on Viking 1 failed to return signals to Earth. The instrument on Viking 2 operated, but in one year, only one wave motion was detected. Scientists were unable to determine the interior structure of Mars with only this single event. Q15: Endocrine System I INTRODUCTION Endocrine System, group of specialized organs and body tissues that produce, store, and secrete chemical substances known as hormones. As the body's chemical messengers, hormones transfer information and instructions from one set of cells to another. 
Because of the hormones they produce, endocrine organs have a great deal of influence over the body. Among their many jobs are regulating the body's growth and development, controlling the function of various tissues, supporting pregnancy and other reproductive functions, and regulating metabolism. Endocrine organs are sometimes called ductless glands because they have no ducts connecting them to specific body parts. The hormones they secrete are released directly into the bloodstream. In contrast, the exocrine glands, such as the sweat glands or the salivary glands, release their secretions directly to target areas—for example, the skin or the inside of the mouth. Some of the body's glands are described as endo-exocrine glands because they secrete hormones as well as other types of substances. Even some nonglandular tissues produce hormone-like substances—nerve cells produce chemical messengers called neurotransmitters, for example. The earliest reference to the endocrine system comes from ancient Greece, in about 400 BC. However, it was not until the 16th century that accurate anatomical descriptions of many of the endocrine organs were published. Research during the 20th century has vastly improved our understanding of hormones and how they function in the body. Today, endocrinology, the study of the endocrine glands, is an important branch of modern medicine. Endocrinologists are medical doctors who specialize in researching and treating disorders and diseases of the endocrine system. II COMPONENTS OF THE ENDOCRINE SYSTEM The primary glands that make up the human endocrine system are the hypothalamus, pituitary, thyroid, parathyroid, adrenal, pineal body, and reproductive glands—the ovary and testis. The pancreas, an organ often associated with the digestive system, is also considered part of the endocrine system. In addition, some nonendocrine organs are known to actively secrete hormones. 
These include the brain, heart, lungs, kidneys, liver, thymus, skin, and placenta. Almost all body cells can either produce or convert hormones, and some secrete hormones. For example, glucagon, a hormone that raises glucose levels in the blood when the body needs extra energy, is made in the pancreas but also in the wall of the gastrointestinal tract. However, it is the endocrine glands that are specialized for hormone production. They efficiently manufacture chemically complex hormones from simple chemical substances—for example, amino acids and carbohydrates—and they regulate their secretion more efficiently than any other tissues. The hypothalamus, found deep within the brain, directly controls the pituitary gland. It is sometimes described as the coordinator of the endocrine system. When information reaching the brain indicates that changes are needed somewhere in the body, nerve cells in the hypothalamus secrete body chemicals that either stimulate or suppress hormone secretions from the pituitary gland. Acting as liaison between the brain and the pituitary gland, the hypothalamus is the primary link between the endocrine and nervous systems. Located in a bony cavity just below the base of the brain is one of the endocrine system's most important members: the pituitary gland. Often described as the body’s master gland, the pituitary secretes several hormones that regulate the function of the other endocrine glands. Structurally, the pituitary gland is divided into two parts, the anterior and posterior lobes, each having separate functions. The anterior lobe regulates the activity of the thyroid and adrenal glands as well as the reproductive glands. It also regulates the body's growth and stimulates milk production in women who are breast-feeding. Hormones secreted by the anterior lobe include adrenocorticotropic hormone (ACTH), thyrotropic hormone (TSH), luteinizing hormone (LH), follicle-stimulating hormone (FSH), growth hormone (GH), and prolactin. 
The anterior lobe also secretes endorphins, chemicals that act on the nervous system to reduce sensitivity to pain. The posterior lobe of the pituitary gland contains the nerve endings (axons) from the hypothalamus, which stimulate or suppress hormone production. This lobe secretes antidiuretic hormones (ADH), which control water balance in the body, and oxytocin, which controls muscle contractions in the uterus. The thyroid gland, located in the neck, secretes hormones in response to stimulation by TSH from the pituitary gland. The thyroid secretes hormones—for example, thyroxine and triiodothyronine—that regulate growth and metabolism, and play a role in brain development during childhood. The parathyroid glands are four small glands located at the four corners of the thyroid gland. The hormone they secrete, parathyroid hormone, regulates the level of calcium in the blood. Located on top of the kidneys, the adrenal glands have two distinct parts. The outer part, called the adrenal cortex, produces a variety of hormones called corticosteroids, which include cortisol. These hormones regulate salt and water balance in the body, prepare the body for stress, regulate metabolism, interact with the immune system, and influence sexual function. The inner part, the adrenal medulla, produces catecholamines, such as epinephrine, also called adrenaline, which increase the blood pressure and heart rate during times of stress. The reproductive components of the endocrine system, called the gonads, secrete sex hormones in response to stimulation from the pituitary gland. Located in the pelvis, the female gonads, the ovaries, produce eggs. They also secrete a number of female sex hormones, including estrogen and progesterone, which control development of the reproductive organs, stimulate the appearance of female secondary sex characteristics, and regulate menstruation and pregnancy. 
Located in the scrotum, the male gonads, the testes, produce sperm and also secrete a number of male sex hormones, or androgens. The androgens, the most important of which is testosterone, regulate development of the reproductive organs, stimulate male secondary sex characteristics, and stimulate muscle growth. The pancreas is positioned in the upper abdomen, just under the stomach. The major part of the pancreas, called the exocrine pancreas, functions as an exocrine gland, secreting digestive enzymes into the gastrointestinal tract. Distributed through the pancreas are clusters of endocrine cells that secrete insulin, glucagon, and somatostatin. These hormones all participate in regulating energy and metabolism in the body. The pineal body, also called the pineal gland, is located in the middle of the brain. It secretes melatonin, a hormone that may help regulate the wake-sleep cycle. Research has shown that disturbances in the secretion of melatonin are responsible, in part, for the jet lag associated with long-distance air travel. III HOW THE ENDOCRINE SYSTEM WORKS Hormones from the endocrine organs are secreted directly into the bloodstream, where special proteins usually bind to them, helping to keep the hormones intact as they travel throughout the body. The proteins also act as a reservoir, allowing only a small fraction of the hormone circulating in the blood to affect the target tissue. Specialized proteins in the target tissue, called receptors, bind with the hormones in the bloodstream, inducing chemical changes in response to the body’s needs. Typically, only minute concentrations of a hormone are needed to achieve the desired effect. Too much or too little hormone can be harmful to the body, so hormone levels are regulated by a feedback mechanism. Feedback works something like a household thermostat. 
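That household-thermostat analogy can be sketched as a toy on/off controller. This is illustrative only; the function name, the half-degree deadband, and the example values are my own, not from the text.

```python
def thermostat_step(temperature, setpoint, furnace_on):
    """One control step of a simple on/off thermostat: the kind of
    negative feedback loop the text compares hormone regulation to."""
    if temperature < setpoint - 0.5:
        furnace_on = True        # too cold: switch the furnace on
    elif temperature > setpoint + 0.5:
        furnace_on = False       # too warm: switch it off
    return furnace_on            # within the deadband: leave it as-is

# Hormone analogy: rising blood calcium (like rising temperature)
# suppresses parathyroid hormone secretion (the furnace), and vice versa.
assert thermostat_step(18.0, 20.0, False) is True   # cold house: furnace on
assert thermostat_step(22.0, 20.0, True) is False   # warm house: furnace off
```

The deadband mirrors how the body tolerates small fluctuations around a hormone's normal level rather than reacting to every tiny change.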
When the heat in a house falls, the thermostat responds by switching the furnace on, and when the temperature is too warm, the thermostat switches the furnace off. Usually, the change that a hormone produces also serves to regulate that hormone's secretion. For example, parathyroid hormone causes the body to increase the level of calcium in the blood. As calcium levels rise, the secretion of parathyroid hormone then decreases. This feedback mechanism allows for tight control over hormone levels, which is essential for ideal body function. Other mechanisms may also influence feedback relationships. For example, if an individual becomes ill, the adrenal glands increase the secretions of certain hormones that help the body deal with the stress of illness. The adrenal glands work in concert with the pituitary gland and the brain to increase the body’s tolerance of these hormones in the blood, preventing the normal feedback mechanism from decreasing secretion levels until the illness is gone. Long-term changes in hormone levels can influence the endocrine glands themselves. For example, if hormone secretion is chronically low, the increased stimulation by the feedback mechanism leads to growth of the gland. This can occur in the thyroid if a person's diet has insufficient iodine, which is essential for thyroid hormone production. Constant stimulation from the pituitary gland to produce the needed hormone causes the thyroid to grow, eventually producing a medical condition known as goiter. IV DISEASES OF THE ENDOCRINE SYSTEM Endocrine disorders are classified in two ways: disturbances in the production of hormones, and the inability of tissues to respond to hormones. The first type, called production disorders, is divided into hypofunction (insufficient activity) and hyperfunction (excess activity). Hypofunction disorders can have a variety of causes, including malformations in the gland itself. 
Sometimes one of the enzymes essential for hormone production is missing, or the hormone produced is abnormal. More commonly, hypofunction is caused by disease or injury. Tuberculosis can appear in the adrenal glands, autoimmune diseases can affect the thyroid, and treatments for cancer—such as radiation therapy and chemotherapy—can damage any of the endocrine organs. Hypofunction can also result when target tissue is unable to respond to hormones. In many cases, the cause of a hypofunction disorder is unknown. Hyperfunction can be caused by glandular tumors that secrete hormone without responding to feedback controls. In addition, some autoimmune conditions create antibodies that have the side effect of stimulating hormone production. Infection of an endocrine gland can have the same result. Accurately diagnosing an endocrine disorder can be extremely challenging, even for an astute physician. Many diseases of the endocrine system develop over time, and clear, identifying symptoms may not appear for many months or even years. An endocrinologist evaluating a patient for a possible endocrine disorder relies on the patient's history of signs and symptoms, a physical examination, and the family history—that is, whether any endocrine disorders have been diagnosed in other relatives. A variety of laboratory tests—for example, a radioimmunoassay—are used to measure hormone levels. Tests that directly stimulate or suppress hormone production are also sometimes used, and genetic testing for deoxyribonucleic acid (DNA) mutations affecting endocrine function can be helpful in making a diagnosis. Tests based on diagnostic radiology show anatomical pictures of the gland in question. A functional image of the gland can be obtained with radioactive labeling techniques used in nuclear medicine. One of the most common diseases of the endocrine system is diabetes mellitus, which occurs in two forms. 
The first, called diabetes mellitus Type 1, is caused by inadequate secretion of insulin by the pancreas. Diabetes mellitus Type 2 is caused by the body's inability to respond to insulin. Both types have similar symptoms, including excessive thirst, hunger, and urination as well as weight loss. Laboratory tests that detect glucose in the urine and elevated levels of glucose in the blood usually confirm the diagnosis. Treatment of diabetes mellitus Type 1 requires regular injections of insulin; some patients with Type 2 can be treated with diet, exercise, or oral medication. Diabetes can cause a variety of complications, including kidney problems, pain due to nerve damage, blindness, and coronary heart disease. Recent studies have shown that controlling blood sugar levels reduces the risk of developing diabetes complications considerably. Diabetes insipidus is caused by a deficiency of vasopressin, one of the antidiuretic hormones (ADH) secreted by the posterior lobe of the pituitary gland. Patients often experience increased thirst and urination. Treatment is with drugs, such as synthetic vasopressin, that help the body maintain water and electrolyte balance. Hypothyroidism is caused by an underactive thyroid gland, which results in a deficiency of thyroid hormone. Hypothyroidism disorders cause myxedema and cretinism, more properly known as congenital hypothyroidism. Myxedema develops in older adults, usually after age 40, and causes lethargy, fatigue, and mental sluggishness. Congenital hypothyroidism, which is present at birth, can cause more serious complications including mental retardation if left untreated. Screening programs exist in most countries to test newborns for this disorder. By providing the body with replacement thyroid hormones, almost all of the complications are completely avoidable. Addison's disease is caused by decreased function of the adrenal cortex. 
Weakness, fatigue, abdominal pains, nausea, dehydration, fever, and hyperpigmentation (tanning without sun exposure) are among the many possible symptoms. Treatment involves providing the body with replacement corticosteroid hormones as well as dietary salt. Cushing's syndrome is caused by excessive secretion of glucocorticoids, the subgroup of corticosteroid hormones that includes hydrocortisone, by the adrenal glands. Symptoms may develop over many years prior to diagnosis and may include obesity, physical weakness, easily bruised skin, acne, hypertension, and psychological changes. Treatment may include surgery, radiation therapy, chemotherapy, or blockage of hormone production with drugs. Thyrotoxicosis is due to excess production of thyroid hormones. The most common cause for it is Graves' disease, an autoimmune disorder in which specific antibodies are produced, stimulating the thyroid gland. Thyrotoxicosis is eight to ten times more common in women than in men. Symptoms include nervousness, sensitivity to heat, heart palpitations, and weight loss. Many patients experience protruding eyes and tremors. Drugs that inhibit thyroid activity, surgery to remove the thyroid gland, and radioactive iodine that destroys the gland are common treatments. Acromegaly and gigantism both are caused by a pituitary tumor that stimulates production of excessive growth hormone, causing abnormal growth in particular parts of the body. Acromegaly is rare and usually develops over many years in adult subjects. Gigantism occurs when the excess of growth hormone begins in childhood. Last edited by Last Island; Sunday, December 30, 2007 at 07:17 PM. #2 Sunday, December 30, 2007 Dilrauf PAPER 2001 Q2: Cyclone, in strict meteorological terminology, an area of low atmospheric pressure surrounded by a wind system blowing, in the northern hemisphere, in a counterclockwise direction. 
A corresponding high-pressure area with clockwise winds is known as an anticyclone. In the southern hemisphere these wind directions are reversed. Cyclones are commonly called lows and anticyclones highs. The term cyclone has often been more loosely applied to a storm and disturbance attending such pressure systems, particularly the violent tropical hurricane and the typhoon, which center on areas of unusually low pressure. Tornado, violently rotating column of air extending from within a thundercloud (see Cloud) down to ground level. The strongest tornadoes may sweep houses from their foundations, destroy brick buildings, toss cars and school buses through the air, and even lift railroad cars from their tracks. Tornadoes vary in diameter from tens of meters to nearly 2 km (1 mi), with an average diameter of about 50 m (160 ft). Most tornadoes in the northern hemisphere create winds that blow counterclockwise around a center of extremely low atmospheric pressure. In the southern hemisphere the winds generally blow clockwise. Peak wind speeds can range from near 120 km/h (75 mph) to almost 500 km/h (300 mph). The forward motion of a tornado can range from a near standstill to almost 110 km/h (70 mph). Hurricane, name given to violent storms that originate over the tropical or subtropical waters of the Atlantic Ocean, Caribbean Sea, Gulf of Mexico, or North Pacific Ocean east of the International Date Line. Such storms over the North Pacific west of the International Date Line are called typhoons; those elsewhere are known as tropical cyclones, which is the general name for all such storms including hurricanes and typhoons. These storms can cause great damage to property and loss of human life due to high winds, flooding, and large waves crashing against shorelines. The worst natural disaster in United States history was caused by a hurricane that struck the coast of Texas in 1900. See also Tropical Storm; Cyclone. 
Q3: Energy Energy, capacity of matter to perform work as the result of its motion or its position in relation to forces acting on it. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum potential energy at the terminal points; at all intermediate positions it has both kinetic and potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are interconvertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two remains always the same. A weight suspended from a cord has potential energy due to its position, inasmuch as it can perform work in the process of falling. An electric battery has potential energy in chemical form. A piece of magnesium has potential energy stored in chemical form that is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic mechanical energy of the moving rotor of a dynamo is changed into kinetic electrical energy by electromagnetic induction. All forms of energy tend to be transformed into heat, which is the most transient form of energy. In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses. Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. 
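The swinging-pendulum example above can be checked numerically: as the pendulum moves, the split between kinetic and potential energy changes, but conservation of energy keeps their sum constant. This is a sketch of an ideal (frictionless) pendulum; the length, mass, and angles below are assumed for illustration and are not from the text.

```python
import math

def pendulum_energies(length_m, mass_kg, release_deg, current_deg, g=9.81):
    """Kinetic and potential energy of an ideal pendulum released from rest.

    Whatever potential energy has been lost since release appears as
    kinetic energy, so the total is the same at every angle."""
    h0 = length_m * (1 - math.cos(math.radians(release_deg)))  # height at release
    h = length_m * (1 - math.cos(math.radians(current_deg)))   # current height
    pe = mass_kg * g * h
    ke = mass_kg * g * (h0 - h)  # PE lost since release has become KE
    return ke, pe

# At a terminal point (current angle = release angle) all energy is potential;
# at the lowest point it is all kinetic, yet the total never changes.
ke_top, pe_top = pendulum_energies(1.0, 0.5, 30, 30)
ke_bot, pe_bot = pendulum_energies(1.0, 0.5, 30, 0)
assert ke_top == 0.0
assert abs((ke_top + pe_top) - (ke_bot + pe_bot)) < 1e-9
```

Any intermediate angle gives a nonzero value for both terms, matching the text's "varying proportions".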
The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At higher velocities close to that of light, as in nuclear reactions, energy and matter are interconvertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified. ENERGY CONVERSION Transducer, device that converts an input energy into an output energy. Usually, the output energy is a different kind of energy than the input energy. An example is a temperature gauge in which a spiral metallic spring converts thermal energy into a mechanical deflection of the dial needle. Because of the ease with which electrical energy may be transmitted and amplified, the most useful transducers are those that convert other forms of energy, such as heat, light, or sound, into electrical energy. Some examples are microphones, which convert sound energy into electrical energy; photoelectric materials, which convert light energy into electrical energy; and pyroelectric crystals, which convert heat energy into electrical energy. Electric Motors and Generators, group of devices used to convert mechanical energy into electrical energy, or electrical energy into mechanical energy, by electromagnetic means (see Energy). A machine that converts mechanical energy into electrical energy is called a generator, alternator, or dynamo, and a machine that converts electrical energy into mechanical energy is called a motor. Most electric cars use lead-acid batteries, but new types of batteries, including zinc-chlorine, nickel metal hydride, and sodium-sulfur, are becoming more common. The motor of an electric car harnesses the battery's electrical energy by converting it to kinetic energy. The driver simply switches on the power, selects “Forward” or “Reverse” with another switch, and steps on the accelerator pedal. 
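The electrical-to-kinetic conversion in an electric car can be put in numbers with the standard formula KE = ½mv². As a rough sketch, the battery must ultimately supply at least this much energy to reach a given speed; the car mass and speed below are assumed figures, not from the text, and friction and drivetrain losses are ignored.

```python
def kinetic_energy_joules(mass_kg, speed_m_s):
    """Kinetic energy of a moving body: KE = 0.5 * m * v**2.

    For an electric car, this is the minimum electrical energy the
    motor must convert to bring the car from rest to this speed."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Assumed example: a 1,200 kg car reaching 25 m/s (90 km/h)
# needs 0.5 * 1200 * 25**2 = 375,000 J of kinetic energy.
energy = kinetic_energy_joules(1200, 25)
```

Run in reverse, the same quantity is what a regenerative braking system could at best recover as electrical energy.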
Photosynthesis, process by which green plants and certain other organisms use the energy of light to convert carbon dioxide and water into the simple sugar glucose. Turbine, rotary engine that converts the energy of a moving stream of water, steam, or gas into mechanical energy. The basic element in a turbine is a wheel or rotor with paddles, propellers, blades, or buckets arranged on its circumference in such a fashion that the moving fluid exerts a tangential force that turns the wheel and imparts energy to it. This mechanical energy is then transferred through a drive shaft to operate a machine, compressor, electric generator, or propeller. Turbines are classified as hydraulic, or water, turbines, steam turbines, or gas turbines. Today turbine-powered generators produce most of the world's electrical energy. Windmills that generate electricity are known as wind turbines (see Windmill). Wind Energy, energy contained in the force of the winds blowing across the earth’s surface. When harnessed, wind energy can be converted into mechanical energy for performing work such as pumping water, grinding grain, and milling lumber. By connecting a spinning rotor (an assembly of blades attached to a hub) to an electric generator, modern wind turbines convert wind energy, which turns the rotor, into electrical energy. Q4: (I) Polymer I INTRODUCTION Polymer, substance consisting of large molecules that are made of many small, repeating units called monomers, or mers. The number of repeating units in one large molecule is called the degree of polymerization. Materials with a very high degree of polymerization are called high polymers. Polymers consisting of only one kind of repeating unit are called homopolymers. Copolymers are formed from several different repeating units. Most of the organic substances found in living matter, such as protein, wood, chitin, rubber, and resins, are polymers. 
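The degree of polymerization defined above is simply a ratio of molar masses: the mass of the whole chain divided by the mass of one repeating unit. A minimal sketch, using polyethylene's C2H4 repeating unit (about 28 g/mol, an outside figure not given in the text):

```python
def degree_of_polymerization(polymer_molar_mass, monomer_molar_mass):
    """Number of repeating units in a polymer chain: the ratio of the
    chain's molar mass to that of one monomer."""
    return polymer_molar_mass / monomer_molar_mass

# A polyethylene chain of 280,000 g/mol built from 28 g/mol ethylene
# units contains about 10,000 monomers: a "high polymer" in the
# text's terms.
n_units = degree_of_polymerization(280_000, 28)
```

For a copolymer the same idea applies per repeating unit, with an average monomer mass in the denominator.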
Many synthetic materials, such as plastics, fibers (see Rayon), adhesives, glass, and porcelain, are also to a large extent polymeric substances. II STRUCTURE OF POLYMERS Polymers can be subdivided into three, or possibly four, structural groups. The molecules in linear polymers consist of long chains of monomers joined by bonds that are rigid to a certain degree—the monomers cannot rotate freely with respect to each other. Typical examples are polyethylene, polyvinyl alcohol, and polyvinyl chloride (PVC). Branched polymers have side chains that are attached to the chain molecule itself. Branching can be caused by impurities or by the presence of monomers that have several reactive groups. Chain polymers composed of monomers with side groups that are part of the monomers, such as polystyrene or polypropylene, are not considered branched polymers. In cross-linked polymers, two or more chains are joined together by side chains. With a small degree of cross-linking, a loose network is obtained that is essentially two-dimensional. High degrees of cross-linking result in a tight three-dimensional structure. Cross-linking is usually caused by chemical reactions. An example of a two-dimensional cross-linked structure is vulcanized rubber, in which cross-links are formed by sulfur atoms. Thermosetting plastics are examples of highly cross-linked polymers; their structure is so rigid that when heated they decompose or burn rather than melt. III SYNTHESIS Two general methods exist for forming large molecules from small monomers: addition polymerization and condensation polymerization. In the chemical process called addition polymerization, monomers join together without the loss of atoms from the molecules. Some examples of addition polymers are polyethylene, polypropylene, polystyrene, polyvinyl acetate, and polytetrafluoroethylene (Teflon). In condensation polymerization, monomers join together with the simultaneous elimination of atoms or groups of atoms. 
Typical condensation polymers are polyamides, polyesters, and certain polyurethanes. In 1983 a new method of addition polymerization called group transfer polymerization was announced. An activating group within the molecule initiating the process transfers to the end of the growing polymer chain as individual monomers insert themselves in the group. The method has been used for acrylic plastics; it should prove applicable to other plastics as well. Synthetic polymers include the plastics polystyrene, polyester, nylon (a polyamide), and polyvinyl chloride. These polymers differ in their repeating monomer units. Scientists build polymers from different monomer units to create plastics with different properties. For example, polyvinyl chloride is tough and nylon is silklike. Synthetic polymers usually do not dissolve in water or react with other chemicals. Strong synthetic polymers form fibers for clothing and other materials. Synthetic fibers usually last longer than natural fibers do. (II) Laser I INTRODUCTION Laser, a device that produces and amplifies light. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser light is very pure in color, can be extremely intense, and can be directed with great accuracy. Lasers are used in many modern technological devices including bar code readers, compact disc (CD) players, and laser printers. Lasers can generate light beyond the range visible to the human eye, from the infrared through the X-ray range. Masers are similar devices that produce and amplify microwaves. II PRINCIPLES OF OPERATION Lasers generate light by storing energy in particles called electrons inside atoms and then inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of all matter on Earth and are a thousand times smaller than viruses. Electrons are the underlying source of almost all light. Light is composed of tiny packets of energy called photons. 
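The energy of one such photon is fixed by its wavelength through the standard relation E = hf = hc/λ, which underlies the later statement that photon energy, color, frequency, and wavelength are directly related. A sketch with rounded physical constants; the helium-neon and green wavelengths are assumed examples, not from the text.

```python
PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds (rounded)
LIGHT_C = 2.998e8     # speed of light in vacuum, m/s (rounded)

def photon_energy_joules(wavelength_m):
    """Energy of a single photon: E = h * f = h * c / wavelength.
    Shorter wavelength (bluer light) means a more energetic photon."""
    return PLANCK_H * LIGHT_C / wavelength_m

# Red helium-neon laser light (633 nm) versus green light (532 nm):
red = photon_energy_joules(633e-9)    # about 3.1e-19 J per photon
green = photon_energy_joules(532e-9)  # about 3.7e-19 J per photon
assert green > red  # the greener photon carries more energy
```

This one-to-one link between wavelength and energy is why laser light of a single wavelength is also light of a single photon energy.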
Lasers produce coherent light: light that is monochromatic (one color) and whose photons are “in step” with one another. A Excited Atoms At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is made up of two types of particles: protons, which have a positive charge, and neutrons, which have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but occupies only a tiny part of the atom’s space. Enlarge an atom up to the size of Yankee Stadium and the equally magnified nucleus is only the size of a baseball. Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside atoms. Electrons travel in complex orbits and exist only in certain specific energy states or levels (see Quantum Theory). Electrons can move from a low to a high energy level by absorbing energy. An atom with at least one electron that occupies a higher energy level than it normally would is said to be excited. An atom can become excited by absorbing a photon whose energy equals the difference between the two energy levels. A photon’s energy, color, frequency, and wavelength are directly related: All photons of a given energy are the same color and have the same frequency and wavelength. Usually, electrons quickly jump back to the low energy level, giving off the extra energy as light (see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as many electrons independently emit photons of different colors in all directions. B Stimulated Emission Lasers are different from more familiar sources of light. Excited atoms in lasers collectively emit photons of a single color, all traveling in the same direction and all in step with one another. When two photons are in step, the peaks and troughs of their waves line up. The electrons in the atoms of a laser are first pumped, or energized, to an excited state by an energy source. 
An excited atom can then be “stimulated” by a photon of exactly the same color (or, equivalently, the same wavelength) as the photon this atom is about to emit spontaneously. If the photon approaches closely enough, the photon can stimulate the excited atom to immediately emit light that has the same wavelength and is in step with the photon that interacted with it. This stimulated emission is the key to laser operation. The new light adds to the existing light, and the two photons go on to stimulate other excited atoms to give up their extra energy, again in step. The phenomenon snowballs into an amplified, coherent beam of light: laser light. In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with highly reflective mirrors facing inward at each end. As the photons bounce between the two parallel mirrors, they trigger further stimulated emissions and the light gets brighter and brighter with each pass through the excited atoms. One of the mirrors is only partially silvered, allowing a small amount of light to pass through rather than reflecting it all. The intense, directional, and single-colored laser light finally escapes through this slightly transparent mirror. The escaped light forms the laser beam. Albert Einstein first proposed stimulated emission, the underlying process for laser action, in 1917. Translating the idea of stimulated emission into a working model, however, required more than four decades. The working principles of lasers were outlined by the American physicists Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application. (Both men won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981). The patent for the laser was granted to Townes and Schawlow, but it was later challenged by the American physicist and engineer Gordon Gould, who had written down some ideas and coined the word laser in 1957. Gould eventually won a partial patent covering several types of laser. 
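The snowballing amplification between the cavity mirrors described earlier in this section can be sketched as a toy calculation; the gain and mirror-reflectivity values are illustrative assumptions, not measured figures:

```python
# Toy model of light buildup in a laser cavity: each round trip the light
# is amplified by the excited medium, then loses a fraction at the
# partially silvered output mirror.  All numbers are illustrative.

def cavity_buildup(i0: float, gain: float, r_out: float, passes: int) -> float:
    """Intensity inside the cavity after `passes` round trips.

    i0    -- starting intensity (spontaneous-emission seed)
    gain  -- round-trip amplification factor from stimulated emission
    r_out -- reflectivity of the output mirror (the rest is transmitted)
    """
    intensity = i0
    for _ in range(passes):
        intensity *= gain * r_out  # amplified, then partly lost at the mirror
    return intensity

if __name__ == "__main__":
    # Net round-trip factor 1.10 * 0.95 > 1, so the light snowballs.
    print(cavity_buildup(1.0, gain=1.10, r_out=0.95, passes=50))
```

The key point is the threshold: when gain times reflectivity exceeds 1, intensity grows each pass; when it falls below 1, the light dies away instead of lasing.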
In 1960 American physicist Theodore Maiman of Hughes Aircraft Corporation constructed the first working laser from a ruby rod. III TYPES OF LASERS Lasers are generally classified according to the material, called the medium, they use to produce the laser light. Solid-state, gas, liquid, semiconductor, and free electron are all common types of lasers. A Solid-State Lasers Solid-state lasers produce light by means of a solid medium. The most common solid laser media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rods are fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film. Solid-state lasers offer the highest power output. They are usually pulsed to generate a very brief burst of light. Bursts as short as 12 × 10⁻¹⁵ sec have been achieved. These short bursts are useful for studying physical phenomena of very brief duration. One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-energy light than the laser produces. This procedure, called pumping, is achieved with brilliant strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps. B Gas Lasers The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor. The medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located outside the ends of the tube to form the laser cavity. Gas lasers can be pumped by ultraviolet light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at turning the energy used to excite their atoms into laser light. Consequently, they are the most powerful continuous wave (CW) lasers—that is, lasers that emit light continuously rather than in pulses. C Liquid Lasers The most common liquid laser media are organic dyes contained in glass vessels. 
They are pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous wave mode. Some dye lasers are tunable, meaning that the color of the laser light they emit can be adjusted with the help of a prism located inside the laser cavity. D Semiconductor Lasers Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common semiconductor used. A typical semiconductor laser consists of a junction between two flat layers of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra electron, and the other with an impurity whose atoms are one electron short. Semiconductor lasers are pumped by the direct application of electric current across the junction. They can be operated in the continuous wave mode with better than 50 percent efficiency. Only a small percentage of the energy used to excite most other lasers is converted into light. Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit on a chip the size of a fingernail. Common uses for semiconductor lasers include compact disc (CD) players and laser printers. Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber Optics). E Free Electron Lasers. Free electron lasers employ an array of magnets to excite free electrons (electrons not bound to atoms). First developed in 1977, they are now becoming important research instruments. Free electron lasers are tunable over a broader range of energies than dye lasers. The devices become more difficult to operate at higher energies but generally work successfully from infrared through ultraviolet wavelengths. Theoretically, electron lasers can function even in the X-ray range. 
The free electron laser facility at the University of California at Santa Barbara uses intense far-infrared light to investigate mutations in DNA molecules and to study the properties of semiconductor materials. Free electron lasers should also eventually become capable of producing very high-power radiation that is currently too expensive to produce. At high power, near-infrared beams from a free electron laser could defend against a missile attack. IV LASER APPLICATIONS The use of lasers is restricted only by imagination. Lasers have become valuable tools in industry, scientific research, communications, medicine, the military, and the arts. A Industry Powerful laser beams can be focused on a small spot to generate enormous temperatures. Consequently, the focused beams can readily and precisely heat, melt, or vaporize material. Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to induce controlled nuclear fusion (see Nuclear Energy). Highly directional laser beams are used for alignment in construction. Perfectly straight and uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short laser pulses also make possible high-speed photography with exposure times of only several trillionths of a second. B Scientific Research Because laser light is highly directional and monochromatic, extremely small amounts of light scattering and small shifts in color caused by the interaction between laser light and matter can easily be detected. By measuring the scattering and color shifts, scientists can study molecular structures of matter. Chemical reactions can be selectively induced, and the existence of trace substances in samples can be detected. Lasers are also the most effective detectors of certain types of air pollution (see Chemical Analysis; Photochemistry). 
Scientists use lasers to make extremely accurate measurements. Lasers are used in this way for monitoring small movements associated with plate tectonics and for geographic surveys. Lasers have been used for precise determination (to within one inch) of the distance between Earth and the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists also have used lasers to determine the speed of light to an unprecedented accuracy. Very fast laser-activated switches are being developed for use in particle accelerators. Scientists also use lasers to trap single atoms and subatomic particles in order to study these tiny bits of matter (see Particle Trap). C Communications Laser light can travel a large distance in outer space with little reduction in signal strength. In addition, high-energy laser light can carry 1,000 times as many television channels as microwave signals carry today. Lasers are therefore ideal for space communications. Low-loss optical fibers have been developed to transmit laser light for earthbound communication in telephone and computer systems. Laser techniques have also been used for high-density information recording. For instance, laser light simplifies the recording of a hologram, from which a three-dimensional image can be reconstructed with a laser beam. Lasers are also used to play audio CDs and videodiscs (see Sound Recording and Reproduction). D Medicine Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and cauterize certain body tissues in a small fraction of a second without damaging surrounding healthy tissues. Lasers have been used to “weld” the retina, bore holes in the skull, vaporize lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical procedures for eye disorders. Laser techniques have also been developed for lab tests of small biological samples. 
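The Earth-to-Moon distance measurement mentioned above rests on a simple time-of-flight relation: the pulse travels out and back, so the one-way distance is c × t / 2. A sketch (the roughly 2.56 s round-trip time used below is an approximate outside value, not a figure from this text):

```python
# Time-of-flight laser ranging: a pulse travels to the target and back,
# so the one-way distance is c * t / 2.

C = 2.99792458e8  # speed of light, m/s

def range_from_round_trip(seconds: float) -> float:
    """One-way distance in metres for a measured round-trip time."""
    return C * seconds / 2.0

if __name__ == "__main__":
    # Roughly 2.56 s round trip for a pulse bounced off the Moon
    # (approximate, illustrative figure).
    d = range_from_round_trip(2.56)
    print(f"{d / 1000:.0f} km")  # on the order of 384,000 km
```

The inch-level precision quoted in the text comes from timing the round trip to a fraction of a nanosecond, since light covers about 30 cm per nanosecond.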
E Military Applications Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can be fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president Ronald Reagan and the Ballistic Missile Defense program supported by President George W. Bush. The ability of tunable dye lasers to selectively excite an atom or molecule may open up more efficient ways to separate isotopes for construction of nuclear weapons. V LASER SAFETY Because the eye focuses laser light just as it does other light, the chief danger in working with lasers is eye damage. Therefore, laser light should not be viewed either directly or reflected. Lasers sold and used commercially in the United States must comply with a strict set of laws enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food and Drug Administration. The CDRH has divided lasers into six groups, depending on their power output, their emission duration, and the energy of the photons they emit. The classification is then attached to the laser as a sticker. The higher the laser’s energy, the higher its potential to injure. High-powered lasers of the Class IV type (the highest classification) generate a beam of energy that can start fires, burn flesh, and cause permanent eye damage whether the light is direct, reflected, or diffused. Canada uses the same classification system, and laser use in Canada is overseen by Health Canada’s Radiation Protection Bureau. Goggles blocking the specific color of photons that a laser produces are mandatory for the safe use of lasers. Even with goggles, direct exposure to laser light should be avoided. (iii)Pesticides The chemical agents called pesticides include herbicides (for weed control), insecticides, and fungicides. More than half the pesticides used in the U.S. 
are herbicides that control weeds: USDA estimates indicate that 86 percent of U.S. agricultural land areas are treated with herbicides, 18 percent with insecticides, and 3 percent with fungicides. The amount of pesticide used on different crops also varies. For example, in the U.S., about 67 percent of the insecticides used in agriculture are applied to two crops, cotton and corn; about 70 percent of the herbicides are applied to corn and soybeans, and most of the fungicides are applied to fruit and vegetable crops. Most of the insecticides now applied are long-lasting synthetic compounds that affect the nervous system of insects on contact. Among the most effective are the chlorinated hydrocarbons DDT, chlordane, and toxaphene, although agricultural use of DDT has been banned in the U.S. since 1973. Others, the organophosphate insecticides, include malathion, parathion, and dimethoate. Among the most effective herbicides are the compounds of 2,4-D (2,4-dichlorophenoxyacetic acid), only a few kilograms of which are required per hectare to kill broad-leaved weeds while leaving grains unaffected. Agricultural pesticides prevent a monetary loss of about $9 billion each year in the U.S. For every $1 invested in pesticides, the American farmer gets about $4 in return. These benefits, however, must be weighed against the costs to society of using pesticides, as seen in the banning of ethylene dibromide in the early 1980s. These costs include human poisonings, fish kills, honey bee poisonings, and the contamination of livestock products. The environmental and social costs of pesticide use in the U.S. have been estimated to be at least $1 billion each year. Thus, although pesticides are valuable for agriculture, they also can cause serious harm. Indeed, the question may be asked—what would crop losses be if insecticides were not used in the U.S., and readily available nonchemical controls were substituted? 
The best estimate is that only another 5 percent of the nation's food would be lost. (iv) Fission and Fusion Nuclear energy can be released in two different ways: fission, the splitting of a large nucleus, and fusion, the combining of two small nuclei. In both cases energy—measured in millions of electron volts (MeV)—is released because the products are more stable (have a higher binding energy) than the reactants. Fusion reactions are difficult to maintain because the nuclei repel each other, but fusion creates much less radioactive waste than does fission. © Microsoft Corporation. All Rights Reserved. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Q: How would a fusion reactor differ from the nuclear reactors we currently have? A: The nuclear reactors we have now are fission reactors. This means that they obtain their energy from nuclear reactions that split large nuclei such as uranium into smaller ones such as rubidium and cesium. There is a binding energy that holds a nucleus together. If the binding energy of the original large nucleus is less than the sum of the binding energies of the smaller pieces, you get the difference in energy as heat that can be used in a power station to generate electricity. A fusion reaction works the other way. It takes small nuclei like deuterium (heavy hydrogen) and fuses them together to make larger ones such as helium. If the binding energy of the two deuterium nuclei is less than that of the final larger helium nucleus, the difference is released and can be used to generate electricity. There are two main differences between fission and fusion. The first is that the materials required for fission are rarer and more expensive to produce than those for fusion. For example, uranium has to be mined in special areas and then purified by difficult processes. 
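Before continuing, the binding-energy bookkeeping in the answer above can be made concrete; the binding energies in the comments are standard reference values, not figures quoted in this text:

```python
# Energy released in a nuclear reaction = total binding energy of the
# products minus that of the reactants (positive means energy is released,
# because the products are more tightly bound).
# Binding energies below are standard reference values in MeV.

BE = {
    "H-2":  2.224,    # deuterium
    "He-4": 28.296,   # helium-4
}

def q_value(reactants: list[str], products: list[str]) -> float:
    """Energy released in MeV for the given reaction."""
    return sum(BE[p] for p in products) - sum(BE[r] for r in reactants)

if __name__ == "__main__":
    # Idealized fusion of two deuterons straight to helium-4:
    print(f"{q_value(['H-2', 'H-2'], ['He-4']):.1f} MeV")  # about 23.8 MeV
```

The sign convention matches the article's intro: energy comes out only when the products have the higher total binding energy.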
By contrast, even though deuterium makes up only 0.02 percent of naturally occurring hydrogen, we have a vast supply of hydrogen in the water making up the oceans. The second difference is that the products of fission are radioactive and so need to be treated carefully, as they are dangerous to health. The products of fusion are not radioactive (although a realistic reactor will likely have some relatively small amount of radioactive product). The problem with building fusion reactors is that a steady, controlled fusion reaction is very hard to achieve. It is still a subject of intense research. The main problem is that to achieve fusion we need to keep the nuclei we wish to fuse at extremely high temperatures and close enough for them to have a chance of fusing with one another. It is extremely difficult to find a way of holding everything together, since the nuclei naturally repel each other and the temperatures involved are high enough to melt any solid substance known. As technology improves, holding everything together will become easier, but it seems that we are a long way off from having commercial fusion reactors. (v) Paramagnetism and Diamagnetism Paramagnetism Liquid oxygen becomes trapped in an electromagnet’s magnetic field because oxygen (O2) is paramagnetic. Oxygen has two unpaired electrons whose magnetic moments align with external magnetic field lines. When this occurs, the O2 molecules themselves behave like tiny magnets, and become trapped between the poles of the electromagnet. Magnetism I INTRODUCTION Magnetism, an aspect of electromagnetism, one of the fundamental forces of nature. Magnetic forces are produced by the motion of charged particles such as electrons, indicating the close relationship between electricity and magnetism. The unifying frame for these two forces is called electromagnetic theory (see Electromagnetic Radiation). 
The most familiar evidence of magnetism is the attractive or repulsive force observed to act between magnetic materials such as iron. More subtle effects of magnetism, however, are found in all matter. In recent times these effects have provided important clues to the atomic structure of matter. II HISTORY OF STUDY The phenomenon of magnetism has been known since ancient times. The mineral lodestone (see Magnetite), an oxide of iron that has the property of attracting iron objects, was known to the Greeks, Romans, and Chinese. When a piece of iron is stroked with lodestone, the iron itself acquires the same ability to attract other pieces of iron. The magnets thus produced are polarized—that is, each has two sides or ends called north-seeking and south-seeking poles. Like poles repel one another, and unlike poles attract. The compass was first used for navigation in the West some time after AD 1200. In the 13th century, important investigations of magnets were made by the French scholar Petrus Peregrinus. His discoveries stood for nearly 300 years, until the English physicist and physician William Gilbert published his book Of Magnets, Magnetic Bodies, and the Great Magnet of the Earth in 1600. Gilbert applied scientific methods to the study of electricity and magnetism. He pointed out that the earth itself behaves like a giant magnet, and through a series of experiments, he investigated and disproved several incorrect notions about magnetism that were accepted as being true at the time. Subsequently, in 1750, the English geologist John Michell invented a balance that he used in the study of magnetic forces. He showed that the attraction and repulsion of magnets decrease as the squares of the distance from the respective poles increase. The French physicist Charles Augustin de Coulomb, who had measured the forces between electric charges, later verified Michell's observation with high precision. 
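Michell's inverse-square finding can be sketched numerically; the pole strengths and the constant below are purely illustrative values, not measurements from the text:

```python
# Michell's result: the force between two magnetic poles falls off as the
# square of the distance between them.  k, p1, p2 are illustrative values.

def pole_force(k: float, p1: float, p2: float, r: float) -> float:
    """Force between two magnetic poles of strengths p1, p2 at distance r."""
    return k * p1 * p2 / r**2

if __name__ == "__main__":
    f_near = pole_force(1.0, 2.0, 3.0, 0.1)
    f_far = pole_force(1.0, 2.0, 3.0, 0.2)
    # Doubling the separation quarters the force:
    print(f"{f_near / f_far:.1f}")  # prints 4.0
```

This is the same mathematical form Coulomb later verified with high precision, and it parallels his law for electric charges.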
III ELECTROMAGNETIC THEORY In the late 18th and early 19th centuries, the theories of electricity and magnetism were investigated simultaneously. In 1819 an important discovery was made by the Danish physicist Hans Christian Oersted, who found that a magnetic needle could be deflected by an electric current flowing through a wire. This discovery, which showed a connection between electricity and magnetism, was followed up by the French scientist André Marie Ampère, who studied the forces between wires carrying electric currents, and by the French physicist Dominique François Jean Arago, who magnetized a piece of iron by placing it near a current-carrying wire. In 1831 the English scientist Michael Faraday discovered that moving a magnet near a wire induces an electric current in that wire, the inverse effect to that found by Oersted: Oersted showed that an electric current creates a magnetic field, while Faraday showed that a magnetic field can be used to create an electric current. The full unification of the theories of electricity and magnetism was achieved by the Scottish physicist James Clerk Maxwell, who predicted the existence of electromagnetic waves and identified light as an electromagnetic phenomenon. Subsequent studies of magnetism were increasingly concerned with an understanding of the atomic and molecular origins of the magnetic properties of matter. In 1905 the French physicist Paul Langevin produced a theory regarding the temperature dependence of the magnetic properties of paramagnets (discussed below), which was based on the atomic structure of matter. This theory is an early example of the description of large-scale properties in terms of the properties of electrons and atoms. Langevin's theory was subsequently expanded by the French physicist Pierre-Ernest Weiss, who postulated the existence of an internal, “molecular” magnetic field in materials such as iron. 
This concept, when combined with Langevin's theory, served to explain the properties of strongly magnetic materials such as lodestone. After Weiss's theory, magnetic properties were explored in greater and greater detail. The theory of atomic structure of Danish physicist Niels Bohr, for example, provided an understanding of the periodic table and showed why magnetism occurs in transition elements such as iron and the rare earth elements, or in compounds containing these elements. The American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck showed in 1925 that the electron itself has spin and behaves like a small bar magnet. (At the atomic level, magnetism is measured in terms of magnetic moments—a magnetic moment is a vector quantity that depends on the strength and orientation of the magnetic field, and the configuration of the object that produces the magnetic field.) The German physicist Werner Heisenberg gave a detailed explanation for Weiss's molecular field in 1927, on the basis of the newly-developed quantum mechanics (see Quantum Theory). Other scientists then predicted many more complex atomic arrangements of magnetic moments, with diverse magnetic properties. IV THE MAGNETIC FIELD Objects such as a bar magnet or a current-carrying wire can influence other magnetic materials without physically contacting them, because magnetic objects produce a magnetic field. Magnetic fields are usually represented by magnetic flux lines. At any point, the direction of the magnetic field is the same as the direction of the flux lines, and the strength of the magnetic field is proportional to the space between the flux lines. For example, in a bar magnet, the flux lines emerge at one end of the magnet, then curve around the other end; the flux lines can be thought of as being closed loops, with part of the loop inside the magnet, and part of the loop outside. 
At the ends of the magnet, where the flux lines are closest together, the magnetic field is strongest; toward the side of the magnet, where the flux lines are farther apart, the magnetic field is weaker. Depending on their shapes and magnetic strengths, different kinds of magnets produce different patterns of flux lines. The pattern of flux lines created by magnets or any other object that creates a magnetic field can be mapped by using a compass or small iron filings. Magnets tend to align themselves along magnetic flux lines. Thus a compass, which is a small magnet that is free to rotate, will tend to orient itself in the direction of the magnetic flux lines. By noting the direction of the compass needle when the compass is placed at many locations around the source of the magnetic field, the pattern of flux lines can be inferred. Alternatively, when iron filings are placed around an object that creates a magnetic field, the filings will line up along the flux lines, revealing the flux line pattern. Magnetic fields influence magnetic materials, and also influence charged particles that move through the magnetic field. Generally, when a charged particle moves through a magnetic field, it feels a force that is at right angles both to the velocity of the charged particle and the magnetic field. Since the force is always perpendicular to the velocity of the charged particle, a charged particle in a magnetic field moves in a curved path. Magnetic fields are used to change the paths of charged particles in devices such as particle accelerators and mass spectrometers. V KINDS OF MAGNETIC MATERIALS The magnetic properties of materials are classified in a number of different ways. One classification of magnetic materials—into diamagnetic, paramagnetic, and ferromagnetic—is based on how the material reacts to a magnetic field. Diamagnetic materials, when placed in a magnetic field, have a magnetic moment induced in them that opposes the direction of the magnetic field. 
This property is now understood to be a result of electric currents that are induced in individual atoms and molecules. These currents, according to Ampère's law, produce magnetic moments in opposition to the applied field. Many materials are diamagnetic; the strongest ones are metallic bismuth and organic molecules, such as benzene, that have a cyclic structure, enabling the easy establishment of electric currents. Paramagnetic behavior results when the applied magnetic field lines up all the existing magnetic moments of the individual atoms or molecules that make up the material. This results in an overall magnetic moment that adds to the magnetic field. Paramagnetic materials usually contain transition metals or rare earth elements that possess unpaired electrons. Paramagnetism in nonmetallic substances is usually characterized by temperature dependence; that is, the size of an induced magnetic moment varies inversely with the temperature. This is a result of the increasing difficulty of ordering the magnetic moments of the individual atoms along the direction of the magnetic field as the temperature is raised. A ferromagnetic substance is one that, like iron, retains a magnetic moment even when the external magnetic field is reduced to zero. This effect is a result of a strong interaction between the magnetic moments of the individual atoms or electrons in the magnetic substance that causes them to line up parallel to one another. In ordinary circumstances these ferromagnetic materials are divided into regions called domains; in each domain, the atomic moments are aligned parallel to one another. Separate domains have total moments that do not necessarily point in the same direction. Thus, although an ordinary piece of iron might not have an overall magnetic moment, magnetization can be induced in it by placing the iron in a magnetic field, thereby aligning the moments of all the individual domains. 
The energy expended in reorienting the domains from the magnetized back to the demagnetized state manifests itself in a lag in response, known as hysteresis. Ferromagnetic materials, when heated, eventually lose their magnetic properties. This loss becomes complete above the Curie temperature, named after the French physicist Pierre Curie, who discovered it in 1895. (The Curie temperature of metallic iron is about 770° C/1,420° F.) VI OTHER MAGNETIC ORDERINGS In recent years, a greater understanding of the atomic origins of magnetic properties has resulted in the discovery of other types of magnetic ordering. Substances are known in which the magnetic moments interact in such a way that it is energetically favorable for them to line up antiparallel; such materials are called antiferromagnets. There is a temperature analogous to the Curie temperature called the Néel temperature, above which antiferromagnetic order disappears. Other, more complex atomic arrangements of magnetic moments have also been found. Ferrimagnetic substances have at least two different kinds of atomic magnetic moments, which are oriented antiparallel to one another. Because the moments are of different size, a net magnetic moment remains, unlike the situation in an antiferromagnet where all the magnetic moments cancel out. Interestingly, lodestone is a ferrimagnet rather than a ferromagnet; two types of iron ions, each with a different magnetic moment, are in the material. Even more complex arrangements have been found in which the magnetic moments are arranged in spirals. Studies of these arrangements have provided much information on the interactions between magnetic moments in solids. VII APPLICATIONS Numerous applications of magnetism and of magnetic materials have arisen in the past 100 years. The electromagnet, for example, is the basis of the electric motor and the transformer. 
In more recent times, the development of new magnetic materials has also been important in the computer revolution. Computer memories can be fabricated using bubble domains. These domains are actually smaller regions of magnetization that are either parallel or antiparallel to the overall magnetization of the material. Depending on this direction, the bubble indicates either a one or a zero, thus serving as the units of the binary number system used in computers. Magnetic materials are also important constituents of tapes and disks on which data are stored. In addition to the atomic-sized magnetic units used in computers, large, powerful magnets are crucial to a variety of modern technologies. Powerful magnetic fields are used in nuclear magnetic resonance imaging, an important diagnostic tool used by doctors. Superconducting magnets are used in today's most powerful particle accelerators to keep the accelerated particles focused and moving in a curved path. Scientists are developing magnetic levitation trains that use strong magnets to enable trains to float above the tracks, reducing friction. Contributed By: Martin Blume Q5: (i) Microcomputer and Minicomputer Minicomputer, a mid-level computer built to perform complex computations while dealing efficiently with a high level of input and output from users connected via terminals. Minicomputers also frequently connect to other minicomputers on a network and distribute processing among all the attached machines. Minicomputers are used heavily in transaction-processing applications and as interfaces between mainframe computer systems and wide area networks. See also Office Systems; Time-Sharing. Microcomputer, desktop- or notebook-size computing device that uses a microprocessor as its central processing unit, or CPU (see Computer). 
Microcomputers are also called personal computers (PCs), home computers, small-business computers, and micros. The smallest, most compact are called laptops. When they first appeared, they were considered single-user devices, and they were capable of handling only 4, 8, or 16 bits of information at one time. More recently the distinction between microcomputers and large, mainframe computers (as well as the smaller mainframe-type systems called minicomputers) has become blurred, as newer microcomputer models have increased the speed and data-handling capabilities of their CPUs into the 32-bit, multiuser range. (ii) Supercomputer I INTRODUCTION Supercomputer, computer designed to perform calculations as fast as current technology allows and used to solve extremely complex problems. Supercomputers are used to design automobiles, aircraft, and spacecraft; to forecast the weather and global climate; to design new drugs and chemical compounds; and to make calculations that help scientists understand the properties of particles that make up atoms as well as the behavior and evolution of stars and galaxies. Supercomputers are also used extensively by the military for weapons and defense systems research, and for encrypting and decoding sensitive intelligence information. See Computer; Encryption; Cryptography. Supercomputers are different from other types of computers in that they are designed to work on a single problem at a time, devoting all their resources to the solution of the problem. Other powerful computers such as mainframes and workstations are specifically designed so that they can work on numerous problems, and support numerous users, simultaneously. Because of their high cost—usually in the hundreds of thousands to millions of dollars—supercomputers are shared resources. 
Supercomputers are so expensive that usually only large companies, universities, and government agencies and laboratories can afford them. II HOW SUPERCOMPUTERS WORK The two major components of a supercomputer are the same as any other computer—a central processing unit (CPU) where instructions are carried out, and the memory in which data and instructions are stored. The CPU in a supercomputer is similar in function to a standard personal computer (PC) CPU, but it usually has a different type of transistor technology that minimizes transistor switching time. Switching time is the length of time that it takes for a transistor in the CPU to open or close, which corresponds to a piece of data moving or changing value in the computer. This time is extremely important in determining the absolute speed at which a CPU can operate. By using very high performance circuits, architectures, and, in some cases, even special materials, supercomputer designers are able to make CPUs that are 10 to 20 times faster than state-of-the-art processors for other types of commercial computers. Supercomputer memory also has the same function as memory in other computers, but it is optimized so that retrieval of data and instructions from memory takes the least amount of time possible. Also important to supercomputer performance is that the connections between the memory and the CPU be as short as possible to minimize the time that information takes to travel between the memory and the CPU. A supercomputer functions in much the same way as any other type of computer, except that it is designed to do calculations as fast as possible. Supercomputer designers use two main methods to reduce the amount of time that supercomputers spend carrying out instructions—pipelining and parallelism. 
Pipelining allows multiple operations to take place at the same time in the supercomputer’s CPU by grouping together pieces of data that need to have the same sequence of operations performed on them and then feeding them through the CPU one after the other. The general idea of parallelism is to process data and instructions in parallel rather than in sequence. In pipelining, the various logic circuits (electronic circuits within the CPU that perform arithmetic calculations) used on a specific calculation are continuously in use, with data streaming from one logic unit to the next without interruption. For instance, a sequence of operations on a large group of numbers might be to add adjacent numbers together in pairs beginning with the first and second numbers, then to multiply these results by some constant, and finally to store these results in memory. The addition operation would be Step 1, the multiplication operation would be Step 2, and the assigning of the result to a memory location would be Step 3 in the sequence. The CPU could perform the sequence of operations on the first pair of numbers, store the result in memory, and then pass the second pair of numbers through, and continue on like this. For a small group of numbers this would be fine, but since supercomputers perform calculations on massive groups of numbers this technique would be inefficient, because only one operation at a time is being performed. Pipelining overcomes the inefficiency of performing the sequence of operations on only one piece of data at a time until the sequence is finished. The pipeline method would be to perform Step 1 on the first pair of data and move it to Step 2. As the results of the first operation move to Step 2, the second pair of data moves into Step 1. 
Steps 1 and 2 are then performed simultaneously on their respective data, and the results of the operations are moved ahead in the pipeline, or the sequence of operations performed on a group of data. Hence the third pair of numbers is in Step 1, the second pair of numbers is in Step 2, and the first pair of numbers is in Step 3. The remainder of the calculations are performed in this way, with the specific logic units in the sequence always operating simultaneously on data. The example used above to illustrate pipelining can also be used to illustrate the concept of parallelism (see Parallel Processing). A computer that parallel-processed data would perform Step 1 on multiple pieces of data simultaneously, then move these to Step 2, then to Step 3, each step being performed on the multiple pieces of data simultaneously. One way to do this is to have multiple logic circuits in the CPU that perform the same sequence of operations. Another way is to link together multiple CPUs, synchronize them (meaning that they all perform an operation at exactly the same time), and have each CPU perform the necessary operation on one of the pieces of data. Pipelining and parallelism are combined and used to a greater or lesser extent in all supercomputers. Until the early 1990s, parallelism achieved through the interconnection of CPUs was limited to between 2 and 16 CPUs connected in parallel. However, the rapid increase in processing speed of off-the-shelf microprocessors used in personal computers and workstations made possible massively parallel processing (MPP) supercomputers. While the individual processors used in MPP supercomputers are not as fast as specially designed supercomputer CPUs, they are much less expensive, and because of this, hundreds or even thousands of these processors can be linked together to achieve extreme parallelism. III SUPERCOMPUTER PERFORMANCE Supercomputers are used to create mathematical models of complex phenomena. 
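The three-step example above can be sketched in code. This is an illustrative Python model, not how any actual supercomputer schedules work: it only counts time steps, crediting the pipeline with one finished result per step once all three stages are busy.

```python
CONSTANT = 10  # the "some constant" of Step 2 in the example

def sequential_ticks(pairs):
    """Finish all three steps on each pair before starting the next one."""
    memory, ticks = [], 0
    for a, b in pairs:
        s = a + b            # Step 1: add the pair
        p = s * CONSTANT     # Step 2: multiply by a constant
        memory.append(p)     # Step 3: store the result
        ticks += 3           # three steps, strictly one after another
    return memory, ticks

def pipelined_ticks(pairs):
    """Overlap the steps: while pair i is in Step 2, pair i+1 is in Step 1."""
    stages = 3
    # Once the pipeline is full, one result emerges per tick, so the whole
    # group takes n + stages - 1 ticks instead of n * stages.
    ticks = len(pairs) + stages - 1
    memory = [(a + b) * CONSTANT for a, b in pairs]
    return memory, ticks

pairs = [(4, 6), (7, 2), (9, 5), (8, 8), (2, 9)]
seq_mem, seq_t = sequential_ticks(pairs)
pipe_mem, pipe_t = pipelined_ticks(pairs)
# Same results either way; 15 ticks sequentially versus 7 pipelined.
```

The advantage grows with the size of the group: for thousands of pairs, the pipelined count approaches one result per tick, which is why the text calls the one-at-a-time approach inefficient for massive groups of numbers.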
These models usually contain long sequences of numbers that are manipulated by the supercomputer with a kind of mathematics called matrix arithmetic. For example, to accurately predict the weather, scientists use mathematical models that contain current temperature, air pressure, humidity, and wind velocity measurements at many neighboring locations and altitudes. Using these numbers as data, the computer makes many calculations to simulate the physical interactions that will likely occur during the forecast period. When supercomputers perform matrix arithmetic on large sets of numbers, it is often necessary to multiply many pairs of numbers together and to then add up each of their individual products. A simple example of such a calculation is: (4 × 6) + (7 × 2) + (9 × 5) + (8 × 8) + (2 × 9) = 165. In real problems, the strings of numbers used in calculations are usually much longer, often containing hundreds or thousands of pairs of numbers. Furthermore, the numbers used are not simple integers but more complicated types of numbers called floating-point numbers, which allow a wide range of digits before and after the decimal point, for example 5,063,937.9120834. The various operations of adding, subtracting, multiplying, and dividing floating-point numbers are collectively called floating-point operations. An important way of measuring a supercomputer’s performance is in the peak number of floating-point operations per second (FLOPS) that it can do. In the mid-1990s, the peak computational rate for state-of-the-art supercomputers was between 1 and 200 gigaflops (billion floating-point operations per second), depending on the specific model and configuration of the supercomputer. In July 1995, computer scientists at the University of Tokyo, in Japan, broke the 1 teraflops (1 trillion floating-point operations per second) mark with a computer they designed to perform astrophysical simulations. 
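The worked sum above is a five-element dot product, the basic unit of the matrix arithmetic described. A minimal Python check, using the common convention of counting one multiplication and one addition per pair as two floating-point operations:

```python
a = [4.0, 7.0, 9.0, 8.0, 2.0]
b = [6.0, 2.0, 5.0, 8.0, 9.0]

# Multiply each pair and add up the products: (4 × 6) + (7 × 2) + ... = 165.
result = sum(x * y for x, y in zip(a, b))

# One multiply plus one add per pair gives the flop count behind FLOPS ratings.
flops = 2 * len(a)
```

At 1 gigaflops, the low end of the mid-1990s range quoted above, a machine performs 100 million such 10-operation sums every second.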
Named GRAPE-4 (GRAvity PipE number 4), this MPP supercomputer consisted of 1,692 interconnected processors. In November 1996, Cray Research debuted the CRAY T3E-900, the first commercially available supercomputer to offer teraflops performance. In 1997 the Intel Corporation installed the teraflop machine Janus at Sandia National Laboratories in New Mexico. Janus is composed of 9,072 interconnected processors. Scientists use Janus for classified work such as weapons research as well as for unclassified scientific research such as modeling the impact of a comet on the earth. The definition of what a supercomputer is constantly changes with technological progress. The same technology that increases the speed of supercomputers also increases the speed of other types of computers. For instance, the first computer to be called a supercomputer, the Cray-1 developed by Cray Research and first sold in 1976, had a peak speed of 167 megaflops. This is only a few times faster than standard personal computers today, and well within the reach of some workstations. Contributed By: Steve Nelson (iii) (iv) Byte and Word Byte, in computer science, a unit of information built from bits, the smallest units of information used in computers. Bits have one of two values, either 0 or 1. These bit values physically correspond to whether transistors and other electronic circuitry in a computer are on or off. A byte is usually composed of 8 bits, although bytes composed of 16 bits are also used. See Number Systems. (v) RAM and Cache Memory Cache (computer), in computer science, an area of memory that holds frequently accessed data or program instructions for the purpose of speeding a computer system's performance. 
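The byte definition lends itself to a quick concrete check: with 8 bits, each either 0 or 1, a byte can take 2⁸ = 256 distinct values. A small Python illustration (the bit pattern chosen is arbitrary):

```python
bits = "01000001"  # an arbitrary 8-bit pattern, most significant bit first

# Each bit contributes its value times a power of two to the byte's value.
value = sum(int(b) << (7 - i) for i, b in enumerate(bits))  # here, 65

distinct_values = 2 ** len(bits)  # 8 bits give 256 possible patterns
```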
A cache consists of ultrafast static random-access memory (SRAM) chips, which rapidly move data to the central processing unit (the device in a computer that interprets and executes instructions). The process minimizes the amount of time the processor must be idle while it waits for data. This idle time is measured in clock cycles, the basic intervals of time in which a processor carries out its operations. The effectiveness of the cache depends on the speed of the chips and the quality of the algorithm that determines which data is most likely to be requested by the processor. See also Disk Cache. RAM, in computer science, acronym for random access memory. Semiconductor-based memory that can be read and written by the microprocessor or other hardware devices. The storage locations can be accessed in any order. Note that the various types of ROM memory are capable of random access. The term RAM, however, is generally understood to refer to volatile memory, which can be written as well as read. See also Computer; EPROM; PROM. Buffer (computer science), in computer science, an intermediate repository of data—a reserved portion of memory in which data is temporarily held pending an opportunity to complete its transfer to or from a storage device or another location in memory. Some devices, such as printers or the adapters supporting them, commonly have their own buffers. Q6: (i) (ii) Television I INTRODUCTION Television, system of sending and receiving pictures and sound by means of electronic signals transmitted through wires and optical fibers or by electromagnetic radiation. These signals are usually broadcast from a central source, a television station, to reception devices such as television sets in homes or relay stations such as those used by cable television service providers. Television is the most widespread form of communication in the world. 
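The cache behavior described, fast memory answering repeated requests so the processor idles less, can be sketched as a toy lookup. Everything here (the capacity, the FIFO eviction rule, the fabricated main-memory contents) is illustrative rather than how any real cache controller works:

```python
CACHE_SIZE = 4                 # illustrative capacity, in entries
main_memory = {addr: addr * 10 for addr in range(100)}  # fabricated slow store

cache = {}                     # fast SRAM-like store: address -> data
hits = misses = 0

def read(addr):
    """Return the data at addr, filling the cache on a miss."""
    global hits, misses
    if addr in cache:
        hits += 1              # fast path: no waiting on main memory
    else:
        misses += 1            # slow path: the processor would idle here
        if len(cache) >= CACHE_SIZE:
            cache.pop(next(iter(cache)))  # evict the oldest entry (FIFO)
        cache[addr] = main_memory[addr]
    return cache[addr]

for addr in [1, 2, 1, 1, 3, 2]:   # repeated addresses are served from cache
    read(addr)
# After this access pattern: 3 misses (first touches) and 3 hits (repeats).
```

The "quality of the algorithm" the text mentions corresponds to the eviction rule: a policy that keeps the data most likely to be requested again raises the hit count.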
Though most people will never meet the leader of a country, travel to the moon, or participate in a war, they can observe these experiences through the images on their television. Television has a variety of applications in society, business, and science. The most common use of television is as a source of information and entertainment for viewers in their homes. Security personnel also use televisions to monitor buildings, manufacturing plants, and numerous public facilities. Public utility employees use television to monitor the condition of an underground sewer line, using a camera attached to a robot arm or remote-control vehicle. Doctors can probe the interior of a human body with a microscopic television camera without having to conduct major surgery on the patient. Educators use television to reach students throughout the world. People in the United States have the most television sets per person of any country, with 835 sets per 1,000 people as of 2000. Canadians possessed 710 sets per 1,000 people during the same year. Japan, Germany, Denmark, and Finland follow North America in the number of sets per person. II HOW TELEVISION WORKS A television program is created by focusing a television camera on a scene. The camera changes light from the scene into an electric signal, called the video signal, which varies depending on the strength, or brightness, of light received from each part of the scene. In color television, the camera produces an electric signal that varies depending on the strength of each color of light. Three or four cameras are typically used to produce a television program (see Television Production). The video signals from the cameras are processed in a control room, then combined with video signals from other cameras and sources, such as videotape recorders, to provide the variety of images and special effects seen during a television program. 
Audio signals from microphones placed in or near the scene also flow to the control room, where they are amplified and combined. Except in the case of live broadcasts (such as news and sports programs), the video and audio signals are recorded on tape and edited, assembled with the use of computers into the final program, and broadcast later. In a typical television station, the signals from live and recorded features, including commercials, are put together in a master control room to provide the station's continuous broadcast schedule. Throughout the broadcast day, computers start and stop videotape machines and other program sources, and switch the various audio and visual signals. The signals are then sent to the transmitter. The transmitter amplifies the video and audio signals, and uses the electronic signals to modulate, or vary, carrier waves (oscillating electric currents that carry information). The carrier waves are combined (diplexed), then sent to the transmitting antenna, usually placed on the tallest available structure in a given broadcast area. In the antenna, the oscillations of the carrier waves generate electromagnetic waves of energy that radiate horizontally throughout the atmosphere. The waves excite weak electric currents in all television-receiving antennas within range. These currents have the characteristics of the original picture and sound currents. The currents flow from the antenna attached to the television into the television receiver, where they are electronically separated into audio and video signals. These signals are amplified and sent to the picture tube and the speakers, where they produce the picture and sound portions of the program. III THE TELEVISION CAMERA The television camera is the first tool used to produce a television program. 
Most cameras have three basic elements: an optical system for capturing an image, a pickup device for translating the image into electronic signals, and an encoder for encoding signals so they may be transmitted. A Optical System The optical system of a television camera includes a fixed lens that is used to focus the scene onto the front of the pickup device. Color cameras also have a system of prisms and mirrors that separate incoming light from a scene into the three primary colors: red, green, and blue. Each beam of light is then directed to its own pickup device. Almost any color can be reproduced by combining these colors in the appropriate proportions. Most inexpensive consumer video cameras use a filter that breaks light from an image into the three primary colors. B Pickup Device The pickup device takes light from a scene and translates it into electronic signals. The first pickup devices used in cameras were camera tubes. The first camera tube used in television was the iconoscope. Invented in the 1920s, it needed a great deal of light to produce a signal, so it was impractical to use in a low-light setting, such as an outdoor evening scene. The image-orthicon tube and the vidicon tube were invented in the 1940s and were a vast improvement on the iconoscope. They needed only about as much light to record a scene as human eyes need to see. Instead of camera tubes, most modern cameras now use light-sensitive integrated circuits (tiny, electronic devices) called charge-coupled devices (CCDs). When recording television images, the pickup device replaces the function of film used in making movies. In a camera tube pickup device, the front of the tube contains a layer of photosensitive material called a target. In the image-orthicon tube, the target material is photoemissive—that is, it emits electrons when it is struck by light. In the vidicon camera tube, the target material is photoconductive—that is, it conducts electricity when it is struck by light. 
In both cases, the lens of a camera focuses light from a scene onto the front of the camera tube, and this light causes changes in the target material. The light image is transformed into an electronic image, which can then be read from the back of the target by a beam of electrons (tiny, negatively charged particles). The beam of electrons is produced by an electron gun at the back of the camera tube. The beam is controlled by a system of electromagnets that make the beam systematically scan the target material. Whenever the electron beam hits the bright parts of the electronic image on the target material, the tube emits a high voltage, and when the beam hits a dark part of the image, the tube emits a low voltage. This varying voltage is the electronic television signal. A charge-coupled device (CCD) can be much smaller than a camera tube and is much more durable. As a result, cameras with CCDs are more compact and portable than those using a camera tube. The image they create is less vulnerable to distortion and is therefore clearer. In a CCD, the light from a scene strikes an array of photodiodes arranged on a silicon chip. Photodiodes are devices that conduct electricity when they are struck by light; they send this electricity to tiny capacitors. The capacitors store the electrical charge, with the amount of charge stored depending on the strength of the light that struck the photodiode. The CCD converts the incoming light from the scene into an electrical signal by releasing the charges from the photodiodes in an order that follows the scanning pattern that the receiver will follow in re-creating the image. C Encoder In color television, the signals from the three camera tubes or charge-coupled devices are first amplified, then sent to the encoder before leaving the camera. The encoder combines the three signals into a single electronic signal that contains the brightness information of the colors (luminance). 
It then adds another signal that contains the code used to combine the colors (color burst), and the synchronization information used to direct the television receiver to follow the same scanning pattern as the camera. The color television receiver uses the color burst part of the signal to separate the three colors again. IV SCANNING Television cameras and television receivers use a procedure called scanning to record visual images and re-create them on a television screen. The television camera records an image, such as a scene in a television show, by breaking it up into a series of lines and scanning over each line with the beam or beams of electrons contained in the camera tube. The pattern is created in a CCD camera by the array of photodiodes. One scan of an image produces one static picture, like a single frame in a film. The camera must scan a scene many times per second to record a continuous image. In the television receiver, another electron beam—or set of electron beams, in the case of color television—uses the signals recorded by the camera to reproduce the original image on the receiver's screen. Just like the beam or beams in the camera, the electron beam in the receiver must scan the screen many times per second to reproduce a continuous image. In order for television to work, television images must be scanned and recorded in the same manner as television receivers reproduce them. In the United States, broadcasters and television manufacturers have agreed on a standard of breaking images down into 525 horizontal lines, and scanning images 30 times per second. In Europe, most of Asia, and Australia, images are broken down into 625 lines, and they are scanned 25 times per second. Special equipment can be used to make television images that have been recorded in one standard fit a television system that uses a different standard. Telecine equipment (from the words television and cinema) is used to convert film and slide images to television signals. 
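The two scanning standards quoted imply nearly identical line rates, which is a one-line computation from the numbers in the text:

```python
# Lines scanned per second under each broadcast standard.
us_rate = 525 * 30       # 525 lines per frame, 30 frames per second
europe_rate = 625 * 25   # 625 lines per frame, 25 frames per second

print(us_rate, europe_rate)  # 15750 15625
```

Standards-conversion equipment such as the telecine gear mentioned must bridge both the line count and the frame rate when moving material between the two systems.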
The images from film projectors or slides are directed by a system of mirrors toward the telecine camera, which records the images as video signals. The scanning method that is most commonly used today is called interlaced scanning. It produces a clear picture that does not fade. When an image is scanned line by line from top to bottom, the top of the image on the screen will begin to fade by the time the electron beam reaches the bottom of the screen. With interlaced scanning, odd-numbered lines are scanned first, and the remaining even-numbered lines are scanned next. A full image is still produced 30 times a second, but the electron beam travels from the top of the screen to the bottom of the screen twice for every time a full image is produced. V TRANSMISSION OF TELEVISION SIGNALS The audio and video signals of a television program are broadcast through the air by a transmitter. The transmitter superimposes the information in the camera's electronic signals onto carrier waves. The transmitter amplifies the carrier waves, making them much stronger, and sends them to a transmitting antenna. This transmitting antenna radiates the carrier waves in all directions, and the waves travel through the air to antennas connected to television sets or relay stations. A The Transmitter The transmitter superimposes the information from the electronic television signal onto carrier waves by modulating (varying) either the wave's amplitude, which corresponds to the wave's strength, or the wave's frequency, which corresponds to the number of times the wave oscillates each second (see Radio: Modulation). The amplitude of one carrier wave is modulated to carry the video signal (amplitude modulation, or AM) and the frequency of another wave is modulated to carry the audio signal (frequency modulation, or FM). These waves are combined to produce a carrier wave that contains both the video and audio information. 
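The interlaced order described, odd-numbered lines in the first pass and even-numbered lines in the second, can be written out explicitly; a sketch for an arbitrary line count:

```python
def interlaced_order(total_lines):
    """Order in which lines are scanned: the odd field, then the even field."""
    odd_field = list(range(1, total_lines + 1, 2))   # first top-to-bottom pass
    even_field = list(range(2, total_lines + 1, 2))  # second pass fills the gaps
    return odd_field + even_field

# For a toy 9-line frame, two passes build one full image:
print(interlaced_order(9))  # [1, 3, 5, 7, 9, 2, 4, 6, 8]
```

With 525 lines scanned 30 times per second, the beam therefore sweeps from top to bottom 60 times per second, once per field, which is why the image does not fade before the pass completes.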
The transmitter first generates and modulates the wave at a low power of several watts. After modulation, the transmitter amplifies the carrier signal to the desired power level, sometimes many kilowatts (thousands of watts), depending on how far the signal needs to travel, and then sends the carrier wave to the transmitting antenna. The frequency of carrier waves is measured in hertz (Hz), which is equal to the number of wave peaks that pass by a point every second. The frequency of the modulated carrier wave varies, covering a range, or band, of about 4 million hertz, or 4 megahertz (4 MHz). This band is much wider than the band needed for radio broadcasting, which is about 10,000 Hz, or 10 kilohertz (10 kHz). Television stations that broadcast in the same area send out carrier waves on different bands of frequencies, each called a channel, so that the signals from different stations do not mix. To accommodate all the channels, which are spaced at least 6 MHz apart, television carrier frequencies are very high. Six MHz is only a small slice of spectrum, however, when stations broadcast at frequencies between about 50 and 800 MHz. In the United States and Canada, there are two ranges of frequency bands that cover 68 different channels. The first range is called very high frequency (VHF), and it includes frequencies from 54 to 72 MHz, from 76 to 88 MHz, and from 174 to 216 MHz. These frequencies correspond to channels 2 through 13 on a television set. The second range, ultrahigh frequency (UHF), includes frequencies from 470 MHz to 806 MHz, and it corresponds to channels 14 through 69 (see Radio and Television Broadcasting). The high-frequency waves radiated by transmitting antennas can travel only in a straight line, and may be blocked by obstacles in between the transmitting and receiving antennas. For this reason, transmitting antennas must be placed on tall buildings or towers. In practice, these transmitters have a range of about 120 km (75 mi). 
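The band edges quoted, divided into 6-MHz channels, account for the channel numbering; a quick Python tally (band edges per the text, with the conventional 470 MHz lower edge for UHF):

```python
CHANNEL_WIDTH_MHZ = 6

# (low, high) edges of the North American broadcast bands, in MHz.
vhf_bands = [(54, 72), (76, 88), (174, 216)]   # channels 2-13
uhf_band = (470, 806)                          # channels 14-69

vhf_channels = sum((hi - lo) // CHANNEL_WIDTH_MHZ for lo, hi in vhf_bands)
uhf_channels = (uhf_band[1] - uhf_band[0]) // CHANNEL_WIDTH_MHZ

print(vhf_channels, uhf_channels)  # 12 56
```

Twelve VHF channels plus 56 UHF channels matches the numbering: 2 through 13 in the VHF range and 14 through 69 in the UHF range.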
In addition to being blocked, some television signals may reflect off buildings or hills and reach a receiving antenna a little later than the signals that travel directly to the antenna. The result is a ghost, or second image, that appears on the television screen. Television signals may, however, be sent clearly from almost any point on earth to any other—and from spacecraft to earth—by means of cables, microwave relay stations, and communications satellites. B Cable Transmission Cable television was first developed in the late 1940s to serve shadow areas—that is, areas that are blocked from receiving signals from a station's transmitting antenna. In these areas, a community antenna receives the signal, and the signal is then redistributed to the shadow areas by coaxial cable (a large cable with a wire core that can transmit the wide band of frequencies required for television) or, more recently, by fiber-optic cable. Viewers in most areas can now subscribe to a cable television service, which provides a wide variety of television programs and films adapted for television that are transmitted by cable directly to the viewer's television set. Digital data-compression techniques, which convert television signals to digital code in an efficient way, have increased cable's capacity to 500 or more channels. C Microwave Relay Transmission Microwave relay stations are tall towers that receive television signals, amplify them, and retransmit them as a microwave signal to the next relay station. Microwaves are electromagnetic waves that are much shorter than normal television carrier waves and can travel farther. The stations are placed about 50 km (30 mi) apart. Television networks once relied on relay stations to broadcast to affiliate stations located in cities far from the original source of the broadcast. The affiliate stations received the microwave transmission and rebroadcast it as a normal television signal to the local area. 
This system has now been replaced almost entirely by satellite transmission in which networks send or uplink their program signals to a satellite that in turn downlinks the signals to affiliate stations. D Satellite Transmission Communications satellites receive television signals from a ground station, amplify them, and relay them back to the earth over an antenna that covers a specified terrestrial area. The satellites circle the earth in a geosynchronous orbit, which means they stay above the same place on the earth at all times. Instead of a normal aerial antenna, receiving dishes are used to receive the signal and deliver it to the television set or station. The dishes can be fairly small for home use, or large and powerful, such as those used by cable and network television stations. Satellite transmissions are used to efficiently distribute television and radio programs from one geographic location to another by networks; cable companies; individual broadcasters; program providers; and industrial, educational, and other organizations. Programs intended for specific subscribers are scrambled so that only the intended recipients, with appropriate decoders, can receive the program. Direct-broadcast satellites (DBS) are used worldwide to deliver TV programming directly to TV receivers through small home dishes. The Federal Communications Commission (FCC) licensed several firms in the 1980s to begin DBS service in the United States. The actual launch of DBS satellites, however, was delayed due to the economic factors involved in developing a digital video compression system. The arrival in the early 1990s of digital compression made it possible for a single DBS satellite to carry more than 200 TV channels. DBS systems in North America are operating in the Ku band (12.0-19.0 GHz). DBS home systems consist of the receiving dish antenna and a low-noise amplifier that boosts the antenna signal level and feeds it to a coaxial cable. 
A receiving box converts the superhigh frequency (SHF) signals to lower frequencies and puts them on channels that the home TV set can display. VI TELEVISION RECEIVER The television receiver translates the pulses of electric current from the antenna or cable back into images and sound. A traditional television set integrates the receiver, audio system, and picture tube into one device. However, some cable TV systems use a separate component such as a set-top box as a receiver. A high-definition television (HDTV) set integrates the receiver directly into the set like a traditional TV. However, some televisions receive high-definition signals and display them on a monitor. In these instances, an external receiver is required. A Tuner The tuner blocks all signals other than that of the desired channel. Blocking is done by the radio frequency (RF) amplifier. The RF amplifier is set to amplify a frequency band, 6 MHz wide, transmitted by a television station; all other frequencies are blocked. A channel selector connected to the amplifier determines the particular frequency band that is amplified. When a new channel is selected, the amplifier is reset accordingly. In this way, the band, or channel, picked out by the home receiver is changed. Once the viewer selects a channel, the incoming signal is amplified, and the video, audio, and scanning signals are separated from the higher-frequency carrier waves by a process called demodulation. The tuner amplifies the weak signal intercepted by the antenna and partially demodulates (decodes) it by converting the carrier frequency to a lower frequency—the intermediate frequency. Intermediate-frequency amplifiers further increase the strength of the signals received from the antenna. After the incoming signals have been amplified, audio, scanning, and video signals are separated. 
B Audio System The audio system consists of a discriminator, which translates the audio portion of the carrier wave back into an electronic audio signal; an amplifier; and a speaker. The amplifier strengthens the audio signal from the discriminator and sends it to the speaker, which converts the electrical waves into sound waves that travel through the air to the listener. C Picture Tube The television picture tube receives video signals from the tuner and translates the signals back into images. The images are created by an electron gun in the back of the picture tube, which shoots a beam of electrons toward the back of the television screen. A black-and-white picture tube contains just one electron gun, while a color picture tube contains three electron guns, one for each of the primary colors of light (red, green, and blue). Part of the video signal goes to a magnetic coil that directs the beam and makes it scan the screen in the same manner as the camera originally scanned the scene. The rest of the signal directs the strength of the electron beam as it strikes the screen. The screen is coated with phosphor, a substance that glows when it is struck by electrons (see Luminescence). The stronger the electron beam, the stronger the glow and the brighter that section of the scene appears. In color television, a portion of the video signal is used to separate out the three color signals, which are then sent to their corresponding electron beams. The screen is coated with tiny phosphor strips or dots that are arranged in groups of three: one strip or dot that emits blue, one that emits green, and one that emits red. Before each electron beam hits the screen, it passes through a shadow mask located just behind the screen. The shadow mask is a layer of opaque material that is covered with slots or holes. It partially blocks the beam corresponding to one color and prevents it from hitting dots of another color. 
As a result, the electron beam directed by signals for the color blue can strike and light up only blue dots. The result is similar for the beams corresponding to red and green. Images in the three different colors are produced on the television screen. The eye automatically combines these images to produce a single image having the entire spectrum of colors formed by mixing the primary colors in various proportions. VII TELEVISION'S HISTORY The scientific principles on which television is based were discovered in the course of basic research. Only much later were these concepts applied to television as it is known today. The first practical television system began operating in the 1940s. In 1873 the Scottish scientist James Clerk Maxwell predicted the existence of the electromagnetic waves that make it possible to transmit ordinary television broadcasts. Also in 1873 the English scientist Willoughby Smith and his assistant Joseph May noticed that the electrical conductivity of the element selenium changes when light falls on it. This property, known as photoconductivity, is used in the vidicon television camera tube. In 1888 the German physicist Wilhelm Hallwachs noticed that certain substances emit electrons when exposed to light. This effect, called photoemission, was applied to the image-orthicon television camera tube. Although several methods of changing light into electric current were discovered, it was some time before the methods were applied to the construction of a television system. The main problem was that the currents produced were weak and no effective method of amplifying them was known. Then, in 1906, the American engineer Lee De Forest patented the triode vacuum tube. By 1920 the tube had been improved to the point where it could be used to amplify electric currents for television. A Nipkow Disk Some of the earliest work on television began in 1884, when the German engineer Paul Nipkow designed the first true television mechanism. 
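The additive mixing described above, in which the eye combines the red, green, and blue images into one full-color picture, can be illustrated with a small sketch. The mix() helper and its color names are a hypothetical simplification, not part of any television standard.

```python
# Additive color: the eye combines the red, green, and blue images into
# one full-color picture. This tiny mix() helper is a hypothetical
# simplification that names a few of the combinations.

def mix(red, green, blue):
    """Rough name of the perceived color for phosphor strengths 0-255."""
    if red == green == blue == 0:
        return "black"
    if red == green == blue:
        return "white/gray"
    if red and green and not blue:
        return "yellow"
    if red and blue and not green:
        return "magenta"
    if green and blue and not red:
        return "cyan"
    return "mixed"

print(mix(255, 255, 0))    # red + green phosphors glow -> yellow
print(mix(128, 128, 128))  # equal strengths -> a shade of gray
```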
In front of a brightly lit picture, he placed a scanning disk (called a Nipkow disk) with a spiral pattern of holes punched in it. As the disk revolved, the first hole would cross the picture at the top. The second hole passed across the picture a little lower down, the third hole lower still, and so on. In effect, he designed a disk with its own form of scanning. With each complete revolution of the disk, all parts of the picture would be briefly exposed in turn. The disk revolved quickly, accomplishing the scanning within one-fifteenth of a second. Similar disks rotated in the camera and receiver. Light passing through these disks created crude television images. Nipkow's mechanical scanner was used from 1923 to 1925 in experimental television systems developed in the United States by the inventor Charles F. Jenkins, and in England by the inventor John L. Baird. The pictures were crude but recognizable. The receiver also used a Nipkow disk placed in front of a lamp whose brightness was controlled by the signal from the light-sensitive tube behind the disk in the transmitter. In 1926 Baird demonstrated a system that used a 30-hole Nipkow disk. B Electronic Television While mechanical scanning was being developed, an electronic method of scanning was conceived in 1908 by the English inventor A. A. Campbell-Swinton. He proposed using a screen to collect a charge whose pattern would correspond to the scene, and an electron gun to neutralize this charge and create a varying electric current. This concept was used by the Russian-born American physicist Vladimir Kosma Zworykin in his iconoscope camera tube of the 1920s. A similar arrangement was later used in the image-orthicon tube. The American inventor and engineer Philo Taylor Farnsworth also devised an electronic television system in the 1920s. He called his television camera, which converted each element of an image into an electrical signal, an image dissector. 
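The scanning arithmetic implied by the description above (one line per hole, one complete picture per revolution, a revolution every one-fifteenth of a second) can be worked out in a few lines; the nipkow_rates helper is illustrative.

```python
# Scanning arithmetic for a Nipkow disk, following the description
# above: each hole traces one line, and one revolution (1/15 second)
# exposes every part of the picture once.

def nipkow_rates(holes, revolutions_per_second=15):
    lines_per_picture = holes               # one line per hole
    pictures_per_second = revolutions_per_second
    lines_per_second = lines_per_picture * pictures_per_second
    return lines_per_picture, pictures_per_second, lines_per_second

# Baird's 1926 demonstration used a 30-hole disk.
print(nipkow_rates(30))  # -> (30, 15, 450)
```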
Farnsworth continued to improve his system in the 1930s, but his project lost its financial backing at the beginning of World War II (1939-1945). Many aspects of Farnsworth's image dissector were also used in Zworykin's more successful iconoscope camera. Cathode rays, or beams of electrons in evacuated glass tubes, were first noted by the British chemist and physicist Sir William Crookes in 1878. By 1908 Campbell-Swinton and a Russian, Boris Rosing, had independently suggested that a cathode-ray tube (CRT) be used to reproduce the television picture on a phosphor-coated screen. The CRT was developed for use in television during the 1930s by the American electrical engineer Allen B. DuMont. DuMont's method of picture reproduction is essentially the same as the one used today. The first home television receiver was demonstrated in Schenectady, New York, on January 13, 1928, by the American inventor Ernst F. W. Alexanderson. The images on the 76-mm (3-in) screen were poor and unsteady, but the set could be used in the home. A number of these receivers were built by the General Electric Company (GE) and distributed in Schenectady. On May 10, 1928, station WGY began regular broadcasting to this area. C Public Broadcasting The first public broadcasting of television programs took place in London in 1936. Broadcasts from two competing firms were shown. Marconi-EMI produced a 405-line picture at 25 frames per second, and Baird Television produced a 240-line picture at 25 frames per second. In early 1937 the Marconi system, clearly superior, was chosen as the standard. In 1941 the United States adopted a 525-line, 30-image-per-second standard. The first regular television broadcasts began in the United States in 1939, but after two years they were suspended until shortly after the end of World War II in 1945. A television broadcasting boom began just after the war in 1946, and the industry grew rapidly. 
The development of color television had always lagged a few steps behind that of black-and-white (monochrome) television. At first, this was because color television was technically more complex. Later, however, the growth of color television was delayed because it had to be compatible with monochrome—that is, color television would have to use the same channels as monochrome television and be receivable in black and white on monochrome sets. D Color Television It was realized as early as 1904 that color television was possible using the three primary colors of light: red, green, and blue. In 1928 Baird demonstrated color television using a Nipkow disk in which three sets of openings scanned the scene. A fairly refined color television system was introduced in New York City in 1940 by the Hungarian-born American inventor Peter Goldmark. In 1951 public broadcasting of color television was begun using Goldmark's system. However, the system was incompatible with monochrome television, and the experiment was dropped at the end of the year. Compatible color television was perfected in 1953, and public broadcasting in color was revived a year later. Other developments that improved the quality of television were larger screens and better technology for broadcasting and transmitting television signals. Early television screens were either 18 or 25 cm (7 or 10 in) diagonally across. Television screens now come in a range of sizes. Those that use built-in cathode-ray tubes (CRTs) measure as large as 89 or 100 cm (35 or 40 in) diagonally. Projection televisions (PTVs), first introduced in the 1970s, now come with screens as large as 2 m (7 ft) diagonally. The most common are rear-projection sets in which three CRTs beam their combined light indirectly to a screen via an assembly of lenses and mirrors. 
Another type of PTV is the front-projection set, which is set up like a motion picture projector to project light across a room to a separate screen that can be as large as a wall in a home allows. Newer types of PTVs use liquid-crystal display (LCD) technology or an array of micro mirrors, also known as a digital light processor (DLP), instead of cathode-ray tubes. Manufacturers have also developed very small, portable television sets with screens that are 7.6 cm (3 in) diagonally across. E Television in Space Television evolved from an entertainment medium to a scientific medium during the exploration of outer space. Knowing that broadcast signals could be sent from transmitters in space, the National Aeronautics and Space Administration (NASA) began developing satellites with television cameras. Unmanned spacecraft of the Ranger and Surveyor series relayed thousands of close-up pictures of the moon's surface back to earth for scientific analysis and preparation for lunar landings. The successful U.S. manned landing on the moon in July 1969 was documented with live black-and-white broadcasts made from the surface of the moon. NASA's use of television helped in the development of photosensitive camera lenses and more-sophisticated transmitters that could send images from a quarter-million miles away. Since 1960 television cameras have also been used extensively on orbiting weather satellites. Video cameras trained on Earth record pictures of cloud cover and weather patterns during the day, and infrared cameras (cameras that record light waves radiated at infrared wavelengths) detect surface temperatures. The ten Television Infrared Observation Satellites (TIROS) launched by NASA paved the way for the operational satellites of the Environmental Science Services Administration (ESSA), which in 1970 became a part of the National Oceanic and Atmospheric Administration (NOAA). 
The pictures returned from these satellites aid not only weather prediction but also understanding of global weather systems. High-resolution cameras mounted in Landsat satellites have been successfully used to provide surveys of crop, mineral, and marine resources. F Home Recording In time, the process of watching images on a television screen made people interested in either producing their own images or watching programming at their leisure, rather than during standard broadcasting times. It became apparent that programming on videotape— which had been in use since the 1950s—could be adapted for use by the same people who were buying televisions. Affordable videocassette recorders (VCRs) were introduced in the 1970s and in the 1980s became almost as common as television sets. During the late 1990s and early 2000s the digital video disc (DVD) player had the most successful product launch in consumer electronics history. According to the Consumer Electronics Association (CEA), which represents manufacturers and retailers of audio and video products, 30 million DVD players were sold in the United States in a record five-year period from 1997 to 2001. It took compact disc (CD) players 8 years and VCRs 13 years to achieve that 30-million milestone. The same size as a CD, a DVD can store enough data to hold a full-length motion picture with a resolution twice that of a videocassette. The DVD player also offered the digital surround-sound quality experienced in a state-of-the-art movie theater. Beginning in 2001 some DVD players also offered home recording capability. G Digital Television Digital television receivers, which convert the analog, or continuous, electronic television signals received by an antenna into an electronic digital code (a series of ones and zeros), are currently available. The analog signal is first sampled and stored as a digital code, then processed, and finally retrieved. 
This method provides a cleaner signal that is less vulnerable to distortion, but in the event of technical difficulties, the viewer is likely to receive no picture at all rather than the degraded picture that sometimes occurs with analog reception. The difference in quality between digital television and regular television is similar to the difference between a compact disc recording (using digital technology) and an audiotape or long-playing record. The high-definition television (HDTV) system was developed in the 1980s. It uses 1,080 lines and a wide-screen format, providing a significantly clearer picture than the traditional 525- and 625-line television screens. Each line in HDTV also contains more information than normal formats. HDTV is transmitted using digital technology. Because it takes a huge amount of coded information to represent a visual image—engineers believe HDTV will need about 30 million bits (ones and zeros of the digital code) each second—data-compression techniques have been developed to reduce the number of bits that need to be transmitted. With these techniques, digital systems need to continuously transmit codes only for a scene in which images are changing; the systems can compress the recurring codes for images that remain the same (such as the background) into a single code. Digital technology is being developed that will offer sharper pictures on wider screens, and HDTV with cinema-quality images. A fully digital system was demonstrated in the United States in the 1990s. A common world standard for digital television, the MPEG-2, was agreed on in April 1993 at a meeting of engineers representing manufacturers and broadcasters from 18 countries. Because HDTV receivers initially cost much more than regular television sets, and broadcasts of HDTV and regular television are incompatible, the transition from one format to the next could take many years. The method endorsed by the U.S. 
Congress and the FCC to ease this transition is to give existing television networks a second band of frequencies on which to broadcast, allowing networks to broadcast in both formats at the same time. Engineers are also working on making HDTV compatible with computers and telecommunications equipment so that HDTV technology may be applied to other systems besides home television, such as medical devices, security systems, and computer-aided manufacturing (CAM). H Flat Panel Display In addition to getting clearer, televisions are also getting thinner. Flat panel displays, some just a few centimeters thick, offer an alternative to bulky cathode ray tube televisions. Even the largest flat panel display televisions are thin enough to be hung on the wall like a painting. Many flat panel TVs use liquid-crystal display (LCD) screens that make use of a special substance that changes properties when a small electric current is applied to it. LCD technology has already been used extensively in laptop computers. LCD television screens are flat, use very little electricity, and work well for small, portable television sets. LCD has not been as successful, however, for larger television screens. Flat panel TVs made from gas-plasma displays can be much larger. In gas-plasma displays, a small electric current stimulates an inert gas sandwiched between glass panels, including one coated with phosphors that emit light in various colors. While just 8 cm (3 in) thick, plasma screens can be more than 150 cm (60 in) diagonally. I Computer and Internet Integration As online computer systems become more popular, televisions and computers are increasingly integrated. Such technologies combine the capabilities of personal computers, television, DVD players, and in some cases telephones, and greatly expand the kinds of services that can be provided. 
For example, computer-like hard drives in set-top recorders automatically store a TV program as it is being received so that the consumer can pause live TV, replay a scene, or skip ahead. For programs that consumers want to record for future viewing, a hard drive makes it possible to store a number of shows. Some set-top devices offer Internet access through a dial-up modem or broadband connection. Others allow the consumer to browse the World Wide Web on their TV screen. When a device has both a hard drive and a broadband connection, consumers may be able to download a specific program, opening the way for true video on demand. Consumers may eventually need only one system or device, known as an information appliance, which they could use for entertainment, communication, shopping, and banking in the convenience of their home. Reviewed By: Michael Antonoff Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. (iii) Microwave Oven Microwave Oven, appliance that uses electromagnetic energy to heat and cook foods. A microwave oven uses microwaves, very short radio waves commonly employed in radar and satellite communications. When concentrated within a small space, these waves efficiently heat water and other substances within foods. In a microwave oven, an electronic vacuum tube known as a magnetron produces an oscillating beam of microwaves. Before passing into the cooking space, the microwaves are sent through a fanlike set of spinning metal blades called a stirrer. The stirrer scatters the microwaves, dispersing them evenly within the oven, where they are absorbed by the food. Within the food the microwaves orient molecules, particularly water molecules, in a specific direction. The oscillating effect produced by the magnetron changes the orientation of the microwaves millions of times per second. The water molecules begin to vibrate as they undergo equally rapid changes in direction. 
This vibration produces heat, which in turn cooks the food. Microwaves cook food rapidly and efficiently because, unlike conventional ovens, they heat only the food and not the air or the oven walls. The heat spreads within food by conduction (see Heat Transfer). Microwave ovens tend to cook moist food more quickly than dry foods, because there is more water to absorb the microwaves. However, microwaves cannot penetrate deeply into foods, sometimes making it difficult to cook thicker foods. Microwaves pass through many types of glass, paper, ceramics, and plastics, making many containers composed of these materials good for holding food; microwave instructions detail exactly which containers are safe for microwave use. Metal containers are particularly unsuitable because they reflect microwaves and prevent food from cooking. Metal objects may also reflect microwaves back into the magnetron and cause damage. The door of the oven should always be securely closed and properly sealed to prevent escape of microwaves. Leakage of microwaves affects cooking efficiency and can pose a health hazard to anyone near the oven. The discovery that microwaves could cook food was accidental. In 1945 Percy L. Spencer, a technician at the Raytheon Company, was experimenting with a magnetron designed to produce short radio waves for a radar system. Standing close to the magnetron, he noticed that a candy bar in his pocket melted even though he felt no heat. Raytheon developed this food-heating capacity and introduced the first microwave oven, then called a radar range, in the early 1950s. Although it was slow to catch on at first, the microwave oven has since grown steadily in popularity to its current status as a common household appliance. (iv) Radar I INTRODUCTION Radar (Radio Detection And Ranging), remote detection system used to locate and identify objects. 
Radar signals bounce off objects in their path, and the radar system detects the echoes of signals that return. Radar can determine a number of properties of a distant object, such as its distance, speed, direction of motion, and shape. Radar can detect objects out of the range of sight and works in all weather conditions, making it a vital and versatile tool for many industries. Radar has many uses, including aiding navigation at sea and in the air, helping detect military forces, improving traffic safety, and providing scientific data. One of radar’s primary uses is air traffic control, both civilian and military. Large networks of ground-based radar systems help air traffic controllers keep track of aircraft and prevent midair collisions. Commercial and military ships also use radar as a navigation aid to prevent collisions between ships and to alert ships of obstacles, especially in bad weather conditions when visibility is poor. Military forces around the world use radar to detect aircraft and missiles, troop movement, and ships at sea, as well as to target various types of weapons. Radar is a valuable tool for the police in catching speeding motorists. In the world of science, meteorologists use radar to observe and forecast the weather (see Meteorology). Other scientists use radar for remote sensing applications, including mapping the surface of the earth from orbit, studying asteroids, and investigating the surfaces of other planets and their moons (see Radar Astronomy). II HOW RADAR WORKS Radar relies on sending and receiving electromagnetic radiation, usually in the form of radio waves (see Radio) or microwaves. Electromagnetic radiation is energy that moves in waves at or near the speed of light. The characteristics of electromagnetic waves depend on their wavelength. Gamma rays and X rays have very short wavelengths. Visible light is a tiny slice of the electromagnetic spectrum with wavelengths longer than X rays, but shorter than microwaves. 
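The relationship between frequency and wavelength underlying this discussion is wavelength = c / f. A quick sketch, using the radar band edges cited elsewhere in this article (about 150 MHz up to about 95 GHz):

```python
# Wavelength and frequency are tied together by lambda = c / f.
# The two frequencies below are the band edges this article cites
# for radar (about 150 MHz up to about 95 GHz).

C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    return C / freq_hz

print(round(wavelength_m(150e6), 2))        # VHF radar: roughly 2 m
print(round(wavelength_m(95e9) * 1000, 2))  # 95 GHz: roughly 3.16 mm
```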
Radar systems use long-wavelength electromagnetic radiation in the microwave and radio ranges. Because of their long wavelengths, radio waves and microwaves tend to reflect better than shorter wavelength radiation, which tends to scatter or be absorbed before it gets to the target. Radio waves at the long-wavelength end of the spectrum will even reflect off of the ionosphere, a layer of electrically charged particles in the earth’s atmosphere. A radar system starts by sending out electromagnetic radiation, called the signal. The signal bounces off objects in its path. When the radiation bounces back, part of the signal returns to the radar system; this echo is called the return. The radar system detects the return and, depending on the sophistication of the system, simply reports the detection or analyzes the signal for more information. Even though radio waves and microwaves reflect better than electromagnetic waves of other lengths, only a tiny portion—about a billionth of a billionth—of the radar signal gets reflected back. Therefore, a radar system must be able to transmit high amounts of energy in the signal and to detect tiny amounts of energy in the return. A radar system is composed of four basic components: a transmitter, an antenna, a receiver, and a display. The transmitter produces the electrical signals in the correct form for the type of radar system. The antenna sends these signals out as electromagnetic radiation. The antenna also collects incoming return signals and passes them to the receiver, which analyzes the return and passes it to a display. The display enables human operators to see the data. All radar systems perform the same basic tasks, but the way systems carry out their tasks has some effect on the system’s parts. A type of radar called pulse radar sends out bursts of radar waves at regular intervals. 
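For pulse radar, a target's distance follows from the echo's round-trip time: the pulse travels out and back at the speed of light, so range is c times the delay, divided by two. This standard relation is not spelled out in the text above, so the sketch below is a supplement.

```python
# Range from echo timing: the pulse travels out and back at the speed
# of light, so the target's distance is half the round-trip distance.

C = 299_792_458  # speed of light in m/s

def target_range_m(round_trip_s):
    return C * round_trip_s / 2

# An echo arriving 1 millisecond after the pulse left puts the
# target roughly 150 km away.
print(round(target_range_m(1e-3)))
```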
Pulse radar requires a method of timing the bursts from its transmitter, so this part is more complicated than the transmitter in other radar systems. Another type of radar called continuous-wave radar sends out a continuous signal. Continuous-wave radar gets much of its information about the target from subtle changes in the return, or the echo of the signal. The receiver in continuous-wave radar is therefore more complicated than in other systems. A Transmitter System The system surrounding the transmitter is made up of three main elements: the oscillator, the modulator, and the transmitter itself. The transmitter supplies energy to the antenna in the form of a high-energy electrical signal. The antenna then sends out electromagnetic radar waves as the signal passes through it. A1 The Oscillator The production of a radar signal begins with an oscillator, a device that produces a pure electrical signal at the desired frequency. Most radar systems use frequencies that fall in the radio range (from a few million cycles per second—or Hertz—to several hundred million Hertz) or the microwave range (from several hundred million Hertz to several tens of billions of Hertz). The oscillator must produce a precise and pure frequency to provide the radar system with an accurate reference when it calculates the Doppler shift of the signal (for further discussion of the Doppler shift, see the Receiver section of this article below). A2 The Modulator The next stage of a radar system is the modulator, which rapidly varies, or modulates, the signal from the oscillator. In a simple pulse radar system the modulator merely turns the signal on and off. The modulator should vary the signal, but not distort it. This requires careful design and engineering. A3 The Transmitter The radar system’s transmitter increases the power of the oscillator signal. The transmitter amplifies the power from the level of about 1 watt to as much as 1 megawatt, or 1 million watts. 
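The oscillator-modulator-transmitter chain just described can be sketched as three small functions. All of the numbers below (a 1 MHz tone, 1-microsecond pulses every millisecond, a millionfold gain) are illustrative assumptions, not values from the article.

```python
import math

# The oscillator / modulator / transmitter chain as three tiny
# functions. All numbers (1 MHz tone, 1-microsecond pulses every
# millisecond, x1,000,000 gain) are illustrative only.

def oscillator(t, freq_hz=1e6):
    """A pure tone at the desired frequency."""
    return math.sin(2 * math.pi * freq_hz * t)

def modulator_gate(t, pulse_width=1e-6, interval=1e-3):
    """Pulse radar modulation: 1 during a pulse, 0 between pulses."""
    return 1 if (t % interval) < pulse_width else 0

def transmit(t, gain=1e6):
    """Amplify the modulated oscillator output for the antenna."""
    return gain * oscillator(t) * modulator_gate(t)

print(modulator_gate(0.0))      # inside a pulse -> 1
print(modulator_gate(0.5e-3))   # between pulses -> 0
```

Note how the modulator only gates the oscillator on and off, exactly as the text describes for simple pulse radar; the signal's shape within each pulse is left undistorted.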
Radar signals have such high power levels because so little of the original signal comes back in the return. A4 The Antenna After the transmitter amplifies the radar signal to the required level, it sends the signal to the antenna, usually a dish-shaped piece of metal. Electromagnetic waves at the proper wavelength propagate out from the antenna as the electrical signal passes through it. Most radar antennas direct the radiation by reflecting it from a parabolic, or concave shaped, metal dish. The output from the transmitter feeds into the focus of the dish. The focus is the point at which radio waves reflected from the dish travel out from the surface of the dish in a single direction. Most antennas are steerable, meaning that they can move to point in different directions. This enables a radar system to scan an area of space rather than always pointing in the same direction. B Reception Elements A radar receiver detects and often analyzes the faint echoes produced when radar waves bounce off of distant objects and return to the radar system. The antenna gathers the weak returning radar signals and converts them into an electric current. Because a radar antenna may both transmit and receive signals, the duplexer determines whether the antenna is connected to the receiver or the transmitter. The receiver determines whether the signal should be reported and often does further analysis before sending the results to the display. The display conveys the results to the human operator through a visual display or an audible signal. B1 The Antenna The receiver uses an antenna to gather the reflected radar signal. Often the receiver uses the same antenna as the transmitter. This is possible even in some continuous-wave radar because the modulator in the transmitter system formats the outgoing signals in such a way that the receiver (described in following paragraphs) can recognize the difference between outgoing and incoming signals. 
B2 The Duplexer The duplexer enables a radar system to transmit powerful signals and still receive very weak radar echoes. The duplexer acts as a gate between the antenna and the receiver and transmitter. It keeps the intense signals from the transmitter from passing to the receiver and overloading it, and also ensures that weak signals coming in from the antenna go to the receiver. A pulse radar duplexer connects the transmitter to the antenna only when a pulse is being emitted. Between pulses, the duplexer disconnects the transmitter and connects the receiver to the antenna. If the receiver were connected to the antenna while the pulse was being transmitted, the high power level of the pulse would damage the receiver’s sensitive circuits. In continuous-wave radar the receivers and transmitters operate at the same time. These systems have no duplexer. In this case, the receiver separates the signals by frequency alone. Because the receiver must listen for weak signals at the same time that the transmitter is operating, high power continuous-wave radar systems use separate transmitting and receiving antennas. B3 The Receiver Most modern radar systems use digital equipment because this equipment can perform many complicated functions. In order to use digital equipment, radar systems need analog-to-digital converters to change the received signal from an analog form to a digital form. Digital information must have discrete values, in certain regular steps, such as 0, 1, or 2, but nothing in between. The incoming analog signal can have any value, from 0 to tens of millions, including fractional values such as 2/3. A digital system might require the fraction 2/3 to be rounded off to the decimal number 0.6666667, or 0.667, or 0.7, or even 1. After the analog information has been translated into discrete intervals, digital numbers are usually expressed in binary form, or as series of 1s and 0s that represent numbers. The analog-to-digital converter measures the incoming analog signal many times each second and expresses each signal as a binary number. Once the signal is in digital form, the receiver can perform many complex functions on it. One of the most important functions for the receiver is Doppler filtering. Signals that bounce off of moving objects come back with a slightly different wavelength because of an effect called the Doppler effect. The wavelength changes as waves leave a moving object because the movement of the object causes each wave to leave from a slightly different position than the waves before it. If an object is moving away from the observer, each successive wave will leave from slightly farther away, so the waves will be farther apart and the signal will have a longer wavelength. If an object is moving toward the observer, each successive wave will leave from a position slightly closer than the one before it, so the waves will be closer to each other and the signal will have a shorter wavelength. Doppler shifts occur in all kinds of waves, including radar waves, sound waves, and light waves. Doppler filtering is the receiver’s way of differentiating between multiple targets. Usually, targets move at different speeds, so each target will have a different Doppler shift. Following Doppler filtering, the receiver performs other functions to maximize the strength of the return signal and to eliminate noise and other interfering signals. B4 The Display Displaying the results is the final step in converting the received radar signals into useful information. Early radar systems used a simple amplitude scope—a display of received signal amplitude, or strength, as a function of distance from the antenna. In such a system, a spike in the signal strength appears at the place on the screen that corresponds to the target’s distance. A more useful and more modern display is the plan position indicator (PPI). 
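Before moving on to the display, the Doppler filtering described above can be made concrete with the standard two-way radar relation, shift = 2 x speed / wavelength: the wave is compressed once on the way to the target and once on the way back. The 10 GHz radar frequency and the target speeds below are illustrative.

```python
# Two-way Doppler shift, the basis of the Doppler filtering described
# above: shift = 2 * speed / wavelength. The 10 GHz radar frequency
# and the target speeds are illustrative.

C = 299_792_458  # speed of light in m/s

def doppler_shift_hz(closing_speed_m_s, radar_freq_hz):
    wavelength = C / radar_freq_hz
    return 2 * closing_speed_m_s / wavelength

# Different speeds give clearly different shifts, which is how the
# receiver separates one moving target from another.
print(round(doppler_shift_hz(30.0, 10e9)))   # car-like speed
print(round(doppler_shift_hz(300.0, 10e9)))  # jet-like speed
```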
The PPI displays the direction of the target in relation to the radar system (relative to north) as an angle measured from the top of the display, while the distance to the target is represented as a distance from the center of the display. Some radar systems that use PPI display the actual amplitude of the signal, while others process the signal before displaying it and display possible targets as symbols. Some simple radar systems designed to look for the presence of an object and not the object’s speed or distance notify the user with an audible signal, such as a beep. C Radar Frequencies Early radar systems were capable only of detecting targets and making a crude measurement of the distance to the target. As radar technology evolved, radar systems could measure more and more properties. Modern technology allows radar systems to use higher frequencies, permitting better measurement of the target’s direction and location. Advanced radar can detect individual features of the target and show a detailed picture of the target instead of a single blurred object. Most radar systems operate in frequencies ranging from the Very High Frequency (VHF) band, at about 150 MHz (150 million Hz), to the Extremely High Frequency (EHF) band, which may go as high as 95 GHz (95 billion Hz). Specific ranges of frequencies work well for certain applications and not as well for others, so most radar systems are specialized to do one type of tracking or detection. The frequency of the radar system is related to the resolution of the system. Resolution determines how close two objects may be and still be distinguished by the radar, and how accurately the system can determine the target’s position. Higher frequencies provide better resolution than lower frequencies because the beam formed by the antenna is sharper. Tracking radar, which precisely locates objects and tracks their movement, needs higher resolution and so uses higher frequencies.
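The link between frequency and beam sharpness can be illustrated with a common rule-of-thumb formula: the half-power beamwidth of a dish antenna is roughly 70 wavelengths per antenna diameter, in degrees. The constant 70 is an approximation that varies with antenna design; this is a sketch, not an engineering formula.

```python
C = 3.0e8  # speed of light, m/s

def beamwidth_deg(freq_hz, antenna_diameter_m):
    """Rule-of-thumb half-power beamwidth in degrees: about
    70 x wavelength / diameter (the exact constant depends on
    the antenna design)."""
    wavelength = C / freq_hz
    return 70.0 * wavelength / antenna_diameter_m

# The same 2 m dish forms a much sharper beam at 10 GHz than at 1 GHz:
low = beamwidth_deg(1e9, 2.0)     # roughly 10.5 degrees
high = beamwidth_deg(10e9, 2.0)   # roughly 1 degree
```

The tenfold jump in frequency narrows the beam by the same factor, which is why tracking radar favors high frequencies.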
On the other hand, if a radar system is used to search large areas for targets, a narrow beam of high-frequency radar will be less efficient. Because the high-power transmitters and large antennas that radar systems require are easier to build for lower frequencies, lower frequency radar systems are more popular for radar that does not need particularly good resolution. D Clutter Clutter is what radar users call radar signals that do not come from actual targets. Rain, snow, and the surface of the earth reflect energy, including radar waves. Such echoes can produce signals that the radar system may mistake for actual targets. Clutter makes it difficult to locate targets, especially when the system is searching for objects that are small and distant. Fortunately, most sources of clutter move slowly if at all, so their radar echoes produce little or no Doppler shift. Radar engineers have developed several systems to take advantage of the difference in Doppler shifts between clutter and moving targets. Some radar systems use a moving target indicator (MTI), which subtracts out every other radar return from the total signal. Because the signals from stationary objects will remain the same over time, the MTI subtracts them from the total signal, and only signals from moving targets get past the receiver. Other radar systems actually measure the frequencies of all returning signals. Frequencies with very low Doppler shifts are assumed to come from clutter. Those with substantial shifts are assumed to come from moving targets. Clutter is actually a relative term, since the clutter for some systems could be the target for other systems. For example, a radar system that tracks airplanes considers precipitation to be clutter, but precipitation is the target of weather radar. 
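The moving target indicator idea, subtracting successive returns so that unchanging clutter echoes cancel, can be sketched as follows. The echo amplitudes here are made-up toy values chosen only to show the cancellation.

```python
def mti_cancel(returns):
    """Two-pulse canceller: subtract each radar return from the next one.

    Echoes from stationary clutter repeat identically from pulse to
    pulse and cancel to zero; echoes from moving targets change
    between pulses and survive the subtraction."""
    return [b - a for a, b in zip(returns, returns[1:])]

clutter = [5.0, 5.0, 5.0, 5.0]   # stationary object: constant echoes
target = [5.0, 6.0, 7.5, 9.0]    # moving object: changing echoes
# mti_cancel(clutter) is all zeros; mti_cancel(target) leaves residue.
```

Real MTI circuits work on the signal's phase rather than simple amplitudes, but the cancellation principle is the same.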
The plane-tracking radar would ignore the returns with large sizes and low Doppler shifts that represent weather features, while the weather radar would ignore the small-sized, highly-Doppler-shifted returns that represent airplanes. III TYPES OF RADAR All radar systems send out electromagnetic radiation in radio or microwave frequencies and use echoes of that radiation to detect objects, but different systems use different methods of emitting and receiving radiation. Pulse radar sends out short bursts of radiation. Continuous wave radar sends out a constant signal. Synthetic aperture radar and phased-array radar have special ways of positioning and pointing the antennas that improve resolution and accuracy. Secondary radar detects radar signals that targets send out, instead of detecting echoes of radiation. A Simple Pulse Radar Simple pulse radar is the most basic type of radar. In this system, the transmitter sends out short pulses of radio frequency energy. Between pulses, the radar receiver detects echoes of radiation that objects reflect. Most pulse radar antennas rotate to scan a wide area. Simple pulse radar requires precise timing circuits in the duplexer to prevent the transmitter from transmitting while the receiver is acquiring a signal from the antenna, and to keep the receiver from trying to read a signal from the antenna while the transmitter is operating. Pulse radar is good at locating an object, but it is not very accurate at measuring an object’s speed. B Continuous Wave Radar Continuous-wave (CW) radar systems transmit a constant radar signal. The transmission is continuous, so, except in systems with very low power, the receiver cannot use the same antenna as the transmitter because the radar emissions would interfere with the echoes that the receiver detects.
CW systems can distinguish between stationary clutter and moving targets by analyzing the Doppler shift of the signals, without having to use the precise timing circuits that separate the signal from the return in pulse radar. Continuous wave radar systems are excellent at measuring the speed and direction of an object, but they are not as accurate as pulse radar at measuring an object’s position. Some systems combine pulse and CW radar to achieve both good range and velocity resolution. Such systems are called Pulse-Doppler radar systems. C Synthetic Aperture Radar Synthetic aperture radar (SAR) tracks targets on the ground from the air. The name comes from the fact that the system uses the movement of the airplane or satellite carrying it to make the antenna seem much larger than it actually is. The ability of radar to distinguish between two closely spaced objects depends on the width of the beam that the antenna sends out. The narrower the beam is, the better its resolution. Getting a narrow beam requires a big antenna. A SAR system is limited to a relatively small antenna with a wide beam because it must fit on an aircraft or satellite. SAR systems are called synthetic aperture, however, because the antenna appears to be bigger than it really is. This is because the moving aircraft or satellite allows the SAR system to repeatedly take measurements from different positions. The receiver processes these signals to make it seem as though they came from a large stationary antenna instead of a small moving one. Synthetic aperture radar resolution can be high enough to pick out individual objects as small as automobiles. Typically, an aircraft or satellite equipped with SAR flies past the target object. In inverse synthetic aperture radar, the target moves past the radar antenna. Inverse SAR can give results as good as normal SAR.
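The speed measurement that CW and Pulse-Doppler systems perform rests on the Doppler relationship described earlier: the echo frequency shifts by twice the target's radial speed divided by the wavelength (twice, because the wave travels to the target and back). A small sketch with illustrative values only:

```python
C = 3.0e8  # speed of light, m/s

def radial_speed(radar_freq_hz, doppler_shift_hz):
    """Radial speed of a target from the measured Doppler shift.

    The two-way shift is 2 x speed / wavelength, so invert it:
    speed = shift x wavelength / 2."""
    wavelength = C / radar_freq_hz
    return doppler_shift_hz * wavelength / 2.0

# A shift of about 6,667 Hz on a 10 GHz radar (3 cm wavelength)
# corresponds to a target closing at about 100 m/s.
speed = radial_speed(10e9, 6667.0)
```

Note how small the shift is compared with the 10 GHz carrier, which is why the receiver needs careful filtering to detect it.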
D Phased-Array Radar Most radar systems use a single large antenna that stays in one place, but can rotate on a base to change the direction of the radar beam. A phased-array radar antenna actually comprises many small separate antennas, each of which is fed and read individually. The system combines the signals gathered from all the small antennas. By adjusting the relative timing, or phase, of the signals at each small antenna, the system can change the direction of the combined beam. A phased-array radar antenna can therefore change its beam direction electronically, many times faster than any mechanical radar system can. E Secondary Radar A radar system that sends out radar signals and reads the echoes that bounce back is a primary radar system. Secondary radar systems read coded radar signals that the target emits in response to signals received, instead of signals that the target reflects. Air traffic control depends heavily on the use of secondary radar. Aircraft carry small radar transmitters called beacons or transponders. Receivers at the air traffic control tower search for signals from the transponders. The transponder signals not only tell controllers the location of the aircraft, but can also carry encoded information about the target. For example, the signal may contain a code that indicates whether the aircraft is an ally, or it may contain encoded information from the aircraft’s altimeter (altitude indicator). IV RADAR APPLICATIONS Many industries depend on radar to carry out their work. Civilian aircraft and maritime industries use radar to avoid collisions and to keep track of aircraft and ship positions. Military craft also use radar for collision avoidance, as well as for tracking military targets. Radar is important to meteorologists, who use it to track weather patterns. Radar also has many other scientific applications. A Air-Traffic Control Radar is a vital tool in avoiding midair aircraft collisions. The international air traffic control system uses both primary and secondary radar.
A network of long-range radar systems called Air Route Surveillance Radar (ARSR) tracks aircraft as they fly between airports. Airports use medium-range radar systems called Airport Surveillance Radar to track aircraft more accurately while they are near the airport. B Maritime Navigation Radar also helps ships navigate through dangerous waters and avoid collisions. Unlike air-traffic radar, with its centralized networks that monitor many craft, maritime radar depends almost entirely on radar systems installed on individual vessels. These radar systems search the surface of the water for landmasses; navigation aids, such as lighthouses and channel markers; and other vessels. For a ship’s navigator, echoes from landmasses and other stationary objects are just as important as those from moving objects. Consequently, marine radar systems do not include clutter removal circuits. Instead, ship-based radar depends on high-resolution distance and direction measurements to differentiate between land, ships, and unwanted signals. Marine radar systems have become available at such low cost that many pleasure craft are equipped with them, especially in regions where fog is common. C Military Defense and Attack Historically, the military has played the leading role in the use and development of radar. The detection and interception of opposing military aircraft in air defense has been the predominant military use of radar. The military also uses airborne radar to scan large battlefields for the presence of enemy forces and equipment and to pick out precise targets for bombs and missiles. C1 Air Defense A typical surface-based air defense system relies upon several radar systems. First, a lower-frequency radar with a high-powered transmitter and a large antenna searches the airspace for all aircraft, both friend and foe. A secondary radar system reads the transponder signals sent by each aircraft to distinguish between allies and enemies.
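The secondary radar exchange can be sketched with a deliberately simplified, hypothetical reply format. Real transponders use standardized pulse codes (Mode A/C and similar), not strings; the field names, code numbers, and layout below are inventions for illustration only.

```python
def transponder_reply(identity_code, altitude_ft):
    """Hypothetical reply the aircraft's transponder might encode:
    an identity code plus the altimeter reading (illustrative format)."""
    return "ID={:04d};ALT={:05d}".format(identity_code, altitude_ft)

def interrogate(reply, friendly_codes):
    """Ground-station side: decode the reply and check the identity
    code against a set of known friendly codes."""
    fields = dict(part.split("=") for part in reply.split(";"))
    return int(fields["ID"]) in friendly_codes, int(fields["ALT"])

reply = transponder_reply(1200, 35000)
is_friend, altitude = interrogate(reply, friendly_codes={1200, 7001})
```

The point of the sketch is the division of labor: the target actively transmits encoded data, and the interrogator reads it, rather than relying on a passive echo.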
After enemy aircraft are detected, operators track them more precisely by using high-frequency waves from special fire control radar systems. The air defense system may attempt to shoot down threatening aircraft with gunfire or missiles, and radar sometimes guides both gunfire and missiles (see Guided Missiles). Longer-range air defense systems use missiles with internal guidance. These systems track a target using data from a radar system on the missile. Such missile-borne radar systems are called seekers. The seeker uses radar signals from the missile or radar signals from a transmitter on the ground to determine the position of the target relative to the missile, then passes the information to the missile’s guidance system. The military uses surface-to-air systems for defense against ballistic missiles as well as aircraft (see Defense Systems). During the Cold War both the United States and the Union of Soviet Socialist Republics (USSR) did a great deal of research into defense against intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). The United States and the USSR signed the Anti-Ballistic Missile (ABM) treaty in 1972. This treaty limited each of the superpowers to a single, limited capability system. The U.S. system consisted of a low-frequency (UHF) phased-array radar around the perimeter of the country, another phased-array radar to track incoming missiles more accurately, and several very high speed missiles to intercept the incoming ballistic missiles. The second radar guided the interceptor missiles. Airborne air defense systems incorporate the same functions as ground-based air defense, but special aircraft carry the large area search radar systems. This is necessary because it is difficult for high-performance fighter aircraft to carry both large radar systems and weapons. Modern warfare uses air-to-ground radar to detect targets on the ground and to monitor the movement of troops. 
Advanced Doppler techniques and synthetic aperture radar have greatly increased the accuracy and usefulness of air-to-ground radar since their introduction in the 1960s and 1970s. Military forces around the world use air-to-ground radar for weapon aiming and for battlefield surveillance. The United States used the Joint Surveillance Target Attack Radar System (JSTARS) in the Persian Gulf War (1991), demonstrating modern radar’s ability to provide information about enemy troop concentrations and movements during the day or night, regardless of weather conditions. C2 Countermeasures The military uses several techniques to attempt to avoid detection by enemy radar. One common technique is jamming—that is, sending deceptive signals to the enemy’s radar system. During World War II (1939-1945), flyers under attack jammed enemy radar by dropping large clouds of chaff—small pieces of aluminum foil or some other material that reflects radar well. “False” returns from the chaff hid the aircraft’s exact location from the enemy’s air defense radar. Modern jamming uses sophisticated electronic systems that analyze enemy radar, then send out false radar echoes that mask the actual target echoes or deceive the radar about a target’s location. Stealth technology is a collection of methods that reduce the radar echoes from aircraft and other radar targets (see Stealth Aircraft). Special paint can absorb radar signals, and sharp angles in the aircraft design can reflect radar signals in deceiving directions. Improvements in jamming and stealth technology force the continual development of high-power transmitters, antennas good at detecting weak signals, and very sensitive receivers, as well as techniques for improved clutter rejection. D Traffic Safety Since the 1950s, police have used radar to detect motorists who are exceeding the speed limit. Most older police radar “guns” use Doppler technology to determine the target vehicle’s speed.
Such systems were simple, but they sometimes produced false results. The radar beam of such systems was relatively wide, which meant that stray radar signals could be detected by motorists with radar detectors. Newer police radar systems, developed in the 1980s and 1990s, use laser light to form a narrow, highly selective radar beam. The narrow beam helps ensure that the radar returns signals from a single, selected car and reduces the chance of false results. Instead of relying on the Doppler effect to measure speed, these systems use pulse radar to measure the distance to the car many times, then calculate the speed by dividing the change in distance by the change in time. Laser radar is also more reliable than normal radar for the detection of speeding motorists because its narrow beam is more difficult for motorists with radar detectors to detect. E Meteorology Meteorologists use radar to learn about the weather. Networks of radar systems installed across many countries throughout the world detect and display areas of rain, snow, and other precipitation. Weather radar systems use Doppler radar to determine the speed of the wind within the storm. The radar signals bounce off of water droplets or ice crystals in the atmosphere. Gaseous water vapor does not reflect radar waves as well as the liquid droplets of water or solid ice crystals, so radar returns from rain or snow are stronger than those from clouds. Dust in the atmosphere also reflects radar, but the returns are only significant when the concentration of dust is much higher than usual. The Terminal Doppler Weather Radar can detect small, localized, but hazardous wind conditions, especially if precipitation or a large amount of dust accompanies the storm. Many airports use this advanced radar to make landing safer. F Scientific Applications Scientists use radar in several space-related applications. The Spacetrack system is a cooperative effort of the United States, Canada, and the United Kingdom.
It uses data from several large surveillance and tracking radar systems (including the Ballistic Missile Early Warning System) to detect and track all objects in orbit around the earth. This helps scientists and engineers keep an eye on space junk—abandoned satellites, discarded pieces of rockets, and other unused fragments of spacecraft that could pose a threat to operating spacecraft. Other special-purpose radar systems track specific satellites that emit a beacon signal. One of the most important of these systems is the Global Positioning System (GPS), operated by the U.S. Department of Defense. GPS provides highly accurate navigational data for the U.S. military and for anyone who owns a GPS receiver. During space flights, radar gives precise measurements of the distances between the spacecraft and other objects. In the U.S. Surveyor missions to the moon in the 1960s, radar measured the altitude of the probe above the moon’s surface to help the probe control its descent. In the Apollo missions, which landed astronauts on the moon during the 1960s and 1970s, radar measured the altitude above the lunar surface of the Lunar Module, the part of the Apollo spacecraft that carried two astronauts from orbit around the moon down to the moon’s surface. Apollo also used radar to measure the distance between the Lunar Module and the Command and Service Module, the part of the spacecraft that remained in orbit around the moon. Astronomers have used ground-based radar to observe the moon, some of the larger asteroids in our solar system, and a few of the planets and their moons. Radar observations provide information about the orbit and surface features of the object. The U.S. Magellan space probe mapped the surface of the planet Venus with radar from 1990 to 1994. Magellan’s radar was able to penetrate the dense cloud layer of the Venusian atmosphere and provide images of much better quality than radar measurements from Earth.
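The spaceflight distance measurements described here, and the pulse-timing speed measurement described earlier for newer traffic radar, come down to the same arithmetic: distance is the speed of light times the round-trip time divided by two, and speed is the change in distance over the change in time. A sketch with illustrative numbers:

```python
C = 3.0e8  # speed of light, m/s

def echo_range_m(round_trip_s):
    """One-way distance from a pulse's round-trip time: the pulse
    covers the distance twice (out and back), so divide by 2."""
    return C * round_trip_s / 2.0

def speed_from_ranges(r1_m, r2_m, interval_s):
    """Speed from two successive range measurements: change in
    distance divided by change in time."""
    return abs(r2_m - r1_m) / interval_s

distance = echo_range_m(1e-3)                  # 1 ms echo: about 150 km
speed = speed_from_ranges(100.0, 98.5, 0.05)   # about 30 m/s
```

The same two formulas serve a lunar lander's radar altimeter and a roadside speed gun; only the time scales differ.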
Many nations have used satellite-based radar to map portions of the earth’s surface. Radar can show conditions on the surface of the earth and can help determine the location of various resources such as oil, water for irrigation, and mineral deposits. In 1995 the Canadian Space Agency launched a satellite called RADARSAT to provide radar imagery to commercial, government, and scientific users. V HISTORY Although British physicist James Clerk Maxwell predicted the existence of radio waves in the 1860s, it was not until the 1880s that British-born American inventor Elihu Thomson and German physicist Heinrich Hertz independently confirmed their existence. Scientists soon realized that radio waves could bounce off of objects, and by 1904 Christian Hülsmeyer, a German inventor, had used radio waves in a collision avoidance device for ships. Hülsmeyer’s system was only effective for a range of about 1.5 km (about 1 mi). The first long-range radar systems were not developed until the 1920s. In 1922 Italian radio pioneer Guglielmo Marconi demonstrated a low-frequency (60 MHz) radar system. In 1924 English physicist Edward Appleton and his graduate student from New Zealand, Miles Barnett, proved the existence of the ionosphere, an electrically charged upper layer of the atmosphere, by reflecting radio waves off of it. Scientists at the U.S. Naval Research Laboratory in Washington, D.C., became the first to use radar to detect aircraft in 1930. A Radar in World War II None of the early demonstrations of radar generated much enthusiasm. The commercial and military value of radar did not become readily apparent until the mid-1930s. Before World War II, the United States, France, and the United Kingdom were all carrying out radar research. Beginning in 1935, the British built a network of ground-based aircraft detection radar, called Chain Home, under the direction of Sir Robert Watson-Watt.
Chain Home was fully operational from 1938 until the end of World War II in 1945 and was instrumental in Britain’s defense against German bombers. The British recognized the value of radar with frequencies much higher than the radio waves used for most systems. A breakthrough in radar technology came in 1940 when two British scientists, physicist Henry Boot and biophysicist John Randall, developed the resonant-cavity magnetron. This device generates high-frequency radio pulses with a large amount of power, and it made the development of microwave radar possible. Later in 1940, the Massachusetts Institute of Technology (MIT) Radiation Laboratory was formed in Cambridge, Massachusetts, bringing together U.S. and British radar research. In March 1942 scientists demonstrated the detection of ships from the air. This technology became the basis of antiship and antisubmarine radar for the U.S. Navy. The U.S. Army operated air surveillance radar at the start of World War II. The army also used early forms of radar to direct antiaircraft guns. Initially the radar systems were used to aim searchlights so the soldier aiming the gun could see where to fire, but the systems evolved into fire-control radar that aimed the guns automatically. B Radar during the Cold War With the end of World War II, interest in radar development declined. Some experiments continued, however; for instance, in 1946 the U.S. Army Signal Corps bounced radar signals off of the moon, ushering in the field of radar astronomy. The growing hostility between the United States and the Union of Soviet Socialist Republics—the so-called Cold War—renewed military interest in radar improvements. After the Soviets detonated their first atomic bomb in 1949, interest in radar development, especially for air defense, surged. Major programs included the installation of the Distant Early Warning (DEW) network of long-range radar across the northern reaches of North America to warn against bomber attacks.
As the potential threat of attack by ICBMs increased, the United States installed the Ballistic Missile Early Warning System (BMEWS), with stations in the United Kingdom, Greenland, and Alaska. C Modern Radar Radar found many applications in civilian and military life and became more sophisticated and specialized for each application. The use of radar in air traffic control grew quickly during the Cold War, especially with the jump in air traffic that occurred in the 1960s. Today almost all commercial and private aircraft have transponders. Transponders send out radar signals encoded with information about an aircraft and its flight that other aircraft and air traffic controllers can use. American traffic engineer John Barker discovered in 1947 that moving automobiles would reflect radar waves, which could be analyzed to determine the car’s speed. Police began using traffic radar in the 1950s, and the accuracy of traffic radar has increased markedly since the 1980s. Doppler radar came into use in the 1960s and was first dedicated to weather forecasting in the 1970s. In the 1990s the United States had a nationwide network of more than 130 Doppler radar stations to help meteorologists track weather patterns. Earth-observing satellites such as those in the SEASAT program began to use radar to measure the topography of the earth in the late 1970s. The Magellan spacecraft mapped most of the surface of the planet Venus in the 1990s. The Cassini spacecraft, scheduled to reach Saturn in 2004, carries radar instruments for studying the surface of Saturn’s moon Titan. As radar continues to improve, so does the technology for evading radar. Stealth aircraft feature radar-absorbing coatings and deceptive shapes to reduce the possibility of radar detection. The Lockheed F-117A, first flown in 1981, and the Northrop B-2, first flown in 1989, are two of the latest additions to the U.S. stealth aircraft fleet.
In the area of civilian radar avoidance, companies are introducing increasingly sophisticated radar detectors, designed to warn motorists of police using traffic radar. Contributed By: Robert E. Millett Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. (v) 1. Tape Recording In analog tape recording, electrical signals from a microphone are transformed into magnetic signals. These signals are encoded onto a thin plastic ribbon of recording tape. Recording tape is coated with tiny magnetic particles. Chromium dioxide and ferric oxide are two magnetic materials commonly used. A chemical binder bonds the particles to the tape, and a back coating prevents the magnetic signal from traveling from one layer of tape to the next. Tape is wound onto reels, which can vary in diameter and size. Professional reel-to-reel tape, which is 6.35 mm (0.25 in) wide, is wound on large metal or plastic reels. Reel-to-reel tapes must be loaded onto a reel-to-reel tape recorder by hand. Cassette tape is only 3.81 mm (0.15 in) wide and is completely self-enclosed for convenience. Regardless of size, all magnetic tape is drawn from a supply reel on the left side of the recorder to a take-up reel on the right. A drive shaft, called a capstan, rolls against a pinch roller and pulls the tape along. Various guides and rollers are used to mechanically regulate the speed and tension of the tape, since any variations in speed or tension will affect sound quality. As the tape is drawn from the supply reel to the take-up reel, it passes over a series of three magnetic coils called heads. The erase head is activated only while recording. It generates a current that places the tape's magnetic particles in a neutral position in order to remove any previous sounds. The record head transforms the electrical signal coming into the recorder into a magnetic flux and thus applies the original electrical signal onto the tape.
The sound wave is now physically present on the analog tape. The playback head reads the magnetic field on the tape and converts this field back to electric energy. Unwanted noise, such as hiss, is a frequent problem with recording on tape. To combat this problem, sound engineers developed noise reduction systems that help reduce unwanted sounds. Many different systems exist, such as the Dolby System, which is used to reduce hiss on musical recordings and motion-picture soundtracks. Most noise occurs around the weakest sounds on a tape recording. Noise reduction systems work by boosting weak signals during recording. When the tape is played, the boosted signals are reduced to their normal levels. This reduction to normal levels also minimizes any noise that might have been present. Q7: (i) Deoxyribonucleic Acid I INTRODUCTION Deoxyribonucleic Acid (DNA), genetic material of all cellular organisms and most viruses. DNA carries the information needed to direct protein synthesis and replication. Protein synthesis is the production of the proteins needed by the cell or virus for its activities and development. Replication is the process by which DNA copies itself for each descendant cell or virus, passing on the information needed for protein synthesis. In most cellular organisms, DNA is organized on chromosomes located in the nucleus of the cell. II STRUCTURE A molecule of DNA consists of two strands, each composed of a large number of chemical compounds called nucleotides linked together in a chain. These chains are arranged like a ladder that has been twisted into the shape of a winding staircase, called a double helix. Each nucleotide consists of three units: a sugar molecule called deoxyribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. The four bases are adenine (A), guanine (G), thymine (T), and cytosine (C).
The deoxyribose molecule occupies the center position in the nucleotide, flanked by a phosphate group on one side and a base on the other. The phosphate group of each nucleotide is also linked to the deoxyribose of the adjacent nucleotide in the chain. These linked deoxyribose-phosphate subunits form the parallel side rails of the ladder. The bases face inward toward each other, forming the rungs of the ladder. The nucleotides in one DNA strand have a specific association with the corresponding nucleotides in the other DNA strand. Because of the chemical affinity of the bases, nucleotides containing adenine are always paired with nucleotides containing thymine, and nucleotides containing cytosine are always paired with nucleotides containing guanine. The complementary bases are joined to each other by weak chemical bonds called hydrogen bonds. In 1953 American biochemist James D. Watson and British biophysicist Francis Crick published the first description of the structure of DNA. Their model proved to be so important for the understanding of protein synthesis, DNA replication, and mutation that they were awarded the 1962 Nobel Prize for physiology or medicine for their work. III PROTEIN SYNTHESIS DNA carries the instructions for the production of proteins. A protein is composed of smaller molecules called amino acids, and the structure and function of the protein are determined by the sequence of its amino acids. The sequence of amino acids, in turn, is determined by the sequence of nucleotide bases in the DNA. A sequence of three nucleotide bases, called a triplet, is the genetic code word, or codon, that specifies a particular amino acid. For instance, the triplet GAC (guanine, adenine, and cytosine) is the codon for the amino acid leucine, and the triplet CAG (cytosine, adenine, and guanine) is the codon for the amino acid valine. A protein consisting of 100 amino acids is thus encoded by a DNA segment consisting of 300 nucleotides.
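The triplet arithmetic above is easy to verify in code, using only the two codons the text names:

```python
def codons(dna):
    """Split a DNA coding sequence into the three-base triplets
    (codons) described in the text; each triplet specifies one
    amino acid."""
    return [dna[i:i + 3] for i in range(0, len(dna), 3)]

# The two triplets named in the text, GAC (leucine) and CAG (valine):
pair = codons("GACCAG")
# And the 3-to-1 ratio: 300 nucleotides encode 100 amino acids.
count = len(codons("ACG" * 100))
```

The grouping is fixed and non-overlapping, which is why a single inserted or deleted base can shift every codon that follows it.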
Of the two polynucleotide chains that form a DNA molecule, only one strand contains the information needed for the production of a given amino acid sequence. The other strand aids in replication. Protein synthesis begins with the separation of a DNA molecule into two strands. In a process called transcription, a section of one strand acts as a template, or pattern, to produce a new strand called messenger RNA (mRNA). The mRNA leaves the cell nucleus and attaches to the ribosomes, specialized cellular structures that are the sites of protein synthesis. Amino acids are carried to the ribosomes by another type of RNA, called transfer RNA (tRNA). In a process called translation, the amino acids are linked together in a particular sequence, dictated by the mRNA, to form a protein. A gene is a sequence of DNA nucleotides that specifies the order of amino acids in a protein via an intermediary mRNA molecule. Substituting one DNA nucleotide with another containing a different base causes all descendant cells or viruses to have the altered nucleotide base sequence. As a result of the substitution, the sequence of amino acids in the resulting protein may also be changed. Such a change in a DNA molecule is called a mutation. Most mutations are the result of errors in the replication process. Exposure of a cell or virus to radiation or to certain chemicals increases the likelihood of mutations. IV REPLICATION In most cellular organisms, replication of a DNA molecule takes place in the cell nucleus and occurs just before the cell divides. Replication begins with the separation of the two polynucleotide chains, each of which then acts as a template for the assembly of a new complementary chain. As the old chains separate, each nucleotide in the two chains attracts a complementary nucleotide that has been formed earlier by the cell. The nucleotides are joined to one another by hydrogen bonds to form the rungs of a new DNA molecule.
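The complementary attraction that drives this assembly can be sketched with the base-pairing rules from the Structure section (adenine with thymine, cytosine with guanine):

```python
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def replicate_strand(template):
    """Assemble the complementary chain: each nucleotide on the old
    strand attracts its partner base (A with T, C with G)."""
    return "".join(PAIRS[base] for base in template)

old = "ATCGGC"
new = replicate_strand(old)   # "TAGCCG"
```

Because the pairing rules are symmetric, complementing the new strand reproduces the original, which is exactly why each separated chain can serve as a template for a faithful copy.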
As the complementary nucleotides are fitted into place, an enzyme called DNA polymerase links them together by bonding the phosphate group of one nucleotide to the sugar molecule of the adjacent nucleotide, forming the side rail of the new DNA molecule. This process continues until a new polynucleotide chain has been formed alongside the old one, forming a new double-helix molecule. V TOOLS AND PROCEDURES Several tools and procedures are used by scientists to study and manipulate DNA. Specialized enzymes, called restriction enzymes, found in bacteria act like molecular scissors to cut the phosphate backbones of DNA molecules at specific base sequences. Strands of DNA that have been cut with restriction enzymes are left with single-stranded tails that are called sticky ends, because they can easily realign with tails from certain other DNA fragments. Scientists take advantage of restriction enzymes and the sticky ends generated by these enzymes to carry out recombinant DNA technology, or genetic engineering. This technology involves removing a specific gene from one organism and inserting the gene into another organism. Another tool for working with DNA is a procedure called polymerase chain reaction (PCR). This procedure uses the enzyme DNA polymerase to make copies of DNA strands in a process that mimics the way in which DNA replicates naturally within cells. Scientists use PCR to obtain vast numbers of copies of a given segment of DNA. DNA fingerprinting, also called DNA typing, makes it possible to compare samples of DNA from various sources in a manner that is analogous to the comparison of fingerprints. In this procedure, scientists use restriction enzymes to cleave a sample of DNA into an assortment of fragments. Solutions containing these fragments are placed at the surface of a gel to which an electric current is applied. The electric current causes the DNA fragments to move through the gel.
Because smaller fragments move more quickly than larger ones, this process, called electrophoresis, separates the fragments according to their size. The fragments are then marked with probes and exposed on X-ray film, where they form the DNA fingerprint—a pattern of characteristic black bars that is unique for each type of DNA. A procedure called DNA sequencing makes it possible to determine the precise order, or sequence, of nucleotide bases within a fragment of DNA. Most versions of DNA sequencing use a technique called primer extension, developed by British molecular biologist Frederick Sanger. In primer extension, specific pieces of DNA are replicated and modified, so that each DNA segment ends in a fluorescent form of one of the four nucleotide bases. Modern DNA sequencers, pioneered by American molecular biologist Leroy Hood, incorporate both lasers and computers. Scientists have completely sequenced the genetic material of several microorganisms, including the bacterium Escherichia coli. In 1998, scientists achieved the milestone of sequencing the complete genome of a multicellular organism—a roundworm identified as Caenorhabditis elegans. The Human Genome Project, an international research collaboration, has been established to determine the sequence of all of the three billion nucleotide base pairs that make up the human genetic material. An instrument called optical tweezers enables scientists to manipulate the three-dimensional structure of DNA molecules. This instrument uses laser beams that act like tweezers, attaching to the ends of a DNA molecule and pulling on them. By manipulating these laser beams, scientists can stretch, or uncoil, fragments of DNA. This work is helping reveal how DNA changes its three-dimensional shape as it interacts with enzymes. VI APPLICATIONS Research into DNA has had a significant impact on medicine.
Through recombinant DNA technology, scientists can modify microorganisms so that they become so-called factories that produce large quantities of medically useful drugs. This technology is used to produce insulin, which is a drug used by diabetics, and interferon, which is used by some cancer patients. Studies of human DNA are revealing genes that are associated with specific diseases, such as cystic fibrosis and breast cancer. This information is helping physicians to diagnose various diseases, and it may lead to new treatments. For example, physicians are using a technology called chimeraplasty, which involves a synthetic molecule containing both DNA and RNA strands, in an effort to develop a treatment for a form of hemophilia. Forensic science uses techniques developed in DNA research to identify individuals who have committed crimes. DNA from semen, skin, or blood taken from the crime scene can be compared with the DNA of a suspect, and the results can be used in court as evidence. DNA has helped taxonomists determine evolutionary relationships among animals, plants, and other life forms. Closely related species have more similar DNA than do species that are distantly related. One surprising finding to emerge from DNA studies is that vultures of the Americas are more closely related to storks than to the vultures of Europe, Asia, or Africa (see Classification). Techniques of DNA manipulation are used in farming, in the form of genetic engineering and biotechnology. Strains of crop plants to which genes have been transferred may produce higher yields and may be more resistant to insects. Cattle have been similarly treated to increase milk and beef production, as have hogs, to yield more meat with less fat. VII SOCIAL ISSUES Despite the many benefits offered by DNA technology, some critics argue that its development should be monitored closely. 
One fear raised by such critics is that DNA fingerprinting could provide a means for employers to discriminate against members of various ethnic groups. Critics also fear that studies of people’s DNA could permit insurance companies to deny health insurance to those people at risk for developing certain diseases. The potential use of DNA technology to alter the genes of embryos is a particularly controversial issue. The use of DNA technology in agriculture has also sparked controversy. Some people question the safety, desirability, and ecological impact of genetically altered crop plants. In addition, animal rights groups have protested against the genetic engineering of farm animals. Despite these and other areas of disagreement, many people agree that DNA technology offers a mixture of benefits and potential hazards. Many experts also agree that an informed public can help assure that DNA technology is used wisely. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Ribonucleic Acid I INTRODUCTION Ribonucleic Acid (RNA), genetic material of certain viruses (RNA viruses) and, in cellular organisms, the molecule that directs the middle steps of protein production. In RNA viruses, the RNA directs two processes—protein synthesis (production of the virus's protein coat) and replication (the process by which RNA copies itself). In cellular organisms, another type of genetic material, called deoxyribonucleic acid (DNA), carries the information that determines protein structure. But DNA cannot act alone and relies upon RNA to transfer this crucial information during protein synthesis (production of the proteins needed by the cell for its activities and development). Like DNA, RNA consists of a chain of chemical compounds called nucleotides. Each nucleotide is made up of a sugar molecule called ribose, a phosphate group, and one of four different nitrogen-containing compounds called bases. 
The four bases are adenine, guanine, uracil, and cytosine. These components are joined together in the same manner as in a deoxyribonucleic acid (DNA) molecule. RNA differs chemically from DNA in two ways: The RNA sugar molecule contains an oxygen atom not found in DNA, and RNA contains the base uracil in the place of the base thymine in DNA. II CELLULAR RNA In cellular organisms, RNA is a single-stranded polynucleotide chain, a strand of many nucleotides linked together. There are three types of RNA. Ribosomal RNA (rRNA) is found in the cell's ribosomes, the specialized structures that are the sites of protein synthesis. Transfer RNA (tRNA) carries amino acids to the ribosomes for incorporation into a protein. Messenger RNA (mRNA) carries the genetic blueprint copied from the sequence of bases in a cell's DNA. This blueprint specifies the sequence of amino acids in a protein. All three types of RNA are formed as needed, using specific sections of the cell's DNA as templates. III VIRAL RNA Some RNA viruses have double-stranded RNA—that is, their RNA molecules consist of two parallel polynucleotide chains. The base of each RNA nucleotide in one chain pairs with a complementary base in the second chain—that is, adenine pairs with uracil, and guanine pairs with cytosine. For these viruses, the process of RNA replication in a host cell follows the same pattern as that of DNA replication, a method of replication called semiconservative replication. In semiconservative replication, each newly formed double-stranded RNA molecule contains one polynucleotide chain from the parent RNA molecule, and one complementary chain formed through the process of base pairing. The Colorado tick fever virus, which causes a mild febrile illness, is a double-stranded RNA virus. There are two types of single-stranded RNA viruses. After entering a host cell, one type, polio virus, becomes double-stranded by making an RNA strand complementary to its own.
During replication, although the two strands separate, only the recently formed strand attracts nucleotides with complementary bases. Therefore, the polynucleotide chain that is produced as a result of replication is exactly the same as the original RNA chain. The other type of single-stranded RNA viruses, called retroviruses, include the human immunodeficiency virus (HIV), which causes AIDS, and other viruses that cause tumors. After entering a host cell, a retrovirus makes a DNA strand complementary to its own RNA strand using the host's DNA nucleotides. This new DNA strand then replicates and forms a double helix that becomes incorporated into the host cell's chromosomes, where it is replicated along with the host DNA. While in a host cell, the RNA-derived viral DNA produces single-stranded RNA viruses that then leave the host cell and enter other cells, where the replication process is repeated. IV RNA AND THE ORIGIN OF LIFE In 1981, American biochemist Thomas Cech discovered that certain RNA molecules appear to act as enzymes, molecules that speed up, or catalyze, some reactions inside cells. Until this discovery biologists thought that all enzymes were proteins. Like other enzymes, these RNA catalysts, called ribozymes, show great specificity with respect to the reactions they speed up. The discovery of ribozymes added to the evidence that RNA, not DNA, was the earliest genetic material. Many scientists think that the earliest genetic molecule was simple in structure and capable of enzymatic activity. Furthermore, the molecule would necessarily exist in all organisms. The enzyme ribonuclease-P, which exists in all organisms, is made of protein and a form of RNA that has enzymatic activity. Based on this evidence, some scientists suspect that the RNA portion of ribonuclease-P may be the modern equivalent of the earliest genetic molecule, the molecule that first enabled replication to occur in primitive cells. 
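The base-pairing rules that connect DNA to the three kinds of RNA can be sketched in a few lines. A minimal Python illustration (names are my own; it assumes the conventional reading of a DNA template strand, with uracil replacing thymine in the RNA copy):

```python
# Transcription sketch: build the mRNA complementary to a DNA template strand.
# A pairs with U, T with A, C with G, G with C.
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template):
    """Return the mRNA strand complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template)

print(transcribe("GAC"))   # CUG
print(transcribe("CAG"))   # GUC
```

Note how this ties back to the protein-synthesis section: the DNA triplets GAC and CAG transcribe to the mRNA codons CUG and GUC, which specify leucine and valine respectively.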
Contributed By: Louis Levine (ii) Brass (alloy) Brass (alloy), alloy of copper and zinc. Harder than copper, it is ductile and can be hammered into thin leaves. Formerly any alloy of copper, especially one with tin, was called brass, and it is probable that the “brass” of ancient times was of copper and tin (see Bronze). The modern alloy came into use about the 16th century. The malleability of brass varies with its composition and temperature and with the presence of foreign metals, even in minute quantities. Some kinds of brass are malleable only when cold, others only when hot, and some are not malleable at any temperature. All brass becomes brittle if heated to a temperature near the melting point. See Metalwork. To prepare brass, zinc is mixed directly with copper in crucibles or in a reverberatory or cupola furnace. The ingots are rolled when cold. The bars or sheets can be rolled into rods or cut into strips that can be drawn out into wire. Bronze I INTRODUCTION Bronze, metal alloy containing copper and other elements. The term bronze was originally applied to an alloy of copper containing tin, but the term is now used to describe a variety of copper-rich materials, including aluminum bronze, manganese bronze, and silicon bronze. Bronze was developed about 3500 BC by the ancient Sumerians in the Tigris-Euphrates Valley. Historians are unsure how this alloy was discovered, but believe that bronze may have first been made accidentally when rocks rich in ores of copper and tin were used to build campfire rings (enclosures for preventing fires from spreading). As fire heated these stones, the metals may have mixed, forming bronze. This theory is supported by the fact that bronze was not developed in North America, where natural tin and copper ores are rarely found in the same rocks.
Around 3000 BC, bronze-making spread to Persia, where bronze objects such as ornaments, weapons, and chariot fittings have been found. Bronzes appeared in both Egypt and China around 2000 BC. The earliest bronze castings (objects made by pouring liquid metal into molds) were made in sand; later, clay and stone molds were used. Zinc, lead, and silver were added to bronze alloys by Greek and Roman metalworkers for use in tools, weapons, coins, and art objects. During the Renaissance, a series of cultural movements that occurred in Europe in the 14th, 15th, and 16th centuries, bronze was used to make guns, and artists such as Michelangelo and Benvenuto Cellini used bronze for sculpting. See also Metalwork; Founding. Today, bronze is used for making products ranging from household items such as doorknobs, drawer handles, and clocks to industrial products such as engine parts, bearings, and wire. II TYPES Tin bronzes, the original bronzes, are alloys of copper and tin. They may contain from 5 to 22 percent tin. When a tin bronze contains at least 10 percent tin, the alloy is hard and has a low melting point. Leaded tin bronzes, used for casting, contain 5 to 10 percent tin, 1.5 to 25 percent lead, and 0 to 4.5 percent zinc. Manganese bronze contains 39 percent zinc, 1 percent tin, and 0.5 to 4 percent manganese. Aluminum bronze contains 5 to 10 percent aluminum. Silicon bronze contains 1.5 to 3 percent silicon. Bronze is made by heating and mixing the molten metal constituents. When the molten mixture is poured into a mold and begins to harden, the bronze expands and fills the entire mold. Once the bronze has cooled, it shrinks slightly and can easily be removed from the mold. III CHARACTERISTICS AND USES Bronze is stronger and harder than any other common metal alloy except steel. It does not easily break under stress, is corrosion resistant, and is easy to form into finished shapes by molding, casting, or machining (see also Engineering).
The strongest bronze alloys contain tin and a small amount of lead. Tin, silicon, or aluminum is often added to bronze to improve its corrosion resistance. As bronze weathers, a brown or green film forms on the surface. This film inhibits corrosion. For example, many bronze statues erected hundreds of years ago show little sign of corrosion. Bronzes have a low melting point, a characteristic that makes them useful for brazing—that is, for joining two pieces of metal. When used as brazing material, bronze is heated above 430°C (800°F), but not above the melting point of the metals being joined. The molten bronze fuses to the other metals, forming a solid joint after cooling. Lead is often added to make bronze easier to machine. Silicon bronze is machined into piston rings and screening, and because of its resistance to chemical corrosion it is also used to make chemical containers. Manganese bronze is used for valve stems and welding rods. Aluminum bronzes are used in engine parts and in marine hardware. Bronze containing 10 percent or more tin is most often rolled or drawn into wires, sheets, and pipes. Tin bronze, in a powdered form, is sintered (heated without being melted), pressed into a solid mass, saturated with oil, and used to make self-lubricating bearings. (iii) Lymph Lymph, common name for the fluid carried in the lymphatic system. Lymph is diluted blood plasma containing large numbers of white blood cells, especially lymphocytes, and occasionally a few red blood cells. Because of the number of living cells it contains, lymph is classified as a fluid tissue. Lymph diffuses into and is absorbed by the lymphatic capillaries from the spaces between the various cells constituting the tissues. In these spaces lymph is known as tissue fluid, plasma that has permeated the blood capillary walls and surrounded the cells to bring them nutriment and to remove their waste substances.
The lymph contained in the lacteals of the small intestine is known as chyle. The synovial fluid that lubricates joints is almost identical with lymph, as is the serous fluid found in the body and pleural cavities. The fluid contained within the semicircular canals of the ear, although known as endolymph, is not true lymph. Blood I INTRODUCTION Blood, vital fluid found in humans and other animals that provides important nourishment to all body organs and tissues and carries away waste materials. Sometimes referred to as “the river of life,” blood is pumped from the heart through a network of blood vessels collectively known as the circulatory system. An adult human has about 5 to 6 liters (1 to 2 gal) of blood, which is roughly 7 to 8 percent of total body weight. Infants and children have comparatively lower volumes of blood, roughly proportionate to their smaller size. The volume of blood in an individual fluctuates. During dehydration, for example while running a marathon, blood volume decreases. Blood volume increases in circumstances such as pregnancy, when the mother’s blood needs to carry extra oxygen and nutrients to the baby. II ROLE OF BLOOD Blood carries oxygen from the lungs to all the other tissues in the body and, in turn, carries waste products, predominantly carbon dioxide, back to the lungs where they are released into the air. When oxygen transport fails, a person dies within a few minutes. Food that has been processed by the digestive system into smaller components such as proteins, fats, and carbohydrates is also delivered to the tissues by the blood. These nutrients provide the materials and energy needed by individual cells for metabolism, or the performance of cellular function.
Waste products produced during metabolism, such as urea and uric acid, are carried by the blood to the kidneys, where they are transferred from the blood into urine and eliminated from the body. In addition to oxygen and nutrients, blood also transports special chemicals, called hormones, that regulate certain body functions. The movement of these chemicals enables one organ to control the function of another even though the two organs may be located far apart. In this way, the blood acts not just as a means of transportation but also as a communications system. The blood is more than a pipeline for nutrients and information; it is also responsible for the activities of the immune system, helping fend off infection and fight disease. In addition, blood carries the means for stopping itself from leaking out of the body after an injury. The blood does this by carrying special cells and proteins, known as the coagulation system, that start to form clots within a matter of seconds after injury. Blood is vital to maintaining a stable body temperature; in humans, body temperature normally fluctuates within a degree of 37.0° C (98.6° F). Heat production and heat loss in various parts of the body are balanced out by heat transfer via the bloodstream. This is accomplished by varying the diameter of blood vessels in the skin. When a person becomes overheated, the vessels dilate and an increased volume of blood flows through the skin. Heat dissipates through the skin, effectively lowering the body temperature. The increased flow of blood in the skin makes the skin appear pink or flushed. When a person is cold, the skin may become pale as the vessels narrow, diverting blood from the skin and reducing heat loss. III COMPOSITION OF BLOOD About 55 percent of the blood is composed of a liquid known as plasma. The rest of the blood is made of three major types of cells: red blood cells (also known as erythrocytes), white blood cells (leukocytes), and platelets (thrombocytes). 
A Plasma Plasma consists predominantly of water and salts. The kidneys carefully maintain the salt concentration in plasma because small changes in its concentration will cause cells in the body to function improperly. In extreme conditions this can result in seizures, coma, or even death. The pH of plasma, the common measurement of the plasma’s acidity, is also carefully controlled by the kidneys within the neutral range of 6.8 to 7.7. Plasma also contains other small molecules, including vitamins, minerals, nutrients, and waste products. The concentrations of all of these molecules must be carefully regulated. Plasma is usually yellow in color due to proteins dissolved in it. However, after a person eats a fatty meal, that person’s plasma temporarily develops a milky color as the blood carries the ingested fats from the intestines to other organs of the body. Plasma carries a large number of important proteins, including albumin, gamma globulin, and clotting factors. Albumin is the main protein in blood. It helps regulate the water content of tissues and blood. Gamma globulin is composed of tens of thousands of unique antibody molecules. Antibodies neutralize or help destroy infectious organisms. Each antibody is designed to target one specific invading organism. For example, chicken pox antibody will target chicken pox virus, but will leave an influenza virus unharmed. Clotting factors, such as fibrinogen, are involved in forming blood clots that seal leaks after an injury. Plasma that has had the clotting factors removed is called serum. Both serum and plasma are easy to store and have many medical uses. B Red Blood Cells Red blood cells make up almost 45 percent of the blood volume. Their primary function is to carry oxygen from the lungs to every cell in the body. Red blood cells are composed predominantly of a protein and iron compound, called hemoglobin, that captures oxygen molecules as the blood moves through the lungs, giving blood its red color. 
As blood passes through body tissues, hemoglobin then releases the oxygen to cells throughout the body. Red blood cells are so packed with hemoglobin that they lack many components, including a nucleus, found in other cells. The membrane, or outer layer, of the red blood cell is flexible, like a soap bubble, and is able to bend in many directions without breaking. This is important because the red blood cells must be able to pass through the tiniest blood vessels, the capillaries, to deliver oxygen wherever it is needed. The capillaries are so narrow that the red blood cells, normally shaped like a disk with a concave top and bottom, must bend and twist to maneuver single file through them. C Blood Type There are several types of red blood cells and each person has red blood cells of just one type. Blood type is determined by the occurrence or absence of substances, known as recognition markers or antigens, on the surface of the red blood cell. Type A blood has just marker A on its red blood cells while type B has only marker B. If neither A nor B markers are present, the blood is type O. If both the A and B markers are present, the blood is type AB. Another marker, the Rh antigen (also known as the Rh factor), is present or absent regardless of the presence of A and B markers. If the Rh marker is present, the blood is said to be Rh positive, and if it is absent, the blood is Rh negative. The most common blood type is A positive—that is, blood that has an A marker and also an Rh marker. More than 20 additional red blood cell types have been discovered. Blood typing is important for many medical reasons. If a person loses a lot of blood, that person may need a blood transfusion to replace some of the lost red blood cells. Since everyone makes antibodies against substances that are foreign, or not of their own body, transfused blood must be matched so as not to contain these substances. 
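The matching requirement can be summarized as a rule: a transfusion is safe only when the donor's red cells carry no marker that is absent from the recipient's own cells, since the recipient makes antibodies against exactly the markers it lacks. A simplified Python sketch of this rule (names are my own; treating the Rh marker exactly like A and B glosses over how anti-Rh antibodies actually arise, so this is an illustration, not clinical logic):

```python
def markers(blood_type):
    """Set of markers on red cells of this type, e.g. 'A+' -> {'A', 'Rh'}."""
    letters, rh = blood_type[:-1], blood_type[-1]
    m = set(letters) - {"O"}       # type O carries neither the A nor the B marker
    if rh == "+":
        m.add("Rh")                # Rh-positive blood carries the Rh marker
    return m

def compatible(donor, recipient):
    """Donor cells must carry no marker absent from the recipient's own cells."""
    return markers(donor) <= markers(recipient)

# An A positive recipient: which donor types pass the rule?
print([d for d in ["O-", "O+", "A-", "A+", "B+", "AB-"] if compatible(d, "A+")])
# ['O-', 'O+', 'A-', 'A+']
```

The subset test reproduces the compatibility pattern the article goes on to describe for an A positive patient: type O and type A blood are accepted, while anything carrying the B marker is rejected by the patient's anti-B antibodies.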
For example, a person who is blood type A positive will not make antibodies against the A or Rh markers, but will make antibodies against the B marker, which is not on that person’s own red blood cells. If blood containing the B marker (from types B positive, B negative, AB positive, or AB negative) is transfused into this person, then the transfused red blood cells will be rapidly destroyed by the patient’s anti-B antibodies. In this case, the transfusion will do the patient no good and may even result in serious harm. For a successful blood transfusion into an A positive blood type individual, blood that is type O negative, O positive, A negative, or A positive is needed because these blood types will not be attacked by the patient’s anti-B antibodies. D White Blood Cells White blood cells only make up about 1 percent of blood, but their small number belies their immense importance. They play a vital role in the body’s immune system—the primary defense mechanism against invading bacteria, viruses, fungi, and parasites. They often accomplish this goal through direct attack, which usually involves identifying the invading organism as foreign, attaching to it, and then destroying it. This process is referred to as phagocytosis. White blood cells also produce antibodies, which are released into the circulating blood to target and attach to foreign organisms. After attachment, the antibody may neutralize the organism, or it may elicit help from other immune system cells to destroy the foreign substance. There are several varieties of white blood cells, including neutrophils, monocytes, and lymphocytes, all of which interact with one another and with plasma proteins and other cell types to form the complex and highly effective immune system. E Platelets and Clotting The smallest cells in the blood are the platelets, which are designed for a single purpose—to begin the process of coagulation, or forming a clot, whenever a blood vessel is broken. 
As soon as an artery or vein is injured, the platelets in the area of the injury begin to clump together and stick to the edges of the cut. They also release messengers into the blood that perform a variety of functions: constricting the blood vessels to reduce bleeding, attracting more platelets to the area to enlarge the platelet plug, and initiating the work of plasma-based clotting factors, such as fibrinogen. Through a complex mechanism involving many steps and many clotting factors, the plasma protein fibrinogen is transformed into long, sticky threads of fibrin. Together, the platelets and the fibrin create an intertwined meshwork that forms a stable clot. This self-sealing aspect of the blood is crucial to survival. IV PRODUCTION AND ELIMINATION OF BLOOD CELLS Blood is produced in the bone marrow, a tissue in the central cavity inside almost all of the bones in the body. In infants, the marrow in most of the bones is actively involved in blood cell formation. By later adult life, active blood cell formation gradually ceases in the bones of the arms and legs and concentrates in the skull, spine, ribs, and pelvis. Red blood cells, white blood cells, and platelets grow from a single precursor cell, known as a hematopoietic stem cell. Remarkably, experiments have suggested that as few as 10 stem cells can, in four weeks, multiply into 30 trillion red blood cells, 30 billion white blood cells, and 1.2 trillion platelets—enough to replace every blood cell in the body. Red blood cells have the longest average life span of any of the cellular elements of blood. A red blood cell lives 100 to 120 days after being released from the marrow into the blood. Over that period of time, red blood cells gradually age. Spent cells are removed by the spleen and, to a lesser extent, by the liver. The spleen and the liver also remove any red blood cells that become damaged, regardless of their age.
The body efficiently recycles many components of the damaged cells, including parts of the hemoglobin molecule, especially the iron contained within it. The majority of white blood cells have a relatively short life span. They may survive only 18 to 36 hours after being released from the marrow. However, some of the white blood cells are responsible for maintaining what is called immunologic memory. These memory cells retain knowledge of what infectious organisms the body has previously been exposed to. If one of those organisms returns, the memory cells initiate an extremely rapid response designed to kill the foreign invader. Memory cells may live for years or even decades before dying. Memory cells make immunizations possible. An immunization, also called a vaccination or an inoculation, is a method of using a vaccine to make the human body immune to certain diseases. A vaccine consists of an infectious agent that has been weakened or killed in the laboratory so that it cannot produce disease when injected into a person, but can spark the immune system to generate memory cells and antibodies specific for the infectious agent. If the infectious agent should ever invade that vaccinated person in the future, these memory cells will direct the cells of the immune system to target the invader before it has the opportunity to cause harm. Platelets have a life span of seven to ten days in the blood. They either participate in clot formation during that time or, when they have reached the end of their lifetime, are eliminated by the spleen and, to a lesser extent, by the liver. V BLOOD DISEASES Many diseases are caused by abnormalities in the blood. These diseases are categorized by which component of the blood is affected. A Red Blood Cell Diseases One of the most common blood diseases worldwide is anemia, which is characterized by an abnormally low number of red blood cells or low levels of hemoglobin. 
One of the major symptoms of anemia is fatigue, due to the failure of the blood to carry enough oxygen to all of the tissues. The most common type of anemia, iron-deficiency anemia, occurs because the marrow fails to produce sufficient red blood cells. When insufficient iron is available to the bone marrow, it slows down its production of hemoglobin and red blood cells. The most common causes of iron-deficiency anemia are certain infections that result in gastrointestinal blood loss and the consequent chronic loss of iron. Adding supplemental iron to the diet is often sufficient to cure iron-deficiency anemia. Some anemias are the result of increased destruction of red blood cells, as in the case of sickle-cell anemia, a genetic disease most common in persons of African ancestry. The red blood cells of sickle-cell patients assume an unusual crescent shape, causing them to become trapped in some blood vessels, blocking the flow of other blood cells to tissues and depriving them of oxygen. B White Blood Cell Diseases Some white blood cell diseases are characterized by an insufficient number of white blood cells. This can be caused by the failure of the bone marrow to produce adequate numbers of normal white blood cells, or by diseases that lead to the destruction of crucial white blood cells. These conditions result in severe immune deficiencies characterized by recurrent infections. Any disease in which excess white blood cells are produced, particularly immature white blood cells, is called leukemia, or blood cancer. Many cases of leukemia are linked to gene abnormalities, resulting in unchecked growth of immature white blood cells. If this growth is not halted, it often results in the death of the patient. These genetic abnormalities are not inherited in the vast majority of cases, but rather occur after birth. 
Although some causes of these abnormalities are known, for example exposure to high doses of radiation or the chemical benzene, most remain poorly understood. Treatment for leukemia typically involves the use of chemotherapy, in which strong drugs are used to target and kill leukemic cells, permitting normal cells to regenerate. In some cases, bone marrow transplants are effective. Much progress has been made over the last 30 years in the treatment of this disease. In one type of childhood leukemia, more than 80 percent of patients can now be cured of their disease. C Coagulation Diseases One disease of the coagulation system is hemophilia, a genetic bleeding disorder in which one of the plasma clotting factors, usually factor VIII, is produced in abnormally low quantities, resulting in uncontrolled bleeding from minor injuries. Although individuals with hemophilia are able to form a good initial platelet plug when blood vessels are damaged, they are not easily able to form the meshwork that holds the clot firmly intact. As a result, bleeding may occur some time after the initial traumatic event. Treatment for hemophilia relies on giving transfusions of factor VIII. Factor VIII can be isolated from the blood of normal blood donors but it also can be manufactured in a laboratory through a process known as gene cloning. VI BLOOD BANKS The Red Cross and a number of other organizations run programs, known as blood banks, to collect, store, and distribute blood and blood products for transfusions. When blood is donated, its blood type is determined so that only appropriately matched blood is given to patients needing a transfusion. Before using the blood, the blood bank also tests it for the presence of disease-causing organisms, such as hepatitis viruses and human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS). 
This blood screening dramatically reduces, but does not fully eliminate, the risk to the recipient of acquiring a disease through a blood transfusion. Blood donation, which is extremely safe, generally involves giving about 400 to 500 ml (about 1 pt) of blood, which is only about 7 percent of a person’s total blood.
VII BLOOD IN NONHUMANS
One-celled organisms have no need for blood. They are able to absorb nutrients, expel wastes, and exchange gases with their environment directly. Simple multicelled marine animals, such as sponges, jellyfishes, and anemones, also do not have blood. They use the seawater that bathes their cells to perform the functions of blood. However, all more complex multicellular animals have some form of a circulatory system using blood. In some invertebrates, there are no cells analogous to red blood cells. Instead, hemoglobin, or the related copper compound hemocyanin, circulates dissolved in the plasma. The blood of complex multicellular animals tends to be similar to human blood, but there are also some significant differences, typically at the cellular level. For example, fish, amphibians, and reptiles possess red blood cells that have a nucleus, unlike the red blood cells of mammals. The immune system of invertebrates is more primitive than that of vertebrates, lacking the functionality associated with the white blood cell and antibody system found in mammals. Some Arctic fish species produce proteins in their blood that act as a type of antifreeze, enabling them to survive in environments where the blood of other animals would freeze. Nonetheless, the transportation, communication, and protection functions that make blood essential to the continuation of life occur throughout much of the animal kingdom.
(IV) Heavy water
Almost all the hydrogen in water has an atomic weight of 1.
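The heavy-water discussion reduces to simple isotope arithmetic: an atom's mass number A is its protons (Z) plus neutrons (N), so N = A − Z, and for hydrogen Z = 1. A minimal sketch of that arithmetic, plus a rough estimate of how little heavy water a litre of ordinary water holds, assuming the roughly 1-in-6000 proportion of D2O found in natural water (the molar masses below are standard values, used for illustration only):

```python
# Hydrogen isotopes: mass number A = protons (Z) + neutrons (N),
# so N = A - Z.  For hydrogen, Z = 1.
def neutrons(mass_number, atomic_number):
    return mass_number - atomic_number

for name, a in [("hydrogen-1", 1), ("deuterium", 2), ("tritium", 3)]:
    print(name, "has", neutrons(a, 1), "neutron(s)")   # 0, 1, 2

# Rough mass of heavy water (D2O) in one litre of ordinary water,
# assuming about 1 D2O molecule per 6000 water molecules.
M_H2O, M_D2O = 18.02, 20.03          # molar masses, g/mol
moles_water = 1000.0 / M_H2O         # ~55.5 mol in 1 L (1000 g)
grams_d2o = (moles_water / 6000) * M_D2O
print(round(grams_d2o, 2), "g of D2O per litre")       # ~0.19 g
```

The result, a fraction of a gram per litre, shows why heavy water went undetected until Urey's isotope work.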
The American chemist Harold Clayton Urey discovered in 1932 the presence in water of a small amount (1 part in 6000) of so-called heavy water, or deuterium oxide (D2O); deuterium is the hydrogen isotope with an atomic weight of 2. In 1951 the American chemist Aristid Grosse discovered that naturally occurring water also contains minute traces of tritium oxide (T2O); tritium is the hydrogen isotope with an atomic weight of 3. See Atom. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Hard Water
Hardness of natural waters is caused largely by calcium and magnesium salts and to a small extent by iron, aluminum, and other metals. Hardness resulting from the bicarbonates and carbonates of calcium and magnesium is called temporary hardness and can be removed by boiling, which also sterilizes the water. The residual hardness is known as noncarbonate, or permanent, hardness. The methods of softening noncarbonate hardness include the addition of sodium carbonate and lime, and filtration through natural or artificial zeolites, which absorb the hardness-producing metallic ions and release sodium ions to the water. See Ion Exchange; Zeolite. Sequestering agents in detergents serve to inactivate the substances that make water hard. Iron, which causes an unpleasant taste in drinking water, may be removed by aeration and sedimentation or by passing the water through iron-removing zeolite filters, or the iron may be stabilized by addition of such salts as polyphosphates. For use in laboratory applications, water is either distilled or demineralized by passing it through ion-absorbing compounds.
(v) Smallpox, highly contagious viral disease that is often fatal. The disease is chiefly characterized by a skin rash that develops on the face, chest, back, and limbs. Over the course of a week the rash develops into pustular (pus-filled) pimples resembling boils. In extreme cases the pustular pimples run together—usually an indication of a fatal infection.
Death may result from a secondary bacterial infection of the pustules, from cell damage caused by the viral infection, or from heart attack or shock. In the latter stages of nonfatal cases, smallpox pustules become crusted, often leaving the survivor with permanent, pitted scars. Smallpox is caused by a virus. An infected person spreads virus particles into the air in the form of tiny droplets emitted from the mouth by speaking, coughing, or simply breathing. The virus can then infect anyone who inhales the droplets. By this means, smallpox can spread extremely rapidly from person to person. Smallpox has afflicted humanity for thousands of years, causing epidemics from ancient times through the 20th century. No one is certain where the smallpox virus came from, but scientists speculate that several thousand years ago the virus made a trans-species jump into humans from an animal—likely a rodent species such as a mouse. The disease probably did not become established among humans until the beginnings of agriculture gave rise to the first large settlements in the Nile valley (northeastern Africa) and Mesopotamia (now eastern Syria, southeastern Turkey, and Iraq) more than 10,000 years ago. Over the next several centuries smallpox established itself as a widespread disease in Europe, Asia, and across Africa. During the 16th and 17th centuries, a time when Europeans made journeys of exploration and conquest to the Americas and other continents, smallpox went with them. By 1518 the disease reached the Americas aboard a Spanish ship that landed at the island of Hispaniola (now the Dominican Republic and Haiti) in the West Indies. Before long smallpox had killed half of the Taíno people, the native population of the island. The disease followed the Spanish conquistadors into Mexico and Central America in 1520. With fewer than 500 men, the Spanish explorer Hernán Cortés was able to conquer the great Aztec Empire under the emperor Montezuma in what is now Mexico. 
One of Cortés's men was infected with smallpox, triggering an epidemic that ultimately killed an estimated 3 million Aztecs, one-third of the population. A similar path of devastation was left among the people of the Inca Empire of South America. Smallpox killed the Inca emperor Huayna Capac in 1525, along with an estimated 100,000 Incas in the capital city of Cuzco. The Incas and Aztecs are only two of many examples of smallpox cutting a swath through a native population in the Americas, easing the way for Europeans to conquer and colonize new territory. It can truly be said that smallpox changed history. Yet the story of smallpox is also the story of great biomedical advancement and of ultimate victory. As the result of a worldwide effort of vaccination and containment, the last naturally occurring infection of smallpox occurred in 1977. Stockpiles of the virus now exist only in secured laboratories. Some experts, however, are concerned about the potential use of smallpox as a weapon of bioterrorism. Thus, despite being deliberately and successfully eradicated, smallpox may still pose a threat to humanity.
Measles, also rubeola, acute, highly contagious, fever-producing disease caused by a filterable virus, different from the virus that causes the less serious disease German measles, or rubella. Measles is characterized by small red dots appearing on the surface of the skin, irritation of the eyes (especially on exposure to light), coughing, and a runny nose. About 12 days after first exposure, the fever, sneezing, and runny nose appear. Coughing and swelling of the neck glands often follow. Four days later, red spots appear on the face or neck and then on the trunk and limbs. In 2 or 3 days the rash subsides and the fever falls; some peeling of the involved skin areas may take place. Infection of the middle ear may also occur.
Measles was formerly one of the most common childhood diseases. Since the development of an effective vaccine in 1963, it has become much less frequent. By 1988 annual measles cases in the United States had been reduced to fewer than 3,500, compared with about 500,000 per year in the early 1960s. However, the number of new cases jumped to more than 18,000 in 1989 and to nearly 28,000 in 1990. Most of these cases occurred among inner-city preschool children and recent immigrants, but adolescents and young adults, who may have lost immunity (see Immunization) from their childhood vaccinations, also experienced an increase. The number of new cases declined rapidly in the 1990s and by 1999 fewer than 100 cases were reported. The reasons for this resurgence and subsequent decline are not clearly understood. In other parts of the world measles is still a common childhood disease and according to the World Health Organization (WHO), about 1 million children die from measles each year. In the U.S., measles is rarely fatal; should the virus spread to the brain, however, it can cause death or brain damage (see Encephalitis). No specific treatment for measles exists. Patients are kept isolated from other susceptible individuals, usually resting in bed, and are treated with aspirin, cough syrup, and skin lotions to lessen fever, coughing, and itching. The disease usually confers immunity after one attack, and an immune pregnant woman passes the antibody in the globulin fraction of the blood serum, through the placenta, to her fetus.
(vi) PIG IRON
The basic materials used for the manufacture of pig iron are iron ore, coke, and limestone. The coke is burned as a fuel to heat the furnace; as it burns, the coke gives off carbon monoxide, which combines with the iron oxides in the ore, reducing them to metallic iron.
This is the basic chemical reaction in the blast furnace; it has the equation: Fe2O3 + 3CO = 3CO2 + 2Fe. The limestone in the furnace charge is used as an additional source of carbon monoxide and as a “flux” to combine with the infusible silica present in the ore to form fusible calcium silicate. Without the limestone, iron silicate would be formed, with a resulting loss of metallic iron. Calcium silicate plus other impurities form a slag that floats on top of the molten metal at the bottom of the furnace. Ordinary pig iron as produced by blast furnaces contains iron, about 92 percent; carbon, 3 or 4 percent; silicon, 0.5 to 3 percent; manganese, 0.25 to 2.5 percent; phosphorus, 0.04 to 2 percent; and a trace of sulfur. A typical blast furnace consists of a cylindrical steel shell lined with a refractory, which is any nonmetallic substance such as firebrick. The shell is tapered at the top and at the bottom and is widest at a point about one-quarter of the distance from the bottom. The lower portion of the furnace, called the bosh, is equipped with several tubular openings or tuyeres through which the air blast is forced. Near the bottom of the bosh is a hole through which the molten pig iron flows when the furnace is tapped, and above this hole, but below the tuyeres, is another hole for draining the slag. The top of the furnace, which is about 27 m (about 90 ft) in height, contains vents for the escaping gases, and a pair of round hoppers closed with bell-shaped valves through which the charge is introduced into the furnace. The materials are brought up to the hoppers in small dump cars or skips that are hauled up an inclined external skip hoist. Blast furnaces operate continuously. The raw material to be fed into the furnace is divided into a number of small charges that are introduced into the furnace at 10- to 15-min intervals. Slag is drawn off from the top of the melt about once every 2 hr, and the iron itself is drawn off or tapped about five times a day. 
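The reduction equation given above can be checked, and turned into a yield estimate, with a short stoichiometry sketch. The atomic masses are standard values; the assumption that the ore is pure Fe2O3 is for illustration only, since real iron ore carries silica and other impurities:

```python
# Blast-furnace reduction: Fe2O3 + 3 CO -> 2 Fe + 3 CO2
FE, O = 55.85, 16.00                # standard atomic masses, g/mol
M_FE2O3 = 2 * FE + 3 * O            # 159.70 g/mol

# Mass fraction of iron in the oxide:
iron_fraction = (2 * FE) / M_FE2O3  # about 0.70

# Iron theoretically recoverable from one tonne of pure Fe2O3:
kg_iron = 1000 * iron_fraction
print(round(kg_iron), "kg of iron per tonne of Fe2O3")  # ~699 kg
```

The roughly 70 percent mass fraction explains why so much ore, coke, and flux must be charged for every tonne of pig iron tapped.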
The air used to supply the blast in a blast furnace is preheated to temperatures between approximately 540° and 870° C (approximately 1,000° and 1,600° F). The heating is performed in stoves, cylinders containing networks of firebrick. The bricks in the stoves are heated for several hours by burning blast-furnace gas, the waste gases from the top of the furnace. Then the flame is turned off and the air for the blast is blown through the stove. The weight of air used in the operation of a blast furnace exceeds the total weight of the other raw materials employed. An important development in blast furnace technology, the pressurizing of furnaces, was introduced after World War II. By “throttling” the flow of gas from the furnace vents, the pressure within the furnace may be built up to 1.7 atm or more. The pressurizing technique makes possible better combustion of the coke and higher output of pig iron. The output of many blast furnaces can be increased 25 percent by pressurizing. Experimental installations have also shown that the output of blast furnaces can be increased by enriching the air blast with oxygen. The process of tapping consists of knocking out a clay plug from the iron hole near the bottom of the bosh and allowing the molten metal to flow into a clay-lined runner and then into a large, brick-lined metal container, which may be either a ladle or a rail car capable of holding as much as 100 tons of metal. Any slag that may flow from the furnace with the metal is skimmed off before it reaches the container. The container of molten pig iron is then transported to the steelmaking shop. Modern-day blast furnaces are operated in conjunction with basic oxygen furnaces and sometimes the older open-hearth furnaces as part of a single steel-producing plant. In such plants the molten pig iron is used to charge the steel furnaces. 
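The preheat range quoted above is easy to verify: converting the Celsius figures with F = C × 9/5 + 32 reproduces the approximate Fahrenheit values given in the text.

```python
# Convert the blast preheat temperatures from Celsius to Fahrenheit
# to check the approximate figures quoted in the text.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

print(c_to_f(540))  # 1004.0 -> "approximately 1,000 deg F"
print(c_to_f(870))  # 1598.0 -> "approximately 1,600 deg F"
```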
The molten metal from several blast furnaces may be mixed in a large ladle before it is converted to steel, to minimize any irregularities in the composition of the individual melts.
STAINLESS STEEL
Stainless steels contain chromium, nickel, and other alloying elements that keep them bright and rust resistant in spite of moisture or the action of corrosive acids and gases. Some stainless steels are very hard; some have unusual strength and will retain that strength for long periods at extremely high and low temperatures. Because of their shining surfaces, architects often use them for decorative purposes. Stainless steels are used for the pipes and tanks of petroleum refineries and chemical plants, for jet planes, and for space capsules. Surgical instruments and equipment are made from these steels, and they are also used to patch or replace broken bones because the steels can withstand the action of body fluids. In kitchens and in plants where food is prepared, handling equipment is often made of stainless steel because it does not taint the food and can be easily cleaned.
(VII) Alloy, substance composed of two or more metals. Alloys, like pure metals, possess metallic luster and conduct heat and electricity well, although not generally as well as do the pure metals of which they are formed. Compounds that contain both a metal or metals and certain nonmetals, particularly those containing carbon, are also called alloys. The most important of these is steel. Simple carbon steels consist of about 0.5 percent manganese and up to 0.8 percent carbon, with the remaining material being iron. An alloy may consist of an intermetallic compound, a solid solution, an intimate mixture of minute crystals of the constituent metallic elements, or any combination of solutions or mixtures of the foregoing.
Intermetallic compounds, such as NaAu2, CuSn, and CuAl2, do not follow the ordinary rules of valency. They are generally hard and brittle; although they have historically been of limited use where strength is required, many new developments have made such compounds increasingly important. Alloys consisting of solutions or mixtures of two metals generally have lower melting points than do the pure constituents. A mixture with a melting point lower than that of any other mixture of the same constituents is called a eutectic. The eutectoid, the solid-phase analog of the eutectic, frequently has better physical characteristics than do alloys of different proportions. The properties of alloys are frequently far different from those of their constituent elements, and such properties as strength and corrosion resistance may be considerably greater for an alloy than for any of the separate metals. For this reason, alloys are more generally used than pure metals. Steel is stronger and harder than wrought iron, which is approximately pure iron, and is used in far greater quantities. The alloy steels, mixtures of steel with such metals as chromium, manganese, molybdenum, nickel, tungsten, and vanadium, are stronger and harder than steel itself, and many of them are also more corrosion-resistant than iron or steel. An alloy can often be made to match a predetermined set of characteristics. An important case in which particular characteristics are necessary is the design of rockets, spacecraft, and supersonic aircraft. The materials used in these vehicles and their engines must be light in weight, very strong, and able to sustain very high temperatures. To withstand these high temperatures and reduce the overall weight, lightweight, high-strength alloys of aluminum, beryllium, and titanium have been developed.
To resist the heat generated during reentry into the atmosphere of the earth, alloys containing heat-resistant metals such as tantalum, niobium, tungsten, cobalt, and nickel are being used in space vehicles. A wide variety of special alloys containing metals such as beryllium, boron, niobium, hafnium, and zirconium, which have particular nuclear absorption characteristics, are used in nuclear reactors. Niobium-tin alloys are used as superconductors at extremely low temperatures. Special copper, nickel, and titanium alloys, designed to resist the corrosive effects of boiling salt water, are used in desalination plants. Historically, most alloys have been prepared by mixing the molten materials. More recently, powder metallurgy has become important in the preparation of alloys with special characteristics. In this process, the alloys are prepared by mixing dry powders of the materials, squeezing them together under high pressure, and then heating them to temperatures just below their melting points. The result is a solid, homogeneous alloy. Mass-produced products may be prepared by this technique at great savings in cost. Among the alloys made possible by powder metallurgy are the cermets. These alloys of metal and carbon (carbides), boron (borides), oxygen (oxides), silicon (silicides), and nitrogen (nitrides) combine the advantages of the high-temperature strength, stability, and oxidation resistance of the ceramic compound with the ductility and shock resistance of the metal. Another alloying technique is ion implantation, which has been adapted from the processes used to produce computer chips; beams of ions of carbon, nitrogen, and other elements are fired into selected metals in a vacuum chamber to produce a strong, thin layer of alloy on the metal surface. Bombarding titanium with nitrogen, for example, can produce a superior alloy for prosthetic implants. Sterling silver, 14-karat gold, white gold, and platinum-iridium are precious metal alloys.
Babbitt metal, brass, bronze, Dow-metal, German silver, gunmetal, Monel metal, pewter, and solder are alloys of less precious metals. Commercial aluminum is, because of impurities, actually an alloy. Alloys of mercury with other metals are called amalgams.
Amalgam
Mercury combines with all the common metals except iron and platinum to form alloys that are called amalgams. In one method of extracting gold and silver from their ores, the metals are combined with mercury to make them dissolve; the mercury is then removed by distillation. This method is no longer commonly used, however.
(viii) Isotope, one of two or more species of atom having the same atomic number, hence constituting the same element, but differing in mass number. As atomic number is equivalent to the number of protons in the nucleus, and mass number is the sum total of the protons plus the neutrons in the nucleus, isotopes of the same element differ from one another only in the number of neutrons in their nuclei. See Atom.
Isobar (plural isobars), noun. 1. Line showing weather patterns: a line drawn on a weather map that connects places with equal atmospheric pressure. Isobars are often used collectively to indicate the movement or formation of weather systems. 2. Atom with same mass number: one of two or more atoms that have the same mass number but different atomic numbers. [Mid-19th century. < Greek isobaros "of equal weight"]
(ix) Vein (anatomy)
Vein, in anatomy, blood vessel that conducts the deoxygenated blood from the capillaries back to the heart.
Three exceptions to this description exist: the pulmonary veins return blood from the lungs, where it has been oxygenated, to the heart; the portal veins receive blood from the pyloric, gastric, cystic, superior mesenteric, and splenic veins and, entering the liver, break up into small branches that pass through all parts of that organ; and the umbilical veins convey blood from the fetus to the mother's placenta. Veins enlarge as they proceed, gathering blood from their tributaries. They finally pour the blood through the superior and inferior venae cavae into the right atrium of the heart. Their coats are similar to those of the arteries, but thinner, and often transparent. See Circulatory System; Heart; Varicose Vein.
Artery, one of the tubular vessels that conveys blood from the heart to the tissues of the body. Two arteries have direct connection with the heart: (1) the aorta, which, with its branches, conveys oxygenated blood from the left ventricle to every part of the body; and (2) the pulmonary artery, which conveys blood from the right ventricle to the lungs, whence it is returned bearing oxygen to the left side of the heart (see Heart: Structure and Function). Arteries in their ultimate minute branchings are connected with the veins by capillaries. They are named usually from the part of the body where they are found, as the brachial (arm) or the metacarpal (wrist) artery; or from the organ which they supply, as the hepatic (liver) or the ovarian artery. The facial artery is the branch of the external carotid artery that passes up over the lower jaw and supplies the superficial portion of the face; the hemorrhoidal arteries are three vessels that supply the lower end of the rectum; the intercostal arteries are the arteries that supply the space between the ribs; the lingual artery is the branch of the external carotid artery that supplies the tongue.
The arteries expand and then constrict with each beat of the heart, a rhythmic movement that may be felt as the pulse. Disorders of the arteries may involve inflammation, infection, or degeneration of the walls of the arterial blood vessels. The most common arterial disease, and the one which is most often a contributory cause of death, particularly in old people, is arteriosclerosis, known popularly as hardening of the arteries. The hardening usually is preceded by atherosclerosis, an accumulation of fatty deposits on the inner lining of the arterial wall. The deposits reduce the normal flow of the blood through the artery. One of the substances associated with atherosclerosis is cholesterol. As arteriosclerosis progresses, calcium is deposited and scar tissue develops, causing the wall to lose its elasticity. Localized dilatation of the arterial wall, called an aneurysm, may also develop. Arteriosclerosis may affect any or all of the arteries of the body. If the blood vessels supplying the heart muscle are affected, the disease may lead to a painful condition known as angina pectoris. See Heart: Heart Diseases. The presence of arteriosclerosis in the wall of an artery can precipitate formation of a clot, or thrombus (see Thrombosis). Treatment consists of clot-dissolving enzymes called urokinase and streptokinase, which were approved for medical use in 1979. Studies indicate that compounds such as aspirin and sulfinpyrazone, which inhibit platelet reactivity, may act to prevent formation of a thrombus, but whether they can or should be taken in tolerable quantities over a long period of time for this purpose has not yet been determined. Embolism is the name given to the obstruction of an artery by a clot carried to it from another part of the body. Such floating clots may be caused by arteriosclerosis, but are most commonly a consequence of the detachment of a mass of fibrin from a diseased heart. 
Any artery may be obstructed by embolism; the consequences are most serious in the brain, the retina, and the limbs.
Aorta, principal artery of the body that carries oxygenated blood to most other arteries in the body. In humans the aorta rises from the left ventricle (lower chamber) of the heart, arches back and downward through the thorax, passes through the diaphragm into the abdomen, and divides into the right and left iliac arteries at about the level of the fourth lumbar vertebra. The aorta gives rise to the coronary arteries, which supply the heart muscle with blood, and to the innominate, subclavian, and carotid arteries, which supply the head and arms. The descending part of the aorta gives rise, in the thorax, to the intercostal arteries that branch in the body wall. In the abdomen it gives off the coeliac artery, which divides into the gastric, hepatic, and splenic arteries, which supply the stomach, liver, and spleen, respectively; the mesenteric arteries to the intestines; the renal arteries to the kidneys; and small branches to the body wall and to reproductive organs. The aorta is subject to a condition known as atherosclerosis, in which fat deposits attach to the aortic walls. If left untreated, this condition may lead to hypertension or to an aneurysm (a swelling of the vessel wall), which can be fatal.
VALVES
In passing through the system, blood pumped by the heart follows a winding course through the right chambers of the heart, into the lungs, where it picks up oxygen, and back into the left chambers of the heart. From these it is pumped into the main artery, the aorta, which branches into increasingly smaller arteries until it passes through the smallest, known as arterioles.
Beyond the arterioles, the blood passes through a vast number of tiny, thin-walled structures called capillaries. Here, the blood gives up its oxygen and its nutrients to the tissues and absorbs from them carbon dioxide and other waste products of metabolism. The blood completes its circuit by passing through small veins that join to form increasingly larger vessels until it reaches the largest veins, the inferior and superior venae cavae, which return it to the right side of the heart. Blood is propelled mainly by contractions of the heart; contractions of skeletal muscle also contribute to circulation. Valves in the heart and in the veins ensure its flow in one direction.
Q10: Gland
Gland, any structure of animals, plants, or insects that produces chemical secretions or excretions. Glands are classified by shape, such as tubular and saccular, or saclike, and by structure, such as simple and compound. Types of the simple tubular and the simple saccular glands are, respectively, the sweat and the sebaceous glands (see Skin). The kidney is a compound tubular gland, and the tear-producing glands are compound saccular (see Eye). The so-called lymph glands are erroneously named and are in reality nodes (see Lymphatic System). “Swollen glands” are actually infected lymph nodes. Glands are of two principal types: (1) those of internal secretion, called endocrine, and (2) those of external secretion, called exocrine. Some glands, such as the pancreas, produce both internal and external secretions. Because endocrine glands produce and release hormones (see Hormone) directly into the bloodstream without passing through a canal, they are called ductless. For the functions and diseases of endocrine glands, see Endocrine System. In animals, insects, and plants, exocrine glands secrete chemical substances for a variety of purposes.
In plants, they produce water, protective sticky fluids, and nectars. The materials for the eggs of birds, the shells of mussels, the cocoons of caterpillars and silkworms, the webs of spiders, and the wax of honeycombs are other examples of exocrine secretions.
Endocrine System
I INTRODUCTION
Endocrine System, group of specialized organs and body tissues that produce, store, and secrete chemical substances known as hormones. As the body's chemical messengers, hormones transfer information and instructions from one set of cells to another. Because of the hormones they produce, endocrine organs have a great deal of influence over the body. Among their many jobs are regulating the body's growth and development, controlling the function of various tissues, supporting pregnancy and other reproductive functions, and regulating metabolism. Endocrine organs are sometimes called ductless glands because they have no ducts connecting them to specific body parts. The hormones they secrete are released directly into the bloodstream. In contrast, the exocrine glands, such as the sweat glands or the salivary glands, release their secretions directly to target areas—for example, the skin or the inside of the mouth. Some of the body's glands are described as endo-exocrine glands because they secrete hormones as well as other types of substances. Even some nonglandular tissues produce hormone-like substances—nerve cells produce chemical messengers called neurotransmitters, for example. The earliest reference to the endocrine system comes from ancient Greece, in about 400 BC. However, it was not until the 16th century that accurate anatomical descriptions of many of the endocrine organs were published. Research during the 20th century has vastly improved our understanding of hormones and how they function in the body.
Today, endocrinology, the study of the endocrine glands, is an important branch of modern medicine. Endocrinologists are medical doctors who specialize in researching and treating disorders and diseases of the endocrine system. II COMPONENTS OF THE ENDOCRINE SYSTEM The primary glands that make up the human endocrine system are the hypothalamus, pituitary, thyroid, parathyroid, adrenal, pineal body, and reproductive glands—the ovary and testis. The pancreas, an organ often associated with the digestive system, is also considered part of the endocrine system. In addition, some nonendocrine organs are known to actively secrete hormones. These include the brain, heart, lungs, kidneys, liver, thymus, skin, and placenta. Almost all body cells can either produce or convert hormones, and some secrete hormones. For example, glucagon, a hormone that raises glucose levels in the blood when the body needs extra energy, is made in the pancreas but also in the wall of the gastrointestinal tract. However, it is the endocrine glands that are specialized for hormone production. They efficiently manufacture chemically complex hormones from simple chemical substances—for example, amino acids and carbohydrates—and they regulate their secretion more efficiently than any other tissues. The hypothalamus, found deep within the brain, directly controls the pituitary gland. It is sometimes described as the coordinator of the endocrine system. When information reaching the brain indicates that changes are needed somewhere in the body, nerve cells in the hypothalamus secrete body chemicals that either stimulate or suppress hormone secretions from the pituitary gland. Acting as liaison between the brain and the pituitary gland, the hypothalamus is the primary link between the endocrine and nervous systems. Located in a bony cavity just below the base of the brain is one of the endocrine system's most important members: the pituitary gland. 
Often described as the body’s master gland, the pituitary secretes several hormones that regulate the function of the other endocrine glands. Structurally, the pituitary gland is divided into two parts, the anterior and posterior lobes, each having separate functions. The anterior lobe regulates the activity of the thyroid and adrenal glands as well as the reproductive glands. It also regulates the body's growth and stimulates milk production in women who are breast-feeding. Hormones secreted by the anterior lobe include adrenocorticotropic hormone (ACTH), thyroid-stimulating hormone (TSH), luteinizing hormone (LH), follicle-stimulating hormone (FSH), growth hormone (GH), and prolactin. The anterior lobe also secretes endorphins, chemicals that act on the nervous system to reduce sensitivity to pain. The posterior lobe of the pituitary gland contains the nerve endings (axons) from the hypothalamus, which stimulate or suppress hormone production. This lobe secretes antidiuretic hormone (ADH), which controls water balance in the body, and oxytocin, which controls muscle contractions in the uterus. The thyroid gland, located in the neck, secretes hormones in response to stimulation by TSH from the pituitary gland. The thyroid secretes hormones—for example, thyroxine and triiodothyronine—that regulate growth and metabolism, and play a role in brain development during childhood. The parathyroid glands are four small glands located at the four corners of the thyroid gland. The hormone they secrete, parathyroid hormone, regulates the level of calcium in the blood. Located on top of the kidneys, the adrenal glands have two distinct parts. The outer part, called the adrenal cortex, produces a variety of hormones called corticosteroids, which include cortisol. These hormones regulate salt and water balance in the body, prepare the body for stress, regulate metabolism, interact with the immune system, and influence sexual function.
The inner part, the adrenal medulla, produces catecholamines, such as epinephrine, also called adrenaline, which increase the blood pressure and heart rate during times of stress. The reproductive components of the endocrine system, called the gonads, secrete sex hormones in response to stimulation from the pituitary gland. Located in the pelvis, the female gonads, the ovaries, produce eggs. They also secrete a number of female sex hormones, including estrogen and progesterone, which control development of the reproductive organs, stimulate the appearance of female secondary sex characteristics, and regulate menstruation and pregnancy. Located in the scrotum, the male gonads, the testes, produce sperm and also secrete a number of male sex hormones, or androgens. The androgens, the most important of which is testosterone, regulate development of the reproductive organs, stimulate male secondary sex characteristics, and stimulate muscle growth. The pancreas is positioned in the upper abdomen, just under the stomach. The major part of the pancreas, called the exocrine pancreas, functions as an exocrine gland, secreting digestive enzymes into the gastrointestinal tract. Distributed through the pancreas are clusters of endocrine cells that secrete insulin, glucagon, and somatostatin. These hormones all participate in regulating energy and metabolism in the body. The pineal body, also called the pineal gland, is located in the middle of the brain. It secretes melatonin, a hormone that may help regulate the wake-sleep cycle. Research has shown that disturbances in the secretion of melatonin are responsible, in part, for the jet lag associated with long-distance air travel. III HOW THE ENDOCRINE SYSTEM WORKS Hormones from the endocrine organs are secreted directly into the bloodstream, where special proteins usually bind to them, helping to keep the hormones intact as they travel throughout the body.
The proteins also act as a reservoir, allowing only a small fraction of the hormone circulating in the blood to affect the target tissue. Specialized proteins in the target tissue, called receptors, bind with the hormones in the bloodstream, inducing chemical changes in response to the body’s needs. Typically, only minute concentrations of a hormone are needed to achieve the desired effect. Too much or too little hormone can be harmful to the body, so hormone levels are regulated by a feedback mechanism. Feedback works something like a household thermostat. When the heat in a house falls, the thermostat responds by switching the furnace on, and when the temperature is too warm, the thermostat switches the furnace off. Usually, the change that a hormone produces also serves to regulate that hormone's secretion. For example, parathyroid hormone causes the body to increase the level of calcium in the blood. As calcium levels rise, the secretion of parathyroid hormone then decreases. This feedback mechanism allows for tight control over hormone levels, which is essential for ideal body function. Other mechanisms may also influence feedback relationships. For example, if an individual becomes ill, the adrenal glands increase the secretions of certain hormones that help the body deal with the stress of illness. The adrenal glands work in concert with the pituitary gland and the brain to increase the body’s tolerance of these hormones in the blood, preventing the normal feedback mechanism from decreasing secretion levels until the illness is gone. Long-term changes in hormone levels can influence the endocrine glands themselves. For example, if hormone secretion is chronically low, the increased stimulation by the feedback mechanism leads to growth of the gland. This can occur in the thyroid if a person's diet has insufficient iodine, which is essential for thyroid hormone production. 
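The negative-feedback loop described above can be sketched as a toy simulation, using the parathyroid-hormone/calcium example from the text. The function name and every numeric constant below are illustrative assumptions, not physiological values:

```python
# A toy simulation of hormonal negative feedback (thermostat-style control).
# Constants are illustrative only, not physiological data.

def simulate_feedback(calcium=1.0, setpoint=10.0, gain=0.4, clearance=0.1, steps=200):
    """Each step: hormone secretion scales with how far the calcium level is
    below the setpoint (the feedback), the hormone raises blood calcium, and
    a fixed fraction of calcium is then cleared from the blood."""
    history = []
    for _ in range(steps):
        pth = max(0.0, gain * (setpoint - calcium))  # low calcium -> more secretion
        calcium += pth                               # hormone raises blood calcium
        calcium -= clearance * calcium               # constant-fraction clearance
        history.append(calcium)
    return history

levels = simulate_feedback()
# The level climbs from its starting value and settles at a steady state
# below the setpoint, where secretion exactly balances clearance.
```

As in the thermostat analogy, the controlled quantity itself throttles the secretion that produces it, which is why only minute, tightly bounded hormone concentrations are needed.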
Constant stimulation from the pituitary gland to produce the needed hormone causes the thyroid to grow, eventually producing a medical condition known as goiter. IV DISEASES OF THE ENDOCRINE SYSTEM Endocrine disorders are classified in two ways: disturbances in the production of hormones, and the inability of tissues to respond to hormones. The first type, called production disorders, is divided into hypofunction (insufficient activity) and hyperfunction (excess activity). Hypofunction disorders can have a variety of causes, including malformations in the gland itself. Sometimes one of the enzymes essential for hormone production is missing, or the hormone produced is abnormal. More commonly, hypofunction is caused by disease or injury. Tuberculosis can appear in the adrenal glands, autoimmune diseases can affect the thyroid, and treatments for cancer—such as radiation therapy and chemotherapy—can damage any of the endocrine organs. Hypofunction can also result when target tissue is unable to respond to hormones. In many cases, the cause of a hypofunction disorder is unknown. Hyperfunction can be caused by glandular tumors that secrete hormone without responding to feedback controls. In addition, some autoimmune conditions create antibodies that have the side effect of stimulating hormone production. Infection of an endocrine gland can have the same result. Accurately diagnosing an endocrine disorder can be extremely challenging, even for an astute physician. Many diseases of the endocrine system develop over time, and clear, identifying symptoms may not appear for many months or even years. An endocrinologist evaluating a patient for a possible endocrine disorder relies on the patient's history of signs and symptoms, a physical examination, and the family history—that is, whether any endocrine disorders have been diagnosed in other relatives. A variety of laboratory tests—for example, a radioimmunoassay—are used to measure hormone levels.
Tests that directly stimulate or suppress hormone production are also sometimes used, and genetic testing for deoxyribonucleic acid (DNA) mutations affecting endocrine function can be helpful in making a diagnosis. Tests based on diagnostic radiology show anatomical pictures of the gland in question. A functional image of the gland can be obtained with radioactive labeling techniques used in nuclear medicine. One of the most common diseases of the endocrine system is diabetes mellitus, which occurs in two forms. The first, called diabetes mellitus Type 1, is caused by inadequate secretion of insulin by the pancreas. Diabetes mellitus Type 2 is caused by the body's inability to respond to insulin. Both types have similar symptoms, including excessive thirst, hunger, and urination as well as weight loss. Laboratory tests that detect glucose in the urine and elevated levels of glucose in the blood usually confirm the diagnosis. Treatment of diabetes mellitus Type 1 requires regular injections of insulin; some patients with Type 2 can be treated with diet, exercise, or oral medication. Diabetes can cause a variety of complications, including kidney problems, pain due to nerve damage, blindness, and coronary heart disease. Recent studies have shown that controlling blood sugar levels reduces the risk of developing diabetes complications considerably. Diabetes insipidus is caused by a deficiency of vasopressin, also known as antidiuretic hormone (ADH), which is secreted by the posterior lobe of the pituitary gland. Patients often experience increased thirst and urination. Treatment is with drugs, such as synthetic vasopressin, that help the body maintain water and electrolyte balance. Hypothyroidism is caused by an underactive thyroid gland, which results in a deficiency of thyroid hormone. Hypothyroid disorders include myxedema and cretinism, more properly known as congenital hypothyroidism.
Myxedema develops in older adults, usually after age 40, and causes lethargy, fatigue, and mental sluggishness. Congenital hypothyroidism, which is present at birth, can cause more serious complications, including mental retardation, if left untreated. Screening programs exist in most countries to test newborns for this disorder. Providing the body with replacement thyroid hormone makes almost all of these complications avoidable. Addison's disease is caused by decreased function of the adrenal cortex. Weakness, fatigue, abdominal pains, nausea, dehydration, fever, and hyperpigmentation (tanning without sun exposure) are among the many possible symptoms. Treatment involves providing the body with replacement corticosteroid hormones as well as dietary salt. Cushing's syndrome is caused by excessive secretion of glucocorticoids, the subgroup of corticosteroid hormones that includes hydrocortisone, by the adrenal glands. Symptoms may develop over many years prior to diagnosis and may include obesity, physical weakness, easily bruised skin, acne, hypertension, and psychological changes. Treatment may include surgery, radiation therapy, chemotherapy, or blockage of hormone production with drugs. Thyrotoxicosis is due to excess production of thyroid hormones. The most common cause is Graves' disease, an autoimmune disorder in which specific antibodies are produced that stimulate the thyroid gland. Thyrotoxicosis is eight to ten times more common in women than in men. Symptoms include nervousness, sensitivity to heat, heart palpitations, and weight loss. Many patients experience protruding eyes and tremors. Drugs that inhibit thyroid activity, surgery to remove the thyroid gland, and radioactive iodine that destroys the gland are common treatments. Acromegaly and gigantism are both caused by a pituitary tumor that stimulates production of excessive growth hormone, causing abnormal growth in particular parts of the body.
Acromegaly is rare and usually develops over many years in adult subjects. Gigantism occurs when the excess of growth hormone begins in childhood. Contributed By: Gad B. Kletter. Human hormones significantly affect the activity of every cell in the body. They influence mental acuity, physical agility, and body build and stature. Growth hormone is a hormone produced by the pituitary gland. It regulates growth by stimulating the formation of bone and the uptake of amino acids, molecules vital to building muscle and other tissue. Sex hormones regulate the development of sexual organs, sexual behavior, reproduction, and pregnancy. For example, gonadotropins, also secreted by the pituitary gland, are sex hormones that stimulate egg and sperm production. The gonadotropin that stimulates production of sperm in men and formation of ovary follicles in women is called follicle-stimulating hormone. When follicle-stimulating hormone binds to an ovary cell, it stimulates the enzymes needed for the synthesis of estradiol, a female sex hormone. Another gonadotropin called luteinizing hormone regulates the production of eggs in women and the production of the male sex hormone testosterone. Produced in the male gonads, or testes, testosterone regulates changes to the male body during puberty, influences sexual behavior, and plays a role in growth. The female sex hormones, called estrogens, regulate female sexual development and behavior as well as some aspects of pregnancy. Progesterone, a female hormone secreted in the ovaries, regulates menstruation and stimulates lactation in humans and other mammals. Other hormones regulate metabolism. For example, thyroxine, a hormone secreted by the thyroid gland, regulates rates of body metabolism. Glucagon and insulin, secreted in the pancreas, control levels of glucose in the blood and the availability of energy for the muscles.
A number of hormones, including insulin, glucagon, cortisol, growth hormone, epinephrine, and norepinephrine, maintain glucose levels in the blood. While insulin lowers the blood glucose, all the other hormones raise it. In addition, several other hormones participate indirectly in the regulation. A protein called somatostatin blocks the release of insulin, glucagon, and growth hormone, while another hormone, gastric inhibitory polypeptide, enhances insulin release in response to glucose absorption. This complex system permits blood glucose concentration to remain within a very narrow range, despite external conditions that may vary to extremes. Hormones also regulate blood pressure and other involuntary body functions. Epinephrine, also called adrenaline, is a hormone secreted in the adrenal gland. During periods of stress, epinephrine prepares the body for physical exertion by increasing the heart rate, raising the blood pressure, and releasing sugar stored in the liver for quick energy. [Figure: Insulin Secretion. This light micrograph of a section of the human pancreas shows one of the islets of Langerhans, center, a group of modified glandular cells. These cells secrete insulin, a hormone that helps the body metabolize sugars, fats, and starches. The blue and white lines in the islets of Langerhans are blood vessels that carry the insulin to the rest of the body. Insulin deficiency causes diabetes mellitus, a disease that affects at least 10 million people in the United States.] Hormones are sometimes used to treat medical problems, particularly diseases of the endocrine system. In people with diabetes mellitus type 1, for example, the pancreas secretes little or no insulin. Regular injections of insulin help maintain normal blood glucose levels.
Sometimes, an illness or injury not directly related to the endocrine system can be helped by a dose of a particular hormone. Steroid hormones are often used as anti-inflammatory agents to treat the symptoms of various diseases, including cancer, asthma, and rheumatoid arthritis. Oral contraceptives, or birth control pills, use small, regular doses of female sex hormones to prevent pregnancy. Initially, hormones used in medicine were collected from extracts of glands taken from humans or animals. For example, pituitary growth hormone was collected from the pituitary glands of dead human bodies, or cadavers, and insulin was extracted from cattle and hogs. As technology advanced, insulin molecules collected from animals were altered to produce the human form of insulin. With improvements in biochemical technology, many hormones are now made in laboratories from basic chemical compounds. This eliminates the risk of transferring contaminating agents sometimes found in the human and animal sources. Advances in genetic engineering even enable scientists to introduce a gene of a specific protein hormone into a living cell, such as a bacterium, which causes the cell to secrete excess amounts of a desired hormone. This technique, known as recombinant DNA technology, has vastly improved the availability of hormones. Recombinant DNA has been especially useful in producing growth hormone, once only available in limited supply from the pituitary glands of human cadavers. Treatments using the hormone were far from ideal because the cadaver hormone was often in short supply. Moreover, some of the pituitary glands used to make growth hormone were contaminated with particles called prions, which could cause diseases such as Creutzfeldt-Jakob disease, a fatal brain disorder. The advent of recombinant technology made growth hormone widely available for safe and effective therapy. Q11: Flower I INTRODUCTION Flower, reproductive organ of most seed-bearing plants.
Flowers carry out the multiple roles of sexual reproduction, seed development, and fruit production. Many plants produce highly visible flowers that have a distinctive size, color, or fragrance. Almost everyone is familiar with beautiful flowers such as the blossoms of roses, orchids, and tulips. But many plants—including oaks, beeches, maples, and grasses—have small, green or gray flowers that typically go unnoticed. Whether eye-catching or inconspicuous, all flowers produce the male or female sex cells required for sexual reproduction. Flowers are also the site of fertilization, which is the union of a male and female sex cell to produce a fertilized egg. The fertilized egg then develops into an embryonic (immature) plant, which forms part of the developing seed. Neighboring structures of the flower enclose the seed and mature into a fruit. Botanists estimate that there are more than 240,000 species of flowering plants. However, flowering plants are not the only seed-producing plants. Pines, firs, and cycads are among the few hundred plants that bear their seeds on the surface of cones, rather than within a fruit. Botanists call the cone-bearing plants gymnosperms, which means naked seeds; they refer to flowering plants as angiosperms, which means enclosed seeds. Flowering plants are more widespread than any other group of plants. They bloom on every continent, from the bogs and marshes of the Arctic tundra to the barren soils of Antarctica. Deserts, grasslands, rainforests, and other biomes display distinctive flower species. Even streams, rivers, lakes, and swamps are home to many flowering plants. In their diverse environments, flowers have evolved to become irreplaceable participants in the complex, interdependent communities of organisms that make up ecosystems. The seeds or fruits that flowers produce are food sources for many animals, large and small. 
In addition, many insects, bats, hummingbirds, and small mammals feed on nectar, a sweet liquid produced by many flowers, or on flower products known as pollen grains. The animals that eat flowers, seeds, and fruits are prey for other animals—lizards, frogs, salamanders, and fish, for example—which in turn are devoured by yet other animals, such as owls and snakes. Thus, flowers provide a bountiful feast that sustains an intricate web of predators and prey (see Food Web). Flowers play diverse roles in the lives of humans. Wildflowers of every hue brighten the landscape, and the attractive shapes and colors of cultivated flowers beautify homes, parks, and roadsides. The fleshy fruits that flowers produce, such as apples, grapes, strawberries, and oranges, are eaten worldwide, as are such hard-shelled fruits as pecans and other nuts. Flowers also produce wheat, rice, oats, and corn—the grains that are dietary mainstays throughout the world. People even eat unopened flowers, such as those of broccoli and cauliflower, which are popular vegetables. Natural dyes come from flowers, and fragrant flowers, such as jasmine and damask rose, are harvested for their oils and made into perfumes. Certain flowers, such as red clover blossoms, are collected for their medicinal properties, and edible flowers, such as nasturtiums, add color and flavor to a variety of dishes. Flowers also are used to symbolize emotions, as is evidenced by their use from ancient times in significant rituals, such as weddings and funerals. II PARTS OF A FLOWER Flowers typically are composed of four parts, or whorls, arranged in concentric rings attached to the tip of the stem. From innermost to outermost, these whorls are the (1) pistil, (2) stamens, (3) petals, and (4) sepals. A Pistil The innermost whorl, located in the center of the flower, is the female reproductive structure, or pistil. Often vase-shaped, the pistil consists of three parts: the stigma, the style, and the ovary. 
The stigma, a slightly flared and sticky structure at the top of the pistil, functions by trapping pollen grains, the structures that give rise to the sperm cells necessary for fertilization. The style is a narrow stalk that supports the stigma. The style rises from the ovary, a slightly swollen structure seated at the base of the flower. Depending on the species, the ovary contains one or more ovules, each of which holds one egg cell. After fertilization, the ovules develop into seeds, while the ovary enlarges into the fruit. If a flower has only one ovule, the fruit will contain one seed, as in a peach. The fruit of a flower with many ovules, such as a tomato, will have many seeds. An ovary that contains one or more ovules also is called a carpel, and a pistil may be composed of one to several carpels. B Stamens The next whorl consists of the male reproductive structures, several to many stamens arranged around the pistil. A stamen consists of a slender stalk called the filament, which supports the anther, a tiny compartment where pollen forms. When a flower is still an immature, unopened bud, the filaments are short and serve to transport nutrients to the developing pollen. As the flower opens, the filaments lengthen and hold the anthers higher in the flower, where the pollen grains are more likely to be picked up by visiting animals, wind, or in the case of some aquatic plants, by water. The animals, wind, or water might then carry the pollen to the stigma of an appropriate flower. The placement of pollen on the stigma is called pollination. Pollination initiates the process of fertilization. C Petals Petals, the next whorl, surround the stamens and collectively are termed the corolla. Many petals have bright colors, which attract animals that carry out pollination, collectively termed pollinators. 
Three groups of pigments—alone or in combination—produce a veritable rainbow of petal colors: anthocyanins yield shades of violet, blue, and red; betalains create reds; and carotenoids produce yellows and orange. Petal color can be modified in several ways. Texture, for example, can play a role in the overall effect—a smooth petal is shiny, while a rough one appears velvety. If cells inside the petal are filled with starch, they create a white layer that makes pigments appear brighter. Petals with flat air spaces between cells shimmer iridescently. In some flowers, the pigments form distinct patterns, invisible to humans but visible to bees, who can see ultraviolet light. Like the landing strips of an airport, these patterns, called nectar guides, direct bees to the nectar within the flower. Nectar is made in specialized glands located at or near the petal’s base. Some flowers secrete copious amounts of nectar and attract big pollinators with large appetites, such as bats. Other flowers, particularly those that depend on wind or water to transport their pollen, may secrete little or no nectar. The petals of many species also are the source of the fragrances that attract pollinators. In these species, the petals house tiny glands that produce essential, or volatile, oils that vaporize easily, often releasing a distinctive aroma. One flower can make dozens of different essential oils, which mingle to yield the flower’s unique fragrance. D Sepals The sepals, the outermost whorl, together are called the calyx. In the flower bud, the sepals tightly enclose and protect the petals, stamens, and pistil from rain or insects. The sepals unfurl as the flower opens and often resemble small green leaves at the flower’s base. In some flowers, the sepals are colorful and work with the petals to attract pollinators. E Variations in Structure Like virtually all forms in nature, flowers display many variations in their structure. 
Most flowers have all four whorls—pistil, stamens, petals, and sepals. Botanists call these complete flowers. But some flowers are incomplete, meaning they lack one or more whorls. Incomplete flowers are most common in plants whose pollen is dispersed by the wind or water. Since these flowers do not need to attract pollinators, most have no petals, and some even lack sepals. Certain wind-pollinated flowers do have small sepals and petals that create eddies in the wind, directing pollen to swirl around and settle on the flower. In still other flowers, the petals and sepals are fused into a structure called a floral tube. Flowers that lack either stamens or a pistil are said to be imperfect. The petal-like rays on the edge of a sunflower, for example, are actually tiny, imperfect flowers that lack stamens. Imperfect flowers can still function in sexual reproduction. A flower that lacks a pistil but has stamens produces pollen, and a flower with a pistil but no stamens provides ovules and can develop into fruits and seeds. Flowers that have only stamens are termed staminate, and flowers that have only a pistil are called pistillate. Although a single flower can be either staminate or pistillate, a plant species must have both to reproduce sexually. In some species with imperfect flowers, the staminate and pistillate flowers occur on the same plant. Such plants, known as monoecious species, include corn. The tassel at the top of the corn plant consists of hundreds of tiny staminate flowers, and the ears, which are located laterally on the stem, contain clusters of pistillate flowers. The silks of corn are very long styles leading to the ovaries, which, when ripe, form the kernels of corn. In dioecious species—such as date, willow, and hemp—staminate and pistillate flowers are found on different plants. A date tree, for example, will develop male or female flowers but not both.
In dioecious species, at least two plants, one bearing staminate flowers and one bearing pistillate flowers, are needed for pollination and fertilization. Other variations are found in the types of stems that support flowers. In some species, flowers are attached to only one main stem, called the peduncle. In others, flowers are attached to smaller stems, called pedicels, that branch from the peduncle. The peduncle and pedicels orient a flower so that its pollinator can reach it. In the morning glory, for example, pedicels hold the flowers in a horizontal position. This enables their hummingbird pollinators to feed since they do not crawl into the flower as other pollinators do, but hover near the flower and lick the nectar with their long tongues. Scientists assign specific terms to the different flower and stem arrangements to assist in the precise identification of a flower. A plant with just one flower at the tip of the peduncle—a tulip, for example—is termed solitary. In a spike, such as sage, flowers are attached to the sides of the peduncle. Sometimes flowers are grouped together in a cluster called an inflorescence. In an indeterminate inflorescence, the lower flowers bloom first, and blooming proceeds over a period of days from the bottom to the top of the peduncle or pedicels. As long as light, water, temperature, and nutrients are favorable, the tip of the peduncle or pedicel continues to add new buds. There are several types of indeterminate inflorescences. These include the raceme, formed by a series of pedicels that emerge from the peduncle, as in snapdragons and lupines; and the panicle, in which the series of pedicels branches and rebranches, as in lilac. In determinate inflorescences, called cymes, the peduncle is capped by a flower bud, which prevents the stem from elongating and adding more flowers. 
However, new flower buds appear on side pedicels that form below the central flower, and the flowers bloom from the top to the bottom of the pedicels. Flowers that bloom in cymes include chickweed and phlox. III SEXUAL REPRODUCTION Sexual reproduction mixes the hereditary material from two parents, creating a population of genetically diverse offspring. Such a population can better withstand environmental changes. Unlike animals, flowers cannot move from place to place, yet sexual reproduction requires the union of the egg from one parent with the sperm from another parent. Flowers overcome their lack of mobility through the all-important process of pollination. Pollination occurs in several ways. In most flowers pollinated by insects and other animals, the pollen escapes through pores in the anthers. As pollinators forage for food, the pollen sticks to their body and then rubs off on the flower's stigma, or on the stigma of the next flower they visit. In plants that rely on wind for pollination, the anthers burst open, releasing a cloud of yellow, powdery pollen that drifts to other flowers. In a few aquatic plants, pollen is released into the water, where it floats to other flowers. Pollen consists of thousands of microscopic pollen grains. A tough pollen wall surrounds each grain. In most flowers, the pollen grains released from the anthers contain two cells. If a pollen grain lands on the stigma of the same species, the pollen grain germinates—one cell within the grain emerges through the pollen wall and contacts the surface of the stigma, where it begins to elongate. The lengthening cell grows through the stigma and style, forming a pollen tube that transports the other cell within the pollen down the style to the ovary. As the tube grows, the cell within it divides to produce two sperm cells, the male sex cells. In some species, the sperm are produced before the pollen is released from the anther. 
Independently of the pollen germination and pollen tube growth, developmental changes occur within the ovary. The ovule produces several specialized structures—among them, the egg, or female sex cell. The pollen tube grows into the ovary, crosses the ovule wall, and releases the two sperm cells into the ovule. One sperm unites with the egg, triggering hormonal changes that transform the ovule into a seed. The outer wall of the ovule develops into the seed coat, while the fertilized egg grows into an embryonic plant. The growing embryonic plant relies on a starchy, nutrient-rich food in the seed called endosperm. Endosperm develops from the union of the second sperm with the two polar nuclei, also known as the central cell nuclei, structures also produced by the ovule. As the seed grows, hormones are released that stimulate the walls of the ovary to expand, and the ovary develops into the fruit. The mature fruit often is hundreds or even thousands of times larger than the tiny ovary from which it grew, and the seeds also are quite large compared to the minuscule ovules from which they originated. The fruits, which are unique to flowering plants, play an extremely important role in dispersing seeds. Animals eat fruits, such as berries and grains. The seeds pass through the digestive tract of the animal unharmed and are deposited in a wide variety of locations, where they germinate to produce the next generation of flowering plants, thus continuing the species. Other fruits are dispersed far and wide by wind or water; the fruit of maple trees, for example, has a winglike structure that catches the wind. IV FLOWERING AND THE LIFE CYCLE The life cycle of a flowering plant begins when the seed germinates. It progresses through the growth of roots, stems, and leaves; formation of flower buds; pollination and fertilization; and seed and fruit development. The life cycle ends with senescence, or old age, and death. 
Depending on the species, the life cycle of a plant may last one, two, or many years. Plants called annuals carry out their life cycle within one year. Biennial plants live for two years: The first year they produce leaves, and in the second year they produce flowers and fruits and then die. Perennial plants live for more than one year. Some perennials bloom every year, while others, like agave, live for years without flowering and then in a few weeks produce thousands of flowers, fruits, and seeds before dying. Whatever the life cycle, most plants flower in response to certain cues. A number of factors influence the timing of flowering. The age of the plant is critical—most plants must be at least one or two weeks old before they bloom; presumably they need this time to accumulate the energy reserves required for flowering. The number of hours of darkness is another factor that influences flowering. Many species bloom only when the night is just the right length—a phenomenon called photoperiodism. Poinsettias, for example, flower in winter when the nights are long, while spinach blooms when the nights are short—late spring through late summer. Temperature, light intensity, and moisture also affect the time of flowering. In the desert, for example, heavy rains that follow a long dry period often trigger flowers to bloom. V EVOLUTION OF FLOWERS Flowering plants are thought to have evolved around 135 million years ago from cone-bearing gymnosperms. Scientists had long proposed that the first flower most likely resembled today’s magnolias or water lilies, two types of flowers that lack some of the specialized structures found in most modern flowers. But in the late 1990s scientists compared the genetic material deoxyribonucleic acid (DNA) of different plants to determine their evolutionary relationships. From these studies, scientists identified a small, cream-colored flower from the genus Amborella as the only living relative of the first flowering plant. 
This rare plant is found only on the South Pacific island of New Caledonia. The evolution of flowers dramatically changed the face of earth. On a planet where algae, ferns, and cycads tinged the earth with a monochromatic green hue, flowers emerged to paint the earth with vivid shades of red, pink, orange, yellow, blue, violet, and white. Flowering plants spread rapidly, in part because their fruits so effectively disperse seeds. Today, flowering plants occupy virtually all areas of the planet, with about 240,000 species known. Many flowers and pollinators coevolved—that is, they influenced each other’s traits during the process of evolution. For example, any population of flowers displays a range of color, fragrance, size, and shape—hereditary traits that can be passed from one generation to the next. Certain traits or combinations of traits appeal more to pollinators, so pollinators are more likely to visit these attractive plants. The appealing plants have a greater chance of being pollinated than others and, thus, are likely to produce more seeds. The seeds develop into plants that display the inherited appealing traits. Similarly, in a population of pollinators, there are variations in hereditary traits, such as wing size and shape, length and shape of tongue, ability to detect fragrance, and so on. For example, pollinators whose bodies are small enough to reach inside certain flowers gather pollen and nectar more efficiently than larger-sized members of their species. These efficient, well-fed pollinators have more energy for reproduction. Their offspring inherit the traits that enable them to forage successfully in flowers, and from generation to generation, these traits are preserved. The pollinator preference seen today for certain flower colors, fragrances, and shapes often represents hundreds of thousands of years of coevolution. Coevolution often results in exquisite adaptations between flower and pollinator. 
These adaptations can minimize competition for nectar and pollen among pollinators and also can minimize competition among flowers for pollinators. Comet orchids, for example, have narrow flowers almost a foot and a half long. These flowers are pollinated only by a species of hawk moth that has a narrow tongue just the length of the flowers. The flower shape prevents other pollinators from consuming the nectar, guarantees the moths a meal, and ensures the likelihood of pollination and fertilization. Most flowers and pollinators, however, are not as precisely matched to each other, but adaptation still plays a significant role in their interactions. For example, hummingbirds are particularly attracted to the color red. Hummingbird-pollinated flowers typically are red, and they often are narrow, an adaptation that suits the long tongues of hummingbirds. Bats are large pollinators that require relatively more energy than other pollinators. They visit big flowers like those of saguaro cactus, which supply plenty of nectar or pollen. Bats avoid little flowers that do not offer enough reward. Other examples of coevolution are seen in the bromeliads and orchids that grow in dark forests. These plants often have bright red, purple, or white sepals or petals, which make them visible to pollinators. Night-flying pollinators, such as moths and bats, detect white flowers most easily, and flowers that bloom at sunset, such as yucca, datura, and cereus, usually are white. The often delightful and varied fragrances of flowers also reveal the hand of coevolution. In some cases, insects detect fragrance before color. They follow faint aromas to flowers that are too far away to be seen, recognizing petal shape and color only when they are very close to the flower. Some night-blooming flowers emit sweet fragrances that attract night-flying moths. At the other extreme, carrion flowers, which are pollinated by flies, give off the odor of rotting meat to attract their pollinators. 
Flowers and their pollinators also coevolved to influence each other’s life cycles. Among species that flower in response to a dark period, some measure the critical night length so accurately that all the plants of a species in the region flower within the same week or two. This enables related plants to interbreed, and provides pollinators with enough pollen and nectar to live on so that they too can reproduce. The process of coevolution also has resulted in synchronization of floral and insect life cycles. Sometimes flowering occurs the week that insect pollinators hatch or emerge from dormancy, or bird pollinators return from winter migration, so that they feed on and pollinate the flowers. Flowering also is timed so that fruits and seeds are produced when animals are present to feed on the fruits and disperse the seeds. VI FLOWERS AND EXTINCTION Like the amphibians, reptiles, insects, birds, and mammals that are experiencing alarming extinction rates, a number of wildflower species also are endangered. The greatest threat lies in the furious pace at which land is cleared for new houses, industries, and shopping malls to accommodate rapid population growth. Such clearings are making the meadow, forest, and wetland homes of wildflowers ever more scarce. Among the flowers so endangered is the rosy periwinkle of Madagascar, a plant whose compounds have greatly reduced the death rates from childhood leukemia and Hodgkin’s disease. Flowering plants, many with other medicinal properties, also are threatened by global warming from increased combustion of fossil fuels; increased ultraviolet light from ozone layer breakdown; and acid rain from industrial emissions. Flowering plants native to a certain region also may be threatened by introduced species. Yellow toadflax, for example, a garden plant brought to the United States and Canada from Europe, has become a notorious weed, spreading to many habitats and preventing the growth of native species. 
In some cases, unusual wildflowers such as orchids are placed at risk when they are collected extensively to be sold. Many of the threats that endanger flowering plants also place their pollinators at risk. When a species of flower or pollinator is threatened, the coevolution of pollinators and flowers may prove to be disadvantageous. If a flower species dies out, its pollinators will lack food and may also die out, and the predators that depend on the pollinators also become threatened. In cases where pollinators are adapted to only one or a few types of flowers, the loss of those plants can disrupt an entire ecosystem. Likewise, if pollinators are damaged by ecological changes, plants that depend on them will not be pollinated, seeds will not be formed, and new generations of plants cannot grow. The fruits that these flowers produce may become scarce, affecting the food supply of humans and other animals that depend on them. Worldwide, more than 300 species of flowering plants are endangered, or at immediate risk of extinction. Another two dozen or so are considered threatened, or likely to become extinct in the near future. Of these species, fewer than 50 were the focus of preservation plans in the late 1990s. Various regional, national, and international organizations have marshaled their resources in response to the critical need for protecting flowering plants and their habitats. In the United States, native plant societies work to conserve regional plants in every state. The United States Fish and Wildlife Service's Endangered Species Program protects habitats for threatened and endangered species throughout the United States, as do the Canadian Wildlife Service in Canada, the Ministry for Social Development in Mexico, and similar agencies in other countries. 
At the international level, the International Plant Conservation Programme at Cambridge, England, collects information and provides education worldwide on plant species at risk, and the United Nations Environment Programme supports a variety of efforts that address the worldwide crisis of endangered species. Pollination I INTRODUCTION Pollination, transfer of pollen grains from the male structure of a plant to the female structure of a plant. The pollen grains contain cells that will develop into male sex cells, or sperm. The female structure of a plant contains the female sex cells, or eggs. Pollination prepares the plant for fertilization, the union of the male and female sex cells. Virtually all grains, fruits, vegetables, wildflowers, and trees must be pollinated and fertilized to produce seed or fruit, and pollination is vital for the production of critically important agricultural crops, including corn, wheat, rice, apples, oranges, tomatoes, and squash. Pollen grains are microscopic in size, ranging in diameter from less than 0.01 mm (about 0.0004 in) to a little over 0.5 mm (about 0.02 in). Millions of pollen grains waft along in the clouds of pollen seen in the spring, often causing the sneezing and watery eyes associated with pollen allergies. The outer covering of pollen grains, called the pollen wall, may be intricately sculpted with designs that in some instances can be used to distinguish between plant species. A chemical in the wall called sporopollenin makes the wall resistant to decay. Although the single cell inside the wall is viable, or living, for only a few weeks, the distinctive patterns of the pollen wall can remain intact for thousands or millions of years, enabling scientists to identify the plant species that produced the pollen. Scientists track long-term climate changes by studying layers of pollen deposited in lake beds. 
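The size figures above can be checked against the standard conversion of 25.4 millimetres per inch. A minimal sketch (the values simply recompute the conversions quoted in the text):

```python
# Convert pollen-grain diameters from millimetres to inches.
# 25.4 mm per inch is the exact definition of the inch.
MM_PER_INCH = 25.4

def mm_to_inches(mm):
    """Return a length in inches given millimetres."""
    return mm / MM_PER_INCH

# The two endpoints of the pollen size range given in the text.
for mm in (0.01, 0.5):
    print(f"{mm} mm = {mm_to_inches(mm):.4f} in")
```

Running this confirms that 0.01 mm is about 0.0004 in and 0.5 mm is about 0.02 in.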
In a dry climate, for example, desert species such as tanglehead grass and vine mesquite grass thrive, and their pollen drifts over lakes, settling in a layer at the bottom. If a climate change brings increased moisture, desert species are gradually replaced by forest species such as pines and spruce, whose pollen forms a layer on top of the grass pollen. Scientists take samples of mud from the lake bottom and analyze the pollen in the mud to identify plant species. Comparing the identified species with their known climate requirements, scientists can trace climate shifts over the millennia. II HOW POLLINATION WORKS Most plants have specialized reproductive structures—cones or flowers—where the gametes, or sex cells, are produced. Cones are the reproductive structures of spruce, pine, fir, cycads, and certain other gymnosperms and are of two types: male and female. On conifers such as fir, spruce, and pine trees, the male cones are produced in the spring. The cones form in clusters of 10 to 50 on the tips of the lower branches. Each cone typically measures 1 to 4 cm (0.4 to 1.5 in) and consists of numerous soft, green, spirally attached scales shaped like a bud. Thousands of pollen grains are produced on the lower surface of each scale, and are released to the wind when they mature in late spring. The male cones dry out and shrivel up after their pollen is shed. The female cones typically develop on the upper branches of the same tree that produces the male cones. They form as individual cones or in groups of two or three. A female cone is two to five times longer than the male cone, and starts out with green, spirally attached scales. The scales open the first spring to take in the drifting pollen. After pollination, the scales close for one to two years to protect the developing seed. During this time the scales gradually become brown and stiff, forming the familiar cones typically associated with conifers. 
When the seeds are mature, the scales of certain species separate and the mature seeds are dispersed by the wind. In other species, small animals such as gray jays, chipmunks, or squirrels break the scales apart before swallowing some of the enclosed seeds. They cache, or hide, other seeds in a variety of locations, which results in effective seed dispersal, and eventually germination, since the animals do not always return for the stored seeds. Pollination occurs in cone-bearing plants when the wind blows pollen from the male to the female cone. Some pollen grains are trapped by the pollen drop, a sticky substance produced by the ovule, the egg-containing structure that becomes the seed. As the pollen drop dries, it draws a pollen grain through a tiny hole into the ovule, and the events leading to fertilization begin. The pollen grain germinates and produces a short tube, a pollen tube, which grows through the tissues of the ovule and contacts the egg. A sperm cell moves through the tube to the egg, where the two unite in fertilization. The fertilized egg develops into an embryonic plant, and at the same time, tissues in the ovule undergo complex changes. The inner tissues become food for the embryo, and the outer wall of the ovule hardens into a seedcoat. The ovule thus becomes a seed—a tough structure containing an embryonic plant and its food supply. The seed remains tucked in the closed cone scale until it matures and the cone scales open. Each scale of a cone bears two seeds on its upper surface. In plants with flowers, such as roses, maple trees, and corn, pollen is produced within the male parts of the plant, called the stamens, and the female sex cells, or eggs, are produced within the female part of the plant, the pistil. With the help of wind, water, insects, birds, or small mammals, pollen is transferred from the stamens to the stigma, a sticky surface on the pistil. Pollination may be followed by fertilization. 
The pollen on the stigma germinates to produce a long pollen tube, which grows down through the style, or neck of the pistil, and into the ovary, located at the base of the pistil. Depending on the species, one, several, or many ovules are embedded deep within the ovary. Each ovule contains one egg. Fertilization occurs when a sperm cell carried by the pollen tube unites with the egg. As the fertilized egg begins to develop into an embryonic plant, it produces a variety of hormones to stimulate the outer wall of the ovule to harden into a seedcoat, and tissues of the ovary enlarge into a fruit. The fruit may be a fleshy fruit, such as an apple, orange, tomato, or squash, or a dry fruit, such as an almond, walnut, wheat grain, or rice grain. Unlike conifer seeds, which lie exposed on the cone scales, the seeds of flowering plants are contained within a ripened ovary, a fleshy or dry fruit. III POLLINATION METHODS In order for pollination to be successful, pollen must be transferred between plants of the same species—for example, a rose flower must always receive rose pollen and a pine tree must always receive pine pollen. Plants typically rely on one of two methods of pollination: cross-pollination or self-pollination, but some species are capable of both. Most plants are designed for cross-pollination, in which pollen is transferred between different plants of the same species. Cross-pollination ensures that beneficial genes are transmitted relatively rapidly to succeeding generations. If a beneficial gene occurs in just one plant, that plant’s pollen or eggs can produce seeds that develop into numerous offspring carrying the beneficial gene. The offspring, through cross-pollination, transmit the gene to even more plants in the next generation. Cross-pollination introduces genetic diversity into the population at a rate that enables the species to cope with a changing environment. 
New genes ensure that at least some individuals can endure new diseases, climate changes, or new predators, enabling the species as a whole to survive and reproduce. Plant species that use cross-pollination have special features that enhance this method. For instance, some plants have pollen grains that are lightweight and dry so that they are easily swept up by the wind and carried for long distances to other plants. Other plants have pollen and eggs that mature at different times, preventing the possibility of self-pollination. In self-pollination, pollen is transferred from the stamens to the pistil within one flower. The resulting seeds and the plants they produce inherit the genetic information of only one parent, and the new plants are genetically identical to the parent. The advantage of self-pollination is the assurance of seed production when no pollinators, such as bees or birds, are present. It also sets the stage for rapid propagation—weeds typically self-pollinate, and they can produce an entire population from a single plant. The primary disadvantage of self-pollination is that it results in genetic uniformity of the population, which makes the population vulnerable to extinction by, for example, a single devastating disease to which all the genetically identical plants are equally susceptible. Another disadvantage is that beneficial genes do not spread as rapidly as in cross-pollination, because one plant with a beneficial gene can transmit it only to its own offspring and not to other plants. Self-pollination evolved later than cross-pollination, and may have developed as a survival mechanism in harsh environments where pollinators were scarce. IV POLLEN TRANSFER Unlike animals, plants are literally rooted to the spot, and so cannot move to combine sex cells from different plants; for this reason, species have evolved effective strategies for accomplishing cross-pollination. 
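The contrast described above, a beneficial gene reaching many plants within a few generations under cross-pollination but staying in a single lineage under self-pollination, can be illustrated with a deliberately simplified model. Every number and rule here (fixed population size, one initial carrier, each carrier recruiting two new carriers per generation) is an assumption for illustration, not data from the text:

```python
# Toy model: number of plants carrying a beneficial gene after g generations.
# Assumptions (illustrative only): a population of fixed size with one initial
# carrier; under cross-pollination each carrier passes the gene to two other
# plants per generation; under self-pollination offspring simply replace the
# parent, so the gene never leaves its original lineage.

def carriers_after(generations, cross_pollinating, pop_size=1000):
    carriers = 1
    for _ in range(generations):
        if cross_pollinating:
            # Each carrier recruits two more carriers, capped at the population.
            carriers = min(pop_size, carriers * 3)
        # Self-pollination: the count stays at one lineage's worth of plants.
    return carriers

print(carriers_after(6, cross_pollinating=True))   # spreads through the population
print(carriers_after(6, cross_pollinating=False))  # confined to one lineage
```

Under these toy rules the cross-pollinating gene reaches hundreds of plants in six generations while the self-pollinating gene remains in one, which is the qualitative point the passage makes about the two methods.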
Some plants simply allow their pollen to be carried on the wind, as is the case with wheat, rice, corn, and other grasses, and pines, firs, cedars, and other conifers. This method works well if the individual plants are growing close together. To ensure success, huge amounts of pollen must be produced, most of which never reaches another plant. Most plants, however, do not rely on the wind. These plants employ pollinators—bees, butterflies, and other insects, as well as birds, bats, and mice—to transport pollen between sometimes widely scattered plants. While this strategy enables plants to expend less energy making large amounts of pollen, they must still use energy to produce incentives for their pollinators. For instance, birds and insects may be attracted to a plant by its tasty food in the form of nectar, a sugary, energy-rich fluid that bees eat and also use for making honey. Bees and other pollinators may be attracted by a plant’s pollen, a nutritious food that is high in protein and provides almost every known vitamin, about 25 trace minerals, and 22 amino acids. As a pollinator enters a flower or probes it for nectar, typically located deep in the flower, or grazes on the pollen itself, the sticky pollen attaches to parts of its body. When the pollinator visits the next flower in search of more nectar or pollen, it brushes against the stigma and pollen grains rub off onto the stigma. In this way, pollinators inadvertently transfer pollen from flower to flower. Some flowers supply wax that bees use for construction material in their hives. In the Amazonian rain forest, the males of certain bee species travel long distances to visit orchid flowers, from which they collect oil used to make a powerful chemical, called a pheromone, used to attract female bees for mating. The bees carry pollen between flowers as they collect the oils from the orchids. 
Flowers are designed to attract pollinators, and the unique shape, color, and even scent of a flower appeals to specific pollinators. Birds see the color red particularly well and are prone to pollinating red flowers. The long red floral tubes of certain flowers are designed to attract hummingbirds but discourage small insects that might take the nectar without transferring pollen. Flowers that are pollinated by bats are usually large, light in color, heavily scented, and open at night, when bats are most active. Many of the brighter pink, orange, and yellow flowers are marked by patterns on the petals that can be seen only with ultraviolet light. These patterns act as maps to the nectar glands typically located at the base of the flower. Bees are able to see ultraviolet light and use the colored patterns to find nectar efficiently. These interactions between plants and animals are mutualistic, since both species benefit from the interaction. Undoubtedly plants have evolved flower structures that successfully attract specific pollinators. And in some cases the pollinators may have adapted their behaviors to take advantage of the resources offered by specific kinds of flowers. V CURRENT TOPICS Scientists control pollination by transferring pollen by hand from stamens to stigmas. Using these artificial pollination techniques, scientists study how traits are inherited in plants, and they also breed plants with selected traits—roses with larger blooms, for example, or apple trees that bear more fruit. Scientists also use artificial pollination to investigate temperature and moisture requirements for pollination in different species, the biochemistry of pollen germination, and other details of the pollination process. Some farmers are concerned about the decline in numbers of pollinating insects, especially honey bees. In recent years many fruit growers have found their trees have little or no fruit, thought to be the result of too few honey bee pollinators. 
Wild populations of honey bees are nearly extinct in some areas of the northern United States and southern Canada. Domestic honey bees—those kept in hives by beekeepers—have declined by as much as 80 percent since the late 1980s. The decline of wild and domestic honey bees is due largely to mite infestations in their hives—the mites eat the young, developing bees. Bees and other insect pollinators are also seriously harmed by chemical toxins in their environment. These toxins, such as the insecticides Diazinon and Malathion, either kill the pollinator directly or harm them by damaging the environment in which they live. Fertilization I INTRODUCTION Fertilization, the process in which gametes—a male's sperm and a female's egg or ovum— fuse together, producing a single cell that develops into an adult organism. Fertilization occurs in both plants and animals that reproduce sexually—that is, when a male and a female are needed to produce an offspring (see Reproduction). This article focuses on animal fertilization. For information on plant fertilization see the articles on Seed, Pollination, and Plant Propagation. Fertilization is a precise period in the reproductive process. It begins when the sperm contacts the outer surface of the egg and it ends when the sperm's nucleus fuses with the egg's nucleus. Fertilization is not instantaneous—it may take 30 minutes in sea urchins and up to several hours in mammals. After nuclear fusion, the fertilized egg is called a zygote. When the zygote divides to a two-cell stage, it is called an embryo. Fertilization is necessary to produce a single cell that contains a full complement of genes. When a cell undergoes meiosis, gametes are formed—a sperm cell or an egg cell. Each gamete contains only half the genetic material of the original cell. During sperm and egg fusion in fertilization, the full amount of genetic material is restored: half contributed by the male parent and half contributed by the female. 
In humans, for example, there are 46 chromosomes (carriers of genetic material) in each human body cell—except in the sperm and egg, which each have 23 chromosomes. As soon as fertilization is complete, the zygote that is formed has a complete set of 46 chromosomes containing genetic information from both parents. The fertilization process also activates cell division. Without activation from the sperm, an egg typically remains dormant and soon dies. In general, it is fertilization that sets the egg on an irreversible pathway of cell division and embryo development. II THE FERTILIZATION PROCESS Fertilization is complete when the sperm's nucleus fuses with the egg's nucleus. Researchers have identified several specific steps in this process. The first step is the sperm approaching the egg. In some organisms, sperm just swim randomly toward the egg (or eggs). In others, the eggs secrete a chemical substance that attracts the sperm toward the eggs. For example, in one species of sea urchin (an aquatic animal often used in fertilization research), the sperm swim toward a small protein molecule in the egg's protective outer layer, or surface coat. In humans there is evidence that sperm are attracted to the fluid surrounding the egg. The second step of fertilization is the attachment of several sperm to the egg's surface coat. All animal eggs have surface coats, which are variously named the vitelline envelope (in abalone and frogs) or the zona pellucida (in mammals). This attachment step may last for just a few seconds or for several minutes. The third step is a complex process in which the sperm penetrate the egg’s surface coat. The head, or front end, of the sperm of almost all animals except fish contains an acrosome, a membrane-enclosed compartment. The acrosome releases proteins that dissolve the surface coat of an egg of the same species. 
In mammals, a molecule of the egg’s surface coat triggers the sperm's acrosome to explosively release its contents onto the surface coat, where the proteins dissolve a tiny hole. A single sperm is then able to make a slitlike channel in the surface coat, through which it swims to reach the egg's cell membrane. In fish eggs that do not have acrosomes, specialized channels, called micropyles, enable a single sperm to swim down through the egg's surface coat to reach the cell membrane. When more than one sperm enters the egg, the resulting zygote typically develops abnormally. The next step in fertilization—the fusion of sperm and egg cell membranes—is poorly understood. When the membranes fuse, a single sperm and the egg become one cell. This process takes only seconds, and it is directly observable by researchers. Specific proteins on the surface of the sperm appear to induce this fusion process, but the exact mechanism is not yet known. After fusion of the cell membranes the sperm is motionless. The egg extends cytoplasmic fingers to surround the sperm and pull it into the egg's cytoplasm. Filaments called microtubules begin to grow from the inner surface of the egg cell's membrane inward toward the cell's center, resembling spokes of a bicycle wheel growing from the rim inward toward the wheel's hub. As the microtubules grow, the sperm and egg nuclei are pushed toward the egg's center. Finally, in a process that is also poorly understood, the egg and sperm nuclear envelopes (outer membranes) fuse, permitting the chromosomes from the egg and sperm to mix within a common space. A zygote is formed, and development of an embryo begins. III TYPES OF FERTILIZATION Two types of fertilization occur in animals: external and internal. In external fertilization the egg and sperm come together outside of the parents' bodies. Animals such as sea urchins, starfish, clams, mussels, frogs, corals, and many fish reproduce in this way. 
The gametes are released, or spawned, by the adults into the ocean or a pond. Fertilization takes place in this watery environment, where embryos start to develop. A disadvantage to external fertilization is that the meeting of egg and sperm is somewhat left to chance. Swift water currents, water temperature changes, predators, and a variety of other interruptions can prevent fertilization from occurring. A number of adaptations help ensure that offspring will successfully be produced. The most important adaptation is the production of literally millions of sperm and eggs—if even a tiny fraction of these gametes survive to become zygotes, many offspring will still result. Males and females also use behavioral cues, chemical signals, or other stimuli to coordinate spawning so that sperm and eggs appear in the water at the same time and in the same place. In animals that use external fertilization, there is no parental care for the developing embryos. Instead, the eggs of these animals contain a food supply in the form of a yolk that nourishes the embryos until they hatch and are able to feed on their own. Internal fertilization takes place inside the female's body. The male typically has a penis or other structure that delivers sperm into the female's reproductive tract. All mammals, reptiles, and birds as well as some invertebrates, including snails, worms, and insects, use internal fertilization. Internal fertilization does not necessarily require that the developing embryo remain inside the female's body. In honey bees, for example, the queen bee deposits the fertilized eggs into special compartments in the honeycomb. These compartments are supplied with food resources for the young bees to use as they develop. Various adaptations have evolved in the reproductive process of internal-fertilizing organisms. 
Because the sperm and egg are always protected inside the male's and female's bodies—and are deliberately placed into close contact during mating—relatively few sperm and eggs are produced. Many animals in this group provide extensive parental care of their young. In most mammals, including humans, two specialized structures in the female's body further help to protect and nourish the developing embryo. One is the uterus, which is the cushioned chamber where the embryo matures before birth; the other is the placenta, which is a blood-rich organ that supplies nutrients to the embryo and also removes its wastes (see Pregnancy and Childbirth). IV RESEARCH ISSUES Although reproduction is well studied in many kinds of organisms, fertilization is one of the least understood of all fundamental biological processes. Our knowledge of this fascinating topic has been vastly improved by many recent discoveries. For example, researchers have discovered how to clone the genes that direct the fertilization process. Yet many important questions still remain. Scientists are actively trying to determine issues such as how sperm and egg cells recognize that they are from the same species; what molecules sperm use to attach to egg coats; and how signals on the sperm's surface are relayed inside to trigger the acrosome reaction. With continued study, answers to these questions will one day be known. Q12: (i) (ii) Research companies developing compressed natural gas (CNG) and methanol (most of which is made from natural gas today but can be made from garbage, trees, or seaweed) have been given government subsidies to get these efforts off the ground. But with oil prices still low, consumers have not had much incentive to accept the inconveniences of finding supply stations, more time-consuming fueling processes, reduced power output, and reduced driving range. 
Currently, all the alternatives to gas have drawbacks in terms of cost, ease of transport, and efficiency that prevent their widespread adoption. But that could change rapidly if another oil crisis like that of the 1970s develops and if research continues. Any fuel combustion contributes to greenhouse gas emissions, however, and automakers anticipate that stricter energy-consumption standards are probable in the future. In the United States, onerous gasoline or energy taxes are less likely than a sudden tightening of CAFE standards, which have not changed for cars since 1994. Such restrictions could, for example, put an end to the current boom in sales of large sport-utility vehicles that get relatively poor gas mileage. Therefore, long-term research focuses on other means of propulsion, including cars powered by electricity. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. (iii) Polyvinyl chloride (PVC) is prepared from the organic compound vinyl chloride (CH2=CHCl). PVC is the most widely used of the amorphous plastics. PVC is lightweight, durable, and waterproof. Chlorine atoms bonded to the carbon backbone of its molecules give PVC its hard and flame-resistant properties. In its rigid form, PVC is weather-resistant and is extruded into pipe, house siding, and gutters. Rigid PVC is also blow molded into clear bottles and is used to form other consumer products, including compact discs and computer casings. PVC can be softened with certain chemicals. This softened form of PVC is used to make shrink-wrap, food packaging, rainwear, shoe soles, shampoo containers, floor tile, gloves, upholstery, and other products. Most softened PVC plastic products are manufactured by extrusion, injection molding, or casting. (iv) (v) Antibiotics I INTRODUCTION Antibiotics (Greek anti, "against"; bios, "life") are chemical compounds used to kill or inhibit the growth of infectious organisms. 
Originally the term antibiotic referred only to organic compounds, produced by bacteria or molds, that are toxic to other microorganisms. The term is now used loosely to include synthetic and semisynthetic organic compounds. Antibiotic refers generally to antibacterials; however, because the term is loosely defined, it is preferable to specify compounds as being antimalarials, antivirals, or antiprotozoals. All antibiotics share the property of selective toxicity: They are more toxic to an invading organism than they are to an animal or human host. Penicillin is the most well-known antibiotic and has been used to fight many infectious diseases, including syphilis, gonorrhea, tetanus, and scarlet fever. Another antibiotic, streptomycin, has been used to combat tuberculosis. II HISTORY Although the mechanisms of antibiotic action were not scientifically understood until the late 20th century, the principle of using organic compounds to fight infection has been known since ancient times. Crude plant extracts were used medicinally for centuries, and there is anecdotal evidence for the use of cheese molds for topical treatment of infection. The first observation of what would now be called an antibiotic effect was made in the 19th century by French chemist Louis Pasteur, who discovered that certain saprophytic bacteria can kill anthrax bacilli. In the first decade of the 20th century, German physician and chemist Paul Ehrlich began experimenting with the synthesis of organic compounds that would selectively attack an infecting organism without harming the host organism. His experiments led to the development, in 1909, of salvarsan, a synthetic compound containing arsenic, which exhibited selective action against spirochetes, the bacteria that cause syphilis. Salvarsan remained the only effective treatment for syphilis until the purification of penicillin in the 1940s. 
In the 1920s, British bacteriologist Sir Alexander Fleming, who later discovered penicillin, found a substance called lysozyme in many bodily secretions, such as tears and sweat, and in certain other plant and animal substances. Lysozyme has some antimicrobial activity, but it is not clinically useful. Penicillin, the archetype of antibiotics, is a derivative of the mold Penicillium notatum. Penicillin was discovered accidentally in 1928 by Fleming, who showed its effectiveness in laboratory cultures against many disease-producing bacteria. This discovery marked the beginning of the development of antibacterial compounds produced by living organisms. Penicillin in its original form could not be given by mouth because it was destroyed in the digestive tract and the preparations had too many impurities for injection. No progress was made until the outbreak of World War II stimulated renewed research, and Australian pathologist Sir Howard Florey and German-British biochemist Ernst Chain purified enough of the drug to show that it would protect mice from infection. Florey and Chain then used the purified penicillin on a human patient who had staphylococcal and streptococcal septicemia with multiple abscesses and osteomyelitis. The patient, gravely ill and near death, was given intravenous injections of a partly purified preparation of penicillin every three hours. Because so little was available, the patient's urine was collected each day; the penicillin was extracted from the urine and used again. After five days the patient's condition improved vastly. However, with each passage through the body, some penicillin was lost. Eventually the supply ran out and the patient died. The first antibiotic to be used successfully in the treatment of human disease was tyrothricin, isolated from certain soil bacteria by American bacteriologist Rene Dubos in 1939. This substance is too toxic for general use, but it is employed in the external treatment of certain infections. 
Other antibiotics produced by a group of soil bacteria called actinomycetes have proved more successful. One of these, streptomycin, discovered in 1944 by American biologist Selman Waksman and his associates, was, in its time, the major treatment for tuberculosis. Since antibiotics came into general use in the 1950s, they have transformed the patterns of disease and death. Many diseases that once headed the mortality tables—such as tuberculosis, pneumonia, and septicemia—now hold lower positions. Surgical procedures, too, have been improved enormously, because lengthy and complex operations can now be carried out without a prohibitively high risk of infection. Chemotherapy has also been used in the treatment or prevention of protozoal and fungal diseases, especially malaria, a major killer in economically developing nations (see Third World). Slow progress is being made in the chemotherapeutic treatment of viral diseases. New drugs have been developed and used to treat shingles (see herpes) and chicken pox. There is also a continuing effort to find a cure for acquired immunodeficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV). III CLASSIFICATION Antibiotics can be classified in several ways. The most common method classifies them according to their action against the infecting organism. Some antibiotics attack the cell wall; some disrupt the cell membrane; and the majority inhibit the synthesis of nucleic acids and proteins, the polymers that make up the bacterial cell. Another method classifies antibiotics according to which bacterial strains they affect: staphylococcus, streptococcus, or Escherichia coli, for example. Antibiotics are also classified on the basis of chemical structure, as penicillins, cephalosporins, aminoglycosides, tetracyclines, macrolides, or sulfonamides, among others. 
A Mechanisms of Action Most antibiotics act by selectively interfering with the synthesis of one of the large-molecule constituents of the cell—the cell wall or proteins or nucleic acids. Some, however, act by disrupting the cell membrane (see Cell Death and Growth Suppression below). Some important and clinically useful drugs interfere with the synthesis of peptidoglycan, the most important component of the cell wall. These drugs include the β-lactam antibiotics, which are classified according to chemical structure into penicillins, cephalosporins, and carbapenems. All these antibiotics contain a β-lactam ring as a critical part of their chemical structure, and they inhibit synthesis of peptidoglycan, an essential part of the cell wall. They do not interfere with the synthesis of other intracellular components. The continuing buildup of materials inside the cell exerts ever greater pressure on the membrane, which is no longer properly supported by peptidoglycan. The membrane gives way, the cell contents leak out, and the bacterium dies. These antibiotics do not affect human cells because human cells do not have cell walls. Many antibiotics operate by inhibiting the synthesis of various intracellular bacterial molecules, including DNA, RNA, ribosomes, and proteins. The synthetic sulfonamides are among the antibiotics that indirectly interfere with nucleic acid synthesis. Nucleic acid synthesis can also be stopped by antibiotics that inhibit the enzymes that assemble these polymers—for example, DNA polymerase or RNA polymerase. Examples of such antibiotics are actinomycin, rifamycin, and rifampicin, the last two being particularly valuable in the treatment of tuberculosis. The quinolone antibiotics inhibit synthesis of an enzyme responsible for the coiling and uncoiling of the chromosome, a process necessary for DNA replication and for transcription to messenger RNA. 
Some antibacterials affect the assembly of messenger RNA, thus causing its genetic message to be garbled. When these faulty messages are translated, the protein products are nonfunctional. There are also other mechanisms: The tetracyclines compete with incoming transfer-RNA molecules; the aminoglycosides cause the genetic message to be misread and a defective protein to be produced; chloramphenicol prevents the linking of amino acids to the growing protein; and puromycin causes the protein chain to terminate prematurely, releasing an incomplete protein. B Range of Effectiveness In some species of bacteria the cell wall consists primarily of a thick layer of peptidoglycan. Other species have a much thinner layer of peptidoglycan and an outer as well as an inner membrane. When bacteria are subjected to Gram's stain, these differences in structure affect the differential staining of the bacteria with a dye called gentian violet. The differences in staining coloration (gram-positive bacteria appear purple and gram-negative bacteria appear colorless or reddish, depending on the process used) are the basis of the classification of bacteria into gram-positive (those with thick peptidoglycan) and gram-negative (those with thin peptidoglycan and an outer membrane), because the staining properties correlate with many other bacterial properties. Antibacterials can be further subdivided into narrow-spectrum and broad-spectrum agents. The narrow-spectrum penicillins act against many gram-positive bacteria. Aminoglycosides, also narrow-spectrum, act against many gram-negative as well as some gram-positive bacteria. The tetracyclines and chloramphenicols are both broad-spectrum drugs because they are effective against both gram-positive and gram-negative bacteria. C Cell Death and Growth Suppression Antibiotics may also be classed as bactericidal (killing bacteria) or bacteriostatic (stopping bacterial growth and multiplication). 
Bacteriostatic drugs are nonetheless effective because bacteria that are prevented from growing will die off after a time or be killed by the defense mechanisms of the host. The tetracyclines and the sulfonamides are among the bacteriostatic antibiotics. Antibiotics that damage the cell membrane cause the cell's metabolites to leak out, thus killing the organism. Such compounds, including penicillins and cephalosporins, are therefore classed as bactericidal. IV TYPES OF ANTIBIOTICS Following is a list of some of the more common antibiotics and examples of some of their clinical uses. This section does not include all antibiotics nor all of their clinical applications. A Penicillins Penicillins are bactericidal, inhibiting formation of the cell wall. There are four types of penicillins: the narrow-spectrum penicillin-G types, ampicillin and its relatives, the penicillinase-resistants, and the extended-spectrum penicillins that are active against pseudomonas. Penicillin-G types are effective against gram-positive strains of streptococci, staphylococci, and some gram-negative bacteria such as meningococcus. Penicillin-G is used to treat such diseases as syphilis, gonorrhea, meningitis, anthrax, and yaws. The related penicillin V has a similar range of action but is less effective. Ampicillin and amoxicillin have a range of effectiveness similar to that of penicillin-G, with a slightly broader spectrum, including some gram-negative bacteria. The penicillinase-resistants are penicillins that combat bacteria that have developed resistance to penicillin-G. The antipseudomonal penicillins are used against infections caused by gram-negative Pseudomonas bacteria, a particular problem in hospitals. They may be administered as a prophylactic in patients with compromised immune systems, who are at risk from gram-negative infections. 
Side effects of the penicillins, while relatively rare, can include immediate and delayed allergic reactions—specifically, skin rashes, fever, and anaphylactic shock, which can be fatal. B Cephalosporins Like the penicillins, cephalosporins have a β-lactam ring structure that interferes with synthesis of the bacterial cell wall and so are bactericidal. Cephalosporins are more effective than penicillin against gram-negative bacilli and equally effective against gram-positive cocci. Cephalosporins may be used to treat strains of meningitis and as a prophylactic for orthopedic, abdominal, and pelvic surgery. Rare hypersensitive reactions from the cephalosporins include skin rash and, less frequently, anaphylactic shock. C Aminoglycosides Streptomycin is the oldest of the aminoglycosides. The aminoglycosides inhibit bacterial protein synthesis in many gram-negative and some gram-positive organisms. They are sometimes used in combination with penicillin. The members of this group tend to be more toxic than other antibiotics. Rare adverse effects associated with prolonged use of aminoglycosides include damage to the vestibular region of the ear, hearing loss, and kidney damage. D Tetracyclines Tetracyclines are bacteriostatic, inhibiting bacterial protein synthesis. They are broad-spectrum antibiotics effective against strains of streptococci, gram-negative bacilli, rickettsiae (the bacteria that cause typhus), and spirochetes (the bacteria that cause syphilis). They are also used to treat urinary-tract infections and bronchitis. Because of their wide range of effectiveness, tetracyclines can sometimes upset the balance of resident bacteria that are normally held in check by the body's immune system, leading to secondary infections in the gastrointestinal tract and vagina, for example. Tetracycline use is now limited because of the increase of resistant bacterial strains. 
E Macrolides The macrolides are bacteriostatic, binding with bacterial ribosomes to inhibit protein synthesis. Erythromycin, one of the macrolides, is effective against gram-positive cocci and is often used as a substitute for penicillin against streptococcal and pneumococcal infections. Other uses for macrolides include diphtheria and bacteremia. Side effects may include nausea, vomiting, and diarrhea; infrequently, there may be temporary auditory impairment. F Sulfonamides The sulfonamides are synthetic, bacteriostatic, broad-spectrum antibiotics, effective against most gram-positive and many gram-negative bacteria. However, because many gram-negative bacteria have developed resistance to the sulfonamides, these antibiotics are now used only in very specific situations, including treatment of urinary-tract infection, against meningococcal strains, and as a prophylactic for rheumatic fever. Side effects may include disruption of the gastrointestinal tract and hypersensitivity. V PRODUCTION The production of a new antibiotic is lengthy and costly. First, the organism that makes the antibiotic must be identified and the antibiotic tested against a wide variety of bacterial species. Then the organism must be grown on a scale large enough to allow the purification and chemical analysis of the antibiotic and to demonstrate that it is unique. This is a complex procedure because there are several thousand compounds with antibiotic activity that have already been discovered, and these compounds are repeatedly rediscovered. After the antibiotic has been shown to be useful in the treatment of infections in animals, larger-scale preparation can be undertaken. Commercial development requires a high yield and an economical method of purification. Extensive research may be needed to increase the yield by selecting improved strains of the organism or by changing the growth medium. The organism is then grown in large steel vats, in submerged cultures with forced aeration. 
The naturally fermented product may be modified chemically to produce a semisynthetic antibiotic. After purification, the effect of the antibiotic on the normal function of host tissues and organs (its pharmacology), as well as its possible toxic actions (toxicology), must be tested on a large number of animals of several species. In addition, the effective forms of administration must be determined. Antibiotics may be topical, applied to the surface of the skin, eye, or ear in the form of ointments or creams. They may be oral, or given by mouth, and either allowed to dissolve in the mouth or swallowed, in which case they are absorbed into the bloodstream through the intestines. Antibiotics may also be parenteral, or injected intramuscularly, intravenously, or subcutaneously; antibiotics are administered parenterally when fast absorption is required. In the United States, once these steps have been completed, the manufacturer may file an Investigational New Drug Application with the Food and Drug Administration (FDA). If approved, the antibiotic can be tested on volunteers for toxicity, tolerance, absorption, and excretion. If subsequent tests on small numbers of patients are successful, the drug can be used on a larger group, usually in the hundreds. Finally a New Drug Application can be filed with the FDA, and, if this application is approved, the drug can be used generally in clinical medicine. These procedures, from the time the antibiotic is discovered in the laboratory until it undergoes clinical trial, usually extend over several years. VI RISKS AND LIMITATIONS The use of antibiotics is limited because bacteria have evolved defenses against certain antibiotics. One of the main mechanisms of defense is inactivation of the antibiotic. This is the usual defense against penicillins and chloramphenicol, among others. 
Another form of defense involves a mutation that changes the bacterial enzyme affected by the drug in such a way that the antibiotic can no longer inhibit it. This is the main mechanism of resistance to the compounds that inhibit protein synthesis, such as the tetracyclines. All these forms of resistance are transmitted genetically by the bacterium to its progeny. Genes that carry resistance can also be transmitted from one bacterium to another by means of plasmids, small DNA molecules that exist apart from the bacterial chromosome and carry only a few genes, including the resistance gene. Some bacteria conjugate with others of the same species, forming temporary links during which the plasmids are passed from one to another. If two plasmids carrying resistance genes to different antibiotics are transferred to the same bacterium, their resistance genes can be assembled onto a single plasmid. The combined resistances can then be transmitted to another bacterium, where they may be combined with yet another type of resistance. In this way, plasmids are generated that carry resistance to several different classes of antibiotic. In addition, plasmids have evolved that can be transmitted from one species of bacteria to another, and these can transfer multiple antibiotic resistance between very dissimilar species of bacteria. The problem of resistance has been exacerbated by the use of antibiotics as prophylactics, intended to prevent infection before it occurs. Indiscriminate and inappropriate use of antibiotics for the treatment of the common cold and other common viral infections, against which they have no effect, removes antibiotic-sensitive bacteria and allows the development of antibiotic-resistant bacteria. Similarly, the use of antibiotics in poultry and livestock feed has promoted the spread of drug resistance and has led to the widespread contamination of meat and poultry by drug-resistant bacteria such as Salmonella. 
In the 1970s, tuberculosis seemed to have been nearly eradicated in the developed countries, although it was still prevalent in developing countries. Now its incidence is increasing, partly due to resistance of the tubercle bacillus to antibiotics. Some bacteria, particularly strains of staphylococci, are resistant to so many classes of antibiotics that the infections they cause are almost untreatable. When such a strain invades a surgical ward in a hospital, it is sometimes necessary to close the ward altogether for a time. Similarly, plasmodia, the causative organisms of malaria, have developed resistance to antibiotics, while, at the same time, the mosquitoes that carry plasmodia have become resistant to the insecticides that were once used to control them. Consequently, although malaria had been almost entirely eliminated, it is now again rampant in Africa, the Middle East, Southeast Asia, and parts of Latin America. Furthermore, the discovery of new antibiotics is now much less common than in the past. (vi) Ceramics I INTRODUCTION Ceramics (Greek keramos, "potter's clay"), originally the art of making pottery, now a general term for the science of manufacturing articles prepared from pliable, earthy materials that are made rigid by exposure to heat. Ceramic materials are nonmetallic, inorganic compounds—primarily compounds of oxygen, but also compounds of carbon, nitrogen, boron, and silicon. Ceramics includes the manufacture of earthenware, porcelain, bricks, and some kinds of tile and stoneware. Ceramic products are used not only for artistic objects and tableware, but also for industrial and technical items, such as sewer pipe and electrical insulators. Ceramic insulators have a wide range of electrical properties. 
The electrical properties of a recently discovered family of ceramics based on a copper-oxide mixture allow these ceramics to become superconductive, or to conduct electricity with no resistance, at temperatures much higher than those at which metals do (see Superconductivity). In space technology, ceramic materials are used to make components for space vehicles. The rest of this article will deal only with ceramic products that have industrial or technical applications. Such products are known as industrial ceramics. The term industrial ceramics also refers to the science and technology of developing and manufacturing such products. II PROPERTIES Ceramics possess chemical, mechanical, physical, thermal, electrical, and magnetic properties that distinguish them from other materials, such as metals and plastics. Manufacturers customize the properties of ceramics by controlling the type and amount of the materials used to make them. A Chemical Properties Industrial ceramics are primarily oxides (compounds of oxygen), but some are carbides (compounds of carbon and heavy metals), nitrides (compounds of nitrogen), borides (compounds of boron), and silicides (compounds of silicon). For example, aluminum oxide can be the main ingredient of a ceramic—the important alumina ceramics contain 85 to 99 percent aluminum oxide. Primary components, such as the oxides, can also be chemically combined to form complex compounds that are the main ingredient of a ceramic. Examples of such complex compounds are barium titanate (BaTiO3) and zinc ferrite (ZnFe2O4). Another material that may be regarded as a ceramic is the element carbon (in the form of diamond or graphite). Ceramics are more resistant to corrosion than plastics and metals are. Ceramics generally do not react with most liquids, gases, alkalies, and acids. Most ceramics have very high melting points, and certain ceramics can be used up to temperatures approaching their melting points. 
Ceramics also remain stable over long time periods. B Mechanical Properties Ceramics are extremely strong, showing considerable stiffness under compression and bending. Bend strength, the amount of pressure required to bend a material, is often used to determine the strength of a ceramic. One of the strongest ceramics, zirconium dioxide, has a bend strength similar to that of steel. Zirconias (ZrO2) retain their strength up to temperatures of 900° C (1652° F), while silicon carbides and silicon nitrides retain their strength up to temperatures of 1400° C (2552° F). These silicon materials are used in high-temperature applications, such as to make parts for gas-turbine engines. Although ceramics are strong, temperature-resistant, and resilient, these materials are brittle and may break when dropped or when quickly heated and cooled. C Physical Properties Most industrial ceramics are compounds of oxygen, carbon, or nitrogen with lighter metals or semimetals. Thus, ceramics are less dense than most metals. As a result, a light ceramic part may be just as strong as a heavier metal part. Ceramics are also extremely hard, resisting wear and abrasion. The hardest known substance is diamond, followed by boron nitride in cubic-crystal form. Aluminum oxide and silicon carbide are also extremely hard materials and are often used to cut, grind, sand, and polish metals and other hard materials. D Thermal Properties Most ceramics have high melting points, meaning that even at high temperatures, these materials resist deformation and retain strength under pressure. Silicon carbide and silicon nitride, for example, withstand temperature changes better than most metals do. Large and sudden changes in temperature, however, can weaken ceramics. Materials that undergo less expansion or contraction per degree of temperature change can withstand sudden changes in temperature better than materials that undergo greater deformation. 
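The Fahrenheit equivalents quoted for these strength limits follow the standard conversion F = C × 9/5 + 32; a quick sketch (the function name is just illustrative) reproduces the figures in the text:

```python
def celsius_to_fahrenheit(c):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# Strength limits quoted in the text:
print(celsius_to_fahrenheit(900))   # zirconias: 1652.0
print(celsius_to_fahrenheit(1400))  # silicon carbides and nitrides: 2552.0
```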
Silicon carbide and silicon nitride expand and contract less during temperature changes than most other ceramics do. These materials are therefore often used to make parts, such as turbine rotors used in jet engines, that can withstand extreme variations in temperature. E Electrical Properties Certain ceramics conduct electricity. Chromium dioxide, for example, conducts electricity as well as most metals do. Other ceramics, such as silicon carbide, do not conduct electricity as well, but may still act as semiconductors. (A semiconductor is a material with greater electrical conductivity than an insulator has but with less than that of a good conductor.) Other types of ceramics, such as aluminum oxide, do not conduct electricity at all. These ceramics are used as insulators—devices used to separate elements in an electrical circuit to keep the current on the desired pathway. Certain ceramics, such as porcelain, act as insulators at lower temperatures but conduct electricity at higher temperatures. F Magnetic Properties Ceramics containing iron oxide (Fe2O3) can have magnetic properties similar to those of iron, nickel, and cobalt magnets (see Magnetism). These iron oxide-based ceramics are called ferrites. Other magnetic ceramics include oxides of nickel, manganese, and barium. Ceramic magnets, used in electric motors and electronic circuits, can be manufactured with high resistance to demagnetization. When electrons become highly aligned, as they do in ceramic magnets, they create a powerful magnetic field which is more difficult to disrupt (demagnetize) by breaking the alignment of the electrons. III MANUFACTURE Industrial ceramics are produced from powders that have been tightly squeezed and then heated to high temperatures. Traditional ceramics, such as porcelain, tiles, and pottery, are formed from powders made from minerals such as clay, talc, silica, and feldspar. 
Most industrial ceramics, however, are formed from highly pure powders of specialty chemicals such as silicon carbide, alumina, and barium titanate. The minerals used to make ceramics are dug from the earth and are then crushed and ground into fine powder. Manufacturers often purify this powder by mixing it in solution and allowing a chemical precipitate (a uniform solid that forms within a solution) to form. The precipitate is then separated from the solution, and the powder is heated to drive off impurities, including water. The result is typically a highly pure powder with particle sizes of about 1 micrometer (a micrometer is 0.000001 meter, or 0.00004 in). A Molding After purification, small amounts of wax are often added to bind the ceramic powder and make it more workable. Plastics may also be added to the powder to give the desired pliability and softness. The powder can then be shaped into different objects by various molding processes. These molding processes include slip casting, pressure casting, injection molding, and extrusion. After the ceramic is molded, it is heated in a process known as densification to make the material stronger and more dense. A1 Slip Casting Slip casting is a molding process used to form hollow ceramic objects. The ceramic powder is poured into a mold that has porous walls, and then the mold is filled with water. The capillary action (forces created by surface tension and by wetting the sides of a tube) of the porous walls drains water through the powder and the mold, leaving a solid layer of ceramic inside. A2 Pressure Casting In pressure casting, ceramic powder is poured into a mold, and pressure is then applied to the powder. The pressure condenses the powder into a solid layer of ceramic that is shaped to the inside of the mold. A3 Injection Molding Injection molding is used to make small, intricate objects. 
This method uses a piston to force the ceramic powder through a heated tube into a mold, where the powder cools, hardening to the shape of the mold. When the object has solidified, the mold is opened and the ceramic piece is removed. A4 Extrusion Extrusion is a continuous process in which ceramic powder is heated in a long barrel. A rotating screw then forces the heated material through an opening of the desired shape. As the continuous form emerges from the die opening, the form cools, solidifies, and is cut to the desired length. Extrusion is used to make products such as ceramic pipe, tiles, and brick. B Densification The process of densification uses intense heat to condense a ceramic object into a strong, dense product. After being molded, the ceramic object is heated in an electric furnace to temperatures between 1000° and 1700° C (1832° and 3092° F). As the ceramic heats, the powder particles coalesce, much as water droplets join at room temperature. As the ceramic particles merge, the object becomes increasingly dense, shrinking by up to 20 percent of its original size. The goal of this heating process is to maximize the ceramic’s strength by obtaining an internal structure that is compact and extremely dense. IV APPLICATIONS Ceramics are valued for their mechanical properties, including strength, durability, and hardness. Their electrical and magnetic properties make them valuable in electronic applications, where they are used as insulators, semiconductors, conductors, and magnets. Ceramics also have important uses in the aerospace, biomedical, construction, and nuclear industries. A Mechanical Applications Industrial ceramics are widely used for applications requiring strong, hard, and abrasion-resistant materials. For example, machinists use metal-cutting tools tipped with alumina, as well as tools made from silicon nitrides, to cut, shape, grind, sand, and polish cast iron, nickel-based alloys, and other metals.
Silicon nitrides, silicon carbides, and certain types of zirconias are used to make components such as valves and turbocharger rotors for high-temperature diesel and gas-turbine engines. The textile industry uses ceramics for thread guides that can resist the cutting action of fibers traveling through these guides at high speed. B Electrical and Magnetic Applications Ceramic materials have a wide range of electrical properties. Hence, ceramics are used as insulators (poor conductors of electricity), semiconductors (greater conductivity than insulators but less than good conductors), and conductors (good conductors of electricity). Ceramics such as aluminum oxide (Al2O3) do not conduct electricity at all and are used to make insulators. Stacks of disks made of this material are used to suspend high-voltage power lines from transmission towers. Similarly, thin plates of aluminum oxide, which remain electrically and chemically stable when exposed to high-frequency currents, are used to hold microchips. Other ceramics make excellent semiconductors. Small semiconductor chips, often made from barium titanate (BaTiO3) and strontium titanate (SrTiO3), may contain hundreds of thousands of transistors, making possible the miniaturization of electronic devices. Scientists have discovered a family of copper-oxide-based ceramics that become superconductive at higher temperatures than do metals. Superconductivity refers to the ability of a cooled material to conduct an electric current with no resistance. This phenomenon can occur only at extremely low temperatures, which are difficult to maintain. However, in 1988 researchers discovered a copper oxide ceramic that becomes superconductive at -148° C (-234° F). This temperature is far higher than the temperatures at which metals become superconductors (see Superconductivity).
Thin insulating films of ceramic material such as barium titanate and strontium titanate are capable of storing large quantities of electricity in extremely small volumes. Devices capable of storing electrical charge are known as capacitors. Engineers form miniature capacitors from ceramics and use them in televisions, stereos, computers, and other electronic products. Ferrites (ceramics containing iron oxide) are widely used as low-cost magnets in electric motors. These magnets help convert electric energy into mechanical energy. In an electric motor, an electric current is passed through a magnetic field created by a ceramic magnet. As the current passes through the magnetic field, the motor coil turns, creating mechanical energy. Unlike metal magnets, ferrites can handle high-frequency currents (currents that increase and decrease rapidly in voltage); because ferrites are poor electrical conductors, they do not lose as much power at high frequencies as metal magnets do. Ferrites are also used in video, radio, and microwave equipment. Manganese zinc ferrites are used in magnetic recording heads, and particles of ferric oxide are the active component in a variety of magnetic recording media, such as recording tape and computer diskettes (see Sound Recording and Reproduction; Floppy Disk). C Aerospace Aerospace engineers use ceramic materials and cermets (durable, highly heat-resistant alloys made by combining powdered metal with an oxide or carbide and then pressing and baking the mixture) to make components for space vehicles. Such components include heat-shield tiles for the space shuttle and nosecones for rocket payloads. D Bioceramics Certain advanced ceramics are compatible with bone and tissue and are used in the biomedical field to make implants for use within the body. For example, specially prepared, porous alumina will bond with bone and other natural tissue. Medical and dental specialists use this ceramic to make hip joints, dental caps, and dental bridges.
Ceramics such as calcium hydroxyl phosphates are compatible with bone and are used to reconstruct fractured or diseased bone (see Bioengineering; Dentistry). E Nuclear Power Engineers use uranium ceramic pellets to generate nuclear power. These pellets are produced in fuel fabrication plants from the gas uranium hexafluoride (UF6). The pellets are then packed into hollow tubes called fuel rods and are transported to nuclear power plants. F Building and Construction Manufacturers use ceramics to make bricks, tiles, piping, and other construction materials. Ceramics for these purposes are made primarily from clay and shale. Household fixtures such as sinks and bathtubs are made from feldspar- and clay-based ceramics. G Coatings Because ceramic materials are harder and have better corrosion resistance than most metals, manufacturers often coat metal with ceramic enamel. Manufacturers apply ceramic enamel by injecting a compressed gas containing ceramic powder into the flame of a hydrocarbon-oxygen torch burning at about 2500° C (about 4500° F). The semimolten powder particles adhere to the metal, cooling to form a hard enamel. Household appliances, such as refrigerators, stoves, washing machines, and dryers, are often coated with ceramic enamel. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. (vii) Greenhouse Effect I INTRODUCTION Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive. The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences.
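The figures above are given in “Celsius degrees” (a temperature difference) rather than “degrees Celsius” (a point on the scale), and the two convert differently: an interval scales by 9/5 alone, with no +32 offset. A small illustrative sketch (the function names are ours, not from the article):

```python
def interval_c_to_f(dc):
    # A temperature *difference*: scale only, no offset.
    return dc * 9 / 5

def temp_c_to_f(c):
    # A temperature *reading*: scale, then shift the zero point.
    return c * 9 / 5 + 32

# The ~33 Celsius-degree natural greenhouse warming quoted above:
print(round(interval_c_to_f(33)))  # 59, i.e. about 59 Fahrenheit degrees
# A *reading* of 33 degrees Celsius converts differently:
print(round(temp_c_to_f(33)))      # 91 degrees Fahrenheit
```

The same interval rule gives 0.6 C° ≈ 1 F° and 1.4 to 5.8 C° ≈ 2.5 to 10.4 F°, matching the ranges quoted in the article.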
Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally. Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the past 100 years. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns, causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions. II HOW THE GREENHOUSE EFFECT WORKS The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere.
For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface. About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere. Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life. The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases.
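The solar-energy bookkeeping described above can be tallied directly. The sketch below uses only the approximate percentages quoted in the text (they are rounded figures, so they need not sum to exactly 100):

```python
# Approximate fate of incoming solar energy, in percent (figures from the text)
reflected_by_atmosphere = 25.0   # clouds and other atmospheric particles
absorbed_by_atmosphere = 20.0    # e.g. gamma rays, X rays, and ultraviolet
reaches_surface = 50.0           # largely visible light

# Of the energy reaching the surface, about 85 percent is absorbed
absorbed_by_surface = reaches_surface * 85 / 100   # 42.5 percent of the total
reflected_by_surface = reaches_surface * 15 / 100  # 7.5 percent of the total

print(absorbed_by_surface, reflected_by_surface)   # 42.5 7.5
# The quoted figures are rounded, so they fall slightly short of 100:
print(reflected_by_atmosphere + absorbed_by_atmosphere + reaches_surface)  # 95.0
```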
Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (–2°F), instead of the present average surface temperature of 15°C (59°F). To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life. III TYPES OF GREENHOUSE GASES Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs). Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface.
A Water Vapor Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentration of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), the evaporation of oceans, lakes, and rivers, as well as water evaporation from plants, increases and raises the amount of water vapor in the atmosphere. B Carbon Dioxide Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled from animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct. In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn objects that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide.
As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since carbon dioxide increases can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue. C Methane Many natural processes produce methane, also known as natural gas. Decomposition of carbon-containing substances found in oxygen-free environments, such as wastes in landfills, releases methane. Ruminant animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels. Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and methane only stays in the atmosphere for a decade or so. But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is 20 times more efficient at trapping infrared radiation emitted from the Earth’s surface than a molecule of carbon dioxide. D Nitrous Oxide Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide. Since 1750 nitrous oxide has risen by 17 percent in the atmosphere.
Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century. E Fluorinated Compounds Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide. CFCs, first synthesized in 1928, were widely used in the manufacture of aerosol sprays, blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals. Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol was amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical processes such as asthma inhalers. Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of this chemical will be phased out by the year 2030, providing scientists some time to develop a new generation of safer, effective chemicals. 
HFCs, which do not contain chlorine and only remain in the atmosphere for a short time, are now considered the most effective and safest substitute for CFCs. F Other Synthetic Chemicals Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because it traps heat more effectively than all other known greenhouse gases. The exact sources of the gas, though almost certainly industrial, remain uncertain. IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles that can be traced to erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not considered a heat-trapping greenhouse gas, they do affect the transfer of heat energy radiated from the Earth to space. The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming. However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming.
As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century. V UNDERSTANDING THE GREENHOUSE EFFECT Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof. In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.” Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentrations of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation that is not too far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming. 
The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory located 3,400 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an increase of about 11 percent in only 31 years. As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow. In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992. Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures.
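As a rough check on the Mauna Loa record quoted above, the readings imply an average rise of roughly 1 ppm per year over the period (a sketch using only the numbers given in the text):

```python
# Atmospheric CO2 at Mauna Loa, in parts per million (figures from the text)
readings = {1957: 315, 1970: 323, 1980: 335, 1988: 350}

rise = readings[1988] - readings[1957]   # 35 ppm
years = 1988 - 1957                      # 31 years

print(rise / years)                # ~1.1 ppm per year on average
print(100 * rise / readings[1957]) # ~11 percent increase over 31 years
```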
Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and some of the predictions based on these models often spark controversy within the science community. Nevertheless, the basic concept of global warming is widely accepted by most climate scientists. VI EFFORTS TO CONTROL GREENHOUSE GASES Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators developed a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. In 2004 Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than 126 countries have ratified the protocol. Australia and the United States are the only industrialized nations that have failed to support it.
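Emissions targets of the kind set by the Kyōto Protocol are commonly tracked in carbon dioxide equivalents, weighting each gas by its heat-trapping effectiveness. The sketch below is illustrative only: the multipliers echo the rough per-molecule figures quoted earlier in the article (about 20 for methane and about 300 for nitrous oxide; real inventories use standardized global warming potentials computed per unit mass), and the emission amounts are invented:

```python
# Rough heat-trapping multipliers relative to CO2 (from the article's figures)
multiplier = {"co2": 1, "ch4": 20, "n2o": 300}

# Hypothetical annual emissions in arbitrary units (illustrative numbers only)
emissions = {"co2": 1000, "ch4": 30, "n2o": 2}

co2_equivalent = sum(emissions[gas] * multiplier[gas] for gas in emissions)
print(co2_equivalent)  # 2200: 1000 + 30*20 + 2*300
```

Weighting this way is what makes cross-gas emissions trading comparable: a small cut in a potent gas can count for as much as a large cut in carbon dioxide.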
(viii) Greenhouse Effect I INTRODUCTION Greenhouse Effect, the capacity of certain gases in the atmosphere to trap heat emitted from the Earth’s surface, thereby insulating and warming the Earth. Without the thermal blanketing of the natural greenhouse effect, the Earth’s climate would be about 33 Celsius degrees (about 59 Fahrenheit degrees) cooler—too cold for most living organisms to survive. The greenhouse effect has warmed the Earth for over 4 billion years. Now scientists are growing increasingly concerned that human activities may be modifying this natural process, with potentially dangerous consequences. Since the advent of the Industrial Revolution in the 1700s, humans have devised many inventions that burn fossil fuels such as coal, oil, and natural gas. Burning these fossil fuels, as well as other activities such as clearing land for agriculture or urban settlements, releases some of the same gases that trap heat in the atmosphere, including carbon dioxide, methane, and nitrous oxide. These atmospheric gases have risen to levels higher than at any time in the last 420,000 years. As these gases build up in the atmosphere, they trap more heat near the Earth’s surface, causing Earth’s climate to become warmer than it would naturally. Scientists call this unnatural heating effect global warming and blame it for an increase in the Earth’s surface temperature of about 0.6 Celsius degrees (about 1 Fahrenheit degree) over the last nearly 100 years. Without remedial measures, many scientists fear that global temperatures will rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by 2100. These warmer temperatures could melt parts of polar ice caps and most mountain glaciers, causing a rise in sea level of up to 1 m (40 in) within a century or two, which would flood coastal regions. Global warming could also affect weather patterns causing, among other problems, prolonged drought or increased flooding in some of the world’s leading agricultural regions. 
II HOW THE GREENHOUSE EFFECT WORKS The greenhouse effect results from the interaction between sunlight and the layer of greenhouse gases in the Earth's atmosphere that extends up to 100 km (60 mi) above Earth's surface. Sunlight is composed of a range of radiant energies known as the solar spectrum, which includes visible light, infrared light, gamma rays, X rays, and ultraviolet light. When the Sun’s radiation reaches the Earth’s atmosphere, some 25 percent of the energy is reflected back into space by clouds and other atmospheric particles. About 20 percent is absorbed in the atmosphere. For instance, gas molecules in the uppermost layers of the atmosphere absorb the Sun’s gamma rays and X rays. The Sun’s ultraviolet radiation is absorbed by the ozone layer, located 19 to 48 km (12 to 30 mi) above the Earth’s surface. About 50 percent of the Sun’s energy, largely in the form of visible light, passes through the atmosphere to reach the Earth’s surface. Soils, plants, and oceans on the Earth’s surface absorb about 85 percent of this heat energy, while the rest is reflected back into the atmosphere—most effectively by reflective surfaces such as snow, ice, and sandy deserts. In addition, some of the Sun’s radiation that is absorbed by the Earth’s surface becomes heat energy in the form of long-wave infrared radiation, and this energy is released back into the atmosphere. Certain gases in the atmosphere, including water vapor, carbon dioxide, methane, and nitrous oxide, absorb this infrared radiant heat, temporarily preventing it from dispersing into space. As these atmospheric gases warm, they in turn emit infrared radiation in all directions. Some of this heat returns back to Earth to further warm the surface in what is known as the greenhouse effect, and some of this heat is eventually released to space. 
This heat transfer creates equilibrium between the total amount of heat that reaches the Earth from the Sun and the amount of heat that the Earth radiates out into space. This equilibrium or energy balance—the exchange of energy between the Earth’s surface, atmosphere, and space—is important to maintain a climate that can support a wide variety of life. The heat-trapping gases in the atmosphere behave like the glass of a greenhouse. They let much of the Sun’s rays in, but keep most of that heat from directly escaping. Because of this, they are called greenhouse gases. Without these gases, heat energy absorbed and reflected from the Earth’s surface would easily radiate back out to space, leaving the planet with an inhospitable temperature close to –19°C (2°F), instead of the present average surface temperature of 15°C (59°F). To appreciate the importance of the greenhouse gases in creating a climate that helps sustain most forms of life, compare Earth to Mars and Venus. Mars has a thin atmosphere that contains low concentrations of heat-trapping gases. As a result, Mars has a weak greenhouse effect resulting in a largely frozen surface that shows no evidence of life. In contrast, Venus has an atmosphere containing high concentrations of carbon dioxide. This heat-trapping gas prevents heat radiated from the planet’s surface from escaping into space, resulting in surface temperatures that average 462°C (864°F)—too hot to support life. III TYPES OF GREENHOUSE GASES Earth’s atmosphere is primarily composed of nitrogen (78 percent) and oxygen (21 percent). These two most common atmospheric gases have chemical structures that restrict absorption of infrared energy. Only the few greenhouse gases, which make up less than 1 percent of the atmosphere, offer the Earth any insulation. Greenhouse gases occur naturally or are manufactured. The most abundant naturally occurring greenhouse gas is water vapor, followed by carbon dioxide, methane, and nitrous oxide. 
Human-made chemicals that act as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs). Since the 1700s, human activities have substantially increased the levels of greenhouse gases in the atmosphere. Scientists are concerned that expected increases in the concentrations of greenhouse gases will powerfully enhance the atmosphere’s capacity to retain infrared radiation, leading to an artificial warming of the Earth’s surface. A Water Vapor Water vapor is the most common greenhouse gas in the atmosphere, accounting for about 60 to 70 percent of the natural greenhouse effect. Humans do not have a significant direct impact on water vapor levels in the atmosphere. However, as human activities increase the concentration of other greenhouse gases in the atmosphere (producing warmer temperatures on Earth), evaporation from oceans, lakes, rivers, and plants increases, raising the amount of water vapor in the atmosphere. B Carbon Dioxide Carbon dioxide constantly circulates in the environment through a variety of natural processes known as the carbon cycle. Volcanic eruptions and the decay of plant and animal matter both release carbon dioxide into the atmosphere. In respiration, animals break down food to release the energy required to build and maintain cellular activity. A byproduct of respiration is the formation of carbon dioxide, which is exhaled by animals into the environment. Oceans, lakes, and rivers absorb carbon dioxide from the atmosphere. Through photosynthesis, plants collect carbon dioxide and use it to make their own food, in the process incorporating carbon into new plant tissue and releasing oxygen to the environment as a byproduct.
In order to provide energy to heat buildings, power automobiles, and fuel electricity-producing power plants, humans burn materials that contain carbon, such as the fossil fuels oil, coal, and natural gas; wood or wood products; and some solid wastes. When these products are burned, they release carbon dioxide into the air. In addition, humans cut down huge tracts of trees for lumber or to clear land for farming or building. This process, known as deforestation, can both release the carbon stored in trees and significantly reduce the number of trees available to absorb carbon dioxide. As a result of these human activities, carbon dioxide in the atmosphere is accumulating faster than the Earth’s natural processes can absorb the gas. By analyzing air bubbles trapped in glacier ice that is many centuries old, scientists have determined that carbon dioxide levels in the atmosphere have risen by 31 percent since 1750. And since added carbon dioxide can remain in the atmosphere for centuries, scientists expect these concentrations to double or triple in the next century if current trends continue. C Methane Many natural processes produce methane, the main component of natural gas. Decomposition of carbon-containing substances found in oxygen-free environments, such as wastes in landfills, releases methane. Ruminating animals such as cattle and sheep belch methane into the air as a byproduct of digestion. Microorganisms that live in damp soils, such as rice fields, produce methane when they break down organic matter. Methane is also emitted during coal mining and the production and transport of other fossil fuels. Methane has more than doubled in the atmosphere since 1750, and could double again in the next century. Atmospheric concentrations of methane are far lower than those of carbon dioxide, and methane only stays in the atmosphere for a decade or so.
But scientists consider methane an extremely effective heat-trapping gas—one molecule of methane is about 20 times more efficient at trapping infrared radiation emitted from the Earth’s surface than a molecule of carbon dioxide. D Nitrous Oxide Nitrous oxide is released by the burning of fossil fuels, and automobile exhaust is a large source of this gas. In addition, many farmers use nitrogen-containing fertilizers to provide nutrients to their crops. When these fertilizers break down in the soil, they emit nitrous oxide into the air. Plowing fields also releases nitrous oxide. Since 1750 nitrous oxide has risen by 17 percent in the atmosphere. Although this increase is smaller than for the other greenhouse gases, nitrous oxide traps heat about 300 times more effectively than carbon dioxide and can stay in the atmosphere for a century. E Fluorinated Compounds Some of the most potent greenhouse gases emitted are produced solely by human activities. Fluorinated compounds, including CFCs, HCFCs, and HFCs, are used in a variety of manufacturing processes. For each of these synthetic compounds, one molecule is several thousand times more effective in trapping heat than a single molecule of carbon dioxide. CFCs, first synthesized in 1928, were widely used as aerosol-spray propellants, as blowing agents for foams and packing materials, as solvents, and as refrigerants. Nontoxic and safe to use in most applications, CFCs are harmless in the lower atmosphere. However, in the upper atmosphere, ultraviolet radiation breaks down CFCs, releasing chlorine into the atmosphere. In the mid-1970s, scientists began observing that higher concentrations of chlorine were destroying the ozone layer in the upper atmosphere. Ozone protects the Earth from harmful ultraviolet radiation, which can cause cancer and other damage to plants and animals.
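The heat-trapping multipliers quoted in this section (roughly 20 for methane and 300 for nitrous oxide, relative to carbon dioxide) act, in effect, as crude warming-potential weights. A minimal sketch of how such weights reduce a mix of emissions to a single CO2-equivalent figure; the emission amounts below are invented purely for illustration, and real inventories use standardized per-mass Global Warming Potentials rather than these rough per-molecule ratios:

```python
# CO2-equivalent tally using the article's rough heat-trapping multipliers.
# NOTE: the emission quantities below are made up purely for illustration.
multipliers = {"carbon_dioxide": 1, "methane": 20, "nitrous_oxide": 300}
emissions = {"carbon_dioxide": 1000.0, "methane": 10.0, "nitrous_oxide": 1.0}

co2_equivalent = sum(amount * multipliers[gas] for gas, amount in emissions.items())
print(f"CO2-equivalent total: {co2_equivalent:.0f} units")  # 1000 + 200 + 300 = 1500
```

The point of the weighting is visible in the arithmetic: a small amount of a potent gas (here, one unit of nitrous oxide) contributes as much warming as hundreds of units of carbon dioxide.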
Beginning in 1987 with the Montréal Protocol on Substances that Deplete the Ozone Layer, representatives from 47 countries established control measures that limited the consumption of CFCs. By 1992 the Montréal Protocol had been amended to completely ban the manufacture and use of CFCs worldwide, except in certain developing countries and for use in special medical devices such as asthma inhalers. Scientists devised substitutes for CFCs, developing HCFCs and HFCs. Since HCFCs still release ozone-destroying chlorine in the atmosphere, production of these chemicals will be phased out by the year 2030, giving scientists time to develop a new generation of safer, effective chemicals. HFCs, which contain no chlorine and remain in the atmosphere for only a short time, are now considered the most effective and safest substitutes for CFCs. F Other Synthetic Chemicals Experts are concerned about other industrial chemicals that may have heat-trapping abilities. In 2000 scientists observed rising concentrations of a previously unreported compound called trifluoromethyl sulphur pentafluoride. Although present in extremely low concentrations in the environment, the gas still poses a significant threat because it traps heat more effectively than all other known greenhouse gases. The exact sources of the gas, though it is clearly a product of industrial processes, remain uncertain. IV OTHER FACTORS AFFECTING THE GREENHOUSE EFFECT Aerosols, also known as particulates, are airborne particles that absorb, scatter, and reflect radiation back into space. Clouds, windblown dust, and particles from erupting volcanoes are examples of natural aerosols. Human activities, including the burning of fossil fuels and slash-and-burn farming techniques used to clear forestland, contribute additional aerosols to the atmosphere. Although aerosols are not heat-trapping greenhouse gases, they do affect the transfer of heat energy radiated from the Earth to space.
The effect of aerosols on climate change is still debated, but scientists believe that light-colored aerosols cool the Earth’s surface, while dark aerosols like soot actually warm the atmosphere. The increase in global temperature in the last century is lower than many scientists predicted when only taking into account increasing levels of carbon dioxide, methane, nitrous oxide, and fluorinated compounds. Some scientists believe that aerosol cooling may be the cause of this unexpectedly reduced warming. However, scientists do not expect that aerosols will ever play a significant role in offsetting global warming. As pollutants, aerosols typically pose a health threat, and the manufacturing or agricultural processes that produce them are subject to air-pollution control efforts. As a result, scientists do not expect aerosols to increase as fast as other greenhouse gases in the 21st century. V UNDERSTANDING THE GREENHOUSE EFFECT Although concern over the effect of increasing greenhouse gases is a relatively recent development, scientists have been investigating the greenhouse effect since the early 1800s. French mathematician and physicist Jean Baptiste Joseph Fourier, while exploring how heat is conducted through different materials, was the first to compare the atmosphere to a glass vessel in 1827. Fourier recognized that the air around the planet lets in sunlight, much like a glass roof. In the 1850s British physicist John Tyndall investigated the transmission of radiant heat through gases and vapors. Tyndall found that nitrogen and oxygen, the two most common gases in the atmosphere, had no heat-absorbing properties. 
He then went on to measure the absorption of infrared radiation by carbon dioxide and water vapor, publishing his findings in 1863 in a paper titled “On Radiation Through the Earth’s Atmosphere.” Swedish chemist Svante August Arrhenius, best known for his Nobel Prize-winning work in electrochemistry, also advanced understanding of the greenhouse effect. In 1896 he calculated that doubling the natural concentration of carbon dioxide in the atmosphere would increase global temperatures by 4 to 6 Celsius degrees (7 to 11 Fahrenheit degrees), a calculation not far from today’s estimates using more sophisticated methods. Arrhenius correctly predicted that when Earth’s temperature warms, water vapor evaporation from the oceans increases. The higher concentration of water vapor in the atmosphere would then contribute to the greenhouse effect and global warming. The predictions about carbon dioxide and its role in global warming set forth by Arrhenius were virtually ignored for over half a century, until scientists began to detect a disturbing change in atmospheric levels of carbon dioxide. In 1957 researchers at the Scripps Institution of Oceanography, based in San Diego, California, began monitoring carbon dioxide levels in the atmosphere from Hawaii’s remote Mauna Loa Observatory, located about 3,400 m (11,000 ft) above sea level. When the study began, carbon dioxide concentrations in the Earth’s atmosphere were 315 molecules of gas per million molecules of air (abbreviated parts per million or ppm). Each year carbon dioxide concentrations increased—to 323 ppm by 1970 and 335 ppm by 1980. By 1988 atmospheric carbon dioxide had increased to 350 ppm, an 11 percent increase in only 31 years. As other researchers confirmed these findings, scientific interest in the accumulation of greenhouse gases and their effect on the environment slowly began to grow.
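The Mauna Loa figures quoted above are enough to reproduce the percentage increase and the average growth rate directly:

```python
# Reproducing the trend from the Mauna Loa CO2 readings quoted in the text.
readings = {1957: 315, 1970: 323, 1980: 335, 1988: 350}  # CO2, in ppm

rise = readings[1988] - readings[1957]   # 35 ppm over the study period
percent = 100 * rise / readings[1957]    # about 11 percent
per_year = rise / (1988 - 1957)          # about 1.1 ppm per year on average

print(f"Rise 1957-1988: {rise} ppm ({percent:.0f}%), {per_year:.1f} ppm/year")
```

Note also that the growth was not linear: the quoted readings show roughly 0.6 ppm/year in the 1960s but nearly 1.9 ppm/year in the 1980s, i.e. the accumulation was accelerating.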
In 1988 the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC). The IPCC was the first international collaboration of scientists to assess the scientific, technical, and socioeconomic information related to the risk of human-induced climate change. The IPCC creates periodic assessment reports on advances in scientific understanding of the causes of climate change, its potential impacts, and strategies to control greenhouse gases. The IPCC played a critical role in establishing the United Nations Framework Convention on Climate Change (UNFCCC). The UNFCCC, which provides an international policy framework for addressing climate change issues, was adopted by the United Nations General Assembly in 1992. Today scientists around the world monitor atmospheric greenhouse gas concentrations and create forecasts about their effects on global temperatures. Air samples from sites spread across the globe are analyzed in laboratories to determine levels of individual greenhouse gases. Sources of greenhouse gases, such as automobiles, factories, and power plants, are monitored directly to determine their emissions. Scientists gather information about climate systems and use this information to create and test computer models that simulate how climate could change in response to changing conditions on the Earth and in the atmosphere. These models act as high-tech crystal balls to project what may happen in the future as greenhouse gas levels rise. Models can only provide approximations, and some of the predictions based on these models often spark controversy within the science community. Nevertheless, the basic concept of global warming is widely accepted by most climate scientists. VI EFFORTS TO CONTROL GREENHOUSE GASES Due to overwhelming scientific evidence and growing political interest, global warming is currently recognized as an important national and international issue. 
Since 1992 representatives from over 160 countries have met regularly to discuss how to reduce worldwide greenhouse gas emissions. In 1997 representatives met in Kyōto, Japan, and produced an agreement, known as the Kyōto Protocol, which requires industrialized countries to reduce their emissions by 2012 to an average of 5 percent below 1990 levels. To help countries meet this agreement cost-effectively, negotiators developed a system in which nations that have no obligations or that have successfully met their reduced emissions obligations could profit by selling or trading their extra emissions quotas to other countries that are struggling to reduce their emissions. In 2004 Russia’s cabinet approved the treaty, paving the way for it to go into effect in 2005. More than 126 countries have ratified the protocol. Australia and the United States are the only industrialized nations that have failed to support it. (ix) Pasteurization Pasteurization, process of heating a liquid, particularly milk, to a temperature between 55° and 70°C (131° and 158°F) to destroy harmful bacteria without materially changing the composition, flavor, or nutritive value of the liquid. The process is named after the French chemist Louis Pasteur, who devised it in 1865 to inhibit fermentation of wine and milk. Milk is pasteurized by heating it at a temperature of 63°C (145°F) for 30 minutes, rapidly cooling it, and then storing it at a temperature below 10°C (50°F). Beer and wine are pasteurized by being heated at about 60°C (140°F) for about 20 minutes; a newer method involves heating at 70°C (158°F) for about 30 seconds and filling the container under sterile conditions. (x) Immunization I INTRODUCTION Immunization, also called vaccination or inoculation, a method of stimulating resistance in the human body to specific diseases using microorganisms—bacteria or viruses—that have been modified or killed.
These treated microorganisms do not cause the disease, but rather trigger the body's immune system to build a defense mechanism that continuously guards against the disease. If a person immunized against a particular disease later comes into contact with the disease-causing agent, the immune system is immediately able to respond defensively. Immunization has dramatically reduced the incidence of a number of deadly diseases. For example, a worldwide vaccination program resulted in the global eradication of smallpox in 1980, and in most developed countries immunization has essentially eliminated diphtheria, poliomyelitis, and neonatal tetanus. The number of cases of Haemophilus influenzae type b meningitis in the United States has dropped 95 percent among infants and children since 1988, when the vaccine for that disease was first introduced. In the United States, more than 90 percent of children receive all the recommended vaccinations by their second birthday. About 85 percent of Canadian children are immunized by age two. II TYPES OF IMMUNIZATION Scientists have developed two approaches to immunization: active immunization, which provides long-lasting immunity, and passive immunization, which gives temporary immunity. In active immunization, all or part of a disease-causing microorganism or a modified product of that microorganism is injected into the body to make the immune system respond defensively. Passive immunity is accomplished by injecting blood from an actively immunized human being or animal. A Active Immunization Vaccines that provide active immunization are made in a variety of ways, depending on the type of disease and the organism that causes it. The active components of the vaccinations are antigens, substances found in the disease-causing organism that the immune system recognizes as foreign. In response to the antigen, the immune system develops either antibodies or white blood cells called T lymphocytes, which are special attacker cells. 
Immunization mimics real infection but presents little or no risk to the recipient. Some immunizing agents provide complete protection against a disease for life. Other agents provide partial protection, meaning that the immunized person can contract the disease, but in a less severe form. Live vaccines are usually considered risky for people who have a damaged immune system, such as those infected with the virus that causes acquired immunodeficiency syndrome (AIDS) or those receiving chemotherapy for cancer or organ transplantation. Without a healthy defense system to fight infection, these people may develop the disease that the vaccine is trying to prevent. Some immunizing agents require repeated inoculations—or booster shots—at specific intervals. Tetanus shots, for example, are recommended every ten years throughout life. In order to make a vaccine that confers active immunization, scientists use an organism or part of one that has been modified so that it has a low risk of causing illness but still triggers the body’s immune defenses against disease. One type of vaccine contains live organisms that have been attenuated—that is, their virulence has been weakened. This approach is used to protect against yellow fever, measles, smallpox, and many other viral diseases. Immunization can also occur when a person receives an injection of killed or inactivated organisms that are relatively harmless but that still contain antigens. This type of vaccination is used to protect against viral diseases such as poliomyelitis and bacterial diseases such as typhoid fever and whooping cough. Some vaccines use only parts of an infectious organism that contain antigens, such as a cell-wall protein or a flagellum. Known as acellular vaccines, they produce the desired immunity with a lower risk of producing potentially harmful immune reactions that may result from exposure to other parts of the organism.
Acellular vaccines include the Haemophilus influenzae type B vaccine for meningitis and newer versions of the whooping cough vaccine. Scientists use genetic engineering techniques to refine this approach further by isolating a gene or genes within an infectious organism that code for a particular antigen. The subunit vaccines produced by this method cannot cause disease and are safe to use in people who have an impaired immune system. Subunit vaccines for hepatitis B and pneumococcus infection, which causes pneumonia, became available in the late 1990s. Active immunization can also be carried out using bacterial toxins that have been treated with chemicals so that they are no longer toxic, even though their antigens remain intact. Such treated toxins, called toxoids, are used in vaccinating against tetanus, botulism, and similar toxin-caused diseases. B Passive Immunization Passive immunization is performed without injecting any antigen. In this method, vaccines contain antibodies obtained from the blood of an actively immunized human being or animal. The antibodies last for two to three weeks, and during that time the person is protected against the disease. Although short-lived, passive immunization provides immediate protection, unlike active immunization, which can take weeks to develop. Consequently, passive immunization can be lifesaving when a person has been infected with a deadly organism. Occasionally there are complications associated with passive immunization. Diseases such as botulism and rabies once posed a particular problem. Immune globulin (antibody-containing plasma) for these diseases was once derived from the blood serum of horses. Although this animal material was specially treated before administration to humans, serious allergic reactions were common. Today, human-derived immune globulin is more widely available and the risk of side effects is reduced.
III IMMUNIZATION RECOMMENDATIONS More than 50 vaccines for preventable diseases are licensed in the United States. The American Academy of Pediatrics and the U.S. Public Health Service recommend a series of immunizations beginning at birth. The initial series for children is complete by the time they reach the age of two, but booster vaccines are required for certain diseases, such as diphtheria and tetanus, in order to maintain adequate protection. When new vaccines are introduced, it is uncertain how long full protection will last. Recently, for example, it was discovered that a single injection of measles vaccine, first licensed in 1963 and administered to children at the age of 15 months, did not confer protection through adolescence and young adulthood. As a result, in the 1980s a series of measles epidemics occurred on college campuses throughout the United States among students who had been vaccinated as infants. To forestall future epidemics, health authorities now recommend that a booster dose of the measles, mumps, and rubella (also known as German measles) vaccine be administered at the time a child first enters school. Not only children but also adults can benefit from immunization. Many adults in the United States are not sufficiently protected against tetanus, diphtheria, measles, mumps, and German measles. Health authorities recommend that most adults 65 years of age and older, and those with respiratory illnesses, be immunized against influenza (yearly) and pneumococcus (once). IV HISTORY OF IMMUNIZATION The use of immunization to prevent disease predated the knowledge of both infection and immunology. In China in approximately 600 BC, smallpox material was inoculated through the nostrils. Inoculation of healthy people with a tiny amount of material from smallpox sores was first attempted in England in 1718 and later in America. Those who survived the inoculation became immune to smallpox. 
American statesman Thomas Jefferson traveled from his home in Virginia to Philadelphia, Pennsylvania, to undergo this risky procedure. A significant breakthrough came in 1796 when British physician Edward Jenner discovered that he could immunize patients against smallpox by inoculating them with material from cowpox sores. Cowpox is a far milder disease that, unlike smallpox, carries little risk of death or disfigurement. Jenner inserted matter from cowpox sores into cuts he made on the arm of a healthy eight-year-old boy. The boy caught cowpox. However, when Jenner exposed the boy to smallpox eight weeks later, the child did not contract the disease. The vaccination with cowpox had made him immune to the smallpox virus. Today we know that the cowpox virus antigens are so similar to those of the smallpox virus that they trigger the body's defenses against both diseases. In 1885 Louis Pasteur created the first successful vaccine against rabies for a young boy who had been bitten 14 times by a rabid dog. Over the course of ten days, Pasteur injected progressively more virulent rabies material into the boy, causing the boy to develop immunity in time to avert death from this disease. Another major milestone in the use of vaccination to prevent disease occurred with the efforts of two American physician-researchers. In 1954 Jonas Salk introduced an injectable vaccine containing an inactivated virus to counter the epidemic of poliomyelitis. Subsequently, Albert Sabin made great strides in the fight against this paralyzing disease by developing an oral vaccine containing a live weakened virus. Since the introduction of the polio vaccine, the disease has been nearly eliminated in many parts of the world. As more vaccines are developed, a new generation of combined vaccines is becoming available that will allow physicians to administer a single shot for multiple diseases.
Work is also under way to develop additional orally administered vaccines and vaccines for sexually transmitted infections. Possible future vaccines may include, for example, one that would temporarily prevent pregnancy. Such a vaccine would still operate by stimulating the immune system to recognize and attack antigens, but in this case the antigens would be those of the hormones that are necessary for pregnancy. #3 Dilrauf, Sunday, December 30, 2007 PAPER 2002 Q.1 Write short notes on any two of the following (5 marks each): a. Acid Rain b. Pesticides c. Endocrine System (a) Acid Rain I INTRODUCTION Acid Rain, form of air pollution in which airborne acids produced by electric utility plants and other sources fall to Earth in distant regions. The corrosive nature of acid rain causes widespread damage to the environment. The problem begins with the production of sulfur dioxide and nitrogen oxides from the burning of fossil fuels, such as coal, natural gas, and oil, and from certain kinds of manufacturing. Sulfur dioxide and nitrogen oxides react with water and other chemicals in the air to form sulfuric acid, nitric acid, and other pollutants. These acid pollutants reach high into the atmosphere, travel with the wind for hundreds of miles, and eventually return to the ground by way of rain, snow, or fog, and as invisible “dry” forms. Damage from acid rain has been widespread in eastern North America and throughout Europe, and in Japan, China, and Southeast Asia. Acid rain leaches nutrients from soils, slows the growth of trees, and makes lakes uninhabitable for fish and other wildlife. In cities, acid pollutants corrode almost everything they touch, accelerating natural wear and tear on structures such as buildings and statues. Acids combine with other chemicals to form urban smog, which attacks the lungs, causing illness and premature deaths.
II FORMATION OF ACID RAIN The process that leads to acid rain begins with the burning of fossil fuels. Burning, or combustion, is a chemical reaction in which oxygen from the air combines with carbon, nitrogen, sulfur, and other elements in the substance being burned. The new compounds formed are gases called oxides. When sulfur and nitrogen are present in the fuel, their reaction with oxygen yields sulfur dioxide and various nitrogen oxide compounds. In the United States, 70 percent of sulfur dioxide pollution comes from power plants, especially those that burn coal. In Canada, industrial activities, including oil refining and metal smelting, account for 61 percent of sulfur dioxide pollution. Nitrogen oxides enter the atmosphere from many sources, with motor vehicles emitting the largest share—43 percent in the United States and 60 percent in Canada. Once in the atmosphere, sulfur dioxide and nitrogen oxides undergo complex reactions with water vapor and other chemicals to yield sulfuric acid, nitric acid, and other pollutants called nitrates and sulfates. The acid compounds are carried by air currents and the wind, sometimes over long distances. When clouds or fog form in acid-laden air, they too are acidic, and so is the rain or snow that falls from them. Acid pollutants also occur as dry particles and as gases, which may reach the ground without the help of water. When these “dry” acids are washed from ground surfaces by rain, they add to the acids in the rain itself to produce a still more corrosive solution. The combination of acid rain and dry acids is known as acid deposition. III EFFECTS OF ACID RAIN The acids in acid rain react chemically with any object they contact. Acids are corrosive chemicals that react with other chemicals by giving up hydrogen ions. The acidity of a substance comes from the abundance of free hydrogen ions when the substance is dissolved in water. Acidity is measured using a pH scale with units from 0 to 14.
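The pH scale is logarithmic: pH is the negative base-10 logarithm of the hydrogen ion concentration, so each step on the scale is a tenfold change in acidity. A short sketch of this standard chemistry definition, which the article uses but does not spell out:

```python
import math

# pH = -log10 of the hydrogen ion concentration in mol/L (standard definition).
def ph(hydrogen_ion_concentration):
    return -math.log10(hydrogen_ion_concentration)

def relative_acidity(ph_a, ph_b):
    """How many times more acidic a solution at ph_a is than one at ph_b."""
    return 10 ** (ph_b - ph_a)

print(ph(1e-7))                    # pure water: 7.0
print(relative_acidity(5.0, 6.0))  # one unit lower on the scale: 10x more acidic
print(relative_acidity(4.0, 6.0))  # two units lower: 100x more acidic
```

This is why a lake whose pH has fallen from 6 to below 5 is described later in the article as "at least ten times more acidic than it should be": one full pH unit is exactly a factor of ten in hydrogen ion concentration.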
Acidic substances have pH numbers from 1 to 6—the lower the pH number, the stronger, or more corrosive, the substance. Some nonacidic substances, called bases or alkalis, are like acids in reverse—they readily accept the hydrogen ions that acids give up. Bases have pH numbers from 8 to 14, with the higher values indicating increased alkalinity. Pure water has a neutral pH of 7—it is neither acidic nor basic. Rain, snow, or fog with a pH below 5.6 is considered acid rain. When bases mix with acids, the bases lessen the strength of an acid (see Acids and Bases). This buffering action regularly occurs in nature. Rain, snow, and fog formed in regions free of acid pollutants are slightly acidic, having a pH near 5.6. Alkaline chemicals in the environment, found in rocks, soils, lakes, and streams, regularly neutralize this precipitation. But when precipitation is highly acidic, with a pH below 5.6, naturally occurring acid buffers become depleted over time, and nature’s ability to neutralize the acids is impaired. Acid rain has been linked to widespread environmental damage, including soil and plant degradation, depleted life in lakes and streams, and erosion of human-made structures. A Soil In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Some soils are quite alkaline and can neutralize acid deposition indefinitely; others, especially thin mountain soils derived from granite or gneiss, can buffer acid only briefly. B Trees By removing useful nutrients from the soil, acid rain slows the growth of plants, especially trees. It also attacks trees more directly by eating holes in the waxy coating of leaves and needles, causing brown dead spots. If many such spots form, a tree loses some of its ability to make food through photosynthesis.
Also, organisms that cause disease can infect the tree through its injured leaves. Once weakened, trees are more vulnerable to other stresses, such as insect infestations, drought, and cold temperatures. Spruce and fir forests at higher elevations, where the trees literally touch the acid clouds, seem to be most at risk. Acid rain has been blamed for the decline of spruce forests on the highest ridges of the Appalachian Mountains in the eastern United States. In the Black Forest of southwestern Germany, half of the trees are damaged from acid rain and other forms of pollution. C Agriculture Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions, such as those in the Midwestern United States, can absorb and neutralize large amounts of acid. Mountain farms are more at risk—the thin soils in these higher elevations cannot neutralize so much acid. Farmers can prevent acid rain damage by monitoring the condition of the soil and, when necessary, adding crushed limestone to the soil to neutralize acid. If excessive amounts of nutrients have been leached out of the soil, farmers can replace them by adding nutrient-rich fertilizer. D Surface Waters Acid rain falls into and drains into streams, lakes, and marshes. Where there is snow cover in winter, local waters grow suddenly more acidic when the snow melts in the spring. Most natural waters are close to chemically neutral, neither acidic nor alkaline: their pH is between 6 and 8. In the northeastern United States and southeastern Canada, the water in some lakes now has a pH value of less than 5 as a result of acid rain. This means they are at least ten times more acidic than they should be. In the Adirondack Mountains of New York State, a quarter of the lakes and ponds are acidic, and many have lost their brook trout and other fish. In the middle Appalachian Mountains, over 1,300 streams are afflicted. 
All of Norway’s major rivers have been damaged by acid rain, severely reducing salmon and trout populations.

E Plants and Animals

The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered. Some species that live in water are very sensitive to acidity, some less so. Freshwater clams and mayfly young, for instance, begin dying when the water pH reaches 6.0. Frogs can generally survive more acidic water, but if their supply of mayflies is destroyed by acid rain, frog populations may also decline. Fish eggs of most species stop hatching at a pH of 5.0. Below a pH of 4.5, water is nearly sterile, unable to support any wildlife. Land animals dependent on aquatic organisms are also affected. Scientists have found that populations of snails living in or near water polluted by acid rain are declining in some regions. In the Netherlands, songbirds are finding fewer snails to eat. The eggs these birds lay have weakened shells because the birds are receiving less calcium from snail shells.

F Human-Made Structures

Acid rain and the dry deposition of acidic particles damage buildings, statues, automobiles, and other structures made of stone, metal, or any other material exposed to weather for long periods. The corrosive damage can be expensive and, in cities with very historic buildings, tragic. Both the Parthenon in Athens, Greece, and the Taj Mahal in Agra, India, are deteriorating due to acid pollution.

G Human Health

The acidification of surface waters causes little direct harm to people. It is safe to swim in even the most acidified lakes. However, toxic substances leached from soil can pollute local water supplies. In Sweden, as many as 10,000 lakes have been polluted by mercury released from soils damaged by acid rain, and residents have been warned to avoid eating fish caught in these lakes.
In the air, acids join with other chemicals to produce urban smog, which can irritate the lungs and make breathing difficult, especially for people who already have asthma, bronchitis, or other respiratory diseases. Solid particles of sulfates, a class of minerals derived from sulfur dioxide, are thought to be especially damaging to the lungs.

H Acid Rain and Global Warming

Acid pollution has one surprising effect that may be beneficial. Sulfates in the upper atmosphere reflect some sunlight back into space, and thus tend to slow down global warming. Scientists believe that acid pollution may have delayed the onset of warming by several decades in the middle of the 20th century.

IV EFFORTS TO CONTROL ACID RAIN

Acid rain can best be curtailed by reducing the amount of sulfur dioxide and nitrogen oxides released by power plants, motorized vehicles, and factories. The simplest way to cut these emissions is to use less energy from fossil fuels. Individuals can help: every time a consumer buys an energy-efficient appliance, adds insulation to a house, or takes a bus to work, he or she conserves energy and, as a result, fights acid rain. Another way to cut emissions of sulfur dioxide and nitrogen oxides is by switching to cleaner-burning fuels. For instance, coal can be high or low in sulfur, and some coal contains sulfur in a form that can be washed out easily before burning. By using more of the low-sulfur or cleanable types of coal, electric utility companies and other industries can pollute less. The gasoline and diesel oil that run most motor vehicles can also be formulated to burn more cleanly, producing less nitrogen oxide pollution. Clean-burning fuels such as natural gas are being used increasingly in vehicles. Natural gas contains almost no sulfur and produces very low levels of nitrogen oxides. Unfortunately, natural gas and the less-polluting coals tend to be more expensive, placing them out of the reach of nations that are struggling economically.
Pollution can also be reduced at the moment the fuel is burned. Several new kinds of burners and boilers alter the burning process to produce less nitrogen oxides and more free nitrogen, which is harmless. Limestone or sandstone added to the combustion chamber can capture some of the sulfur released by burning coal. Once sulfur dioxide and oxides of nitrogen have been formed, there is one more chance to keep them out of the atmosphere. In smokestacks, devices called scrubbers spray a mixture of water and powdered limestone into the waste gases (flue gases), recapturing the sulfur. Pollutants can also be removed by catalytic converters. In a converter, waste gases pass over small beads coated with metals. These metals promote chemical reactions that change harmful substances to less harmful ones. In the United States and Canada, these devices are required in cars, but they are not often used in smokestacks. Once acid rain has occurred, a few techniques can limit environmental damage. In a process known as liming, powdered limestone can be added to water or soil to neutralize the acid dropping from the sky. In Norway and Sweden, nations much afflicted with acid rain, lakes are commonly treated this way. Rural water companies may need to lime their reservoirs so that acid does not eat away water pipes. In cities, exposed surfaces vulnerable to acid rain destruction can be coated with acid-resistant paints. Delicate objects like statues can be sheltered indoors in climate-controlled rooms. Cleaning up sulfur dioxide and nitrogen oxides will reduce not only acid rain but also smog, which will make the air look clearer. Based on a study of the value that visitors to national parks place on clear scenic vistas, the U.S. Environmental Protection Agency thinks that improving the vistas in eastern national parks alone will be worth $1 billion in tourist revenue a year. 
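The limestone scrubbing described above can be put into rough numbers. This is a hedged back-of-envelope sketch assuming the idealized reaction CaCO3 + SO2 → CaSO3 + CO2, in which one mole of limestone captures one mole of sulfur dioxide; real scrubbers run with excess limestone and capture less than 100 percent of the sulfur:

```python
# Back-of-envelope limestone requirement for flue-gas desulfurization,
# assuming the idealized one-to-one reaction CaCO3 + SO2 -> CaSO3 + CO2.
# Real scrubbers need more limestone than this ideal figure.

M_CACO3 = 100.09  # g/mol, calcium carbonate (limestone)
M_SO2 = 64.07     # g/mol, sulfur dioxide

def limestone_needed(so2_tons):
    """Tons of pure limestone to capture the given tons of SO2, ideally."""
    return so2_tons * M_CACO3 / M_SO2

print(round(limestone_needed(1.0), 2))  # about 1.56 tons per ton of SO2
```

The mass ratio follows directly from the molar masses: roughly one and a half tons of limestone per ton of sulfur dioxide, before any allowance for impurities or incomplete capture.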
A National Legislation In the United States, legislative efforts to control sulfur dioxide and nitrogen oxides began with passage of the Clean Air Act of 1970. This act established emissions standards for pollutants from automobiles and industry. In 1990 Congress approved a set of amendments to the act that impose stricter limits on pollution emissions, particularly pollutants that cause acid rain. These amendments aim to cut the national output of sulfur dioxide from 23.5 million tons to 16 million tons by the year 2010. Although no national target is set for nitrogen oxides, the amendments require that power plants, which emit about one-third of all nitrogen oxides released to the atmosphere, reduce their emissions from 7.5 million tons to 5 million tons by 2010. These rules were applied first to selected large power plants in Eastern and Midwestern states. In the year 2000, smaller, cleaner power plants across the country came under the law. These 1990 amendments include a novel provision for sulfur dioxide control. Each year the government gives companies permits to release a specified number of tons of sulfur dioxide. Polluters are allowed to buy and sell their emissions permits. For instance, a company can choose to reduce its sulfur dioxide emissions more than the law requires and sell its unused pollution emission allowance to another company that is further from meeting emission goals; the buyer may then pollute above the limit for a certain time. Unused pollution rights can also be "banked" and kept for later use. It is hoped that this flexible market system will clean up emissions more quickly and cheaply than a set of rigid rules. Legislation enacted in Canada restricts the annual amount of sulfur dioxide emissions to 2.3 million tons in all of Canada’s seven easternmost provinces, where acid rain causes the most damage. A national cap for sulfur dioxide emissions has been set at 3.2 million tons per year. 
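The allowance-trading provision described above can be illustrated with a toy example; the plant names and tonnages below are invented for illustration and are not taken from the 1990 amendments:

```python
# Illustrative sketch of tradable sulfur dioxide allowances, using
# hypothetical plants and tonnages (not real data). A plant that
# over-complies can sell spare allowances; an under-complier buys them;
# leftover allowance can be "banked" for later use.

allowances = {"plant_a": 10_000, "plant_b": 10_000}  # tons of SO2 permitted
emissions = {"plant_a": 7_000, "plant_b": 12_000}    # tons of SO2 emitted

spare = allowances["plant_a"] - emissions["plant_a"]      # A is 3,000 under
shortfall = emissions["plant_b"] - allowances["plant_b"]  # B is 2,000 over

traded = min(spare, shortfall)   # A sells 2,000 tons of allowance to B
allowances["plant_a"] -= traded
allowances["plant_b"] += traded

banked = allowances["plant_a"] - emissions["plant_a"]  # A banks the remainder
print(traded, banked)  # 2000 1000
```

Total emissions stay within the total cap; the market simply decides which plant does the reducing, which is the flexibility the amendments aim for.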
Legislation is currently being developed to impose stricter emission limits by 2010. Norwegian law sets the goal of reducing sulfur dioxide emissions to 76 percent of 1980 levels and nitrogen oxide emissions to 70 percent of 1986 levels. To encourage cleanup, Norway collects a hefty tax from industries that emit acid pollutants. In some cases these taxes make it more expensive to emit acid pollutants than to reduce emissions.

B International Agreements

Acid rain typically crosses national borders, making pollution control an international issue. Canada receives much of its acid pollution from the United States—by some estimates as much as 50 percent. Norway and Sweden receive acid pollutants from Britain, Germany, Poland, and Russia. The majority of acid pollution in Japan comes from China. Debates about responsibilities and cleanup costs for acid pollutants led to international cooperation. In 1988, as part of the Long-Range Transboundary Air Pollution Agreement sponsored by the United Nations, the United States and 24 other nations ratified a protocol promising to hold yearly nitrogen oxide emissions at or below 1987 levels. In 1991 the United States and Canada signed an Air Quality Agreement setting national limits on annual sulfur dioxide emissions from power plants and factories. In 1994 in Oslo, Norway, 12 European nations agreed to reduce sulfur dioxide emissions by as much as 87 percent by 2010. Legislative actions to prevent acid rain have produced results. The targets established in laws and treaties are being met, usually ahead of schedule. Sulfur emissions in Europe decreased by 40 percent from 1980 to 1994. In Norway sulfur dioxide emissions fell by 75 percent during the same period. Since 1980 annual sulfur dioxide emissions in the United States have dropped from 26 million tons to 18.3 million tons. Canada reports sulfur dioxide emissions have been reduced to 2.6 million tons, 18 percent below the proposed limit of 3.2 million tons.
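The reduction figures above can be checked with simple percent-change arithmetic; the helper function is ours, and the tonnages are the ones quoted above:

```python
# Percent-change checks for the emissions figures quoted in the text.

def percent_drop(before, after):
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

# U.S. sulfur dioxide: 26 million tons down to 18.3 million tons since 1980.
print(round(percent_drop(26.0, 18.3), 1))  # roughly a 30 percent drop

# Canada: 2.6 million tons against the proposed 3.2-million-ton cap.
# Working in tenths of a million tons keeps the arithmetic exact:
print((32 - 26) / 32 * 100)  # 18.75 percent below the cap
```

The Canadian figure comes out at 18.75 percent, which the text rounds down to 18 percent.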
Monitoring stations in several nations report that precipitation is actually becoming less acidic. In Europe, lakes and streams are now growing less acidic. However, this does not seem to be the case in the United States and Canada. The reasons are not completely understood, but apparently controls reducing nitrogen oxide emissions only began recently, and their effects have yet to be felt. In addition, soils in some areas have absorbed so much acid that they contain no more neutralizing alkaline chemicals. The weathering of rock will gradually replace the missing alkaline chemicals, but scientists fear that improvement will be very slow unless pollution controls are made even stricter.

(b) Pesticides

(c) Endocrine System

Q.2 Differentiate between any five of the following pairs:

(a) Rotation and revolution of the earth

As Earth revolves around the Sun, it rotates, or spins, on its axis, an imaginary line that runs between the North and South poles. The period of one complete rotation is defined as a day and takes 23 hr 56 min 4.1 sec. The period of one revolution around the Sun is defined as a year, or 365.2422 solar days, or 365 days 5 hr 48 min 46 sec. Earth also moves along with the Milky Way Galaxy as the Galaxy rotates and moves through space. It takes more than 200 million years for the stars in the Milky Way to complete one revolution around the Galaxy’s center. Earth’s axis of rotation is inclined (tilted) 23.5° relative to its plane of revolution around the Sun. This inclination of the axis creates the seasons and causes the height of the Sun in the sky at noon to increase and decrease as the seasons change. The Northern Hemisphere receives the most energy from the Sun when it is tilted toward the Sun. This orientation corresponds to summer in the Northern Hemisphere and winter in the Southern Hemisphere.
The Southern Hemisphere receives maximum energy when it is tilted toward the Sun, corresponding to summer in the Southern Hemisphere and winter in the Northern Hemisphere. Fall and spring occur in between these orientations.

(b) Monocot and dicot plants

Dicots

Dicots, popular name for dicotyledons, one of the two large groups of flowering plants. A number of floral and vegetative features of dicots distinguish them from the more recently evolved monocotyledons (see Monocots), the other class of flowering plants. In dicots the embryo sprouts two cotyledons, which are seed leaves that usually do not become foliage leaves but serve to provide food for the new seedling. Flower parts of dicots are in fours or fives, and the leaves usually have veins arranged in a reticulate (netlike) pattern. The vascular tissue in the stems is arranged in a ring, and true secondary growth takes place, causing stems and roots to increase in diameter. Tree forms are common. Certain woody dicot groups (see Magnolia) exhibit characteristics such as large flowers with many unfused parts that are thought to be similar to those of early flowering plants. About 170,000 species of dicots are known, including buttercups, maples, roses, and violets. Scientific classification: Dicots make up the class Magnoliopsida, in the phylum Magnoliophyta.

Monocots

Monocots, more properly monocotyledons, one of two classes of flowering plants (see Angiosperm). They are mostly herbaceous and include such familiar plants as iris, lily, orchid, grass, and palm. Several floral and vegetative features distinguish them from dicots, the other angiosperm class. These features include flower parts in threes; one cotyledon (seed leaf); leaf veins that are usually parallel; vascular tissue in scattered bundles in the stem; and no true secondary growth. Monocots are thought to have evolved from some early aquatic group of dicots through reduction of various flower and vegetative parts.
Among living monocot groups, one order (see Water Plantain) contains the most primitive monocots. About 50,000 species of monocots are known—about one-third the number of dicot species. Scientific classification: Monocots make up the class Liliopsida of the phylum Magnoliophyta. The most primitive living monocots belong to the order Alismatales.

(d) Umbra and penumbra

Penumbra
1. Partial shadow: a partial outer shadow that is lighter than the darker inner shadow (umbra), e.g. the area between complete darkness and complete light in an eclipse.
2. Indeterminate area: an indistinct area, especially a state in which something is unclear or uncertain.
3. Periphery: the outer region or periphery of something.
4. Astronomy, edge of sunspot: a grayish area surrounding the dark center of a sunspot.

Umbra
1. Physics, complete shadow: an area of complete shadow caused by light from all points of a source being prevented from reaching the area, usually by an opaque object.
2. Astronomy, darkest part of an eclipse shadow: the darkest portion of the shadow cast by an astronomical object during an eclipse, especially that cast on Earth during a solar eclipse.
3. Astronomy, dark part of sunspot: the inner, darker area of a sunspot.

The earth, lit by the sun, casts a long, conical shadow in space. At any point within that cone the light of the sun is wholly obscured. Surrounding the shadow cone, also called the umbra, is an area of partial shadow called the penumbra. The approximate mean length of the umbra is 1,379,200 km (857,000 mi); at a distance of 384,600 km (239,000 mi), the mean distance of the moon from the earth, it has a diameter of about 9,170 km (about 5,700 mi).

(e) Nucleus and nucleolus

Nucleus (atomic structure)

Nucleus (atomic structure), the positively charged central mass of an atom about which the orbital electrons revolve. The nucleus is composed of nucleons, that is, protons and neutrons, and its mass accounts for nearly the entire mass of the atom.
Nucleolus Nucleolus, structure within the nucleus of cells, involved in the manufacture of ribosomes (cell structures where protein synthesis occurs). Each cell nucleus typically contains one or more nucleoli, which appear as irregularly shaped fibers and granules embedded in the nucleus. There is no membrane separating the nucleolus from the rest of the nucleus. The manufacture of ribosomes requires that the components of ribosomes—ribonucleic acid (RNA) and protein—be synthesized and brought together for assembly. The ribosomes of eukaryotic cells contain four strands of RNA and from 70 to 80 proteins. Using genes that reside on regions of chromosomes located in the nucleolus, three of the four ribosomal RNA strands are synthesized in the center of the nucleolus. The fourth RNA strand is synthesized outside of the nucleolus, using genes at a different location. The fourth strand is then transported into the nucleolus to participate in ribosome assembly. The genetic information for ribosomal proteins, found in the nucleus, is copied, or transcribed, into special chemical messengers called messenger RNA (mRNA), a different type of RNA than ribosomal RNA. The mRNA travels out of the nucleus into the cell’s cytoplasm where its information is transferred, or translated, into the ribosomal proteins. The newly created proteins enter the nucleolus and bind with the four ribosomal RNA strands to create two ribosomal structures: the large and small subunits. These two subunits leave the nucleus and enter the cytoplasm. When protein synthesis is initiated, the two subunits merge to form the completed ribosome. The nucleolus creates the two subunits for a single ribosome in about one hour. Thousands of subunits are manufactured by each nucleolus simultaneously, however, since several hundred to several thousand copies of the ribosomal RNA genes are present in the nucleolus. 
Before a cell divides, the nucleolus assembles about ten million ribosomal subunits, necessary for the large-scale protein production that occurs in cell division.

(f) Heavy water and Hard water

Q.3 Draw a labeled diagram of the human eye, indicating all essential parts, and discuss its working.

I INTRODUCTION

Eye (anatomy), light-sensitive organ of vision in animals. The eyes of various species vary from simple structures that are capable only of differentiating between light and dark to complex organs, such as those of humans and other mammals, that can distinguish minute variations of shape, color, brightness, and distance. The actual process of seeing is performed by the brain rather than by the eye. The function of the eye is to translate the electromagnetic vibrations of light into patterns of nerve impulses that are transmitted to the brain.

II THE HUMAN EYE

The entire eye, often called the eyeball, is a spherical structure approximately 2.5 cm (about 1 in) in diameter with a pronounced bulge on its forward surface. The outer part of the eye is composed of three layers of tissue. The outside layer is the sclera, a protective coating. It covers about five-sixths of the surface of the eye. At the front of the eyeball, it is continuous with the bulging, transparent cornea. The middle layer of the coating of the eye is the choroid, a vascular layer lining the posterior three-fifths of the eyeball. The choroid is continuous with the ciliary body and with the iris, which lies at the front of the eye. The innermost layer is the light-sensitive retina. The cornea is a tough, five-layered membrane through which light is admitted to the interior of the eye. Behind the cornea is a chamber filled with clear, watery fluid, the aqueous humor, which separates the cornea from the crystalline lens. The lens itself is a flattened sphere constructed of a large number of transparent fibers arranged in layers.
It is connected by ligaments to a ringlike muscle, called the ciliary muscle, which surrounds it. The ciliary muscle and its surrounding tissues form the ciliary body. This muscle, by flattening the lens or making it more nearly spherical, changes its focal length. The pigmented iris hangs behind the cornea in front of the lens, and has a circular opening in its center. The size of its opening, the pupil, is controlled by a muscle around its edge. This muscle contracts or relaxes, making the pupil larger or smaller, to control the amount of light admitted to the eye. Behind the lens the main body of the eye is filled with a transparent, jellylike substance, the vitreous humor, enclosed in a thin sac, the hyaloid membrane. The pressure of the vitreous humor keeps the eyeball distended. The retina is a complex layer, composed largely of nerve cells. The light-sensitive receptor cells lie on the outer surface of the retina in front of a pigmented tissue layer. These cells take the form of rods or cones packed closely together like matches in a box. Directly behind the pupil is a small yellow-pigmented spot, the macula lutea, in the center of which is the fovea centralis, the area of greatest visual acuity of the eye. At the center of the fovea, the sensory layer is composed entirely of cone-shaped cells. Around the fovea both rod-shaped and cone-shaped cells are present, with the cone-shaped cells becoming fewer toward the periphery of the sensitive area. At the outer edges are only rod-shaped cells. Where the optic nerve enters the eyeball, below and slightly to the inner side of the fovea, a small round area of the retina exists that has no light-sensitive cells. This optic disk forms the blind spot of the eye. III FUNCTIONING OF THE EYE In general the eyes of all animals resemble simple cameras in that the lens of the eye forms an inverted image of objects in front of it on the sensitive retina, which corresponds to the film in a camera. 
Focusing the eye, as mentioned above, is accomplished by a flattening or thickening (rounding) of the lens. The process is known as accommodation. In the normal eye accommodation is not necessary for seeing distant objects. The lens, when flattened by the suspensory ligament, brings such objects to focus on the retina. For nearer objects the lens is increasingly rounded by ciliary muscle contraction, which relaxes the suspensory ligament. A young child can see clearly at a distance as close as 6.3 cm (2.5 in), but with increasing age the lens gradually hardens, so that the limit of close seeing is approximately 15 cm (about 6 in) at the age of 30 and 40 cm (16 in) at the age of 50. In the later years of life most people lose the ability to accommodate their eyes to distances within reading or close working range. This condition, known as presbyopia, can be corrected by the use of special convex lenses for the near range. Structural differences in the size of the eye cause the defects of hyperopia, or farsightedness, and myopia, or nearsightedness. See Eyeglasses; Vision. As mentioned above, the eye sees with greatest clarity only in the region of the fovea, owing to the neural structure of the retina. The cone-shaped cells of the retina are individually connected to other nerve fibers, so that stimuli to each individual cell are reproduced and, as a result, fine details can be distinguished. The rod-shaped cells, on the other hand, are connected in groups so that they respond to stimuli over a general area. The rods, therefore, respond to small total light stimuli, but do not have the ability to separate small details of the visual image. The result of these differences in structure is that the visual field of the eye is composed of a small central area of great sharpness surrounded by an area of lesser sharpness. In the latter area, however, the sensitivity of the eye to light is great.
As a result, dim objects can be seen at night on the peripheral part of the retina when they are invisible to the central part. The mechanism of seeing at night involves the sensitization of the rod cells by means of a pigment, called visual purple or rhodopsin, that is formed within the cells. Vitamin A is necessary for the production of visual purple; a deficiency of this vitamin leads to night blindness. Visual purple is bleached by the action of light and must be reformed by the rod cells under conditions of darkness. Hence a person who steps from sunlight into a darkened room cannot see until the pigment begins to form. When the pigment has formed and the eyes are sensitive to low levels of illumination, the eyes are said to be dark-adapted. A brownish pigment present in the outer layer of the retina serves to protect the cone cells of the retina from overexposure to light. If bright light strikes the retina, granules of this brown pigment migrate to the spaces around the cone cells, sheathing and screening them from the light. This action, called light adaptation, has the opposite effect to that of dark adaptation. Subjectively, a person is not conscious that the visual field consists of a central zone of sharpness surrounded by an area of increasing fuzziness. The reason is that the eyes are constantly moving, bringing first one part of the visual field and then another to the foveal region as the attention is shifted from one object to another. These motions are accomplished by six muscles that move the eyeball upward, downward, to the left, to the right, and obliquely. The motions of the eye muscles are extremely precise; the estimation has been made that the eyes can be moved to focus on no less than 100,000 distinct points in the visual field. The muscles of the two eyes, working together, also serve the important function of converging the eyes on any point being observed, so that the images of the two eyes coincide. 
When convergence is nonexistent or faulty, double vision results. The movement of the eyes and fusion of the images also play a part in the visual estimation of size and distance.

IV PROTECTIVE STRUCTURES

Several structures, not parts of the eyeball, contribute to the protection of the eye. The most important of these are the eyelids, two folds of skin and tissue, upper and lower, that can be closed by means of muscles to form a protective covering over the eyeball against excessive light and mechanical injury. The eyelashes, a fringe of short hairs growing on the edge of either eyelid, act as a screen to keep dust particles and insects out of the eyes when the eyelids are partly closed. Inside the eyelids is a thin protective membrane, the conjunctiva, which doubles over to cover the visible sclera. Each eye also has a tear gland, or lacrimal organ, situated at the outside corner of the eye. The salty secretion of these glands lubricates the forward part of the eyeball when the eyelids are closed and flushes away any small dust particles or other foreign matter on the surface of the eye. Normally the eyelids of human eyes close by reflex action about every six seconds, but if dust reaches the surface of the eye and is not washed away, the eyelids blink more often and more tears are produced. On the edges of the eyelids are a number of small glands, the Meibomian glands, which produce a fatty secretion that lubricates the eyelids themselves and the eyelashes. The eyebrows, located above each eye, also have a protective function in soaking up or deflecting perspiration or rain and preventing the moisture from running into the eyes. The hollow socket in the skull in which the eye is set is called the orbit. The bony edges of the orbit, the frontal bone, and the cheekbone protect the eye from mechanical injury by blows or collisions.
V COMPARATIVE ANATOMY

The simplest animal eyes occur in the cnidarians and ctenophores, phyla comprising the jellyfish and somewhat similar primitive animals. These eyes, known as pigment eyes, consist of groups of pigment cells associated with sensory cells and often covered with a thickened layer of cuticle that forms a kind of lens. Similar eyes, usually having a somewhat more complex structure, occur in worms, insects, and mollusks. Two kinds of image-forming eyes are found in the animal world, single and compound eyes. The single eyes are essentially similar to the human eye, though varying from group to group in details of structure. The lowest species to develop such eyes are some of the large jellyfish. Compound eyes, confined to the arthropods (see Arthropod), consist of a faceted lens, each facet of which forms a separate image on a retinal cell, creating a mosaic field. In some arthropods the structure is more sophisticated, forming a combined image. The eyes of other vertebrates are essentially similar to human eyes, although important modifications may exist. The eyes of such nocturnal animals as cats, owls, and bats are provided only with rod cells, and the cells are both more sensitive and more numerous than in humans. The eye of a dolphin has 7,000 times as many rod cells as a human eye, enabling it to see in deep water. The eyes of most fish have a flat cornea and a globular lens and are hence particularly adapted for seeing close objects. Birds’ eyes are elongated from front to back, permitting larger images of distant objects to be formed on the retina.

VI EYE DISEASES

Eye disorders may be classified according to the part of the eye in which the disorders occur. The most common disease of the eyelids is hordeolum, known commonly as a sty, which is an infection of the follicles of the eyelashes, usually caused by staphylococci.
Internal sties that occur inside the eyelid and not on its edge are similar infections of the lubricating Meibomian glands. Abscesses of the eyelids are sometimes the result of penetrating wounds. Several congenital defects of the eyelids occasionally occur, including coloboma, or cleft eyelid, and ptosis, a drooping of the upper lid. Among acquired defects are symblepharon, an adhesion of the inner surface of the eyelid to the eyeball, which is most frequently the result of burns. Entropion, the turning of the eyelid inward toward the cornea, and ectropion, the turning of the eyelid outward, can be caused by scars or by spasmodic muscular contractions resulting from chronic irritation. The eyelids also are subject to several diseases of the skin such as eczema and acne, and to both benign and malignant tumors. Another eye disease is infection of the conjunctiva, the mucous membranes covering the inside of the eyelids and the outside of the eyeball. See Conjunctivitis; Trachoma. Disorders of the cornea, which may result in a loss of transparency and impaired sight, are usually the result of injury but may also occur as a secondary result of disease; for example, edema, or swelling, of the cornea sometimes accompanies glaucoma. The choroid, or middle coat of the eyeball, contains most of the blood vessels of the eye; it is often the site of secondary infections from toxic conditions and bacterial infections such as tuberculosis and syphilis. Cancer may develop in the choroidal tissues or may be carried to the eye from malignancies elsewhere in the body. The light-sensitive retina, which lies just beneath the choroid, also is subject to the same type of infections. The cause of retrolental fibroplasia, however—a disease of premature infants that causes retinal detachment and partial blindness—is unknown. Retinal detachment may also follow cataract surgery. Laser beams are sometimes used to weld detached retinas back onto the eye. 
Another retinal condition, called macular degeneration, affects the central retina. Macular degeneration is a frequent cause of loss of vision in older persons. Juvenile forms of this condition also exist. The optic nerve contains the retinal nerve fibers, which carry visual impulses to the brain. The retinal circulation is carried by the central artery and vein, which lie in the optic nerve. The sheath of the optic nerve communicates with the cerebral lymph spaces. Inflammation of that part of the optic nerve situated within the eye is known as optic neuritis, or papillitis; when inflammation occurs in the part of the optic nerve behind the eye, the disease is called retrobulbar neuritis. When the pressure in the skull is elevated (increased intracranial pressure), as in brain tumors, edema and swelling of the optic disk occur where the nerve enters the eyeball, a condition known as papilledema, or choked disk.

VII EYE BANK

Eye banks are organizations that distribute corneal tissue taken from deceased persons for eye grafts. Blindness caused by cloudiness or scarring of the cornea can sometimes be cured by surgical removal of the affected portion of the corneal tissue. With present techniques, such tissue can be kept alive for only 48 hours, but current experiments in preserving human corneas by freezing give hope of extending its useful life for months. Eye banks also preserve and distribute vitreous humor, the liquid within the larger chamber of the eye, for use in treatment of detached retinas. The first eye bank was opened in New York City in 1945. The Eye-Bank Association of America, in Rochester, New York, acts as a clearinghouse for information.

Q.5 What is the solar system? Indicate the position of the planet Pluto in it. State the characteristics that classify it as: (5,1,4)
a. a planet
b.
an asteroid I INTRODUCTION Solar System, the Sun and everything that orbits the Sun, including the nine planets and their satellites; the asteroids and comets; and interplanetary dust and gas. The term may also refer to a group of celestial bodies orbiting another star (see Extrasolar Planets). In this article, solar system refers to the system that includes Earth and the Sun. The dimensions of the solar system are specified in terms of the mean distance from Earth to the Sun, called the astronomical unit (AU). One AU is 150 million km (about 93 million mi). The most distant known planet, Pluto, orbits about 39 AU from the Sun. Estimates for the boundary where the Sun’s magnetic field ends and interstellar space begins (called the heliopause) range from 86 to 100 AU. The most distant known planetoid orbiting the Sun is Sedna, whose discovery was reported in March 2004. A planetoid is an object that is too small to be a planet. At the farthest point in its orbit, Sedna is about 900 AU from the Sun. Comets known as long-period comets, however, achieve the greatest distance from the Sun; they have highly eccentric orbits ranging out to 50,000 AU or more. The solar system was the only planetary system known to exist around a star similar to the Sun until 1995, when astronomers discovered a planet about 0.6 times the mass of Jupiter orbiting the star 51 Pegasi. Jupiter is the most massive planet in our solar system. Soon after, astronomers found a planet about 8.1 times the mass of Jupiter orbiting the star 70 Virginis, and a planet about 3.5 times the mass of Jupiter orbiting the star 47 Ursae Majoris. Since then, astronomers have found planets and disks of dust in the process of forming planets around many other stars. Most astronomers think it likely that solar systems of some sort are numerous throughout the universe. See Astronomy; Galaxy; Star. II THE SUN AND THE SOLAR WIND The Sun is a typical star of intermediate size and luminosity. 
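The astronomical-unit distances quoted above reduce to kilometres with a single multiplication; a minimal sketch, using the article's rounded value of 1 AU = 150 million km:

```python
# Convert the solar-system distances quoted above from AU to km,
# using the article's rounded value of 1 AU = 150 million km.
AU_KM = 150e6  # kilometres per astronomical unit

distances_au = {
    "Pluto (mean orbit)": 39,
    "heliopause (upper estimate)": 100,
    "Sedna (farthest point)": 900,
    "long-period comets (outer reach)": 50_000,
}

for body, au in distances_au.items():
    print(f"{body}: {au} AU is about {au * AU_KM:.3g} km")
```

Pluto's 39 AU, for instance, works out to roughly 5,850 million km, in line with the mean distance quoted for Pluto later in this answer.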
Sunlight and other radiation are produced by the conversion of hydrogen into helium in the Sun’s hot, dense interior (see Nuclear Energy). Although this nuclear fusion is transforming 600 million metric tons of hydrogen each second, the Sun is so massive (2 × 10^30 kg, or 4.4 × 10^30 lb) that it can continue to shine at its present brightness for 6 billion years. This stability has allowed life to develop and survive on Earth. For all the Sun’s steadiness, it is an extremely active star. On its surface, dark sunspots bounded by intense magnetic fields come and go in 11-year cycles, and sudden bursts of charged particles from solar flares can cause auroras and disturb radio signals on Earth. A continuous stream of protons, electrons, and ions also leaves the Sun and moves out through the solar system. This solar wind shapes the ion tails of comets and leaves its traces in the lunar soil, samples of which were brought back from the Moon’s surface by piloted United States Apollo spacecraft. The Sun’s activity also influences the heliopause, a region of space that astronomers believe marks the boundary between the solar system and interstellar space. The heliopause is a dynamic region that expands and contracts due to the constantly changing speed and pressure of the solar wind. In November 2003 a team of astronomers reported that the Voyager 1 spacecraft appeared to have encountered the outskirts of the heliopause at about 86 AU from the Sun. They based their report on data that indicated the solar wind had slowed from 1.1 million km/h (700,000 mph) to 160,000 km/h (100,000 mph). This finding is consistent with the theory that when the solar wind meets interstellar space at a turbulent zone known as the termination shock boundary, it will slow abruptly. However, another team of astronomers disputed the finding, saying that the spacecraft had neared but had not yet reached the heliopause. III THE MAJOR PLANETS Nine major planets are currently known. 
They are commonly divided into two groups: the inner planets (Mercury, Venus, Earth, and Mars) and the outer planets (Jupiter, Saturn, Uranus, and Neptune). The inner planets are small and are composed primarily of rock and iron. The outer planets are much larger and consist mainly of hydrogen, helium, and ice. Pluto does not belong to either group, and there is an ongoing debate as to whether Pluto should be categorized as a major planet. Mercury is surprisingly dense, apparently because it has an unusually large iron core. With only a transient atmosphere, Mercury has a surface that still bears the record of bombardment by asteroidal bodies early in its history. Venus has a carbon dioxide atmosphere 90 times thicker than that of Earth, causing an efficient greenhouse effect by which the Venusian atmosphere is heated. The resulting surface temperature is the hottest of any planet, about 477°C (about 890°F). Earth is the only planet known to have abundant liquid water and life. However, in 2004 astronomers with the National Aeronautics and Space Administration’s Mars Exploration Rover mission confirmed that Mars once had liquid water on its surface. Scientists had previously concluded that liquid water once existed on Mars due to the numerous surface features on the planet that resemble water erosion found on Earth. Mars’s carbon dioxide atmosphere is now so thin that the planet is dry and cold, with polar caps of frozen water and solid carbon dioxide, or dry ice. However, small jets of subcrustal water may still erupt on the surface in some places. Jupiter is the largest of the planets. Its hydrogen and helium atmosphere contains pastel-colored clouds, and its immense magnetosphere, rings, and satellites make it a planetary system unto itself. One of Jupiter’s largest moons, Io, has volcanoes that produce the hottest surface temperatures in the solar system. 
At least four of Jupiter’s moons have atmospheres, and at least three show evidence that they contain liquid or partially frozen water. Jupiter’s moon Europa may have a global ocean of liquid water beneath its icy crust. Saturn rivals Jupiter, with a much more intricate ring structure and a similar number of satellites. One of Saturn’s moons, Titan, has an atmosphere thicker than that of any other satellite in the solar system. Uranus and Neptune are deficient in hydrogen compared with Jupiter and Saturn; Uranus, also ringed, has the distinction of rotating at 98° to the plane of its orbit. Pluto seems similar to the larger, icy satellites of Jupiter or Saturn. Pluto is so distant from the Sun and so cold that methane freezes on its surface. See also Planetary Science. IV OTHER ORBITING BODIES The asteroids are small rocky bodies that move in orbits primarily between the orbits of Mars and Jupiter. Numbering in the thousands, asteroids range in size from Ceres, which has a diameter of 1,003 km (623 mi), to microscopic grains. Some asteroids are perturbed, or pulled by forces other than their attraction to the Sun, into eccentric orbits that can bring them closer to the Sun. If the orbits of such bodies intersect that of Earth, they are called meteoroids. When they appear in the night sky as streaks of light, they are known as meteors, and recovered fragments are termed meteorites. Laboratory studies of meteorites have revealed much information about primitive conditions in our solar system. The surfaces of Mercury, Mars, and several satellites of the planets (including Earth’s Moon) show the effects of an intense bombardment by asteroidal objects early in the history of the solar system. On Earth that record has eroded away, except for a few recently found impact craters. Some meteors and interplanetary dust may also come from comets, which are basically aggregates of dust and frozen gases typically 5 to 10 km (about 3 to 6 mi) in diameter. 
Comets orbit the Sun at distances so great that they can be perturbed by stars into orbits that bring them into the inner solar system. As comets approach the Sun, they release their dust and gases to form a spectacular coma and tail. Under the influence of Jupiter’s strong gravitational field, comets can sometimes adopt much smaller orbits. The most famous of these is Halley’s Comet, which returns to the inner solar system about every 75 years. Its most recent return was in 1986. In July 1994 fragments of Comet Shoemaker-Levy 9 bombarded Jupiter’s dense atmosphere at speeds of about 210,000 km/h (130,000 mph). Upon impact, the tremendous kinetic energy of the fragments was released through massive explosions, some resulting in fireballs larger than Earth. Comets circle the Sun in two main groups, within the Kuiper Belt or within the Oort cloud. The Kuiper Belt is a ring of debris that orbits the Sun beyond the planet Neptune. Many of the comets with periods of less than 500 years come from the Kuiper Belt. In 2002 astronomers discovered a planetoid in the Kuiper Belt, and they named it Quaoar. The Oort cloud is a hypothetical region far beyond the Kuiper Belt, extending perhaps halfway to the nearest stars. Astronomers believe that the existence of the Oort cloud, named for Dutch astronomer Jan Oort, explains why some comets have very long periods. A chunk of dust and ice may stay in the Oort cloud for thousands of years. Nearby stars sometimes pass close enough to the solar system to push an object in the Oort cloud into an orbit that takes it close to the Sun. The first detection of the long-hypothesized Oort cloud came in March 2004 when astronomers reported the discovery of a planetoid about 1,700 km (about 1,000 mi) in diameter. They named it Sedna, after a sea goddess in Inuit mythology. Sedna was found about 13 billion km (about 8 billion mi) from the Sun. 
At its farthest point from the Sun, Sedna is the most distant object in the solar system and is about 130 billion km (about 84 billion mi) from the Sun. Many of the objects that do not fall into the asteroid belts, the Kuiper Belt, or the Oort cloud may be comets that will never make it back to the Sun. The surfaces of the icy satellites of the outer planets are scarred by impacts from such bodies. The asteroid-like object Chiron, with an orbit between Saturn and Uranus, may itself be an extremely large inactive comet. Similarly, some of the asteroids that cross the path of Earth’s orbit may be the rocky remains of burned-out comets. Chiron and similar objects called the Centaurs probably escaped from the Kuiper Belt and were drawn into their irregular orbits by the gravitational pull of the giant outer planets, Jupiter, Saturn, Neptune, and Uranus. The Sun was also found to be encircled by rings of interplanetary dust. One of them, between Jupiter and Mars, has long been known as the cause of zodiacal light, a faint glow that appears in the east before dawn and in the west after dusk. Another ring, lying only two solar widths away from the Sun, was discovered in 1983. V MOVEMENTS OF THE PLANETS AND THEIR SATELLITES If one could look down on the solar system from far above the North Pole of Earth, the planets would appear to move around the Sun in a counterclockwise direction. All of the planets except Venus and Uranus rotate on their axes in this same direction. The entire system is remarkably flat—only Mercury and Pluto have obviously inclined orbits. Pluto’s orbit is so elliptical that it is sometimes closer than Neptune to the Sun. The satellite systems mimic the behavior of their parent planets and move in a counterclockwise direction, but many exceptions are found. 
Jupiter, Saturn, and Neptune each have at least one satellite that moves around the planet in a retrograde orbit (clockwise instead of counterclockwise), and several satellite orbits are highly elliptical. Jupiter, moreover, has trapped two clusters of asteroids (the so-called Trojan asteroids) leading and following the planet by 60° in its orbit around the Sun. (Some satellites of Saturn have done the same with smaller bodies.) The comets exhibit a roughly spherical distribution of orbits around the Sun. Within this maze of motions, some remarkable patterns exist: Mercury rotates on its axis three times for every two revolutions about the Sun; no asteroids exist with periods (intervals of time needed to complete one revolution) equal to 1/2, 1/3, …, 1/n (where n is an integer) of the period of Jupiter; and the three inner Galilean satellites of Jupiter have periods in the ratio 4:2:1. These and other examples demonstrate the subtle balance of forces that is established in a gravitational system composed of many bodies. VI THEORIES OF ORIGIN Despite their differences, the members of the solar system probably form a common family. They seem to have originated at the same time; few indications exist of bodies joining the solar system, captured later from other stars or interstellar space. Early attempts to explain the origin of this system include the nebular hypothesis of the German philosopher Immanuel Kant and the French astronomer and mathematician Pierre Simon de Laplace, according to which a cloud of gas broke into rings that condensed to form planets. Doubts about the stability of such rings led some scientists to consider various catastrophic hypotheses, such as a close encounter of the Sun with another star. Such encounters are extremely rare, and the hot, tidally disrupted gases would dissipate rather than condense to form planets. Current theories connect the formation of the solar system with the formation of the Sun itself, about 4.7 billion years ago. 
The fragmentation and gravitational collapse of an interstellar cloud of gas and dust, triggered perhaps by nearby supernova explosions, may have led to the formation of a primordial solar nebula. The Sun would then form in the densest, central region. It is so hot close to the Sun that even silicates, which are relatively dense, have difficulty forming there. This phenomenon may account for the presence near the Sun of a planet such as Mercury, having a relatively small silicate crust and a larger than usual, dense iron core. (It is easier for iron dust and vapor to coalesce near the central region of a solar nebula than it is for lighter silicates to do so.) At larger distances from the center of the solar nebula, gases condense into solids such as are found today from Jupiter outward. Evidence of a possible preformation supernova explosion appears as traces of anomalous isotopes in tiny inclusions in some meteorites. This association of planet formation with star formation suggests that billions of other stars in our galaxy may also have planets. The high frequency of binary and multiple stars, as well as the large satellite systems around Jupiter and Saturn, attest to the tendency of collapsing gas clouds to fragment into multibody systems. See separate articles for most of the celestial bodies mentioned in this article. See also Exobiology. Pluto (planet) I INTRODUCTION Pluto (planet), ninth planet from the Sun, smallest and outermost known planet of the solar system. Pluto revolves about the Sun once in 247.9 Earth years at an average distance of 5,880 million km (3,650 million mi). The planet’s orbit is so eccentric that at certain points along its path Pluto is slightly closer to the Sun than is Neptune. Pluto is about 2,360 km (1,475 mi) in diameter, about two-thirds the size of Earth's moon. Discovered in 1930, Pluto is the most recent planet in the solar system to be detected. The planet was named after the god of the underworld in Roman mythology. 
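The distance and period just quoted for Pluto are tied together by Kepler's third law, T² = a³ when T is in years and a is in astronomical units. A quick check, assuming the standard value of 149.6 million km per AU (a value not given in the text):

```python
# Check Pluto's quoted orbit against Kepler's third law:
# T^2 = a^3 with T in years and a in astronomical units.
AU_KM = 149.6e6              # km per AU (standard value, assumed here)
a_au = 5880e6 / AU_KM        # quoted mean distance of 5,880 million km -> ~39.3 AU
period_years = a_au ** 1.5   # T = a^(3/2)
print(f"{period_years:.1f} years")
```

The result, about 246 years, agrees with the quoted period of 247.9 Earth years to within about one percent.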
II OBSERVATION FROM EARTH Pluto is far away from Earth, and no spacecraft has yet been sent to the planet. All the information astronomers have on Pluto comes from observation through large telescopes. Pluto was discovered as the result of a telescopic search inaugurated in 1905 by American astronomer Percival Lowell, who postulated the existence of a distant planet beyond Neptune as the cause of slight irregularities in the orbits of Uranus and Neptune. Continued after Lowell’s death by members of the Lowell Observatory staff, the search ended successfully in 1930, when American astronomer Clyde William Tombaugh found Pluto. For many years very little was known about the planet, but in 1978 astronomers discovered a relatively large moon orbiting Pluto at a distance of only about 19,600 km (about 12,180 mi) and named it Charon. The orbits of Pluto and Charon caused them to pass repeatedly in front of one another as seen from Earth between 1985 and 1990, enabling astronomers to determine their sizes accurately. Charon is about 1,200 km (750 mi) in diameter, making Pluto and Charon the planet-satellite pair closest in size to one another in the solar system. Scientists often call Pluto and Charon a double planet. Every 248 years Pluto’s elliptical orbit brings it within the orbit of Neptune. Pluto last traded places with Neptune as the most distant planet in 1979 and crossed back outside Neptune’s orbit in 1999. No possibility of collision exists, however, because Pluto's orbit is inclined more than 17.2° to the plane of the ecliptic (the plane in which Earth and most of the other planets orbit the Sun) and is oriented such that it never actually crosses Neptune's path. Pluto has a pinkish color. In 1988, astronomers discovered that Pluto has a thin atmosphere consisting of nitrogen with traces of carbon monoxide and methane. Atmospheric pressure on the planet's surface is about 100,000 times less than Earth's atmospheric pressure at sea level. 
Pluto’s atmosphere is believed to freeze out as a snow on the planet’s surface for most of each Plutonian orbit. During the decades when Pluto is closest to the Sun, however, the snows sublimate (evaporate) and create the atmosphere that has been observed. In 1994 the Hubble Space Telescope imaged 85 percent of Pluto's surface, revealing polar caps and bright and dark areas of startling contrast. Astronomers believe that the bright areas are likely to be shifting fields of clean ice and that the dark areas are fields of dirty ice colored by interaction with sunlight. These images show that extensive ice caps form on Pluto's poles, especially when the planet is farthest from the Sun. III ORIGIN OF PLUTO With a density about twice that of water, Pluto is apparently made of a much greater proportion of rocky material than the giant planets of the outer solar system are. This may be the result of the kind of chemical reactions that took place during the formation of the planet under cold temperatures and low pressure. Many astronomers think Pluto was growing rapidly to be a larger planet when Neptune’s gravitational influence disturbed the region where Pluto orbits (the Kuiper Belt), stopping the process of planetary growth there. The Kuiper Belt is a ring of material orbiting the Sun beyond the planet Neptune that contains millions of rocky, icy objects like Pluto and Charon. Charon could be an accumulation of the lighter materials resulting from a collision between Pluto and another large Kuiper Belt Object (KBO) in the ancient past. Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved. Asteroid I INTRODUCTION Asteroid, one of the many small or minor rocky planetoids that are members of the solar system and that move in elliptical orbits primarily between the orbits of Mars and Jupiter. 
II SIZES AND ORBITS The largest representatives are 1 Ceres, with a diameter of about 1,003 km (about 623 mi), and 2 Pallas and 4 Vesta, with diameters of about 550 km (about 340 mi). The naming of asteroids is governed by the International Astronomical Union (IAU). After an astronomer observes a possible unknown asteroid, other astronomers confirm the discovery by observing the body over a period of several orbits and comparing the asteroid’s position and orbit to those of known asteroids. If the asteroid is indeed a newly discovered object, the IAU gives it a number according to its order of discovery, and the astronomer who discovered it chooses a name. Asteroids are usually referred to by both number and name. About 200 asteroids have diameters of more than 97 km (60 mi), and thousands of smaller ones exist. The total mass of all asteroids in the solar system is much less than the mass of the Moon. The larger bodies are roughly spherical, but elongated and irregular shapes are common for those with diameters of less than 160 km (100 mi). Most asteroids, regardless of size, rotate on their axes every 5 to 20 hours. Certain asteroids may be binary, or have satellites of their own. Few scientists now believe that asteroids are the remnants of a former planet. It is more likely that asteroids occupy a place in the solar system where a sizable planet could have formed but was prevented from doing so by the disruptive gravitational influences of the nearby giant planet Jupiter. Originally perhaps only a few dozen asteroids existed, which were subsequently fragmented by mutual collisions to produce the population now present. Scientists believe that asteroids move out of the asteroid belt because heat from the Sun warms them unevenly. This causes the asteroids to drift slowly away from their original orbits. The so-called Trojan asteroids lie in two clouds, one moving 60° ahead of Jupiter in its orbit and the other 60° behind. 
In 1977 the asteroid 2060 Chiron was discovered in an orbit between that of Saturn and Uranus. Asteroids that intersect the orbit of Mars are called Amors; asteroids that intersect the orbit of Earth are known as Apollos; and asteroids that have orbits smaller than Earth’s orbit are called Atens. One of the largest inner asteroids is 433 Eros, an elongated body measuring 13 by 33 km (8 by 21 mi). The peculiar Apollo asteroid 3200 Phaethon, about 5 km (about 3 mi) wide, approaches the Sun more closely, at 20.9 million km (13.9 million mi), than any other known asteroid. It is also associated with the yearly return of the Geminid stream of meteors (see Geminids). Several Earth-approaching asteroids are relatively easy targets for space missions. In 1991 the United States Galileo space probe, on its way to Jupiter, took the first close-up pictures of an asteroid. The images showed that the small, lopsided body, 951 Gaspra, is pockmarked with craters, and revealed evidence of a blanket of loose, fragmental material, or regolith, covering the asteroid’s surface. Galileo also visited an asteroid named 243 Ida and found that Ida has its own moon, a smaller asteroid subsequently named Dactyl. (Dactyl’s official designation is 243 Ida I, because it is a satellite of Ida.) In 1996 the National Aeronautics and Space Administration (NASA) launched the Near-Earth Asteroid Rendezvous (NEAR) spacecraft. NEAR was later renamed NEAR Shoemaker in honor of American scientist Eugene M. Shoemaker. NEAR Shoemaker’s goal was to go into orbit around the asteroid Eros. On its way to Eros, the spacecraft visited the asteroid 253 Mathilde in June 1997. At 60 km (37 mi) in diameter, Mathilde is larger than either of the asteroids that Galileo visited. In February 2000, NEAR Shoemaker reached Eros, moved into orbit around the asteroid, and began making observations. 
The spacecraft orbited the asteroid for a year, gathering data to provide astronomers with a better idea of the origin, composition, and structure of large asteroids. After NEAR Shoemaker’s original mission ended, NASA decided to attempt a “controlled crash” on the surface of Eros. NEAR Shoemaker set down safely on Eros in February 2001—the first spacecraft ever to land on an asteroid. In 1999 Deep Space 1, a probe NASA designed to test new space technologies, flew by the tiny asteroid 9969 Braille. Measurements taken by Deep Space 1 revealed that the composition of Braille is very similar to that of 4 Vesta, the third largest asteroid known. Scientists believe that Braille may be a broken piece of Vesta or that the two asteroids may have formed under similar conditions. III SURFACE COMPOSITION With the exception of a few that have been traced to the Moon and Mars, most of the meteorites recovered on Earth are thought to be asteroid fragments. Remote observations of asteroids by telescopic spectroscopy and radar support this hypothesis. They reveal that asteroids, like meteorites, can be classified into a few distinct types. Three-quarters of the asteroids visible from Earth, including 1 Ceres, belong to the C type, which appear to be related to a class of stony meteorites known as carbonaceous chondrites. These meteorites are considered the oldest materials in the solar system, with a composition reflecting that of the primitive solar nebula. Extremely dark in color, probably because of their hydrocarbon content, they show evidence of having adsorbed water of hydration. Thus, unlike the Earth and the Moon, they have never either melted or been reheated since they first formed. Asteroids of the S type, related to the stony iron meteorites, make up about 15 percent of the total population. 
Much rarer are the M-type objects, corresponding in composition to the meteorites known as “irons.” Consisting of an iron-nickel alloy, they may represent the cores of melted, differentiated planetary bodies whose outer layers were removed by impact cratering. A very few asteroids, notably 4 Vesta, are probably related to the rarest meteorite class of all: the achondrites. These asteroids appear to have an igneous surface composition like that of many lunar and terrestrial lava flows. Thus, astronomers are reasonably certain that Vesta was, at some time in its history, at least partly melted. Scientists are puzzled that some of the asteroids have been melted but others, such as 1 Ceres, have not. One possible explanation is that the early solar system contained certain concentrated, highly radioactive isotopes that might have generated enough heat to melt the asteroids. IV ASTEROIDS AND EARTH Astronomers have found more than 300 asteroids with orbits that approach Earth’s orbit. Some scientists project that several thousand of these near-Earth asteroids may exist and that as many as 1,500 could be large enough to cause a global catastrophe if they collided with Earth. Still, the chances of such a collision average out to only one collision about every 300,000 years. Many scientists believe that a collision with an asteroid or a comet may have been responsible for at least one mass extinction of life on Earth over the planet’s history. A giant crater on the Yucatán Peninsula in Mexico marks the spot where a comet or asteroid struck Earth at the end of the Cretaceous Period, about 65 million years ago. This is about the same time as the disappearance of the last of the dinosaurs. A collision with an asteroid large enough to cause the Yucatán crater would have sent so much dust and gas into the atmosphere that sunlight would have been dimmed for months or years. 
Reactions of gases from the impact with clouds in the atmosphere would have caused massive amounts of acid rain. The acid rain and the lack of sunlight would have killed off plant life and the animals in the food chain that were dependent on plants for survival. The most recent major encounter between Earth and what may have been an asteroid was a 1908 explosion in the atmosphere above the Tunguska region of Siberia. The force of the blast flattened more than 200,000 hectares (500,000 acres) of pine forest and killed thousands of reindeer. The number of human casualties, if any, is unknown. The first scientific expedition went to the region two decades later. This expedition and several detailed studies following it found no evidence of an impact crater. This led scientists to believe that the heat generated by friction with the atmosphere as the object plunged toward Earth was great enough to make the object explode before it hit the ground. If the Tunguska object had exploded in a less remote area, the loss of human life and property could have been astounding. Military satellites—in orbit around Earth watching for explosions that could signal violations of weapons testing treaties—have detected dozens of smaller asteroid explosions in the atmosphere each year. In 1995 NASA, the Jet Propulsion Laboratory, and the U.S. Air Force began a project called Near-Earth Asteroid Tracking (NEAT). NEAT uses an observatory in Hawaii to search for asteroids with orbits that might pose a threat to Earth. By tracking these asteroids, scientists can calculate the asteroids’ precise orbits and project these orbits into the future to determine whether the asteroids will come close to Earth. Astronomers believe that tracking programs such as NEAT would probably give the world decades or centuries of warning time for any possible asteroid collision. Scientists have suggested several strategies for deflecting asteroids from a collision course with Earth. 
If the asteroid is very far away, a nuclear warhead could be used to blow it up without much danger of pieces of the asteroid causing significant damage to Earth. Another suggested strategy would be to attach a rocket engine to the asteroid and direct the asteroid off course without breaking it up. Both of these methods require that the asteroid be far from Earth. If an asteroid exploded close to Earth, chunks of it would probably cause damage. Any effort to push an asteroid off course would also require years to work. Asteroids are much too large for a rocket to push quickly. If astronomers were to discover an asteroid less than ten years away from collision with Earth, new strategies for deflecting the asteroid would probably be needed. Q7: What are minerals? For the most part, minerals are constituted of eight elements; name any six of them. State the six characteristics that are used to identify minerals. Mineral (chemistry), in general, any naturally occurring chemical element or compound, but in mineralogy and geology, chemical elements and compounds that have been formed through inorganic processes. Petroleum and coal, which are formed by the decomposition of organic matter, are not minerals in the strict sense. More than 3,000 mineral species are known, most of which are characterized by definite chemical composition, crystalline structure, and physical properties. They are classified primarily by chemical composition, crystal class, hardness, and appearance (color, luster, and opacity). Mineral species are, as a rule, limited to solid substances, the only liquids being metallic mercury and water. All the rocks forming the earth's crust consist of minerals. Metalliferous minerals of economic value, which are mined for their metals, are known as ores. See Crystal. 
I INTRODUCTION Mineralogy, the identification of minerals and the study of their properties, origin, and classification. The properties of minerals are studied under the convenient subdivisions of chemical mineralogy, physical mineralogy, and crystallography. The properties and classification of individual minerals, their localities and modes of occurrence, and their uses are studied under descriptive mineralogy. Identification according to chemical, physical, and crystallographic properties is called determinative mineralogy. II CHEMICAL MINERALOGY Chemical composition is the most important property for identifying minerals and distinguishing them from one another. Mineral analysis is carried out according to standard qualitative and quantitative methods of chemical analysis. Minerals are classified on the basis of chemical composition and crystal symmetry. The chemical constituents of minerals may also be determined by electron-beam microprobe analysis. Although chemical classification is not rigid, the various classes of chemical compounds that include a majority of minerals are as follows: (1) elements, such as gold, graphite, diamond, and sulfur, that occur in the native state, that is, in an uncombined form; (2) sulfides, minerals composed of various metals combined with sulfur; many important ore minerals, such as galena and sphalerite, are in this class; (3) sulfosalts, minerals composed of lead, copper, or silver in combination with sulfur and one or more of the following: antimony, arsenic, and bismuth; pyrargyrite, Ag3SbS3, belongs to this class; (4) oxides, minerals composed of a metal in combination with oxygen, such as hematite, Fe2O3. 
Mineral oxides that contain water, such as diaspore, Al2O3·H2O, or the hydroxyl (OH) group, such as bog iron ore, FeO(OH), also belong to this group; (5) halides, composed of metals in combination with chlorine, fluorine, bromine, or iodine; halite, NaCl, is the most common mineral of this class; (6) carbonates, minerals such as calcite, CaCO3, containing a carbonate group; (7) phosphates, minerals such as apatite, Ca5(F,Cl)(PO4)3, that contain a phosphate group; (8) sulfates, minerals such as barite, BaSO4, containing a sulfate group; and (9) silicates, the largest class of minerals, containing various elements in combination with silicon and oxygen, often with complex chemical structure, and minerals composed solely of silicon and oxygen (silica). The silicates include the minerals comprising the feldspar, mica, pyroxene, quartz, zeolite, and amphibole families. III PHYSICAL MINERALOGY The physical properties of minerals are important aids in identifying and characterizing them. Most of the physical properties can be recognized at sight or determined by simple tests. The most important properties include powder (streak), color, cleavage, fracture, hardness, luster, specific gravity, and fluorescence or phosphorescence. IV CRYSTALLOGRAPHY The majority of minerals occur in crystal form when conditions of formation are favorable. Crystallography is the study of the growth, shape, and geometric character of crystals. The arrangement of atoms within a crystal is determined by X-ray diffraction analysis. Crystal chemistry is the study of the relationship of chemical composition, arrangement of atoms, and the binding forces among atoms. This relationship determines minerals' chemical and physical properties. Crystals are grouped into six main classes of symmetry: isometric, hexagonal, tetragonal, orthorhombic, monoclinic, and triclinic. The study of minerals is an important aid in understanding rock formation. 
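The nine-class chemical scheme above lends itself to a simple lookup. The sketch below is a hypothetical Python illustration, not mineralogical software; the mineral-to-class mapping is limited to the examples named in the text.

```python
# Toy lookup illustrating the chemical classification of minerals
# described above. The entries follow the examples given in the text;
# this is an illustrative sketch, not an exhaustive database.
MINERAL_CLASSES = {
    "gold": "native element",     # occurs uncombined
    "sulfur": "native element",
    "galena": "sulfide",          # important ore mineral
    "sphalerite": "sulfide",
    "pyrargyrite": "sulfosalt",   # Ag3SbS3
    "hematite": "oxide",          # Fe2O3
    "halite": "halide",           # NaCl, most common halide
    "calcite": "carbonate",       # CaCO3
    "apatite": "phosphate",       # Ca5(F,Cl)(PO4)3
    "barite": "sulfate",          # BaSO4
    "quartz": "silicate",         # the largest class of minerals
}

def chemical_class(mineral: str) -> str:
    """Return the chemical class of a mineral, or 'unknown' if unlisted."""
    return MINERAL_CLASSES.get(mineral.lower(), "unknown")
```

For example, `chemical_class("Halite")` returns `"halide"`, while petroleum, which the text excludes from minerals in the strict sense, falls through to `"unknown"`.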
Laboratory synthesis of the high-pressure varieties of minerals is helping the understanding of igneous processes deep in the lithosphere (see Earth). Because all of the inorganic materials of commerce are minerals or derivatives of minerals, mineralogy has direct economic application. Important uses of minerals and examples in each category are gem minerals (diamond, garnet, opal, zircon); ornamental objects and structural material (agate, calcite, gypsum); abrasives (corundum, diamond, kaolin); lime, cement, and plaster (calcite, gypsum); refractories (asbestos, graphite, magnesite, mica); ceramics (feldspar, quartz); chemical minerals (halite, sulfur, borax); fertilizers (phosphates); natural pigments (hematite, limonite); optical and scientific apparatus (quartz, mica, tourmaline); and the ores of metals (cassiterite, chalcopyrite, chromite, cinnabar, ilmenite, molybdenite, galena, and sphalerite). Q.8 Define any five of the following terms using suitable examples: a. Polymerization b. Ecosystem c. Antibiotics d. Renewable energy resources e. Gene f. Software I INTRODUCTION Polymer, substance consisting of large molecules that are made of many small, repeating units called monomers, or mers. The number of repeating units in one large molecule is called the degree of polymerization. Materials with a very high degree of polymerization are called high polymers. Polymers consisting of only one kind of repeating unit are called homopolymers. Copolymers are formed from several different repeating units. Most of the organic substances found in living matter, such as protein, wood, chitin, rubber, and resins, are polymers. Many synthetic materials, such as plastics, fibers (see Rayon), adhesives, glass, and porcelain, are also to a large extent polymeric substances. II STRUCTURE OF POLYMERS Polymers can be subdivided into three, or possibly four, structural groups. 
The molecules in linear polymers consist of long chains of monomers joined by bonds that are rigid to a certain degree—the monomers cannot rotate freely with respect to each other. Typical examples are polyethylene, polyvinyl alcohol, and polyvinyl chloride (PVC). Branched polymers have side chains that are attached to the chain molecule itself. Branching can be caused by impurities or by the presence of monomers that have several reactive groups. Chain polymers composed of monomers with side groups that are part of the monomers, such as polystyrene or polypropylene, are not considered branched polymers. In cross-linked polymers, two or more chains are joined together by side chains. With a small degree of cross-linking, a loose network is obtained that is essentially two dimensional. High degrees of cross-linking result in a tight three-dimensional structure. Cross-linking is usually caused by chemical reactions. An example of a two-dimensional cross-linked structure is vulcanized rubber, in which cross-links are formed by sulfur atoms. Thermosetting plastics are examples of highly cross-linked polymers; their structure is so rigid that when heated they decompose or burn rather than melt. III SYNTHESIS Two general methods exist for forming large molecules from small monomers: addition polymerization and condensation polymerization. In the chemical process called addition polymerization, monomers join together without the loss of atoms from the molecules. Some examples of addition polymers are polyethylene, polypropylene, polystyrene, polyvinyl acetate, and polytetrafluoroethylene (Teflon). In condensation polymerization, monomers join together with the simultaneous elimination of atoms or groups of atoms. Typical condensation polymers are polyamides, polyesters, and certain polyurethanes. In 1983 a new method of addition polymerization called group transfer polymerization was announced. 
An activating group within the molecule initiating the process transfers to the end of the growing polymer chain as individual monomers insert themselves in the group. The method has been used for acrylic plastics; it should prove applicable to other plastics as well. (b) Ecosystem (c) Antibiotics (d) Renewable energy resources (e) Gene Gene, basic unit of heredity found in the cells of all living organisms, from bacteria to humans. Genes determine the physical characteristics that an organism inherits, such as the shape of a tree’s leaf, the markings on a cat’s fur, and the color of a human hair. Genes are composed of segments of deoxyribonucleic acid (DNA), a molecule that forms the long, threadlike structures called chromosomes. The information encoded within the DNA structure of a gene directs the manufacture of proteins, molecular workhorses that carry out all life-supporting activities within a cell (see Genetics). Chromosomes within a cell occur in matched pairs. 
Each chromosome contains many genes, and each gene is located at a particular site on the chromosome, known as the locus. Like chromosomes, genes typically occur in pairs. A gene found on one chromosome in a pair usually has the same locus as another gene in the other chromosome of the pair, and these two genes are called alleles. Alleles are alternate forms of the same gene. For example, a pea plant has one gene that determines height, but that gene appears in more than one form—the gene that produces a short plant is an allele of the gene that produces a tall plant. The behavior of alleles and how they influence inherited traits follow predictable patterns. Austrian monk Gregor Mendel first identified these patterns in the 1860s. In organisms that use sexual reproduction, offspring inherit one-half of their genes from each parent and then mix the two sets of genes together. This produces new combinations of genes, so that each individual is unique but still possesses the same genes as its parents. As a result, sexual reproduction ensures that the basic characteristics of a particular species remain largely the same for generations. However, mutations, or alterations in DNA, occur constantly. They create variations in the genes that are inherited. Some mutations may be neutral, or silent, and do not affect the function of a protein. Occasionally a mutation may benefit or harm an organism, and over the course of evolutionary time these mutations serve the crucial role of providing organisms with previously nonexistent proteins. In this way, mutations are a driving force behind genetic diversity and the rise of new or more competitive species that are better able to adapt to changes, such as climate variations, depletion of food sources, or the emergence of new types of disease. Geneticists are scientists who study the function and behavior of genes. 
Since the 1970s geneticists have devised techniques, cumulatively known as genetic engineering, to alter or manipulate the DNA structure within genes. These techniques enable scientists to introduce one or more genes from one organism into a second organism. The second organism incorporates the new DNA into its own genetic material, thereby altering its own genetic characteristics by changing the types of proteins it can produce. In humans these techniques form the basis of gene therapy, a group of experimental procedures in which scientists try to substitute one or more healthy genes for defective ones in order to eliminate symptoms of disease. Genetic engineering techniques have also enabled scientists to determine the chromosomal location and DNA structure of all the genes found within a variety of organisms. In April 2003 the Human Genome Project, a publicly funded consortium of academic scientists from around the world, identified the chromosomal locations and structure of the estimated 20,000 to 25,000 genes found within human cells. The genetic makeup of other organisms has also been identified, including that of the bacterium Escherichia coli, the yeast Saccharomyces cerevisiae, the roundworm Caenorhabditis elegans, and the fruit fly Drosophila melanogaster. Scientists hope to use this genetic information to develop lifesaving drugs for a variety of diseases, to improve agricultural crop yields, and to learn more about plant and animal physiology and evolutionary history. (f) Software Software, computer programs; instructions that cause the hardware—the machines—to do work. Software as a whole can be divided into a number of categories based on the types of work done by programs. The two primary software categories are operating systems (system software), which control the workings of the computer, and application software, which addresses the multitude of tasks for which people use computers. 
System software thus handles such essential, but often invisible, chores as maintaining disk files and managing the screen, whereas application software performs word processing, database management, and the like. Two additional categories that are neither system nor application software, although they contain elements of both, are network software, which enables groups of computers to communicate, and language software, which provides programmers with the tools they need to write programs. Q9: What do you understand by the term “Balanced Diet”? What are its essential constituents? State the function of each constituent. I INTRODUCTION Human Nutrition, study of how food affects the health and survival of the human body. Human beings require food to grow, reproduce, and maintain good health. Without food, our bodies could not stay warm, build or repair tissue, or maintain a heartbeat. Eating the right foods can help us avoid certain diseases or recover faster when illness occurs. These and other important functions are fueled by chemical substances in our food called nutrients. Nutrients are classified as carbohydrates, proteins, fats, vitamins, minerals, and water. When we eat a meal, nutrients are released from food through digestion. Digestion begins in the mouth by the action of chewing and the chemical activity of saliva, a watery fluid that contains enzymes, certain proteins that help break down food. Further digestion occurs as food travels through the stomach and the small intestine, where digestive enzymes and acids liquefy food and muscle contractions push it along the digestive tract. Nutrients are absorbed from the inside of the small intestine into the bloodstream and carried to the sites in the body where they are needed. At these sites, several chemical reactions occur that ensure the growth and function of body tissues. The parts of foods that are not absorbed continue to move down the intestinal tract and are eliminated from the body as feces. 
Once digested, carbohydrates, proteins, and fats provide the body with the energy it needs to maintain its many functions. Scientists measure this energy in kilocalories, the amount of energy needed to raise 1 kilogram of water 1 degree Celsius. In nutrition discussions, scientists use the term calorie instead of kilocalorie as the standard unit of measure. II ESSENTIAL NUTRIENTS Nutrients are classified as essential or nonessential. Nonessential nutrients are manufactured in the body and do not need to be obtained from food. An example is cholesterol, a fatlike substance present in all animal cells. Essential nutrients must be obtained from food sources, because the body either does not produce them or produces them in amounts too small to maintain growth and health. Essential nutrients include water, carbohydrates, proteins, fats, vitamins, and minerals. An individual needs varying amounts of each essential nutrient, depending upon such factors as gender and age. Specific health conditions, such as pregnancy, breast-feeding, illness, or drug use, make unusual demands on the body and increase its need for nutrients. Dietary guidelines, which take many of these factors into account, provide general guidance in meeting daily nutritional needs. III WATER If the importance of a nutrient is judged by how long we can do without it, water ranks as the most important. A person can survive only eight to ten days without water, whereas it takes weeks or even months to die from a lack of food. Water circulates through our blood and lymphatic system, transporting oxygen and nutrients to cells and removing wastes through urine and sweat. Water also maintains the natural balance between dissolved salts and water inside and outside of cells. Our joints and soft tissues depend on the cushioning that water provides for them. 
While water has no caloric value and therefore is not an energy source, without it in our diets we could not digest or absorb the foods we eat or eliminate the body’s digestive waste. The human body is 65 percent water, and it takes an average of eight to ten cups to replenish the water our bodies lose each day. How much water a person needs depends largely on the volume of urine and sweat lost daily, and water needs are increased if a person suffers from diarrhea or vomiting or undergoes heavy physical exercise. Water is replenished by drinking liquids, preferably those without caffeine or alcohol, both of which increase the output of urine and thus dehydrate the body. Many foods are also a good source of water—fruits and vegetables, for instance, are 80 to 95 percent water; meats are made up of 50 percent water; and grains, such as oats and rice, can have as much as 35 percent water. IV CARBOHYDRATES Carbohydrates are the human body’s key source of energy, providing 4 calories of energy per gram. When carbohydrates are broken down by the body, the sugar glucose is produced; glucose is critical to help maintain tissue protein, metabolize fat, and fuel the central nervous system. Glucose is absorbed into the bloodstream through the intestinal wall. Some of this glucose goes straight to work in our brain cells and red blood cells, while the rest makes its way to the liver and muscles, where it is stored as glycogen (animal starch), and to fat cells, where it is stored as fat. Glycogen is the body’s auxiliary energy source, tapped and converted back into glucose when we need more energy. Although stored fat can also serve as a backup source of energy, it is never converted into glucose. Fructose and galactose, other sugar products resulting from the breakdown of carbohydrates, go straight to the liver, where they are converted into glucose. Starches and sugars are the major carbohydrates. 
Common starch foods include whole-grain breads and cereals, pasta, corn, beans, peas, and potatoes. Naturally occurring sugars are found in fruits and many vegetables; milk products; and honey, maple sugar, and sugar cane. Foods that contain starches and naturally occurring sugars are referred to as complex carbohydrates, because their molecular complexity requires our bodies to break them down into a simpler form to obtain the much-needed fuel, glucose. Our bodies digest and absorb complex carbohydrates at a rate that helps maintain the healthful levels of glucose already in the blood. In contrast, simple sugars, refined from naturally occurring sugars and added to processed foods, require little digestion and are quickly absorbed by the body, triggering an unhealthy chain of events. The body’s rapid absorption of simple sugars elevates the levels of glucose in the blood, which triggers the release of the hormone insulin. Insulin reins in the body’s rising glucose levels, but at a price: Glucose levels may fall so low within one to two hours after eating foods high in simple sugars, such as candy, that the body responds by releasing chemicals known as anti-insulin hormones. This surge in chemicals, the aftermath of eating a candy bar, can leave a person feeling irritable and nervous. Many processed foods not only contain high levels of added simple sugars, but also tend to be high in fat and lacking in the vitamins and minerals found naturally in complex carbohydrates. Nutritionists often refer to such processed foods as junk foods and say that they provide only empty calories, meaning they are loaded with calories from sugars and fats but lack the essential nutrients our bodies need. In addition to starches and sugars, complex carbohydrates contain indigestible dietary fibers. Although such fibers provide no energy or building materials, they play a vital role in our health. Found only in plants, dietary fiber is classified as soluble or insoluble. 
Soluble fiber, found in such foods as oats, barley, beans, peas, apples, strawberries, and citrus fruits, mixes with food in the stomach and prevents or reduces the absorption by the small intestine of potentially dangerous substances from food. Soluble fiber also binds dietary cholesterol and carries it out of the body, thus preventing it from entering the bloodstream, where it can accumulate in the inner walls of arteries and set the stage for high blood pressure, heart disease, and strokes. Insoluble fiber, found in vegetables, whole-grain products, and bran, provides roughage that speeds the elimination of feces, which decreases the time that the body is exposed to harmful substances, possibly reducing the risk of colon cancer. Studies of populations with fiber-rich diets, such as Africans and Asians, show that these populations have less risk of colon cancer compared to those who eat low-fiber diets, such as Americans. In the United States, colon cancer is the third most common cancer for both men and women, but experts believe that, with a proper diet, it is one of the most preventable types of cancer. Nutritionists caution that most Americans need to eat more complex carbohydrates. In the typical American diet, only 40 to 50 percent of total calories come from carbohydrates—a lower percentage than found in most of the world. To make matters worse, half of the carbohydrate calories consumed by the typical American come from processed foods filled with simple sugars. Experts recommend that these foods make up no more than 10 percent of our diet, because these foods offer no nutritional value. Foods rich in complex carbohydrates, which provide vitamins, minerals, some protein, and dietary fiber and are an abundant energy source, should make up roughly 50 percent of our daily calories. V PROTEINS Dietary proteins are powerful compounds that build and repair body tissues, from hair and fingernails to muscles. 
In addition to maintaining the body’s structure, proteins speed up chemical reactions in the body, serve as chemical messengers, fight infection, and transport oxygen from the lungs to the body’s tissues. Although protein provides 4 calories of energy per gram, the body uses protein for energy only if carbohydrate and fat intake is insufficient. When tapped as an energy source, protein is diverted from the many critical functions it performs for our bodies. Proteins are made of smaller units called amino acids. Of the more than 20 amino acids our bodies require, eight (nine in some older adults and young children) cannot be made by the body in sufficient quantities to maintain health. These amino acids are considered essential and must be obtained from food. When we eat food high in proteins, the digestive tract breaks this dietary protein into amino acids. Absorbed into the bloodstream and sent to the cells that need them, amino acids then recombine into the functional proteins our bodies need. Animal proteins, found in such food as eggs, milk, meat, fish, and poultry, are considered complete proteins because they contain all of the essential amino acids our bodies need. Plant proteins, found in vegetables, grains, and beans, lack one or more of the essential amino acids. However, plant proteins can be combined in the diet to provide all of the essential amino acids. A good example is rice and beans. Each of these foods lacks one or more essential amino acids, but the amino acids missing in rice are found in the beans, and vice versa. So when eaten together, these foods provide a complete source of protein. Thus, people who do not eat animal products (see Vegetarianism) can meet their protein needs with diets rich in grains, dried peas and beans, rice, nuts, and tofu, a soybean product. Experts recommend that protein intake make up only 10 percent of our daily calorie intake. 
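Since protein supplies 4 calories per gram, the 10 percent recommendation converts directly into grams. A minimal Python sketch; the 2,000-calorie diet in the comment is an assumed figure for illustration only.

```python
PROTEIN_KCAL_PER_GRAM = 4  # protein provides 4 calories of energy per gram

def protein_grams(daily_calories: float, share: float = 0.10) -> float:
    """Grams of protein that would supply `share` of daily calories.

    `share` defaults to the 10 percent intake recommended in the text.
    """
    return daily_calories * share / PROTEIN_KCAL_PER_GRAM

# On an assumed 2,000-calorie diet, 10 percent of calories from protein
# works out to 2000 * 0.10 / 4 = 50 grams of protein per day.
```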
Some people, especially in the United States and other developed countries, consume more protein than the body needs. Because extra amino acids cannot be stored for later use, the body destroys these amino acids and excretes their by-products. Conversely, deficiencies in protein consumption, seen in the diets of people in some developing nations, may result in health problems. Marasmus and kwashiorkor, both life-threatening conditions, are the two most common forms of protein malnutrition. Some health conditions, such as illness, stress, and pregnancy and breast-feeding in women, place an enormous demand on the body as it builds tissue or fights infection, and these conditions require an increase in protein consumption. For example, a healthy woman normally needs 45 grams of protein each day. Experts recommend that a pregnant woman consume 55 grams of protein per day, and that a breast-feeding mother consume 65 grams to maintain health. A man of average size should eat 57 grams of protein daily. To support their rapid development, infants and young children require relatively more protein than do adults. A three-month-old infant requires about 13 grams of protein daily, and a four-year-old child requires about 22 grams. Once in adolescence, sex hormone differences cause boys to develop more muscle and bone than girls; as a result, the protein needs of adolescent boys are higher than those of girls. VI FATS Fats, which provide 9 calories of energy per gram, are the most concentrated of the energy-producing nutrients, so our bodies need only very small amounts. Fats play an important role in building the membranes that surround our cells and in helping blood to clot. Once digested and absorbed, fats help the body absorb certain vitamins. Fat stored in the body cushions vital organs and protects us from extreme cold and heat. Fat consists of fatty acids attached to a substance called glycerol. 
Dietary fats are classified as saturated, monounsaturated, and polyunsaturated according to the structure of their fatty acids (see Fats and Oils). Animal fats—from eggs, dairy products, and meats—are high in saturated fats and cholesterol, a chemical substance found in all animal fat. Vegetable fats—found, for example, in avocados, olives, some nuts, and certain vegetable oils—are rich in monounsaturated and polyunsaturated fat. As we will see, high intake of saturated fats can be unhealthy. To understand the problem with eating too much saturated fat, we must examine its relationship to cholesterol. High levels of cholesterol in the blood have been linked to the development of heart disease, strokes, and other health problems. Despite its bad reputation, our bodies need cholesterol, which is used to build cell membranes, to protect nerve fibers, and to produce vitamin D and some hormones, chemical messengers that help coordinate the body’s functions. We just do not need cholesterol in our diet. The liver, and to a lesser extent the small intestine, manufacture all the cholesterol we require. When we eat cholesterol from foods that contain saturated fatty acids, we increase the level of a cholesterol-carrying substance in our blood that harms our health. Cholesterol, like fat, is a lipid—an organic compound that is not soluble in water. In order to travel through blood, cholesterol therefore must be transported through the body in special carriers, called lipoproteins. High-density lipoproteins (HDLs) remove cholesterol from the walls of arteries, return it to the liver, and help the liver excrete it as bile, a liquid acid essential to fat digestion. For this reason, HDL is called “good” cholesterol. Low-density lipoproteins (LDLs) and very-low-density lipoproteins (VLDLs) are considered “bad” cholesterol. Both LDLs and VLDLs transport cholesterol from the liver to the cells. 
As they work, LDLs and VLDLs leave plaque-forming cholesterol in the walls of the arteries, clogging the artery walls and setting the stage for heart disease. Almost 70 percent of the cholesterol in our bodies is carried by LDLs and VLDLs, and the remainder is transported by HDLs. For this reason, we need to consume dietary fats that increase our HDLs and decrease our LDL and VLDL levels. Saturated fatty acids—found in foods ranging from beef and ice cream to mozzarella cheese and doughnuts—should make up no more than 10 percent of a person’s total calorie intake each day. Saturated fats are considered harmful to the heart and blood vessels because they are thought to increase the level of LDLs and VLDLs and decrease the levels of HDLs. Monounsaturated fats—found in olive, canola, and peanut oils—appear to have the best effect on blood cholesterol, decreasing the level of LDLs and VLDLs and increasing the level of HDLs. Polyunsaturated fats—found in margarine and sunflower, soybean, corn, and safflower oils—are considered more healthful than saturated fats. However, if consumed in excess (more than 10 percent of daily calories), they can decrease the blood levels of HDLs. Most Americans obtain 15 to 50 percent of their daily calories from fats. Health experts consider diets with more than 30 percent of calories from fat to be unsafe, increasing the risk of heart disease. High-fat diets also contribute to obesity, which is linked to high blood pressure (see hypertension) and diabetes mellitus. A diet high in both saturated and unsaturated fats has also been associated with greater risk of developing cancers of the colon, prostate, breast, and uterus. Choosing a diet that is low in fat and cholesterol is critical to maintaining health and reducing the risk of life-threatening disease. VII VITAMINS AND MINERALS Both vitamins and minerals are needed by the body in very small amounts to trigger the thousands of chemical reactions necessary to maintain good health. 
Many of these chemical reactions are linked, with one triggering another. If there is a missing or deficient vitamin or mineral—or link—anywhere in this chain, this process may break down, with potentially devastating health effects. Although similar in supporting critical functions in the human body, vitamins and minerals have key differences. Among their many functions, vitamins enhance the body’s use of carbohydrates, proteins, and fats. They are critical in the formation of blood cells, hormones, nervous system chemicals known as neurotransmitters, and the genetic material deoxyribonucleic acid (DNA). Vitamins are classified into two groups: fat soluble and water soluble. Fat-soluble vitamins, which include vitamins A, D, E, and K, are usually absorbed with the help of foods that contain fat. Fat containing these vitamins is broken down by bile, a liquid released by the liver, and the body then absorbs the breakdown products and vitamins. Excess amounts of fat-soluble vitamins are stored in the body’s fat, liver, and kidneys. Because these vitamins can be stored in the body, they do not need to be consumed every day to meet the body’s needs. Water-soluble vitamins, which include vitamins C (also known as ascorbic acid), B1 (thiamine), B2 (riboflavin), B3 (niacin), B6, B12, and folic acid, cannot be stored and rapidly leave the body in urine if taken in greater quantities than the body can use. Foods that contain water-soluble vitamins need to be eaten daily to replenish the body’s needs. In addition to the roles noted in the vitamin and mineral chart accompanying this article, vitamins A (in the form of beta-carotene), C, and E function as antioxidants, which are vital in countering the potential harm of chemicals known as free radicals. If these chemicals remain unchecked they can make cells more vulnerable to cancer-causing substances. Free radicals can also transform chemicals in the body into cancer-causing agents. 
Environmental pollutants, such as cigarette smoke, are sources of free radicals. Minerals are chemical elements, needed in minute amounts, that are vital for the healthy growth of teeth and bones. They also help in such cellular activity as enzyme action, muscle contraction, nerve reaction, and blood clotting. Mineral nutrients are classified as major elements (calcium, chlorine, magnesium, phosphorus, potassium, sodium, and sulfur) and trace elements (chromium, copper, fluoride, iodine, iron, selenium, and zinc). Vitamins and minerals not only help the body perform its various functions, but also prevent the onset of many disorders. For example, vitamin C is important in maintaining our bones and teeth; scurvy, a disorder that attacks the gums, skin, and muscles, occurs in its absence. Diets lacking vitamin B1, which supports neuromuscular function, can result in beriberi, a disease characterized by mental confusion, muscle weakness, and inflammation of the heart. Adequate intake of folic acid by pregnant women is critical to avoid nervous system defects in the developing fetus. The mineral calcium plays a critical role in building and maintaining strong bones; without it, children develop weak bones and adults experience the progressive loss of bone mass known as osteoporosis, which increases their risk of bone fractures. Vitamins and minerals are found in a wide variety of foods, but some foods are better sources of specific vitamins and minerals than others. For example, oranges contain large amounts of vitamin C and folic acid but very little of the other vitamins. Milk contains large amounts of calcium but no vitamin C. Sweet potatoes are rich in vitamin A, but white potatoes contain almost none of this vitamin. Because of these differences in vitamin and mineral content, it is wise to eat a wide variety of foods. 
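The fat-soluble/water-soluble grouping described above lends itself to a simple lookup. The following sketch encodes only the classification given in the text; the function names are illustrative, not part of any standard library.

```python
# Toy lookup of the vitamin solubility classes described in the text.
FAT_SOLUBLE = {"A", "D", "E", "K"}
WATER_SOLUBLE = {"C", "B1", "B2", "B3", "B6", "B12", "folic acid"}

def solubility(vitamin):
    """Return 'fat-soluble' or 'water-soluble' for a known vitamin."""
    if vitamin in FAT_SOLUBLE:
        return "fat-soluble"
    if vitamin in WATER_SOLUBLE:
        return "water-soluble"
    raise ValueError(f"unknown vitamin: {vitamin}")

def needs_daily_intake(vitamin):
    # Per the text, water-soluble vitamins cannot be stored and must be
    # replenished daily; fat-soluble vitamins can be stored in the body.
    return solubility(vitamin) == "water-soluble"

print(solubility("K"))          # fat-soluble
print(needs_daily_intake("C"))  # True
```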
VIII TOO LITTLE AND TOO MUCH FOOD When the body is not given enough of any one of the essential nutrients over a period of time, it becomes weak and less able to fight infection. The brain may become sluggish and react slowly. The body taps its stored fat for energy, and muscle is broken down to use for energy. Eventually the body withers away, the heart ceases to pump properly, and death occurs—the most extreme result of a dietary condition known as deficiency-related malnutrition. Deficiency diseases result from inadequate intake of the major nutrients. These deficiencies can result from eating foods that lack critical vitamins and minerals, from a lack of variety of foods, or from simply not having enough food. Malnutrition can reflect conditions of poverty, war, famine, and disease. It can also result from eating disorders, such as anorexia nervosa and bulimia. Although malnutrition is more commonly associated with dietary deficiencies, it also can develop in cases where people have enough food to eat, but they choose foods low in essential nutrients. This is the more common form of malnutrition in developed countries such as the United States. When poor food choices are made, a person may be getting an adequate, or excessive, amount of calories each day, yet still be undernourished. For example, iron deficiency is a common health problem among women and young children in the United States, and low intake of calcium is directly related to poor quality bones and increased fracture risk, especially in the elderly. A diet of excesses may also lead to other nutritional problems. Obesity is the condition of having too much body fat. It has been linked to life-threatening diseases including diabetes mellitus, heart problems, and some forms of cancer. Eating too many salty foods may contribute to high blood pressure (see hypertension), an often undiagnosed condition that causes the heart to work too hard and puts strain on the arteries. 
High blood pressure can lead to strokes, heart attacks, and kidney failure. A diet high in cholesterol and fat, particularly saturated fat, is the primary cause of atherosclerosis, which results when fat and cholesterol deposits build up in the arteries, causing a reduction in blood flow. IX MAKING GOOD NUTRITIONAL CHOICES To determine healthful nutrition standards, the Food and Nutrition Board of the National Academy of Sciences (NAS), a nonprofit, scholarly society that advises the United States government, periodically assembles committees of national experts to update and assess nutrition guidelines. The NAS first published its Recommended Dietary Allowances (RDAs) in 1941. An RDA reflects the amount of a nutrient in the diet that should decrease the risk of chronic disease for most healthy individuals. The NAS originally developed the RDAs to ensure that World War II soldiers stationed around the world received enough of the right kinds of foods to maintain their health. The NAS periodically has updated the RDAs to reflect new knowledge of nutrient needs. In the late 1990s the NAS decided that the RDAs, originally developed to prevent nutrient deficiencies, needed to serve instead as a guide for optimizing health. Consequently, the NAS created Dietary Reference Intakes (DRIs), which incorporate the RDAs and a variety of new dietary guidelines. As part of this change, the NAS replaced some RDAs with another measure, called Adequate Intake (AI). Although the AI recommendations are often the same as those in the original RDA, use of this term reflects that there is not enough scientific evidence to set a standard for the nutrient. Calcium has an AI of 1000 to 1200 mg per day, not an RDA, because scientists do not yet know how much calcium is needed to prevent osteoporosis. Tolerable Upper Intake Level (UL) designates the highest recommended intake of a nutrient for good health. If intake exceeds this amount, health problems may develop. 
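The reference values just described can be read as simple numeric thresholds. The sketch below checks an intake against the calcium figures quoted in this answer (AI of 1000 to 1200 mg per day, UL of 2500 mg per day); the dictionary layout and function name are illustrative only.

```python
# Checking an intake against the NAS-style reference values in the text.
# Calcium figures (AI 1000-1200 mg/day, UL 2500 mg/day) are from the answer
# above; the data structure itself is a hypothetical sketch.
CALCIUM = {"ai_mg": (1000, 1200), "ul_mg": 2500}

def assess_calcium_intake(mg_per_day):
    low_ai, _high_ai = CALCIUM["ai_mg"]
    if mg_per_day > CALCIUM["ul_mg"]:
        return "above UL: risk of health problems"
    if mg_per_day < low_ai:
        return "below AI"
    return "within recommended range"

print(assess_calcium_intake(1100))  # within recommended range
print(assess_calcium_intake(3000))  # above UL: risk of health problems
```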
Calcium, for instance, has a UL of 2500 mg per day. Scientists know that more than this amount of calcium taken every day can interfere with the absorption of iron, zinc, and magnesium and may result in kidney stones or kidney failure. Estimated Average Requirement (EAR) reflects the amount of a particular nutrient that meets the optimal needs of half the individuals in a specified group. For example, the NAS cites an EAR of 45 to 90 grams of protein for men aged 18 to 25. This figure means that half the men in that population need a daily intake of protein that falls within that range. To simplify the complex standards established by the NAS, the United States Department of Agriculture (USDA) created the Food Guide Pyramid, a visual display of the relative importance to health of six food groups common to the American diet. The food groups are arranged in a pyramid to emphasize that it is wise to choose an abundance of foods from the category at the broad base (bread, cereal, rice, pasta) and use sparingly foods from the peak (fats, oils, sweets). The other food groups appear between these two extremes, indicating the importance of vegetables and fruits and the need for moderation in eating dairy products and meats. The pyramid recommends a range of servings to choose from each group, based on the nutritional needs of males and females and different age groups. Other food pyramids have been developed based on the USDA pyramid to help people choose foods that fit a specific ethnic or cultural pattern, including Mediterranean, Asian, Latin American, Puerto Rican, and vegetarian diets. In an effort to provide additional nutritional guidance and reduce the incidence of diet-related cancers, the National Cancer Institute developed the 5-a-Day Campaign for Better Health, a program that promotes the practice of eating five servings of fruits and vegetables daily. 
Studies of populations that eat many fruits and vegetables reveal a decreased incidence of diet-related cancers. Laboratory studies have shown that many fruits and vegetables contain phytochemicals, substances that appear to limit the growth of cancer cells. Many people obtain most of their nutrition information from a food label called the Nutrition Facts panel. This label is mandatory for most foods that contain more than one ingredient, and these foods are mostly processed foods. Labeling remains voluntary for raw meats, fresh fruits and vegetables, foods produced by small businesses, and those sold in restaurants, food stands, and local bakeries. The Nutrition Facts panel highlights a product’s content of fat, saturated fat, cholesterol, sodium, dietary fiber, vitamins A and C, and the minerals calcium and iron. The stated content of these nutrients must be based on a standard serving size, as defined by the Food and Drug Administration (FDA). Food manufacturers may provide information about other nutrients if they choose. However, if a nutritional claim is made on a product’s package, the appropriate nutrient content must be listed. For example, if the package says “high in folic acid,” then the folic acid content in the product must be given in the Nutrition Facts panel. The Nutrition Facts panel also includes important information in a column headed % Daily Value (DV). DVs tell how the food item meets the recommended daily intakes of fat, saturated fat, cholesterol, carbohydrates, dietary fiber, and protein necessary for nutritional health based on the total intake recommended for a person consuming 2000 calories per day. One portion from a can of soup, for example, may have less than 2 percent of the recommended daily value for cholesterol intake. Health-conscious consumers can use the Nutrition Facts panel to guide their food choices. 
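The % Daily Value arithmetic behind the panel can be sketched numerically. One assumption not stated in the text: fat supplies about 9 kilocalories per gram, the standard conversion factor; the text itself gives only the 2000-calorie basis, the 30-percent-of-calories guideline, and the resulting allowance of around 65 grams.

```python
# Daily-value arithmetic for fat, as a sketch.
# Assumption: fat supplies about 9 kcal per gram (standard conversion
# factor, not stated in the text).
CALORIES_PER_DAY = 2000
MAX_FAT_FRACTION = 0.30
KCAL_PER_GRAM_FAT = 9

daily_fat_allowance_g = CALORIES_PER_DAY * MAX_FAT_FRACTION / KCAL_PER_GRAM_FAT
print(round(daily_fat_allowance_g, 1))  # 66.7 -- close to the 65 g cited above

def percent_dv_fat(grams_of_fat, allowance_g=65):
    """% DV of fat in one serving, against the ~65 g daily allowance."""
    return 100 * grams_of_fat / allowance_g

print(round(percent_dv_fat(14)))  # 22
```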
For example, based on a daily diet of 2000 calories, nutrition experts recommend that no more than 30 percent of those calories should be from fat, which would allow for a daily intake of around 65 grams of fat. A Nutrition Facts panel may indicate that a serving of one brand of macaroni and cheese contains 14 grams of fat, or a % DV of 25 percent. This tells the consumer that a serving of macaroni and cheese provides about one-fourth of the suggested healthy level of daily fat intake. If another brand of macaroni and cheese displays a % DV of 10 percent for fat, the nutrition-conscious consumer would opt for this brand. Nutritionists and other health experts help consumers make good food choices. People who study nutrition in college may refer to themselves as nutritionists; often, however, the term refers to a scientist who has pursued graduate education in this field. A nutritionist may also be a dietitian. Dietitians are trained in nutrition, food chemistry, and diet planning. In the United States, dietitians typically have graduated from a college program accredited by the American Dietetic Association (ADA), completed an approved program of clinical experience, and passed the ADA’s registration examination to earn the title Registered Dietitian (RD). Q14: What are fertilizers ? what do you understand by the term NPK fertilizer ? How do fertilizers contribute to water pollution ? Fertilizer, natural or synthetic chemical substance or mixture used to enrich soil so as to promote plant growth. Plants do not require complex chemical compounds analogous to the vitamins and amino acids required for human nutrition, because plants are able to synthesize whatever compounds they need. They do require more than a dozen different chemical elements and these elements must be present in such forms as to allow an adequate availability for plant use. 
Within this restriction, nitrogen, for example, can be supplied with equal effectiveness in the form of urea, nitrates, ammonium compounds, or pure ammonia. Virgin soil usually contains adequate amounts of all the elements required for proper plant nutrition. When a particular crop is grown on the same parcel of land year after year, however, the land may become exhausted of one or more specific nutrients. If such exhaustion occurs, nutrients in the form of fertilizers must be added to the soil. Plants can also be made to grow more lushly with suitable fertilizers. Of the required nutrients, hydrogen, oxygen, and carbon are supplied in inexhaustible form by air and water. Sulfur, calcium, and iron are necessary nutrients that usually are present in soil in ample quantities. Lime (calcium) is often added to soil, but its function is primarily to reduce acidity and not, in the strict sense, to act as a fertilizer. Nitrogen is present in enormous quantities in the atmosphere, but plants are not able to use nitrogen in this form; bacteria provide nitrogen from the air to plants of the legume family through a process called nitrogen fixation. The three elements that most commonly must be supplied in fertilizers are nitrogen, phosphorus, and potassium. Certain other elements, such as boron, copper, and manganese, sometimes need to be included in small quantities. Many fertilizers used since ancient times contain one or more of the three elements important to the soil. For example, manure and guano contain nitrogen. Bones contain small quantities of nitrogen and larger quantities of phosphorus. Wood ash contains appreciable quantities of potassium (depending considerably on the type of wood). Clover, alfalfa, and other legumes are grown as rotating crops and then plowed under, enriching the soil with nitrogen. The term complete fertilizer often refers to any mixture containing all three important elements; such fertilizers are described by a set of three numbers. 
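The three-number grade can be unpacked with simple arithmetic: each number is a percentage by weight of the bag. A sketch (the 5-8-7 grade follows the convention described in this answer; the 50 kg bag size is a hypothetical example):

```python
# Unpacking a fertilizer grade such as "5-8-7" into nutrient weights.
# Convention (from the text): percent N, percent P (as P2O5), percent K
# (as K2O). The 50 kg bag used below is illustrative.
def nutrient_weights(grade, bag_kg):
    """Return kg of N, P2O5-equivalent, and K2O-equivalent in one bag."""
    n_pct, p_pct, k_pct = (float(x) for x in grade.split("-"))
    return {
        "N": bag_kg * n_pct / 100,
        "P2O5": bag_kg * p_pct / 100,
        "K2O": bag_kg * k_pct / 100,
    }

print(nutrient_weights("5-8-7", 50))
# {'N': 2.5, 'P2O5': 4.0, 'K2O': 3.5}
```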
For example, 5-8-7 designates a fertilizer (usually in powder or granular form) containing 5 percent nitrogen, 8 percent phosphorus (calculated as phosphorus pentoxide), and 7 percent potassium (calculated as potassium oxide). While fertilizers are essential to modern agriculture, their overuse can have harmful effects on plants and crops and on soil quality. In addition, the leaching of nutrients into bodies of water can lead to water pollution problems such as eutrophication, by causing excessive growth of vegetation. The use of industrial waste materials in commercial fertilizers has been encouraged in the United States as a means of recycling waste products. The safety of this practice has recently been called into question. Its opponents argue that industrial wastes often contain elements that poison the soil and can introduce toxic chemicals into the food chain. PAPER 2003 Q 1. Write short notes on any two of the following : (a) Microwave Oven (b) Optic Fibre (c) Biotechnology I INTRODUCTION Biotechnology, the manipulation of biological organisms to make products that benefit human beings. Biotechnology contributes to such diverse areas as food production, waste disposal, mining, and medicine. Although biotechnology has existed since ancient times, some of its most dramatic advances have come in more recent years. Modern achievements include the transferal of a specific gene from one organism to another (by means of a set of genetic engineering techniques known as transgenics); the maintenance and growth of genetically uniform plant- and animal-cell cultures, called clones; and the fusing of different types of cells to produce beneficial medical products such as monoclonal antibodies, which are designed to attack a specific type of foreign substance. 
II HISTORY The first achievements in biotechnology were in food production, occurring about 5000 BC. Diverse strains of plants or animals were hybridized (crossed) to produce greater genetic variety. The offspring from these crosses were then selectively bred to produce the greatest number of desirable traits (see Genetics). Repeated cycles of selective breeding produced many present-day food staples. This method continues to be used in food-production programs. Corn (maize) was one of the first food crops known to have been cultivated by human beings. Although corn was used as food as early as 5000 BC in Mexico, no wild forms of the plant have ever been found, indicating that it was most likely the result of some fortunate agricultural experiment in antiquity. The modern era of biotechnology had its origin in 1953 when American biochemist James Watson and British biophysicist Francis Crick presented their double-helix model of DNA. This was followed by Swiss microbiologist Werner Arber's discovery in the 1960s of special enzymes, called restriction enzymes, in bacteria. These enzymes cut the DNA strands of any organism at precise points. In 1973 American geneticist Stanley Cohen and American biochemist Herbert Boyer removed a specific gene from one bacterium and inserted it into another using restriction enzymes. This event marked the beginning of recombinant DNA technology, commonly called genetic engineering. In 1977 genes from other organisms were transferred to bacteria. This achievement eventually led to the first transfer of a human gene, which coded for a hormone, to Escherichia coli bacteria. Although the transgenic bacteria (bacteria to which a gene from a different species has been transferred) could not use the human hormone, they produced it along with their own normal chemical compounds. In the 1960s an important project used hybridization followed by selective breeding to increase food production and quality of wheat and rice crops. 
American agriculturalist Norman Borlaug, who spearheaded the program, was awarded the Nobel Peace Prize in 1970 in recognition of the important contribution that increasing the world's food supply makes to the cause of peace. III CURRENT TRENDS Today biotechnology is applied in various fields. In waste management, for example, biotechnology is used to create new biodegradable materials. One such material is made from the lactic acid produced during the bacterial fermentation of discarded corn stalks. When individual lactic acid molecules are joined chemically, they form a material that has the properties of plastics but is biodegradable. Widespread production of plastic from this material is expected to become more economically viable in the future. Biotechnology also has applications in the mining industry. In its natural state, copper is found combined with other elements in the mineral chalcopyrite. The bacterium Thiobacillus ferrooxidans can use the molecules of copper found in chalcopyrite to form the compound copper sulfate (CuSO4), which, in turn, can be treated chemically to obtain pure copper. This microbiological mining process is used only with low-grade ores and currently accounts for about 10 percent of copper production in the United States. The percentage will rise, however, as conventionally mined high-grade deposits are exhausted. Procedures have also been developed for the use of bacteria in the mining of zinc, lead, and other metals. The field of medicine employs some of the most dramatic applications in biotechnology. One advance came in 1986 with the first significant laboratory production of factor VIII, a blood-clotting protein that is not produced, or has greatly reduced activity, in people who have hemophilia. As a result of this condition, hemophiliacs are at risk of bleeding to death after suffering minor cuts or bruises. 
In this biotechnological procedure, the human gene that codes for the blood-clotting protein is transferred to hamster cells grown in tissue culture, which then produce factor VIII for use by hemophiliacs. Factor VIII was approved for commercial production in 1992. IV CONTROVERSIES Some people, including scientists, object to any procedure that changes the genetic composition of an organism. Critics are concerned that some of the genetically altered forms will eliminate existing species, thereby upsetting the natural balance of organisms. There are also fears that recombinant DNA experiments with pathogenic microorganisms may result in the formation of extremely virulent forms which, if accidentally released from the laboratory, will cause worldwide epidemics. Some critics cite ethical dilemmas associated with the production of transgenic organisms. In 1976, in response to fears of disastrous consequences of unregulated genetic engineering procedures, the National Institutes of Health created a body of rules governing the handling of microorganisms in recombinant DNA experiments. Although many of the rules have been relaxed over time, certain restrictions are still imposed on those working with pathogenic microorganisms. Q2: Give names of the members of the solar system. Briefly write down main characteristics of: (a) Mars (b) Venus Solar System, the Sun and everything that orbits the Sun, including the nine planets and their satellites; the asteroids and comets; and interplanetary dust and gas. The term may also refer to a group of celestial bodies orbiting another star (see Extrasolar Planets). In this article, solar system refers to the system that includes Earth and the Sun. Planet, any major celestial body that orbits a star and does not emit visible light of its own but instead shines by reflected light. Smaller bodies that also orbit a star and are not satellites of a planet are called asteroids or planetoids. 
In the solar system, there are nine planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto. Planets that orbit stars other than the Sun are collectively called extrasolar planets. Some extrasolar planets are nearly large enough to become stars themselves. Such borderline planets are called brown dwarfs. Mars: Mars (planet), one of the planets in the solar system, is the fourth planet from the Sun and orbits the Sun at an average distance of about 228 million km (about 141 million mi). Mars is named for the Roman god of war and is sometimes called the red planet because it appears fiery red in Earth’s night sky. Mars is a relatively small planet, with about half the diameter of Earth and about one-tenth Earth’s mass. The force of gravity on the surface of Mars is about one-third of that on Earth. Mars has twice the diameter and twice the surface gravity of Earth’s Moon. The surface area of Mars is almost exactly the same as the surface area of the dry land on Earth. Mars is believed to be about the same age as Earth, having formed from the same spinning, condensing cloud of gas and dust that formed the Sun and the other planets about 4.6 billion years ago. Venus: Venus (planet), one of the planets in the solar system, the second in distance from the Sun. Except for the Sun and the Moon, Venus is the brightest object in the sky. The planet was named for the Roman goddess of beauty. It is often called the morning star when it appears in the east at sunrise, and the evening star when it is in the west at sunset. In ancient times the evening star was called Hesperus and the morning star Phosphorus or Lucifer. Because the planet orbits closer to the Sun than Earth does, Venus seems to either precede or trail the Sun in the sky. Venus is never visible more than three hours before sunrise or three hours after sunset. 
Q 6 : Define any five of the following : (i) Acoustic Acoustics (Greek akouein, “to hear”), term sometimes used for the science of sound in general. It is more commonly used for the special branch of that science, architectural acoustics, that deals with the construction of enclosed areas so as to enhance the hearing of speech or music. For the treatment of acoustics as a branch of the pure science of physics, see Sound. The acoustics of buildings was an undeveloped aspect of the study of sound until comparatively recent times. The Roman architect Marcus Pollio, who lived during the 1st century BC, made some pertinent observations on the subject and some astute guesses concerning reverberation and interference. The scientific aspects of this subject, however, were first thoroughly treated by the American physicist Joseph Henry in 1856 and more fully developed by the American physicist Wallace Sabine in 1900. (ii) Quartz Quartz, second most common of all minerals, composed of silicon dioxide, or silica, SiO2. It is distributed all over the world as a constituent of rocks and in the form of pure deposits. It is an essential constituent of igneous rocks such as granite, rhyolite, and pegmatite, which contain an excess of silica. In metamorphic rocks, it is a major constituent of the various forms of gneiss and schist; the metamorphic rock quartzite is composed almost entirely of quartz. Quartz forms veins and nodules in sedimentary rock, principally limestone. Sandstone, a sedimentary rock, is composed mainly of quartz. Many widespread veins of quartz deposited in rock fissures form the matrix for many valuable minerals. Precious metals, such as gold, are found in sufficient quantity in quartz veins to warrant the mining of quartz to recover the precious mineral. Quartz is also the primary constituent of sand. (iii) Pollination Pollination, transfer of pollen grains from the male structure of a plant to the female structure of a plant. 
The pollen grains contain cells that will develop into male sex cells, or sperm. The female structure of a plant contains the female sex cells, or eggs. Pollination prepares the plant for fertilization, the union of the male and female sex cells. Virtually all grains, fruits, vegetables, wildflowers, and trees must be pollinated and fertilized to produce seed or fruit, and pollination is vital for the production of critically important agricultural crops, including corn, wheat, rice, apples, oranges, tomatoes, and squash. In order for pollination to be successful, pollen must be transferred between plants of the same species—for example, a rose flower must always receive rose pollen and a pine tree must always receive pine pollen. Plants typically rely on one of two methods of pollination: cross-pollination or self-pollination, but some species are capable of both. Most plants are designed for cross-pollination, in which pollen is transferred between different plants of the same species. Cross-pollination ensures that beneficial genes are transmitted relatively rapidly to succeeding generations. If a beneficial gene occurs in just one plant, that plant’s pollen or eggs can produce seeds that develop into numerous offspring carrying the beneficial gene. The offspring, through cross-pollination, transmit the gene to even more plants in the next generation. Cross-pollination introduces genetic diversity into the population at a rate that enables the species to cope with a changing environment. New genes ensure that at least some individuals can endure new diseases, climate changes, or new predators, enabling the species as a whole to survive and reproduce. Plant species that use cross-pollination have special features that enhance this method. For instance, some plants have pollen grains that are lightweight and dry so that they are easily swept up by the wind and carried for long distances to other plants. 
Other plants have pollen and eggs that mature at different times, preventing the possibility of self-pollination. In self-pollination, pollen is transferred from the stamens to the pistil within one flower. The resulting seeds and the plants they produce inherit the genetic information of only one parent, and the new plants are genetically identical to the parent. The advantage of self-pollination is the assurance of seed production when no pollinators, such as bees or birds, are present. It also sets the stage for rapid propagation—weeds typically self-pollinate, and they can produce an entire population from a single plant. The primary disadvantage of self-pollination is that it results in genetic uniformity of the population, which makes the population vulnerable to extinction by, for example, a single devastating disease to which all the genetically identical plants are equally susceptible. Another disadvantage is that beneficial genes do not spread as rapidly as in cross-pollination, because one plant with a beneficial gene can transmit it only to its own offspring and not to other plants. Self-pollination evolved later than cross-pollination, and may have developed as a survival mechanism in harsh environments where pollinators were scarce. (iv) Allele All genetic traits result from different combinations of gene pairs, one gene inherited from the mother and one from the father. Each trait is thus represented by two genes, often in different forms. Different forms of the same gene are called alleles. Traits depend on very precise rules governing how genetic units are expressed through generations. For example, some people have the ability to roll their tongue into a U-shape, while others can only curve their tongue slightly. A single gene with two alleles controls this heritable trait. If a child inherits the allele for tongue rolling from one parent and the allele for no tongue rolling from the other parent, she will be able to roll her tongue. 
The allele for tongue rolling dominates the gene pair, and so its trait is expressed. According to the laws governing heredity, when a dominant allele (in this case, tongue rolling) and a recessive allele (no tongue rolling) combine, the trait will always be dictated by the dominant allele. The no tongue rolling trait, or any other recessive trait, will only occur in an individual who inherits the two recessive alleles. (v) Optical Illusion Optical Illusion, a perception of a visual stimulus that differs from the way the stimulus actually exists. Optical illusions arise when the eye and brain misinterpret visual information. Familiar examples include a straight stick that appears bent where it enters water (because light is refracted at the surface), mirages over hot roads or desert sand, and the Moon appearing larger near the horizon than when it is overhead. Geometric illusions, in which lines of equal length appear unequal or parallel lines appear to converge, show that perception depends on the surrounding context as well as on the stimulus itself. (vi) Ovulation Unlike germ cells in the testis, female germ cells originate as single cells in the embryonic tissue that later develops into an ovary. At maturity, after the production of ova from the female germ cells, groups of ovary cells surrounding each ovum develop into “follicle cells” that secrete nutriment for the contained egg. 
As the ovum is prepared for release during the breeding season, the tissue surrounding the ovum hollows out and becomes filled with fluid and at the same time moves to the surface of the ovary; this mass of tissue, fluid, and ovum is known as a Graafian follicle. The ovary of the adult is merely a mass of glandular and connective tissue containing numerous Graafian follicles at various stages of maturity. When the Graafian follicle is completely mature, it bursts through the surface of the ovary, releasing the ovum, which is then ready for fertilization; the release of the ovum from the ovary is known as ovulation. The space formerly occupied by the Graafian follicle is filled by a blood clot known as the corpus hemorrhagicum; in four or five days this clot is replaced by a mass of yellow cells known as the corpus luteum, which secretes hormones playing an important part in preparation of the uterus for the reception of a fertilized ovum. If the ovum goes unfertilized, the corpus luteum is eventually replaced by scar tissue known as the corpus albicans. The ovary is located in the body cavity, attached to the peritoneum that lines this cavity. (vii) Aqua Regia Aqua Regia (Latin, “royal water”), mixture of concentrated hydrochloric and nitric acids, containing one part by volume of nitric acid (HNO3) to three parts of hydrochloric acid (HCl). Aqua regia was used by the alchemists (see Alchemy) and its name is derived from its ability to dissolve the so-called noble metals, particularly gold, which are inert to either of the acids used separately. It is still occasionally used in the chemical laboratory for dissolving gold and platinum. Aqua regia is a powerful solvent because of the combined effects of the H+, NO3-, and Cl- ions in solution. The three ions react with gold atoms, for example, to form water, nitric oxide (NO), and the stable ion AuCl4-, which remains in solution. 
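The 1:3 nitric-to-hydrochloric ratio above translates directly into volumes. A sketch of the arithmetic (illustrative only; preparing real aqua regia requires proper laboratory safety precautions):

```python
# Splitting a total volume into the 1:3 HNO3-to-HCl ratio given in the text.
def aqua_regia_volumes(total_ml):
    """Return (mL of HNO3, mL of HCl) for a given total volume."""
    hno3 = total_ml / 4        # one part nitric acid
    hcl = 3 * total_ml / 4     # three parts hydrochloric acid
    return hno3, hcl

print(aqua_regia_volumes(40.0))  # (10.0, 30.0)
```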
Q12: Differentiate between the following pairs:

(A) Lava and Magma

Lava, molten or partially molten rock that erupts at the earth’s surface. When lava comes to the surface, it is red-hot, reaching temperatures as high as 1200° C (2200° F). Some lava can be as thick and viscous as toothpaste, while other lava can be as thin and fluid as warm syrup and flow rapidly down the sides of a volcano. Molten rock that has not yet erupted is called magma. Once lava hardens it forms igneous rock. Volcanoes build up where lava erupts from a central vent. Flood basalt forms where lava erupts from huge fissures. The eruption of lava is the principal mechanism whereby new crust is produced (see Plate Tectonics). Since lava is generated at depth, its chemical and physical characteristics provide indirect information about the chemical composition and physical properties of the rocks 50 to 150 km (30 to 90 mi) below the surface.

Magma, molten or partially molten rock beneath the earth’s surface. Magma is generated when rock deep underground melts due to the high temperatures and pressures inside the earth. Because magma is lighter than the surrounding rock, it tends to rise. As it moves upward, the magma encounters colder rock and begins to cool. If the temperature of the magma drops low enough, the magma will crystallize underground to form rock; rock that forms in this way is called intrusive, or plutonic, igneous rock, because the magma has intruded the surrounding rocks. If the crust through which the magma passes is sufficiently shallow, warm, or fractured, and if the magma is sufficiently hot and fluid, the magma will erupt at the surface of the earth, possibly forming volcanoes. Magma that erupts is called lava.

(B) Ultraviolet and Infrared

Ultraviolet Radiation, electromagnetic radiation that has wavelengths in the range between 4000 angstrom units (Å), the wavelength of violet light, and 150 Å, the wavelength of X rays.
Natural ultraviolet radiation is produced principally by the sun. Ultraviolet radiation is produced artificially by electric-arc lamps (see Electric Arc). Ultraviolet radiation is often divided into three categories based on wavelength: UV-A, UV-B, and UV-C. In general, shorter wavelengths of ultraviolet radiation are more dangerous to living organisms. UV-A has a wavelength from 4000 Å to about 3150 Å. UV-B occurs at wavelengths from about 3150 Å to about 2800 Å and causes sunburn; prolonged exposure to UV-B over many years can cause skin cancer. UV-C has wavelengths of about 2800 Å to 150 Å and is used to sterilize surfaces because it kills bacteria and viruses. The earth's atmosphere protects living organisms from the sun's ultraviolet radiation. If all the ultraviolet radiation produced by the sun were allowed to reach the surface of the earth, most life on earth would probably be destroyed. Fortunately, the ozone layer of the atmosphere absorbs almost all of the short-wavelength ultraviolet radiation and much of the long-wavelength ultraviolet radiation. However, ultraviolet radiation is not entirely harmful; a large portion of the vitamin D that humans and animals need for good health is produced when their skin is irradiated by ultraviolet rays. When exposed to ultraviolet light, many substances behave differently than when exposed to visible light. For example, when exposed to ultraviolet radiation, certain minerals, dyes, vitamins, natural oils, and other products become fluorescent—that is, they appear to glow. Molecules in the substances absorb the invisible ultraviolet light, become energetic, and then shed their excess energy by emitting visible light. As another example, ordinary window glass, transparent to visible light, is opaque to a large portion of ultraviolet rays, particularly ultraviolet rays with short wavelengths.
Special-formula glass is transparent to the longer ultraviolet wavelengths, and quartz is transparent to the entire naturally occurring range. In astronomy, ultraviolet-radiation detectors have been used since the early 1960s on artificial satellites, providing data on stellar objects that cannot be obtained from the earth's surface. An example of such a satellite is the International Ultraviolet Explorer, launched in 1978.

INFRARED RADIATION

Infrared Radiation, emission of energy as electromagnetic waves in the portion of the spectrum just beyond the limit of the red portion of visible radiation (see Electromagnetic Radiation). The wavelengths of infrared radiation are shorter than those of radio waves and longer than those of light waves. They range between approximately 10^-6 and 10^-3 m (about 0.00004 and 0.04 in). Infrared radiation may be detected as heat, and instruments such as bolometers are used to detect it. See Radiation; Spectrum. Infrared radiation is used to obtain pictures of distant objects obscured by atmospheric haze, because visible light is scattered by haze but infrared radiation is not. The detection of infrared radiation is used by astronomers to observe stars and nebulas that are invisible in ordinary light or that emit radiation in the infrared portion of the spectrum. An opaque filter that admits only infrared radiation is used for very precise infrared photographs, but an ordinary orange or light-red filter, which will absorb blue and violet light, is usually sufficient for most infrared pictures. Developed about 1880, infrared photography has today become an important diagnostic tool in medical science as well as in agriculture and industry. Use of infrared techniques reveals pathogenic conditions that are not visible to the eye or recorded on X-ray plates.
Remote sensing by means of aerial and orbital infrared photography has been used to monitor crop conditions and insect and disease damage to large agricultural areas, and to locate mineral deposits. See Aerial Survey; Satellite, Artificial. In industry, infrared spectroscopy forms an increasingly important part of metal and alloy research, and infrared photography is used to monitor the quality of products. See also Photography: Photographic Films. Infrared devices such as those used during World War II enable sharpshooters to see their targets in total visual darkness. These instruments consist essentially of an infrared lamp that sends out a beam of infrared radiation, often referred to as black light, and a telescope receiver that picks up returned radiation from the object and converts it to a visible image.

(C) Fault and Fold

Fold (geology), bend in a rock layer caused by forces within the crust of the earth. The forces that cause folds range from slight differences in pressure in the earth’s crust to large collisions of the crust’s tectonic plates. As a result, a fold may be only a few centimeters in width, or it may cover several kilometers. Rock layers can also break in response to these forces, in which case a fault occurs. Folds usually occur in a series and look like waves. If the rocks have not been turned upside down, then the crests of the waves are called anticlines and the troughs are called synclines (see Anticline and Syncline).

Fault (geology), crack in the crust of the earth along which there has been movement of the rocks on either side of the crack. A crack without movement is called a joint. Faults occur on a wide scale, ranging in length from millimeters to thousands of kilometers. Large-scale faults result from the movement of tectonic plates, continent-sized slabs of the crust that move as coherent pieces (see Plate Tectonics).
(D) Caustic Soda and Caustic Potash

Electrolytic decomposition is the basis for a number of important extractive and manufacturing processes in modern industry. Caustic soda, an important chemical in the manufacture of paper, rayon, and photographic film, is produced by the electrolysis of a solution of common salt in water (see Alkalies). The reaction produces chlorine and sodium. The sodium in turn reacts with the water in the cell to yield caustic soda. The chlorine evolved is used in pulp and paper manufacture. Caustic soda, or sodium hydroxide, NaOH, is an important commercial product, used in making soap, rayon, and cellophane; in processing paper pulp; in petroleum refining; and in the manufacture of many other chemical products. Caustic soda is manufactured principally by electrolysis of a common salt solution, with chlorine and hydrogen as important by-products.

Potassium hydroxide (KOH), called caustic potash, a white solid that absorbs moisture from the air, is prepared by the electrolysis of potassium chloride or by the reaction of potassium carbonate and calcium hydroxide; it is used in the manufacture of soap and is an important chemical reagent. It dissolves in less than its own weight of water, liberating heat and forming a strongly alkaline solution.

(E) S.E.M. and T.E.M.

Q15: Laser

I INTRODUCTION

Laser, a device that produces and amplifies light. The word laser is an acronym for Light Amplification by Stimulated Emission of Radiation. Laser light is very pure in color, can be extremely intense, and can be directed with great accuracy. Lasers are used in many modern technological devices including bar code readers, compact disc (CD) players, and laser printers. Lasers can generate light beyond the range visible to the human eye, from the infrared through the X-ray range. Masers are similar devices that produce and amplify microwaves.
II PRINCIPLES OF OPERATION

Lasers generate light by storing energy in particles called electrons inside atoms and then inducing the electrons to emit the absorbed energy as light. Atoms are the building blocks of all matter on Earth and are a thousand times smaller than viruses. Electrons are the underlying source of almost all light. Light is composed of tiny packets of energy called photons. Lasers produce coherent light: light that is monochromatic (one color) and whose photons are “in step” with one another.

A Excited Atoms

At the heart of an atom is a tightly bound cluster of particles called the nucleus. This cluster is made up of two types of particles: protons, which have a positive charge, and neutrons, which have no charge. The nucleus makes up more than 99.9 percent of the atom’s mass but occupies only a tiny part of the atom’s space. If an atom were enlarged to the size of Yankee Stadium, the equally magnified nucleus would be only the size of a baseball. Electrons, tiny particles that have a negative charge, whirl through the rest of the space inside atoms. Electrons travel in complex orbits and exist only in certain specific energy states or levels (see Quantum Theory). Electrons can move from a low to a high energy level by absorbing energy. An atom with at least one electron that occupies a higher energy level than it normally would is said to be excited. An atom can become excited by absorbing a photon whose energy equals the difference between the two energy levels. A photon’s energy, color, frequency, and wavelength are directly related: all photons of a given energy are the same color and have the same frequency and wavelength. Usually, electrons quickly jump back to the low energy level, giving off the extra energy as light (see Photoelectric Effect). Neon signs and fluorescent lamps glow with this kind of light as many electrons independently emit photons of different colors in all directions.
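The direct relation between a photon's energy and its wavelength can be made concrete with a short calculation. This is an illustrative sketch, not part of the original text: the helper function is mine, the constants are the standard values of Planck's constant and the speed of light, and the sample wavelengths are the ultraviolet band edges quoted earlier in these notes.

```python
# Photon energy E = h*c / wavelength: shorter wavelength means higher energy.
H = 6.626e-34  # Planck's constant, joule-seconds
C = 2.998e8    # speed of light, meters per second

def photon_energy_ev(wavelength_angstrom: float) -> float:
    """Return the photon energy in electron volts for a wavelength in angstroms."""
    wavelength_m = wavelength_angstrom * 1e-10   # 1 angstrom = 10^-10 m
    energy_joules = H * C / wavelength_m
    return energy_joules / 1.602e-19             # convert joules to eV

# Evaluated at the band edges from the ultraviolet section above; the rising
# energies show why shorter-wavelength UV is more dangerous to living tissue.
for name, wavelength in [("violet edge (4000 A)", 4000),
                         ("UV-B/UV-C edge (2800 A)", 2800),
                         ("X-ray edge (150 A)", 150)]:
    print(f"{name}: {photon_energy_ev(wavelength):.1f} eV")
```

A photon at the violet edge carries about 3.1 eV, while one at the short end of UV-C carries roughly 80 eV, enough to break chemical bonds, which is the physical basis of the sterilizing and sunburn effects described earlier.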
B Stimulated Emission

Lasers are different from more familiar sources of light. Excited atoms in lasers collectively emit photons of a single color, all traveling in the same direction and all in step with one another. When two photons are in step, the peaks and troughs of their waves line up. The electrons in the atoms of a laser are first pumped, or energized, to an excited state by an energy source. An excited atom can then be “stimulated” by a photon of exactly the same color (or, equivalently, the same wavelength) as the photon the atom is about to emit spontaneously. If the photon approaches closely enough, it can stimulate the excited atom to immediately emit light that has the same wavelength and is in step with the photon that interacted with it. This stimulated emission is the key to laser operation. The new light adds to the existing light, and the two photons go on to stimulate other excited atoms to give up their extra energy, again in step. The phenomenon snowballs into an amplified, coherent beam of light: laser light. In a gas laser, for example, the photons usually zip back and forth in a gas-filled tube with highly reflective mirrors facing inward at each end. As the photons bounce between the two parallel mirrors, they trigger further stimulated emissions, and the light gets brighter and brighter with each pass through the excited atoms. One of the mirrors is only partially silvered, allowing a small amount of light to pass through rather than reflecting it all. The intense, directional, and single-colored laser light finally escapes through this slightly transparent mirror. The escaped light forms the laser beam.

Albert Einstein first proposed stimulated emission, the underlying process for laser action, in 1917. Translating the idea of stimulated emission into a working model, however, required more than four decades.
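The snowballing amplification between the two mirrors can be illustrated with a toy calculation. This is a minimal sketch under assumed numbers: the per-pass gain (1.05) and the output-mirror transmission (0.02) are arbitrary illustrative values, not measurements of any real laser.

```python
# Toy model of a laser cavity: each pass through the excited atoms multiplies
# the light intensity by a gain factor, and the partially silvered mirror lets
# a small fraction escape as the output beam.

def cavity_passes(initial=1.0, gain=1.05, transmission=0.02, passes=200):
    """Return (intensity left inside the cavity, total light emitted)."""
    intensity = initial
    emitted = 0.0
    for _ in range(passes):
        intensity *= gain                 # stimulated emission amplifies the light
        leak = intensity * transmission   # fraction escaping through the output mirror
        emitted += leak
        intensity -= leak                 # the rest reflects back for another pass
    return intensity, emitted

inside, beam = cavity_passes()
print(f"intensity inside cavity after 200 passes: {inside:.1f}")
print(f"total intensity emitted through the output mirror: {beam:.1f}")
```

As long as the gain per pass exceeds the loss through the mirror, the intensity grows exponentially; if the loss wins, the light dies out instead. That threshold condition is the essence of why pumping the atoms hard enough is required before a laser will "lase."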
The working principles of lasers were outlined by the American physicists Charles Hard Townes and Arthur Leonard Schawlow in a 1958 patent application. (Both men won Nobel Prizes in physics for their work, Townes in 1964 and Schawlow in 1981.) The patent for the laser was granted to Townes and Schawlow, but it was later challenged by the American physicist and engineer Gordon Gould, who had written down some ideas and coined the word laser in 1957. Gould eventually won a partial patent covering several types of laser. In 1960 American physicist Theodore Maiman of Hughes Aircraft Corporation constructed the first working laser from a ruby rod.

III TYPES OF LASERS

Lasers are generally classified according to the material, called the medium, they use to produce the laser light. Solid-state, gas, liquid, semiconductor, and free electron lasers are all common types.

A Solid-State Lasers

Solid-state lasers produce light by means of a solid medium. The most common solid laser media are rods of ruby crystals and neodymium-doped glasses and crystals. The ends of the rods are fashioned into two parallel surfaces coated with a highly reflecting nonmetallic film. Solid-state lasers offer the highest power output. They are usually pulsed to generate a very brief burst of light. Bursts as short as 12 × 10^-15 sec have been achieved. These short bursts are useful for studying physical phenomena of very brief duration. One method of exciting the atoms in lasers is to illuminate the solid laser material with higher-energy light than the laser produces. This procedure, called pumping, is achieved with brilliant strobe light from xenon flash tubes, arc lamps, or metal-vapor lamps.

B Gas Lasers

The lasing medium of a gas laser can be a pure gas, a mixture of gases, or even metal vapor. The medium is usually contained in a cylindrical glass or quartz tube. Two mirrors are located outside the ends of the tube to form the laser cavity.
Gas lasers can be pumped by ultraviolet light, electron beams, electric current, or chemical reactions. The helium-neon laser is known for its color purity and minimal beam spread. Carbon dioxide lasers are very efficient at turning the energy used to excite their atoms into laser light. Consequently, they are the most powerful continuous wave (CW) lasers—that is, lasers that emit light continuously rather than in pulses.

C Liquid Lasers

The most common liquid laser media are organic dyes contained in glass vessels. They are pumped by intense flash lamps in a pulse mode or by a separate gas laser in the continuous wave mode. Some dye lasers are tunable, meaning that the color of the laser light they emit can be adjusted with the help of a prism located inside the laser cavity.

D Semiconductor Lasers

Semiconductor lasers are the most compact lasers. Gallium arsenide is the most common semiconductor used. A typical semiconductor laser consists of a junction between two flat layers of gallium arsenide. One layer is treated with an impurity whose atoms provide an extra electron, and the other with an impurity whose atoms are one electron short. Semiconductor lasers are pumped by the direct application of electric current across the junction. They can be operated in the continuous wave mode with better than 50 percent efficiency. Only a small percentage of the energy used to excite most other lasers is converted into light. Scientists have developed extremely tiny semiconductor lasers, called quantum-dot vertical-cavity surface-emitting lasers. These lasers are so tiny that more than a million of them can fit on a chip the size of a fingernail. Common uses for semiconductor lasers include compact disc (CD) players and laser printers. Semiconductor lasers also form the heart of fiber-optics communication systems (see Fiber Optics).

E Free Electron Lasers

Free electron lasers employ an array of magnets to excite free electrons (electrons not bound to atoms).
First developed in 1977, they are now becoming important research instruments. Free electron lasers are tunable over a broader range of energies than dye lasers. The devices become more difficult to operate at higher energies but generally work successfully from infrared through ultraviolet wavelengths. Theoretically, free electron lasers can function even in the X-ray range. The free electron laser facility at the University of California at Santa Barbara uses intense far-infrared light to investigate mutations in DNA molecules and to study the properties of semiconductor materials. Free electron lasers should also eventually become capable of producing very high-power radiation that is currently too expensive to produce. At high power, near-infrared beams from a free electron laser could defend against a missile attack.

IV LASER APPLICATIONS

The use of lasers is restricted only by imagination. Lasers have become valuable tools in industry, scientific research, communications, medicine, the military, and the arts.

A Industry

Powerful laser beams can be focused on a small spot to generate enormous temperatures. Consequently, the focused beams can readily and precisely heat, melt, or vaporize material. Lasers have been used, for example, to drill holes in diamonds, to shape machine tools, to trim microelectronics, to cut fashion patterns, to synthesize new material, and to attempt to induce controlled nuclear fusion (see Nuclear Energy). Highly directional laser beams are used for alignment in construction. Perfectly straight and uniformly sized tunnels, for example, may be dug using lasers for guidance. Powerful, short laser pulses also make possible high-speed photography with exposure times of only a few trillionths of a second.

B Scientific Research

Because laser light is highly directional and monochromatic, extremely small amounts of light scattering and small shifts in color caused by the interaction between laser light and matter can easily be detected.
By measuring the scattering and color shifts, scientists can study molecular structures of matter. Chemical reactions can be selectively induced, and the existence of trace substances in samples can be detected. Lasers are also the most effective detectors of certain types of air pollution (see Chemical Analysis; Photochemistry). Scientists use lasers to make extremely accurate measurements. Lasers are used in this way for monitoring small movements associated with plate tectonics and for geographic surveys. Lasers have been used for precise determination (to within one inch) of the distance between Earth and the Moon, and in precise tests to confirm Einstein’s theory of relativity. Scientists also have used lasers to determine the speed of light to an unprecedented accuracy. Very fast laser-activated switches are being developed for use in particle accelerators. Scientists also use lasers to trap single atoms and subatomic particles in order to study these tiny bits of matter (see Particle Trap).

C Communications

Laser light can travel a large distance in outer space with little reduction in signal strength. In addition, high-energy laser light can carry 1,000 times as many television channels as the microwave signals used today. Lasers are therefore ideal for space communications. Low-loss optical fibers have been developed to transmit laser light for earthbound communication in telephone and computer systems. Laser techniques have also been used for high-density information recording. For instance, laser light simplifies the recording of a hologram, from which a three-dimensional image can be reconstructed with a laser beam. Lasers are also used to play audio CDs and videodiscs (see Sound Recording and Reproduction).

D Medicine

Lasers have a wide range of medical uses. Intense, narrow beams of laser light can cut and cauterize certain body tissues in a small fraction of a second without damaging surrounding healthy tissues.
Lasers have been used to “weld” the retina, bore holes in the skull, vaporize lesions, and cauterize blood vessels. Laser surgery has virtually replaced older surgical procedures for eye disorders. Laser techniques have also been developed for lab tests of small biological samples.

E Military Applications

Laser guidance systems for missiles, aircraft, and satellites have been constructed. Guns can be fitted with laser sights and range finders. The use of laser beams to destroy hostile ballistic missiles has been proposed, as in the Strategic Defense Initiative urged by U.S. president Ronald Reagan and the Ballistic Missile Defense program supported by President George W. Bush. The ability of tunable dye lasers to selectively excite an atom or molecule may open up more efficient ways to separate isotopes for construction of nuclear weapons.

V LASER SAFETY

Because the eye focuses laser light just as it does other light, the chief danger in working with lasers is eye damage. Therefore, laser light should not be viewed either directly or reflected. Lasers sold and used commercially in the United States must comply with a strict set of laws enforced by the Center for Devices and Radiological Health (CDRH), a department of the Food and Drug Administration. The CDRH has divided lasers into six groups, depending on their power output, their emission duration, and the energy of the photons they emit. The classification is then attached to the laser as a sticker. The higher the laser’s energy, the higher its potential to injure. High-powered lasers of the Class IV type (the highest classification) generate a beam of energy that can start fires, burn flesh, and cause permanent eye damage whether the light is direct, reflected, or diffused. Canada uses the same classification system, and laser use in Canada is overseen by Health Canada’s Radiation Protection Bureau. Goggles blocking the specific color of photons that a laser produces are mandatory for the safe use of lasers.
Even with goggles, direct exposure to laser light should be avoided.