How a Jet Engine Operates

A turbojet engine is essentially a machine designed for the sole purpose of producing high-velocity gases, which are discharged through the jet nozzle at the rear of the engine. The engine is started by rotating the compressor with a starter, then igniting the mixture of fuel and air in the combustion chamber with one or more igniters. When the engine has started and its compressor is rotating properly, the starter and the igniters are turned off. The engine will then run without further assistance as long as fuel and air in the proper proportions continue to enter the combustion chamber.
The gases created by a fuel and air mixture burning under normal atmospheric pressure do not expand enough to do useful work. Air under pressure must be mixed with the fuel before the gases produced by combustion can be successfully employed to make a turbojet operate. The more air an engine can compress and use, the greater is the power or thrust it can produce.
In a jet engine the fuel and air mixture is compressed by means of a centrifugal
compressor. The power necessary to drive the compressor in a turbojet engine is
very high. To indicate how much power is absorbed by the compressor of a
moderately large turbojet, let us assume that we have an engine that produces
10,000 pounds of thrust for take-off. In this engine, the turbine has to produce approximately 35,000 shaft horsepower to drive the compressor when the engine
is operating at full thrust. About three-quarters of the power generated inside a
jet engine is used to drive the compressor. Only what is left over is available to
produce the thrust needed to propel the airplane.
Single-stage centrifugal compressors are practical for pressure ratios up to about 4:1. Higher pressures can be achieved, but at a decrease in efficiency. It is
possible to obtain higher pressures by using more than one stage of compression.
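As a rough numerical check on the figures quoted above, the sketch below (Python; the inputs are the numbers given in the text) works out the power left over for thrust and the overall pressure ratio obtained by stacking compression stages.

```python
# Figures from the text: the compressor of a 10,000-lb-thrust engine
# absorbs about 35,000 shaft horsepower, roughly three-quarters of the
# power generated inside the engine.

compressor_hp = 35_000
compressor_fraction = 0.75

total_hp = compressor_hp / compressor_fraction
thrust_hp = total_hp - compressor_hp
print(f"Power left over to produce thrust: {thrust_hp:,.0f} hp")  # ~11,667 hp

# Stacking compression stages multiplies the per-stage pressure ratio
# (about 4:1 per centrifugal stage).
for stages in (1, 2, 3):
    print(f"{stages} stage(s): overall pressure ratio {4.0 ** stages:.0f}:1")
```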
Liquid Propellants
The energy developed in the rocket engine for propulsion purposes is derived
from the thermochemical energy of the propellants. Their chemical reaction,
called "combustion" is accompanied by the generation of large quantities of
gases at high temperature. Since the fuels employed are hydrocarbons, the
products of combustion usually contain carbon dioxide (CO2), carbon
monoxide (CO) and water (H 2O) in the form of steam, as the principal
constituents. The temperature attained by the reaction and the composition of
the reaction products are influenced to a large extent by the mixture ratio 1 of
the propellants. The pressure in the combustion chamber 2 where the reaction
takes place influences the completeness of the reaction. The higher the
chamber pressure, the more complete the reaction. High chamber pressure gives
improved performance; however, the improvement decreases for valves of
chamber pressure above approximately 300 psia.
The ideal liquid rocket propellant is one that meets the following
requirements:
1. The heat value per pound of propellant should be as high as possible.
2. The density should be high to keep space requirements low.
3. The propellant should be easily stored and present no special
handling problems.
4. The corrosiveness of the propellant should be low.
5. The performance of the propellant combination should not
be greatly affected by temperature changes.
6. The ignition should be reliable and smooth.
7. The propellant should be stable for reasonable lengths of time.
8. The viscosity change with temperature change should be low
so that the pumping work at low temperatures will not be excessive.
None of the known liquid rocket propellants satisfies all these requirements.
The Temperature Problem
A problem that has become of increasing importance as the speeds of aircraft have risen is that of temperature. The temperatures associated with the very high energies dissipated during re-entry of a missile are frequently above the melting point of most materials. Even the temperatures associated with the leading edges of airplanes in supersonic flight are high enough to reduce severely the strength characteristics of the structural materials. Three methods have been used to overcome the temperature problem. For certain missile re-entry applications, it is possible to construct the body with a shielding of material that is able to absorb the heat generated during the re-entry manoeuvre by merely melting or burning away, leaving the main structure undamaged. In cases where such an approach would be unsatisfactory, efforts have been made to combat the temperature by utilizing cooling systems, such as feeding water under pressure through the leading edge and absorbing the excess heat by converting it to steam. At lower speeds, temperature-resistant materials, such as stainless steel or titanium or even certain aluminum alloys, have proved a very satisfactory approach.
STOLs and VTOLs
STOL stands for short take-off and landing. A STOL looks like a conventional aircraft, but depends on powerful engines and stabilization devices for landing and take-off. These might include large retractable flaps to increase wing area at low speeds and to deflect the airstream downward for increased lift.
Being faster than helicopters but requiring more space to land, STOLs might be used in intercity operations between suburban airports.
VTOL stands for vertical take-off and landing. It should be noted that
VTOL craft can also operate in the STOL mode where landing space is
available. All VTOLs pose difficult technical problems. While an ordinary
aircraft can develop lift slowly by increasing speed along a runway, the
VTOL must take off without this kind of help. It seeks all its initial lift
without any forward speed. This requires a great amount of lifting power,
which is likely to be needed only for take-off and landing. The result is
lower payload, higher costs, and shorter range.
Operating costs are improving, but are still higher than those of
conventional aircraft. Nevertheless, there is no question that there is a
place for VTOLs — assuming a satisfactory design can be found.
A number of different kinds of VTOL have been built or are under study.
A model of the strange-looking ADAM II has already been built and is
being tested. ADAM stands for Air Deflection and Modulation. Turbofan engines will be located right in the wings and nose. To obtain upward thrust, the fixed-wing design diverts the airflow downward through a series of louvers or slats. ADAM is planned as a high-sonic craft, which may bring it into the 600-mph class. Finally, work is proceeding on several supersonic, jet-driven VTOLs. These, as well as ADAM, are the kind of high-performance craft that must sacrifice payload and economy of operation to obtain this high performance. For the present, therefore, they are of more interest to the military than to commercial operators. The future, however, may see even more novel designs.
Rocket Propulsion Fundamentals
The chemical rocket engine is not dependent upon air as its oxidizer
source and therefore can operate outside the earth's atmosphere to
propel space vehicles. This is an advantage over other types of jet
propulsion engines. A rocket engine functions perfectly in vacuum or
near-vacuum conditions since it does not have to overcome the drag
that is created in atmospheric conditions.
The rocket engine differs also from other types of jet propulsion in that its thrust depends entirely upon the effective velocity of the exhaust and does not depend upon a momentum difference. Since its thrust depends only upon the effective jet velocity, it is not affected by the speed at which the vehicle travels if the propellant consumption rate is constant.
Thrust equations:
The thrust of a rocket engine is composed of the sum of two terms: momentum thrust and pressure thrust. The momentum thrust is simply the change of momentum which results from the acceleration of the propellant particles. The equation for momentum thrust is often called the simplified thrust equation because it assumes "complete expansion" of the exhaust gases in the nozzle. In other words, it assumes that the gases have expanded to the point where the nozzle pressure is the same as the pressure surrounding the rocket nozzle. The equation for the momentum thrust is:

Th = (G / g) Ve

where Th = momentum thrust in lb; G = weight rate of flow of propellant in lb per second; g = acceleration due to gravity (32.2 ft/sec²); Ve = velocity of gases at nozzle exit in ft per second.
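A minimal numerical sketch of this equation follows; the flow rate and exhaust velocity are assumed values, chosen only to show the arithmetic.

```python
# Simplified (momentum) thrust equation from the text: Th = (G / g) * Ve.

g = 32.2           # acceleration due to gravity, ft/sec^2
G = 100.0          # weight rate of propellant flow, lb/sec (assumed)
Ve = 8_000.0       # gas velocity at the nozzle exit, ft/sec (assumed)

Th = (G / g) * Ve  # momentum thrust, lb
print(f"Momentum thrust: {Th:,.0f} lb")  # ~24,845 lb
```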
Combustion-Driven MHD Generator
The combustion-driven MHD generator is remarkably simple — nothing more than a relatively low-pressure rocket: a combustion chamber attached to a rather long nozzle, with the whole assembly inserted inside a magnet. And because of the ability of the generator to handle very-high-temperature gases, an MHD powerplant will run at efficiencies which may exceed 60%. Its high efficiency could drastically reduce — even eliminate — thermal pollution of lakes and rivers. In wide use, it could also significantly reduce sulfur dioxide pollution of the atmosphere, and turn out sulfuric and nitric acids as byproducts. The performance of MHD, moreover, improves with increased generator size.
But the conceptual simplicity of MHD does not, in itself, cinch its application. In many ways, the situation is closely analogous to that of the rocket engine, which the generator so closely resembles.
Ability to utilize a very-high-temperature, high-energy heat source distinguishes the gaseous MHD generator as a power source. In MHD, the
power-production process takes place throughout the gas volume. The gas
container — the MHD channel — can be cooled, and so can operate at
much lower temperature than the generating gas. Consequently, in
principle, an energy source at any temperature may be employed. Ability
to operate at high temperature means high thermodynamic efficiency and
large power density. As a practical matter, however, the gaseous working
fluids of primary interest come from the energy of chemical combustion and
the solid-core nuclear reactor. For combustion-driven MHD, this means a
maximum gas temperature below about 5200 F, except in special instances
involving very high energy fuels. For the nuclear heat source, maximum
attainable temperatures are much lower, well below 3500 F for advanced
systems, and below 2000 F for more conventional systems.
Finally, engineers confront a multitude of problems related to the other
equipment for a complete powerplant. These include the development of the
regenerative high-temperature heat exchangers to preheat the combustion
air to 2000—3000 F.
Information Theory, Codes and Messages
The general problem of transmitting and interpreting (decoding)
messages is considered by information theory, a close relative of thermodynamics, which, a little by design and a little by chance, uses the statistical concept of entropy as a starting point.
In the general communication problem considered by Claude Shannon, the inventor of information theory, the following basic elements are introduced:
A message
A transmitter: the thing that is sending the message
A receiver: the instrument that reads and decodes the message
A channel: the medium through which the message is transmitted
A code: the set of symbols used to write the message
Noise: an undesirable signal that interferes with the whole process and cannot be eliminated
A simple example is provided by the telegraph. There is a code given by a sequence of dashes, dots, and periods of silence; a transmitter, which serves to send the message in the form of an electromagnetic signal; a channel — the air; a receiver, which includes the operator who decodes the message. Noise is distributed throughout: there may be electrical discharges interfering with the real signal, errors caused by the operator, etc. In devising his dot-and-dash code Morse followed the principle of using the shortest symbols — the fastest to transmit — for the most common letters. This method is still used in more sophisticated codes.
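Morse's principle of giving the most common letters the shortest symbols survives in modern variable-length codes such as Huffman coding. The sketch below builds a Huffman code over a handful of letters; the frequencies are assumed, purely for illustration.

```python
import heapq

# Huffman coding: frequent symbols receive short codewords, echoing
# Morse's principle. The letter frequencies below are assumed.
freqs = {"E": 12.7, "T": 9.1, "A": 8.2, "Q": 0.10, "Z": 0.07}

# Heap items: (total frequency, tiebreaker, {symbol: codeword-so-far}).
heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
heapq.heapify(heap)
tiebreak = len(heap)

while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
    f2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + w for s, w in c1.items()}        # prefix one side with 0
    merged.update({s: "1" + w for s, w in c2.items()})  # the other with 1
    heapq.heappush(heap, (f1 + f2, tiebreak, merged))
    tiebreak += 1

codes = heap[0][2]
for s in sorted(codes, key=lambda s: -freqs[s]):
    print(s, codes[s])   # the common letters come out shortest
```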
Electron Optics
The conversion of a visual image — a two-dimensional distribution of light and shade — into an electrical signal requires not merely a photosensitive element which translates differences in light intensity into differences in current or voltage, but also a commutator which successively causes the photoemission derived from different picture elements to actuate a common signal generator, or, as an alternative, successively derives the output signal from individual photoelements associated with the picture elements. Similarly, in picture reconstruction, a commutator is needed to apply the received signal successively to elements in the picture frame corresponding to the picture elements at the transmitter from which the signal has originated.
In electronic television the commutators used for both purposes are electron beams. In order that the reproduced picture may be a faithful replica of the original scene, these beams must be deflected in a precisely controlled manner; to realize sharp, high-quality pictures, they must be sharply converged. Electric and magnetic fields are the means used for accomplishing both purposes.
The design of electric and magnetic fields to focus and deflect electrons in a prescribed manner is commonly called electron optics. The term follows from the recognition that the paths of material particles subject to conservative force fields obey the same mathematical laws as light rays in a medium of variable refractive index. Later, it was shown, both theoretically and experimentally, that axially symmetric electric and magnetic fields act indeed on electron beams in the same manner as ordinary glass lenses act on light beams. The "refractive index" n for electrons in a field with electrostatic potential V and magnetic vector potential A can be written simply

n = √V + (e/2mc²)^(1/2) A cos θ

where c is the velocity of light and θ is the angle between the path and the magnetic vector potential. The zero level of the potential V is made such that eV represents the kinetic energy of the electron. It is thus possible to derive the path equations of the electrons from Fermat's law of optics:

δ ∫ n ds = 0

Fermat's law states that for the actual light ray (or electron path) from point A to point B the optical distance ∫ n ds is a minimum or a maximum as compared with any comparison path.
In any actual electron-optical system only the electrodes surrounding the region through which the electrons move, along with their potentials, as well as external current-carrying coils and magnetic cores, can be determined at will. The fields in the interior, which enter into the refractive-index expression and the path equations, must be derived from a solution of Laplace's equation for the boundary conditions established by the electrodes and magnetics. For electrostatic systems Laplace's equation is simply:

∇²V = 0
The determination of electron paths within the system is thus normally carried out in two steps: the determination of the fields and the solution of the path equation in these fields. However, computer programs applicable to a great range of practical cases have been written for carrying out both operations. With them, the computer supplies the electron paths if the point of origin and initial velocity of the electron, as well as the boundary potentials, are specified.
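A small sketch of the first of these two steps, under assumed boundary conditions: solving Laplace's equation on a grid by Jacobi relaxation, after which the field (and then the paths) could be computed.

```python
import numpy as np

# Solve Laplace's equation (del^2 V = 0) on a square grid by Jacobi
# relaxation. The electrode geometry and potentials are assumed.
n = 50
V = np.zeros((n, n))
V[-1, :] = 100.0   # one electrode held at 100 volts; the rest grounded

for _ in range(5000):
    # Each interior point relaxes toward the average of its neighbours;
    # the boundary rows and columns are never touched, so the electrode
    # potentials stay fixed.
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                            V[1:-1, :-2] + V[1:-1, 2:])

# The second step would integrate the path equations in E = -grad V.
Ey, Ex = np.gradient(-V)
print("Potential at the grid centre:", round(V[n // 2, n // 2], 2))
```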
Radiation
Radiation is the process by which waves are generated. If we connect an ac
source to one end of an electrical transmission line (say, a pair of wires or
coaxial conductors) we expect an electromagnetic wave to travel down the line.
Similarly, if, as in the first Figure, we move a plunger back and forth in an air-filled tube, we expect an acoustic wave to travel down the tube.
Thus, we commonly associate the radiation of waves with oscillating sources.
The vibrating cone of a loudspeaker radiates acoustic (sound) waves. The oscillating current in a radio or television transmitting antenna radiates electromagnetic waves. An oscillating electric or magnetic dipole radiates plane-polarized waves. A rotating electric or magnetic dipole radiates circularly polarized waves.
Radiation is always associated with motion, but it is not always associated with changing motion. Imagine some sort of rigid device moving along a dispersive medium. In the Figure below this is illustrated as a "guide" moving along a thin rod and displacing the rod as it moves. Such a moving device generates a wave in the dispersive medium. The frequency of the wave is such that the phase velocity of the wave matches the velocity of the moving device. If the group velocity is less than the phase velocity, the wave that is generated trails behind the moving device. If the group velocity is greater than the phase velocity, the wave rushes out ahead of the moving device. Thus, an object that moves in a straight line at a constant velocity can radiate waves if the velocity of motion is equal to the phase velocity of the waves that are generated. This can occur in a linear dispersive medium, as we have noted above. It can also occur in the case of an object moving through a space in which plane waves can travel.
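A concrete, assumed example is deep-water gravity waves, for which ω = √(gk); there the group velocity is half the phase velocity, so the wave generated by a steadily moving source trails behind it. A short numerical check:

```python
import math

# Deep-water gravity waves, omega = sqrt(g * k): an assumed example
# of a dispersive medium in which a steadily moving source radiates.
g = 9.81    # m/s^2
k = 2.0     # wavenumber of the generated wave, rad/m (assumed)

omega = math.sqrt(g * k)
v_phase = omega / k        # the source must move at this speed to radiate
v_group = v_phase / 2      # d(omega)/dk = omega / (2k) for this medium

print(f"phase velocity: {v_phase:.2f} m/s")
print(f"group velocity: {v_group:.2f} m/s  (the wave trails the source)")
```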
Antennas and Diffraction
The Figure represents a beam of light emerging from a laser. As the beam travels, it widens and the surfaces of constant phase become spherical. The beam then passes through a convex lens made of a material in which light travels more slowly than in air. It takes a longer time for the waves to go through the centre of the lens than through the edge of the lens. The effect of the lens is to produce a plane wave over the area of the lens. When the light emerges from the lens, the wavefront, or surface of constant phase, is plane.
The next example represents a type of microwave antenna. A microwave source, such as the end of a waveguide, is located at the focus of a parabolic (really, a paraboloidal) reflector. After reflection, the phase front of the wave is plane over the aperture of the reflector.
The light emerging from the lens of the first Figure does not travel forever in a beam with the diameter of the lens. The microwaves from the parabolic reflector do not travel forever in a beam the diameter of the reflector. How strong is the wave at a great distance from the lens or the reflector?
A particular form of this question is posed in the Figure at the bottom of the text. We feed a power PT into an antenna that emits a plane wave over an area AT. We have another antenna a distance L away which picks up the power of a plane wave in an area AR and supplies this power PR to a receiver. What is the relation among PT, PR, AT, AR, and L? There is a very simple formula relating these quantities:
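The formula itself has not survived in this copy. It is presumably the aperture form of the Friis transmission relation, PR/PT = AT AR / (λ² L²), with λ the wavelength; the sketch below evaluates it for assumed values.

```python
# Aperture form of the Friis transmission relation (presumably the
# formula referred to above): P_R / P_T = A_T * A_R / (lam**2 * L**2).
# All numbers are assumptions for illustration.

lam = 0.03       # wavelength, m (10-GHz microwaves)
A_T = 1.0        # transmitting aperture area, m^2
A_R = 1.0        # receiving aperture area, m^2
L = 50_000.0     # distance between antennas, m
P_T = 10.0       # transmitted power, W

P_R = P_T * A_T * A_R / (lam**2 * L**2)
print(f"Received power: {P_R:.2e} W")   # ~4.4e-06 W
```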
Computers and Mathematics
Today physicists and engineers have at their disposal two great tools:
the computer and mathematics. By using the computer, a person who
knows the physical laws governing the behavior of a particular device or
a system can calculate the behavior of that device or system in particular
cases even if he knows only a very little mathematics. Today the novice can obtain numerical results that lay beyond the reach of the most skilled mathematician in the days before the computer. What are we to
say of the value of mathematics in today's world? What of the person
with a practical interest, the person who wants to use mathematics?
Today the user of mathematics, the physicist or the engineer, need
know very little mathematics in order to get particular numerical answers.
Perhaps he can even dispense with the complicated sort of functions that have been used in connection with configurations of matter. But a very little mathematics can give the physicist or engineer something that is harder to come by through the use of the computer. That thing is insight. The
laws of conservation of mechanical energy and momentum can be simply
derived from Newton's laws of motion. The laws are simple, their
application is universal. There is no need for computers, which can be
reserved for more particular problems.
Electronic Components for Computers
The electronic digital computer is built primarily of electronic
components, which are those devices whose operation is based on the
phenomena of electronic and atomic action and the physical laws of
electron movement. An electronic circuit is an interconnection of
electronic components arranged to achieve a desired purpose or function.
During the past two decades, the computer has grown from a fledgling curiosity to an important tool in our society. At the same time, electronic circuit developments have advanced rapidly; they have had a profound effect on the computer. The computer has been significantly
increased in reliability and speed of operation, and also reduced in size
and cost. These four profound changes have been primarily the result of
vastly improved electronic circuit technology.
Electronic vacuum tubes were used in the earliest computers. They were replaced by solid-state electronic devices toward the end of the 1950's. A solid-state component is a physical device whose operation depends on the control of electric or magnetic phenomena in solids; for example, a transistor, crystal diode, or ferrite core. Solid-state circuits brought about the reliability and flexibility required by the more demanding applications of computers in industry. Probably the most important solid-state device used in computers is the semiconductor, a solid-state element whose properties lie between those of a metal, or good conductor, and those of a poor conductor, such as an insulator. Perhaps the best-known semiconductor device is the transistor.
The advances in electronic circuit technologies have resulted in
changes of "orders of magnitude" where an order of magnitude is equal
to a factor of ten.
The number of installed computers grew from 5,000 in 1960 to approximately 80,000 in 1970. Also, the number of circuits employed per computer installation has significantly increased. The first computers using solid-state devices employed 20,000 circuits. Today computers using transistors may contain more than 100,000 circuits. The trend is likely to continue; it has been made possible by the continued decrease in size, power dissipation, and cost, and the improved reliability, of solid-state circuits. Note that what was used in a "high performance" computer in 1965 became commonly used in 1968. The speed of the logic circuits is given in nanoseconds, 10⁻⁹ seconds.
Existing Satellite Communications Systems
In a satellite communications system, satellites in orbit provide links
between terrestrial stations sending and receiving radio signals. An earth
station transmits the signal to the satellite, which receives it, amplifies it
and relays it to a receiving earth station. At the frequencies involved,
radio waves are propagated in straight lines, so that in order to perform
its linking and relay functions, the satellite must be 'visible' — that is,
above the horizon — at both the sending and receiving earth stations
during the transmission of the message.
There are at present two different types of systems by which satellites
are so positioned: 'synchronous' and 'random orbit'. A satellite placed in
orbit above the equator at an altitude of 22,300 miles (35,000 km) will
orbit the Earth once every 24 hours. Since its speed is equal to that of
the Earth's rotation, it will appear to hang1 motionless over a single spot
on the Earth's surface. Such a satellite is called a synchronous satellite,
and the orbit at 22,300 miles above the equator is known as 'the
geostationary orbit'. A synchronous satellite is continuously visible over
about one-third of the Earth (excluding extreme northern and southern
latitudes2). Thus a system of three such satellites, properly positioned
and linked, can provide coverage of the entire surface of the Earth,
except for the arctic and antarctic regions.
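The quoted altitude can be checked from Kepler's third law, T = 2π√(a³/μ); a short sketch using standard constants:

```python
import math

# Check the geostationary altitude from Kepler's third law.
mu = 3.986e14      # Earth's gravitational parameter, m^3/s^2
T = 86_164.0       # one sidereal day, s
R_earth = 6.371e6  # mean Earth radius, m

a = (mu * (T / (2 * math.pi)) ** 2) ** (1 / 3)      # orbit radius
print(f"Altitude: {(a - R_earth) / 1000:,.0f} km")  # ~35,800 km (22,300 mi)
```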
A satellite in any orbit other than a synchronous orbit will be simultaneously visible to any given pair of earth stations for only a portion of each day. In order to provide continuous communication between such stations, more than one satellite would be required, orbiting in such a way that when the first satellite disappeared over the horizon from one station, another had appeared and was visible to both sending and receiving earth stations. The number of such satellites required to provide continuous communication depends on the angle and altitude of the orbit. The number could be minimized if the spacing between the satellites were precisely controlled (controlled-orbit system), but a somewhat larger number with random spacings can achieve the same result (random-orbit system).
Since the synchronous satellite remains stationary with respect to any
earth station, it is relatively simple to keep the antennas at the sending
and receiving stations properly pointed at the satellite. Only small
corrections for orbital errors are required. In a random-orbit system, the
earth-station antenna must track the satellite across the sky. Moreover,
if continuous communication is to be maintained, a second antenna
must be in readiness at each earth station to pick up the following
satellite as the first one disappears over the horizon.
At present, there are operating systems of both main types. Intelsat
operates a synchronous system providing global coverage, with satellites
positioned above the Atlantic, Pacific, and Indian Oceans. The Soviet
Orbita system, used for space network domestic communications
(including television distribution) within the USSR, is a random-orbit
system, using eight satellites and providing continuous 24-hour
communications. The satellites are spaced so that two of them are always over the Northern Hemisphere 5; and the orbits are such that
during the time when it is in operation, a satellite is at the apogee of its
orbit. Its apparent motion with respect to the Earth's surface is slowest
at this time and the tracking problem is minimized
Hypersonic Transport
The fastest flights within the atmosphere have been made by a rocket craft that carried fuel for only a few short minutes of powered flight. For short experimental flights, that is fine; for longer trips and higher payloads within the atmosphere, other forms of propulsion, such as the ramjet, are necessary.
The ramjet lies somewhere between the jet and the rocket. The next step, supersonic ramjets, may even play a role in sending payloads into space.
If fossil fuels must be phased out, hydrogen may become the fuel of the future. The light weight of hydrogen in combination with its high energy value gives it the highest energy density per pound of all fuels. A hypersonic, hydrogen-fueled aircraft has already been proposed. The supercold liquid hydrogen would be not only a good fuel; it would also provide an answer to the continuing problem in high-speed flight — frictional heating due to air resistance. Temperatures of leading edges can go up into thousands of degrees, possibly causing weakening of structural members. The liquid hydrogen, at −423°F, could be used to help cool these areas.
Someday in the future our supersonic planes may seem slow and old-fashioned. Plans are already on the drawing board for aircraft capable of a speed of 11,000 mph.
And why not? Materials technology is advancing rapidly and may provide substances able to stand up against the multithousand-degree temperatures that will be encountered. Programs are already under way to develop a hypersonic test engine and to develop and test lightweight structures capable of operating at high speeds.
There is no theoretical reason why these or even higher speeds
cannot be attained. Some aircraft experts predict we will see such craft
in service by 1985 or 1990.
At 18,000 mph, however, the craft has reached orbital speed. And at 25,000 mph, the problem is no longer how to keep it up, but how to keep it down — that is, how to keep it from flying off into space.
Even at 11,000 mph — which can probably be attained without any revolutionary development — no two cities on earth would be more than three-quarters of an hour apart. How small would our world be then?
Advances in Satellite Communications
Communications satellites, a by-product of rocketry and microwave
engineering combined, have been made possible by advances in numerous
fields of the physical sciences. Launch vehicles, propulsion devices,
spacecraft structures, converters from solar to electrical energy, low noise
receivers, and high power transmitters are important items entering a
complex system of a communications satellite.
Artificial earth satellites carrying active microwave repeaters offer two
fundamental advantages: (a) bandwidth in excess of the amount available
for intercontinental communications, and (b) the possibility of
communications among several pairs of earth stations "visible" from the
satellite.
The Orbita System
The "Orbita" system of Russian communications satellites uses spacecraft of the Molniya type. This system is unique because it is the first, and until now the only, satellite system in the world for domestic communications. It is also quite singular because the satellites are placed in highly elliptical orbits rather than in geosynchronous equatorial orbit.
The Soviet Union "Orbita" system was designed to provide coverage of
the far northern latitudes of the European and Asian land masses of the,
U. S. S. R. while minimizing the handicaps of the launch from the
cosmodrome of Iyuratam-Baikonur at its relatively high latitude. A 12-hr
period highly elliptical orbit with apogee of around 40,000 km over the
northern hemisphere is satisfactory to cover the far northern regions. With
orbit inclination set at 65°, the oblateness of the earth results in na
rotation of the line of the upsides, thus minimizing the need for orbit
manoeuvres and corrections. Tracking of the satellites and traffic handover from one satellite to another are clearly necessary, but as the
satellites move slowly around apogee these tracking problems are eased.
Continuity of traffic is obtained by placing two satellites in phase
opposition on each of two orbits whose planes are at 90° from each
other, while the satellite pairs in the two orbits are in turn in quadrature.
The relatively low perigee of about 500 km and various other constraints tend to limit the in-orbit lifetime of each satellite. Hence, frequent periodic replenishment of the orbit is necessary.
Since the launch of the first satellite of this series in 1965, at least fourteen satellites of the Molniya type have been orbited. The Molniya 1 satellite uses frequencies in the UHF range between 800 and 1,000 MHz. An extensive network of earth stations permits telephone, data, facsimile, and television transmission over the entire territory of the U.S.S.R.
Ram Jet
The ram jet is technically known as the "Aero-Thermo-Dynamic Duct" (athodyd). It is probably the simplest airstream jet propulsion device built, since it has no moving parts. In appearance, the ram jet looks like a tube which is open at both ends. The forward part of the main chamber is the diffuser section; the mid-portion is the combustion section; and the aft portion is the nozzle section. Fuel is fed through a distributing ring in the diffuser section to a series of small nozzles. To start combustion, a conventional-type spark plug is located within the combustion chamber. When started, the combustion process is continuous and self-supporting.
Operation. This engine is dependent upon the forward speed of the unit
to introduce sufficient mass flow of air for operation. Thus, to start this
engine, it is necessary to provide a launching mechanism capable of
accelerating the unit to at least 300 m. p. h.
The air from the atmosphere enters the diffuser section by ram action. As the air passes through the diffuser, the cross-sectional area of the tube increases and the velocity of the airflow decreases. This causes the pressure of the air to increase somewhat. Fuel, injected into the airstream at the diffuser, mixes with the air, and the combustion process is started by an electric spark. This causes the air to increase in temperature and pressure. After the unit is in operation, combustion takes place at approximately constant pressure. The heat added to the air causes the air to be ejected from the nozzle at a velocity which is greater than the velocity at the entrance to the diffuser. The reaction to this accelerating force is the thrust force. It acts against the forward internal walls of the diffuser section. There can be no thrust unless the velocity of the jet is greater than the entering air velocity.
With constant combustion chamber pressure, the exhaust velocity of the jet increases with temperature. The greater the difference between the velocity of the jet and that of the entering air, the greater the thrust. The thrust of a ram jet also varies considerably with flight speed. The efficiency with which the fuel energy is converted into jet energy depends upon the compression ratio which, in turn, depends upon the flight speed. Finally, the thrust depends on the increase in momentum, which is proportional to the difference between the jet velocity and the flight velocity. However, the over-all efficiency of the ram jet is low as a result of poor conversion of the fuel energy, particularly at low speeds.
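The momentum statement above can be put into numbers; a minimal sketch, in which the mass flow and the two velocities are assumed values:

```python
# Net ram-jet thrust from the momentum increase described above:
# thrust = (G / g) * (V_jet - V_flight). All numbers are assumed.

g = 32.2           # ft/sec^2
G = 64.4           # weight flow of air through the duct, lb/sec (assumed)
V_flight = 880.0   # entering air velocity, ft/sec (about 600 mph, assumed)
V_jet = 2_200.0    # jet exit velocity, ft/sec (assumed)

thrust = (G / g) * (V_jet - V_flight)
print(f"Net thrust: {thrust:,.0f} lb")   # 2,640 lb
```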
"Electron Gun"
The most common form of an electronic "commutator" used in
television receiving and transmitting devices is shown schematically
in the Figure. The source of the electrons is a small flat thermionic
cathode, consisting of a nickel disc coated with a relatively stable low-work-function material, such as a mixture of barium and strontium oxides. As the material is heated, a minute fraction of the electrons within it attains sufficient energy to overcome the potential barrier at the surface which prevents the bulk of the electrons from escaping. These electrons are accelerated toward a positively biased first anode and at the same time deflected toward the axis by the negatively biased control grid. They form a pencil
of minimum cross section at the point where the principal paths
(i. e. the paths of electrons leaving the cathode with zero velocity)
cross the axis and diverge from this point on toward a final
electron lens, which serves to image the crossover on the image
screen or target. The final lens illustrated is that formed between
two cylinders at different potentials; the sequence of curved lines
represents the equipotential surfaces which can be thought of as
refracting the electron paths in the same manner as a boundary surface between two media of slightly different refractive index.
The magnetic field of a short solenoid can be similarly employed for
imaging the crossover on the screen although the detailed interaction
between field and electron is quite different. The intensity of the electron
beam is varied by changing the potential of the control grid; as the
potential of the latter is reduced a smaller fraction of the electrons
emitted by the cathode passes through the grid aperture, the remainder
being turned back toward the cathode. At the same time, the crossover
moves toward the cathode; this has, however, only a minor effect on the
sharpness of focus on the screen since the crossover displacement is
small compared to the distance between the crossover and final lens.
The system shown in the figure, designated as an electron gun, serves merely to form a sharply focused electron spot of controllable intensity at one point on the screen or target. To effect the "commutation" the beam is subjected to a pair of transverse magnetic (or electric) fields just beyond the final lens. The exciting currents (or voltages) for the horizontal and vertical deflectors exhibit a sawtooth-shaped variation with periods corresponding to the time required for describing a single scanning line (about 60 microseconds) and a complete picture field (1/60 second) respectively, the electron emission being suppressed during the short return times of the sawtooth. As a result, the electron spot covers the screen and target area with a closely spaced raster of horizontal lines; the deflections at the transmitter and receiver are synchronized so that the beam scanning a certain point of the image area in the receiver is modulated by the signal derived from the corresponding point of the scene at the transmitter. The design of deflection systems becomes a complex electron-optical problem when practical considerations (such as large viewing screens and small receiver depth) demand large angular ranges of deflection, commonly of the order of 110°.
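The two sawtooth periods quoted above fix the line count of the raster; a quick check:

```python
# Raster timing from the text: ~60 microseconds per scanning line and
# 1/60 second per complete picture field.
line_time = 60e-6      # s per scanning line
field_time = 1 / 60    # s per picture field

print(f"Scanning lines per field: {field_time / line_time:.0f}")  # ~278
```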
Computer Science and Technology
Aerospace Design Techniques
The design of early airplanes was characterized by the "cut-and-try" approach; that is, various reasonable designs were evolved and then sample airplanes were built for trial in flight. This approach worked well as long as airplanes were fairly simple and inexpensive. But, for high-performance airplanes, the cut-and-try approach turned out to be prohibitively costly. When such an airplane is constructed, it has to fly. Modeling techniques, using computers, allow an airplane to be "flight tested" before it is built! So accurate is the simulation that nearly optimal designs can be developed on the drawing board and actual test flights are used only to verify the design and make minor modifications.
The Age of Space
The space age gave birth to its own complexities! Not only were the problems more difficult, but the answers had to be found faster. As the rocket flies toward the moon, researchers "fly" its trajectory in the computer. Lunar gravity, atmospheric drag, the equatorial bulge of the earth, and the forces generated by corrective rocket bursts must all be taken into account together with several other forces and perturbations. In some cases, the calculations must be made in real time while the rocket is plummeting through space at 25,000 miles per hour. There is no way of doing this without computers. Indeed, virtually every branch of science would long have been stopped by mathematical stone walls if the electronic computer had not come to the rescue.
The Atomic Age
The atomic age brought out new complexities. Quantum mechanics forced the abandonment of the deterministic view of nature, and the statistical approaches to science increased the number of required calculations by orders of magnitude. There was no simple way to predict the behavior of neutrons in a slab of uranium involved in a chain reaction. Each individual particle had to be followed mathematically as it collided with its neighbors and produced new neutrons in ever-increasing numbers. The children and grandchildren of each new wave of collisions had to be tracked as they made their way through the uranium and out into the surrounding world.
This kind of mathematical simulation is called a Monte Carlo
procedure. In its simplest form, it consists of making numerous individual
simulations of a process to see how it behaves under statistical variations.
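A minimal Monte Carlo sketch in this spirit follows each neutron generation statistically; the branching probabilities are assumed for illustration and are not real uranium physics.

```python
import random

# Toy Monte Carlo: follow neutron "generations" in a multiplying medium.
# Each neutron either causes a fission, yielding new neutrons, or is
# absorbed/escapes. Probabilities below are assumed.
random.seed(1)

def run(n_start, p_fission=0.4, yield_per_fission=2, generations=10):
    counts = [n_start]
    for _ in range(generations):
        children = sum(
            yield_per_fission
            for _ in range(counts[-1])
            if random.random() < p_fission
        )
        counts.append(children)
    return counts

print(run(100))   # watch the population grow or die out, run by run
```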
Machine Language
A computer cannot understand English or any other human language
directly. It understands only its own language, called machine language,
which varies from one computer system to another. Instructions in such a
language consist of a sequence of letters and/or numbers that are
unintelligible to us unless we are trained as computer systems experts
(not just users or programmers) well-acquainted with that particular
computer.
Prior to about 1958, a computer programmer had to write his
instructions in such a machine language. This required very extensive
training, much of which was wasted effort because, as newer computer
systems were introduced, the language might change completely.
Because of the tenuous connection between the appearance of machine
language instructions and the language in which we think, computer
programming was tedious and prone to error.
User-oriented programming languages
The disadvantages of direct machine language programming have now
been overcome and the human-computer gap closed somewhat by the
introduction of what are called user-oriented languages. These are ways of stating instructions by using English terms. Although the instructions that result may not seem very much like our everyday conversation, they will at least have some degree of intelligibility to human beings.
The computer does not directly follow instructions that are expressed in
such user-oriented languages; it wouldn't understand such instructions.
The instructions must be translated to machine language. This is done by
a special preliminary program written by experts well-acquainted with the
particular computer involved. This program is called the compiler.
The compiler does much more than interpret instructions; it also finds programming errors that might have occurred and describes them to the user by printing out comments called diagnostics. If the user-oriented language program is free of such errors, the compiler produces the machine language program, called the object program. When the computations are to be executed (performed), it loads the object program into the computer's memory. Your program will then be ready to use.
Many user-oriented languages have been proposed. During the 1960's, the two most important languages for general computer use were Algol and Fortran. Many computer systems have (or had) compilers for both languages so that either could be translated into machine language. Algol and Fortran were intended primarily for scientific programming, and another language, easier to understand, called Cobol, was devised for the less sophisticated requirements of routine business programming. The real computer users — laboratory technicians, civil engineers, and biology professors — could now learn to code their own problems. Computer programming was thus no longer the exclusive stomping ground of a few thousand highly trained specialists.
The Asymmetric Slab Waveguide
Dielectric slabs are the simplest optical waveguides. Because of their simple geometry, guided and radiation modes of slab waveguides can be described by simple mathematical expressions. The study of slab waveguides and their properties is thus often useful in gaining an understanding of the waveguiding properties of more complicated dielectric waveguides. However, slab waveguides are not only useful as models for more general types of optical waveguides; they are actually employed for light guidance in integrated optics circuits.
A dielectric slab waveguide is shown schematically in the figure. The figure shows a slab waveguide as it would be used in a typical integrated optics application. The core region of the waveguide is assumed to have refractive index n1 and is deposited on a substrate with refractive index n2. The refractive index of the medium above the core is indicated as n3. The refractive index n3 may be unity if the region above the core is air, or it may have some other value if the guiding region of index n1 is surrounded by dielectric materials on both sides. In order to achieve true mode guidance it is necessary that n1 be larger than n2 and n3. In order to have a specific example we shall assume that

n1 > n2 > n3     (1-1.1)
If n2 = n3, we speak of a symmetric slab waveguide. If n2 ≠ n3, the slab waveguide is asymmetric. The modes of symmetric slab waveguides are simpler than those of asymmetric slabs because they can be expressed either as even or odd field distributions. The lowest-order mode of a symmetric slab waveguide does not have a cutoff frequency, which means that, in principle, this mode can propagate at arbitrarily low frequencies. By contrast, all modes of asymmetric slabs become cut off if the frequency of operation is sufficiently low.
Like all dielectric waveguides, the asymmetric slab supports a finite number of guided modes, which is supplemented by an infinite continuum of unguided radiation modes. Both types of modes are obtained as solutions of a boundary value problem. However, the guided modes can also be considered from the point of view of ray optics. Since ray optics is clearer than wave optics, we start the discussion by deriving the eigenvalue equation of the guided modes from geometrical optics, which is supplemented by some simple results of plane wave reflection and refraction at plane dielectric interfaces.
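For TE modes the ray picture leads to the well-known transcendental eigenvalue equation κd = mπ + arctan(γ2/κ) + arctan(γ3/κ), where κ is the transverse wavenumber in the core and γ2, γ3 are the decay constants in substrate and cover. The sketch below solves it by bisection; the indices, thickness, and wavelength are assumed values.

```python
import math

# TE-mode eigenvalue equation of the asymmetric slab, solved by bisection:
#   kappa*d = m*pi + atan(gamma2/kappa) + atan(gamma3/kappa)
# Indices, thickness, and wavelength below are assumed.
n1, n2, n3 = 1.50, 1.45, 1.00   # core, substrate, cover
lam = 1.0e-6                    # free-space wavelength, m
d = 3.0e-6                      # core thickness, m
k = 2 * math.pi / lam
m = 0                           # mode number

def residual(beta):
    kappa = math.sqrt(n1**2 * k**2 - beta**2)
    g2 = math.sqrt(beta**2 - n2**2 * k**2)
    g3 = math.sqrt(beta**2 - n3**2 * k**2)
    return kappa * d - m * math.pi - math.atan(g2 / kappa) - math.atan(g3 / kappa)

# Guided modes lie in n2*k < beta < n1*k; bisect on that interval.
lo, hi = n2 * k * (1 + 1e-12), n1 * k * (1 - 1e-12)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(f"Effective index of the m = 0 TE mode: {0.5 * (lo + hi) / k:.5f}")
```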
In integrated optics applications, slab waveguides are formed by various means, the simplest of which use the deposition of glass or plastic films on glass or plastic substrates. These films can be deposited by various methods. One method of forming dielectric optical waveguides for integrated optics applications employs ion implantation techniques. By bombarding the substrate material with suitable ions it is possible to alter the refractive index of the substrate so that a dielectric slab waveguide results. The depth at which the guiding region appears below the substrate surface can be controlled by the choice of the energy that is used to accelerate the ion beam.
Many integrated optics applications use narrow dielectric strip waveguides instead of a continuous two-dimensional film. Such waveguides are formed by ion implantation techniques or by the deposition of a thin film on top of a substrate which is subsequently etched away so that only the narrow strip waveguides are left.
The study of asymmetric slab waveguides serves as a valuable
introduction to the entire field of dielectric optical waveguides. Because
of their simplicity, slab waveguides provide insight into the mechanism of
waveguidance by dielectric optical waveguides.
Space Shuttle
By 1957 some designers were deep in studies to see how a space shuttle might best be designed. All were agreed that the basic answer was to construct a "space aircraft" with wings of a broad ogival delta shape rather like those of the Concorde SST. Joined to it would be the booster; this would comprise two or more large solid rockets. Both would be raised vertically until they were standing on their rocket motors with their noses pointed to the heavens. The orbiter would be joined closely above the booster to form a kind of biplane combination. The boosters would carry nothing but propellants and have no crew; the orbiter would be loaded with everything needed for its mission in space. The combined vehicles would lift off together and climb vertically before gradually nosing over at a height of some score of miles at a speed many times that of sound. At a height of 25 miles the orbiter would separate. The booster would burn out, fall into the ocean, and be recovered and then refurbished. The orbiter would continue out to its appointed mission in space. From its capacious hold it could produce large space stations, lunar laboratories, or any of the many other kinds of hardware needed for space operations. Or its payload could comprise men and supplies, the return trip being the homeward journey to Earth of crews whose duty on a space or lunar station had been completed.
Although the complete shuttle, comprising both the booster and the orbiter, was planned to cost more than a Saturn V rocket, the cost per mission would be very much less even if in practice fewer than a hundred missions were realized for each vehicle. The cost of delivering a load to the moon or to a future space station was estimated at about one-tenth of its 1972 level. These new flying vehicles will close the gap that once existed between aircraft and space rockets. Tomorrow's space vehicle will not look like a vertical tower but like a large aircraft of rather odd shape. When it returns to Earth, it will be in all respects an atmospheric aircraft, with lift provided by a wing and steering effected by aerodynamic controls.
With the appearance of such craft proposed in the late 1970s man will
have completed his initial conquest of the Earth's atmosphere and the
surrounding region of the solar system.
The space shuttle, with its prospect of being used a hundred times, will
enable men of many nations working together to construct great space
stations much nearer to Earth than the Moon and much more useful.
The Radiation Hazard in Space
One of the serious questions was the exposure of humans to radiation in
space. The Van Allen belts, bands of rapidly moving charged particles
circling the earth, had been discovered only a few years earlier, and no
one was certain of their extent or the intensity of radiation associated with
them. Also, solar storms had recently been discovered that inundated the
space around the sun with charged particles, producing radiation levels
that would be lethal for a human in an unprotected state.
This was a difficult question. There was not much knowledge about either
the frequency or the intensity of solar storms, and the Van Allen belts
had not been explored thoroughly enough to indicate how far they
extended. Scientists spent several months, therefore, intensively studying
the levels of radiation between the earth and the moon. They were able to
establish the upper limit of the intensity of solar storms, and they were
able to show that the Apollo Command Module provided shielding thick
enough so that the men inside it could sustain the entire period of a solar
storm.
The Lunar Module (LM) presented a different problem because it had
to be built of thin and light material that could not protect the astronauts
from solar radiation on the lunar surface. (The space suits also offered
practically no protection). They determined how much lead shielding they
would have to add to the LM to provide the necessary protection, and
found that it would have made it too heavy for launching. Finally, they took advantage of the fact that solar storms build up gradually and that, although the radiation can become intense, it takes an exposure of several hours to do any significant damage. The problem then became one of detecting the onset of a solar storm, so they developed a network of sensors aboard the Command Module that would detect a radiation buildup; and they modified the mission plan in such a way that, if radiation were to build up at a certain rate during the moon walk, the astronauts would enter the LM, take off, return to the Command Module, and abort the remainder of the mission. This was an indirect solution, but one which demonstrated that the scientists understood the problem and had a plan to deal with it.
The Air Vehicle 1985
Some scientists make the following predictions for the air vehicle of
1985 concerning its aerodynamics, propulsion, structure and airborne
avionic systems:
Aerodynamics
There are two basic areas in which technological development will directly increase the performance of an airplane: 1) the lift/drag ratio in cruise, and 2) the high-lift capability, which dictates takeoff and landing field requirements. These are dominant factors in selection of the airplane wing area and power plant size. In general, the smallest airplane that can perform a given payload-range task is the most economical.
Propulsion
Gas turbine engines have undergone remarkable technological advances in the past 20 years. Further advances will be demanded as airplane size increases, more speed is desired, and VTOL and STOL aircraft are developed. The application of the vehicle will dictate the specific operating improvements desired. As the weight of the air vehicle increases, engine thrust must increase to maintain acceptable runway length and cruise altitude. At the same time, fuel consumption must remain low without increasing the engine drag and weight, so that operating cost can be kept to a minimum at cruise. To keep the lowest fuel weight for the particular mission, engine thrust must be matched to airplane drag at minimum fuel consumption. In general, higher thrust will be demanded along with a corresponding decrease in engine weight and fuel consumption.
Structures
With lighter structures, air vehicles can carry heavier payloads. Structure research continually strives for higher strength/weight ratios. There have been steady improvements but no revolution in this area over the past 20 years. Most airplanes are still constructed of aluminum-alloy components.
The next decade will see greater advances in stabilized-element
structures and increases in strength. Advanced titanium alloys and
composite materials, with more strength per pound than aluminum, will
be used to reduce weight. Advanced composites have even higher potential
than the titanium alloys and are most promising for the rest of this
century.
On the horizon today are new structural concepts. One of these concepts results in the use of fewer frames and stringers and in a 20 per cent weight reduction. Factory production of this type of structure will require improvements in construction techniques, using a higher degree of automation.
Airborne Avionic Systems
Airborne avionics for the airplane of 1985 will be improved in reliability and performance. Reliability will be increased by a combination of improved components and better system design. Performance will be improved primarily by the large-scale use of on-board digital computers. On-board equipment will process data from various sources and perform computations for navigation and flight control.
Air-to-ground communication will be provided by automated or semiautomated VHF and UHF modulated data links. Reliable air-to-ground communication in remote areas will be provided by a satellite system compatible with the airborne data links.
Flight control systems for airplanes in 1985 will have been expanded from simple yaw (oscillation) dampers to include automatic three-axis stabilization. Flight control systems in current use depend on mechanical linkages between the pilot's controls and the controlled elements. Many problems associated with this mechanical transfer will be eliminated or greatly simplified by electrical signal transmission. By 1985 electrical signalling in airplane control should be well established. The technological advances in airborne avionic systems will be reflected in performance improvements which will benefit passengers, aircrew, airline operators, and airport operators. This will result in more efficient use of space and more safety and comfort for the passengers.
Electronic Digital Computer
An electronic digital computer is an information-processing device that accepts and processes data represented by discrete symbols. It is constructed primarily of electric or electronic devices.
An electronic digital computer possesses three advantages which
make it extremely useful. They are:
1) High speed of operation
2) Precision and accuracy
3) Reliability
Modern digital computers are constructed of electronic devices which
enable the computer to complete an arithmetic calculation in
approximately one-millionth of a second or less!
The size of computers has decreased by a factor of 10 since 1955. We expect another decrease, by a factor of 100. Small computers are commonly called minicomputers. They provide the power and speed of many large computers of only a few years ago. Through the utilization of high-speed electronic devices, notably transistors and solid-state devices, the speed and size of electronic digital computers have improved tremendously during the past decade. If the trend continues, we shall have a computer the size of a shoebox capable of a billion additions per second by the end of this decade.
The second advantage of an electronic digital computer is the great
precision available in the calculation process. The precision of a computer
may be defined in this way:
Precision. The degree of exactness with which a quantity is stated. The amount of detail used in representing the data.
Precision is to be compared with accuracy. For example, four-place numerals are less precise than six-place numerals; nevertheless, a properly computed four-place numeral might be more accurate than an improperly computed six-place numeral. The accuracy of a computer is defined as follows:
Accuracy. The degree of freedom from error. Typically, a computer is able to perform calculations with numbers to a precision and accuracy of ten decimal places.
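A tiny sketch of this distinction, with assumed values:

```python
# Precision vs. accuracy: a six-place value can still be less accurate
# than a four-place one. The "true" value here is assumed.
true_value = 3.141593

four_place = 3.1416      # less precise, but accurate
six_place = 3.142857     # more precise (22/7), but less accurate

print(abs(true_value - four_place))  # ~7e-06
print(abs(true_value - six_place))   # ~1.3e-03
```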
The third advantage of the electronic digital computer is reliability, which
is defined:
Reliability. The quality of freedom from failure, usually expressed as the probability that a failure will not occur in a given amount of use.
The reliability of a typical computer might be expressed as a 95 per cent
probability that it will not make a calculation error in one week, and that it
will not stop functioning within a month. A computer is immensely more reliable
than a man in completing long, difficult calculations.
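As an illustration only, assuming the exponential failure model R(t) = exp(−t/MTBF) that is commonly used in reliability work (the text itself does not specify a model), the 95 per cent figure implies a mean time between failures of roughly 136 days:

```python
import math

# Assumed exponential failure model: R(t) = exp(-t / MTBF).
target_R = 0.95  # probability of operating one week without failure
t_week = 7.0     # observation period, days

mtbf = -t_week / math.log(target_R)
print(f"implied mean time between failures: {mtbf:.0f} days")  # ~136 days
```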
Now let us consider some of the disadvantages of electronic digital computers.
Three of these disadvantages are:
(1) Limited inherent intelligence5
(2) Limited language-handling capacity
(3) High cost.
We say that the computer has limited inherent intelligence because it must be
told what to do.
Also, the computer possesses a very limited language-handling capacity at
present. Typically, its vocabulary is limited to a hundred words, and its grammar is
primitive. However, computers are being developed which will have expanded
language capabilities.
A relationship between the computing power (essentially speed) and the cost
of the computer has been deduced by Grosch:
Cost of computer = constant × √(computing power)

This relationship is often called Grosch's Law. Computing power increases as
the square of the cost:

Computing power = constant × (cost)²
For example, if we double the cost, we can obtain four times the computing
power.
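A minimal sketch of Grosch's Law in Python (the constant is an arbitrary scale factor, not a value given in the text):

```python
# Grosch's Law as stated above: computing power = constant * (cost)**2.
constant = 1.0  # arbitrary scale factor (assumed)

def computing_power(cost):
    return constant * cost ** 2

ratio = computing_power(2.0) / computing_power(1.0)
print(f"doubling the cost multiplies computing power by {ratio:.0f}")  # 4
```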
In summary, we find there are basically three advantages and three
disadvantages in electronic computers. They are as follows:
Advantages                      Disadvantages
High speed                      Limited intelligence
High precision and accuracy     Limited language capability
Reliability                     Cost

The advantages far outweigh the disadvantages in a great number of needs
and applications in industry, government, education, and business. There are many
activities that we could not accomplish without the computer. The nature and
structure of the electronic digital computer is such that it is helpful and
beneficial to man.
Electron Optics
The conversion of a visual image — a two-dimensional distribution of
light and shade — into an electrical signal requires not merely a
photosensitive element which translates differences in light intensity into
differences in current or voltage, but also a commutator which
successively causes the photoemission derived from different picture
elements to actuate a common signal generator, or, as an alternative,
successively derives the output signal from individual photoelements
associated with the picture elements. Similarly, in picture reconstruction,
a commutator is needed to apply the received signal successively to
elements in the picture frame corresponding to the picture elements at
the transmitter from which the signal has originated.
In electronic television the commutators used for both purposes
are electron beams. In order that the reproduced picture may be a faithful
replica of the original scene, these beams must be deflected in a precisely
controlled manner; to realize sharp, high-quality pictures, they must be
sharply converged. Electric and magnetic fields are the means used for
accomplishing both purposes.
The design of electric and magnetic fields to focus and deflect
electrons in a prescribed manner is commonly called electron optics.
The term follows from the recognition that the paths of material
particles subject to conservative force fields obey the same mathematical
laws as light rays in a medium of variable refractive index. Later, it
was shown, both theoretically and experimentally, that axially
symmetric electric and magnetic fields act indeed on electron beams in
the same manner as ordinary glass lenses act on light beams. The
"refractive index" n for electrons in a field with electrostatic potential V
and magnetic vector potential A can be written simply

n = √V − √(e/2m) · (A/c) · cos θ

where c is the velocity of light and θ is the angle between the path and the
magnetic vector potential. The zero level of the potential V is made
such that eV represents the kinetic energy of the electron. It is thus
possible to derive the path equations of the electrons from Fermat's
law of optics:

δ ∫ n ds = 0
Fermat's law states that for the actual light ray (or electron path)
from point A to point B the optical distance is a minimum or maximum as
compared with any comparison path.
In any actual electron-optical system only the electrodes surrounding
the region through which the electrons move, along with their potentials,
as well as external current carrying coils and magnetic cores can be
determined at will. The fields in the interior, which enter into the
refractive-index expression and the path equations, must be derived
from a solution of Laplace's equation for the boundary conditions
established by the electrodes and magnetics. For electrostatic systems
Laplace's equation is simply

∇²V = 0
The determination of electron paths within the system is thus normally carried
out in two steps: the determination of the fields and the solution of the path
equation in these fields. However, computer programs applicable to a great range
of practical cases have been written for carrying out both operations. With them,
the computer supplies the electron paths if the point of origin and initial
velocity of the electron, as well as the boundary potentials, are specified.
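The two-step procedure can be sketched in a few dozen lines of Python. The geometry, electrode potentials, and step sizes below are all invented for illustration; the program first relaxes Laplace's equation on a square region and then integrates an electron path through the resulting field:

```python
import numpy as np

# Step 1: solve Laplace's equation (del^2 V = 0) by Jacobi relaxation on a
# hypothetical 1 mm square region with assumed electrode potentials.
N = 81
V = np.zeros((N, N))             # V[row, col] = V[y, x]
V[0, :] = 0.0                    # grounded electrode along the bottom edge
V[-1, :] = 1000.0                # electrode at +1000 V along the top edge
V[:, 0] = V[:, -1] = np.linspace(0.0, 1000.0, N)  # assumed side ramps

for _ in range(5000):            # iterate toward the harmonic solution
    V[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                            V[1:-1, 2:] + V[1:-1, :-2])

# Step 2: integrate the path of an electron released at rest near the
# grounded electrode, using E = -grad V and F = -eE.
e_m = 1.7588e11                  # electron charge-to-mass ratio, C/kg
h = 1.0e-3 / (N - 1)             # grid spacing, m
Ey, Ex = np.gradient(-V, h)      # field components (rows = y, cols = x)

x, y, vx, vy = 0.5e-3, 0.05e-3, 0.0, 0.0
dt = 1.0e-12                     # time step, s
for _ in range(20000):
    i, j = min(int(y / h), N - 2), min(int(x / h), N - 2)
    vx += -e_m * Ex[i, j] * dt   # acceleration of the (negative) electron
    vy += -e_m * Ey[i, j] * dt
    x += vx * dt
    y += vy * dt
    if not (0.0 <= x < 1.0e-3 and 0.0 <= y < 1.0e-3):
        break

print(f"electron leaves the region at x = {x*1e3:.3f} mm, y = {y*1e3:.3f} mm")
```

For these boundary potentials the interior field is nearly uniform, so the electron simply accelerates toward the positive electrode; more complicated electrode shapes change only the boundary conditions in step 1.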
Radiation
Radiation is the process by which waves are generated. If we connect an ac
source to one end of an electrical transmission line (say, a pair of wires or
coaxial conductors) we expect an electromagnetic wave to travel down the line.
Similarly, if, as in the first Figure, we move a plunger2 back and forth in an
air-filled tube, we expect an acoustic wave to travel down the tube.
Thus, we commonly associate the radiation of waves with oscillating sources.
The vibrating cone of a loudspeaker radiates acoustic (sound) waves. The
oscillating current in a radio or television transmitting antenna radiates
electromagnetic waves. An oscillating electric or magnetic dipole radiates
plane-polarized waves3. A rotating electric or magnetic dipole radiates circularly
polarized waves.
Radiation is always associated with motion, but it is not always associated with
changing motion. Imagine some sort of fixed device moving along a dispersive
medium*. In the Figure below this is illustrated as a "guide" moving along a thin
rod and displacing the rod as it moves. Such a moving device generates a wave
in the dispersive medium. The frequency of the wave is such that the phase
velocity of the wave matches5 the velocity v of the moving device. If the group
velocity is less than the phase velocity, the wave that is generated trails behind
the moving device. If the group velocity is greater than the phase velocity, the
wave rushes out ahead7 of the moving device. Thus, an object that moves in a
straight line at a constant velocity can radiate waves if the velocity of motion is
equal to the phase velocity of the waves that are generated. This can occur in a
linear dispersive medium, as we have noted above. It can also occur in the case of
an object moving through a space in which plane waves 8 can travel.
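The matching condition can be illustrated numerically. The sketch below assumes a simple quadratic dispersion law ω = ak², of the kind obeyed by flexural waves on a thin rod; the coefficient a and the guide speed v are made-up values:

```python
# Phase-matching on an assumed dispersive medium with omega = a * k**2.
a = 0.5  # dispersion coefficient, m^2/s (hypothetical)
v = 2.0  # speed of the moving guide, m/s (hypothetical)

k = v / a             # wavenumber at which the phase velocity a*k equals v
omega = a * k**2      # frequency of the radiated wave, rad/s
v_phase = omega / k   # equals v, by construction
v_group = 2 * a * k   # d(omega)/dk = 2*a*k, i.e. twice the phase velocity

print(f"radiated wave: k = {k:.2f} rad/m, omega = {omega:.2f} rad/s")
print(f"phase velocity {v_phase:.2f} m/s matches the guide speed")
print(f"group velocity {v_group:.2f} m/s exceeds v: the wave runs ahead")
```

For this dispersion law the group velocity is twice the phase velocity, so the generated wave rushes out ahead of the moving device, as described above.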
Antennas and Diffraction
The Figure represents a beam of light emerging from a laser. As the beam
travels, it widens and the surfaces of constant phase become spherical. The beam
then passes through a convex lens made of a material in which light travels more
slowly than in air. It takes a longer time for the waves to go through the centre
of the lens than through the edge of the lens. The effect of the lens is to produce a
plane wave over the area of the lens. When the light emerges from the lens, the
wavefront, or surface of constant phase, is plane.
The next example represents a type of microwave antenna. A microwave source,
such as the end of a waveguide, is located at the focus of a parabolic (really, a
paraboloidal) reflector. After reflection, the phase front4 of the wave is
plane over the aperture of the reflector.
The light emerging from the lens of the first Figure does not travel
forever in a beam with the diameter of the lens. The microwaves from the
parabolic reflector do not travel forever in a beam the diameter of the
reflector. How strong is the wave at a great distance from the lens or
the reflector?
A particular form of this question is posed in the Figure at the bottom
of the text. We feed a power P_T into an antenna that emits a plane
wave over an area A_T. We have another antenna a distance L away which
picks up the power of a plane wave in an area A_R and supplies this
power P_R to a receiver. What is the relation among P_T, P_R, A_T, A_R,
and L? There is a very simple formula relating these quantities:

P_R / P_T = A_T A_R / (λ² L²)

where λ is the wavelength of the radiation.
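A quick numerical sketch of this formula (in Python; all of the link values are illustrative, not taken from the text):

```python
# Received power from the aperture form of the transmission formula above.
# Every numerical value here is an assumed example.
P_T = 10.0         # transmitted power, W
A_T = 1.0          # transmitting aperture area, m^2
A_R = 1.0          # receiving aperture area, m^2
L = 50e3           # distance between antennas, m
wavelength = 0.03  # a 10 GHz microwave link, m

P_R = P_T * A_T * A_R / (wavelength**2 * L**2)
print(f"received power: {P_R:.2e} W")  # about 4.4e-06 W for these values
```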
Computers and Mathematics
Today physicists and engineers have at their disposal two great tools:
the computer and mathematics. By using the computer, a person who
knows the physical laws governing the behavior of a particular device or
a system can calculate the behavior of that device or system in particular
cases even if he knows only a very little mathematics. Today the novice
can obtain numerical results that lay beyond the reach of the most
skilled mathematician in the days before the computer. What are we to
say of the value of mathematics in today's world? What of the person
with a practical interest, the person who wants to use mathematics?
Today the user of mathematics, the physicist or the engineer, need
know very little mathematics in order to get particular numerical answers.
Perhaps he can even dispense with the complicated sort of functions that
have been used in connection with configurations of matter. But a very
little mathematics can give the physicist or engineer something that is
harder to come by through the use of the computer. That thing is insight. The laws
of conservation of mechanical energy and momentum can be simply
derived from Newton's laws of motion. The laws are simple, and their
application is universal. There is no need for computers, which can be
reserved for more particular problems.
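The remark about conservation laws can nevertheless be checked by machine for a particular case. The sketch below integrates Newton's laws for two bodies coupled by an invented spring force and confirms that momentum is conserved exactly and energy to within the integration error (all values are illustrative):

```python
# Two bodies on a line, coupled by a spring; all values are made up.
m1, m2 = 1.0, 2.0           # masses, kg
x1, x2 = 0.0, 1.5           # positions, m
v1, v2 = 1.0, -0.5          # velocities, m/s
k_s, rest = 50.0, 1.0       # spring constant (N/m) and natural length (m)
dt = 1.0e-4                 # time step, s

def energy():
    stretch = (x2 - x1) - rest
    return 0.5*m1*v1**2 + 0.5*m2*v2**2 + 0.5*k_s*stretch**2

E0, p0 = energy(), m1*v1 + m2*v2
for _ in range(100_000):                 # ten seconds of motion
    f = k_s * ((x2 - x1) - rest)         # force on body 1, toward body 2
    v1 += (f / m1) * dt                  # semi-implicit Euler step
    v2 += (-f / m2) * dt
    x1 += v1 * dt
    x2 += v2 * dt

print(f"momentum drift: {abs(m1*v1 + m2*v2 - p0):.2e} kg*m/s")  # ~0
print(f"energy drift:   {abs(energy() - E0):.2e} J")            # small
```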
Electronic Components for Computers
The electronic digital computer is built primarily of electronic
components, which are those devices whose operation is based on the
phenomena of electronic and atomic action and the physical laws of
electron movement. An electronic circuit is an interconnection of
electronic components arranged to achieve a desired purpose or function.
During the past two decades, the computer has grown from a fledgling
curiosity1 to an important tool in our society. At the same time, electronic
circuit developments have advanced rapidly; they have had a
profound2 effect on the computer. The computer has gained significantly
in reliability and speed of operation, and has also been reduced in size
and cost. These four profound changes have been primarily the result of
vastly improved electronic circuit technology.
Electronic vacuum tubes were used in the earliest computers. They
were replaced by solid-state electronic devices3 toward the end of the
1950's. A solid-state component is a physical device whose operation
depends on the control of electric or magnetic phenomena in solids; for
example, a transistor, crystal diode, or ferrite core4. Solid-state circuits
brought about the reliability and flexibility required by the more
demanding applications of computers in industry. Probably the most
important solid-state device used in computers is the semi-conductor,
a solid-state element whose properties lie between those of a metal,
or good conductor, and those of a poor conductor, such as an
insulator. Perhaps the best-known semi-conductor device is the transistor.
The advances in electronic circuit technologies have resulted in
changes of "orders of magnitude" where an order of magnitude is equal
to a factor of ten.
The number of installed computers grew from 5000 in 1960 to approximately 80,000 in 1970. Also, the number of circuits employed per
computer installation has significantly increased. The first computers using
solid-state devices employed 20,000 circuits. Today computers using
transistors may contain more than 100,000 circuits. The trend is likely to
continue; it has been made possible by the continued decrease in size,
power dissipation, and cost of solid-state circuits, and by their improved
reliability. Note that what was used in a "high performance" computer in 1965
became commonly used in 1968. The speed of the logic circuits is given in
nanoseconds (10⁻⁹ seconds).
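Since an order of magnitude is a factor of ten, the growth figures just quoted translate directly into orders of magnitude (a short computation in Python):

```python
import math

# Growth figures quoted above, expressed in orders of magnitude.
installed_1960, installed_1970 = 5_000, 80_000
circuits_early, circuits_today = 20_000, 100_000

print(f"installed computers: {math.log10(installed_1970 / installed_1960):.2f}"
      " orders of magnitude")   # about 1.2
print(f"circuits per computer: {math.log10(circuits_today / circuits_early):.2f}"
      " orders of magnitude")   # about 0.7
```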
Existing Satellite Communications Systems
In a satellite communications system, satellites in orbit provide links
between terrestrial stations sending and receiving radio signals. An earth
station transmits the signal to the satellite, which receives it, amplifies it
and relays it to a receiving earth station. At the frequencies involved,
radio waves are propagated in straight lines, so that in order to perform
its linking and relay functions, the satellite must be 'visible' — that is,
above the horizon — at both the sending and receiving earth stations
during the transmission of the message.
There are at present two different types of systems by which satellites
are so positioned: 'synchronous' and 'random orbit'. A satellite placed in
orbit above the equator at an altitude of 22,300 miles (35,000 km) will
orbit the Earth once every 24 hours. Since its speed is equal to that of
the Earth's rotation, it will appear to hang1 motionless over a single spot
on the Earth's surface. Such a satellite is called a synchronous satellite,
and the orbit at 22,300 miles above the equator is known as 'the
geostationary orbit'. A synchronous satellite is continuously visible over
about one-third of the Earth (excluding extreme northern and southern
latitudes). Thus a system of three such satellites, properly positioned and
linked, can provide coverage of the entire surface of the Earth, except
for the arctic and antarctic regions.
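The quoted figures can be recovered from Kepler's third law. The sketch below uses standard values for the Earth's gravitational parameter and radius (they are not given in the text) and also estimates the fraction of the Earth's surface visible from the geostationary orbit:

```python
import math

GM = 3.986004e14    # Earth's gravitational parameter, m^3/s^2 (standard)
R_earth = 6.371e6   # mean Earth radius, m (standard)
T = 86164.0         # sidereal day, s

# Kepler's third law for a circular orbit: r^3 = GM * T^2 / (4 pi^2).
r = (GM * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude = r - R_earth
print(f"altitude: {altitude/1e3:.0f} km = {altitude/1609.34:.0f} miles")
# about 35,790 km, close to the quoted 22,300 miles

# Geometric fraction of the surface visible from that altitude:
visible = 0.5 * (1.0 - R_earth / r)
print(f"visible fraction: {visible:.0%}")  # about 42% by line of sight;
# with a practical minimum elevation angle this drops to roughly one-third.
```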
A satellite in any orbit other than a synchronous orbit will be
simultaneously visible to any given pair of earth stations for only a portion
of each day. In order to provide continuous communication between
such stations, more than one satellite would be required, orbiting in such
a way that when the first satellite disappeared over the horizon from one
station, another had appeared and was visible to both sending and
receiving earth stations. The number of such satellites required to
provide continuous communication depends on the angle and altitude of
the orbit. The number could be minimized if the spacing between the
satellites were precisely controlled (controlled-orbit system), but a
somewhat larger number with random spacings can achieve the same
result (random-orbit system).
Since the synchronous satellite remains stationary with respect to any
earth station, it is relatively simple to keep the antennas at the
sending and receiving stations properly pointed at the satellite. Only
small corrections for orbital errors are required. In a random-orbit
system, the earth-station antenna must track the satellite across the
sky. Moreover, if continuous communication is to be maintained, a
second antenna must be in readiness at each earth station to pick up the
following satellite as the first one disappears over the horizon.
At present, there are operating systems of both main types. Intelsat
operates a synchronous system providing global coverage, with satellites
positioned above the Atlantic, Pacific, and Indian Oceans. The Soviet
Orbita system, used for space network domestic communications
(including television distribution) within the USSR, is a random-orbit
system, using eight satellites and providing continuous 24-hour
communications. The satellites are spaced so that two of them are always
over the Northern Hemisphere5, and the orbits are such that a satellite is
near the apogee of its orbit during the time when it is in operation. Its
apparent motion with respect to the Earth's surface is slowest at this time,
and the tracking problem is minimized.
Hypersonic Transport
The fastest flights within the atmosphere have been made by a rocket
craft that carried fuel for only a few short minutes of powered flight. For
short experimental flights, that is fine; for longer trips and higher
payloads within the atmosphere, other forms of propulsion, such as the
ramjet, are necessary.
The ramjet lies somewhere between the jet and the rocket. As the next
step, supersonic ramjets may even play a role in sending payloads
into space.
If fossil fuels must be phased out, hydrogen may become the fuel of
the future. The light weight of hydrogen in combination with its high
energy value gives it the highest energy density per pound of all fuels. A
hypersonic, hydrogen-fueled aircraft has already been proposed. The
supercold liquid hydrogen would not only be a good fuel; it would also
provide an answer to a continuing problem in high-speed flight:
frictional heating due to air resistance. Temperatures of leading edges can
go up into thousands of degrees, possibly causing weakening of structural
members. The liquid hydrogen, at −423°F, could be used to help cool
these areas.
Someday in the future our supersonic planes may seem slow and
old-fashioned. Plans are already on the drawing board for aircraft capable
of a speed of 11,000 mph.
And why not? Materials technology is advancing rapidly and may provide
substances able to stand up against the multithousand degree
temperatures that will be encountered. Programs are already under way
to develop a hypersonic test engine and to develop and test lightweight
structures capable of operating at high speeds.
There is no theoretical reason why these or even higher speeds
cannot be attained. Some aircraft experts predict we will see such craft
in service by 1985 or 1990.
At 18,000 mph, however, the craft has reached orbital speed. And at
25,000 mph, the problem is no longer how to keep it up, but how to keep
it down — that is, how to keep it from flying off into space.
Even at 11,000 mph, which can probably be attained without any
revolutionary development, no two cities on earth would be more than
three-quarters of an hour apart. How small would our world be then?