Applications and Basic Principles of Absorbers

ADRIANO PEDRO RODRIGUES
ID: UB18207SSO26040
COURSE TITLE: ELECTRONIC ENGINEERING
Index
1- Introduction……………………………………………………………………………….pag. 5
1.- Electronic engineering….…………………………………………………………..pag. 6
1.1-History of electronic engineering.………………………………………………pag. 6
1.2-The vacuum tube detector……………………….…………………………………pag. 7
1.3-History of computing hardware………..…………………………………………pag. 7
1.4-Desktop calculators..……………………………………………………………………pag. 9
1.5-Digital computation..………………………………………………………………………pag. 9
1.6-Commercial computers…………….………………………………………………………pag.11
1.7-Microprocessors…………….………………………………………………..…………..pag.12
1.8-Electromagnetism & photoelectric effect..….……………………………….pag.13
1.8.1-History………………..………………………………………………………………….……pag.13
1.8.2- Classical electromagnetism…………………………………………………………pag.14
1.8.3-Electromagnetic waves…………………………………………………..……………pag.14
1.8.4-Introduction and early historical view……………………………………..……pag.14
1.8.5-Traditional explanation……….…………………………………………….……….…pag.15
1.8.6-Stoletov: The first law of photoeffect…….…………………………..…………pag.15
1.8.7-Einstein: light quanta…………………………………………………..…………………pag.16
1.8.8-Uses and effects……………….……………………………..……………………………pag.16
1.8.9-Photoelectron spectroscopy..……………………………………………..…………pag.16
1.8.10-Cross section…………………………….…………………………………………………pag.17
1.8.11-Electromagnetism units……………………………………………………….………pag.17
1.8.12-Electromagnetic phenomena…………………………………………….…………pag.17
1.8.13-Electronic devices and circuits………………………………….………………….pag.17
1.8.14-Analog circuits……………………………………………………………………………………pag.17
1.8.15-Digital circuits…………………………………………………………………………………….pag.18
2.-Signal processing, telecommunications engineering & control engineering pag.18
2.1-Signal processing…………………………………………………………………..………………..pag.18
2.2-Categories of signal processing………………………………………………..……………..pag.18
2.3-Telecommunications engineering……………………………………………………………pag.19
2.4-Telecom equipment engineer…………………………………………………………………pag.19
2.5-Control engineering……………………………………………………………………..…………pag.19
2.6-Instrumentation engineering & Computer engineering………………..…………pag.20
2.7-Computer systems engineering…………………………………………………….………..pag.21
2.8-Algorithm………………………………………………………………………………..……………..pag.21
2.9-Formalization………………………………………………………….……………………………..pag.21
2.10-Expressing algorithms……………………………………………………………..…………..pag.22
2.11-Computer algorithm………………………………………………………………….…………pag.22
2.12-Algorithm analysis…………………………………………………………………..……………pag.22
2.13-Manipulation of symbols………………………………………………………..…….………pag.24
2.14-Mathematics during the 1800s up to the mid-1900s……………………………pag.24
2.15-Design methods…………………………………………………………………….……………..pag.25
2.16-Electrical laws…………………………………………………………….…………………………pag.25
2.17-Database………………………………………………………………………………………………pag.25
2.18-Database management system………………………………………………………..……pag.25
2.19-Applications of databases……………………………………………………………….…….pag.26
2.20-Digital electronics…………………………………………………………………………..pag.27
2.21-Advantages…………………………………………………………………………………..………pag.27
2.22-Disadvantages……………………………………………………………………………………….pag.27
2.23-Analog issues in digital circuits………………………………………………………………pag.27
2.24-Structure of digital systems……………………………………………………..……….pag.28
2.25-Design for testability……………………………………………………………………….pag.29
2.26-Logic families…………………………………………………………………………………………pag.29
2.27-Embedded systems…………………………………………………………………….……………pag.30
2.28-CPU platforms……………………………………………………………………..…………………pag.30
2.29-Debugging…………………………………………………………………………………….………..pag.31
3.Applications and basic principles of absorbers……………………………………..…..pag.32
3.1-Reverberation control……………………………………………………………………………….pag.32
3.2-Echo control in auditoria and lecture theatres……………………………………….….pag.34
3.3-Impedance, admittance, reflection coefficient and absorption……………….…pag.34
3.4-Natural noise control…………………………………………………………………………………pag.35
3.5-Loudspeaker cabinet………………………………………………………………………………….pag.35
3.6-Echo control in auditoria……………………………………………………………………………pag.36
3.7-Wavefronts and diffuser reflections…………………………………………………………pag.36
3.8-Blurring the focusing from concave surfaces………………………………………………pag.37
3.9-Measurement of absorber properties…………………………………………….……pag.37
3.9.1-Porous absorption……………………………………………………………………………pag.38
3.9.2-Resonant absorbers………………………………………………………………………….………pag.38
3.9.3-Helmholtz resonator…………………………………………………………………………….…pag.39
3.9.4-Active absorption in three dimensions………………………………………………………pag.40
3.9.5-Active diffusers………………………………………………………………………………..………pag.40
3.9.6-Controllers……………………………………………………………………………………………….pag.41
4.-Acoustics: an introduction………………………………………………..…………………………pag.41
4.1-Geometric room acoustics……………………………………………………………………………pag.41
4.2-Diffuse sound field………………………………………………………………………………………pag.42
4.3-Energy density and reverberation……………………………………………………………….pag.43
4.4-Electroacoustic transducers………………………………………………………………….……pag.44
4.5-Piezoelectric transducer…………………………………………………………………….………pag.44
4.6-Electrostatic transducer…………………………………………………………………………….pag.45
4.7-Magnetic transducer……………………………………………………………………………….…pag.45
4.8-Microphones………………………………………………………………………………………………pag.45
4.9-Condenser microphones………………………………………………………………………………pag.46
4.10-Piezoelectric microphones………………………………………………………………..………pag.46
4.11-Dynamic microphones………………………………………………………………………………pag.46
4.12-Carbon microphones……………………………………………………………………………..pag.47
4.13-Hydrophones…………………………………………………………………………………………….pag.47
4.14-Loudspeaker and other electroacoustic sound sources………………………….pag.47
4.14.1-Dynamic loudspeaker…………………………………………………………………….………pag.47
4.14.2-Electrostatic or condenser loudspeaker…………………………………………….…..pag.47
4.14.3-Magnetic loudspeaker……………………………………………………………………………pag.47
4.14.4-The closed loudspeaker cabinet……………………………………………………………..pag.47
4.14.5-The bass-reflex cabinet………………………………………………………………………..…pag.48
4.14.6-Horn loudspeaker…………………………………………………………………………..………pag.48
4.14.7-Loudspeaker directivity………………………………………………………………..…………pag.48
5.General analysis………………………………………………………………………………………………pag.49
6.Conclusion………………………………………………………………………………………………………pag.50
7.Bibliography……………………………………………………………………………………………………pag.51
INTRODUCTION
Electronic engineering is an engineering discipline which uses the scientific
knowledge of the behavior and effects of electrons to develop components,
devices, systems, or equipment (as in electron tubes, transistors, integrated
circuits, and printed circuit boards) that use electricity as part of their
driving force.
In this work on electronic engineering, we present our acquired knowledge by
discussing the themes and issues that we consider most important and essential
to its evaluation.
Apart from historical aspects, various topics are also addressed relating to
the development of new technologies, as well as the formulas and calculations
that contributed to the development and validation of those technologies.
We all know how important a role electrical and electronic engineering has
played in our daily lives, to the point of being present in almost every branch
of human activity, above all in communications, information, and production.
In this work, special attention is given to topics and knowledge related to the
chosen course (sound engineering), a very broad field sustained mainly by
electronics, construction, innovation, sensitivity, etc.
The use and development of new technologies is making life easier in everything
linked to communication and the media in general.
Sound, in particular, is something that accompanies us throughout our lives:
from the alarm clock that wakes us in the morning to the many different sounds
that surround us, which transmit sound images of our surroundings and relate to
all human activity.
There are countries where we can hardly find a calm environment, as human
activity, among other causes, produces stress and consequently health problems
in the population.
Adriano Pedro Rodrigues
2013-01-21
1. ELECTRONIC ENGINEERING
Electronic engineering is an engineering discipline which uses the
scientific knowledge of the behavior and effects of electrons to develop
components, devices, systems, or equipment (as in electron tubes,
transistors, integrated circuits, and printed circuit boards) that use
electricity as part of their driving force.
That encompasses many subfields including those that deal with power,
instrumentation engineering, telecommunications, and semiconductor circuit
design amongst many others.
The name electrical engineering is still used to cover electronic engineering
at some of the older (notably American) universities, and graduates there are
called electrical engineers. Nevertheless, the distinction between electronic
and electrical engineers is becoming sharper: while electrical engineers use
voltage and current to deliver power, electronic engineers use voltage and
current to deliver information through information technology.
1.1 History of Electronic Engineering
Electronic engineering as a profession sprang from technological improvements
in the telegraph industry in the late 1800s and the radio and the telephone
industries in the early 1900s. People were attracted to radio by the technical
fascination it inspired, first in receiving and then in transmitting.
In 1948, came the transistor and in 1960, the IC to revolutionize the electronic
industry. In the UK, the subject of electronic engineering became distinct from
electrical engineering as a university degree subject around 1960.
Early electronics
Marconi's 1896 patent
In 1893, Nikola Tesla made the first public demonstration of radio
communication. Addressing the Franklin Institute in Philadelphia and the
National Electric Light Association, he described and demonstrated in detail
the principles of radio communication.
In 1896, Guglielmo Marconi went on to develop a practical and widely used
radio system. In 1904, John Ambrose Fleming, the first professor of electrical
Engineering at University College London, invented the first radio tube, the
diode.
In 1906, Robert von Lieben and Lee De Forest independently developed the
amplifier tube, called the triode.
Vacuum tubes remained the preferred amplifying device for 40 years, until
researchers working for William Shockley at Bell Labs invented the transistor in
1947. In the following years, transistors made small portable radios, or
transistor radios, possible as well as allowing more powerful mainframe
computers to be built. Transistors were smaller and required lower voltages
than vacuum tubes to work.
The terms 'wireless' and 'radio' were then used to refer to anything electronic.
Before the invention of the integrated circuit in 1959, electronic circuits were
constructed from discrete components that could be manipulated by hand.
Non-integrated circuits consumed much space and power, were prone to failure,
and were limited in speed, although they are still common in simple applications. By
contrast, integrated circuits packed a large number — often millions — of tiny
electrical components, mainly transistors, into a small chip around the size of a
coin.
1.2 The vacuum tube detector
The invention of the triode amplifier, generator, and detector made audio
communication by radio practical. Through the mid-1920s the most common
type of receiver was the crystal set. In the 1920s, amplifying vacuum tubes
revolutionized both radio receivers and transmitters.
In 1928 Philo Farnsworth made the first public demonstration of purely
electronic television.
One of the latest and most advanced technologies in TV screens and displays,
based entirely on electronic principles, is the OLED (organic light-emitting
diode) display, which is likely to replace LCD and plasma technologies.
During World War II, much effort was expended on the electronic location of
enemy targets and aircraft. These included radio beam guidance of bombers,
electronic countermeasures, early radar systems, etc. During this time very
little if any effort was expended on consumer electronics developments.
1.3 History of computing hardware
The elements of computing hardware have undergone significant improvement
over their history. This improvement has triggered worldwide use of the
technology: performance has improved and the price has declined, making
computers accessible to ever-increasing sectors of the world's population.
Computing hardware has become a platform for uses other than computation, such
as automation, communication, control, entertainment, and education.
The von Neumann architecture unifies current computing hardware
implementations. The history of computer data storage is tied to the
development of computers. The major elements of computing hardware
implement abstractions: input, output, memory, and processor. A processor is
composed of control and datapath. In the
von Neumann architecture, control of the datapath is stored in memory. This
allowed control to become an automatic process; the datapath could be under
software control, perhaps in response to events.
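The stored-program idea this describes can be sketched as a fetch-decode-execute loop; the toy instruction set, opcodes, and memory layout below are illustrative assumptions, not any historical machine's design:

```python
# Minimal sketch of a von Neumann machine: program and data share one memory,
# and the control unit fetches instructions from that same memory one by one.

def run(memory):
    acc = 0                        # accumulator (datapath register)
    pc = 0                         # program counter (control)
    while True:
        op, arg = memory[pc]       # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program occupies cells 0-3; data occupies cells 4-6 of the same memory.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 2, 3, 0]
result = run(memory)
print(result[6])   # -> 5  (2 + 3, stored back into memory)
```

Because instructions live in ordinary memory cells, control becomes an automatic, software-driven process, which is the point the paragraph above makes.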
Analog computers have used lengths, pressures, voltages, and currents to
represent the results of calculations. Eventually the voltages or currents were
standardized, and then digitized. Digital computing elements have ranged from
mechanical gears, to electromechanical relays, to vacuum tubes, to transistors,
and to integrated circuits, all of which are currently implementing the von
Neumann architecture.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is
considered to be the earliest programmable analog computer.
Yazu Arithmometer. Patented in Japan in 1903. Note the lever for turning the
gears of the calculator.
German polymath Wilhelm Schickard built the first digital mechanical calculator
in 1623, and thus became the father of the computing era.
Leibniz also described the binary numeral system, a central ingredient of all
modern computers. However, up to the 1940s, many subsequent designs (including
Charles Babbage's machines of the 1800s and even the ENIAC of 1945) were based
on the decimal system. An exception was the Yazu Arithmometer of 1903: it
consisted of a single cylinder and 22 gears, and employed the mixed base-2 and
base-5 number system familiar to users of the soroban (Japanese abacus).
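Leibniz's binary numeral system can be illustrated with a short conversion routine (a standard repeated-division-by-two sketch, not taken from the text):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to its binary digit string
    by repeated division by 2 (Leibniz's binary numeral system)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder is the next least-significant bit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(22))   # -> 10110
```

Every value a modern computer stores is ultimately held in this two-symbol form, which is why the binary system is called a central ingredient above.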
In 1835, Babbage described his analytical engine. It was the plan of a
general-purpose programmable computer, employing punch cards for input and a
steam engine for power.
IBM 407 tabulating machine (1961)
A reconstruction of the Difference Engine II, an earlier, more limited design, has
been operational since 1991 at the London Science Museum. With a few trivial
changes, it works as Babbage designed it and shows that Babbage was right in
theory.
Hollerith's company eventually became the core of IBM. IBM developed punch
card technology into a powerful tool for business data-processing and produced
an extensive line of unit record equipment. By 1950, the IBM card had become
ubiquitous in industry and government.
The Thomas J. Watson Astronomical Computing Bureau at Columbia University
performed astronomical calculations representing the state of the art in
computing.
Computer users, for example science and engineering students at
universities, would submit their programming assignments to their local
computer center in the form of a stack of cards, one card per program line.
Punched cards are still used and manufactured to this day, and their distinctive
dimensions (and 80-column capacity) can still be recognized in forms, records,
and programs around the world.
1.4 Desktop calculators
Companies like Friden, Marchant Calculator and Monroe made desktop
mechanical calculators from the 1930s that could add, subtract, multiply and
divide.
Over time, during the 1950s and 1960s, a variety of different brands of
mechanical calculator appeared on the market. The first all-electronic
desktop calculator was the British ANITA Mk.VII, which used a Nixie tube
display and 177 subminiature thyratron tubes.
Advanced analog computers
Before World War II, mechanical and electrical analog computers were
considered the "state of the art", and many thought they were the future of
computing.
Unlike modern digital computers, analog computers are not very flexible, and
need to be reconfigured (i.e., reprogrammed) manually to switch them from
working on one problem to another. Analog computers had an advantage over
early digital computers in that they could be used to solve complex problems
using behavioral analogues while the earliest attempts at digital computers were
quite limited.
But as digital computers have become faster and use larger memory (for
example, RAM or internal storage), they have almost entirely displaced analog
computers.
1.5 Digital computation
The era of modern computing began with a flurry of development before and
during World War II, as electronic circuit elements replaced mechanical
equivalents and digital calculations replaced analog calculations. Machines
such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and
the ENIAC were built by hand using circuits containing relays or valves (vacuum
tubes), and often used punched cards or punched paper tape for input and as
the main (non-volatile) storage medium.
For a computing machine to be a practical general-purpose computer, there
must be some convenient read-write mechanism, punched tape, for example.
Nine-track magnetic tape
John von Neumann defined an architecture which uses the same memory both
to store programs and data: virtually all contemporary computers use this
architecture (or some variant). While it is theoretically possible to implement a
full computer entirely mechanically (as Babbage's design showed), electronics
made possible the speed and later the miniaturization that characterize modern
computers.
George Stibitz is internationally recognized as one of the fathers of the modern
digital computer. While working at Bell Labs in November 1937, Stibitz invented
and built a relay-based calculator that he dubbed the "Model K" (for "kitchen
table", on which he had assembled it), which was the first to calculate using
binary form.
The Atanasoff-Berry Computer was the world's first electronic digital computer.
The design used over 300 vacuum tubes and employed capacitors fixed in a
mechanically rotating drum for memory. Though the ABC machine was not
programmable, it was the first to use electronic tubes in an adder.
ENIAC
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the
first electronic general-purpose computer. It combined, for the first time, the
high speed of electronics with the ability to be programmed for many complex
problems.
The computer MESM (МЭСМ, Small Electronic Calculating Machine) became
operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of
power. It could perform approximately 3,000 operations per second.
1.6 Commercial computers
IBM introduced a smaller, more affordable computer in 1954 that proved very
popular.
The IBM 650 weighed over 900 kg, and the attached power supply weighed around
1,350 kg. The first transistorized computer was built at the University of
Manchester and was operational by 1953. The bipolar junction transistor (BJT)
had been invented in 1947. If no electrical current flows through the
base-emitter path of a bipolar transistor, the transistor's collector-emitter
path blocks electrical current (and the transistor is said to "turn full off").
If sufficient current flows through the base-emitter path of a transistor, that
transistor's collector-emitter path also passes current (and the transistor is
said to "turn full on"). Current flow or current blockage represent binary 1
(true) or 0 (false), respectively. From 1955 onwards bipolar junction
transistors replaced vacuum tubes in computer designs, giving rise to the
"second generation" of computers.
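The switching behavior described above can be caricatured in a few lines; the current threshold here is an illustrative assumption, not a datasheet value for any real device:

```python
def bjt_switch(base_current_mA, threshold_mA=0.5):
    """Toy model of a BJT used as a digital switch: sufficient base-emitter
    current turns the transistor 'full on' (collector-emitter path conducts,
    representing binary 1); no base current leaves it 'full off' (path blocks,
    representing binary 0). The threshold is an assumed illustrative value."""
    return 1 if base_current_mA >= threshold_mA else 0

print(bjt_switch(1.0))   # -> 1  (conducting: binary 1 / true)
print(bjt_switch(0.0))   # -> 0  (blocking:  binary 0 / false)
```

Chaining such switches is what lets transistor circuits implement the binary logic gates and flip-flops mentioned below.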
Compared to vacuum tubes, transistors have many advantages: they are less
expensive to manufacture and are much faster, switching from the condition 1
to 0 in millionths or billionths of a second. Transistor volume is measured in
cubic millimeters compared to vacuum tubes' cubic centimeters. Transistors'
lower operating temperature increased their reliability, compared to vacuum
tubes.
Transistorized computers could contain tens of thousands of binary logic circuits
in a relatively compact space.
Transistors greatly reduced computers' size, initial cost, and operating cost.
Typically, second-generation computers were composed of large numbers of
printed circuit boards, such as the IBM Standard Modular System, each carrying
one to four logic gates or flip-flops.
RAMAC DASD
The second generation disk data storage units were able to store tens of
millions of letters and digits. Multiple peripherals could be connected to the CPU,
increasing the total memory capacity to hundreds of millions of characters.
During the second generation remote terminal units (often in the form of
teletype machines like a Friden Flexowriter) saw greatly increased use.
Telephone connections provided sufficient speed for early remote terminals and
allowed hundreds of kilometers of separation between remote terminals and the
computing center. Eventually these standalone computer networks would be
generalized into an interconnected network of networks—the Internet.
Intel 8742 eight-bit microcontroller IC
The explosion in the use of computers began with "third-generation" computers,
making use of Jack St. Clair Kilby's and Robert Noyce's independent invention
of the integrated circuit (or microchip), which later led to the invention of the
microprocessor, by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.
As late as 1975, Sperry Univac continued the manufacture of second-generation
machines such as the UNIVAC 494. The Burroughs large systems such as the B5000
were stack machines, which allowed for simpler programming. These pushdown
automata were also implemented in minicomputers and microprocessors later,
which influenced programming language design.
Minicomputers served as low-cost computer centers for industry, business and
universities.
Microcomputers, the first of which appeared in the 1970s, became ubiquitous in
the 1980s and beyond. Steve Wozniak, co-founder of Apple Computer, is
credited with developing the first mass-market home computers.
In the twenty-first century, multi-core CPUs became commercially available.
When the CMOS field effect transistor-based logic gates supplanted bipolar
transistors, computer power consumption could decrease dramatically (a CMOS
field-effect transistor only draws significant current during the 'transition'
between logic states, unlike the substantially higher (and continuous) bias
current draw of a BJT). This has allowed computing to become a commodity
which is now ubiquitous, embedded in many forms, from greeting cards and
telephones to satellites.
The arithmetic performance of these machines allowed engineers to develop
completely new technologies and achieve new objectives. Early examples
include the Apollo missions and the NASA moon landing.
The invention of the transistor in 1947 by William B. Shockley, John Bardeen
and Walter Brattain opened the door for more compact devices and led to the
development of the integrated circuit in 1959 by Jack Kilby.
1.7 Microprocessors
The first PC was announced to the general public on the cover of the January
1975 issue of Popular Electronics.
In the field of electronic engineering, engineers design and test circuits that use
the electromagnetic properties of electrical components such as resistors,
capacitors, inductors, diodes and transistors to achieve a particular functionality.
The tuner circuit,
which allows the user of a radio to filter out all but a single station, is just one
example of such a circuit.
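A tuner of this kind is typically an LC resonant circuit, whose resonant frequency follows f = 1/(2π√(LC)); the component values below are assumed purely for illustration:

```python
import math

def resonant_frequency_hz(inductance_H, capacitance_F):
    """Resonant frequency of an LC tuned circuit: f = 1 / (2*pi*sqrt(L*C)).
    At resonance the circuit favors one station's frequency and rejects the rest."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_H * capacitance_F))

# Illustrative values for an AM-band tuner (assumed, not from the text):
f = resonant_frequency_hz(200e-6, 300e-12)   # 200 uH inductor, 300 pF capacitor
print(round(f / 1e3))   # -> 650  (about 650 kHz, inside the AM broadcast band)
```

Varying the capacitance (the tuning knob on an old radio) moves this resonant frequency, which is how the listener selects a station.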
In designing an integrated circuit, electronics engineers first construct circuit
schematics that specify the electrical components and describe the
interconnections between them.
Integrated circuits and other electrical components can then be assembled on
printed circuit boards to form more complicated circuits. Today, printed circuit
boards are found in most electronic devices including televisions, computers
and audio players.
1.8 Electromagnetism & Photoelectric Effect
Electromagnetism is the physics of the electromagnetic field, a field that exerts
a force on particles with the property of electric charge and is reciprocally
affected by the presence and motion of such particles.
A changing magnetic field produces an electric field (this is the phenomenon of
electromagnetic induction, the basis of operation for electrical generators,
induction motors, and transformers). Similarly, a changing electric field
generates a magnetic field.
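Electromagnetic induction can be made quantitative with Faraday's law, EMF = −N·ΔΦ/Δt; the coil and flux numbers below are assumed for illustration:

```python
def induced_emf(turns, flux_change_Wb, time_s):
    """Faraday's law of induction: EMF = -N * dPhi/dt.
    A changing magnetic flux through a coil produces an electric field,
    seen as a voltage across the coil (the basis of generators, induction
    motors, and transformers mentioned above)."""
    return -turns * flux_change_Wb / time_s

# Illustrative numbers (assumed): the flux through a 100-turn coil rises by
# 0.002 Wb over 0.1 s.
print(round(induced_emf(100, 0.002, 0.1), 6))   # -> -2.0  (volts)
```

The minus sign expresses Lenz's law: the induced voltage opposes the change in flux that produces it.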
The magnetic field is produced by the motion of electric charges, i.e., electric
current.
The magnetic field causes the magnetic force associated with magnets.
The theoretical implications of electromagnetism led to the development of
special relativity by Albert Einstein in 1905; from this it was shown that
magnetic fields and electric fields are convertible under relative motion, and
this led to their unification as electromagnetism.
1.8.1 History
While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted
developed an experiment that provided surprising evidence. As he was setting
up his materials, he noticed a compass needle deflected from magnetic north
when the electric current from the battery he was using was switched on and
off. This deflection convinced him that magnetic fields radiate from all sides
of a wire carrying an electric current, just as light and heat do, and that it
confirmed a direct relationship between electricity and magnetism.
Ørsted's discovery also represented a major step toward a unified concept of
energy.
This unification, which was observed by Michael Faraday, extended by James
Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich
Hertz, is one of the accomplishments of 19th century Mathematical Physics.
Different frequencies of oscillation give rise to the different forms of
electromagnetic radiation, from radio waves at the lowest frequencies, to visible
light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relation between electricity and
magnetism. In 1802 Gian Domenico Romagnosi, an Italian legal scholar,
deflected a magnetic needle by electrostatic charges. Actually, no galvanic
current existed in the setup and hence no electromagnetism was present.
The force that the electromagnetic field exerts on electrically charged particles,
called the electromagnetic force, is one of the fundamental forces. The other
fundamental forces are the strong nuclear force (which holds atomic nuclei
together), the weak nuclear force, and
the gravitational force. All other forces are ultimately derived from these
fundamental forces.
The electromagnetic force is the one responsible for practically all the
phenomena encountered in daily life, with the exception of gravity. All the forces
involved in interactions between atoms can be traced to the electromagnetic
force acting on the electrically charged protons and electrons inside the atoms.
It also includes all forms of chemical phenomena, which arise from interactions
between electron orbitals.
1.8.2 Classical electromagnetism
Classical electromagnetism (or classical electrodynamics) is a branch of
theoretical physics that studies consequences of the electromagnetic forces
between electric charges and currents. It provides an excellent description of
electromagnetic phenomena whenever the relevant length scales and field
strengths are large enough that quantum mechanical effects are negligible (see
quantum electrodynamics).
The outstanding problem with classical electrodynamics, as stated by Jackson,
is that we are able to obtain and study relevant solutions of its basic
equations only in two limiting cases: "... one in which the sources of charges
and currents are specified and the resulting electromagnetic fields are
calculated, and the other in which external electromagnetic fields are
specified and the motion of charged particles or currents is calculated...
Occasionally, the two problems are combined."
1.8.3 Electromagnetic waves
A changing electromagnetic field propagates away from its origin in the form of
a wave.
These waves travel in vacuum at the speed of light and exist in a wide spectrum
of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in
order of increasing frequency): radio waves, microwaves, light (infrared, visible
light and ultraviolet), x-rays and gamma rays. In the field of particle physics this
electromagnetic radiation is the manifestation of the electromagnetic interaction
between charged particles.
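Since all of these waves travel at the speed of light in vacuum, wavelength and frequency are tied together by λ = c/f; the band frequencies below are representative order-of-magnitude choices, not values from the text:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """All electromagnetic waves travel at c in vacuum, so wavelength = c / f."""
    return C / frequency_hz

# Representative frequencies for some of the bands named above:
for name, f in [("radio (1 MHz)", 1e6),
                ("microwave (10 GHz)", 1e10),
                ("visible light (600 THz)", 6e14)]:
    print(f"{name}: wavelength = {wavelength_m(f):.3e} m")
```

The same relation read the other way (f = c/λ) explains the ordering in the text: shorter wavelengths correspond to higher frequencies, up through X-rays and gamma rays.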
Photoelectric effect
The photoelectric effect is a phenomenon in which electrons are emitted from
matter (metals and non-metallic solids, liquids, or gases) after the absorption of
energy from electromagnetic radiation such as X-rays or visible light. The
emitted electrons can be referred to as photoelectrons in this context. The
effect is also termed the Hertz Effect.
The photoelectric effect takes place with photons with energies from about a
few electronvolts to, in some cases, over 1 MeV.
1.8.4 Introduction and early historical view
With James Clerk Maxwell's wave theory of light, which was thought to predict
that the electron energy would be proportional to the intensity of the radiation. In
1905, Einstein solved this apparent paradox by describing light as composed of
discrete quanta, now called photons, rather than continuous waves.
A photon above a threshold frequency has the required energy to eject a single
electron, creating the observed effect. This discovery led to the quantum
revolution in physics and earned Einstein the Nobel Prize in 1921.
1.8.5 Traditional explanation
In the photoemission process, if an electron within some material absorbs the
energy of one photon and thus has more energy than the work function (the
electron binding energy) of the material, it is ejected. If the photon energy is too
low, the electron is unable to escape the material. Increasing the intensity of the
light beam increases the number of photons in the light beam, and thus
increases the number of electrons emitted, but does not increase the energy
that each electron possesses. Thus the energy of the emitted electrons does
not depend on the intensity of the incoming light, but only on the energy of the
individual photons.
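This behavior is captured by Einstein's photoelectric equation, K_max = hf − φ, where φ is the work function; the work function used below is an assumed illustrative value:

```python
H = 6.62607015e-34    # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electronvolt

def max_kinetic_energy_eV(frequency_hz, work_function_eV):
    """Einstein's photoelectric equation: K_max = h*f - phi.
    Returns 0 (no emission) when the photon energy is below the work
    function, no matter how intense the light is."""
    k = H * frequency_hz / EV - work_function_eV
    return max(k, 0.0)

# Illustrative: green light (~5.45e14 Hz) on a material with a 2.1 eV work
# function (roughly that of cesium; the value is assumed for the example).
print(round(max_kinetic_energy_eV(5.45e14, 2.1), 2))   # -> 0.15  (eV)
```

Raising the light intensity only increases how many electrons carry this energy; raising the frequency increases the energy each one carries, exactly as the paragraph above states.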
According to Einstein's special theory of relativity, the relation between the
energy (E) and momentum (p) of a particle is E² = (pc)² + (mc²)², where m is
the rest mass of the particle and c is the velocity of light in a vacuum.
In 1887, Heinrich Hertz observed the photoelectric effect and the production
and reception of electromagnetic (EM) waves. His receiver consisted of a coil
with a spark gap, where a spark would be seen upon detection of EM waves.
He placed the apparatus in a darkened box to see the spark better. However,
he noticed that the maximum spark length was reduced when in the box. A
glass panel placed between the source of EM waves and the receiver absorbed
ultraviolet radiation that assisted the electrons in jumping across the gap. When
removed, the spark length would increase. He observed no decrease in spark
length when he substituted quartz for glass, as quartz does not absorb UV
radiation. Hertz concluded his months of investigation and reported the results
obtained.
1.8.6 Stoletov: the first law of photoeffect
Stoletov invented a new experimental setup which was more suitable for a
quantitative analysis of photoeffect.
He discovered the direct proportionality between the intensity of light and the
induced photo electric current (the first law of photoeffect or Stoletov's law).
He found the existence of an optimal gas pressure Pm corresponding to a
maximum photocurrent; this property was later used in the creation of solar cells.
In 1902, Philipp Lenard observed the variation in electron energy with light
frequency.
He found the electron energy by relating it to the maximum stopping potential
(voltage) in a phototube. He found that the calculated maximum electron kinetic
energy is determined by the frequency of the light. For example, an increase in
frequency results in an increase in the maximum kinetic energy calculated for
an electron upon liberation: ultraviolet radiation would require a higher applied
stopping potential to stop the current in a phototube than blue light would.
The current emitted by the surface was determined by the light's intensity, or
brightness: doubling the intensity of the light doubled the number of electrons
emitted from the surface. Lenard did not know of photons.
1.8.7 Einstein: light quanta
Einstein assumed that light itself is made up of quanta whose energy E is
proportional to the frequency f, E = hf, where h is Planck's constant.
This explained why the energy of photoelectrons depended only on the
frequency of the incident light and not on its intensity: a low-intensity,
high-frequency source could supply a few high-energy photons, whereas a
high-intensity, low-frequency source would supply no photons of sufficient
individual energy to dislodge any electrons.
Einstein's work predicted that the energy of individual ejected electrons
increases linearly with the frequency of the light.
By 1905 it was known that the energy of photoelectrons increases with
increasing frequency of incident light and is independent of the intensity of the
light.
1.8.8 Uses and effects
The photocathode contains combinations of materials such as caesium,
rubidium and antimony specially selected to provide a low work function, so
when illuminated even by very low levels of light, the photocathode readily
releases electrons.
Photomultipliers are still commonly used wherever low levels of light must be
detected.
Silicon image sensors, such as charge-coupled devices, widely used for
photographic imaging, are based on a variant of the photoelectric effect, in
which photons knock electrons out of the valence band of energy states in a
semiconductor, but not out of the solid itself.
The gold leaf electroscope is an important tool for illustrating the photoelectric
effect: when high-frequency light is shone onto the cap of a charged
electroscope, the electroscope discharges and the leaf falls limp.
The frequency of the light shining on the cap is above the cap's threshold
frequency. The photons in the light have enough energy to liberate electrons
from the cap, reducing its negative charge.
1.8.9 Photoelectron spectroscopy
Photoelectron spectroscopy is done in a high-vacuum environment, since the
electrons would be scattered by significant numbers of gas atoms present (e.g.
even in low-pressure air).
The photoelectric effect will cause spacecraft exposed to sunlight to develop a
positive charge, which can reach tens of volts.
The static charge created by the photoelectric effect is self-limiting, though,
because a more highly-charged object gives up its electrons less easily.
1.8.10 Cross section
The photoelectric effect is one interaction mechanism between photons and
atoms, but it is not the only one: it is one of 12 theoretically possible
interactions of this nature.
The probability of the photoelectric effect occurring is measured by the cross
section of interaction, σ, which has been found to be a function of the atomic
number Z of the target atom and the photon energy E. A crude approximation, for
photon energies above the highest atomic binding energy, is σ ∝ Z^n / E^3,
where n is a number which varies between 4 and 5.
1.8.11 Electromagnetic units are part of a system of electrical units based
primarily upon the magnetic properties of electric currents, the fundamental SI
unit being the ampere. The units are:
Ampere (current)
Coulomb (charge)
Farad (capacitance)
Henry (inductance)
Ohm (resistance)
Volt (electric potential)
Watt (power)
Tesla (magnetic field)
In the electromagnetic system, electrical current is a fundamental quantity
defined via Ampère's law and takes the permeability as a dimensionless
quantity (relative permeability) whose value in a vacuum is unity.
1.8.12 Electromagnetic phenomena
With the exception of gravitation, electromagnetic phenomena as described by
quantum electrodynamics account for almost all physical phenomena
observable to the unaided human senses, including light and other
electromagnetic radiation, all of chemistry, most of mechanics (excepting
gravitation), and of course magnetism and electricity.
1.8.13 Electronic devices and circuits
Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon:
diffusion current, drift current, mobility, resistivity. Generation and recombination
of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS
capacitor, MOSFET, LED, p-i-n and avalanche photo diode, LASERs. Device
technology: integrated circuit fabrication process, oxidation, diffusion, ion
implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
1.8.14 Analog circuits: Equivalent circuits (large and small-signal) of diodes,
BJTs, JFETs, and MOSFETs. Simple diode circuits, clipping, clamping, rectifier.
Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single-and
multi-stage, differential, operational, feedback and power. Analysis of amplifiers;
frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal
oscillators; criterion for oscillation; single-transistor and op-amp configurations.
Function generators and waveshaping circuits, Power supplies.
1.8.15 Digital circuits: minimization of Boolean functions; logic gates; digital
IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic
circuits, code converters, multiplexers and decoders. Sequential circuits:
latches and flip-flops, counters and shift registers.
Sample and hold circuits, ADCs, DACs. Semiconductor memories.
Microprocessor 8086: architecture, programming, memory and I/O interfacing.
2. Signal processing, Telecommunications
Engineering & Control engineering
It deals with the analysis and manipulation of signals. Signals can be either
analog, in which case the signal varies continuously according to the
information, or digital, in which case the signal varies according to a series of
discrete values representing the information.
2.1 Signal processing is an area of applied mathematics that deals with
operations on or analysis of signals, in either discrete or continuous time, in
order to perform useful operations on those signals. Depending upon the
application, a useful operation could be control, data compression, data
transmission, denoising, prediction, filtering, smoothing, deblurring,
tomographic reconstruction, identification, classification, or a variety of
other operations.
Signals of interest can include sound, images, time-varying measurement
values and sensor data, for example biological data such as
electrocardiograms, control system signals, telecommunication transmission
signals such as radio signals, and many others.
2.2 Categories of signal processing
Analog signal processing — for signals that have not been digitized, as in
classical radio, telephone, radar, and television systems. This involves linear
electronic circuits such as passive filters, active filters, additive mixers,
integrators and delay lines. It also involves non-linear circuits such as
compandors, multiplicators (frequency mixers and voltage-controlled amplifiers),
voltage-controlled filters, voltage-controlled oscillators and phase-locked loops.
Analog discrete-time signal processing is a technology based on electronic
devices such as sample and hold circuits, analog time-division multiplexers,
analog delay lines and analog feedback shift registers.
Digital signal processing — for signals that have been digitized. Processing is
done by general-purpose computers or by digital circuits such as ASICs,
field-programmable gate arrays or specialized digital signal processors (DSP
chips).
Typical arithmetical operations include fixed-point and floating-point, real-valued
and complex-valued, multiplication and addition. Other typical operations
supported by the hardware are circular buffers and look-up tables. Examples of
algorithms are the Fast Fourier transform (FFT), finite impulse response (FIR)
filter, Infinite impulse response (IIR) filter, Wiener filter and Kalman filter.
For analog signals, signal processing may involve the amplification and filtering
of audio signals for audio equipment or the modulation and demodulation of
signals for telecommunications. For digital signals, signal processing may
involve the compression, error checking and error detection of digital signals.
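One of the digital filters named above, a finite impulse response (FIR) filter, can be sketched in a few lines of code; the 4-tap moving-average coefficients below are an arbitrary choice for illustration.

```python
# Minimal direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k].
def fir_filter(x, taps):
    """Convolve input samples x with the filter coefficients (taps)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:            # skip samples before the signal starts
                acc += h * x[n - k]
        y.append(acc)
    return y

taps = [0.25, 0.25, 0.25, 0.25]       # 4-tap moving average: smooths noise
noisy = [0, 1, 0, 1, 0, 1, 0, 1]      # alternating "noisy" input
smooth = fir_filter(noisy, taps)      # output settles to the mean, 0.5
```

In practice such filters run on DSP chips or FPGAs, but the arithmetic is exactly this multiply-accumulate loop.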
2.3 Telecommunications engineering
It deals with the transmission of information across a channel such as a coaxial
cable, optical fiber or free space.
Transmissions across free space require information to be encoded in a carrier
wave in order to shift the information to a carrier frequency suitable for
transmission; this is known as modulation. Popular analog modulation
techniques include amplitude modulation and frequency modulation. The choice
of modulation affects the cost and performance of a system and these two
factors must be balanced carefully by the engineer.
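The amplitude modulation mentioned above can be sketched numerically; the carrier frequency, message frequency and modulation index here are arbitrary assumptions chosen for illustration.

```python
import math

# Amplitude modulation: the message scales the amplitude (envelope)
# of a higher-frequency carrier. All parameter values are arbitrary.
def am_modulate(t, fc=1000.0, fm=50.0, m=0.5):
    """s(t) = [1 + m*cos(2*pi*fm*t)] * cos(2*pi*fc*t)."""
    return (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

# Sample one second at 8 kHz; the envelope swings between 1-m and 1+m.
samples = [am_modulate(n / 8000.0) for n in range(8000)]
peak = max(abs(s) for s in samples)   # close to 1 + m = 1.5
```

A receiver recovers the message by tracking that envelope, which is why AM demodulators can be as simple as a diode and a capacitor.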
Once the transmission characteristics of a system are determined,
telecommunication engineers design the transmitters and receivers needed for
such systems. These two are sometimes combined to form a two-way
communication device known as a transceiver.
Telecommunications is a diverse field of engineering including electronics, civil,
structural, and electrical engineering as well as being a political and social
ambassador, a little bit of accounting and a lot of project management.
Telecom engineers are often expected, as most engineers are, to provide the
best solution possible for the lowest cost to the company.
2.4 Telecom equipment engineer
A telecom equipment engineer is an electronics engineer that designs
equipment such as routers, switches, multiplexers, and other specialized
computer/electronics equipment designed to be used in the telecommunication
network infrastructure.
As electrical engineers, outside plant (OSP) engineers are responsible for the
resistance, capacitance, and inductance (RCL) design of all new plant to ensure
that telephone service is clear and crisp and that data service is clean and reliable.
Attenuation and loop loss calculations are required to determine cable length
and size required to provide the service called for.
As civil engineers, OSP engineers are responsible for drawing up plans, either by
hand or using Computer Aided Drafting (CAD) software, for how telecom plant
facilities will be placed. Often when working with municipalities trenching or
boring permits are required and drawings must be made for these.
Structural calculations are required when boring under heavy traffic areas such
as highways or when attaching to other structures such as bridges.
As Political and Social Ambassador, the OSP Engineer is the telephone
operating companies’ face and voice to the local authorities and other utilities.
2.5 Control engineering
Control systems play a critical role in space flight
Control engineering is the engineering discipline that applies control theory to
design systems with predictable behaviors. The engineering activities focus on
the mathematical modeling of systems of a diverse nature.
Control engineering has an essential role in a wide range of control systems
from a simple household washing machine to a complex high performance F-16
fighter aircraft.
The scope of classical control theory is limited to single-input and single-output
(SISO) system design.
In contrast, modern control theory is carried out in the time domain using the
state-space representation, and can deal with multi-input and multi-output
(MIMO) systems.
Today many control systems are computer controlled and consist of both digital
and analogue components. Such mixed systems can be modelled either as
continuous systems with a few discrete controllers or as fully discrete
systems. The first of these two methods is more commonly encountered in
practice because many industrial systems have many continuous-system
components, including mechanical, fluid, biological and analogue electrical
components, with only a few digital controllers.
2.6 Instrumentation Engineering &
Computer Engineering
The design of instrumentation requires a good understanding of physics that
often extends beyond electromagnetic theory. For example, radar guns use the
Doppler effect to measure the speed of oncoming vehicles. Similarly,
thermocouples use the Peltier-Seebeck effect to measure the temperature
difference between two points.
Instrumentation engineering is often viewed as the counterpart of control
engineering.
Instrumentation is the branch of engineering that deals with measurement and
control.
An instrument is a device that measures or manipulates variables such as flow,
temperature, level, or pressure. Instruments include many varied contrivances
which can be as simple as valves and transmitters, and as complex as
analyzers.
The control of processes is one of the main branches of applied
instrumentation.
In addition to measuring field parameters, instrumentation is also responsible
for providing the ability to modify some field parameters.
To control the parameters in a process or in a particular system,
microprocessors, microcontrollers, PLCs, etc. are used; their ultimate aim is
to control the parameters of the system.
2.7 Computer Systems Engineering is a discipline that combines both
Electrical Engineering and Computer Science. Computer engineers may also
work on a system's software.
The design of complex software systems is often the domain of software
engineering, which is usually considered a separate discipline.
Computer engineers usually have training in electrical engineering, software
design and hardware-software integration instead of only software engineering
or electrical engineering. Usual tasks involving computer engineers include
writing software and firmware for embedded microcontrollers, designing VLSI
chips, designing analog sensors, designing mixed signal circuit boards, and
designing operating systems. Computer engineers are also suited for robotics
research, which relies heavily on using digital systems to control and monitor
electrical systems like motors, communications, and sensors.
2.8 Algorithm
An algorithm is a finite sequence of instructions: an explicit, step-by-step
procedure for solving a problem, often used for calculation, data processing
and many other fields.
The transition from one state to the next is not necessarily deterministic; some
algorithms, known as probabilistic algorithms, incorporate randomness.
A prototypical example of an "algorithm" is Euclid's algorithm to determine the
greatest common divisor of two integers (X and Y) which are greater than one.
We follow a series of steps: in step i, we divide X by Y and find the remainder,
which we call R1. Then we move to step i + 1, where we divide Y by R1 and
find the remainder, which we call R2. If R2 = 0, we stop and say that R1 is the
greatest common divisor of X and Y. If not, we continue until Rn = 0; then Rn-1
is the greatest common divisor of X and Y.
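The steps above translate directly into code; this compact sketch loops on division with remainder rather than naming R1, R2, ... explicitly.

```python
# Euclid's algorithm: repeatedly replace (X, Y) with (Y, X mod Y)
# until the remainder is zero; the last nonzero value is the GCD.
def euclid_gcd(x, y):
    """Greatest common divisor of two positive integers."""
    while y != 0:
        x, y = y, x % y   # each loop iteration is one "step" from the text
    return x

result = euclid_gcd(48, 18)   # remainders: 12, 6, 0 -> GCD is 6
```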
We might expect an algorithm to be an algebraic equation such as y = m + n —
two arbitrary "input variables" m and n that produce an output y.
The concept of algorithm is also used to define the notion of decidability.
In logic, the time that an algorithm requires to complete cannot be measured, as
it is not apparently related with our customary physical dimension.
2.9 Formalization
Algorithms are essential to the way computers process information.
An algorithm can be considered to be any sequence of operations that can be
simulated by a Turing-complete system.
According to Savage [1987], an algorithm is a computational process defined by
a Turing machine (Gurevich 2000:3). Typically, when an algorithm is associated
with processing information, data is read from an input source, written to an
output device, and/or stored for further processing.
For any such computational process, the algorithm must be rigorously defined.
The criteria for each case must be clear (and computable).
2.10 Expressing algorithms
Algorithms can be expressed in many kinds of notation, including natural
languages, pseudocode, flowcharts, and programming languages. Natural
language expressions of algorithms tend to be verbose and ambiguous, and are
rarely used for complex or technical algorithms.
Programming languages are primarily intended for expressing algorithms in a
form that can be executed by a computer, but are often used as a way to define
or document algorithms.
Representations of algorithms are generally classed into three accepted levels
of Turing machine description (Sipser 2006:157)
1 High-level description:
"...prose to describe an algorithm, ignoring the implementation details. At this
level we do not need to mention how the machine manages its tape or head."
2 Implementation description:
"...prose used to define the way the Turing machine uses its head and the way
that it stores data on its tape. At this level we do not give details of states or
transition function."
3 Formal description:
The most detailed, "lowest level" description, which gives the Turing machine's
"state table". The simple algorithm "Add m+n", for instance, can be described
at all three of these levels.
2.11 Computer algorithms
In computer systems, an algorithm is basically an instance of logic written in
software by software developers to be effective for the intended "target"
computer(s), in order for the software on the target machines to do something.
For instance, if a person is writing software that is supposed to print out a PDF
document located at the operating system folder "/My Documents" at computer
drive "D:" every Friday at 10PM, they will write an algorithm that specifies the
following actions.
Most algorithms are intended to be implemented as computer programs.
However, algorithms are also implemented by other means, such as in a
biological neural network (for example, the human brain implementing
arithmetic or an insect looking for food), in an electrical circuit, or in a
mechanical device.
2.12 Algorithmic analysis
Methods have been developed for the analysis of algorithms to obtain such
quantitative answers; for example, the algorithm above has a time requirement
of O(n), using the big O notation with n as the length of the list. At all times the
algorithm only needs to remember two values: the largest number found so far,
and its current position in the input list. Therefore it is said to have a space
requirement of O(1), if the space required to store the input numbers is not
counted, or O(n) if it is counted. Different algorithms may complete the same
task with a different set of instructions in less or more time, space, or 'effort'
than others.
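The quantities discussed above can be seen in a concrete algorithm: finding the largest number in a list takes O(n) time while remembering only one extra value and the loop position, i.e. O(1) extra space.

```python
# Find the largest number in a list: one pass (O(n) time),
# remembering only the best value seen so far (O(1) extra space).
def find_largest(items):
    largest = items[0]
    for value in items[1:]:     # visits each input element exactly once
        if value > largest:
            largest = value     # the only state carried between steps
    return largest

biggest = find_largest([3, 1, 4, 1, 5, 9, 2])
```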
The analysis and study of algorithms is a discipline of computer science, and is
often practiced abstractly without the use of a specific programming language or
implementation. In this sense, algorithm analysis resembles other mathematical
disciplines in that it focuses on the underlying properties of the algorithm and
not on the specifics of any particular implementation.
Iterative algorithms use repetitive constructs like loops and sometimes
additional data structures like stacks to solve the given problems.
Logical: An algorithm may be viewed as controlled logical deduction. This
notion may be expressed as: Algorithm = logic + control (Kowalski 1979).
The logic component expresses the axioms that may be used in the
computation and the control component determines the way in which deduction
is applied to the axioms. This is the basis for the logic programming paradigm.
Algorithms are usually discussed with the assumption that computers execute
one instruction of an algorithm at a time.
Parallel or distributed algorithms divide the problem into more symmetrical or
asymmetrical subproblems and collect the results back together.
The resource consumption in such algorithms is not only processor cycles on
each processor but also the communication overhead between the processors.
Classification
There are various ways to classify algorithms, each with its own merits.
By implementation
One way to classify algorithms is by implementation means
Recursion or iteration: A recursive algorithm is one that invokes (makes
reference to) itself repeatedly until a certain condition matches, which is a
method common to functional programming.
Some problems are naturally suited for one implementation or the other. For
example, the Towers of Hanoi problem is well understood in its recursive
implementation.
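The Towers of Hanoi example above has a natural recursive sketch: move n-1 disks aside, move the largest disk, then move the n-1 disks back on top of it.

```python
# Recursive Towers of Hanoi: the function invokes itself on a smaller
# instance (n-1 disks) until the base case n == 0 is reached.
def hanoi(n, source, target, spare, moves):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the rest on top

moves = []
hanoi(3, "A", "C", "B", moves)   # 2**3 - 1 = 7 moves in total
```

The 2^n - 1 move count is why the recursive formulation is the standard way to understand this problem.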
Logical: An algorithm may be viewed as controlled logical deduction. This
notion may be expressed as: Algorithm = logic + control (Kowalski 1979).
In pure logic programming languages the control component is fixed and
algorithms are specified by supplying only the logic component.
Serial or parallel or distributed: Algorithms are usually discussed with the
assumption that computers execute one instruction of an algorithm at a time.
Those computers are sometimes called serial computers. An algorithm
designed for such an environment is called a serial algorithm, as opposed to
parallel algorithms or distributed algorithms.
The greedy method
The greedy method extends the solution with the best possible decision (not all
feasible decisions) at an algorithmic stage based on the current local optimum
and the best decision (not all possible decisions) made in a previous stage. It is
not exhaustive, and does not give an accurate answer to many problems. But
when it works, it will be the fastest method.
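A standard small illustration of the greedy method is making change with the largest coin that fits at each step; the coin set below is an arbitrary choice for which the greedy answer happens to be optimal (with a set like [1, 3, 4] and amount 6 it would not be, illustrating "when it works").

```python
# Greedy change-making: at each step take the best local decision
# (the largest coin that still fits), never reconsidering past choices.
def greedy_change(amount, coins):
    coins = sorted(coins, reverse=True)
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

change = greedy_change(63, [25, 10, 5, 1])   # [25, 25, 10, 1, 1, 1]
```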
When solving a problem using linear programming, specific inequalities
involving the inputs are found and then an attempt is made to maximize (or
minimize) some linear function of the inputs.
Reduction. This technique involves solving a difficult problem by transforming it
into a better known problem for which we have (hopefully) asymptotically
optimal algorithms. This technique is also known as transform and conquer.
By field of study
Every field of science has its own problems and needs efficient algorithms.
Dynamic programming was originally invented for optimization of resource
consumption in industry, but is now used in solving a broad range of problems
in many fields.
By complexity
Algorithms can be classified by the amount of time they need to complete
compared to their input size. There is a wide variety: some problems may have
multiple algorithms of differing complexity, while other problems might have
no algorithms or no known efficient algorithms.
By computing power
Another way to classify algorithms is by computing power. This is typically done
by considering some collection (class) of algorithms. A recursive class of
algorithms is one that includes algorithms for all Turing-computable functions.
For example, the algorithms that run in polynomial time suffice for many
important types of computation but do not exhaust all Turing-computable
functions.
Burgin (2005, p. 24) defines a super-recursive class of algorithms as "a class of
algorithms in which it is possible to compute functions not computable by any
Turing machine".
2.13 Manipulation of symbols
The work of the ancient Greek geometers, Persian mathematician Al-Khwarizmi
(often considered the "father of algebra" and from whose name the terms
"algorism" and "algorithm" are derived), and Western European mathematicians
culminated in Leibniz's notion of the calculus ratiocinator.
Leibniz proposed an algebra of logic, "an algebra that would specify the rules for
manipulating logical concepts in the manner that ordinary algebra specifies the
rules for manipulating numbers" (Davis 2000:1).
2.14 Mathematics during the 1800s up to the mid-1900s
In rapid succession the mathematics of George Boole (1847, 1854), Gottlob
Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a
sequence of symbols manipulated by rules. Peano's The principles of
arithmetic, presented by a new method (1888) was "the first attempt at an
axiomatization of mathematics in a symbolic language" (van Heijenoort:81ff).
Effective calculability: In an effort to solve the Entscheidungsproblem, defined
precisely by Hilbert in 1928, mathematicians first set about to define what was
meant by an "effective method" or "effective calculation" or "effective
calculability" (i.e., a calculation that would succeed).
Emil Post (1936) and Alan Turing (1936-7, 1939): here is a remarkable
coincidence of two men who did not know each other, yet who described a
process of men-as-computers working on computations and arrived at virtually
identical definitions.
An electrical network is an interconnection of electrical elements such as
resistors, inductors, capacitors, transmission lines, voltage sources, current
sources, and switches.
An electrical circuit is a network that has a closed loop, giving a return path for
the current. A network is a connection of two or more components, and may not
necessarily be a circuit.
Electrical networks that consist only of sources (voltage or current), linear
lumped elements (resistors, capacitors, inductors), and linear distributed
elements (transmission lines) can be analyzed by algebraic and transform
methods to determine DC response, AC response, and transient response.
A network that also contains active electronic components is known as an
electronic circuit. Such networks are generally nonlinear and require more
complex design and analysis tools.
2.15 Design methods
To design any electrical circuit, either analog or digital, electrical engineers
need to be able to predict the voltages and currents at all places within the
circuit. Linear circuits, that is, circuits with the same input and output frequency,
can be analyzed by hand using complex number theory. Other circuits can only
be analyzed with specialized software programs or estimation techniques.
Hardware description languages and circuit simulators, such as VHDL and HSPICE, allow engineers to
design circuits without the time, cost and risk of error involved in building circuit
prototypes.
2.16 Electrical laws
Ohm's law: The voltage across a resistor is equal to the product of the
resistance and the current flowing through it (at constant temperature).
Norton's theorem: Any network of voltage and/or current sources and resistors
is electrically equivalent to an ideal current source in parallel with a single
resistor.
Thévenin's theorem: Any network of voltage and/or current sources and
resistors is electrically equivalent to a single voltage source in series with a
single resistor.
Other more complex laws may be needed if the network contains nonlinear or
reactive components.
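Thévenin's and Norton's theorems can be checked numerically on a simple two-resistor voltage divider; the source and component values below are arbitrary choices for illustration.

```python
# Thevenin equivalent of a voltage divider, seen from the R2 terminals.
def thevenin_of_divider(v_source, r1, r2):
    """V_th = V*R2/(R1+R2); R_th = R1 || R2 (source replaced by a short)."""
    v_th = v_source * r2 / (r1 + r2)    # open-circuit voltage at the terminals
    r_th = (r1 * r2) / (r1 + r2)        # resistance with the source shorted
    return v_th, r_th

v_th, r_th = thevenin_of_divider(10.0, 1000.0, 1000.0)   # 5.0 V, 500.0 ohm

# Norton dual of the same network: same parallel resistance,
# with an ideal current source I_n = V_th / R_th.
i_n = v_th / r_th
```

Both equivalents predict identical terminal behavior, which is the point of the theorems: any linear one-port can be collapsed to either form.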
2.17 Database
A database is an integrated collection of logically related records or files which
consolidates records into a common pool of data records that provides data for
many applications. A database is a collection of information that is organized so
that it can easily be accessed, managed, and updated.
The data in a database is organized according to a database model.
The model that is most commonly used today is the relational model. Other
models such as the hierarchical model and the network model use a more
explicit representation of relationships.
Depending on the intended use, there are a number of database architectures
in use. Many databases use a combination of strategies.
Not all databases have or need a database schema (so-called
schema-less databases).
There are also other types of database which cannot be classified as relational
databases.
2.18 Database management systems
A Database Management System (DBMS) is a set of computer programs that
controls the creation, maintenance, and the use of the database of an
organization and its end users. It allows organizations to place control of
organization-wide database development in the hands of Database
Administrators (DBAs) and other specialists.
In large systems, a DBMS allows users and other software to store and retrieve
data in a structured way.
The DBMS engine accepts logical requests from the various other DBMS
subsystems, converts them into their physical equivalents, and actually
accesses the database and data dictionary as they exist on a storage device.
Analytical Databases. These databases store data and information extracted
from selected operational and external databases. They consist of summarized
data and information most needed by an organization's managers and other end
users.
Data Warehouse Databases. These store data from current and previous years
that has been extracted from the various operational databases of an
organization.
Distributed Databases. These are databases of local work groups and
departments at regional offices, branch offices, manufacturing plants and other
work sites. These databases can include segments of both common operational
and common user databases, as well as data generated and used only at a
user’s own site.
End-User Databases. These databases consist of a variety of data files
developed by end-users at their workstations.
In-Memory databases. These are database management systems that primarily
rely on main memory for computer data storage.
Real-time databases. These are database systems designed to handle workloads
whose state is constantly changing.
Object database models
In recent years, the object-oriented paradigm has been applied to database
technology, creating various kinds of new programming models known as
object databases. These databases attempt to bring the database world and the
application programming world closer together, in particular by ensuring that the
database uses the same type system as the application program.
Database storage structures
Relational database tables/indexes are typically stored in memory or on hard
disk in one of many forms, such as ordered/unordered flat
files. Object databases use a range of storage mechanisms. Some use virtual
memory mapped files to make the native language (C++, Java etc.) objects
persistent.
Security
Database security denotes the system, processes, and procedures that protect
a database from unintended activity.
Security is usually enforced through access control, auditing, and encryption.
Access control ensures and restricts who can connect to the database and what
can be done to it.
Auditing logs what action or change has been performed, when and by whom.
Encryption: since security has become a major issue in recent years, many
commercial database vendors provide built-in encryption mechanisms.
2.19 Applications of databases
Databases are used in many applications, spanning virtually the entire range of
computer software. Databases are the preferred method of storage for large
multiuser applications, where coordination between many users is needed.
Software database drivers are available for most database platforms so that
application software can use a common Application Programming Interface to
retrieve the information stored in a database. Two commonly used database
APIs are JDBC and ODBC.
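As a minimal sketch of such a database API, Python's standard DB-API (a rough analogue of JDBC/ODBC) can create, populate and query a throwaway in-memory SQLite database:

```python
import sqlite3

# One common API, regardless of the database engine behind it:
# the application issues SQL through the driver and gets rows back.
conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("ada",))  # parameterized
conn.commit()
rows = conn.execute("SELECT name FROM users").fetchall()
conn.close()
```

The `?` placeholder is the driver-level parameter binding that JDBC and ODBC also provide; it keeps application code independent of how values are escaped.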
2.20 Digital electronics
Digital electronics are electronics systems that use digital signals. Digital
electronics are representations of Boolean algebra (also see truth tables) and
are used in computers, mobile phones, and other consumer products. In a
digital circuit, a signal is represented in discrete states or logic levels. The
advantages of digital techniques stem from the fact that it is easier to get an
electronic device to switch into one of a number of known states than to
accurately reproduce a continuous range of values. Traditionally only two
states, '1' and '0', are used, though digital systems are not limited to this.
2.21 Advantages
One advantage of digital circuits when compared to analog circuits is that
signals represented digitally can be transmitted without degradation due to
noise. For example, a continuous audio signal, transmitted as a sequence of 1s
and 0s, can be reconstructed without error provided the noise picked up in
transmission is not enough to prevent identification of the 1s and 0s. An hour of
music can be stored on a compact disc as about 6 billion binary digits.
In a digital system, a more precise representation of a signal can be obtained by
using more binary digits to represent it.
In an analog system, additional resolution requires fundamental improvements
in the linearity and noise characteristics of each step of the signal chain.
The noise immunity of digital systems permits data to be stored and retrieved
without degradation.
In an analog system, noise from aging and wear degrade the information
stored. In a digital system, as long as the total noise is below a certain level, the
information can be recovered perfectly.
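This regeneration of digital signals can be sketched in a few lines; the message, the noise level and the decision threshold below are invented for illustration:

```python
import random

random.seed(0)

# A short binary message, sent as voltage levels 0.0 and 1.0.
bits = [1, 0, 1, 1, 0, 0, 1, 0]
sent = [float(b) for b in bits]

# Add channel noise that stays below half the logic swing.
noisy = [v + random.uniform(-0.3, 0.3) for v in sent]

# The receiver regenerates the signal by thresholding at the midpoint:
# as long as noise stays under the decision threshold, recovery is perfect.
received = [1 if v > 0.5 else 0 for v in noisy]

print(received == bits)  # True
```

An analog signal subjected to the same noise would carry the corruption forward; the digital version is restored exactly at every regeneration step.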
2.22 Disadvantages
In some cases, digital circuits use more energy than analog circuits to
accomplish the same tasks, thus producing more heat. In portable or battery-powered systems this can limit use of digital systems.
Digital circuits are sometimes more expensive, especially in small quantities.
Most physical quantities, for example light, temperature, sound, electrical
conductivity, and electric and magnetic fields, are analog. Most useful digital
systems must therefore translate from continuous analog signals to discrete
digital signals. This causes quantization errors.
Digital fragility can be reduced by designing a digital system for robustness.
2.23 Analog issues in digital circuits
Digital circuits are made from analog components. The design must assure that
the analog nature of the components doesn't dominate the desired digital
behavior. Digital systems must manage noise and timing margins, parasitic
inductances and capacitances, and filter power connections.
Because digital circuits are made from analog components, digital circuits
calculate more slowly than low-precision analog circuits that use a similar
amount of space and power.
However, the digital circuit will calculate more repeatably, because of its high
noise immunity.
Construction
A digital circuit is often constructed from small electronic circuits called logic
gates.
A logic gate is an arrangement of electrically controlled switches.
The output of a logic gate is an electrical flow or voltage, that can, in turn,
control more logic gates.
Integrated circuits are the least expensive way to make logic gates in large
volumes.
Integrated circuits are usually designed by engineers using electronic design
automation software (See below for more information).
2.24 Structure of digital systems
Engineers use many methods to minimize logic functions, in order to reduce the
circuit's complexity. When the complexity is less, the circuit also has fewer
errors and less electronics, and is therefore less expensive.
Most digital systems divide into "combinatorial systems" and "sequential
systems." A combinatorial system always presents the same output when given
the same inputs.
A sequential system is a combinatorial system with some of the outputs fed
back as inputs. This makes the digital machine perform a "sequence" of
operations.
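The distinction can be sketched in code; the full adder and two-bit counter below are illustrative examples, not taken from the text:

```python
# Combinatorial system: the output depends only on the current inputs.
def full_adder(a, b, carry_in):
    total = a + b + carry_in
    return total % 2, total // 2  # (sum bit, carry out)

# Sequential system: a combinatorial core with state fed back as an input,
# so the machine steps through a sequence of states on each clock.
class Counter:
    def __init__(self):
        self.state = 0
    def clock(self):
        self.state = (self.state + 1) % 4  # 2-bit counter wraps at 4
        return self.state

assert full_adder(1, 1, 1) == (1, 1)   # same inputs, always same output
c = Counter()
outputs = [c.clock() for _ in range(5)]
print(outputs)  # [1, 2, 3, 0, 1] — the fed-back state makes it sequential
```

Calling the adder twice with the same inputs always gives the same answer; calling the counter twice does not, because its output also depends on the stored state.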
Sequential systems divide into two further subcategories. "Synchronous"
sequential systems change state all at once, when a "clock" signal changes
state. "Asynchronous" sequential systems propagate changes whenever inputs
change. Synchronous sequential systems are made of well-characterized
asynchronous circuits such as flip-flops, that change only when the clock
changes, and which have carefully designed timing margins.
Building an asynchronous circuit using faster parts implicitly makes the circuit
"go" faster.
More generally, many digital systems are data flow machines.
In the 1980s, some researchers discovered that almost all synchronous
register-transfer machines could be converted to asynchronous designs by
using first-in-first-out synchronization logic. In this scheme, the digital machine
is characterized as a set of data flows.
Computer architects have applied large amounts of ingenuity to computer
design to reduce the cost and increase the speed and immunity to programming
errors of computers.
"Specialized computers" are usually a conventional computer with a specialpurpose microprogram.
The computer programs used are called "electronic design automation tools" or
just "EDA." EDA tools can automatically produce reduced systems of logic
gates or smaller lookup tables that still produce the desired outputs.
Most practical algorithms for optimizing large logic systems use algebraic
manipulations or binary decision diagrams, and there are promising
experiments with genetic algorithms and annealing optimizations.
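A minimal sketch of what any such optimization must guarantee, using an invented two-input function and an exhaustive truth-table check:

```python
from itertools import product

# Original and algebraically reduced forms of the same logic function:
# f = (a AND b) OR (a AND NOT b)  simplifies to  f = a
def original(a, b):
    return (a and b) or (a and not b)

def reduced(a, b):
    return a

# Exhaustive truth-table comparison. This is feasible only for small input
# counts, which is why large systems need algebraic methods or binary
# decision diagrams instead.
equivalent = all(bool(original(a, b)) == bool(reduced(a, b))
                 for a, b in product((False, True), repeat=2))
print(equivalent)  # True
```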
Production tests are often designed by software tools called "test pattern
generators." These generate test vectors by examining the structure of the logic
and systematically generating tests for particular faults.
2.25 Design for testability
A large logic machine (say, with more than a hundred logical variables) can
have an astronomical number of possible states.
To save time, the smaller sub-machines are isolated by permanently installed
"design for test" circuitry, and are tested independently.
After all the test data bits are in place, the design is reconfigured to be in
"normal mode" and one or more clock pulses are applied, to test for faults (e.g.
stuck-at low or stuck-at high) and capture the test result into flip-flops and/or
latches in the scan shift register(s).
Finally, the result of the test is shifted out to the block boundary and compared
against the predicted "good machine" result.
The goal of a designer is not just to make the simplest circuit, but to keep the
component count down. Sometimes this results in slightly more complicated
designs with respect to the underlying digital logic but nevertheless reduces the
number of components, board size, and even power consumption.
For example, in some logic families, NAND gates are the simplest digital gate to
build.
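The universality of the NAND gate can be checked directly; the sketch below builds NOT, AND and OR from NAND alone and verifies them exhaustively:

```python
# NAND is functionally complete: every other gate can be built from it.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Exhaustively verify the constructions against the expected truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == (1 - a)
print("all NAND constructions verified")
```

This is why a design that uses only NAND gates can still compute any logic function, while keeping the component count to a single gate type.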
The "reliability" of a logic gate describes its mean time between failure (MTBF).
Digital machines often have millions of logic gates.
Digital machines first became useful when the MTBF for a switch got above a
few hundred hours.
Modern transistorized integrated circuit logic gates have MTBFs of
nearly a trillion (1×10¹²) hours, and need them because they have so many
logic gates.
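A back-of-envelope sketch of why such high per-gate MTBFs are needed, assuming independent failures and an invented gate count:

```python
# With independent random failures, a machine of N gates fails roughly
# N times as often as one gate, so system MTBF ~ gate MTBF / N.
gate_mtbf_hours = 1e12   # per-gate MTBF quoted for modern IC logic
gate_count = 1e8         # hypothetical machine with 100 million gates

system_mtbf_hours = gate_mtbf_hours / gate_count
print(system_mtbf_hours / (24 * 365))  # ≈ 1.14 years between failures
```

Even with near-trillion-hour gates, a large enough machine fails on a human timescale, which is why gate reliability had to rise along with gate count.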
Fanout describes how many logic inputs can be controlled by a single logic
output. The minimum practical fanout is about five. Modern electronic logic
using CMOS transistors for switches has fanouts near fifty, and can
sometimes go much higher.
Modern electronic digital logic routinely switches at five gigahertz (5×10⁹ hertz),
and some laboratory systems switch at more than a terahertz (1×10¹² hertz).
2.26 Logic families
Design started with relays. Relay logic was relatively inexpensive and reliable,
but slow.
Occasionally a mechanical failure would occur. Fanouts were typically about
ten, limited by the resistance of the coils and arcing on the contacts from high
voltages.
Later, vacuum tubes were used. These were very fast, but generated heat, and
were unreliable because the filaments would burn out.
In the 1950s, special "computer tubes" were developed with filaments that
omitted volatile elements like silicon. These ran for hundreds of thousands of
hours.
The first semiconductor logic family was Resistor-transistor logic.
Diode-transistor logic improved the fanout up to about seven, and reduced the
power. Some DTL designs used two power-supplies with alternating layers of
NPN and PNP transistors to increase the fanout.
Transistor-transistor logic (TTL) was a great improvement over these.
Emitter-coupled logic (ECL) is very fast but uses a lot of power; it is now
used mostly in radio-frequency circuits.
Modern integrated circuits mostly use variations of CMOS, which is acceptably
fast, very small and uses very little power.
2.27 Embedded system
Picture of the internals of a Netgear ADSL modem/router.
An embedded system is a computer system designed to perform one or a few
dedicated functions, often with real-time computing constraints.
In contrast, a general-purpose computer, such as a personal computer, can do
many different tasks depending on programming.
Some embedded systems are mass-produced, benefiting from economies of
scale.
Physically, embedded systems range from portable devices such as digital
watches and MP3 players, to large stationary installations like traffic lights,
factory controllers, or the systems controlling nuclear power plants.
"Embedded system" is not an exactly defined term, as many systems have
some element of programmability. For example, handheld computers share
some elements with embedded systems — such as the operating systems and
microprocessors which power them.
Automobiles, electric vehicles, and hybrid vehicles are increasingly using
embedded systems to maximize efficiency and reduce pollution. Embedded
systems also provide automotive safety features such as anti-lock braking
(ABS), Electronic Stability Control (ESC/ESP), traction control (TCS) and
automatic four-wheel drive.
In addition to commonly described embedded systems based on small
computers, a new class of miniature wireless devices called motes are quickly
gaining popularity as the field of wireless sensor networking rises.
In 1978 the National Electrical Manufacturers Association released a "standard"
for programmable microcontrollers, including almost any computer-based
controllers, such as single board computers, numerical, and event-based
controllers.
2.28 CPU platforms
Embedded processors can be broken into two broad categories: ordinary
microprocessors (μP) and microcontrollers (μC), which have many more
peripherals on chip, reducing cost and size. Contrasting to the personal
computer and server markets, a fairly large number of basic CPU architectures
are used.
In certain applications, where small size is not a primary concern, the
components used may be compatible with those used in general purpose
computers.
A common configuration for very-high-volume embedded systems is the system
on a chip (SoC) which contains a complete system consisting of multiple
processors, multipliers, caches and interfaces on a single chip. SoCs can be
implemented as an application-specific integrated circuit (ASIC) or using a
field-programmable gate array (FPGA).
2.29 Debugging
Embedded Debugging may be performed at different levels, depending on the
facilities available. From simplest to most sophisticated they can be roughly
grouped into the following areas:
A common problem with multi-core development is the proper synchronization
of software execution. In such a case, the embedded system design may wish
to check the data traffic on the busses between the processor cores, which
requires very low-level debugging, at signal/bus level, with a logic analyzer, for
instance.
Reliability
Embedded systems often reside in machines that are expected to run
continuously for years without errors, and in some cases recover by themselves
if an error occurs.
High vs Low Volume
For high volume systems such as portable music players or mobile phones,
minimizing cost is usually the primary design consideration. Engineers typically
select hardware that is just “good enough” to implement the necessary
functions.
For low-volume or prototype embedded systems, general purpose computers
may be adapted by limiting the programs or by replacing the operating system
with a real-time operating system.
Microkernels and exokernels
A microkernel is a logical step up from a real-time OS. The usual arrangement
is that the operating system kernel allocates memory and switches the CPU to
different threads of execution.
In general, microkernels succeed when the task switching and intertask
communication is fast, and fail when they are slow.
All the software in the system is available to, and extensible by, application
programmers.
Additional software components
In addition to the core operating system, many embedded systems have
additional upper-layer software components. These components consist of
networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, and
also include storage capabilities like FAT and Flash memory management
systems. If the embedded device has audio and video capabilities, then the
appropriate drivers and codecs will be present in the system. In the case of the
monolithic kernels, many of these software layers are included. In the RTOS
category, the availability of the additional software components depends upon
the commercial offering.
APPLICATIONS AND BASIC PRINCIPLES OF ABSORBERS
3.1 Reverberation control
In excessively reverberant spaces the sound echoes around the space
making it noisy and difficult to communicate. For some reason, many
restaurateurs seem to think that to create the right atmosphere, it is
necessary to make speech communication virtually impossible. The issue
here is reverberation.
Reverberation is the decay of sound after a sound source has stopped and it
is a key feature in room acoustics. Reverberation is most audible in large
spaces with hard surfaces, such as cathedrals, where the sound echoes
around long after the sound was emitted from the source. In small spaces,
with plenty of soft, acoustically absorbent materials, such as living rooms,
the absorbent materials quickly absorb the sound energy, and the sound
dies away rapidly. When people talk about rooms being “live” or “dead” this
is usually about the perception of reverberance.
The amount of reverberation in a space depends on the size of the room
and the amount of sound absorption. Adding acoustic absorbers reduces the
reflected sound energy in the room and so reduces the reverberance and
sound level. Consequently, the best place for absorption is the ceiling or
high up on the walls out of the way.
Getting the correct amount of reverberation in a space is vital to the design
of most rooms, whether the aim is to make music sound beautiful, to make
speech intelligible, to reduce noise levels or to make a space a pleasant place
to be in.
Returning to more general reverberation, the primary technique for control is
absorption.
The reverberation time measures the time taken for the sound pressure level
to decay by 60 dB when a sound stops. From the impulse response, the
Schroeder curve must be calculated first by integration, before evaluating
the reverberation time.
Sabine showed that the reverberation time could be calculated from the
room volume and absorption by:

T₆₀ = 55.3V / (cA)

where V is the room volume, c the speed of sound, and A the total
absorption of all room surfaces.
The total absorption of the room can be calculated from the individual
absorption coefficients of the room's surfaces, using the following
expression:

A = Σᵢ₌₁ᴺ Sᵢαᵢ

where Sᵢ is the area of the i-th of the N room surfaces and αᵢ its
absorption coefficient.
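A minimal sketch of Sabine's formula in code, using invented room dimensions and absorption coefficients:

```python
# Sabine's formula: T60 = 55.3 * V / (c * A), with A = sum of S_i * alpha_i.
# Illustrative room (assumed numbers): a 10 m x 8 m x 5 m hall.
c = 343.0           # speed of sound, m/s
V = 10 * 8 * 5      # room volume, m^3

# (surface area in m^2, absorption coefficient) for each room surface
surfaces = [
    (10 * 8, 0.7),                  # absorbent ceiling
    (10 * 8, 0.1),                  # floor
    (2 * (10 * 5 + 8 * 5), 0.05),   # hard walls
]
A = sum(S * alpha for S, alpha in surfaces)

T60 = 55.3 * V / (c * A)
print(round(T60, 2))  # 0.88 seconds for these values
```

Note how the absorbent ceiling dominates the total absorption, which is consistent with the advice above that the ceiling is often the best place for treatment.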
The absorption coefficient of a surface is the ratio of the energy absorbed by the
surface to the energy incident upon it, and takes values between 0 and 1.
The absorption coefficient can be defined for a specific angle of incidence or
random incidence as required.
The reverberation time formulations are statistical models of room acoustic
behavior, and are only applicable where there are a large number of reflections
and the sound field is diffuse. For instance, at low frequencies, the modal
behavior of the room makes the sound field non-diffuse. Consequently, there is
a lower frequency bound on the applicability of statistical absorption
formulations. The lower bound is usually taken to be the Schroeder frequency
given by:
f ≥ 2000√(T₆₀/V)
Although this formal limit has been known for many years, it does not prevent
many practitioners, standards and researchers from defining and using
absorption coefficients below the Schroeder frequency, as it is convenient, even
if not strictly physically correct.
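The Schroeder frequency is straightforward to evaluate; the room values below are illustrative:

```python
import math

# Statistical room acoustics applies above the Schroeder frequency:
# f_s = 2000 * sqrt(T60 / V), with T60 in seconds, V in m^3, f in Hz.
def schroeder_frequency(T60, V):
    return 2000 * math.sqrt(T60 / V)

print(schroeder_frequency(1.0, 400.0))  # 100.0 Hz for a 400 m^3 room, T60 = 1 s
```

For a typical medium-sized room, the statistical formulations therefore hold over most of the audible range, but not at the bass end.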
The reverberant field level is reduced by the addition of absorption and hence
the noise exposure is decreased by typically up to 3-4 dB.
There are situations where the absorbent needs to be a different material. Porous
absorbers are more effective at mid to high frequencies, but this is where the
ear is most sensitive and consequently where noise control is most needed in
the working environment.
If the room’s dimensions are very dissimilar, there is a tendency to get different
reverberation times in different directions as happens with many factories.
Sound will decay faster if it is propagating perpendicular rather than parallel to
the floor, as the perpendicularly propagating sound will reflect more often, and it
is at the reflections that most absorption occurs. Even when the room
dimensions have been carefully chosen, however, the frequency response of
the room will still be uneven and acoustic treatment is needed.
Porous absorbers are not usually used as they would have to be extremely thick
to provide significant bass absorption. Resonant absorbers are preferred for
treating low frequencies. The problem with resonant absorbers is that they
usually only provide a narrow bandwidth of absorption. To cover a wide
bandwidth, a series of absorbers are required, each tuned to a different
frequency range.
3.2 Echo control in auditoria and lecture theatres
In a large auditorium, the reflection from the rear wall is a common source of
echo problems for audience members near the front of the stalls or performers
on the stage. Echoes are very likely if the rear wall forms a concave arc, which
focuses the reflections at the front of the hall. One technique for removing the
echo is to apply absorption to the rear wall.
p(t, r) = A e^(j(ωt − k·r)) = A e^(j(ωt − kx·x − ky·y − kz·z))

where k = {kx, ky, kz} is the wavevector, with kx being the component in the
x direction; k² = |k|² = kx² + ky² + kz²; A is a constant related to the magnitude
of the wave; r = {x, y, z} is the location of the observation point; t is time; and
ω = 2πf = kc is the angular frequency, where f is the frequency and c the
speed of sound.
For a porous absorber, the effective density and bulk modulus can be related to
the characteristic impedance and wavenumber by the following formulations.
The characteristic impedance is given by:

Zc = √(ρeKe)

and the propagation wavenumber by:

k = ω√(ρe/Ke)

where ρe is the effective density and Ke the effective bulk modulus of the
absorbent.
3.3 Impedance, admittance, reflection coefficient and absorption
The effect that a surface has on an acoustic wave can be characterized by four
inter-related acoustic quantities: the impedance, the admittance, the pressure
reflection coefficient and the absorption coefficient. These four acoustic
quantities are fundamental to understanding absorbing materials.
For most absorbents, the speed of sound is much less than that in air.
Consequently, the angle of propagation in the medium is smaller than in air.
In situations where sound pressure levels are very high, the non-linear
behaviour of sound within the absorbent will need to be considered. Silencers
come in three main forms: reactive, absorptive and a combination of reactive
and absorptive. Absorptive silencers are of most interest here because they
remove sound energy using porous absorbents. The attenuation is proportional
to the perimeter-to-area ratio, P/A, where P is the lined perimeter and A the
cross-section of the silencer. The low frequency performance is determined by
the thickness of the baffles, with d = λ/8 being optimal.
Absorber performance is likely to decrease over time unless specialized and
expensive durable absorbers are used.
3.4 Natural noise control
It is common to find grass or tree covered areas around major noise sources
and optimizing the natural features to maximize attenuation offers a
sustainable solution to noise. Where space allows, the use of natural means
(trees, shrubs and ground) rather than artificial barriers has the advantage of
contributing to other issues in sustainability, such as reducing air pollution,
generating access to local green areas, and reversing the long term decline in
wildlife habitats and populations.
3.5 Loudspeaker cabinets
Most conventional loudspeakers are mounted within cabinets to prevent sound
generated by the rear of the driver interfering with that radiating from the front.
The enclosure changes the behavior of the driver, because the air cavity forms
a compliance which alters the mechanical behavior of the driver, and this must
be allowed for in the loudspeaker design. By placing absorption within the
cavity the resonant modes are damped and the sound quality improved.
Applications and basic principles of diffusers
There are locations, such as rear walls of large auditoria, where there is a
general consensus that diffusers are a good treatment to prevent echoes and
better than traditional absorbers.
3.6 Echo control in auditoria
Echoes are caused by late-arriving reflections with a level significantly above
the general reverberance. The echo might also come from a balcony front or
many other paths. Flutter echoes are caused by repeated reflections
from parallel walls and are often heard in lecture theatres, corridors and
meeting rooms. In other cases, the choice between diffusers and absorbers will
rest on whether the energy lost to absorption will detract from or improve other
aspects of the acoustics, such as the reverberance, envelopment and
intelligibility.
3.7 Wavefronts and diffusers reflections
In a specular reflection the wavefront is spatially unaltered from the incident
sound. Consequently, the sound from the source reflects straight back,
unchanged and not dispersed.
When the direct sound and a specular reflection combine, they form a comb
filter. The time delay between the direct sound and the reflection determines
the frequency spacing of the minima and maxima; the relative amplitudes of
the sounds determine the levels of the minima and maxima. Comb filtering is
an effect that should be avoided in critical listening rooms and performance
spaces.
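A minimal sketch of the comb filter formed by a direct sound plus one reflection, with an invented delay and reflection gain:

```python
import cmath

# Direct sound plus one reflection delayed by tau seconds with gain g:
# H(f) = 1 + g * exp(-j * 2*pi*f*tau)  ->  a comb filter.
def comb_magnitude(f, tau=0.001, g=0.5):
    return abs(1 + g * cmath.exp(-2j * cmath.pi * f * tau))

# With tau = 1 ms the minima fall at 500 Hz, 1500 Hz, ...
# and the spacing between notches is 1/tau = 1 kHz.
print(round(comb_magnitude(500), 3))   # 0.5  (minimum: 1 - g)
print(round(comb_magnitude(1000), 3))  # 1.5  (maximum: 1 + g)
```

A longer delay packs the notches closer together in frequency, and a stronger reflection deepens them, which is exactly why both the delay and the relative level of the reflection matter.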
Another way of forming a diffuser is to combine reflection and absorption. By
putting patches of absorbent on a wall, reflections from parts of the surface will
be absent and dispersion is generated. Traditionally acousticians have utilized
patches of absorption on walls to obtain dispersion.
Many diffusers are designed simply assuming that temporal variation will
produce uniform spatial dispersion and an acceptable frequency response;
however, this is not necessarily the case. Both surface and volume diffusion
refer to cases where the sound field, or surface reflections, become more
complex.
The diffuser introduces temporal dispersion of the reflected sound, which leads
to a more complicated frequency response. The regularity of the comb filtering
is minimized, and consequently its audibility is diminished.
If some liveliness is to be left in the room, a combination of absorbers and
diffusers is better than absorption and flat walls, which generate specular
reflections. Consequently, many of the industry’s leading mastering facilities use
this combination of treatments. Live performance studios also usually employ a
mixture of absorbers and diffusers.
A listener would not consider sitting 30cm from a multi-way loudspeaker,
because the listener would be in the near field of the device. At some distance
from the loudspeaker, all individual high, mid and low frequency waves from the
individual drivers will combine to form a coherent wavefront.
When listening to music in a room, the total field is heard which is a combination
of the direct sound and reflections.
3.8 Blurring the focusing from concave surfaces
Concave surfaces can cause focusing in a similar manner to concave mirrors.
This can lead to uneven sound around a room, which is usually undesirable.
The treatments available are absorbers or diffusers. The concentration of sound
at the focus is clear. To overcome this problem, a diffuser was specified by Raf
Orlowski of Arup Acoustic, UK.
3.9 MEASUREMENT OF ABSORBER PROPERTIES
For many practitioners, the only important measurement is that which gives the
random incidence absorption coefficient in a reverberation chamber. While this
may be the absorption coefficient that is needed for performance specifications
in room design, other measurements are needed to understand and model
absorptive materials. The more controlled environment that is often used is the
impedance tube, which allows normal incidence impedance and absorption to
be determined.
The impedance of the sample alters how sound is reflected and, by measuring
the resulting standing wave, it is possible to calculate the normal incidence
absorption coefficient and surface impedance of the sample.
The most common free field method uses a two-microphone approach, but this
is often only applicable to isotropic, homogeneous samples. Attention has
recently turned to using more than two microphones; however, the
measurements appear to be problematic and very noise sensitive.
The highest frequency, fᵤ, that can be measured in a tube is then determined
by:

fᵤ = c / (2d)

where d is the tube diameter or maximum width and c the speed of sound.
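A minimal sketch of this limit in code; the tube diameter is an invented example:

```python
# Upper working frequency of an impedance tube: f_u = c / (2 * d).
# Above this, cross-modes propagate and the plane-wave assumption fails.
def tube_upper_frequency(d, c=343.0):
    return c / (2 * d)

# A 10 cm diameter tube (an illustrative size, not a value from the text):
print(round(tube_upper_frequency(0.10)))  # 1715 Hz
```

This is why low-frequency measurements use wide tubes and high-frequency measurements need narrow ones.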
For normal incidence, the reflection coefficients of the sample obtained using
the in situ method match those obtained using a standing wave method in an
impedance tube. For oblique incidence and at low frequencies (<800 Hz), the
method fails, with the magnitude of the reflection coefficient exceeding one.
This occurs because there is an implicit assumption of plane waves in the
methodology. At low frequencies, the edges of the test sample create other
types of reflected waves, which then render the technique inaccurate.
3.9.1 Porous absorption
Porous absorbers include carpets, acoustic tiles, acoustic foams, curtains,
cushions, cotton and mineral wools such as fiberglass. They are materials
where sound propagation occurs in a network of interconnected pores in such a
way that viscous and thermal effects cause acoustic energy to be dissipated.
They are used widely to treat acoustic problems, in cavity walls and noisy
environments to reduce noise and in rooms to reduce reverberance.
For the porous absorber to create significant absorption, it needs to be placed
somewhere where the particle velocity is high. The particle velocity close to a
room boundary is usually zero, and so the parts of the absorbent close to the
boundary generate insignificant absorption. The amount of energy absorbed by
a porous material varies with angle of incidence.
The performance varies most with angle of incidence for the least dense
mineral wools. With too high a flow resistivity the impedance mismatch between
the air and the absorbent causes the sound to reflect from the front face and the
absorption reduces.
Getting broadband passive absorption across the frequencies requires a
combination of resonant and porous absorption. If the perforated sheet does not
have a very open structure, with a large per cent open area, the mass effect of
the holes will increase the absorption at low frequency but decrease absorption
at high frequency.
3.9.2 Resonant absorbers
By exploiting resonance it is possible to get absorption at low to mid
frequencies. It is difficult to achieve this with porous absorbers, because of the
required thickness of the material. The ideas and concepts of resonant
absorption have been known for many decades. In recent years, some more
specialist devices have been produced, for instance clear absorbers, but these
are still based on the same basic physics. Some devices, such as many
basic Helmholtz absorbers, can be predicted with reasonable accuracy.
The high frequency absorption decreases because the proportion of solid parts
of the perforated sheet increases, and these parts reflect high frequency sound.
The maximum absorption decreases somewhat as the resonant frequency
decreases. If these absorbers were tuned to a lower frequency, this decrease
would be more marked. The reason for this is that the impedance of the porous
material moves further from the characteristic impedance of air at low
frequencies, making the absorbent less efficient. A lower flow resistivity leads to
an impedance less than the characteristic impedance of air, which results in a
reduction in bandwidth and maximum absorption.
The frequency of absorption can be varied by choosing the hole size, open area
and cavity depth. In this case, though, the amount of variation in these design
variables that is achievable is rather limited, because of restrictions imposed by
the diffuser's surface profile. To get absorption across a broader frequency
range, a double layer construction is needed, or additional porous absorbent
needs to be placed on the room surface.
3.9.3 Helmholtz resonator
When a porous absorbent is placed in the cavity, sound propagation is
generally normal to the surface and so the need for subdividing is less critical,
except at very low frequencies.
The hole spacing should be large compared to the hole diameter. The acoustic
mass per unit area is:

m = ρD²t′ / (πa²)

where t′ is the thickness of the perforated sheet with end corrections applied,
ρ the density of air, D the hole spacing and a the hole radius. Under these
assumptions, the resonant frequency is:

f = (c / 2π)√(S / (t′V))

where S = πa² is the area of the holes, and V = D²d the volume of each unit
cell. This is the same formulation as derived by other methods, such as lumped
parameter equivalent electrical circuits.
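A minimal sketch of this resonant frequency in code; the dimensions and the 1.7a end-correction factor below are assumptions for illustration, not values from the text:

```python
import math

# Helmholtz resonant frequency: f = (c / (2*pi)) * sqrt(S / (t_eff * V)),
# with S = pi*a^2 the hole area and V = D^2 * d the cavity volume per cell.
c = 343.0    # speed of sound, m/s
a = 0.005    # hole radius, m        (illustrative)
D = 0.05     # hole spacing, m      (illustrative)
d = 0.05     # cavity depth, m      (illustrative)
t = 0.003    # sheet thickness, m   (illustrative)

# Approximate end correction: effective neck length grows by about 1.7*a.
t_eff = t + 1.7 * a

S = math.pi * a ** 2
V = D ** 2 * d
f = (c / (2 * math.pi)) * math.sqrt(S / (t_eff * V))
print(round(f))  # resonant frequency in Hz, here a few hundred hertz
```

Shrinking the holes or deepening the cavity lowers the resonance, which is the tuning freedom described above.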
At low frequencies, without partitions within the cavity, this may become less
true as lateral propagation modes become more significant. Any lateral
propagation would be expected to decrease the absorption achieved for most
angles of incidence.
Active absorbers and diffusers
Low frequencies have long wavelengths, which means the absorbers and
diffusers have to be large to perturb or absorb the wavefronts. In recent years
there has been growing interest in the use of active control technologies to
absorb or diffuse low frequency sound.
Active absorption has much in common with active noise control, indeed in
many ways it is the same concept just re-organized behind a slightly different
philosophy.
3.9.4 Active Absorption in three dimensions
A sound field in a room can be expressed as a modal decomposition. This
implies that the sound field may be considered as the sum of a large number of
second-order functions; these functions can be implemented as infinite impulse
response (IIR) biquad filters. The coefficients of these filters are determined by
fitting responses to measurements in the physical sound field. The sensitivity of
this control regime to changes in room conditions is unknown. Presumably it
would be necessary to regularly recalibrate the system for the damping to
remain efficient.
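A minimal sketch of one such biquad section in direct form I; the filter coefficients are invented for illustration:

```python
# A single biquad (second-order IIR) section in direct form I:
# y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
def biquad(x, b0, b1, b2, a1, a2):
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn        # shift the input delay line
        y2, y1 = y1, yn        # shift the output delay line
        y.append(yn)
    return y

# Impulse response of a lightly damped resonance (illustrative coefficients;
# poles at radius sqrt(0.9), so the response rings and slowly decays):
impulse = [1.0] + [0.0] * 7
out = biquad(impulse, 1.0, 0.0, 0.0, -1.8, 0.9)
print([round(v, 3) for v in out])
```

A bank of such sections, one per fitted room resonance, is the kind of decomposition the control scheme above relies on.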
3.9.5 Active Diffusers
Active devices might offer significant advantages over passive devices. Most
importantly, they allow diffusion over a wider bandwidth by extending the
response of the diffusing surfaces to lower frequencies. This is useful because
the space available for diffusers is usually limited. To achieve good diffusion, a
passive diffuser must be significantly deep compared to the wavelength of
sound, and at low frequencies building space costs generally limit the depth of
treatments and so performance is compromised.
Active diffusers also enable surfaces to be designed which are not physically
realizable using passive technologies, for example surfaces where the well
depth is frequency dependent. Another limitation in diffuser design comes from
the visual requirements of interior designers. A good diffuser must be a unified
part of the architectural design, rather than an obvious add-on. Active devices
allow variability. Many rooms have to be multi-purpose, and active elements
have the potential to enable the acoustics of a space to be easily changed.
The high frequency diffusion is provided by the passive elements in the diffuser,
and the active elements deal with the low frequencies. There is a
complementary relationship between the passive and active elements, as there
was with hybrid active absorbers.
3.9.6 Controllers
The structures and control regimes described for active absorbers can be
adapted to make active diffusers. The control loudspeaker is instrumented to
measure pressure and velocity, and from this the surface impedance can be obtained
and manipulated to a desired value using the techniques described for active
absorption. The target surface impedance required for active diffusers is more
complex than for active absorbers and more difficult to achieve. Furthermore, in
comparison to active absorbers, active diffusers have a smaller region of stable
control and are more sensitive to control impedance errors. This explains why
active diffusers are so much more difficult to produce than active absorbers.
4. ACOUSTICS: AN INTRODUCTION
Every layman knows that a theatre or concert hall may have good or poor
“acoustics”; the same holds, for example, for some churches and for working
rooms such as factories or open-plan offices. However, the calculation of just
one single normal mode of a realistic room with all its details turns out to be
quite difficult.
For a room with a volume as small as 400 m³ and a reverberation time of 1 s,
statistical methods cover most of the audible frequency range, namely the
frequencies above about 100 Hz.
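The 100 Hz figure quoted for a 400 m³ room with a 1 s reverberation time matches the well-known Schroeder frequency f_s ≈ 2000·√(T/V); a minimal check, assuming that formula is the one intended here:

```python
import math

def schroeder_frequency(T, V):
    """Schroeder frequency f_s = 2000 * sqrt(T/V) in Hz: above it the room
    response is governed by many overlapping modes and statistical methods
    apply (T = reverberation time in s, V = room volume in m^3)."""
    return 2000.0 * math.sqrt(T / V)

f_s = schroeder_frequency(T=1.0, V=400.0)   # the example values from the text
# f_s ≈ 100 Hz, matching the figure quoted above
```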
4.1 Geometric room acoustics
In analogy to geometric optics, which is limited to the range of very high
frequencies where diffraction and interference can be neglected, it is
convenient to think of sound rays as the carriers of sound energy instead of
extended sound waves. From this concept it can be immediately concluded that
the total energy of a sound ray remains constant during its propagation,
provided we neglect attenuation in the air; however, its energy density is
inversely proportional to the square of the distance from its origin, as in any
spherical wave.
The most important law of geometric room acoustics is the law of “specular”
reflection which reduces to the simple rule
Angle of incidence = angle of reflection
The construction of rays is a simple way to examine the contributions of wall
and ceiling portions to the supply of an audience with sound. However, if many
multiple reflections are to be taken into account, this picture will become too
confusing and lose its clarity. One original source and all its images produce
the same signal. The energy loss of a ray due to imperfect wall reflections is
approximately accounted for by attributing a reduced power output to the image
source.
The construction of image sources is particularly simple for a rectangular room.
Because of the symmetry of this enclosure many of its image sources coincide.
For calculating the resulting sound signal in a receiving point P we ought to
add the sound pressures of all contributions. If the room is excited by a sine
tone the total intensity in P is obtained as

I = (1/2Z₀) |Σₙ Pₙ|² = (1/2Z₀) Σₙ Σₘ Pₙ Pₘ*

with Z₀ denoting the characteristic impedance of air.
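A minimal numerical sketch of the image-source picture for a rectangular room: only first-order images are built here, the source strength is taken as unity, and the intensity is formed from |Σ Pₙ|² as in the intensity formula. All dimensions and positions are made up for illustration.

```python
import cmath, math

def first_order_images(src, room):
    """First-order image sources of src = (x, y, z) in a rectangular room
    with dimensions room = (Lx, Ly, Lz): one mirror image per wall."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]   # mirror across the wall plane
            images.append(tuple(img))
    return images

def pressure_sum(sources, receiver, k):
    """Complex pressure at the receiver: one spherical wave per source,
    amplitude 1/r and phase k*r (unit source strength assumed)."""
    total = 0j
    for s in sources:
        r = math.dist(s, receiver)
        total += cmath.exp(-1j * k * r) / r
    return total

# Made-up room, source and receiver positions; a 100 Hz sine tone.
room = (5.0, 4.0, 3.0)
src, rec = (1.0, 1.0, 1.0), (3.0, 2.0, 1.5)
sources = [src] + first_order_images(src, room)      # direct sound + 6 images
p = pressure_sum(sources, rec, k=2.0 * math.pi * 100.0 / 343.0)
intensity = abs(p) ** 2    # proportional to the squared pressure sum
```

A full simulation would continue the mirroring process to higher orders and attribute a reduced strength to each image, as described in the text.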
With the aid of geometric acoustics we can examine not only the spatial
distribution of stationary sound energy within a room but also the temporal
succession in which the reflected sounds arrive at a given point in the room.
The signal received by the listener can be represented as

s′(t) = Σₙ aₙ s(t − tₙ)
Generally, our hearing does not perceive reflections with delays of less than
about 50 ms as separated acoustical events. Instead such reflections enhance
the apparent loudness of the direct sound, therefore they are often referred to
as “useful reflections”.
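The 50 ms criterion for “useful reflections” can be stated in a few lines; the delay values below are hypothetical.

```python
def classify_reflections(delays_ms, threshold_ms=50.0):
    """Split reflection delays (relative to the direct sound, in ms) into
    'useful' reflections, which merely enhance the apparent loudness, and
    later arrivals that may be perceived as separate events."""
    useful = [d for d in delays_ms if d < threshold_ms]
    late = [d for d in delays_ms if d >= threshold_ms]
    return useful, late

# Hypothetical echogram delays in milliseconds:
useful, late = classify_reflections([5.0, 12.0, 31.0, 48.0, 65.0, 120.0])
# useful -> [5.0, 12.0, 31.0, 48.0]; late -> [65.0, 120.0]
```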
Another feature of a reflection is the direction from which it arrives at the
listener’s position. Although most of the sound energy the listener receives
arrives from directions other than that of the direct sound, the source is
localized in the direction of the direct sound. This interesting effect is due
to the “law of the first wavefront”.
4.2 Diffuse sound field
In a closed room the sound waves are repeatedly reflected from its boundary,
and with each reflection they change their direction.
The energy density associated with the intensity arriving from a solid angle
element dΩ is dw = I′ dΩ/c. The total energy density is obtained by integrating
this quantity over all directions, that is, over the full solid angle 4π:

w = (1/c) ∬_{4π} I′(φ, Θ) dΩ

The independence of the “differential intensity” I′ of the angles φ and Θ, as
was assumed here, means that all directions participate equally in the sound
propagation.
4.3 Energy density and reverberation
Now our main concern is the energy density which will be established when a
certain acoustical power is supplied to a room. It is intuitively clear that the
energy density in a room will be the higher, the more acoustical power is
produced by a sound source operated in it, and the less energy per second is
lost by dissipative processes, in particular, the lower the absorption
coefficient of the boundary. This leads us immediately to the energy balance:
Temporal change of the energy content = energy supplied by the source −
absorbed energy.
The energy supplied to the room per second is the power output P of the
source. Hence the mathematical expression for the energy balance reads
V · dw/dt = P(t) − (c/4) · A · w
This is a differential equation of first order for the energy density. If the
source power P is assumed constant, then the same holds for the energy density
in the steady state, where dw/dt = 0. This leads to

w = 4P / (cA)
This formula agrees with what we have expected.
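A short numerical sketch of the steady-state result w = 4P/(cA). The source power, absorption area and air data below are illustrative, and the conversion to a sound pressure level via w = p²/(ρc²) is a standard diffuse-field relation rather than a formula from the text.

```python
import math

def steady_state_energy_density(P, A, c=343.0):
    """Diffuse-field steady state w = 4P/(cA): source power P in watts,
    equivalent absorption area A in m^2, speed of sound c in m/s."""
    return 4.0 * P / (c * A)

# Illustrative numbers: a 1 W source, 50 m^2 of equivalent absorption.
w = steady_state_energy_density(P=1.0, A=50.0)

# Standard diffuse-field conversion to a sound pressure level,
# using w = p^2 / (rho * c^2) and p_ref = 20 uPa (not from the text).
rho, c = 1.2, 343.0
p_rms = math.sqrt(w * rho * c * c)
SPL = 20.0 * math.log10(p_rms / 2e-5)   # roughly 109 dB for these numbers
```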
4.4 Electroacoustic transducers
Acoustical signals can be analysed and processed in nearly every desired way;
furthermore, we can transmit them over long distances or store them in several
ways and retrieve them at any time.
An electroacoustic transducer is a system with two ports, one of them
being the input terminals to which the signal to be converted is applied, while
the second one is the output terminal which yields the result of the conversion.
In the ideal case for any input signal 𝑠1 (t) the corresponding output signal 𝑠2 (t)
should be given by the simple relationship
s₂(t) = K · s₁(t − Δt)

with constant K. The time interval Δt in the argument of s₁ indicates that we
allow for some delay of the output signal, provided it is sufficiently small.
At best we can expect that a real transducer meets this condition within a limited
frequency range. The majority of electroacoustic transducers employ electrical
or magnetic fields to link mechanical and electrical quantities.
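The defining relation s₂(t) = K·s₁(t − Δt) is easy to state on sampled signals; this toy version simply applies a gain K and an integer sample delay.

```python
def ideal_transducer(s1, K, delay_samples):
    """Ideal conversion s2(t) = K * s1(t - Δt) on a sampled signal:
    a pure gain K plus an integer sample delay."""
    return [0.0] * delay_samples + [K * x for x in s1]

s1 = [0.0, 1.0, 0.5, -0.25]
s2 = ideal_transducer(s1, K=2.0, delay_samples=2)
# s2 = [0.0, 0.0, 0.0, 2.0, 1.0, -0.5]
```

A real transducer only approximates this behaviour within a limited frequency range, as the text notes.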
4.5 Piezoelectric transducer
Many solid materials are electrically polarized when they are deformed, which
manifests itself as an electrical charge on their surfaces. This property is called
piezoelectricity.
The piezoelectric transducer can be represented by an equivalent electrical
circuit, which can be thought of as the entrails, so to speak, of the box.
In the audio range the piezoelectric transducer has to compete with several
other kinds of transducers.
Most piezoelectric transducers for practical applications, however, consist of
ceramics made of certain ferroelectric materials such as barium titanate, lead
zirconate (PZT) or lead metaniobate.
4.6 Electrostatic transducer
If we leave out the piezoelectric material from the parallel-plate capacitor we arrive
at the electrostatic transducer, also known as dielectric or capacitive transducer.
The reverse effect exists too: varying the distance of the electrodes of a charged
capacitor changes its capacitance and hence its voltage, or its charge, or both of
them, depending on the electrical load of the electrodes. Any change of
electrical charge is linked to a charging or discharging current.
The electrodes of a parallel-plate capacitor charged to a voltage U attract each
other with the force

F_tot = C₀U² / (2d)

Hence, the total force F_tot is composed of a constant part, which must be
balanced by a suitable suspension of the electrodes, and an alternating force.
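Working from F_tot = C₀U²/(2d): with a bias voltage U₀ plus a small signal u(t), the squared voltage splits the force into a constant part and an alternating part proportional to U₀·u(t). The numerical values below are invented for illustration.

```python
def electrostatic_force(U, C0, d):
    """Attractive force between the plates: F_tot = C0 * U^2 / (2 d)."""
    return C0 * U * U / (2.0 * d)

# Illustrative values (not from the text): capacitance C0, plate gap d,
# bias voltage U0 and a small signal amplitude u_hat.
C0, d, U0, u_hat = 100e-12, 20e-6, 200.0, 1.0
F_bias = electrostatic_force(U0, C0, d)   # constant part, ~0.1 N here
F_ac = C0 * U0 * u_hat / d                # linearized alternating amplitude
ratio = F_ac / F_bias                     # = 2*u_hat/U0: small-signal regime
```

The constant part F_bias is what the suspension of the electrodes has to balance; the alternating part is the useful driving force.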
4.7 Magnetic transducer
The magnetic transducer consists basically of a magnet and a movable part –
the armature – made of magnetically soft iron.
As with the electrostatic transducer the force is proportional to the square of the
electrical quantity – here of the magnetic field strength.
The magnetostrictive transducer had its particular merits as a robust sound
projector in underwater sound. Likewise, it played an important role in the
generation of intense ultrasound with frequencies up to about 50 kHz.
If we consider a dynamic loudspeaker with a transducer constant M = 1 N/A,
and assume that the elastic suspension of the diaphragm and the moving coil
has a compliance of n = 5·10⁻⁴ m/N and that the inductance L of the moving
coil is 1 mH, these data yield:

K_dyn ≈ 0.707
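The quoted value 0.707 is consistent with a coupling factor of the form K = M·√(n/L); that formula is a reconstruction (the text does not spell it out), but it reproduces the number exactly with L = 1 mH:

```python
import math

# Reconstructed coupling factor K = M * sqrt(n / L). This form is an
# assumption, chosen because it reproduces the quoted 0.707 with the
# given data; it is not stated explicitly in the text.
M = 1.0      # transducer constant, N/A
n = 5e-4     # compliance of the suspension, m/N
L = 1e-3     # moving-coil inductance, H (1 mH)
K_dyn = M * math.sqrt(n / L)    # sqrt(0.5) ≈ 0.707
```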
4.8 Microphones
Microphones in the widest sense are electroacoustical sound receivers, that is,
devices which convert acoustical or mechanical vibrations into electrical signals.
They are used to record sound signals such as speech or music. Furthermore,
they serve to measure sound field quantities, in particular, the sound pressure.
For the performance of a microphone several characteristic features are
important. One of them is its sensitivity; a second is the frequency range
within which the sensitivity can, at best, be expected to be constant. The
directivity of a microphone is a third feature which is of significance in
practical applications.
The membrane deflection increases with the angular frequency ω = ck and
shows a characteristic dependence on the angle under which the sound wave
arrives.
The frequency dependence of the pressure sensitivity vanishes, by the way, if
the openings in the rear wall are very long and narrow. Then the motion of the
diaphragm is controlled by the flow resistance r which the air experiences when
it is forced through the openings. The mechanical construction of some
microphone capsules, and hence their mechanical input impedance, is
considerably more complicated.
4.9 Condenser microphone
Probably the simplest way to convert acoustic signals into electrical ones is by
arranging a metal plate inside the capsule, parallel to the diaphragm. Together
with the membrane it forms a parallel-plate capacitor, and the whole device acts
as an electrostatic transducer.
Condenser microphones are built in many different forms, as pressure or
pressure-gradient sensors. The fabrication of these microphones makes great
demands on mechanical precision. Their sensitivity is of the order of 10
mV/Pa.
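Taking the quoted order of magnitude at face value, the open-circuit output for a given sound pressure level follows directly; 94 dB SPL corresponds to about 1 Pa, so a 10 mV/Pa capsule delivers roughly 10 mV.

```python
def mic_output_voltage(spl_db, sensitivity_v_per_pa):
    """Open-circuit output for a tone at the given sound pressure level,
    assuming p_ref = 20 uPa and a frequency-independent sensitivity."""
    p = 2e-5 * 10.0 ** (spl_db / 20.0)   # rms pressure in Pa
    return sensitivity_v_per_pa * p

# 94 dB SPL is about 1 Pa, so a 10 mV/Pa capsule gives roughly 10 mV:
U = mic_output_voltage(94.0, 10e-3)
```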
4.10 Piezoelectric microphones
In a piezoelectric microphone the diaphragm is in mechanical contact with a
body of piezoelectric material which performs the transduction of its deflection
into an electrical signal. Nevertheless, the cable connecting the microphone to
an amplifier must not be too long, because the capacitance of the transducer
would be increased by the cable capacitance, resulting in a lower sensitivity.
The sensitivity of a piezoelectric
microphone is comparable to that of a condenser microphone.
4.11 Dynamic microphones
The dynamic transducer principle is another useful basis of microphone
construction. One widely used version of a dynamic microphone is the
moving-coil microphone. Essentially, it consists of a small coil within the
cylindrical air gap of a permanent magnet which produces a radial magnetic
field in it. The
output voltage of this microphone is, as with all dynamic transducers,
proportional to the velocity of the moving coil. Therefore, if the pressure
sensitivity is to be frequency independent, the motion of the diaphragm should
be resistance-controlled, that is, with a mechanical input impedance just
consisting of a resistance r. In this case we have

U(I=0) = M·v = (MS/r) · p
In general, the inductance of the moving coil can be neglected against its
electrical resistance, which is typically 200 Ω. The microphone can be
connected to a cable without dramatic loss of sensitivity.
4.12 Carbon Microphone
For a long time, the carbon microphone was the standard microphone in
telephone communication techniques. This is because of its high sensitivity and
its mechanical robustness, properties which were deemed more important in its
practical operation than high fidelity. However, it has been replaced with other
microphone types.
4.13 Hydrophones
Hydrophones are microphones for detecting sound in a liquid medium. They have
a wide application especially in underwater techniques and are important
components of most sonar systems. In ultrasonics they are mainly used for
measuring purposes as, for instance, for the examination and the surveillance
of the sound field produced by ultrasonic sound projectors. Most hydrophones
are based on the piezoelectric transducer principle. A particularly high
frequency bandwidth is achieved with the needle hydrophone; it consists of a
metal needle the tip of which is covered by a thin film of polyvinylidene
fluoride (PVDF), which
of course must be polarized before use.
4.14 Loudspeakers and other electroacoustic sound sources
Loudspeakers are the most widely used electroacoustic devices. We find them
in every radio or television set, in every stereo system, and even our motor
car is equipped with several loudspeakers. Another important aspect is the
output power attainable with a loudspeaker, of course under the condition that
non-linear distortions remain within tolerable limits. The construction of the
transducer
determines the electrical properties of the sound transmitter but is also
responsible for the occurrence of linear and non-linear distortions.
In particular, the sound power radiated from a circular piston in an infinite
baffle grows, in the low-frequency limit (ka « 1, with a = piston radius), with
the square of the frequency for a given velocity amplitude of the piston.
The relationship between membrane motion and sound pressure which is so
unfavourable for loudspeaker construction is a characteristic feature of sound
radiation into three-dimensional space.
4.14.1 Dynamic loudspeaker
The loudspeaker which is almost exclusively used today is based on the
dynamic transducer principle. As in the moving-coil microphone the conductor
interacting with the magnetic field is a cylindrical coil of wire arranged
concentrically in the annular gap of a strong permanent magnet.
The condition that the loudspeaker is driven with a frequency-independent
electrical current is not very stringent, because for frequencies below about
1000 Hz the inductance of the voice coil can be neglected in comparison with
its resistance, which is typically 4 or 8 ohms. Therefore the loudspeaker can
be fed without problems from a constant-voltage amplifier. Generally, the
diaphragm of
loudspeakers is composed of paper, sometimes of plastics or aluminum. At
medium and high frequencies it no longer vibrates uniformly, since flexural
resonances may be excited on it which impair the frequency response of the
loudspeaker.
Of course, it is much easier to optimize a loudspeaker for a restricted frequency
range. Therefore the whole audio range is often subdivided into several
frequency bands, for instance into a low-frequency, a mid-frequency and a
high-frequency band, which are served by separate loudspeakers fed from a
suitable cross-over network. The transient behavior of the dynamic loudspeaker
is mainly determined by its mechanical resonance.
4.14.2 Electrostatic or condenser loudspeaker
The diaphragm of a condenser loudspeaker consists of a thin and light foil of
metal or a metalized plastic. The mechanical impedance of the membrane is so
low that the mechanical properties of the electrostatic loudspeaker are mainly
determined by its radiation.
At low frequencies the radiation impedance of the loudspeaker consists mainly
of a reactive component, jωm_r, with m_r denoting the “radiation mass”. Hence
the velocity of the membrane is

v₀ = N · U / (jωm_r)
The electrostatic loudspeaker is highly appreciated by some connoisseurs.
Nevertheless, it represents no real alternative to the much more robust and less
costly dynamic loudspeaker.
4.14.3 Magnetic loudspeaker
Today, the magnetic loudspeaker is of historical interest only, because it is
inferior to the dynamic loudspeaker when it comes to distortions. Nevertheless,
in the beginning of the broadcasting era, even into the thirties, it was in widespread
use because of its simple construction and its high efficiency. Its essential
components are a permanent magnet and a small bar of iron, the armature,
which is kept in front of the magnetic poles by a flexible spring. Depending on
the polarity of its current, the armature is drawn either towards the left or the
right side. This motion is conveyed to a conical paper membrane by a little pin.
4.14.4 The closed loudspeaker cabinet
The most common way of avoiding an acoustical short-circuit is by mounting the
loudspeaker system into one wall of an otherwise closed enclosure. However, it
should be noted that the loudspeaker interacts with the air enclosed in the box.
Thus the air is alternately compressed and rarefied by the motion of the
diaphragm, hence at low frequencies it reacts like a spring increasing the
stiffness of the system.
4.14.5 The bass-reflex cabinet
In a way it is unsatisfactory to waste the sound energy radiated from the rear
side of a loudspeaker diaphragm. The bass-reflex cabinet is a method which
uses that sound portion to extend the range of efficient sound radiation towards
lower frequencies. It differs from the closed loudspeaker cabinet in having a
vent in the front wall of the cabinet, including a short tube inside. This port
emits sound when the air confined in it is set into oscillation.
At very low frequencies the air confined in the vent moves in the opposite
direction to the motion of the diaphragm: if the membrane is slowly pressed
inward, the displaced air simply escapes through the vent.
4.14.6 Horn loudspeakers
In real horn loudspeakers the diaphragm is driven by specially designed
dynamical systems. The radiation resistance which loads the diaphragm of the
driving system can be further increased by arranging a “compression
chamber” between the diaphragm and the throat of the horn.
The earlier statements on the function of horn loudspeakers apply to infinitely
long horns. For exact calculations of horn shapes the curvature of the
wavefronts must be taken into account. Horn loudspeakers are very commonly
applied in sound reinforcement systems.
4.14.7 Loudspeaker directivity
Most loudspeakers project the sound they produce preferentially in a certain
direction, except at very low frequencies. This directivity is often desirable,
especially in sound reinforcement applications when large audiences are to be
supplied with sound as, for instance, in sports arenas or in large halls. One
benefit of loudspeaker directivity is that the power output of amplifiers and
loudspeakers can be kept smaller when the sound energy is directed towards
the place where it is needed (audience area).
Many horns have rectangular cross sections with different expansions in the two
directions, for instance, exponential in the vertical direction but with flat side walls.
Another common design is the multicellular horn consisting of several individual
horns created by subdividing the cross section with partitions. Furthermore,
linear loudspeaker arrays, sometimes also called line or column systems, are
widely used.
It is useful to characterize a particular direction not by the elevation angle α
but by its complement α′ = 90° − α, that is, by the angle between that direction
and the line axis:

sin α = cos α′ = sin θ cos φ
Thus, for instance, it is no longer necessary to mount loudspeaker arrays
vertically; instead, they can be placed underneath the ceiling, giving them the
desired directivity by proper delays. In this way they are less visible and the
stage area more pleasant.
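The delay-steering idea can be sketched with the standard array factor of an N-element line; that textbook formula is assumed here, and all element counts, spacings and frequencies are illustrative.

```python
import cmath, math

def array_factor(alpha_prime_deg, N, d, f, steer_deg=90.0, c=343.0):
    """Normalized far-field magnitude of an N-element line array with
    spacing d (m), driven at frequency f (Hz) and steered by pure delays
    towards steer_deg. Angles are the complement α′ used in the text,
    measured from the line axis. Textbook array factor, assumed here."""
    k = 2.0 * math.pi * f / c
    phase = k * d * (math.cos(math.radians(alpha_prime_deg))
                     - math.cos(math.radians(steer_deg)))
    total = sum(cmath.exp(1j * n * phase) for n in range(N))
    return abs(total) / N

# Illustrative numbers: unsteered 8-element array, 0.2 m spacing, 500 Hz.
peak = array_factor(90.0, N=8, d=0.2, f=500.0)   # broadside: coherent sum
side = array_factor(40.0, N=8, d=0.2, f=500.0)   # reduced off-axis response
```

Changing steer_deg models the delay steering mentioned above: the main lobe can be tilted towards the audience without mounting the array vertically.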
5. GENERAL ANALYSIS
Electronic engineering is of extreme importance in the world we live in today,
pushing new technologies forward.
Electrical engineering is present in most of the areas that drive development,
especially now that globalization affects everyone almost involuntarily,
because we cannot isolate ourselves from the other parts of the globe.
It is important to note how new technologies are expanding to the four corners
of the world precisely because of the advances seen in electronic engineering,
which wins people over with innovations thanks to the way these technologies
make life easier.
In my country (Cape Verde) the benefit of this technological evolution is
evident: fifteen years ago electrification, telephone, internet, radio and
television existed only in urban centers. Today, these goods and services are
present in almost all rural areas thanks to the introduction of new
technologies, which shows the value of what electronic engineering is
developing, facilitating and helping to improve people’s lives.
Lately, the cooperation of developed countries with poor countries has been
based more on creating the conditions for the expansion of new technologies to
urban and rural populations, bringing them closer to the rest of the world.
Electronic engineering is present in all related areas, not only in the design
of universal systems but also of systems adapted to particular climatic,
geographical and topographical conditions.
Adriano Pedro Rodrigues
2013/01/22
6.CONCLUSION
Engineering, in its various aspects, bases its principles on innovation and is
considered one of the pillars of knowledge, drawing on the past while shaping
the present and the future. Hence the authors of the various chapters that
served as subjects of study are concerned with raising awareness of the history
and origin of things, allowing us to know the aspects and stages of evolving
technologies, each serving to boost the other.
Engineering is something that does not stand still; new technologies continue
to challenge engineers to innovate in various areas, correcting errors that may
have occurred in the past and projecting the best for the future.
In this context, electrical/electronic engineering is one of the most important
levels of new technology, supporting it with regard to instrumentation,
control, and production efficiency.
In conclusion, electronic engineering is present in most areas driving
development.
Because we currently live in constant contact with new technologies, it is
important to acquire this knowledge as part of our general culture, making it
easier to handle the equipment that takes part in our daily lives.
Adriano Pedro Rodrigues
2013/01/25
7. BIBLIOGRAPHY
- Aldred, Julia. 1st edition, Global Média, 2009, and the bibliography referred to in this edition.
- Cox, Trevor J.; D’Antonio, Peter. Acoustic Absorbers and Diffusers. 2009, and the bibliography referred to in this edition.
- Kuttruff, Heinrich. Acoustics: An Introduction. Taylor & Francis, 2007, and the bibliography referred to in this edition.
- Nicolaidis, Ryan; Walker, Bruce; Neinberg, Gil. 2012.