System-on-a-chip

A portion of the proceeds of each book will be donated to the Wikimedia Foundation to
support their mission: to empower and engage people around the world to collect and
develop educational content under a free license or in the public domain, and to disseminate it effectively and globally.
The content within this book was generated collaboratively by volunteers. Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information. Some information in this book may be misleading or simply wrong. The publisher does not guarantee the validity of the information found here. If you need specific advice (for example, medical, legal, financial, or risk management) please seek a professional who is licensed or knowledgeable in that area.
Sources, licenses and contributors of the articles and images are listed in the section entitled "References". Parts of the book may be licensed under the GNU Free Documentation License. A copy of this license is included in the section entitled "GNU Free Documentation License".
All used third-party trademarks belong to their respective owners.
Contents
System-on-a-chip
Ambient power
Analog signal
Analog transmission
Passive analogue filter development
Antimetric (electrical networks)
Bartlett's bisection theorem
Beat frequency oscillator
Bessel filter
Brassboard
Breadboard
Bridged T delay equaliser
Butterworth filter
Chamfer
Channel length modulation
Chebyshev filter
Circuit design
Circuit diagram
Circuit extraction
Clock feedthrough
CMOS
Colpitts oscillator
Composite epoxy material
Composite image filter
Constraint graph (layout)
Coopmans Approximation
Crystal radio
Current mirror
Delay-locked loop
Design closure
Digital Clock Manager
Digital electronics
Distributed element model
DO-160
DO-254
Driven right leg circuit
Dual impedance
Electromagnetic field solver
Electronic circuit design
Electronic design automation
Electronic game
Elliptic filter
Elmore delay
Equivalent impedance transforms
Evolvable hardware
Excitation table
Fault coverage
Fault model
FO4
Frequency compensation
Frequency-locked loop
Haitz's Law
Hardware obfuscation
Harmonic balance
Honeywell
Image filter end terminations
Image impedance
Impedance matching
Integrated circuit design
Integrated circuit layout
Integrated passive devices
Isolation amplifier
Jump wire
Lattice phase equaliser
LDMOS
List of 4000 series integrated circuits
List of 7400 series integrated circuits
List of semiconductor IP core vendors
List of system-on-a-chip suppliers
Logic design
Logic optimization
Logic synthesis
Mechanical filter
Mesh analysis
Microphonics
Miller effect
Miller theorem
Mixed-signal integrated circuit
mm'-type filter
Moore's law
Multi-threshold CMOS
Negative feedback
Network analysis (electrical circuits)
Network synthesis filters
No instruction set computing
Noise margin
Norator
Nullator
Nullor
Open-circuit time constant method
OpenCores
Orion (system-on-a-chip)
Parasitic element (electrical networks)
PCMOS
Perfboard
Phase-locked loop
Pole splitting
Pollack's Rule
Post wall
Potting (electronics)
Primary line constants
Prototype filter
Quality Intellectual Property Metric
Quarter-wave impedance transformer
Real-time analyzer
Reflections of signals on conducting lines
Reliability (semiconductor)
Rock's law
Rockwell Collins
Roll-off
S-TEC Corporation
Schematic driven layout
Schematic Integrity Analysis
Semiconductor intellectual property core
Single-sideband modulation
Source transformation
Step response
Stripboard
Substrate coupling
Superheterodyne receiver
Surface-mount technology
Symbolic circuit analysis
System-level solution
T pad
Tape-out
Test compression
Thermal management of electronic devices and systems
Through-hole technology
Turret board
Universal Avionics
Vandal resistant switch
Variable-frequency oscillator
Via (electronics)
Voltage doubler
Voltage-controlled oscillator
Widlar current source
Π pad
References
Article Sources and Contributors
Image Sources, Licenses and Contributors
Article Licenses
License
System-on-a-chip
System-on-a-chip or system on chip (SoC or SOC) refers to
integrating all components of a computer or other electronic system
into a single integrated circuit (IC) chip. It may contain digital, analog,
mixed-signal, and often radio-frequency functions – all on a single
chip substrate. A typical application is in the area of embedded
systems.
The contrast with a microcontroller is one of degree. Microcontrollers typically have under 100 kB of RAM (often just a few kilobytes) and often really are single-chip systems, whereas the term SoC is typically used with more powerful processors, capable of running software such as the desktop versions of Windows and Linux, which need external memory chips (flash, RAM) to be useful, and which are used with various external peripherals. In short, for larger systems, system-on-a-chip is hyperbole, indicating technical direction more than reality: increasing chip integration to reduce manufacturing costs and to enable smaller systems. Many interesting systems are too complex to fit on just one chip built with a process optimized for just one of the system's tasks.

[Figure: The AMD Geode is an x86 compatible system-on-a-chip.]

When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. In large volumes, SoC is believed to be more cost-effective than SiP since it increases the yield of the fabrication and because its packaging is simpler.[1]

Another option, as seen for example in higher-end cell phones and on the Beagle Board, is package-on-package stacking during board assembly. The SoC chip includes processors and numerous digital peripherals, and comes in a ball grid package with lower and upper connections. The lower balls connect to the board and various peripherals, with the upper balls in a ring holding the memory buses used to access NAND flash and DDR2 RAM. Memory packages could come from multiple vendors.
Structure
A typical SoC consists of:
• One microcontroller, microprocessor or DSP core(s). Some SoCs – called multiprocessor System-on-Chip
(MPSoC) – include more than one processor core.
• Memory blocks including a selection of ROM, RAM, EEPROM and flash.
• Timing sources including oscillators and phase-locked loops.
• Peripherals including counter-timers, real-time timers and power-on reset generators.
• External interfaces including industry standards such as USB, FireWire, Ethernet, USART, SPI.
• Analog interfaces including ADCs and DACs.
• Voltage regulators and power management circuits.
These blocks are connected by either a proprietary or industry-standard bus such as the AMBA bus from ARM
Holdings. DMA controllers route data directly between external interfaces and memory, by-passing the processor
core and thereby increasing the data throughput of the SoC.
Design flow
A SoC consists of both the hardware
described above, and the software that
controls the microcontroller, microprocessor
or DSP cores, peripherals and interfaces.
The design flow for an SoC aims to develop
this hardware and software in parallel.
Most SoCs are developed from pre-qualified
hardware blocks for the hardware elements
described above, together with the software
drivers that control their operation. Of
particular importance are the protocol stacks
that drive industry-standard interfaces like
USB. The hardware blocks are put together
using CAD tools; the software modules are
integrated using a software development
environment.
A key step in the design flow is emulation: the hardware is mapped onto an emulation platform based on a field-programmable gate array (FPGA) that mimics the behavior of the SoC, and the software modules are loaded into the memory of the emulation platform. Once programmed, the emulation platform enables the hardware and software of the SoC to be tested and debugged at close to its full operational speed. (Emulation is generally preceded by extensive software simulation. In fact, sometimes the FPGAs are used primarily to speed up some parts of the simulation work.)

[Figure: Microcontroller-based System-on-a-Chip]
After emulation the hardware of the SoC follows the place and route phase of the design of an integrated circuit
before it is fabricated.
Chips are verified for logical correctness before being sent to foundry. This process is called functional verification,
and it accounts for a significant portion of the time and energy expended in the chip design life cycle (although the
often quoted figure of 70% is probably an exaggeration).[2] Verilog and VHDL are typical hardware description
languages used for verification. With the growing complexity of chips, hardware verification languages like
SystemVerilog, SystemC, e, and OpenVera are also being used. Bugs found in the verification stage are reported to
the designer.
Fabrication
SoCs can be fabricated by several technologies, including:
• Full custom
• Standard cell
• FPGA
SoC designs usually consume less power and
have a lower cost and higher reliability than
the multi-chip systems that they replace. And
with fewer packages in the system, assembly
costs are reduced as well.
However, like most VLSI designs, the total
cost is higher for one large chip than for the
same functionality distributed over several
smaller chips, because of lower yields and
higher NRE costs.
Books
• Badawy, Wael; Jullien, Graham (2003). System-on-chip for real-time applications. Kluwer. ISBN 1402072546. 465 pages.
• Furber, Stephen B. (2000). ARM system-on-chip architecture. Boston: Addison-Wesley. ISBN 0-201-67519-6.

[Figure: System-on-a-Chip Design Flow]
Notes
[1] "The Great Debate: SOC vs. SIP" (http:/ / www. eetimes. com/ electronics-news/ 4052047/ The-Great-Debate-SOC-vs-SIP). EE Times. .
Retrieved 2009-08-12.
[2] "Is verification really 70 percent?" (http:/ / www. eetimes. com/ showArticle. jhtml?articleID=21700028). Eetimes.com. . Retrieved
2009-08-12.
External links
• SOCC (http://www.ieee-socc.org/index.html) Annual IEEE International SOC Conference
Ambient power
An ambient power circuit is a circuit capable of extracting useful power from radio noise. Examples include Joe Tate's Ambient Power Module[1] and the crystal radio.
Analog signal
An analog or analogue signal is any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. It differs from a digital signal in that small fluctuations in the signal are meaningful. Analog is usually thought of in an
electrical context; however, mechanical, pneumatic, hydraulic, and other systems may also convey analog signals.
An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid
barometer uses rotary position as the signal to convey pressure information. Electrically, the property most
commonly used is voltage followed closely by frequency, current, and charge.
Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in
physical phenomena, such as sound, light, temperature, position, or pressure, and is achieved using a transducer. An
analog signal is one where at each point in time the value of the signal is significant, whereas a digital signal is one where at each point in time, the value of the signal must be above or below some discrete threshold.
For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a
microphone which induces corresponding fluctuations in the current produced by a coil in an electromagnetic
microphone, or the voltage produced by a condenser microphone. The voltage or the current is said to be an "analog" of the sound.
An analog signal has a theoretically infinite resolution. In practice an analog signal is subject to noise and a finite
slew rate. Therefore, both analog and digital systems are subject to limitations in resolution and bandwidth. As
analog systems become more complex, effects such as non-linearity and noise ultimately degrade analog resolution
to such an extent that the performance of digital systems may surpass it. Similarly, as digital systems become more
complex, errors can occur in the digital data stream. A comparable performing digital system is more complex and
requires more bandwidth than its analog counterpart. In analog systems, it is difficult to detect when such
degradation occurs. However, in digital systems, degradation can not only be detected but corrected as well.
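As an illustration of these resolution limits (our own sketch, not from the article; all names and values are illustrative), compare an analog value carrying additive noise with a digital sample limited by its quantization step:

```python
# Both representations end up with finite effective resolution:
# the analog one limited by its noise floor, the digital one by
# the quantization step of the converter.

def quantize(x, bits, full_scale=1.0):
    """Round x to the nearest of 2**bits levels spanning +/-full_scale."""
    step = 2 * full_scale / 2 ** bits
    return round(x / step) * step

true_value = 0.123456789        # the quantity being represented
noise = 0.0005                  # assumed noise floor of an analog channel

analog_received = true_value + noise
digital_received = quantize(true_value, bits=8)   # 8-bit ADC, step ~0.0078

print(f"analog  error: {abs(analog_received - true_value):.6f}")   # set by noise
print(f"digital error: {abs(digital_received - true_value):.6f}")  # set by step size
```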
Advantages
The main advantage is the fine definition of the analog signal which has the potential for an infinite amount of signal
resolution.[1] Compared to digital signals, analog signals are of higher density.[2]
Another advantage with analog signals is that their processing may be achieved more simply than with the digital
equivalent. An analog signal may be processed directly by analog components,[3] though some processes aren't
available except in digital form.
Disadvantages
The primary disadvantage of analog signaling is that any system has noise – i.e., random unwanted variation. As the
signal is copied and re-copied, or transmitted over long distances, these apparently random variations become
dominant. Electrically, these losses can be diminished by shielding, good connections, and several cable types such
as coaxial or twisted pair.
The effects of noise create signal loss and distortion. This is impossible to recover, since amplifying the signal to
recover attenuated parts of the signal amplifies the noise (distortion/interference) as well. Even if the resolution of an
analog signal is higher than a comparable digital signal, the difference can be overshadowed by the noise in the
signal.
Most of the analog systems also suffer from generation loss.
Modulation
Another method of conveying an analog signal is to use modulation. In this, some base signal (e.g., a sinusoidal
carrier wave) has one of its properties modulated: amplitude modulation involves altering the amplitude of a
sinusoidal voltage waveform by the source information, frequency modulation changes the frequency. Other
techniques, such as changing the phase of the base signal also work.
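A brief sketch of the two techniques (our own example, not from the article; sample rate, frequencies and deviation are illustrative):

```python
import numpy as np

fs = 48_000                        # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of time
fc, fm = 10_000, 440               # carrier and message frequencies, Hz
message = np.sin(2 * np.pi * fm * t)

# AM: the carrier amplitude follows the message.
am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

# FM: the instantaneous frequency follows the message; the carrier
# phase is the accumulated (integrated) instantaneous frequency.
deviation = 2_000                  # peak frequency deviation, Hz
phase = 2 * np.pi * np.cumsum(fc + deviation * message) / fs
fm_signal = np.sin(phase)
```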
Analog circuits do not involve quantisation of information into digital format. The concept being measured over the
circuit, whether sound, light, pressure, temperature, or an exceeded limit, remains from end to end.
See digital for a discussion of digital vs. analog.
Sources: Parts of an earlier version of this article were originally taken from Federal Standard 1037C in support of
MIL-STD-188.
References
[1] "Digital Signal Processing: Instant access." Butterworth-Heinemann – Page 3
[2] "Concise Dictionary of Computing." Penguin Reference – Penguin Books – pages 11–12.
[3] "Digital Signal Processing: Instant access." Butterworth-Heinemann – pages 2–3
Analog transmission
Analog (or analogue) transmission is a transmission method of conveying voice, data, image, signal or video
information using a continuous signal which varies in amplitude, phase, or some other property in proportion to that
of a variable. It could be the transfer of an analog source signal, using an analog modulation method such as
Frequency modulation (FM) or Amplitude modulation (AM), or no modulation at all.
Some textbooks also consider passband data transmission using digital modulation methods such as ASK, PSK and QAM, i.e. a sinewave modulated by a digital bit-stream, as analog transmission and as an analog signal. Others define that as digital transmission and as a digital signal. Baseband data transmission using line codes, resulting in a pulse train, is always considered as digital transmission, although the source signal may be a digitized analog signal.
Modes of transmission
Analog transmission can be conveyed in many different fashions:
• twisted-pair or coax cable
• fiber-optic cable
• via air
• via water
There are two basic kinds of analog transmission, both based on how they modulate data to combine an input signal
with a carrier signal. Usually, this carrier signal is a specific frequency, and data is transmitted through its variations.
The two techniques are amplitude modulation (AM), which varies the amplitude of the carrier signal, and frequency
modulation (FM), which modulates the frequency of the carrier.[1]
Types of analog transmissions
Most analog transmissions fall into one of several categories. Until recently, most telephony and voice communication was primarily analog in nature, as was most television and radio transmission. Early telecommunication devices utilized digital-to-analog conversion devices called modulator/demodulators, or modems, to convert digital data to analog signals and back.
Benefits and drawbacks
Analog transmission is still very popular, in particular for shorter distances, due to significantly lower costs, because complex multiplexing and timing equipment is unnecessary, and in small "short-haul" systems that simply do not need multiplexed digital transmission.[2]
However, in situations where a signal often has high signal-to-noise ratio and cannot achieve source linearity, or in
long distance, high output systems, analog is unattractive due to attenuation problems. Furthermore, as digital
techniques continue to be refined, analog systems are increasingly becoming legacy equipment.[2]
Recently, some nations, such as the Netherlands, have completely ceased analog transmissions on certain media, such as television,[3] in order to save the government money.[4]
References
[1] Kent, Allen; Froehlich, Fritz E. (1991). The Froehlich/Kent Encyclopedia of Telecommunications. Marcel Dekker. ISBN 0824729005.
[2] Freeman, Roger L. (2004). Telecommunication System Engineering. John Wiley and Sons. ISBN 0471451339.
[3] "Netherlands Ends Analog Transmission – Goodbye antenna, hello digital..." dslreports.com (http://www.dslreports.com/shownews/80237)
[4] http://www.nytimes.com/aponline/technology/AP-Netherlands-TV.html?_r=2&oref=slogin
Passive analogue filter development
This article is about the history and development of passive linear analogue filters used in electronics. For
linear filters in general see Linear filter. For electronic filters in general see Electronic filter.
Analogue filters are a basic building block of signal processing much used in electronics. Amongst their many
applications are the separation of an audio signal before application to bass, mid-range and tweeter loudspeakers; the
combining and later separation of multiple telephone conversations onto a single channel; the selection of a chosen
radio station in a radio receiver and rejection of others.
Passive linear electronic analogue filters are those filters which can be described with linear differential equations
(linear); they are composed of capacitors, inductors and, sometimes, resistors (passive) and are designed to operate
on continuously varying (analogue) signals. There are many linear filters which are not analogue in implementation
(digital filter), and there are many electronic filters which may not have a passive topology – both of which may
have the same transfer function of the filters described in this article. Analogue filters are most often used in wave
filtering applications, that is, where it is required to pass particular frequency components and to reject others from
analogue (continuous-time) signals.
Analogue filters have played an important part in the development of electronics. Especially in the field of
telecommunications, filters have been of crucial importance in a number of technological breakthroughs and have
been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that
the early development of filters was intimately connected with transmission lines. Transmission line theory gave rise
to filter theory, which initially took a very similar form, and the main application of filters was for use on
telecommunication transmission lines. However, the arrival of network synthesis techniques greatly enhanced the
degree of control of the designer.
Today, it is often preferred to carry out filtering in the digital domain where complex algorithms are much easier to
implement, but analogue filters do still find applications, especially for low-order simple filtering tasks and are often
still the norm at higher frequencies where digital technology is still impractical, or at least, less cost effective.
Wherever possible, and especially at low frequencies, analogue filters are now implemented in a filter topology
which is active in order to avoid the wound components required by passive topology.
It is possible to design linear analogue mechanical filters using mechanical components which filter mechanical
vibrations or acoustic waves. While there are few applications for such devices in mechanics per se, they can be used
in electronics with the addition of transducers to convert to and from the electrical domain. Indeed some of the
earliest ideas for filters were acoustic resonators because the electronics technology was poorly understood at the
time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of
mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in
inductors, capacitors and resistors respectively.
Historical overview
There are three main stages in the history of passive analogue filter development:
1. Simple filters. The frequency dependence of electrical response was known for capacitors and inductors from
very early on. The resonance phenomenon was also familiar from an early date and it was possible to produce
simple, single-branch filters with these components. Although attempts were made in the 1880s to apply them to
telegraphy, these designs proved inadequate for successful frequency division multiplexing. Network analysis was
not yet powerful enough to provide the theory for more complex filters and progress was further hampered by a
general failure to understand the frequency domain nature of signals.
2. Image filters. Image filter theory grew out of transmission line theory and the design proceeded in a similar
manner to transmission line analysis. For the first time filters could be produced that had precisely controllable
passbands and other parameters. These developments took place in the 1920s and filters produced to these designs
were still in widespread use in the 1980s, only declining as the use of analogue telecommunications declined.
Their immediate application was the economically important development of frequency division multiplexing for
use on intercity and international telephony lines.
3. Network synthesis filters. The mathematical bases of network synthesis were laid in the 1930s and 1940s. After
the end of World War II network synthesis became the primary tool of filter design. Network synthesis put filter
design on a firm mathematical foundation, freeing it from the mathematically sloppy techniques of image design
and severing the connection with physical lines. The essence of network synthesis is that it produces a design that
will (at least if implemented with ideal components) accurately reproduce the response originally specified in
black box terms.
Throughout this article the letters R,L and C are used with their usual meanings to represent resistance, inductance
and capacitance, respectively. In particular they are used in combinations, such as LC, to mean, for instance, a
network consisting only of inductors and capacitors. Z is used for electrical impedance, any 2-terminal[1]
combination of RLC elements and in some sections D is used for the rarely seen quantity elastance, which is the
inverse of capacitance.
Resonance
Early filters utilised the phenomenon of resonance to filter signals. Although electrical resonance had been
investigated by researchers from a very early stage, it was at first not widely understood by electrical engineers.
Consequently, the much more familiar concept of acoustic resonance (which in turn, can be explained in terms of the
even more familiar mechanical resonance) found its way into filter design ahead of electrical resonance.[2]
Resonance can be used to achieve a filtering effect because the resonant device will respond to frequencies at, or near, the resonant frequency but will not respond to frequencies far from resonance. Hence frequencies far from resonance are filtered out from the output of the device.[3]
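For a series RLC circuit, the standard modern expressions (not stated in the original text) for the resonant frequency and the selectivity Q are

$$f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad Q = \frac{1}{R}\sqrt{\frac{L}{C}} = \frac{f_0}{\Delta f}$$

where Δf is the half-power bandwidth: the higher the Q, the narrower the band of frequencies around f0 that the resonator passes.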
Electrical resonance
Resonance was noticed early on in experiments with
the Leyden jar, invented in 1746. The Leyden jar
stores electricity due to its capacitance, and is, in
fact, an early form of capacitor. When a Leyden jar is
discharged by allowing a spark to jump between the
electrodes, the discharge is oscillatory. This was not
suspected until 1826, when Felix Savary in France,
and later (1842) Joseph Henry[4] in the US noted that
a steel needle placed close to the discharge does not
always magnetise in the same direction. They both
independently drew the conclusion that there was a
transient oscillation dying with time.[5]
[Figure: A 1915 example of an early type of resonant circuit known as an Oudin coil which uses Leyden jars for the capacitance.]

Hermann von Helmholtz in 1847 published his important work on conservation of energy[6] in part of which he used those principles to explain why the oscillation dies away, that it is the resistance of the circuit which dissipates the energy of the oscillation on each successive cycle. Helmholtz also noted that there was evidence of oscillation from the electrolysis experiments of William Hyde Wollaston. Wollaston was attempting to decompose water by electric shock but found that both hydrogen and oxygen were present at both electrodes. In normal electrolysis they would separate, one to each electrode.[7]
Helmholtz explained why the oscillation decayed but he had not explained why it occurred in the first place. This
was left to Sir William Thomson (Lord Kelvin) who, in 1853, postulated that there was inductance present in the
circuit as well as the capacitance of the jar and the resistance of the load.[8] This established the physical basis for the
phenomenon - the energy supplied by the jar was partly dissipated in the load but also partly stored in the magnetic
field of the inductor.[9]
So far, the investigation had been on the natural frequency of transient oscillation of a resonant circuit resulting from a sudden stimulus. More important from the point of view of filter theory is the behaviour of a resonant circuit when driven by an external AC signal: there is a sudden peak in the circuit's response when the driving signal frequency is at the resonant frequency of the circuit.[10] James Clerk Maxwell heard of the phenomenon from Sir William Grove in 1868 in connection with experiments on dynamos,[11] and was also aware of the earlier work of Henry Wilde in 1866. Maxwell explained resonance[12] mathematically, with a set of differential equations, in much the same terms that an RLC circuit is described today.[2] [13] [14]
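In modern notation (a standard reconstruction, not Maxwell's own symbols), the driven series RLC circuit is governed by

$$L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{q}{C} = E(t)$$

for the capacitor charge q, whose response to a sinusoidal drive E(t) peaks when the driving frequency is near the natural frequency $1/\sqrt{LC}$.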
Heinrich Hertz (1887) experimentally demonstrated the resonance phenomena[15] by building two resonant circuits,
one of which was driven by a generator and the other was tunable and only coupled to the first electromagnetically
(i.e., no circuit connection). Hertz showed that the response of the second circuit was at a maximum when it was in
tune with the first. The diagrams produced by Hertz in this paper were the first published plots of an electrical
resonant response.[2] [16]
Acoustic resonance
As mentioned earlier, it was acoustic resonance that inspired filtering applications, the first of these being a telegraph
system known as the "harmonic telegraph". Versions are due to Elisha Gray, Alexander Graham Bell (1870s),[2]
Ernest Mercadier and others. Its purpose was to simultaneously transmit a number of telegraph messages over the
same line and represents an early form of frequency division multiplexing (FDM). FDM requires the sending end to
be transmitting at different frequencies for each individual communication channel. This demands individual tuned
resonators, as well as filters to separate out the signals at the receiving end. The harmonic telegraph achieved this
with electromagnetically driven tuned reeds at the transmitting end which would vibrate similar reeds at the
receiving end. Only the reed with the same resonant frequency as the transmitter would vibrate to any appreciable
extent at the receiving end.[17]
Incidentally, the harmonic telegraph directly suggested to Bell the idea of the telephone. The reeds can be viewed as
transducers converting sound to and from an electrical signal. It is no great leap from this view of the harmonic
telegraph to the idea that speech can be converted to and from an electrical signal.[2] [17]
Early multiplexing
By the 1890s electrical resonance was much more widely understood
and had become a normal part of the engineer's toolkit. In 1891 Hutin
and Leblanc patented an FDM scheme for telephone circuits using
resonant circuit filters.[20] Rival patents were filed in 1892 by Michael
Pupin and John Stone Stone with similar ideas, priority eventually
being awarded to Pupin. However, no scheme using just simple
resonant circuit filters can successfully multiplex (i.e. combine) the
wider bandwidth of telephone channels (as opposed to telegraph)
without either an unacceptable restriction of speech bandwidth or a channel spacing so wide as to make the benefits of multiplexing uneconomic.[2] [21]
The basic technical reason for this difficulty is that the frequency response of a simple filter approaches a fall of 6 dB/octave far from the point of resonance. This means that if telephone channels are squeezed in side-by-side into the frequency spectrum, there will be crosstalk from adjacent channels in any given channel. What is required is a much more sophisticated filter that has a flat frequency response in the required passband like a low-Q resonant circuit, but that rapidly falls in response (much faster than 6 dB/octave) at the transition from passband to stopband like a high-Q resonant circuit.[22] Obviously, these are contradictory requirements to be met with a single resonant circuit. The solution to these needs was founded in the theory of transmission lines and consequently the necessary filters did not become available until this theory was fully developed. At this early stage the idea of signal bandwidth, and hence the need for filters to match to it, was not fully understood; indeed, it was as late as 1920 before the concept of bandwidth was fully established.[23] For early radio, the concepts of Q-factor, selectivity and tuning sufficed. This was all to change with the developing theory of transmission lines on which image filters are based, as explained in the next section.[2]

[Figure: Hutin and Leblanc's multiple telegraph filter of 1891 showing the use of resonant circuits in filtering.[18] [19]]
At the turn of the century as telephone lines became available, it became popular to add telegraph on to telephone
lines with an earth return phantom circuit.[24] An LC filter was required to prevent telegraph clicks being heard on
the telephone line. From the 1920s onwards, telephone lines, or balanced lines dedicated to the purpose, were used
for FDM telegraph at audio frequencies. The first of these systems in the UK was a Siemens and Halske installation
between London and Manchester. GEC and AT&T also had FDM systems. Separate pairs were used for the send and
receive signals. The Siemens and GEC systems had six channels of telegraph in each direction, the AT&T system
had twelve. All of these systems used electronic oscillators to generate a different carrier for each telegraph signal
and required a bank of band-pass filters to separate out the multiplexed signal at the receiving end.[25]
Transmission line theory
The earliest model of the transmission line was probably described by Georg Ohm (1827) who established that resistance in a wire is proportional to its length.[26] [27] The Ohm model thus included only resistance.

[Figure: Ohm's model of the transmission line was simply resistance.]

Latimer Clark noted that signals were delayed and elongated along a cable, an undesirable form of distortion now called dispersion but then called retardation, and Michael Faraday (1853) established that this was due to the capacitance present in the transmission line.[28] [29] Lord Kelvin (1854) found the correct mathematical description needed in his work on early transatlantic cables; he arrived at an equation identical to the conduction of a heat pulse along a metal bar.[30] This model incorporates only resistance and capacitance, but that is all that was needed in undersea cables dominated by capacitance effects. Kelvin's model predicts a limit on the telegraph signalling speed of a cable but Kelvin still did not use the concept of bandwidth, the limit was entirely explained in terms of the dispersion of the telegraph symbols.[2]

The mathematical model of the transmission line reached its fullest development with Oliver Heaviside. Heaviside (1881) introduced series inductance and shunt conductance into the model making four distributed elements in all. This model is now known as the telegrapher's equation and the distributed elements are called the primary line constants.[31]

[Figure: Heaviside's model of the transmission line. L, R, C and G in all three diagrams are the primary line constants. The infinitesimals δL, δR, δC and δG are to be understood as Lδx, Rδx, Cδx and Gδx respectively.]
From the work of Heaviside (1887) it had become clear that the performance of telegraph lines, and most especially
telephone lines, could be improved by the addition of inductance to the line.[32] George Campbell at AT&T
implemented this idea (1899) by inserting loading coils at intervals along the line.[33] Campbell found that as well as
the desired improvements to the line's characteristics in the passband there was also a definite frequency beyond
which signals could not be passed without great attenuation. This was a result of the loading coils and the line
capacitance forming a low-pass filter, an effect that is only apparent on lines incorporating lumped components such
as the loading coils. This naturally led Campbell (1910) to produce a filter with ladder topology, a glance at the
circuit diagram of this filter is enough to see its relationship to a loaded transmission line.[34] The cut-off
phenomenon is an undesirable side-effect as far as loaded lines are concerned but for telephone FDM filters it is
precisely what is required. For this application, Campbell produced band-pass filters to the same ladder topology by
replacing the inductors and capacitors with resonators and anti-resonators respectively.[35] Both the loaded line and
FDM were of great benefit economically to AT&T and this led to fast development of filtering from this point
onwards.[36]
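The cut-off Campbell observed is that of what would later be called a constant-k low-pass section; for series inductance L and shunt capacitance C per section, the standard result (stated here for reference, not quoted from the original) is

$$f_c = \frac{1}{\pi\sqrt{LC}}$$

above which the ladder attenuates rapidly rather than merely delaying the signal.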
Image filters
The filters designed by Campbell[38] were named wave filters because of their property of passing some waves and strongly rejecting others. The method by which they were designed was called the image parameter method[39] [40] [41] and filters designed to this method are called image filters.[42] The image method essentially consists of developing the transmission constants of an infinite chain of identical filter sections and then terminating the desired finite number of filter sections in the image impedance. This exactly corresponds to the way the properties of a finite length of transmission line are derived from the theoretical properties of an infinite line, the image impedance corresponding to the characteristic impedance of the line.[43]

[Figure: Campbell's sketch of the low-pass version of his filter from his 1915 patent[37] showing the now ubiquitous ladder topology with capacitors for the ladder rungs and inductors for the stiles. Filters of more modern design also often adopt the same ladder topology as used by Campbell. It should be understood that although superficially similar, they are really quite different. The ladder construction is essential to the Campbell filter and all the sections have identical element values. Modern designs can be realised in any number of topologies, choosing the ladder topology is merely a matter of convenience. Their response is quite different (better) than Campbell's and the element values, in general, will all be different.]
From 1920 John Carson, also working for AT&T, began to develop a new way of looking at signals using the
operational calculus of Heaviside which in essence is working in the frequency domain. This gave the AT&T
engineers a new insight into the way their filters were working and led Otto Zobel to invent many improved forms.
Carson and Zobel steadily demolished many of the old ideas. For instance the old telegraph engineers thought of the
signal as being a single frequency and this idea persisted into the age of radio with some still believing that
frequency modulation (FM) transmission could be achieved with a smaller bandwidth than the baseband signal right
up until the publication of Carson's 1922 paper.[44] Another advance concerned the nature of noise, Carson and Zobel
(1923)[45] treated noise as a random process with a continuous bandwidth, an idea that was well ahead of its time,
and thus limited the amount of noise that it was possible to remove by filtering to that part of the noise spectrum
which fell outside the passband. This too, was not generally accepted at first, notably being opposed by Edwin
Armstrong (who ironically, actually succeeded in reducing noise with wide-band FM) and was only finally settled
with the work of Harry Nyquist whose thermal noise power formula is well known today.[46]
Several improvements were made to image filters and their theory of operation by Otto Zobel. Zobel coined the term
constant k filter (or k-type filter) to distinguish Campbell's filter from later types, notably Zobel's m-derived filter (or
m-type filter). The particular problems Zobel was trying to address with these new forms were impedance matching
into the end terminations and improved steepness of roll-off. These were achieved at the cost of an increase in filter
circuit complexity.[47] [48]
A more systematic method of producing image filters was introduced by Hendrik Bode (1930), and further
developed by several other investigators including Piloty (1937-1939) and Wilhelm Cauer (1934-1937). Rather than
enumerate the behaviour (transfer function, attenuation function, delay function and so on) of a specific circuit,
instead a requirement for the image impedance itself was developed. The image impedance can be expressed in
terms of the open-circuit and short-circuit impedances[49] of the filter as $Z_I = \sqrt{Z_o Z_s}$. Since the image impedance
must be real in the passbands and imaginary in the stopbands according to image theory, there is a requirement that
the poles and zeroes of Zo and Zs cancel in the passband and correspond in the stopband. The behaviour of the filter
can be entirely defined in terms of the positions in the complex plane of these pairs of poles and zeroes. Any circuit
which has the requisite poles and zeroes will also have the requisite response. Cauer pursued two related questions
arising from this technique: what specification of poles and zeroes are realisable as passive filters; and what
realisations are equivalent to each other. The results of this work led Cauer to develop a new approach, now called
network synthesis.[48] [50] [51]
This "poles and zeroes" view of filter design was particularly useful where a bank of filters, each operating at
different frequencies, are all connected across the same transmission line. The earlier approach was unable to deal
properly with this situation, but the poles and zeroes approach could embrace it by specifying a constant impedance
for the combined filter. This problem was originally related to FDM telephony but frequently now arises in
loudspeaker crossover filters.[50]
Network synthesis filters
The essence of network synthesis is to start with a required filter response and produce a network that delivers that
response, or approximates to it within a specified boundary. This is the inverse of network analysis which starts with
a given network and by applying the various electric circuit theorems predicts the response of the network.[52] The
term was first used with this meaning in the doctoral thesis of Yuk-Wing Lee (1930) and apparently arose out of a
conversation with Vannevar Bush.[53] The advantage of network synthesis over previous methods is that it provides a
solution which precisely meets the design specification. This is not the case with image filters; a degree of
experience is required in their design since the image filter only meets the design specification in the unrealistic case
of being terminated in its own image impedance, to produce which would require the exact circuit being sought.
Network synthesis on the other hand, takes care of the termination impedances simply by incorporating them into the
network being designed.[54]
The development of network analysis needed to take place before network synthesis was possible. The theorems of
Gustav Kirchhoff and others and the ideas of Charles Steinmetz (phasors) and Arthur Kennelly (complex
impedance)[55] laid the groundwork.[56] The concept of a port also played a part in the development of the theory,
and proved to be a more useful idea than network terminals.[1] [48] The first milestone on the way to network
synthesis was an important paper by Ronald Foster (1924),[57] A Reactance Theorem, in which Foster introduces the
idea of a driving point impedance, that is, the impedance that is connected to the generator. The expression for this
impedance determines the response of the filter and vice versa, and a realisation of the filter can be obtained by
expansion of this expression. It is not possible to realise any arbitrary impedance expression as a network. Foster's
reactance theorem stipulates necessary and sufficient conditions for realisability: that the reactance must be
algebraically increasing with frequency and the poles and zeroes must alternate.[58] [59]
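Foster's conditions are embodied in his canonical partial-fraction form for an LC driving point impedance (a standard statement of the theorem, in modern notation, not quoted from the original):

$$Z(s) = \frac{K_0}{s} + K_\infty s + \sum_k \frac{2K_k s}{s^2 + \omega_k^2}, \qquad K_0, K_\infty, K_k \geq 0$$

On the imaginary axis s = jω this is a pure reactance jX(ω) with dX/dω > 0, which forces the poles and zeroes to alternate along the frequency axis.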
Wilhelm Cauer expanded on the work of Foster (1926)[60] and was the first to talk of realisation of a one-port
impedance with a prescribed frequency function. Foster's work considered only reactances (i.e., only LC-kind
circuits). Cauer generalised this to any 2-element kind one-port network, finding there was an isomorphism between
them. He also found ladder realisations[61] of the network using Thomas Stieltjes' continued fraction expansion. This
work was the basis on which network synthesis was built, although Cauer's work was not at first used much by
engineers, partly because of the intervention of World War II, partly for reasons explained in the next section and
partly because Cauer presented his results using topologies that required mutually coupled inductors and ideal
transformers. Although on this last point, it has to be said that transformer coupled double tuned amplifiers are a
common enough way of widening bandwidth without sacrificing selectivity.[62] [63] [64]
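Cauer's ladder realisation amounts to a Stieltjes continued-fraction expansion of the impedance; repeatedly extracting a series inductor and a shunt capacitor gives what is now called the Cauer I form (a standard result, shown here for reference):

$$Z(s) = sL_1 + \cfrac{1}{sC_2 + \cfrac{1}{sL_3 + \cfrac{1}{sC_4 + \cdots}}}$$

Each extracted term corresponds directly to one element of the ladder network.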
Image method versus synthesis
Image filters continued to be used by designers long after the superior network synthesis techniques were available.
Part of the reason for this may have been simply inertia, but it was largely due to the greater computation required
for network synthesis filters, often needing a mathematical iterative process. Image filters, in their simplest form,
consist of a chain of repeated, identical sections. The design can be improved simply by adding more sections and
the computation required to produce the initial section is on the level of "back of an envelope" designing. In the case
of network synthesis filters, on the other hand, the filter is designed as a whole, single entity and to add more
sections (i.e., increase the order)[65] the designer would have no option but to go back to the beginning and start over.
The advantages of synthesised designs are real, but they are not overwhelming compared to what a skilled image
designer could achieve, and in many cases it was more cost effective to dispense with time-consuming
calculations.[66] This is simply not an issue with the modern availability of computing power, but in the 1950s it was
non-existent, in the 1960s and 1970s available only at cost, and not finally becoming widely available to all
designers until the 1980s with the advent of the desktop personal computer. Image filters continued to be designed
up to that point and many remained in service into the 21st century.[67]
The computational difficulty of the network synthesis method was addressed by tabulating the component values of
a prototype filter and then scaling the frequency and impedance and transforming the bandform to those actually
required. This kind of approach, or similar, was already in use with image filters, for instance by Zobel,[47] but the
concept of a "reference filter" is due to Sidney Darlington.[68] Darlington (1939),[41] was also the first to tabulate
values for network synthesis prototype filters,[69] nevertheless it had to wait until the 1950s before the
Cauer-Darlington elliptic filter first came into use.[70]
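The scaling referred to is straightforward; for a low-pass prototype with element values g_k, cut-off 1 rad/s and 1 Ω terminations (the g_k notation is the modern convention, not Darlington's), the denormalised elements for cut-off ω_c and impedance level R_0 are

$$L_k = \frac{R_0\, g_k}{\omega_c}, \qquad C_k = \frac{g_k}{R_0\, \omega_c}$$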
Once computational power was readily available, it became possible to easily design filters to minimise any arbitrary
parameter, for example time delay or tolerance to component variation. The difficulties of the image method were
firmly put in the past, and even the need for prototypes became largely superfluous.[71] [72] Furthermore, the advent
of active filters eased the computation difficulty because sections could be isolated and iterative processes were not
then generally necessary.[66]
Realisability and equivalence
Realisability (that is, which functions are realisable as real impedance networks) and equivalence (which networks
equivalently have the same function) are two important questions in network synthesis. Following an analogy with
Lagrangian mechanics, Cauer formed the matrix equation,

$$[Z] = s[L] + [R] + \frac{1}{s}[D]$$

where [Z], [R], [L] and [D] are the n×n matrices of, respectively, impedance, resistance, inductance and elastance of an n-mesh network and s is the complex frequency operator. Here [R], [L] and [D] have associated energies corresponding to the kinetic, potential and dissipative heat energies, respectively, in a mechanical system and the already known results from mechanics could be applied here. Cauer determined the driving point impedance by the method of Lagrange multipliers;

$$Z_p(s) = \frac{\det [Z]}{a_{11}}$$

where a11 is the complement of the element A11 to which the one-port is to be connected. From stability theory Cauer found that [R], [L] and [D] must all be positive-definite matrices for Zp(s) to be realisable if ideal transformers are not excluded. Realisability is only otherwise restricted by practical limitations on topology.[52] This work is also partly due to Otto Brune (1931), who worked with Cauer in the US prior to Cauer returning to Germany.[63] A well known condition for realisability of a one-port rational impedance due to Cauer (1929)[73] is that it must be a function of s that is analytic in the right halfplane (σ>0), have a positive real part in the right halfplane and take on real values on the real axis. This follows from the Poisson integral representation of these functions. Brune coined the term positive-real for this class of function and proved that it was a necessary and sufficient condition (Cauer had only proved it to be necessary) and they extended the work to LC multiports. A theorem due to Sidney Darlington states that any positive-real function Z(s) can be realised as a lossless two-port terminated in a positive resistor R. No resistors within the network are necessary to realise the specified response.[63] [74] [75]
As for equivalence, Cauer found a group of real affine transformations of these matrices under which Zp(s) is invariant, that is, all the transformed networks are equivalents of the original.[52]
Approximation
The approximation problem in network synthesis is to find functions which will produce realisable networks
approximating to a prescribed function of frequency within limits arbitrarily set. The approximation problem is an
important issue since the ideal function of frequency required will commonly be unachievable with rational
networks. For instance, the ideal prescribed function is often taken to be the unachievable lossless transmission in the
passband, infinite attenuation in the stopband and a vertical transition between the two. However, the ideal function
can be approximated with a rational function, becoming ever closer to the ideal the higher the order of the
polynomial. The first to address this problem was Stephen Butterworth (1930) using his Butterworth polynomials.
Independently, Cauer (1931) used Chebyshev polynomials, initially applied to image filters, and not to the now
well-known ladder realisation of this filter.[63] [76]
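The two classic solutions to the approximation problem mentioned here have the magnitude responses (in modern notation, not quoted from the original)

$$|H(j\omega)|^2 = \frac{1}{1+(\omega/\omega_c)^{2n}} \;\text{(Butterworth)}, \qquad |H(j\omega)|^2 = \frac{1}{1+\varepsilon^2 T_n^2(\omega/\omega_c)} \;\text{(Chebyshev)}$$

where T_n is the nth Chebyshev polynomial and ε sets the ripple; both approach the ideal "brick wall" response as the order n increases.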
Butterworth filter
Butterworth filters are an important class[65] of filters due to Stephen Butterworth (1930)[77] which are now
recognised as being a special case of Cauer's elliptic filters. Butterworth discovered this filter independently of
Cauer's work and implemented it in his version with each section isolated from the next with a valve amplifier which
made calculation of component values easy since the filter sections could not interact with each other and each
section represented one term in the Butterworth polynomials. This gives Butterworth the credit for being both the
first to deviate from image parameter theory and the first to design active filters. It was later shown that Butterworth
filters could be implemented in ladder topology without the need for amplifiers, possibly the first to do so was
William Bennett (1932)[78] in a patent which presents formulae for component values identical to the modern ones.
Bennett, at this stage though, is still discussing the design as an artificial transmission line and so is adopting an
image parameter approach despite having produced what would now be considered a network synthesis design. He
also does not appear to be aware of the work of Butterworth or the connection between them.[40] [79]
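The ladder implementations mentioned above can be computed from a closed-form expression; a minimal sketch (using the modern formula for an equally terminated Butterworth ladder prototype, not Bennett's original notation):

```python
import math

def butterworth_prototype(n):
    """Element values g_1..g_n of an order-n Butterworth low-pass ladder
    prototype (1 ohm terminations, cut-off 1 rad/s). The values are used
    alternately as series inductors and shunt capacitors."""
    return [2 * math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

print(butterworth_prototype(3))  # approximately [1.0, 2.0, 1.0], the classic 3rd-order ladder
```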
Insertion-loss method
The insertion-loss method of designing filters is, in essence, to prescribe a desired function of frequency for the filter
as an attenuation of the signal when the filter is inserted between the terminations relative to the level that would
have been received were the terminations connected to each other via an ideal transformer perfectly matching them.
Versions of this theory are due to Sidney Darlington, Wilhelm Cauer and others, all working more or less independently, and the method is often taken as synonymous with network synthesis. Butterworth's filter implementation is, in
those terms, an insertion-loss filter, but it is a relatively trivial one mathematically since the active amplifiers used by
Butterworth ensured that each stage individually worked into a resistive load. Butterworth's filter becomes a
non-trivial example when it is implemented entirely with passive components. An even earlier filter which
influenced the insertion-loss method was Norton's dual-band filter where the input of two filters are connected in
parallel and designed so that the combined input presents a constant resistance. Norton's design method, together
with Cauer's canonical LC networks and Darlington's theorem that only LC components were required in the body of
the filter resulted in the insertion-loss method. However, ladder topology proved to be more practical than Cauer's
canonical forms.[80]
Darlington's insertion-loss method is a generalisation of the procedure used by Norton. In Norton's filter it can be
shown that each filter is equivalent to a separate filter unterminated at the common end. Darlington's method applies
to the more straightforward and general case of a 2-port LC network terminated at both ends. The procedure consists
of the following steps:
1. determine the poles of the prescribed insertion-loss function,
2. from that find the complex transmission function,
3. from that find the complex reflection coefficients at the terminating resistors,
4. find the driving point impedance from the short-circuit and open-circuit impedances,[49]
5. expand the driving point impedance into an LC (usually ladder) network.
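Two standard relations (modern notation, assuming a lossless network between resistive terminations; not quoted from the original) underpin steps 3 and 4: the transmission and reflection magnitudes on the jω axis are complementary, and the driving point impedance follows from the reflection coefficient ρ at a termination R:

$$|\rho(j\omega)|^2 = 1 - |T(j\omega)|^2, \qquad Z(s) = R\,\frac{1+\rho(s)}{1-\rho(s)}$$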
Darlington additionally used a transformation found by Hendrik Bode that predicted the response of a filter using
non-ideal components but all with the same Q. Darlington used this transformation in reverse to produce filters with
a prescribed insertion-loss with non-ideal components. Such filters have the ideal insertion-loss response plus a flat attenuation across all frequencies.[66] [81]
Elliptic filters
Elliptic filters are filters produced by the insertion-loss method which use elliptic rational functions in their transfer
function as an approximation to the ideal filter response and the result is called a Chebyshev approximation. This is
the same Chebyshev approximation technique used by Cauer on image filters but follows the Darlington
insertion-loss design method and uses slightly different elliptic functions. Cauer had some contact with Darlington
and Bell Labs before WWII (for a time he worked in the US) but during the war they worked independently, in some
cases making the same discoveries. Cauer had disclosed the Chebyshev approximation to Bell Labs but had not left
them with the proof. Sergei Schelkunoff provided this and a generalisation to all equal ripple problems. Elliptic
filters are a general class of filter which incorporate several other important classes as special cases: Cauer filter
(equal ripple in passband and stopband), Chebyshev filter (ripple only in passband), reverse Chebyshev filter (ripple
only in stopband) and Butterworth filter (no ripple in either band).[80] [82]
Generally, for insertion-loss filters where the transmission zeroes and infinite losses are all on the real axis of the
complex frequency plane (which they usually are for minimum component count), the insertion-loss function can be
written as

$$L = 1 + J^2 F^2$$

where F is either an even (resulting in an antimetric filter) or an odd (resulting in a symmetric filter) function of
frequency. Zeroes of F correspond to zero loss and the poles of F correspond to transmission zeroes. J sets the
passband ripple height and the stopband loss and these two design requirements can be interchanged. The zeroes and
poles of F and J can be set arbitrarily. The nature of F determines the class of the filter;
• if F is a Chebyshev approximation the result is a Chebyshev filter,
• if F is a maximally flat approximation the result is a passband maximally flat filter,
• if 1/F is a Chebyshev approximation the result is a reverse Chebyshev filter,
• if 1/F is a maximally flat approximation the result is a stopband maximally flat filter.
A Chebyshev response simultaneously in the passband and stopband is possible, such as Cauer's equal ripple elliptic
filter.[80]
Darlington relates that he found in the New York City library Carl Jacobi's original paper on elliptic functions,
published in Latin in 1829. In this paper Darlington was surprised to find foldout tables of the exact elliptic function
transformations needed for Chebyshev approximations of both Cauer's image parameter, and Darlington's
insertion-loss filters.[66]
Other methods
Darlington considers the topology of coupled tuned circuits to involve a separate approximation technique to the
insertion-loss method, but also producing nominally flat passbands and high attenuation stopbands. The most
common topology for these is shunt anti-resonators coupled by series capacitors or, less commonly, by inductors, or,
in the case of a two-section filter, by mutual inductance. These are most useful where the design requirement is not too
stringent, that is, moderate bandwidth, roll-off and passband ripple.[72]
Other notable developments and applications
Mechanical filters
Edward Norton, around 1930, designed a mechanical
filter for use on phonograph recorders and players.
Norton designed the filter in the electrical domain and
then used the correspondence of mechanical quantities
to electrical quantities to realise the filter using
mechanical components. Mass corresponds to
inductance, stiffness to elastance and damping to
resistance. The filter was designed to have a maximally
flat frequency response.[75]
In modern designs it is common to use quartz crystal
filters, especially for narrowband filtering applications.
The signal exists as a mechanical acoustic wave while
it is in the crystal and is converted by transducers
between the electrical and mechanical domains at the
terminals of the crystal.[84]
[Figure: Norton's mechanical filter together with its electrical equivalent circuit. Two equivalents are shown: "Fig.3" directly corresponds to the physical relationship of the mechanical components; "Fig.4" is an equivalent transformed circuit arrived at by repeated application of a well known transform, the purpose being to remove the series resonant circuit from the body of the filter, leaving a simple LC ladder network.[83]]

Transversal filters
Transversal filters are not usually associated with passive implementations but the concept can be found in a Wiener
and Lee patent from 1935 which describes a filter consisting of a cascade of all-pass sections.[85] The outputs of the
various sections are summed in the proportions needed to result in the required frequency function. This works by
the principle that certain frequencies will be in, or close to antiphase, at different sections and will tend to cancel
when added. These are the frequencies rejected by the filter and can produce filters with very sharp cut-offs. This
approach did not find any immediate applications, and is not common in passive filters. However, the principle finds
many applications as an active delay line implementation for wide band discrete-time filter applications such as
television, radar and high-speed data transmission.[86] [87]
Matched filter
The purpose of matched filters is to maximise the signal-to-noise ratio (S/N) at the expense of pulse shape. Unlike in
many other applications, pulse shape is unimportant in radar, while S/N is the primary limitation on performance.
The filters were introduced during WWII (described 1943)[88] by Dwight North and are often eponymously referred
to as "North filters".[86] [89]
Filters for control systems
Control systems have a need for smoothing filters in their feedback loops with criteria to maximise the speed of
movement of a mechanical system to the prescribed mark and at the same time minimise overshoot and noise
induced motions. A key problem here is the extraction of Gaussian signals from a noisy background. An early paper
on this was published during WWII by Norbert Wiener with the specific application to anti-aircraft fire control
analogue computers. Rudy Kalman (Kalman filter) later reformulated this in terms of state-space smoothing and
prediction, where it is known as the linear-quadratic-Gaussian control problem. Kalman's work started an interest in
state-space solutions, but according to Darlington this approach can also be found in the work of Heaviside and
earlier.[86]
Modern practice
LC passive filters gradually became less popular as active amplifying elements, particularly operational amplifiers,
became cheaply available. The reason for the change is that wound components (the usual method of manufacture
for inductors) are far from ideal, the wire adding resistance as well as inductance to the component. Inductors are
also relatively expensive and are not "off-the-shelf" components. On the other hand, the function of LC ladder
sections, LC resonators and RL sections can be replaced by RC components in an amplifier feedback loop (active
filters). These components will usually be much more cost effective, and smaller as well. Cheap digital technology,
in its turn, has largely supplanted analogue implementations of filters. However, there is still an occasional place for
them in the simpler applications such as coupling where sophisticated functions of frequency are not needed.[90] [91]
Footnotes
[1] A terminal of a network is a connection point where current can enter or leave the network from the world outside. This is often called a pole
in the literature, especially the more mathematical, but is not to be confused with a pole of the transfer function which is a meaning also used
in this article. A 2-terminal network amounts to a single impedance (although it may consist of many elements connected in a complicated set
of meshes) and can also be described as a one-port network. For networks of more than two terminals it is not necessarily possible to identify
terminal pairs as ports.
[2] Lundheim, p.24
[3] L. J. Raphael, G. J. Borden, K. S. Harris, Speech science primer: physiology, acoustics, and perception of speech, p.113, Lippincott Williams
& Wilkins 2006 ISBN 0-7817-7117-X
[4] Joseph Henry, "On induction from ordinary electricity; and on the oscillatory discharge", Proceedings of the American Philosophical Society,
vol 2, pp.193-196, 17th June 1842
[5] Blanchard, pp.415-416
[6] Hermann von Helmholtz, Über die Erhaltung der Kraft (On the Conservation of Force), G Reimer, Berlin, 1847
[7] Blanchard, pp.416-417
[8] William Thomson, "On transient electric currents", Philosophical Magazine, vol 5, pp.393-405, June 1853
[9] Blanchard, p.417
[10] The resonant frequency is very close to, but usually not exactly equal to, the natural frequency of oscillation of the circuit
[11] William Grove, "An experiment in magneto-electric induction", Philosophical Magazine, vol 35, pp.184-185, March 1868
[12] Oliver Lodge and some other English scientists tried to keep acoustic and electric terminology separate and promoted the term "syntony".
However it was "resonance" that was to win the day. Blanchard, p.422
[13] James Clerk Maxwell, "On Mr Grove's experiment in magneto-electric induction", Philosophical Magazine, vol 35, pp 360-363, May 1868
[14] Blanchard, pp.416–421
[15] Heinrich Hertz, "Electric waves", p.42, The Macmillan Company, 1893
[16] Blanchard, pp.421-423
[17] Blanchard, p.425
[18] M Hutin, M Leblanc, Multiple Telegraphy and Telephony, United States patent US0838545, filed 9 May 1894, issued 18 Dec 1906
[19] This image is from a later, corrected, US patent but patenting the same invention as the original French patent
[20] Maurice Hutin, Maurice Leblanc, "Étude sur les Courants Alternatifs et leur Application au Transport de la Force", La Lumière Electrique,
2 May 1891
[21] Blanchard, pp.426-427
[22] Q factor is a dimensionless quantity enumerating the quality of a resonating circuit. It is roughly proportional to the number of oscillations
which a resonator would support after a single external excitation (for example, how many times a guitar string would wobble if pulled). One
definition of Q factor, the most relevant one in this context, is the ratio of resonant frequency to bandwidth of a circuit. It arose as a measure of
selectivity in radio receivers
[23] Lundheim (2002), p. 23
[24] Telegraph lines are typically unbalanced with only a single conductor provided, the return path is achieved through an earth connection
which is common to all the telegraph lines on a route. Telephone lines are typically balanced with two conductors per circuit. A telegraph
signal connected common-mode to both conductors of the telephone line will not be heard at the telephone receiver which can only detect
voltage differences between the conductors. The telegraph signal is typically recovered at the far end by connection to the center tap of a line
transformer. The return path is via an earth connection as usual. This is a form of phantom circuit
[25] K. G. Beauchamp, History of telegraphy, pp 84-85, Institution of Electrical Engineers, 2001 ISBN 0-85296-792-6
[26] Georg Ohm, Die galvanische Kette, mathematisch bearbeitet, Riemann Berlin, 1827
[27] At least, Ohm described the first model that was in any way correct. Earlier ideas such as Barlow's law from Peter Barlow were either
incorrect or inadequately described. See, for example, p.603 of:
John C. Shedd, Mayo D. Hershey, "The history of Ohm's law", The Popular Science Monthly, pp.599-614, December 1913 ISSN 0161-7370.
[28] Hunt, pp 62-63
[29] Werner von Siemens had also noted the retardation effect a few years earlier in 1849 and came to a similar conclusion as Faraday. However,
there was not so much interest in Germany in underwater and underground cables as there was in Britain; the German overhead cables did not
noticeably suffer from retardation and Siemens's ideas were not accepted. (Hunt, p.65.)
[30] Thomas William Körner, Fourier analysis, p.333, Cambridge University Press, 1989 ISBN 0-521-38991-7
[31] Heaviside, O, Electrical Papers, vol 1, pp.139-140, Boston, 1925
[32] Heaviside, O, "Electromagnetic Induction and its propagation", The Electrician, 3 June 1887
[33] James E. Brittain, "The Introduction of the Loading Coil: George A. Campbell and Michael I. Pupin", Technology and Culture, Vol. 11, No.
1 (Jan., 1970), pp. 36–57, The Johns Hopkins University Press doi:10.2307/3102809
[34] Darlington, pp.4-5
[35] The exact date Campbell produced each variety of filter is not clear. The work started in 1910, initially patented in 1917 (US1227113) and
the full theory published in 1922, but it is known that Campbell's filters were in use by AT&T long before the 1922 date (Bray, p.62,
Darlington, p.5)
[36] Bray, J, Innovation and the Communications Revolution, p 62, Institute of Electrical Engineers, 2002
[37] George A. Campbell, Electric wave-filter, US patent 1 227 113, filed 15 July 1915, issued 22 May 1917.
[38] Campbell has publishing priority for this invention but it is worth noting that Karl Willy Wagner independently made a similar discovery
which he was not allowed to publish immediately because World War I was still ongoing. (Thomas H. Lee, Planar microwave engineering,
p.725, Cambridge University Press 2004 ISBN 0-521-83526-7.)
[39] The term "image parameter method" was coined by Darlington (1939) in order to distinguish this earlier technique from his later
"insertion-loss method"
[40] "History of Filter Theory" (http:/ / www. quadrivium. nl/ history/ history. html), Quadrivium, retrieved 26th June 2009
[41] S. Darlington, "Synthesis of reactance 4-poles which produce prescribed insertion loss characteristics", Journal of Mathematics and Physics,
vol 18, pp.257-353, September 1939
[42] The terms wave filter and image filter are not synonymous, it is possible for a wave filter to not be designed by the image method, but in the
1920s the distinction was moot as the image method was the only one available
[43] Matthaei, pp.49-51
[44] Carson, J. R., "Notes on the Theory of Modulation", Proceedings of the IRE, vol 10, No 1, pp.57-64, 1922 doi:10.1109/JRPROC.1922.219793
[45] Carson, J R and Zobel, O J, "Transient Oscillation in Electric Wave Filters", Bell Systems Technical Journal, vol 2, July 1923, pp.1-29
[46] Lundheim, pp.24-25
[47] Zobel, O. J., "Theory and Design of Uniform and Composite Electric Wave Filters", Bell Systems Technical Journal, Vol. 2 (1923), pp. 1-46.
[48] Darlington, p.5
[49] The open-circuit impedance of a two-port network is the impedance looking into one port when the other port is open circuit. Similarly, the
short-circuit impedance is the impedance looking into one port when the other is terminated in a short circuit. The open-circuit impedance of
the first port in general (except for symmetrical networks) is not equal to the open-circuit impedance of the second and likewise for
short-circuit impedances
[50] Belevitch, p.851
[51] Cauer et al., p.6
[52] Cauer et al., p.4
[53] Karl L. Wildes, Nilo A. Lindgren, A century of electrical engineering and computer science at MIT, 1882-1982, p.157, MIT Press, 1985
ISBN 0-262-23119-0
[54] Matthaei, pp.83-84
[55] Arthur E. Kennelly, 1861 - 1939 (http://www.ieee.org/web/aboutus/history_center/biography/kennelly.html) IEEE biography,
retrieved June 13 2009
[56] Darlington, p.4
[57] Foster, R M, "A Reactance Theorem", Bell System Technical Journal, vol 3, pp.259-267, 1924
[58] Cauer et al., p.1
[59] Darlington, pp.4-6
[60] Cauer, W, "Die Verwirklichung der Wechselstromwiderstände vorgeschriebener Frequenzabhängigkeit" ("The realisation of impedances of
specified frequency dependence"), Archiv für Elektrotechnic, vol 17, pp.355-388, 1926 doi:10.1007/BF01662000
[61] which is the best known of the filter topologies. It is for this reason that ladder topology is often referred to as Cauer topology (the forms
used earlier by Foster are quite different) even though ladder topology had long since been in use in image filter design
[62] A.P.Godse U.A.Bakshi, Electronic Circuit Analysis, p.5-20, Technical Publications, 2007 ISBN 81-8431-047-1
[63] Belevitch, p.850
[64] Cauer et al., pp.1,6
[65] A class of filters is a collection of filters which are all described by the same class of mathematical function, for instance, the class of
Chebyshev filters are all described by the class of Chebyshev polynomials. For realisable linear passive networks, the transfer function must
be a ratio of polynomial functions. The order of a filter is the order of the highest order polynomial of the two and will equal the number of
elements (or resonators) required to build it. Usually, the higher the order of a filter, the steeper the roll-off of the filter will be. In general, the
values of the elements in each section of the filter will not be the same if the order is increased and will need to be recalculated. This is in
contrast to the image method of design which simply adds on more identical sections
[66] Darlington, p.9
[67] Irwin W. Sandberg, Ernest S. Kuh, "Sidney Darlington", Biographical Memoirs, vol 84, page 85, National Academy of Sciences (U.S.),
National Academies Press 2004 ISBN 0-309-08957-3
[68] J. Zdunek, "The network synthesis on the insertion-loss basis", Proceedings of the Institution of Electrical Engineers, p.283, part 3, vol 105,
1958
[69] Matthaei et al., p.83
[70] Michael Glynn Ellis, Electronic filter analysis and synthesis, p.2, Artech House 1994 ISBN 0-89006-616-7
[71] John T. Taylor, Qiuting Huang, CRC handbook of electrical filters, p.20, CRC Press 1997 ISBN 0-8493-8951-8
[72] Darlington, p.12
[73] A rational impedance is one expressed as a ratio of two finite polynomials in s, that is, a rational function in s. The implication of finite
polynomials is that the impedance, when realised, will consist of a finite number of meshes with a finite number of elements
[74] Cauer et al., pp.6-7
[75] Darlington, p.7
[76] Darlington, pp.7-8
[77] Butterworth, S, "On the Theory of Filter Amplifiers", Wireless Engineer, vol. 7, 1930, pp. 536-541
[78] William R. Bennett, Transmission network, US Patent 1,849,656, filed 29 June 1929, issued 15 March 1932
[79] Matthaei et al., pp.85-108
[80] Darlington, p.8
[81] Vasudev K Aatre, Network theory and filter design, p.355, New Age International 1986, ISBN 0-85226-014-8
[82] Matthaei et al., p.95
[83] E. L. Norton, "Sound reproducer", US Patent US1792655, filed 31st May 1929, issued 17th February 1931
[84] Vizmuller, P, RF Design Guide: Systems, Circuits, and Equations, pp.81-84, Artech House, 1995 ISBN 0-89006-754-6
[85] N Wiener and Yuk-wing Lee, Electric network system, United States patent US2024900, 1935
[86] Darlington, p.11
[87] B. S. Sonde, Introduction to System Design Using Integrated Circuits, pp.252-254, New Age International 1992 ISBN 81-224-0386-7
[88] D. O. North, "An analysis of the factors which determine signal/noise discrimination in pulsed carrier systems" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1444313), RCA Labs. Rep. PTR-6C, 1943
[89] Nadav Levanon, Eli Mozeson, Radar Signals, p.24, Wiley-IEEE 2004 ISBN 0-471-47378-2
[90] Jack L. Bowers, "R-C bandpass filter design", Electronics, vol 20, pages 131-133, April 1947
[91] Darlington, pp.12-13
References
Bibliography
• Belevitch, V, "Summary of the history of circuit theory", Proceedings of the IRE, vol 50, Iss 5, pp.848-855,
May 1962 doi:10.1109/JRPROC.1962.288301.
• Blanchard, J, "The History of Electrical Resonance", Bell System Technical Journal, vol.23, pp.415–433,
1944.
• E. Cauer, W. Mathis, and R. Pauli, "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the
Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000),
Perpignan, June, 2000. Retrieved online (http://www.cs.princeton.edu/courses/archive/fall03/cs323/
links/cauer.pdf) 19th September 2008.
• Darlington, S, "A history of network synthesis and filter theory for circuits composed of resistors, inductors,
and capacitors", IEEE Trans. Circuits and Systems, vol 31, pp.3-13, 1984 doi:10.1109/TCS.1984.1085415.
• Bruce J. Hunt, The Maxwellians (http://books.google.com/books?id=23rBH11Q9w8C&
printsec=frontcover), Cornell University Press, 2005 ISBN 0-8014-8234-8.
• Lundheim, L, "On Shannon and "Shannon's Formula", Telektronikk, vol. 98, no. 1, 2002, pp. 20-29 retrieved
online (http://www.iet.ntnu.no/groups/signal/people/lundheim/Page_020-029.pdf) 25th Sep 2008.
• Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures,
McGraw-Hill 1964.
Antimetric (electrical networks)
An antimetric electrical network is one that exhibits anti-symmetrical electrical properties. The term is often
encountered in filter theory, but it applies to general electrical network analysis. Antimetric is the diametrical
opposite of symmetric; it does not merely mean "asymmetric" (i.e., "lacking symmetry").
Definition
References to symmetry and antimetry of a network usually refer to the input impedances of a two-port network
when correctly terminated. A symmetric network will have two equal input impedances, Zi1 and Zi2. For an
antimetric network, the two impedances must be the dual of each other with respect to some nominal impedance R0.
That is,[1]

Zi1 / R0 = R0 / Zi2

which is well-defined because R0 ≠ 0 and Zi2 ≠ 0. Hence,

Zi1 Zi2 = R0²

[Figure: Examples of symmetry and antimetry: both networks are low-pass filters but one is symmetric (left) and the other is antimetric (right). For a symmetric ladder the 1st element is equal to the nth, the 2nd equal to the (n−1)th and so on. For an antimetric ladder, the 1st element is the dual of the nth and so on.]
It is necessary for antimetry that the terminating impedances are also the dual of each other, but in many practical
cases the two terminating impedances are resistors and are both equal to the nominal impedance R0. Hence, they are
both symmetric and antimetric at the same time.[1]
Other network parameters may also be referred to as antimetric. For instance, for a two-port network described by
scattering parameters (S-parameters),

S11 = S22

if the network is symmetric, and

S11 = −S22

if the network is antimetric.[2]
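The S-parameter conditions above lend themselves to a direct numerical check. The following is a minimal sketch (the helper name and tolerance are illustrative choices, not from the original text) classifying a 2×2 S-matrix by these conditions:

```python
# Classify a 2x2 S-parameter matrix as symmetric or antimetric using
# the conditions S11 = S22 and S11 = -S22 stated above.
import numpy as np

def classify_two_port(S: np.ndarray, tol: float = 1e-9) -> str:
    """Return 'symmetric', 'antimetric', 'both', or 'neither' for a 2x2 S-matrix."""
    sym = abs(S[0, 0] - S[1, 1]) < tol   # S11 = S22
    anti = abs(S[0, 0] + S[1, 1]) < tol  # S11 = -S22
    if sym and anti:
        return "both"  # e.g. S11 = S22 = 0, a matched network
    return "symmetric" if sym else ("antimetric" if anti else "neither")

# Example: a network with S11 = -S22 is antimetric.
S = np.array([[0.2 + 0.1j, 0.9], [0.9, -0.2 - 0.1j]])
print(classify_two_port(S))  # -> antimetric
```

A matched network (S11 = S22 = 0) satisfies both conditions at once, mirroring the remark above that equal resistive terminations can be simultaneously symmetric and antimetric.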
Physical and electrical antimetry
Symmetric and antimetric networks are often also topologically symmetric and antimetric, respectively. That is, the
physical arrangement of their components and values are symmetric or antimetric as in the ladder example above.
However, it is not a necessary condition for electrical antimetry. For example, if the example networks from the
preceding section have an additional T-section added to the left-hand side, then the networks remain topologically
symmetric and antimetric.[3] However, the networks resulting from the application of Bartlett's bisection theorem
to the first T-section in each network are neither physically symmetric nor antimetric, but retain their
electrically symmetric (in the first case) and antimetric (in the second case) properties.[4]
Mechanics
Antimetry appears in mechanics as a property of forces, motions, and
oscillations. Symmetric forces produce translational motion and
normal stress, and antimetric forces produce rotational motion and
shear stress. Any asymmetric pair of forces can be expressed as a linear
combination of a symmetric and an antimetric pair.[5]

[Figure: Examples of symmetric (top) and antimetric (bottom) forces acting on a pivoted beam.]

References
[1] Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, pp. 70–72, McGraw-Hill, 1964.
[2] Carlin, HJ, Civalleri, PP, Wideband circuit design, pp. 299–304, CRC Press, 1998. ISBN 0849378974.
[3] Bartlett, AC, "An extension of a property of artificial lines", Phil. Mag., vol 4, p. 902, November 1927.
[4] Belevitch, V, "Summary of the History of Circuit Theory", Proceedings of the IRE, vol 50, p. 850, May 1962.
[5] Ray, SS, Structural steelwork: analysis and design, pp. 44–46, Wiley-Blackwell, 1998. ISBN 0632038578.
Bartlett's bisection theorem
Bartlett's Bisection Theorem is an electrical theorem in network analysis due to Albert Charles Bartlett. The
theorem shows that any symmetrical two-port network can be transformed into a lattice network.[1] The theorem
often appears in filter theory where the lattice network is sometimes known as a filter X-section following the
common filter theory practice of naming sections after alphabetic letters to which they bear a resemblance.
The theorem as originally stated by Bartlett required the two halves of the network to be topologically symmetrical.
The theorem was later extended by Wilhelm Cauer to apply to all networks which were electrically symmetrical.
That is, the physical implementation of the network is not of any relevance. It is only required that its response in
both halves is symmetrical.[2]
Applications
Lattice topology filters are not very common. The reason for this is that they require more components (especially
inductors) than other designs. Ladder topology is much more popular. However, they do have the property of being
intrinsically balanced and a balanced version of another topology, such as T-sections, may actually end up using
more inductors. One application is for all-pass phase correction filters on balanced telecommunication lines. The
theorem also makes an appearance in the design of crystal filters at RF frequencies. Here ladder topologies have
some undesirable properties, but a common design strategy is to start from a ladder implementation because of its
simplicity. Bartlett's theorem is then used to transform the design to an intermediate stage as a step towards the final
implementation (using a transformer to produce an unbalanced version of the lattice topology).[3]
Definition and proof
Definition
Start with a two-port network, N, with a plane of
symmetry between the two ports. Next cut N through
its plane of symmetry to form two new identical
two-ports, ½N. Connect two identical voltage
generators to the two ports of N. It is clear from the
symmetry that no current is going to flow through any
branch passing through the plane of symmetry. The
impedance measured into a port of N under these
circumstances will be the same as the impedance
measured if all the branches passing through the plane
of symmetry were open circuit. It is therefore the same
impedance as the open circuit impedance of ½N. Let us call that impedance Zoc.
Now consider the network N with two identical voltage
generators connected to the ports but with opposite
polarity. Just as superposition of currents through the
branches at the plane of symmetry must be zero in the
previous case, by analogy and applying the principle of duality, superposition of voltages between nodes at the plane
of symmetry must likewise be zero in this case. The input impedance is thus the same as the short circuit impedance
of ½N. Let us call that impedance Zsc.
Bartlett's bisection theorem states that the network N is equivalent to a lattice network with series branches of Zsc
and cross branches of Zoc.[4]
Proof
Consider the lattice network shown with identical generators, E, connected to each port. It is clear from symmetry
and superposition that no current is flowing in the series branches Zsc. Those branches can thus be removed and left
open circuit without any effect on the rest of the circuit. This leaves a circuit loop with a voltage of 2E and an
impedance of 2Zoc, giving a current in the loop of

i = 2E/(2Zoc) = E/Zoc

and an input impedance of

E/i = Zoc

as it is required to be for equivalence to the original two-port. Similarly, reversing one of the generators results, by
an identical argument, in a loop with an impedance of 2Zsc and an input impedance of Zsc.
Recalling that these generator configurations are the precise way in which Zoc and Zsc were defined in the original
two-port, it is proved that the lattice is equivalent for those two cases. It is proved that this is so for all cases by
considering that all other input and output conditions can be expressed as a linear superposition of the two cases
already proved.
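The equivalence can also be checked numerically. The sketch below (illustrative component values; it uses the standard Z-parameter expressions for a symmetric T-section and a symmetric lattice, which are not derived in the text) bisects a T-section and builds the lattice the theorem predicts:

```python
# Numerical check of Bartlett's theorem on a symmetric T-section with
# series arms Za and shunt arm Zb. Bisection gives a half network with
# Zoc = Za + 2*Zb and Zsc = Za; the theorem predicts a lattice with
# series branch Zsc and cross branch Zoc.
import numpy as np

Za, Zb = 2.0 + 1.0j, 5.0 - 3.0j  # arbitrary illustrative impedances

# Z-parameters of the symmetric T-section: Z11 = Z22 = Za + Zb, Z12 = Zb.
Z_T = np.array([[Za + Zb, Zb], [Zb, Za + Zb]])

# Half network after bisection (the shunt Zb splits into two arms of 2*Zb).
Zoc, Zsc = Za + 2 * Zb, Za

# Z-parameters of a symmetric lattice with series arm Zsc, cross arm Zoc:
# Z11 = (Zoc + Zsc)/2, Z12 = (Zoc - Zsc)/2.
Z_lattice = np.array([[(Zoc + Zsc) / 2, (Zoc - Zsc) / 2],
                      [(Zoc - Zsc) / 2, (Zoc + Zsc) / 2]])

print(np.allclose(Z_T, Z_lattice))  # True: the two networks are equivalent
```

The check confirms that the lattice reproduces the T-section's open- and short-circuit behaviour exactly, which by superposition covers all terminations.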
Examples
[Figure: Lattice equivalent of a T-section high-pass filter]
[Figure: Lattice equivalent of a Zobel bridge-T low-pass filter]
It is possible to use the Bartlett transformation in reverse; that is, to transform a symmetrical lattice network into
some other symmetrical topology. The examples shown above could just as equally have been shown in reverse.
However, unlike the examples above, the result is not always physically realisable with linear passive components.
This is because there is a possibility the reverse transform will generate components with negative values. Negative
quantities can only be physically realised with active components present in the network.
Extension of the theorem
There is an extension to Bartlett's theorem that allows a symmetrical filter network operating between equal input
and output impedance terminations to be modified for unequal source and load impedances. This is an example of
impedance scaling of a prototype filter. The symmetrical network is bisected along its plane of symmetry. One half
is impedance-scaled to the input impedance and the other is scaled to the output impedance. The response shape of
the filter remains the same. This does not amount to an impedance matching network; the impedances looking in to
the network ports bear no relationship to the termination impedances. This means that a network designed by
Bartlett's theorem, while having exactly the filter response predicted, also adds a constant attenuation in addition to
the filter response. In impedance matching networks, a usual design criterion is to maximise power transfer. The
output response is "the same shape" relative to the voltage of the theoretical ideal generator driving the input. It is
not the same relative to the actual input voltage which is delivered by the theoretical ideal generator via its load
impedance.[5] [6]
The constant gain due to the difference in input and output impedances is given by;
Note that it is possible for this to be greater than unity, that is, a voltage gain is possible, but power is always lost.
References
[1] Bartlett, AC, "An extension of a property of artificial lines", Phil. Mag., vol 4, p. 902, November 1927.
[2] Belevitch, V, "Summary of the History of Circuit Theory", Proceedings of the IRE, vol 50, p. 850, May 1962.
[3] Vizmuller, P, RF Design Guide: Systems, Circuits, and Equations, pp. 82–84, Artech House, 1995 ISBN 0890067546.
[4] Farago, PS, An Introduction to Linear Network Analysis, pp. 117-121, The English Universities Press Ltd, 1961.
[5] Guillemin, EA, Synthesis of Passive Networks: Theory and Methods Appropriate to the Realization and Approximation Problems, p. 207,
Krieger Publishing, 1977, ISBN 0882754815
[6] Williams, AB, Taylor, FJ, Electronic Filter Design Handbook, 2nd ed. McGraw-Hill, New York, 1988.
Beat frequency oscillator
A beat frequency oscillator or BFO, in radio telegraphy, is a dedicated oscillator used to create an audio frequency
signal from Morse code (CW) transmissions to make them audible. The signal from the BFO is then heterodyned
with the intermediate frequency signal to create an audio frequency signal. A BFO may also be used to produce an
intelligible signal from a single-sideband (SSB) modulated carrier by essentially reproducing the "missing" carrier.
(An amplitude modulated carrier has dual complementary sidebands and thus requires twice the bandwidth and
power of SSB.) SSB is widely used in amateur or "ham" radio.
Example
A receiver is tuned to a Morse code signal, and the receiver's intermediate frequency (IF) is Fif = 45000 Hz. That
means the dots and dashes have become pulses of a 45000 Hz signal, which is inaudible.
To make them audible, the frequency needs to be shifted into the audio range, for instance Fbaseband = 1000 Hz. To
do that, the desired frequency shift is Fbfo = 44000 Hz.
The signal at frequency Fif is multiplied by the BFO waveform in the mixer stage of the receiver. This shifts the
signal to two other frequencies: |Fif − Fbfo| and (Fif + Fbfo). The difference frequency, |Fif − Fbfo| = 1000 Hz, is also
known as the beat frequency.
The other frequency, (Fif + Fbfo) = 89000 Hz, can then be removed by a lowpass filter, such as an ordinary speaker
(which cannot vibrate at such a high frequency) or the human ear (which is not sensitive to frequencies over
approximately 20 kHz).
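The mixing step is easy to reproduce numerically. The following short sketch (illustrative, using numpy; the sample rate is an arbitrary choice above twice the sum frequency) multiplies the 45000 Hz IF by the 44000 Hz BFO signal and confirms the two resulting components:

```python
# Multiplying a 45000 Hz IF signal by a 44000 Hz BFO signal produces
# components at 1000 Hz (the beat) and 89000 Hz (removed by filtering).
import numpy as np

fs = 400_000                       # sample rate, Hz (arbitrary, > 2 * 89 kHz)
t = np.arange(0, 0.05, 1 / fs)     # 50 ms of signal
f_if, f_bfo = 45_000, 44_000

mixed = np.cos(2 * np.pi * f_if * t) * np.cos(2 * np.pi * f_bfo * t)

# The product-to-sum identity predicts equal components at |f_if - f_bfo|
# and (f_if + f_bfo). Locate the two largest spectral peaks to confirm.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.round()))  # -> [1000.0, 89000.0]
```

This is simply the product-to-sum identity cos(a)cos(b) = ½cos(a−b) + ½cos(a+b) observed in the frequency domain.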
Fbfo = 46000 Hz also produces the desired 1000 Hz beat frequency. Using a higher or lower frequency than the IF
has little consequence for Morse reception, but will invert the spectrum of received SSB transmissions, making the
resultant speech unintelligible.
Notes
By varying the BFO frequency around 44000 Hz, the listener can vary the output audio frequency; this is useful to
correct for small differences between the tuning of the transmitter and the receiver, particularly useful when tuning
in single sideband voice. The waveform produced by the BFO beats against the IF signal in the mixer stage of the
receiver. Any drift of the local oscillator or the beat-frequency oscillator will affect the pitch of the received audio,
so stable oscillators are used.[1]
For a radio signal with more bandwidth than Morse code, low-side injection preserves the relative order of the
frequency components. High-side injection reverses their order, which is often desirable to counteract a previous
reversal in the radio receiver.
References
[1] Paul Horowitz, Winfield Hill, The Art of Electronics, 2nd ed., Cambridge University Press, 1989, ISBN 0521370957, p. 898
• "Radiotelephone" (http://www.tpub.com/content/neets/14189/css/14189_57.htm), NEETS, Module
17--Radio-Frequency Communication Principles. Integrated Publishing, Electrical Engineering Training Series.
• "Ceramic Filter Beat Frequency Oscillator" (http://www.naturemagics.com/ham-radio/ceramic-filter-bfo.
shtm), Naturemagics.com (http://www.naturemagics.com/).
• "Voice Modes" (http://www.arrl.org/voice-modes), AARL (http://www.arrl.org/).
Bessel filter
In electronics and signal processing, a Bessel filter is a type of linear filter with a maximally flat group delay
(maximally linear phase response). Bessel filters are often used in audio crossover systems. Analog Bessel filters are
characterized by almost constant group delay across the entire passband, thus preserving the wave shape of filtered
signals in the passband.
The filter's name is a reference to Friedrich Bessel, a German mathematician (1784–1846), who developed the
mathematical theory on which the filter is based. The filters are also called Bessel–Thomson filters in recognition of
W. E. Thomson, who worked out how to apply Bessel functions to filter design.[1]
The transfer function
A Bessel low-pass filter is characterized by its transfer function:[2]

H(s) = θn(0) / θn(s/ω0)

where θn(s) is a reverse Bessel polynomial from which the filter gets its name and ω0 is a frequency chosen to
give the desired cut-off frequency. The filter has a low-frequency group delay of 1/ω0.

[Figure: A plot of the gain and group delay for a fourth-order low pass Bessel filter. Note that the transition from the pass band to the stop band is much slower than for other filters, but the group delay is practically constant in the passband. The Bessel filter maximizes the flatness of the group delay curve at zero frequency.]
Bessel polynomials
The transfer function of the Bessel filter is a rational function whose denominator is a reverse Bessel polynomial,
such as the following:

θ3(s) = s³ + 6s² + 15s + 15

[Figure: The roots of the third-order Bessel polynomial are the poles of the filter transfer function in the s plane, here plotted as crosses.]

The reverse Bessel polynomials are given by:[2]

θn(s) = Σ (k = 0 to n) ak s^k

where

ak = (2n − k)! / (2^(n−k) k! (n − k)!),  k = 0, 1, …, n
Example
The transfer function for a third-order (three-pole) Bessel low-pass filter, normalized to have unit group delay, is

H(s) = 15 / (s³ + 6s² + 15s + 15)

[Figure: Gain plot of the third-order Bessel filter, versus normalized frequency]
[Figure: Group delay plot of the third-order Bessel filter, illustrating flat unit delay in the passband]
The roots of the denominator polynomial, the filter's poles, include a real pole at s = −2.3222, and a
complex-conjugate pair of poles at s = −1.8389 ± j1.7544, plotted above. The numerator 15 is chosen to give a gain
of 1 at DC (at s = 0).

The gain is then

G(ω) = |H(jω)| = 15 / √(ω⁶ + 6ω⁴ + 45ω² + 225)

The phase is

φ(ω) = −arctan((15ω − ω³) / (15 − 6ω²))

The group delay is

D(ω) = −dφ/dω = (6ω⁴ + 45ω² + 225) / (ω⁶ + 6ω⁴ + 45ω² + 225)

The Taylor series expansion of the group delay is

D(ω) = 1 − ω⁶/225 + ω⁸/1125 − ⋯
Note that the two terms in ω2 and ω4 are zero, resulting in a very flat group delay at ω = 0. This is the greatest
number of terms that can be set to zero, since there are a total of four coefficients in the third order Bessel
polynomial, requiring four equations in order to be defined. One equation specifies that the gain be unity at ω = 0
and a second specifies that the gain be zero at ω = ∞, leaving two equations to specify two terms in the series
expansion to be zero. This is a general property of the group delay for a Bessel filter of order n: the first n − 1 terms
in the series expansion of the group delay will be zero, thus maximizing the flatness of the group delay at ω = 0.
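Assuming scipy is available, the third-order example above can be reproduced in a few lines; scipy.signal.bessel's norm='delay' option selects the unit-group-delay normalization used in the text:

```python
# Reproduce the third-order unit-delay Bessel example with scipy.signal.
from scipy import signal

z, p, k = signal.bessel(3, 1.0, analog=True, output='zpk', norm='delay')
print(p)  # poles: approx -2.3222 and -1.8389 +/- 1.7544j, as stated above
print(k)  # gain constant: 15, the numerator giving unit gain at DC
```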
References
[1] Thomson, W.E., "Delay Networks having Maximally Flat Frequency Characteristics", Proceedings of the Institution of Electrical Engineers,
Part III, November 1949, Vol. 96, No. 44, pp. 487–490.
[2] Giovanni Bianchi and Roberto Sorrentino (2007). Electronic filter simulation & design (http://books.google.com/books?id=5S3LCIxnYCcC&pg=PT53&dq=Bessel+filter+polynomial&lr=&as_brr=3&ei=gyeWSvTbIpmwNPyaqNcH#v=onepage&q=Bessel filter polynomial&f=false). McGraw–Hill Professional. pp. 31–43. ISBN 9780071494670.
External links
• http://www.filter-solutions.com/bessel.html
• http://www.rane.com/note147.html
• http://www.crbond.com/papers/bsf.pdf
• http://www-k.ext.ti.com/SRVS/Data/ti/KnowledgeBases/analog/document/faqs/bes.htm
Brassboard
A brassboard or brass board is an experimental or demonstration test model, intended for field testing outside the
laboratory environment. A brassboard follows an earlier prototyping stage called a breadboard. A brassboard
contains both the functionality and approximate physical configuration of the final operational product. Unlike
breadboards, brassboards typically recreate geometric and dimensional constraints of the final system which are
critical to its performance, as is the case in radio frequency systems.[1] While representative of the physical layout of
the production-grade product, a brassboard will not necessarily incorporate all final details, nor represent the
physical size and quality level of the final deliverable product.
Exact definition of a brassboard depends on the industry and has changed with time. A 1992 guide book on proposal
preparation defined a brassboard or a breadboard as "a laboratory or shop working model that may or may not look
like the final product or system, but that will operate in the same way as the final system". The definition of a
breadboard was further narrowed to purely electronic systems, while a brassboard was treated as "a similar
arrangement for hydraulic, pneumatic or mechanically interconnected components".[2]
In modern system-on-a-chip prototyping, brassboard is defined as a second prototyping stage that follows
engineering validation boards (EVB) and precedes wingboards and final pre-production samples. Typically, the
board area decreases four times with each of these steps, so a brassboard is one fourth as large as an EVB, four times
larger than a wingboard and around sixteen times larger than a production device. A modern brassboard printed
circuit board typically contains ten conductive layers while a considerably larger EVB typically has eighteen (it
needs larger and more sophisticated ground planes to overcome the effects of larger area and longer connecting
tracks).[3]
Footnotes
[1] Mooz et al., p. 205.
[2] Stewart and Stewart, p. 46.
[3] Waldo, p. 170.
References
• Hal Mooz, Kevin Forsberg, Howard Cotterman (2003). Communicating project management: the integrated
vocabulary of project management and systems engineering (http://books.google.com/
books?id=pthp6P7bKuAC). John Wiley and Sons. ISBN 0471269247.
• Rodney D. Stewart, Ann L. Stewart (1992). Proposal preparation (http://books.google.com/
books?id=HS7xoLSXUJYC). Wiley-IEEE. ISBN 0471552690.
• Whitson G. Waldo (2010). Program Management for System on Chip Platforms: New Product Introduction of
Hardware and Software (http://books.google.com/books?id=Jf2r9nLaHXoC). First Books. ISBN 1592994830.
Breadboard
A breadboard (protoboard) is a construction base for a one-of-a-kind
electronic circuit, a prototype. In modern times the term is commonly
used to refer to a particular type of breadboard, the solderless
breadboard (plugboard).
Because the solderless breadboard does not require soldering, it is
reusable. This makes it easy to use for creating temporary prototypes
and experimenting with circuit design. Older breadboard types did not
have this property. A stripboard (veroboard) and similar prototyping
printed circuit boards, which are used to build permanent soldered
prototypes or one-offs, cannot easily be reused. A variety of electronic
systems may be prototyped by using breadboards, from small analog
and digital circuits to complete central processing units (CPUs).
[Figure: A solderless breadboard with a completed circuit]
[Figure: This 1920s TRF radio manufactured by Signal is constructed on a wooden breadboard]
Evolution
In the early days of radio, amateurs nailed bare copper wires or
terminal strips to a wooden board (often literally a cutting board for
bread) and soldered electronic components to them.[1] Sometimes a
paper schematic diagram was first glued to the board as a guide to
placing terminals, then components and wires were installed over their
symbols on the schematic. Using thumbtacks or small nails as
mounting posts was also common.
[Figure: The hole pattern for a typical etched prototyping PCB (printed circuit board) is similar to the node pattern of the solderless breadboards shown above.]

Breadboards have evolved over time, with the term now being used for all kinds of prototype electronic devices. For
example, US Patent 3,145,483,[2] filed in 1961 and granted in 1964, describes a wooden plate breadboard with
mounted springs and other facilities. US Patent 3,496,419,[3] filed in 1967 and granted in 1970, refers to a particular
printed circuit board layout as a Printed Circuit Breadboard. Both examples refer to and describe other types of
breadboards as prior art.
The breadboard most commonly used today is usually made of white plastic and is a pluggable (solderless)
breadboard. It was designed by Ronald J Portugal of EI Instruments Inc. in 1971.[4]
Solderless breadboard
Typical specifications
A modern solderless breadboard consists of a perforated block of plastic with numerous tin plated phosphor bronze
or nickel silver alloy[5] spring clips under the perforations. The clips are often called tie points or contact points. The
number of tie points is often given in the specification of the breadboard.
The spacing between the clips (lead pitch) is typically 0.1" (2.54 mm). Integrated circuits (ICs) in dual in-line
packages (DIPs) can be inserted to straddle the centerline of the block. Interconnecting wires and the leads of
discrete components (such as capacitors, resistors, and inductors) can be inserted into the remaining free holes to
complete the circuit. Where ICs are not used, discrete components and connecting wires may use any of the holes.
Typically the spring clips are rated for 1 ampere at 5 volts and 0.333 amperes at 15 volts (5 watts).
Bus and terminal strips
Solderless breadboards are available from several
different manufacturers, but most share a similar
layout. The layout of a typical solderless breadboard is
made up from two types of areas, called strips. Strips
consist of interconnected electrical terminals.
Terminal strips
The main areas, to hold most of the electronic
components.
In the middle of a terminal strip of a breadboard,
one typically finds a notch running in parallel to
the long side. The notch is to mark the centerline
of the terminal strip and provides limited airflow
(cooling) to DIP ICs straddling the centerline.
The clips on the right and left of the notch are
each connected in a radial way; typically five
clips (i.e., beneath five holes) in a row on each
side of the notch are electrically connected. The
five clip columns on the left of the notch are
often marked as A, B, C, D, and E, while the
ones on the right are marked F, G, H, I and J.
When a "skinny" Dual In-line Pin package (DIP)
integrated circuit (such as a typical DIP-14 or
DIP-16, which have a 0.3 inch separation
between the pin rows) is plugged into a
breadboard, the pins of one side of the chip are
supposed to go into column E while the pins of
the other side go into column F on the other side
of the notch.
[Figure: Logical 4-bit adder with sums linked to LEDs on a typical breadboard]
Bus strips
To provide power to the electronic components.
A bus strip usually contains two columns: one for
ground and one for a supply voltage. However,
some breadboards only provide a single-column
power distribution bus strip on each long side.
Typically the column intended for a supply
voltage is marked in red, while the column for
ground is marked in blue or black. Some
manufacturers connect all terminals in a column.
Others just connect groups of, for example, 25
consecutive terminals in a column. The latter
design provides a circuit designer with some
more control over crosstalk (inductively coupled
noise) on the power supply bus. Often the groups
in a bus strip are indicated by gaps in the color marking.
[Figure: Example breadboard drawing. Two bus strips and one terminal strip in one block. 25 consecutive terminals in a bus strip connected (indicated by gaps in the red and blue lines). Four binding posts depicted at the top.]
Bus strips typically run down one or both sides
of a terminal strip or between terminal strips. On
large breadboards additional bus strips can often
be found on the top and bottom of terminal strips.
Some manufacturers provide separate bus and terminal
strips. Others just provide breadboard blocks which
contain both in one block. Often breadboard strips or
blocks of one brand can be clipped together to make a
larger breadboard.
In a more robust variant, one or more breadboard strips
are mounted on a sheet of metal. Typically, that
backing sheet also holds a number of binding posts.
These posts provide a clean way to connect an external
power supply. This type of breadboard may be slightly
easier to handle. Several images in this article show
such solderless breadboards.
[Figure: Close-up of a solderless breadboard. An IC straddling the centerline is probed with an oscilloscope probe. The solderless breadboard is mounted on a blue painted metal sheet. Red and black binding posts are present, the black one partly obscured by the oscilloscope probe.]
Diagram
A "full size" terminal breadboard strip typically consists of around 56 to 65 rows of connectors, each row containing
the above mentioned two sets of connected clips (A to E and F to J). Together with bus strips on each side this makes
up a typical 784 to 910 tie point solderless breadboard. "Small size" strips typically come with around 30 rows.
Miniature solderless breadboards as small as 17 rows (no bus strips, 170 tie points) can be found, but these are less
well suited for practical use.
Jump wires
Jump wires for solderless breadboarding can be obtained in ready-to-use jump wire sets or can be manually
manufactured. The latter can become tedious work for larger circuits. Ready-to-use jump wires come in different
qualities, some even with tiny plugs attached to the wire ends. Jump wire material for ready-made or homemade
wires should usually be 22 AWG (0.33 mm²) solid copper, tin-plated wire - assuming no tiny plugs are to be attached
to the wire ends. The wire ends should be stripped 3/16" to 5/16" (approx. 5 mm to 8 mm). Shorter stripped wires
might result in bad contact with the board's spring clips (insulation being caught in the springs). Longer stripped
wires increase the likelihood of short-circuits on the board. Needle-nose pliers and tweezers are helpful when
inserting or removing wires, particularly on crowded boards.
Differently colored wires and color coding discipline are often adhered to for consistency. However, the number of
available colors is typically far fewer than the number of signal types or paths. Typically, a few wire colors are
reserved for the supply voltages and ground (e.g., red, blue, black), some are reserved for main signals, and the rest
are simply used where convenient. Some ready-to-use jump wire sets use the color to indicate the length of the wires,
but these sets do not allow a meaningful color-coding schema.
Inside a breadboard: construction
The following images show the inside of a bus strip.
Advanced solderless breadboards
Some manufacturers provide high-end versions of solderless breadboards. These are typically high-quality
breadboard modules mounted on a flat casing. The casing contains additional equipment for breadboarding, such as a
power supply, one or more signal generators, serial interfaces, LED or LCD modules, and logic probes.[6]
Solderless breadboard modules can also be found mounted on devices like microcontroller evaluation boards. They
provide an easy way to add additional periphery circuits to the evaluation board.
Limitations
Due to large stray capacitance (from 2-25 pF per contact point), high
inductance of some connections and a relatively high and not very
reproducible contact resistance, solderless breadboards are limited to
operation at relatively low frequencies, usually below 10 MHz,
depending on the nature of the circuit. The relatively high contact
resistance can already be a problem for DC and very low frequency
circuits. Solderless breadboards are further limited by their voltage and
current ratings.
[Figure: An example of a complex circuit built on a breadboard. The circuit is an Intel 8088 single board computer.]

Solderless breadboards usually cannot accommodate surface-mount technology devices (SMD) or components with
grid spacing other than 0.1" (2.54 mm). Further, they cannot accommodate components with multiple rows of
connectors if these connectors don't match the dual in-line layout—it is impossible to provide the correct electrical
connectivity. Sometimes small PCB adapters called breakout adapters can be used to fit the component to the board.
Such adapters carry one or more components and have 0.1" (2.54 mm) connectors in a single in-line or dual in-line
layout. Larger components are usually plugged into a socket on the adapter, while smaller components (e.g., SMD
resistors) are usually soldered directly onto the adapter. The adapter is then plugged into the breadboard via the 0.1"
connectors. However, the need to solder the components onto the adapter negates some of the advantage of using a
solderless breadboard.
Complex circuits can become unmanageable on a breadboard due to the large amount of wiring required.
Alternatives
Alternative methods to create prototypes are point-to-point construction, reminiscent of the original breadboards,
wire wrap, wiring pencil, and boards like the stripboard. Complicated systems, such as modern computers
comprising millions of transistors, diodes, and resistors, do not lend themselves to prototyping using breadboards, as
their complex designs can be difficult to lay out and debug on a breadboard. Modern circuit designs are generally
developed using a schematic capture and simulation system, and tested in software simulation before the first
prototype circuits are built on a printed circuit board. Integrated circuit designs are a more extreme version of the
same process: since producing prototype silicon is costly, extensive software simulations are performed before
fabricating the first prototypes. However, prototyping techniques are still used for some applications such as RF
circuits, or where software models of components are inexact or incomplete.
References
[1] Description of the term breadboard (http://tangentsoft.net/elec/breadboard.html)
[2] U.S. Patent 3145483 (http://www.google.com/patents?vid=3145483) Test Board for Electronic Circuits
[3] U.S. Patent 3496419 (http://www.google.com/patents?vid=3496419) Printed Circuit Breadboard
[4] US patent D228136 (http://v3.espacenet.com/textdoc?DB=EPODOC&IDX=USD228136), Ronald J. Portugal, "breadboard for electronic components or the like", issued 1973-08-14
[5] Global Specialties PB-204 Solderless Proto-Board System (http://www.tequipment.net/GlobalSpecialtiesPB204.html)
[6] Powered breadboard (http://pundit.pratt.duke.edu/wiki/PBB_272)
External links
• Atwater Kent Breadboard Receivers (http://www.sparkmuseum.com/BREADBD.HTM)
• Large parallel processing design prototyped on 50 connected breadboards (http://www.objectivej.com/
hardware/propcluster/_IMG_0031_pac_bboard_2_of_10_partial_completion.JPG)
Bridged T delay equaliser
The bridged-T delay equaliser is an electrical all-pass filter circuit
utilising bridged-T topology whose purpose is to insert an (ideally)
constant delay at all frequencies in the signal path. It is a class of image
filter.
Applications
The network is used when it is required that two or more signals are
matched to each other on some form of timing criterion. Delay is added
to all other signals so that the total delay is matched to the signal which
already has the longest delay. In television broadcasting, for instance, it
is desirable that the timing of the television waveform synchronisation
pulses from different sources are aligned as they reach studio control
rooms or network switching centres. This ensures that cuts between
sources do not result in disruption at the receivers. Another application
occurs when stereophonic sound is connected by landline, for instance from an outside broadcast to the studio centre.
It is important that delay is equalised between the two stereo channels as a difference will destroy the stereo image.
When the landlines are long and the two channels arrive by substantially different routes it can require many filter
sections to fully equalise the delay.
Operation
The operation is best explained in terms of the phase shift the network introduces. At low frequencies L is low
impedance and C' is high impedance and consequently the signal passes through the network with no shift in phase.
As the frequency increases, the phase shift gradually increases, until at some frequency, ω0, the shunt branch of the
circuit, L'C', goes into resonance and causes the centre-tap of L to be short-circuited to ground. Transformer action
between the two halves of L, which had been steadily becoming more significant as the frequency increased, now
becomes dominant. The winding of the coil is such that the secondary winding produces an inverted voltage to the
primary. That is, at resonance the phase shift is now 180°. As the frequency continues to increase, the phase delay
also continues to increase and the input and output start to come back into phase as a whole cycle delay is
approached. At high frequencies L and L' approach open-circuit and C approaches short-circuit and the phase delay
tends to level out at 360°.
The relationship between phase shift (φ) and time delay (TD) at angular frequency (ω) is given by the simple
relation

TD = φ/ω

It is required that TD is constant at all frequencies over the band of operation. φ must therefore be kept linearly
proportional to ω. With suitable choice of parameters, the network phase shift can be made linear up to about 180°
phase shift.
Design
The four component values of the network provide four degrees of freedom in the design. It is required from image
theory (see Zobel network) that the L/C branch and the L'/C' branch are the dual of each other (ignoring the
transformer action) which provides two parameters for calculating component values. A third parameter is set by
choosing a resonant frequency, this is set to (at least) the maximum frequency the network is required to operate at.
There is one remaining degree of freedom that the designer can use to maximally linearise the phase/frequency
response. This parameter is usually stated as the L/C ratio. As stated above, it is not practical to linearise the phase
response above 180°, i.e. half a cycle, so once a maximum frequency of operation, fm, is chosen, this sets the
maximum delay that can be designed in to the circuit and is given by

TD(max) = 1/(2 fm)
For broadcast sound purposes, 15 kHz is often chosen as the maximum usable frequency on landlines. A delay
equaliser designed to this specification can therefore insert a delay of 33 μs. In reality, the differential delay that
might be required to equalise may be many hundreds of microseconds. A chain of many sections in tandem will be
required. For television purposes, a maximum frequency of 6 MHz might be chosen, which corresponds to a delay of
83 ns. Again, many sections may be required to fully equalise. In general, much greater attention is paid to the
routing and exact length of television cables because many more equaliser sections are required to remove the same
delay difference as compared to audio.
Butterworth filter
The Butterworth filter is a type of signal
processing filter designed to have as flat a
frequency response as possible in the passband so
that it is also termed a maximally flat magnitude
filter. It was first described in 1930 by the British
engineer Stephen Butterworth in his paper entitled
"On the Theory of Filter Amplifiers".[1]
[Figure: The frequency response plot from Butterworth's 1930 paper.]
Original paper
Butterworth had a reputation for solving "impossible" mathematical problems. At the time filter design was largely
by trial and error because of its mathematical complexity. His paper was far ahead of its time: the filter was not in
common use for over 30 years after its publication. Butterworth stated that
An ideal electrical filter should not only completely reject the unwanted frequencies but should also have
uniform sensitivity for the wanted frequencies.
At the time, filters generated substantial ripple in the passband, and the choice of component values was highly
interactive. Butterworth showed that a low pass filter could be designed whose cutoff frequency was normalized to 1
radian per second and whose frequency response (gain) was

G(ω) = 1 / √(1 + ω^(2n))

where ω is the angular frequency in radians per second and n is the number of reactive elements (poles) in the filter.
If ω = 1, the amplitude response of this type of filter in the passband is 1/√2 ≈ 0.707, which is half power or −3 dB.
Butterworth only dealt with filters with an even number of poles in his paper. He may have been unaware that such
filters could be designed with an odd number of poles. He built his higher order filters from 2-pole filters separated
by vacuum tube amplifiers. His plot of the frequency response of 2, 4, 6, 8, and 10 pole filters is shown as A, B, C,
D, and E in his original graph.
Butterworth solved the equations for two- and four-pole filters, showing how the latter could be cascaded when
separated by vacuum tube amplifiers and so enabling the construction of higher-order filters despite inductor losses.
In 1930 low-loss core materials such as molypermalloy had not been discovered and air-cored audio inductors were
rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate
for the winding resistance of the inductors.
He used coil forms of 1.25″ diameter and 3″ length with plug-in terminals. Associated capacitors and resistors were
contained inside the wound coil form. The coil formed part of the plate load resistor. Two poles were used per
vacuum tube and RC coupling was used to the grid of the following tube.
Butterworth also showed that his basic low-pass filter could be modified to give low-pass, high-pass, band-pass and
band-stop functionality.
Overview
The frequency response of the Butterworth filter is maximally flat (has no ripples) in the passband and rolls off
towards zero in the stopband.[2] When viewed on a logarithmic Bode plot the response slopes off linearly towards
negative infinity. A first-order filter's response rolls off at −6 dB per octave (−20 dB per decade) (all first-order
lowpass filters have the same normalized frequency response). A second-order filter decreases at −12 dB per octave,
a third-order at −18 dB and so on. Butterworth filters have a monotonically changing magnitude function with ω,
unlike other filter types that have non-monotonic ripple in the passband and/or the stopband.
Compared with a Chebyshev Type I/Type II filter or an elliptic filter, the Butterworth filter has a slower roll-off, and
thus will require a higher order to implement a particular stopband specification, but Butterworth filters have a more
linear phase response in the pass-band than Chebyshev Type I/Type II and elliptic filters can achieve.
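The trade-off between roll-off and filter order can be made concrete. Rearranging the Butterworth gain formula gives the minimum order that meets a given stopband attenuation at a given frequency ratio; the following is a sketch under that assumption:

    import math

    def butterworth_order(ws_over_wc, stopband_atten_db):
        """Minimum order n giving at least stopband_atten_db of attenuation
        at ws_over_wc times the cutoff, from |H|^2 = 1/(1 + (w/wc)^(2n))."""
        ratio = 10.0 ** (stopband_atten_db / 10.0) - 1.0
        return math.ceil(math.log10(ratio) / (2.0 * math.log10(ws_over_wc)))

    # 40 dB down at twice the cutoff frequency requires a 7th-order filter.
    print(butterworth_order(2.0, 40.0))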
Example
A simple example of a Butterworth filter is the third-order low-pass design shown in the figure on the right, with C2 = 4/3 F, R4 = 1 Ω, L1 = 3/2 H, and L3 = 1/2 H. Taking the impedance of the capacitors C to be 1/(Cs) and the impedance of the inductors L to be Ls, where s = σ + jω is the complex frequency, the circuit equations yield the transfer function for this device:
    H(s) = 1/(s³ + 2s² + 2s + 1)
A third-order low-pass filter (Cauer topology). The filter becomes a Butterworth filter with cutoff frequency ωc = 1 when (for example) C2 = 4/3 F, R4 = 1 Ω, L1 = 3/2 H and L3 = 1/2 H.
The magnitude of the frequency response (gain) G(ω) is given by
    G(ω) = |H(jω)| = 1/√(1 + ω⁶)
and the phase is given by
    φ(ω) = arg H(jω)
The group delay is defined as the negative of the derivative of the phase with respect to angular frequency and is a measure of the distortion in the signal introduced by phase differences for different frequencies. The gain and the delay for this filter are plotted in the graph on the left. It can be seen that there are no ripples in the gain curve in either the passband or the stopband.
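A minimal sketch evaluating this example numerically; the group delay is taken as the negative numerical derivative of the phase (the step h is an arbitrary small value):

    import cmath

    def H(s):
        """Transfer function of the third-order example above."""
        return 1.0 / (s**3 + 2.0 * s**2 + 2.0 * s + 1.0)

    def gain(w):
        return abs(H(1j * w))              # equals 1/sqrt(1 + w^6)

    def group_delay(w, h=1e-6):
        """Group delay as -d(phase)/dw, by central difference."""
        p_lo = cmath.phase(H(1j * (w - h)))
        p_hi = cmath.phase(H(1j * (w + h)))
        return -(p_hi - p_lo) / (2.0 * h)

    print(gain(1.0))          # 0.7071...: -3 dB at the cutoff
    print(group_delay(0.0))   # 2.0 seconds of group delay at DC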
Gain and group delay of the third-order Butterworth filter with ωc = 1.
The log of the absolute value of the transfer function H(s) is plotted in complex frequency space in the second graph on the right. The function is defined by the three poles in the left half of the complex frequency plane.
These are arranged on a circle of radius unity,
symmetrical about the real s axis. The gain function
will have three more poles on the right half plane to
complete the circle.
By replacing each inductor with a capacitor and each
capacitor with an inductor, a high-pass Butterworth
filter is obtained.
A band-pass Butterworth filter is obtained by placing a
capacitor in series with each inductor and an inductor
in parallel with each capacitor to form resonant circuits.
The value of each new component must be selected to
resonate with the old component at the frequency of
interest.
A band-stop Butterworth filter is obtained by placing a
capacitor in parallel with each inductor and an inductor
in series with each capacitor to form resonant circuits.
The value of each new component must be selected to
resonate with the old component at the frequency to be
rejected.
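Both transformations rest on the resonance condition ω0²LC = 1, so each new component value follows directly from the old one. A sketch with illustrative function names:

    import math

    def resonating_capacitor(L_henry, f0_hz):
        """Capacitance that resonates with L at f0: C = 1/(w0^2 * L)."""
        w0 = 2.0 * math.pi * f0_hz
        return 1.0 / (w0**2 * L_henry)

    def resonating_inductor(C_farad, f0_hz):
        """Inductance that resonates with C at f0: L = 1/(w0^2 * C)."""
        w0 = 2.0 * math.pi * f0_hz
        return 1.0 / (w0**2 * C_farad)

    # A 3/2 H prototype inductor centred on 1 kHz needs about 16.9 nF,
    # placed in series (band-pass) or in parallel (band-stop).
    print(resonating_capacitor(1.5, 1e3))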
Log density plot of the transfer function H(s) in complex frequency space for the third-order Butterworth filter with ωc = 1. The three poles lie on a circle of unit radius in the left half-plane.
Transfer function
As with all filters, the typical prototype is the low-pass filter, which can be modified into a high-pass filter, or placed in series with others to form band-pass and band-stop filters, and higher-order versions of these.
The gain G(ω) of an n-order Butterworth low-pass filter is given in terms of the transfer function H(s) as
    G²(ω) = |H(jω)|² = G0² / (1 + (ω/ωc)^(2n))
Plot of the gain of Butterworth low-pass filters of orders 1 through 5, with cutoff frequency ωc = 1. Note that the slope is −20n dB/decade where n is the filter order.
where
• n = order of filter
• ωc = cutoff frequency (approximately the −3 dB frequency)
• G0 = the DC gain (gain at zero frequency)
It can be seen that as n approaches infinity, the gain becomes a rectangle function: frequencies below ωc will be passed with gain G0, while frequencies above ωc will be suppressed. For smaller values of n, the cutoff will be less sharp.
We wish to determine the transfer function H(s) where s = σ + jω (from the Laplace transform). Since H(s)H(−s) evaluated at s = jω is simply equal to |H(jω)|², it follows that
    H(s)H(−s) = G0² / (1 + (−s²/ωc²)^n)
The poles of this expression occur on a circle of radius ωc at equally spaced points. The transfer function itself will be specified by just the poles in the negative real half-plane of s. The k-th pole is specified by
    −sk²/ωc² = (−1)^(1/n) = e^(j(2k−1)π/n)
and hence
    sk = ωc·e^(j(2k+n−1)π/(2n)),   k = 1, 2, ..., n
The transfer function may be written in terms of these poles as
    H(s) = G0 / ∏[k = 1..n] ((s − sk)/ωc)
The denominator is a Butterworth polynomial in s.
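The pole formula translates directly into code. This sketch generates the left half-plane poles and expands them into the denominator polynomial, reproducing the third-order example:

    import cmath, math

    def butterworth_poles(n, wc=1.0):
        """Left half-plane poles s_k = wc * exp(j(2k+n-1)pi/(2n)), k = 1..n."""
        return [wc * cmath.exp(1j * (2 * k + n - 1) * math.pi / (2 * n))
                for k in range(1, n + 1)]

    def poly_from_poles(poles):
        """Coefficients of prod(s - p), lowest power of s first."""
        coeffs = [1 + 0j]
        for p in poles:
            by_s = [0j] + coeffs                      # multiply by s
            by_p = [-p * c for c in coeffs] + [0j]    # multiply by -p
            coeffs = [a + b for a, b in zip(by_s, by_p)]
        return coeffs

    # For n = 3, wc = 1 this gives 1 + 2s + 2s^2 + s^3, the example denominator.
    print([round(c.real, 6) for c in poly_from_poles(butterworth_poles(3))])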
Normalized Butterworth polynomials
The Butterworth polynomials may be written in complex form as above, but are usually written with real coefficients by multiplying together the pole pairs that are complex conjugates, such as s1 and sn. The polynomials are normalized by setting ωc = 1. The normalized Butterworth polynomials then have the general form
    Bn(s) = ∏[k = 1..n/2] (s² − 2s·cos((2k + n − 1)π/(2n)) + 1)            for n even
    Bn(s) = (s + 1)·∏[k = 1..(n−1)/2] (s² − 2s·cos((2k + n − 1)π/(2n)) + 1)   for n odd
To four decimal places, they are

n    Factors of Polynomial Bn(s)
1    (s + 1)
2    (s² + 1.4142s + 1)
3    (s + 1)(s² + s + 1)
4    (s² + 0.7654s + 1)(s² + 1.8478s + 1)
5    (s + 1)(s² + 0.6180s + 1)(s² + 1.6180s + 1)
6    (s² + 0.5176s + 1)(s² + 1.4142s + 1)(s² + 1.9319s + 1)
7    (s + 1)(s² + 0.4450s + 1)(s² + 1.2470s + 1)(s² + 1.8019s + 1)
8    (s² + 0.3902s + 1)(s² + 1.1111s + 1)(s² + 1.6629s + 1)(s² + 1.9616s + 1)
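The table entries follow from the general form above; this sketch regenerates them from the pole angles:

    import math

    def butterworth_factors(n):
        """Real-coefficient factors of the normalized polynomial Bn(s)."""
        parts = ["(s + 1)"] if n % 2 else []     # odd order: real pole at -1
        for k in range(1, n // 2 + 1):
            b = -2.0 * math.cos((2 * k + n - 1) * math.pi / (2 * n))
            parts.append("(s^2 + %.4fs + 1)" % b)
        return "".join(parts)

    for n in range(1, 9):
        print(n, butterworth_factors(n))

For n = 3 this prints (s + 1)(s^2 + 1.0000s + 1), matching the (s² + s + 1) entry in the table.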
The normalized Butterworth polynomials can be used to determine the transfer function for any low-pass filter cut-off frequency ωc, as follows:
    H(s) = G0 / Bn(a),   where a = s/ωc
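In practice, a filter-design library performs this substitution. As one example (assuming SciPy is available), SciPy's analog design routine returns the same denormalized coefficients:

    from scipy.signal import butter

    # Third-order analog Butterworth low-pass with cutoff wc = 100 rad/s.
    # a holds the denominator of H(s), highest power first: B3 with s -> s/wc
    # cleared of fractions, i.e. s^3 + 200 s^2 + 20000 s + 1000000.
    b, a = butter(3, 100.0, btype='low', analog=True)
    print(b, a)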