Module 3C: AC Circuits III – Power, Frequency considerations and Filters
To review what we’ve done so far, we have considered circuits containing resistors, capacitors and
inductors (“RLC networks”) under an AC source.
We discovered that capacitors and inductors under AC behave very similarly to resistors, with an "Ohm-like" law V = IX, where X represents the reactance (capacitive or inductive), although there is a 90° phase
shift between voltage and current. For inductors, V leads I by 90°, and for capacitors, I leads V by 90°;
this is summarized by the mnemonic ELI the ICE man. When devices of one type are present in a circuit,
our usual rules applied: in series, resistances or reactances add, and in parallel we use the one-over
formula, or alternatively add the conductances G or susceptances BC / BL.
However, when devices of different types are present, the math becomes more complicated due to the
phase shifts involved. Although we could write down our loop equations (which turn out to be second-order differential equations) and solve them through brute force, we abandoned this idea in favor of a
much more graphical technique, treating our voltages, currents and reactances as complex-valued
quantities. By introducing the complex factor j, the phase shifts “fall out” naturally, and we can use
some very nice properties of exponentials, and things we already know about vector and vector-like
quantities, to solve these circuits.
The answer turns out to be that devices of different types combine and may be replaced by an
equivalent device which exhibits a property called impedance Z, where Z has a “real part” corresponding
to the resistance, and an “imaginary part” corresponding to the reactance, or equivalently, a
“magnitude” and a “phase angle.” Then we can re-use all of our old Ohm’s Law tools, with the proviso
that we take into account both magnitudes and phase angles (or, equivalently, real and imaginary parts).
The two simplest cases are the series RLC circuit, in which our components are strung out in series, so
that all devices have the same current through them, and the parallel RLC circuit, in which all
components are in parallel so the voltage across them is the same. In analyzing the parallel RLC circuit it
was helpful to consider admittance Y, which is the reciprocal of the impedance.
More complicated networks could be simplified by replacing series or parallel components with an
equivalent impedance or admittance, and then combining the equivalent impedances or admittances,
using the fact that series impedances add, parallel admittances add, addition and subtraction of complex
quantities are best done in rectangular form, and multiplication and division of complex quantities are
best done in polar/phasor form. Though the mechanics can be quite tedious, the principles of
decomposition of circuits are exactly the same as for DC circuits.
For circuits which cannot be decomposed, we can use our time-honored Kirchhoff methods (e.g. branch
currents, mesh currents, node voltages) to produce systems of simultaneous algebraic equations (the
use of complex numbers / phase angles removes the derivatives and integrals, so we don’t have to deal
with differential equations!), again making sure to keep track of both magnitudes and phase angles.
For linear networks, we could also apply our superposition, Thevenin or Norton theorems, to replace an
otherwise intractable network with more convenient equivalents, or subcircuits.
However, the one thing we have not considered is how the frequency affects our analysis; thus far ω has
simply been a “given” which we used to calculate XL and XC. Soon we will relax that restriction, and
consider what happens as frequency changes. But first we will revisit the issue of power in AC circuits.
I. Power considerations in AC circuits
Earlier we saw that the time-averaged power delivered by an AC source to a resistive load R is given by
Pavg = Vrms Irms = Vrms²/R = Irms²R
where the only thing we had to remember is to always use rms voltage or current! However, we also
saw that the time-averaged power delivered to a capacitor or inductor was given by
Pavg = 0
regardless of XL, XC, V or I. We may guess that this discrepancy has something to do with the phase
angle between voltage and current. And, in fact, that’s correct. Since the instantaneous power is
P(t) = V(t)I(t)
it can be shown that the time-averaged power, if there is a phase angle φI between the voltage and
current, is given by
Pavg = ∫ P(t) dt / ∫ dt = Vrms Irms cos(φI)
where cos(φI) is referred to as the power factor (pf) of the circuit. For a purely resistive load, φI = 0, so pf
= cos(φI) = 1, and for a purely reactive load, φI = ±90°, so pf = cos(φI) = 0, so the above two limits “fall”
out of our general expression. In the case of a general load with impedance Z = |Z| ∠ φZ, Irms = Vrms/|Z| and
φI = φZ, so
Pavg = Vrms Irms cos(φI) = (Vrms²/|Z|) cos(φI) = Irms² |Z| cos(φI)
In analogy with the way we used complex numbers to deal with voltages and currents, and later
resistances/reactances/impedances, we may be motivated to do the same thing with power, as follows.
We define an “apparent power” S, which is a complex quantity given by
S = P + jQ
where P is our familiar “real power” given by the above formulae and Q is a real quantity known as
“reactive power.” (The reactive power Q has nothing to do with Q as the symbol for charge; we’re just
overloading our operators. Sorry!)
Physically, P represents the electrical energy per unit time which is converted to heat in the circuit’s
resistance, and as a measure of heat production is rated in Watts. Q represents electrical energy that is
“shuffled back and forth” between the capacitors and inductors of the circuit. Q is never converted
from electrical energy into any other form of energy, and therefore it is not given in Watts, but
somewhat pedantically in "volt-amperes reactive" (var, rather than the plain volt-amperes (VA) used for
S, to avoid confusion). S represents the complex/phasor/vector/Pythagorean sum of P and Q; like Q, it is
measured in volt-amperes since it does not uniquely represent electrical energy converted to heat.
Mathematically,
P = Vrms Irms cos(φI)
(in Watts (W))
Q = Vrms Irms sin(φI)
(in volt-amperes reactive (var))
From this it follows that the apparent power (the magnitude of S) is
|S| = sqrt(P² + Q²) = Vrms Irms (in Volt-Amperes (VA))
Put another way, the relationship between P, Q and S is exactly the same as that between R, X = XL - XC
and Z, or G, B = BC - BL and Y, with φI replacing φZ (or φY).
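The power-triangle relationships above can be checked numerically. Here is a minimal Python sketch, where the values of Vrms, Irms and φI are hypothetical, chosen only for illustration:

```python
import math

# Power triangle for an AC load; all values here are hypothetical.
V_rms = 120.0           # source voltage (V rms)
I_rms = 5.0             # load current (A rms)
phi = math.radians(60)  # phase angle between voltage and current

P = V_rms * I_rms * math.cos(phi)  # real power (W)
Q = V_rms * I_rms * math.sin(phi)  # reactive power (var)
S = V_rms * I_rms                  # apparent power (VA)

# Pythagorean check: |S| = sqrt(P^2 + Q^2)
assert math.isclose(S, math.hypot(P, Q))
print(f"P = {P:.1f} W, Q = {Q:.1f} var, S = {S:.1f} VA, pf = {math.cos(phi):.2f}")
```

With these numbers P = 300 W but S = 600 VA, so the power factor of 0.5 means the supply handles twice the volt-amperes that the real power alone would suggest.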
Now, why should we care about this? Suppose I connect an AC source to a highly reactive load (i.e. |X| >> R, so φI
is large, approaching ±90°). In this case, the power factor cos(φI) is nearly zero, so the real power P
may be quite small. However, the apparent power S can be significantly larger than P!
This is crucial because the actual voltages and the actual currents being delivered to the circuit (i.e. the
magnitudes of Vrms and Irms) are given by S, even if P is much smaller. So if I have a 100 Vrms source
connected to a load which dissipates P = 1 kW, it is not correct to say that I = P/V = 10 Arms, since there is
an unknown phase angle between current and voltage. What is correct is to say that I = P/(V cos(φI)),
which can be significantly larger than our previous calculation if cos(φI) is close to zero! And it is this
“real” I which we must use in order to determine what size fuses or circuit breakers are required to
supply a circuit, how thick our power cord must be, how large a generator we must use, or how large
the resistive losses in our conductive cable will be. That is, if our load is very reactive, our supply
circuitry must be “overrated” by a factor of 1/cos(φI), as compared to a naïve calculation based on the
power P being dissipated by the resistive portion of the load.
Thus, if we have a load receiving a real power P = 2 kW, but with a power factor cos(φI)
= 0.1 (meaning that it's a highly reactive load!), the apparent power S = P/cos(φI) = 20 kVA, and all our
wiring and supply circuitry must be rated to supply 20 kVA. For example, if Vrms = 1 kV, then Irms = S/Vrms
= 20 kVA / 1 kV = 20 A, so all our wiring must be rated to handle 20 A. If we naively go with a 2 kVA
source and wiring (which would only be able to handle Irms = 2 A at Vrms = 1 kV), or think we’re splurging
on a “factor of safety” of two by going with a 4 kVA source and wiring (which could only handle 4 A at 1
kV), we’re going to blow something up, because the actual current (20 A) will far exceed what our circuit
elements are rated to handle!
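The supply-sizing arithmetic in this example can be written out directly (the numbers are the ones from the paragraph above):

```python
# Supply sizing for a reactive load, using the numbers from the example above.
P = 2e3        # real power delivered to the load (W)
pf = 0.1       # power factor cos(phi_I)
V_rms = 1e3    # supply voltage (V rms)

S = P / pf             # apparent power the supply must provide (VA)
I_rms = S / V_rms      # actual current the wiring must carry (A)
I_naive = P / V_rms    # naive estimate that ignores the power factor (A)

print(f"S = {S/1e3:.0f} kVA, actual I = {I_rms:.0f} A, naive I = {I_naive:.0f} A")
# The wiring must be rated for 20 A, ten times the naive 2 A estimate.
```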
However, it’s worth noting that while S is crucial to circuit design, P is still significant since it prescribes
how much heat will be generated by the circuit; this heat must be dissipated in some way (e.g. by a
metal heatsink, cooling fans or circulating coolant) to prevent overheating. This means that while a
power supply and wiring may be perfectly capable of delivering an apparent power of 20 kVA to a
circuit, the amount of heat generated by the load will be far larger if its power factor is ~ 1 (so P is nearly
20 kW, and nearly all of that apparent power is converted to heat) than if its pf is very small (so P << 20
kW, and most of the apparent power goes around in circles as reactive power Q). For this reason, AC
devices (such as generators and uninterruptible power supplies (UPSes)) often carry ratings in terms of
both real power P (in W), and apparent power S (in VA) – the device’s rating must exceed both the
maximum expected P, and the maximum expected S, and all your wiring must be rated to exceed the
maximum anticipated currents (calculated from S) for the load you intend to connect.
II. Frequency considerations: Resonance
Let’s now give a little more thought to the issue of frequency. Thus far, we have restricted ourselves to
AC sources (or, in the complicated cases we started to consider, multiple AC sources) at a single
frequency, for which we simply go ahead and calculate values of XL and XC and plough through our
calculations accordingly.
Well, what happens in circuits as the frequency changes, or if there are multiple sources operating at
different frequencies?
From our formulas, we know that XC = 1/ωC and XL = ωL. Therefore, for a given C and L in a circuit, as
frequency increases, we expect XC to decrease, and XL to increase.
This is consistent with our expectations – in DC circuits, capacitors initially (that is, right after conditions
change) act like short circuits, and increasingly oppose the flow of current until they act like “infinite
resistors” in the long-term limits. Inductors, on the other hand, will induce a voltage to oppose any
change in current, but in the long term act like short circuits (or, more precisely, act like resistors with a
resistance due to the lengths of wire they contain).
Thought of another way, capacitors “like” change. And higher frequencies correspond to more
“change.” Inductors, on the other hand, “dislike” change. They are happiest when things vary slowly
(or, better yet, not at all!) Resistors, on the third(?) hand, don't really care. They just follow V = IR,
whatever V and I happen to be at that moment. If it helps to think of circuits "politically," inductors are
“conservative,” capacitors are “liberal,” and resistors are “apathetic.”
Now, given that XL (and its propensity to cause voltage to lead current) increases with frequency and XC
(and its propensity to cause current to lead voltage) decreases with frequency, it follows that there is a
particular frequency at which the imaginary component of Z (or Y) is equal to zero (or, equivalently, φZ =
φY = 0), and thus the voltage and current drawn by the circuit are in phase with each other (i.e. φI = 0).
For the simple series RLC circuit, the imaginary part of the impedance is given by XL-XC, and for simple
parallel RLC circuits, the imaginary part of the admittance is given by BC-BL. This peculiar zeroing
condition occurs when XL=XC (or, equivalently, BL=BC). One can show that this occurs when
ω = ωres = 1/sqrt(LC)
where ωres is known as the resonant frequency, and a circuit driven by a source with ω = ωres is referred
to as being in resonance. For circuits with more complex topologies, the above formula fails to hold
(although it’s usually approximately true), and the resonant frequency must be calculated by finding the
impedance as a function of frequency, and solving for Im(Z) or Im(Y) = 0.
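For the series RLC case, the condition Im(Z) = 0 at ω = 1/sqrt(LC) is easy to verify directly. A small Python sketch, with hypothetical component values:

```python
import math

# Series RLC impedance Z(w) = R + j(wL - 1/(wC)); component values are hypothetical.
R, L, C = 100.0, 0.1, 1e-7   # 100 ohms, 100 mH, 100 nF

def Z_series(w):
    return complex(R, w * L - 1.0 / (w * C))

w_res = 1.0 / math.sqrt(L * C)   # undamped resonant frequency; 10 krad/s here

# At resonance Im(Z) = 0, so Z is purely resistive and |Z| takes its minimum value, R.
assert abs(Z_series(w_res).imag) < 1e-6
assert math.isclose(abs(Z_series(w_res)), R)
print(f"w_res = {w_res:.0f} rad/s, |Z| at resonance = {abs(Z_series(w_res)):.1f} ohms")
```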
When a circuit is in resonance, the following things are true (and in fact equivalent):
- The current drawn is in phase with the applied voltage: φI = 0
- Since the imaginary part of the impedance is zero, the total impedance is real (i.e. φZ = 0 and Im(Z) = 0) and its magnitude is equal to the real part of the impedance: |Z| = Re(Z). The same is true for admittance: φY = 0, Im(Y) = 0 and |Y| = Re(Y).
For the simple series RLC circuit, at resonance Z = R ∠ 0, and at resonance the impedance takes on its
minimum possible value (the value of Z is higher for any nonzero value of XL-XC). This means that the
current drawn at resonance is larger than at any other frequency. Furthermore, since the phase angle
between the voltage and current is zero, the apparent power S is equal to the real power P, whose
values are both at a maximum at this frequency (and the reactive power Q has a minimum value –
zero!). Therefore, for a given value of V0 (or I0), maximum real power (and minimum reactive power)
will be delivered to a load of impedance Z if the source's frequency ω = ωres. And the power delivered
will be given by Pavg = Vrms²/R = Irms²R = Vrms Irms, since φI = 0. This is depicted below:
(One can even go a step further to develop the “maximum power transfer” theorem for the series RLC
circuit: if a source has an internal impedance Zi, maximum power transfer at a given frequency occurs
when a load with ZL = Zi* (where the * represents the complex conjugate, i.e. φZ → -φZ) is connected, and
for that load, the frequency at which maximum power is transferred is ω = ωres, which corresponds to
|Zi| = |ZL| = minimum, and Im(Zi) = Im(ZL) = 0.)
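Sweeping the frequency of a series RLC circuit confirms that the delivered power peaks at ω = ωres. A sketch with hypothetical values (same components as in the earlier example):

```python
import math

# Sweep a series RLC circuit to confirm real power peaks at the resonant frequency.
R, L, C, V_rms = 100.0, 0.1, 1e-7, 10.0   # hypothetical component and source values
w_res = 1.0 / math.sqrt(L * C)

def P_avg(w):
    Z = complex(R, w * L - 1.0 / (w * C))
    I = V_rms / abs(Z)              # rms current magnitude
    return I * I * R                # real power dissipated in R

# Sample frequencies on either side of resonance.
freqs = [w_res * f for f in (0.5, 0.8, 1.0, 1.25, 2.0)]
powers = {w: P_avg(w) for w in freqs}
assert max(powers, key=powers.get) == w_res   # maximum power occurs at w = w_res
```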
For the simple parallel RLC circuit, at resonance Y = G ∠ 0 (or, equivalently, Z = R ∠ 0), and at resonance
the impedance takes on its maximum possible value. This may be a bit surprising since it’s the exact
opposite of the somewhat-intuitive series RLC result, until we consider what is happening in a parallel
RLC circuit. The currents through the capacitor and inductor are always 180° out of phase with one
another. At resonance, IC = IL (in magnitude), and the capacitor and inductor essentially form a “tank”
which “exchange” the exact same amount of current back and forth between them, so that zero current
(ideally) is drawn from the source toward the LC combination. At any other frequency, there is some
current (equal to |IL - IC|, since the two are no longer equal in magnitude) which must be drawn from the source, which
means that the total current drawn from the source is the phasor sum of that drawn by the resistor and
that drawn by the LC combination. Therefore, as you move away from resonance, more current is
drawn by the whole circuit (as |IL - IC| increases from zero), which results in a smaller effective impedance
Z. This is depicted below:
For circuits more complicated than a simple series or parallel combination, we cannot simply set XL = XC
and solve for ωres = 1/sqrt(LC). We must instead solve for the actual impedance (or admittance) in terms
of all the reactances/resistances/conductances/susceptances, and set the imaginary part to zero to
solve for ωres.
And the behavior, in general, can be more complicated, particularly when large resistances are
connected in complicated ways, since it gives rise to “damping,” which can be a pain to deal with. One
crucial change is that, unlike simple series or parallel RLC resonant circuits, the “damped resonant
frequency” at which φZ = 0 does not necessarily coincide with either the “undamped resonant
frequency” 1/sqrt(LC) or the “driven resonant frequency” at which maximum current is drawn from a
source. Fortunately, this effect is usually quite small (and I will generally ignore it), unless there are very
large resistances (and hence significant “damping”) in the circuit, in which case the response is much
more complicated than we have assumed thus far.
That is, in general, there will be an “undamped resonant frequency” given by 1/sqrt(LC), a “damped
resonant frequency” which is somewhat smaller, and at which Im(Z) = 0, and a third “driven resonant
frequency” at which the power delivered to the load is maximized (for series LC combinations) or
minimized (for parallel LC combinations). For the simple series and simple parallel RLC circuits, all three
of these frequencies coincide.
III. Frequency considerations: Q and Bandwidth
Given the “bell curve”-like shape of the above graphs, now that we’ve decided that the impedance
minimum/maximum occurs at ω = ωres, we may want to consider how rapidly the circuit deviates from
the resonant condition, or, in other words, the width of the “bell curve.” In general, curves like this can
be described by a bandwidth ∆ω centered on the resonant frequency ωres. For a series RLC circuit, at
frequencies of ωres - ∆ω/2 and of ωres + ∆ω/2, the power delivered to the circuit drops to half of the
power delivered to the circuit at resonance; this is both because the impedance Z increases as one
moves away from resonance (so I = V/Z decreases) and because the phase angle between voltage and
current increases as one moves away from resonance (and thus the power factor cos(φI) decreases).
(∆ω is also known as the "Full Width at Half Maximum" (FWHM) of the relationship between frequency
and power delivered, and is sometimes referred to as a "3 dB width," as 3 dB corresponds to a factor of
two: 10 log10(2) ≈ 3 dB. There are two "3 dB points" at which power is half its resonance value, at ωres - ∆ω/2 and ωres + ∆ω/2, and ∆ω is the difference between the two 3 dB points.)
This FWHM, or bandwidth ∆ω is quite instructive. A series RLC circuit with a small bandwidth ∆ω, for
example, has a minimum impedance at ω = ωres, which very rapidly increases as ω deviates from ωres, so
that the power delivered to the circuit drops sharply as the frequency deviates from ωres. A series RLC
circuit with a large bandwidth ∆ω has a relatively “flat” impedance, in that the value of Z does not
increase very rapidly as one deviates from the resonant condition, so the power delivered depends only
rather weakly on the frequency. One can also define a “fractional bandwidth” as the ratio of bandwidth
to resonant frequency:
Fb = ∆ω/ωres
Most commonly, however, this bandwidth factor is expressed in terms of the “quality factor” Q (yes,
we’ve now used Q three times – for charge, for reactive power, and now the quality factor!), given by
Q = ωres/∆ω = Fb⁻¹
so that a circuit with a small bandwidth (relative to its resonant frequency ωres) is a “high-Q circuit,” and
one with a wide bandwidth has a low value of Q. High-Q circuits can also be said to have a high degree
of selectivity, in that the delivered power drops sharply with frequency as it deviates from the resonant
value.
For simple circuits in which ωres corresponds to XL = XC, Q can be written as
Q = XL/R (series RLC) or Q = R/XL (parallel RLC)
where XL = XC at resonance, so the choice of XL over XC is arbitrary.
For example, if I am looking at a series RLC circuit with ωres = 10 krad/s, and at ω = ωres, R = 100 Ω and XL
= XC = 1 kΩ, then Q = 10, and Fb = Q⁻¹ = 0.1. Then ∆ω = Fb ωres = ωres/Q = 1 krad/s, and the "3 dB points" are
located at approximately ωres - ∆ω/2 = 9.5 krad/s and ωres + ∆ω/2 = 10.5 krad/s. At the resonant frequency, the
power delivered Presonance = Vrms²/Zres = Vrms²/R, and at the 3 dB points, P = Presonance/2 = Vrms²/2R (where
the factor of 2 comes from both the increase in Z and the decrease in power factor as you move away
from resonance).
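This example can be checked numerically. For a series RLC circuit the exact half-power condition is |ωL - 1/(ωC)| = R, whose solutions sit slightly above the symmetric ωres ± ∆ω/2 approximation at modest Q; the exact bandwidth, however, is exactly R/L. A sketch with the component values from the example:

```python
import math

# Numeric check of the bandwidth example: series RLC with w_res = 10 krad/s, Q = 10.
R, L, C = 100.0, 0.1, 1e-7          # gives XL = XC = 1 kOhm at resonance
w_res = 1.0 / math.sqrt(L * C)      # 10 krad/s
Q = w_res * L / R                   # equivalent to XL/R at resonance
bw = w_res / Q                      # delta-omega = 1 krad/s

# Exact half-power (3 dB) frequencies satisfy |wL - 1/(wC)| = R:
half = R / (2 * L)
w_lo = math.sqrt(w_res**2 + half**2) - half
w_hi = math.sqrt(w_res**2 + half**2) + half

assert math.isclose(w_hi - w_lo, bw)   # exact bandwidth is R/L = w_res/Q
assert math.isclose(Q, 10.0)
print(f"Q = {Q:.1f}, 3 dB points at {w_lo:.0f} and {w_hi:.0f} rad/s")
```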
In more complex cases, one can calculate the resonant frequency manually, determine the power drawn
as a function of frequency and calculate the “3 dB” points to determine the bandwidth, from which Q
can be calculated.
IV. Frequency considerations: Transient response
At this point it’s worth stepping back and thinking about what we may have missed.
Recall that when we started the course, we ignored time dependence, and assumed that when a switch
is opened or closed V and I go instantaneously from their “initial” to “final” values with no lag or delay;
this is known as the assumption of “steady state.” When we discussed capacitors and inductors, we
realized that this time dependence was crucial, giving rise to our “charging” and “discharging” circuits
with exponential time dependences (with characteristic timescale τ = RC or L/R) of V and I. However, on
sufficiently long timescales (> 5τ), our RC and RL circuits always eventually reverted to a “steady state”
condition, with current through capacitors and voltages across inductors asymptotically approaching
zero, and voltages across capacitors and currents through inductors asymptotically approaching their
steady-state values.
When we changed to AC circuits, we may not have realized it, but we essentially “reverted” to our initial
position. While we considered V(t) and I(t) to be changing with time, we considered them to be
changing in a rather “benign” way, depending only on the applied frequency ω. This is known as the
“sinusoidal steady state.”
Some of you may be confused by the fact that when we use a DC source, capacitors and inductors behave in
this unusual time-dependent way, but with an AC source, capacitors and inductors reduce to "resistor-like" devices obeying an "Ohm-like" law. It turns out that these two behaviors are completely consistent
with each other, although not in an obvious way.
A useful train of thought is to consider that in an AC RC or RL circuit (which removes issues relating to
resonance), there are effectively two timescales at play: there is the period T = 2π/ω, which is solely a
function of the source, but there is also the time constant τ = RC or L/R of the circuit. The first describes
the characteristic timescale over which the source voltage changes appreciably, whereas the second
characterizes the characteristic timescale over which the circuit reacts to any changes in the source
voltage. For an RLC circuit, the relevant timescale is the resonant period 2π/ωres, which is the characteristic time scale on
which the capacitance and the inductance “talk back and forth” to each other.
Well, a DC source is effectively an AC source with ω = 0 (if ω → 0, V(t) = V0cos(ωt) = V0 for all time, just
like a good DC source should do), so T → ∞. Therefore, the DC response we see (in which VC = Q/C and
VL = LdI/dt in our circuit differential equations) is valid in the limit that T >> τ. In other words, T >> τ
means that the source fluctuates on timescales much longer than the circuit’s characteristic response
time, so that the circuit is always in a “steady state” configuration). It turns out that all of the analysis
we’ve done for AC circuits is valid in this limit as well, and this is what gives rise to the “Ohm-like” laws
of reactance when the source voltage is sinusoidal, with the time-dependent phenomena giving rise to
the phase differences between V and I we all know and love.
But what happens as the frequency continues to increase? Well, eventually T << τ, and when this
happens the source is changing so quickly that the circuit cannot “see” the changes since the reactive
devices (capacitors and inductors) cannot react (by changing voltages or currents) quickly enough to
keep up.
In the case of an RC circuit, if a high-frequency sinusoidal input with ω >> τ⁻¹ is applied, the voltage
across the capacitor will be “smoothed out” to the (whole-period) average voltage since the capacitor
voltage cannot fluctuate fast enough to keep “track” of the fluctuating source voltage. This is something
extremely useful – capacitors can be used to “filter out” high-frequency signals. One particular
application involving large capacitors (giving a circuit with a large τ = RC) is as a "filter" to smooth
out fluctuations (“ripple”) in a supply voltage (with τ >> the duration of voltage fluctuations), or as the
final step in converting an AC signal to a DC signal (with the first step – filtering out or reversing the
current direction when I < 0 so that the current always flows the same way - being accomplished using
one-way conductors known as diodes; we will discuss these in Module 4).
Figure: Use of a capacitor to smooth out fluctuations in source voltage. The left panel represents the time-varying voltage applied by a source, and the right panel represents the time-dependent voltage measured across a sufficiently large capacitor.
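The smoothing behavior can be sketched with a simple forward-Euler time step of the capacitor equation dVC/dt = (Vin - VC)/(RC). All values here are hypothetical, chosen so that τ is 100 ripple periods:

```python
import math

# Minimal time-stepped sketch of RC smoothing: a fast sinusoidal ripple riding on
# a DC level is applied to an RC filter whose time constant tau >> ripple period T.
R, C = 1e3, 1e-4          # hypothetical values: tau = RC = 0.1 s
tau = R * C
w = 2 * math.pi * 1000.0  # 1 kHz ripple, so T = 1 ms << tau
dt = 1e-6                 # integration step

v_c, v_min, v_max = 5.0, 5.0, 5.0   # start at the DC level to skip the turn-on transient
t = 0.0
while t < 0.05:                      # run for 50 ripple periods
    v_in = 5.0 + 1.0 * math.sin(w * t)   # DC level plus 1 V ripple
    v_c += (v_in - v_c) * dt / tau       # dVc/dt = (Vin - Vc)/(RC)
    v_min, v_max = min(v_min, v_c), max(v_max, v_c)
    t += dt

# The 2 V peak-to-peak input ripple is attenuated by roughly w*tau (about 600x here).
assert (v_max - v_min) < 0.01
```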
Another interesting phenomenon involves the study of spikes or dips in circuit parameters on timescales
<< τ, and a detailed study (on timescales comparable to τ) or analysis of what happens when a circuit
abruptly changes (such as when a switch is opened or closed, or a voltage suddenly changes in
intensity). This gives rise to the large field of “transient analysis” in electrical engineering.
To study this in more detail, let’s think about a resonant parallel RLC circuit. We know that in such a
circuit, the impedance Z = R, and the reactances XC = XL. In such a circuit, the resistor draws a current
with magnitude IR0 = V0/R. The capacitor and inductor draw currents IC0 = V0/XC and IL0 = V0/XL, which are
equal in magnitude and 180° out of phase with one another. Therefore, the total current drawn by the
capacitor/inductor combination is zero, so we may consider C and L as a tank, between which voltages
and currents are passed, sort of like a “hot potato.”
Analogy between RLC circuits and a mass on a spring
(optional)
There is an interesting analogy between this parallel RLC circuit and the familiar mass on a Hooke's law
spring from our first semester of physics. A mass oscillating on a spring has a constant total energy E
(we'll use E here, since T is already spoken for as the period), which may be considered to be the sum of
two parts: a kinetic energy K and a potential energy U:
E = K + U
with K = ½ mv² and U = ½ kx²
→
E = ½ mv² + ½ kx²
As the mass oscillates back and forth, the energy shifts back and forth between the two forms. K is
maximized when the mass is moving most quickly (as it passes its equilibrium position, so U = 0), and U
is maximized when the mass is at its maximum displacement (when it is momentarily at rest, so K = 0).
An RLC circuit of this type works in much the same way. A capacitor with voltage V between its plates
has an energy given by UC = ½ CV², and an inductor with current I flowing through its coils has an energy
given by UL = ½ LI². The total energy of the capacitor/inductor combination is given by
U = UL + UC = ½ LI² + ½ CV²
We may be tempted to think of the capacitor energy as “potential energy” stored between the
capacitor’s plates and the inductor energy as “kinetic energy” stored in the motion of the charges that
comprises the current. In this case voltage represents the “displacement” of the charges in the circuit
from their equilibrium configuration, and the current represents the “motion” of the charges.
And since the RLC circuit and the mass on the spring are described by the same equations, the
mathematical solution is exactly the same: sinusoidal oscillations! In the case of the mass on a spring,
the oscillations occur at a “resonance frequency” ωres = sqrt(k/m), and in the RLC circuit the resonance
frequency is given by ωres = sqrt(1/LC); if we recall that C is defined “oddly,” essentially as the reciprocal
of how resistance and inductance are defined, the two equations are exactly the same.
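The energy "hot potato" in an ideal (resistanceless) LC tank can be checked directly: if V(t) = V0 cos(ωt) across the capacitor, then the inductor current is I(t) = -CV0 ω sin(ωt), and the total energy stays constant because LCω² = 1 at resonance. A sketch with hypothetical values:

```python
import math

# Energy exchange in an ideal LC tank: U = (1/2) C V^2 + (1/2) L I^2 is constant.
L, C, V0 = 0.1, 1e-7, 10.0       # hypothetical component values and peak voltage
w = 1.0 / math.sqrt(L * C)       # oscillation at the resonant frequency
U0 = 0.5 * C * V0**2             # all energy starts in the capacitor

for k in range(8):               # sample one full period
    t = k * (2 * math.pi / w) / 8
    V = V0 * math.cos(w * t)             # capacitor voltage
    I = -C * V0 * w * math.sin(w * t)    # inductor current (I = C dV/dt)
    U = 0.5 * C * V**2 + 0.5 * L * I**2
    assert math.isclose(U, U0, rel_tol=1e-9)   # total energy never changes
```

Energy sloshes entirely into the inductor a quarter-period after it was entirely in the capacitor, exactly like kinetic and potential energy in the mass-spring system.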
The mass on a spring has a “static equilibrium” which corresponds to the mass at rest at its equilibrium
position. But if it is displaced from that position, it will undergo sinusoidal oscillations for all time. Of
course, in reality, friction will cause those sinusoidal oscillations to eventually damp out. In our circuit,
the role of “friction” is played by resistance.
What differs from what you may have learned before is that a mass-spring system can be “driven” by
oscillating the other end of the spring up and down; in this case the mass will tend to oscillate at the
driving frequency ωd (in our circuit analogy this driving is caused by our AC voltage source!). However,
the amplitude of this response will be maximized (corresponding to an extremum of impedance in our
circuit analogy) when ωd = ωres; this result holds true in both the mass-spring system and our RLC circuit.
Now, what happens if this driving source is suddenly shut down? Well, we know intuitively that the
mass doesn’t immediately stop moving; it continues to bob up and down, at a frequency which trends
toward the resonant frequency ωres, and continues to do so until the amplitude decreases to zero as a
result of damping.
The same is true for our RLC circuit. If the source is suddenly shut off, or disconnected from our LC
“tank,” the capacitor and inductor will continue to play “hot potato,” exchanging electrical energy
between each other, with a sinusoidal dependence with a frequency trending toward the resonant
frequency ωres of the circuit. And this oscillation will continue indefinitely, although the effect of
damping (due to resistance) will be to cause the amplitude of the oscillations to decline toward zero.
In the limit that the damping is quite large ("overdamping"), there are no oscillations, and in fact the
system exponentially returns to equilibrium, with a timescale that grows with the strength of
the damping. In the limit that the damping is quite small ("underdamping"), oscillations can persist for
quite a long time; this phenomenon is known as “ringing,” and is what in fact will happen when a driving
source is disconnected from an LC circuit: the components will continue to “toss” voltage and current
between them, and there will be sinusoidally oscillating currents and voltages that can persist for some
time after the source is disconnected. Smack in the middle of those two extremes is the case of "critical
damping," in which the system returns to equilibrium as quickly as possible without oscillating.
This damping can be described by the "damping ratio" ζ (Greek letter zeta), where ζ = 0 corresponds to
the "undamped" case (zero resistance or friction), for which oscillations continue forever, ζ = 1
corresponds to the "critically damped" case, 0 < ζ < 1 corresponds to "underdamping"
with a sequence of oscillations of diminishing amplitude, and ζ > 1 corresponds to "overdamping," with
a non-oscillatory exponential return to static equilibrium.
In our mass-on-a-spring example,
ζ = c / (2mωres)
where ωres = sqrt(k/m) is the resonant frequency, and c is the coefficient of viscous damping, with F = -cv
being the velocity-dependent damping force. Clearly, as c increases, so does the damping ratio, and ζ =
0 in the frictionless c = 0 limit.
In a simple RLC circuit,
ζ = (R/2) sqrt(C/L)
(series RLC circuit)
ζ = (G/2) sqrt(L/C)
(parallel RLC circuit)
As expected, ζ is proportional to the resistance R in a series circuit, and perhaps as expected, ζ is
proportional to G in the parallel RLC circuit (since G represents how much current the resistor "sucks"
away from the LC combination).
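Evaluating the two damping-ratio formulas with the same hypothetical component values used in the earlier examples shows how the same parts give very different damping depending on topology:

```python
import math

# Damping ratios for the series and parallel RLC formulas above.
R, L, C = 100.0, 0.1, 1e-7   # same hypothetical values as the earlier examples

zeta_series = (R / 2) * math.sqrt(C / L)          # series RLC
zeta_parallel = (1 / R / 2) * math.sqrt(L / C)    # parallel RLC, with G = 1/R

# With these values the series circuit is lightly underdamped (it will "ring"),
# while the parallel circuit is overdamped.
assert zeta_series < 1 < zeta_parallel
print(f"series zeta = {zeta_series}, parallel zeta = {zeta_parallel}")
```

Note that for the series circuit ζ = 0.05 = 1/(2Q), consistent with the Q = 10 found in the bandwidth example.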
The mathematics of dealing with these “transient responses” in all but the simplest case can be quite
difficult, and is far beyond the scope of this course; often the solutions are analytically impossible, so the
circuits must be modeled using software.
Many practical circuits involve signals switching on and off or changing in amplitude in order to encode
information, with a transient response expected every time there is a change. And as practical demands
require faster and faster switching (i.e. higher CPU or data transfer speeds) these transient responses
become increasingly important.
In order to avoid sizable amounts of ringing in a circuit, you may try to insert sufficiently large (or small)
resistances into the circuit to increase the level of damping. But doing so can have other undesirable
effects (leaking away power from the source, limiting the current that may be drawn, or producing
unwanted heat that must be dissipated), so you may just have to deal with the effects of transients
(which may, for example, limit your data transfer rate, since you have to wait for the ringing to “quiet
down” before you can move on to the next bit).
V. Filters: High-pass, Low-pass, Band-pass and Band-stop
Let’s now think in a little more detail about what happens as frequency changes in a circuit, and ways of
practically harnessing the physics.
We know that as frequency increases, XL increases and XC decreases. We may recall what happens in a
DC circuit if two resistors are connected in series: the share of the source voltage dropped across each
resistor is a fraction of the total voltage, with the fraction being equal to that resistor’s share of the total
resistance. For the circuit at right, if the source has a voltage Vin
I = Vin / Req = Vin / (R1+R2)
so that
VR1 = I R1
VR2 = I R2
I could then hook up a “load” across R2, and the voltage Vout = VR2 across the load would be a function of
the values of R1 and R2 (which I could arbitrarily select out of my “big ol’ bin o’ resistors”).
If I imagine replacing R2 with a potentiometer, I could adjust Vout by adjusting its resistance up or down.
With such a circuit I could deliver a user-specified voltage Vout to a load, using a fixed-voltage source Vin.
Of course, things aren’t really so simple: as soon as I connect a load (such as another resistor) across R2,
it must be replaced by an Req = R2||Rload, which is manifestly less than R2, so the voltage across it Vout will
diminish. Such a “voltage divider” circuit only works well in the limit that the majority of the current
continues to flow through R2, with only a relatively small amount going to the load (i.e. R2 << Rload). In
other words, voltage dividers are relatively inefficient (in that the power delivered to the load is only a
small fraction of the total power being drawn from the source), but they do serve their advertised
purpose: enabling the delivery to a load of some adjustable fraction of a source voltage, an adjustable
Vout anywhere between 0 and Vin.
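The loading effect just described can be sketched in a few lines (the function name and component values are hypothetical):

```python
def divider_out(Vin, R1, R2, Rload=float("inf")):
    """DC divider output across R2, optionally loaded by Rload in parallel with R2."""
    if Rload == float("inf"):
        R2_eff = R2
    else:
        R2_eff = (R2 * Rload) / (R2 + Rload)  # R2 || Rload, always less than R2
    return Vin * R2_eff / (R1 + R2_eff)
```

With R1 = R2 = 1 kΩ and Vin = 12 V, the unloaded output is 6 V; attaching a 1 kΩ load drags it down to 4 V, while a 100 kΩ load (Rload >> R2) barely moves it.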
Now, let’s imagine what happens if I replace R2 with an inductor, and the
source with an AC voltage source with amplitude Vin = V0. (This circuit is
shown at right, with the “input voltage” source removed from the
depiction for reasons that will eventually become clear.) The inductor has
XL = ωL, and the circuit has an impedance Z with magnitude Z =
sqrt(R1^2 + XL^2). The current through R1 and L (in the limit that zero current
passes out the right end of the circuit) is then
I0 = Vin/Z
so
VR1 = I0R1
VL = I0XL = Vout
where both I0 and XL have a frequency dependence, so that
Vout/Vin = ωL / sqrt(R^2 + ω^2L^2)
(writing R for R1). At extremely low frequencies, the ω in the numerator keeps Vout to a low value: the inductor appears as
a “short” to very low frequencies, so Vout/Vin is extremely small.
At extremely high frequencies, XL >> R, so the denominator asymptotically approaches XL, and hence Vout
asymptotically approaches Vin: nearly all of the voltage is dropped across the inductor (and hence the
load).
Therefore, this filter can be considered to be a “high pass filter,” in that relatively high-frequency signals
coming from the “source” are passed on to the load with minimal attenuation, whereas low-frequency
signals are mostly dropped across the resistor, so very little of them make it to the load.
One can also show that the delivered power drops to half the value one would get if the load were
connected directly to the source (i.e. Vout^2/Vin^2 = 0.5, since Pload = Vload^2/R) when XL = R, and hence ω = R/L
(which, not entirely coincidentally, is 1/τ for the comparable DC RL circuit). That is, assuming “all else being
equal,” which is not entirely true (see the next section about Impedance Considerations).
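The RL high-pass response derived above is easy to evaluate; this sketch (hypothetical function name) computes |Vout/Vin| = ωL / sqrt(R^2 + ω^2L^2):

```python
import math

def rl_highpass_gain(omega, R, L):
    """|Vout/Vin| for the series R-L divider with the output taken across L."""
    XL = omega * L
    return XL / math.sqrt(R**2 + XL**2)
```

For R = 100 Ω and L = 10 mH the corner sits at ω = R/L = 10^4 rad/s, where the gain is 1/sqrt(2) ≈ 0.707; well below that, the gain falls off proportionally to ω, and well above it the gain approaches 1.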
Now, suppose we replace R2 with a capacitor. Here, I0 = Vin/Z, with Z =
sqrt(R1^2 + XC^2), so
VR1 = I0R1
VC = I0XC = (V0/ωC) / sqrt(R^2 + (1/ωC)^2) = Vout
At low frequencies, XC is enormous, so nearly all of the voltage is dropped across the capacitor, and
hence passed along to the load. At high frequencies, the capacitor appears as a “short,” nearly all of the
voltage is dropped across the resistor, and almost nothing makes it to the load. So this filter acts as a
“low pass filter.” When XC = R (i.e. ω = 1/RC = 1/τ for the simple RC circuit) the output voltage drops to
1/sqrt(2) of the source voltage, and hence the power delivered to the load is half of what the load would
receive if connected directly to Vin (all else being equal).
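The same check for the RC low-pass, this time written with complex impedances (ZC = 1/(jωC)) so the divider formula Vout/Vin = ZC/(R + ZC) can be reused directly; the function name is made up:

```python
def rc_lowpass_gain(omega, R, C):
    """|Vout/Vin| for the series R-C divider with the output taken across C."""
    ZC = 1 / (1j * omega * C)  # capacitive impedance, -j/(omega*C)
    return abs(ZC / (R + ZC))
```

With R = 1 kΩ and C = 1 µF, the half-power point sits at ω = 1/RC = 1000 rad/s, where the gain is 1/sqrt(2).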
One can also imagine replacing R1 with a capacitor, in which case the circuit (with the load connected
across the resistor R2) acts as a high-pass filter (since R2 >> XC at high frequencies and XC >> R2 at low
frequencies), or replacing R1 with an inductor, giving a low-pass filter (since R2 << XL at high frequencies
and XL << R2 at low frequencies).
Now, what happens if I set up a circuit as at right? In this case the output is
being taken across a series combination of an inductor and a capacitor, which
presents an impedance ZLC = |XL-XC|. The LC combination has a resonant
frequency at which XL = XC (and thus ωres = 1/sqrt(LC)) at which Z = 0, so the
output voltage is zero. As one moves away from the resonant frequency,
either increasing or decreasing, Z increases, and eventually (once |ω - ωres| >>
the bandwidth ∆ω) ZLC >> R, so Vout becomes close to Vin.
Thus in a filter like this, voltages with a frequency near ωres are strongly attenuated, whereas voltages
with frequencies far from ωres pass through with minimal attenuation. This is known as a “band stop”
(or “notch”) filter, since it attenuates signals with frequencies falling within a certain band (with
the bandwidth adjustable by selecting the Q of the RLC circuit!)
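A quick numerical sketch of this band-stop behavior (hypothetical function name; the series LC contributes Z_LC = j(ωL - 1/ωC), which vanishes at ωres):

```python
def bandstop_gain(omega, R, L, C):
    """|Vout/Vin| with the output across a series LC; notch at omega_res = 1/sqrt(L*C)."""
    ZLC = 1j * (omega * L - 1 / (omega * C))  # purely reactive, zero at resonance
    return abs(ZLC / (R + ZLC))
```

For L = 1 mH and C = 1 µF, ωres ≈ 3.16×10^4 rad/s: the gain at the notch is essentially zero, while a decade away on either side it is already above 0.9 (taking R = 100 Ω).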
Finally, what happens if I replace the series LC combination with a
parallel LC combination (right)? Now the LC combination has an
admittance Y = |BC-BL|, and hence Z = 1/Y = 1/|BC-BL|. At
resonance, BC = BL (corresponding again to ωres = 1/sqrt(LC)) and Z is
maximal (in theory, infinite!). This means that all of the voltage is
dropped across the LC combination, and hence voltages with ω = ωres are passed unattenuated. As
frequency moves away from ωres, either BC or BL becomes increasingly large, so Z becomes smaller, and
hence VLC =I0Z becomes smaller and smaller; signals with frequencies far from ωres are thus strongly
attenuated. This filter acts as a “band pass” filter since it passes only signals with frequencies falling
within a band around ωres, again with the bandwidth controllable by adjusting the Q of the resonant
circuit.
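And the dual band-pass case, using the susceptances described above (Y = j(ωC - 1/ωL), so Z = 1/Y becomes very large at resonance); again the function name is just for this sketch:

```python
def bandpass_gain(omega, R, L, C):
    """|Vout/Vin| with the output across a parallel LC tank fed through R."""
    Y = 1j * (omega * C - 1 / (omega * L))  # tank admittance, zero at resonance
    if Y == 0:
        return 1.0  # ideal tank: infinite impedance exactly at resonance
    Z = 1 / Y
    return abs(Z / (R + Z))
```

With the same L, C and R as in the band-stop example, the gain is ≈ 1 at ωres and drops steeply a decade away on either side.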
I could even go a step further with my band-pass or band-stop filter, and replace R, and either L or C
(usually C), with a variable device. Then, by adjusting C, I can alter ωres, and hence the center of the pass
band or stop band. By adjusting R, I can adjust the Q of the resonant circuit, which adjusts the band
width. Now I have a completely configurable filter which I can use, for example, as the tuning circuit in a
radio or TV – by adjusting C I choose which frequency (“station”) to tune to, and by adjusting R I can
choose the bandwidth to pass (i.e. to reduce interference from a neighboring station).
Impedance Considerations
You may have noticed by now that there are lots and lots of ways to make a high-pass filter, since the
only criterion was to choose a circuit for which Vout increases with frequency. This can be done with an
RC circuit (Vout across C), an LR circuit (Vout across R) or an LC circuit (Vout across C), with infinitely many
arbitrary choices of R, L and C. And those are just the simplest examples – there are lots of more
complex networks that have the same essential property! There are similarly many ways to make low-pass, band-pass and band-stop filters.
You may also be thinking that these filters won’t work so well when connected to a load with some
arbitrary impedance Z – the load is certainly going to perturb the filter in some way, just as it did with
our DC voltage divider!
And even if you are careful to pick a filter whose properties will not be affected significantly by a given
load, you also may be wondering about how efficient these filters are at transferring a signal from a
source to a load. If you’re designing a band-pass filter to allow a signal of a certain frequency to pass
through without attenuation, you’re not doing a very good job if most of the desired signal is being lost
in the filter (even if the signals of other frequencies are being attenuated even more)!
We already realized that DC voltage dividers only work well when they’re extremely inefficient. Are AC
filters doomed to the same fate? The answer is that they can actually do quite well, if you’re careful in
designing the appropriate filter for your circuit.
This may get you thinking about the “maximum power transfer theorem” mentioned earlier, which
stated that maximum power is transferred to a load when the impedances of the source and the load
are matched. To think about this, we may want to draw our filter system in the form of a “block
diagram” such as this:
Block diagrams are useful in that they help us to see the forest without getting lost in the trees. Here,
the “source” supplies some signal: the signal we want, plus a bunch of unwanted stuff at other
frequencies, with each component parametrized by its voltage Vin. This may
be the signals coming in from an antenna system, for example. The “load” is the output device to which
we wish to pass the signal, such as a speaker. The “filter” is what does the job of selecting out what we
want, and rejecting what we don’t by adjusting the value of Vout relative to Vin for each frequency.
The “source” is connected to terminals a and b of the filter, and the “load” to terminals c and d of the
filter.
What we would like is for the signal we want to “see” an impedance Zab between a and b which closely
matches the impedance of the source Zsource; that is, Zab = Zsource*. When this is true we will transfer
maximum power at the frequency of the desired signal to the filter. And that is exactly what we want!
(Note: For AC circuits, the Maximum Power Transfer Theorem formally says that power transfer is
maximized when Zsource = Zload*, where the * represents complex conjugation. That said, at a resonant
frequency, Im(Z) will be zero, so the complex conjugation will usually be of no consequence.)
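The conjugate-match condition can be verified numerically. Here is a sketch (function name is hypothetical) of the average power delivered to a load from a source with internal impedance Zsource:

```python
def load_power(Vsrc, Zsource, Zload):
    """Average power into Zload: P = 0.5*|I|^2*Re(Zload), with I = Vsrc/(Zsource + Zload).
    Vsrc is the peak amplitude of the source voltage."""
    I = Vsrc / (Zsource + Zload)
    return 0.5 * abs(I) ** 2 * Zload.real
```

For Zsource = 50 + 30j Ω and a 10 V source, the conjugate-matched load 50 - 30j Ω receives 0.25 W; any other load (say 50 + 30j Ω, or a purely resistive 100 Ω) receives less.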
Then, we would like the filter to have some frequency-dependent internal impedance such that signals
at undesired frequencies are “shunted” between terminals a and b (i.e. the filter presents either a very
low impedance, or a very high impedance, between a and b to all undesired signals). The signal we
would like, on the other hand, “sees” Zab = Zsource*, is delivered maximally to the filter, and is distributed
throughout the filter such that a relatively high voltage exists between terminals c and d, so that Vout is
near Vin. But that’s not all. The load (e.g. speaker) has its own impedance, and the filter will transfer
maximum power to the load when the filter’s output impedance (i.e. the impedance Zcd between c and d,
for a signal of the desired frequency) matches the impedance of the load: Zload = Zcd*.
So the “design problem” for a band-pass filter is as follows:
We would like
a. At the desired frequency ω0, the “input impedance” of the filter (i.e. Zab) should closely
match the output impedance of the source Zsource, so that maximum power is transferred
from the source to the filter.
b. At ω0, the “output impedance” of the filter (i.e. Zcd) should closely match the input
impedance of the load Zload, so that maximum power is transferred from the filter to the
load.
c. At or near ω0, we would like the filter to have an internal configuration that allows Vout to
have a large value; ideally, Vout ≈Vin.
d. For signals away from the desired frequency, we would like to have an impedance mismatch
between source and filter and/or filter and load, so that the undesired signals are not
propagated through.
By working through these requirements we can design an optimal filter – the configuration, and the
most appropriate values of R, C and L – for our application. This problem is encountered when designing
a tuner to connect an RF transmitter to an antenna, for example – you want every last bit of transmitter
power at the desired frequency to make it to the antenna, yet minimize the transmission of harmonics
and other spurious emission.
(If you don’t get all your power to the antenna, this is at best wasteful – since you want to get the
strongest signal possible, and overbuilding is not an option since the cost of transmitters increases
steeply with power capability – and at worst damaging, since all the power which is rejected
by your antenna can be “reflected” back into the transmitter, possibly destroying the final amplifier
stages! And that’s about as far from “a good thing” as one can get…)