Delay Lines Using Self-Adapting Time Constants

Shao-Jen Lim and John G. Harris
Computational Neuro-Engineering Laboratory
University of Florida
Gainesville, FL 32611
Abstract -- Transversal filters using ideal tap delay lines are a popular form of short-term memory based filtering in adaptive systems. Some applications where these filters have attained considerable success include system identification, linear prediction, channel equalization and echo cancellation [1]. The gamma filter improves on the simple FIR delay line by allowing the system to choose a single optimal time constant by minimizing the Mean Squared Error of the system [8]. However, in practice it is difficult to determine the optimal value of the time constant since the performance surface is nonconvex. Also, many times a single time constant is not sufficient to represent the input signal well. We propose a nonlinear delay line where each stage of the delay line adapts its time constant so that the average power at the output of the stage is a constant fraction of the power at the input to the stage. Since this adaptation is independent of the Mean Square Error, there are no problems with local minima in the search space. Furthermore, since each stage adapts its own time constant, the delay line is able to represent signals that contain a wide variety of time scales. We discuss both discrete- and continuous-time realizations of this method. Finally, we are developing analog VLSI hardware to implement these nonlinear delay lines. Such an implementation will provide fast, inexpensive, and low-power solutions for many adaptive signal processing applications.

Fig. 1. The solid line shows the MSE of a third-order single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 8. The dashed-dot line is the optimal solution of a third-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.82, and the dashed line represents $\Re = 0.75$. Note that the mean square errors for both methods are computed using the Wiener-Hopf solution.
I. Introduction
Infinite impulse response (IIR) filters are more cost-effective than the widely used ideal delay lines in adaptive signal processing. The gamma filter is one of the successful IIR filter designs whose stability is guaranteed [8] [6], and it is a marked improvement over the FIR filter because of its adjustable memory depth [5] [6]. The gamma filter has been applied to a variety of real-world problems such as echo cancellation, system identification, time series prediction, noise reduction, and dynamic modeling [7].

However, in practice it is hard to search for the optimal time constant of the gamma filter because of the nonconvex performance surface associated with the time constant [6]. Also, many times a single-valued time constant may not be able to fully represent the incoming signal. To deal with this problem, we introduce a nonlinear gamma delay line where each gamma unit adjusts its own time constant simultaneously such that the average power at the output of each gamma unit is a constant fraction of the power at the input. There are no local minima problems in this method because the Mean Square Error is unrelated to the time scale adaptation. Moreover, since each stage adapts its own time constant, the delay line is able to represent signals that contain a wide variety of time scales.

To provide fast, inexpensive, and low-power solutions to many adaptive signal processing applications, we are developing analog VLSI hardware to implement these nonlinear delay lines. Each stage of the nonlinear delay line consists of a five-transistor transconductance amplifier and a capacitor configured to realize a first-order low-pass filter. The time constant of the filter is adapted so that the signal power is attenuated by a constant fraction at each stage. Sections II and III of this paper discuss the discrete- and continuous-time realizations of this method. Section IV describes the continuous-time analog VLSI circuitry we have used to implement the self-adapting delay lines.
II. Discrete Domain
The gamma filter in the discrete domain is given by
$$x_k[n] = (1 - \mu_k)\,x_k[n-1] + \mu_k\,x_{k-1}[n-1] \qquad (1)$$
where $x_k[n]$ represents the output of a $k$-stage delay line at iteration $n$, $x_{k-1}[n]$ is the input of the $k$th-stage gamma unit, and $\mu_k$ is the adaptive memory parameter for the $k$th stage.
Fig. 2. The solid line depicts the MSE of a third-order single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 9. The dashed-dot line is the optimal solution for a third-order self-adjusting time constant delay line when $\Re$ is set equal to 0.87.

Fig. 3. The solid line depicts the Mean Square Error of a third-order single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 10, and the dashed-dot line is the optimal solution of a third-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.87.

If the input to the gamma model is a simple sinusoidal signal $x_{k-1}[n] = A\cos(\omega_0 n)$, the input power spectrum and the average input power can be computed by
$$P_{x_{k-1}}(e^{j\omega}) = \tfrac{1}{2}A^2\left(\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right) \qquad (2)$$
$$\wp_{x_{k-1}} = \frac{A^2}{2} \qquad (3)$$
respectively, and the average output power is
$$\wp_{x_k} = \frac{A^2}{2}\,\frac{\mu_k^2}{(1-\mu_k)^2 + 1 - 2(1-\mu_k)\cos(\omega_0)} \qquad (4)$$
Dividing equation 4 by equation 3 gives a constant fraction that is a function of the $\mu_k$ of the gamma unit and the signal frequency, as shown in the following equation:
$$\Re = \frac{\wp_{x_k}}{\wp_{x_{k-1}}} = \frac{\mu_k^2}{(1-\mu_k)^2 + 1 - 2(1-\mu_k)\cos(\omega_0)} \qquad (5)$$
In other words, $\mu_k$ is a nonlinear monotonic function of the input signal frequency, while the value of the fraction $\Re$ will distort this function. Each tap in a cascade of self-adjusting tap delays will converge to the same time constant provided a single-frequency sine wave is input to the cascade.
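As a quick illustration of this relationship, the sketch below (not from the paper; the function name and the closed-form rearrangement are ours) solves equation 5 for the $\mu_k$ a stage would settle to, given a target fraction $\Re$ and input frequency $\omega_0$. Sweeping $\omega_0$ shows the monotonic mapping described above.

# Sketch (not from the paper): solve equation (5) for the per-stage memory
# parameter mu_k that yields a desired power ratio R at input frequency w0.
import numpy as np

def mu_for_ratio(R, w0):
    """Return mu in (0, 1] such that
    R = mu^2 / ((1 - mu)^2 + 1 - 2*(1 - mu)*cos(w0))."""
    c = np.cos(w0)
    # Rearranging eq. (5) gives the quadratic
    # (1 - R)*mu^2 + 2*R*(1 - c)*mu - 2*R*(1 - c) = 0
    a, b, d = 1.0 - R, 2.0 * R * (1.0 - c), -2.0 * R * (1.0 - c)
    if np.isclose(a, 0.0):            # R = 1 means no attenuation: mu = 1
        return 1.0
    return (-b + np.sqrt(b * b - 4.0 * a * d)) / (2.0 * a)

# The mapping is monotonic in w0, as the text claims:
for w0 in (0.1, 0.5, 1.0, 2.0):
    print(w0, mu_for_ratio(0.82, w0))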
Using the properties of the discrete gamma filter, we designed the following stochastic gradient descent update equation for $\mu$:
$$\mu_k[n] = \mu_k[n-1] + \eta_k\left(\Re\,E(d_{k-1}^2[n]) - E(d_k^2[n])\right) \qquad (6)$$
where $d_k$ is the gamma-delayed output of the input signal $d_{k-1}$, with $d_0$ standing for the desired signal, and the weight update is calculated using the standard LMS rule given by
$$w_k[n] = w_k[n-1] + \eta_w\,e[n]\,x_k[n] \qquad (7)$$
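A minimal discrete-time sketch of the whole scheme is given below. It is not the authors' code: the step sizes eta_mu and eta_w, the clipping of mu, and the exponentially weighted running average used in place of E(.) in equation 6 are illustrative assumptions, and for brevity the power-ratio adaptation is applied directly to the filter taps rather than to a separate delay line driven by the desired signal.

# Minimal sketch of a discrete-time self-adapting gamma delay line used as an
# adaptive transversal filter, following equations (1), (6) and (7).
import numpy as np

def self_adapting_gamma_filter(u, d, K=3, R=0.82, eta_mu=1e-3,
                               eta_w=1e-2, avg=0.99):
    x = np.zeros(K + 1)          # x[0] is the current input, x[1..K] the taps
    mu = np.full(K + 1, 0.5)     # per-stage memory parameters mu_k
    w = np.zeros(K + 1)          # adaptive weights
    p = np.zeros(K + 1)          # running estimates of E(x_k^2)
    y_hist, e_hist = [], []
    for n in range(len(u)):
        x_prev = x.copy()
        x[0] = u[n]
        for k in range(1, K + 1):                       # eq. (1)
            x[k] = (1 - mu[k]) * x_prev[k] + mu[k] * x_prev[k - 1]
        p = avg * p + (1 - avg) * x**2                  # power estimates
        for k in range(1, K + 1):                       # eq. (6)
            mu[k] = np.clip(mu[k] + eta_mu * (R * p[k - 1] - p[k]), 0.01, 1.0)
        y = w @ x
        e = d[n] - y
        w += eta_w * e * x                              # eq. (7), LMS
        y_hist.append(y); e_hist.append(e)
    return np.array(y_hist), np.array(e_hist), mu, w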
We will discuss a few system-identification examples to illustrate how the self-adjusting-$\mu_k$ delay line architecture performs compared to a conventional single-$\mu$ adaptive gamma filter.

The first "unknown" system to be identified is
$$H(z) = \frac{0.005\,(1 - 0.8731z^{-1} - 0.8731z^{-2} + z^{-3})}{1 - 2.8653z^{-1} + 2.7505z^{-2} - 0.8843z^{-3}} \qquad (8)$$
The mean square error as a function of $\mu$ was calculated by evaluating $\xi = E(d^2[n]) + W^T R W - 2P^T W$, where the optimal weight vector $W$ is computed by solving the Wiener-Hopf equation. We assumed a uniformly distributed zero-mean white noise input. The results are displayed in Figure 1. Note that these are theoretical rather than empirical results, since the Wiener-Hopf equations were used to solve for the optimal solution in both methods. The solid line in Figure 1 depicts the Mean Square Error of a conventional third-order gamma filter as a function of the single-$\mu$ value for identification of the filter of equation 8. The dashed-dot line shows the optimal solution of a third-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.82, while the dashed line is for $\Re = 0.75$. Thus, it is clear that the self-adjusting time constant delay line can outperform the single-$\mu$ gamma filter for certain problems without requiring a complicated nonconvex search.
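For readers who want to reproduce curves of this kind, the following sketch (our assumption, not the authors' code) estimates the theoretical MSE by replacing the analytic correlations with long sample averages; the coefficients of equation 8 are used as reconstructed above, and the third-order gamma filter is taken to have taps $x_0,\dots,x_3$.

# Sketch: estimate the Wiener-Hopf MSE  xi = E(d^2) + W^T R W - 2 P^T W  of a
# third-order gamma filter with a common mu, for the system of equation (8).
import numpy as np
from scipy.signal import lfilter

def gamma_taps(u, mu, K=3):
    taps = [u]
    for _ in range(K):
        # each stage is x_k[n] = (1-mu) x_k[n-1] + mu x_{k-1}[n-1], i.e.
        # the one-pole filter  mu z^-1 / (1 - (1-mu) z^-1)
        taps.append(lfilter([0.0, mu], [1.0, -(1.0 - mu)], taps[-1]))
    return np.column_stack(taps)            # columns x_0 .. x_K

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 200_000)         # zero-mean uniform white noise
b = 0.005 * np.array([1.0, -0.8731, -0.8731, 1.0])   # numerator of eq. (8)
a = np.array([1.0, -2.8653, 2.7505, -0.8843])
d = lfilter(b, a, u)                        # "unknown" system output

for mu in (0.2, 0.5, 0.8):
    X = gamma_taps(u, mu)
    R = X.T @ X / len(u)                    # sample autocorrelation matrix
    P = X.T @ d / len(u)                    # sample cross-correlation vector
    W = np.linalg.solve(R, P)               # Wiener-Hopf solution
    xi = np.mean(d**2) + W @ R @ W - 2.0 * P @ W
    print(f"mu = {mu:.1f}  MSE = {xi:.3e}")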
In Figures 2 and 3, we show two more examples that demonstrate the performance of the self-adjusting time constant delay lines. Figure 2 is the performance surface for the third-order elliptic low-pass filter described by
$$H(z) = \frac{0.0563 - 0.0009z^{-1} - 0.0009z^{-2} + 0.0563z^{-3}}{1 - 2.1291z^{-1} + 1.7834z^{-2} - 0.5435z^{-3}} \qquad (9)$$
while Figure 3 shows the performance surface of
$$H(z) = \frac{0.3000 - 0.1800z^{-1} - 0.2835z^{-2} + 0.2572z^{-3}}{1 - 2.1000z^{-1} + 1.4300z^{-2} - 0.3150z^{-3}} \qquad (10)$$
Note that the constant fraction $\Re$ for both equations 9 and 10 is set equal to 0.87.

Fig. 4. A schematic of a continuous-time system identification problem in which the upper left delay line is the unknown system to be modeled, the lower left delay line is an adaptive gamma system trained such that it approximates the system in the mean square error sense, and the last delay line is used to adjust the time constants $\tau_1(t)$ and $\tau_2(t)$ shown so that the average power at the outputs of the stages, $d_1(t)$ and $d_2(t)$, is a constant fraction of the average power of the inputs $d_0(t)$ and $d_1(t)$, respectively.

Fig. 5. The solid line depicts the experimental Mean Square Error of a continuous-time second-order single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 18, and the dashed-dot line is the empirical optimal solution of a second-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.65 with poles found at 16.99 and 9.5.
III. Continuous-Time Domain
In the continuous-time domain, the gamma filter can be computed using [2] [3] [8]
$$\frac{dx_k(t)}{dt} = -\lambda_k\,x_k(t) + \lambda_k\,x_{k-1}(t) \qquad (11)$$
where $x_k(t)$ represents the output of a $k$-stage delay line at time $t$, $x_{k-1}(t)$ stands for the input of the $k$th-stage gamma unit, and $\lambda_k$ is the reciprocal of the time constant $\tau_k$.
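For reference, taking the Laplace transform of equation 11 shows that each stage is simply a first-order low-pass section, which makes the power relation in equation 14 below immediate:
$$\frac{X_k(s)}{X_{k-1}(s)} = \frac{\lambda_k}{s + \lambda_k} = \frac{1}{1 + s\,\tau_k},
\qquad
\left|\frac{X_k(j\omega_0)}{X_{k-1}(j\omega_0)}\right|^2 = \frac{1}{1 + (\tau_k\,\omega_0)^2}.$$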
If the input to an analog gamma model is a sinusoidal signal with frequency $\omega_0$ radians, $x_{k-1}(t) = A\cos(\omega_0 t)$, the input power spectrum and the average input power can be expressed as
$$P_{x_{k-1}}(j\omega) = \tfrac{1}{2}A^2\left(\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right) \qquad (12)$$
$$\wp_{x_{k-1}} = \frac{A^2}{2} \qquad (13)$$
respectively, and the average output power is
$$\wp_{x_k} = \frac{A^2}{2}\,\frac{1}{1 + (\tau_k\,\omega_0)^2} \qquad (14)$$
Dividing equation 14 by equation 13, we get a constant fraction which is related only to the time constant of the gamma unit and the signal frequency:
$$\Re = \frac{\wp_{x_k}}{\wp_{x_{k-1}}} = \frac{1}{1 + (\tau_k\,\omega_0)^2} \qquad (15)$$
As in the discrete-time case, the time constant computed
by this method is a monotonic function of the frequency of
the input sine wave.
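Inverting equation 15 gives the time constant each stage settles to for a given fraction $\Re$ and input frequency $\omega_0$; this closed form is also the relation used to derive the bias-voltage expression in Section IV:
$$\tau_k = \frac{1}{\omega_0}\sqrt{\frac{1}{\Re} - 1},
\qquad
\lambda_k = \frac{1}{\tau_k} = \frac{\omega_0}{\sqrt{\frac{1}{\Re} - 1}}.$$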
Bringing the behavior of each gamma stage together with the delay lines, we can design a self-adjusting time-constant delay line that adapts to the properties of the incoming signal. Figure 4 shows a schematic of an analog system identification problem in which the upper left delay line is an "unknown" system to be identified and the lower left delay line is an adaptive gamma system with weights trained to minimize the mean square error. The last delay line is used to adjust the time constants $\tau_1(t)$ and $\tau_2(t)$ shown so that the average power at the outputs of the stages, $d_1(t)$ and $d_2(t)$, is a constant fraction of the average power of the inputs $d_0(t)$ and $d_1(t)$, respectively. In other words, $\lambda_k = 1/\tau_k$ is adapted by using the following learning rule:
$$\tau_{\lambda_k}\,\frac{d\lambda_k(t)}{dt} = \Re\,d_{k-1}^2(t) - d_k^2(t) \qquad (16)$$
where $\tau_{\lambda_k}$ is a time constant of the $\lambda_k$ update, which is chosen to be much larger than $\tau_k$. Note that equation 16 uses the instantaneous power of both the input and output signals instead of the average power.
Fig. 6. The solid line depicts the experimental Mean Square Error of a second-order analog single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 19, and the dashed-dot line is the empirical optimal solution of the second-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.65 with poles found at 13.1 and 6.1.

Fig. 7. The solid line depicts the experimental Mean Square Error of a continuous-time third-order filter as a function of the single $\mu$ for identification of the filter of equation 20. The dashed-dot line is the empirical optimal solution of the third-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.7 with poles found at 15.355, 8.998 and 2.05.

Similar to the discrete-time adaptation of FIR and IIR adaptive filters, the weights $w_0(t)$, $w_1(t)$, and $w_2(t)$ are adjusted according to the following continuous-time gradient descent update [1] [2] [3] [6] [8]:
$$\tau_w\,\frac{dw_k(t)}{dt} = e(t)\,x_k(t) \qquad (17)$$
where $\tau_w$ is a time constant of the weight update, larger than $\tau_k$, the time constant of each stage.
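The sketch below (an illustrative assumption rather than the authors' simulation) integrates equations 11, 16 and 17 with a forward-Euler step for a two-stage line. The step size dt, the initial $\lambda_k$, and the time constants tau_lam and tau_w are arbitrary choices, and for brevity a single delay line provides both the weighted taps and the power-ratio adaptation, whereas Figure 4 uses separate lines.

# Forward-Euler integration of eqs. (11), (16), (17) for a two-stage
# self-adapting delay line (illustrative sketch only).
import numpy as np

def simulate(d0, desired, dt=1e-5, R=0.65, tau_lam=5e-2, tau_w=1e-2, K=2):
    lam = np.full(K + 1, 2000.0)   # lam[1..K] = 1/tau_k in rad/s; lam[0] unused
    d = np.zeros(K + 1)            # d[0] is the driving signal, d[1..K] the taps
    w = np.zeros(K + 1)
    err = np.zeros(len(d0))
    for n in range(len(d0)):
        d[0] = d0[n]
        for k in range(K, 0, -1):                     # eq. (11), Euler step
            d[k] += dt * lam[k] * (d[k - 1] - d[k])
        for k in range(1, K + 1):                     # eq. (16)
            lam[k] += (dt / tau_lam) * (R * d[k - 1] ** 2 - d[k] ** 2)
            lam[k] = max(lam[k], 1.0)                 # keep the pole stable
        y = w @ d
        err[n] = desired[n] - y
        w += (dt / tau_w) * err[n] * d                # eq. (17)
    return err, lam, w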
Based on this signal and time constant relationship, we first model an analog system with poles located at 15.3564 and 1.5356,
$$H(s) = \frac{0.3071\,s + 0.5895}{s^2 + 16.8920\,s + 23.5818} \qquad (18)$$
by using two delay lines with self-adapting time constants. The solid line in Figure 5 depicts the experimental Mean Square Error of the conventional second-order single-$\mu$ gamma filter as a function of $\mu$ for identification of the filter of equation 18. The dashed-dot line shows the empirical optimal solution of a second-order self-adjusting time constant delay line when the constant fraction $\Re$ is set equal to 0.65, with poles found at 16.99 and 9.5.
Figures 6 and 7 give two more examples that show the benefit of the MSE-independent updating scheme. Figure 6 is the performance surface for a third-order filter with poles located at 15.3564, 2.8793 and 1.5356,
$$H(s) = \frac{0.3071\,s^2 + 1.7981\,s + 2.7159}{s^3 + 19.7713\,s^2 + 72.2184\,s + 67.8976} \qquad (19)$$
which is modeled by using two follower integrators, while Figure 7 gives the mean square error versus $\mu$ of another third-order filter with poles at 15.3564, 7.6782 and 1.5356,
$$H(s) = \frac{0.3071\,s^2 + 4.0089\,s + 7.2425}{s^3 + 24.5702\,s^2 + 153.2814\,s + 181.0618} \qquad (20)$$
which is modeled by three consecutive follower-integrator filters. The constant fractions $\Re$ of the two examples are set equal to 0.65 and 0.7, respectively.
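As a quick numerical check (ours, not part of the paper), the pole locations quoted above follow directly from the denominator coefficients of equations 18-20:

# Verify the quoted pole locations of eqs. (18)-(20) from their denominators.
import numpy as np

dens = {
    "eq. (18)": [1.0, 16.8920, 23.5818],
    "eq. (19)": [1.0, 19.7713, 72.2184, 67.8976],
    "eq. (20)": [1.0, 24.5702, 153.2814, 181.0618],
}
for name, den in dens.items():
    poles = np.sort(-np.roots(den).real)[::-1]   # report as positive magnitudes
    print(name, np.round(poles, 4))
# eq. (18) -> [15.3564  1.5356], eq. (19) -> [15.3564  2.8793  1.5356], ...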
IV. Circuit Implementation
The time constant $\tau$ equals $C/G$, where $C$ is the capacitance of an RC integrator and $G$ is the transconductance of a follower, given by [4]
$$G = \frac{\kappa q}{2kT}\,I_o\,e^{\frac{\kappa q V_b}{kT}} \qquad (21)$$
The relationship between the bias voltage of a follower and its input signal frequency can be derived by combining equations 15 and 21:
$$V_b = \frac{kT}{\kappa q}\,\ln\!\left(\frac{2kT\,C\,\omega_0}{\kappa q\,I_o\sqrt{\frac{1}{\Re}-1}}\right) \qquad (22)$$
as depicted in Figure 9. In equations 21 and 22, $k$ stands for Boltzmann's constant, $T$ temperature, $q$ the electron charge, $\kappa$ a fabrication constant expressing the effectiveness of the gate in determining the surface potential of a CMOS transistor, and $C$ the capacitance in the follower-integrator circuit.
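To make the numbers in Figure 9 concrete, the short sketch below (our assumption, not the authors' code) evaluates equation 22 using the constants quoted in the caption of Figure 9:

# Evaluate eq. (22) with kT/(kappa*q) = 43 mV, C = 1 pF, Io = 1e-15 A, to see
# how the follower bias voltage tracks input frequency for a fixed fraction R.
import numpy as np

UT_over_kappa = 43e-3        # kT/(kappa*q) in volts
C = 1e-12                    # farads
Io = 1e-15                   # amps
R = 0.75

for f in (1e4, 1e5, 1e6):    # hertz
    w0 = 2 * np.pi * f
    G = C * w0 / np.sqrt(1.0 / R - 1.0)                       # G = C/tau, eq. (15)
    Vb = UT_over_kappa * np.log(2 * UT_over_kappa * G / Io)   # invert eq. (21)
    print(f"f = {f:.0e} Hz  Vb = {Vb:.3f} V")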
Figure 8 gives an overview of how a cascade of follower integrators adjusts its own time constants with respect to the incoming signal $d_0$, as shown in Figure 4. The upper plot shows the circuit results when the input $d_0$ is composed of two frequencies, 500 Hz and 1000 Hz, for the time duration 0 ms to 60 ms. The signal changes abruptly to a single-frequency 500 Hz signal at 60 ms. The lower graph depicts the learning path of the two bias voltages. It is clear that when there are two different frequencies in $d_0$, the two bias voltages separate into two values corresponding to the two frequencies. When the input signal collapses to a single frequency, the two bias voltages converge to the same value. In actual practice, the time constant for the $\lambda_k$ update will be made much longer than what was used in this example, providing much smoother curves.

Fig. 8. Time constant adaptation for a continuous-time two-stage delay line similar to the one shown in the right middle portion of Figure 4. The upper plot shows that the input signal $d_0$ is composed of two frequencies, 500 Hz and 1000 Hz, from 0 ms to 60 ms, but changes abruptly to a single-frequency 500 Hz signal after 60 ms. The lower graph depicts the learning path of the two bias voltages. It is clear that when there are two different frequencies in $d_0$, the two bias voltages separate so that each of them corresponds to one of the input frequencies. When the input collapses to a single frequency, the two bias voltages converge to the same value.
Figure 10 shows a schematic of a self-adjusting time-constant circuit consisting of three follower-integrators in the upper portion of the plot and three absolute-value circuits for computing the instantaneous power of each stage and automatically adjusting the time constant. This schematic is a three-tap delay-line version of the circuit shown in the middle right of Figure 4, which consists of only two delay lines. Figure 11 shows a detailed schematic of the absolute-value circuit.
V. Conclusion
In this paper, we introduce a nonlinear delay line where each stage of the delay line adapts its time constant so that the average power at the output is a constant fraction of the average power of the input. There are no problems with local minima in the search space as long as the fraction $\Re$ is set to a constant. Figure 12 shows the mean square error of equation 10 as a function of $\Re$. It is clear that when the number of delay elements increases, the performance surface of this self-adapting delay line becomes nonconvex with respect to $\Re$. Nevertheless, the self-adapting time constant delay line remains a favorable choice, since its simplicity makes it easier to implement in a CMOS process and the optimal value of $\Re$ stays mostly around 0.6 to 0.9, while the optimal $\mu$ can range anywhere from 0 to 1.
Acknowledgments: This work was supported by an
NSF CAREER award #MIP-9502307.
References
[1] B. Widrow and S. Stearns. Adaptive Signal Processing. Prentice Hall, 1985.
[2] J. Juan, J. G. Harris, and J. C. Principe. Analog VLSI implementations of continuous-time memory structures. In 1996 IEEE International Symposium on Circuits and Systems, volume 3, pages 338-340, 1996.
[3] J. Juan, J. G. Harris, and J. C. Principe. Analog hardware implementation of adaptive filter structures. In Proceedings of the International Conference on Neural Networks, 1997.
[4] C. Mead. Analog VLSI and Neural Systems. Addison-Wesley, 1989.
[5] J. C. Principe, J. Kuo, and S. Celebi. An analysis of short term memory structures in dynamic neural networks. IEEE Transactions on Neural Networks, 5(2):331-337, 1994.
[6] J. C. Principe, B. de Vries, and P. G. de Oliveira. The gamma filter - a new class of adaptive IIR filters with restricted feedback. IEEE Transactions on Signal Processing, 41(2):649-656, 1993.
[7] J. C. Principe, S. Celebi, B. de Vries, and J. G. Harris. Locally recurrent networks: the gamma operator, properties, and extensions. In O. Omidvar and J. Dayhoff, editors, Neural Networks and Pattern Recognition. Academic Press, 1997.
[8] B. de Vries and J. C. Principe. The gamma model - a neural model for temporal processing. Neural Networks, 5:565-576, 1992.
Fig. 9. The relationship between the bias voltage of the follower integrator and its input signal frequency for several values of the constant fraction $\Re$ ($\Re$ = 0.75, 0.50, 0.25). In this figure, $kT/(\kappa q)$ is $43 \times 10^{-3}$ V, the capacitance $C$ is $1 \times 10^{-12}$ F, and $I_o$ is $1 \times 10^{-15}$ A.

Fig. 10. A schematic of the self-adjusting time constant circuit, which consists of the three follower-integrators in the upper portion of the plot and three absolute-value circuits for computing the instantaneous power at each stage and automatically adjusting the time constant. This schematic is a three-tap delay-line version of the circuit shown in the middle right of Figure 4. The detailed schematic of the absolute-value circuit can be found in Figure 11.

Fig. 11. A detailed schematic of the absolute-value circuit.

Fig. 12. Mean Square Error of equation 10 as a function of $\Re$. Note that the mean square error is calculated by evaluating $\xi = E(d^2[n]) + W^T R W - 2P^T W$, while the optimal weight vector $W$ is computed by solving the Wiener-Hopf equation.