ADAPTIVE LOG DOMAIN FILTERS USING FLOATING GATE TRANSISTORS

Pamela A. Abshire, Eric Liu Wong, Yiming Zhai and Marc H. Cohen
Electrical and Computer Engineering / Institute for Systems Research
University of Maryland, College Park, MD 20742, USA
{pabshire,eltan,ymzhai,mhcohen}@glue.umd.edu
ABSTRACT
We present an adaptive log domain filter with integrated
learning rules for model reference estimation. The system
is a first order low pass filter based on a log domain
topology that incorporates multiple input floating gate
transistors to implement on-line learning of gain and time
constant. Adaptive dynamical system theory is used to
derive robust learning rules for both gain and time-constant
adaptation in a system identification task. The
adaptive log domain filters have simulated cutoff
frequencies above 100 kHz with power consumption of
23 μW and show robust adaptation of the estimated gain
and time constant as the parameters of the reference filter
are changed.
1. INTRODUCTION
There is a growing need for adaptive signal conditioning
to improve performance in dynamic and complex signal
processing applications. Control laws must use limited
information to robustly and stably drive the adaptive
system’s parameters in a direction that meets overall
system performance specifications. In this paper we
combine log domain filter circuit architecture and floating
gate transistors to implement stable learning rules for the
free parameters, gain and time constant.
We describe control laws for a tunable filter which
address the classical problem of system identification,
depicted in Figure 1: an input signal is applied to both an
unknown system (plant) and to an adaptive estimator
(model) system which estimates the parameters of the
unknown plant. The difference between the plant and the
model, the error, is used to adjust the parameters. We
design the adaptive laws for adjusting the control
parameters so as to ensure stability of the learning
procedure.
Other groups have described filtering applications based
on floating gate MOS circuits. Hasler et al. [1] described
the Auto-zeroing Floating Gate Amplifier (AFGA) and its
use in bandpass filter structures with very low frequency
response capability. Minch [2, 3] developed circuits and
synthesis techniques using Multiple Input Translinear
Elements (MITEs) for a variety of signal processing
applications. Our designs also use MITE elements for
compactness and elegance.
Few groups have reported integrated analog adaptive
filters. Juan et al. [4] and Stanacevic and Cauwenberghs
[5] have designed analog transversal Finite Impulse
Response (FIR) filters that include adaptation of weights.
Both Juan et al. and Stanacevic and Cauwenberghs use
Least Mean Square (LMS)-based adaptation algorithms.
LMS methods are well suited to implementations of FIR filters; in this work we present methods based on Lyapunov stability that are well suited for adaptive control of Infinite Impulse Response (IIR) filters. IIR filters offer the advantage of smaller filter structures and fewer filter coefficients than FIR filters to model plants of similar complexity.
Figure 1: The system identification problem: an input u is applied to both plant and model filters. The error e1 is the difference of the model and plant outputs (x2 - x1), and is used to adapt the parameters of the model (x3, x4).
Section 2 develops robust and stable learning rules for adapting the gain and time constant of a log domain low pass filter in a system identification task. Section 3 describes the log domain filter architecture, using MITEs to implement the filter and to integrate the learning rules for gain and time constant. Section 4 describes and discusses circuit simulation results for our system when a variety of inputs are presented, as might occur in a system identification task. Section 5 summarizes and draws conclusions from this work.

2. DERIVATION OF ROBUST LEARNING RULES
The unknown plant and the adaptive model filters are
described by the state-variable representation:
$\dot{x}_1 = -A x_1 + A B u$    (plant output)
$\dot{x}_2 = -x_3 x_2 + x_3 x_4 u$    (model output)
where x1 is the output of the plant, A is the reciprocal of
the plant time constant, B is the plant gain, u is the input
to both filters, x2 is the output of the model, x3 is the
estimate of the reciprocal time constant, and x4 is the
estimate of the gain.
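To make the state-variable description concrete, the following Python sketch (not part of the original paper) integrates the plant and model equations with a forward-Euler step for a fixed, deliberately mismatched set of estimates; all parameter values and the input waveform are illustrative assumptions.

import numpy as np

# Hypothetical illustration of the plant/model state equations:
#   plant:  dx1/dt = -A*x1 + A*B*u
#   model:  dx2/dt = -x3*x2 + x3*x4*u
# A, B and the (here fixed) estimates x3, x4 are arbitrary example values.
A, B = 2 * np.pi * 10e3, 2.0        # plant: 10 kHz corner frequency, gain 2
x3, x4 = 2 * np.pi * 25e3, 1.0      # deliberately mismatched model estimates

dt = 1e-7                            # 100 ns Euler step
t = np.arange(0.0, 1e-3, dt)
u = np.where((t * 5e3) % 1.0 < 0.5, 1.0, 0.2)   # strictly positive square wave

x1 = x2 = 0.0
e1 = np.empty_like(t)
for k, uk in enumerate(u):
    x1 += dt * (-A * x1 + A * B * uk)
    x2 += dt * (-x3 * x2 + x3 * x4 * uk)
    e1[k] = x2 - x1                  # output error that drives the learning rules

print("peak |e1| with mismatched parameters:", np.max(np.abs(e1)))

With matched parameters (x3 = A, x4 = B) the recorded error stays at zero; the learning rules derived below drive the estimates toward that condition.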
In order to assess the performance and stability of the
adaptation, we construct the error system as the
differences between plant and model outputs, between
estimated and true reciprocal time constant, and between
estimated and true gain:
$e_1 = x_2 - x_1$    (output error)
$e_2 = x_3 - A$    (reciprocal time constant error)
$e_3 = x_4 - B$    (gain error)
We are interested in adaptive laws controlling system
parameters so that all errors tend towards zero with time.
Thus we can focus on the essential features of the control
problem by considering the dynamics of the error system:
$\dot{e}_1 = \dot{x}_2 - \dot{x}_1$,  $\dot{e}_2 = \dot{x}_3$,  $\dot{e}_3 = \dot{x}_4$
The dynamics of the output error are determined by the
system, but we have the flexibility to specify the dynamics
of the parameter errors so that the control laws drive the
estimates stably to their true values.
We employ the direct method of Lyapunov to investigate
the stability of the adaptive system and to derive
appropriate control laws [6]. We choose a suitable scalar
function and examine the temporal derivative of this
function along trajectories of the system. A Lyapunov
function must satisfy the following three conditions:
positive definite, negative definite time derivative, and
radially unbounded. For system identification of the first
order low-pass filter we consider the Lyapunov function:
$V(e) = \tfrac{1}{2}\left(e_1^2 + e_2^2 + e_3^2\right).$
This function satisfies the first and third conditions and
has the following temporal derivative, evaluated in terms
of the simple adaptive system described above:
$\dot{V}(e) = e_1\dot{e}_1 + e_2\dot{e}_2 + e_3\dot{e}_3 = -A e_1^2 - e_1 e_2 (x_2 - B u) + e_1 e_2 e_3 u + A e_1 e_3 u + e_2\dot{e}_2 + e_3\dot{e}_3$

Note that the control laws for the time constant and gain errors ($\dot{e}_2$ and $\dot{e}_3$ respectively) remain unspecified, and we choose them to satisfy the second condition for the Lyapunov function. There are multiple solutions which provide such a negative time derivative:

$\dot{V}(e) = -A e_1^2.$

We choose the following pair of control laws:

$\dot{e}_2 = -\frac{e_1 \dot{x}_2}{x_3}$  and  $\dot{e}_3 = -A e_1 u.$

These rules may be simplified further since, in current mode log domain filters, many system variables are strictly positive, including the estimate of the reciprocal time constant x3, the true reciprocal time constant A, and the input u. Multiplying the rules by a positive scalar factor affects the rate of adaptation, but not the direction. Thus we can express the control laws simply:

$\dot{e}_2 \propto -e_1 \dot{x}_2$  and  $\dot{e}_3 \propto -e_1.$

In our implementation the estimate of the reciprocal time constant is provided by integrating the product of the output error with the temporal derivative of the model output, and the estimate of the gain is provided by integrating the output error on a capacitor.
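As a numerical illustration of the adaptation (a minimal sketch, not the authors' circuit or simulation), the loop below applies the simplified control laws to the plant/model pair, updating the estimates x3 and x4 directly; the adaptation rates eta3 and eta4, the parameter values, and the input waveform are assumptions chosen only to demonstrate the mechanism.

import numpy as np

# Sketch of the Lyapunov-derived adaptation under assumed values:
#   plant:  dx1/dt = -A*x1 + A*B*u
#   model:  dx2/dt = -x3*x2 + x3*x4*u
#   rules:  dx3/dt = -eta3 * e1 * dx2/dt,   dx4/dt = -eta4 * e1
# with e1 = x2 - x1 and u kept strictly positive, as in the current-mode circuit.
A, B = 2 * np.pi * 10e3, 1.5         # plant parameters (illustrative)
x3, x4 = 2 * np.pi * 40e3, 0.5       # deliberately wrong initial estimates
eta3, eta4 = 1e3, 2e3                # adaptation rates (illustrative)

dt, T = 1e-8, 4e-3                   # 10 ns step, 4 ms run
x1 = x2 = 0.0
for k in range(int(T / dt)):
    t = k * dt
    u = 1.0 if (t * 10e3) % 1.0 < 0.5 else 0.1   # positive 10 kHz square wave
    dx1 = -A * x1 + A * B * u
    dx2 = -x3 * x2 + x3 * x4 * u
    e1 = x2 - x1
    x1 += dt * dx1
    x2 += dt * dx2
    x3 += dt * (-eta3 * e1 * dx2)    # reciprocal-time-constant estimate
    x4 += dt * (-eta4 * e1)          # gain estimate

print(f"x3 = {x3:.3e} (target A = {A:.3e}),  x4 = {x4:.3f} (target B = {B})")

Under the persistent excitation provided by the square wave, the output error shrinks and the estimates drift toward the plant parameters; the rates only set how quickly this happens.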
3. CIRCUIT IMPLEMENTATION

3.1 MITE Implementation of Log Domain Filters

Log domain filters are a dynamic extension of classical static translinear circuits. They offer wide tuning range, large dynamic range, and low voltage / low power operation. The circuit in Figure 2 is used both as the plant and the model, with labels in parentheses representing model parameters. Cascode transistors are not shown for clarity. In subthreshold operation the MITE current is an exponential function of the summed inputs,

$I \propto e^{\kappa \sum_i w_i V_i},$

where the $w_i$ are the input weights and $\kappa$ is the effective subthreshold exponential slope (in 1/V).

We apply Kirchhoff's Current Law (KCL) at the capacitive node to find the relationship between the MITE currents and the capacitive current:

$C \dot{V}_3 = I_{in} - I_3.$

We determine the transfer function for the output current $I_4$ by differentiating its MITE expression, which gives $\dot{I}_4 = \kappa I_4 \dot{V}_3$, and then substituting our results from the KCL and MITE relationships above:

$\dot{I}_4 = \frac{\kappa I_\tau}{C}\left[e^{\kappa(V_g - V_r)} I_{in} - I_4\right],$

which is a first order low-pass transfer function with time constant $\tau = C / (\kappa I_\tau)$ and a DC gain set by $e^{\kappa(V_g - V_r)}$. The time constant is the ratio between capacitance and bias current, easily tuned by adjusting the bias current.

Figure 2: Log domain MITE filter topology for a first order low-pass transfer function, used for both the plant and the model filter. Labels in parentheses refer to filter (model) variables, the rest to the plant.
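As a rough numeric check of this tuning relation (not a result from the paper), the sketch below evaluates τ = C/(κ·Iτ) and the corresponding corner frequency over the bias range used later in the simulations; the capacitance and κ values are assumptions.

import numpy as np

# Rough numeric check of tau = C / (kappa * I_tau).
# C and kappa are assumed values (not reported in the text); the I_tau values
# follow the bias sweep used in the simulations (10 nA to 160 nA).
C = 1e-12                  # assumed 1 pF at the log-domain node
kappa = 0.7 / 0.0258       # assumed kappa/U_T in 1/V at room temperature
for I_tau in (10e-9, 40e-9, 160e-9):
    tau = C / (kappa * I_tau)
    print(f"I_tau = {I_tau * 1e9:5.0f} nA  ->  tau = {tau * 1e6:6.3f} us,  "
          f"f_c = {1 / (2 * np.pi * tau) / 1e3:7.1f} kHz")

With these assumed values the corner frequency lands in the tens to hundreds of kilohertz, consistent with the simulated cutoff frequencies quoted in the abstract.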
3.2 MITE Implementation of Learning Rules
The plant and model are first order low-pass filters, each
with two adjustable parameters: gain and the reciprocal of
the time constant. We have implemented learning rules
derived using the Lyapunov method described in Section
2. The inputs to the learning rules are the system output
error and the temporal derivative of the model output. The
temporal derivative of the model output is computed using
the circuit shown in Figure 3: a wide range OTA operates as a voltage follower with a capacitor connected to its output, producing the current $I_d = I_{d1} - I_{d2}$. This current relates to the input voltage as

$I_d = \frac{s C_d}{1 + s C_d / g_{m12}}\, I_f R.$

When $g_{m12} \gg s C_d$, the output current is approximately the derivative of the input voltage, $I_d \approx s C_d I_f R$. To make $g_{m12}$ large, we operate the input devices near threshold. It is not necessary to explicitly convert the filter output current into a voltage; we use the intermediate node voltage $V_3$ directly as input to the temporal derivative computation.

Note that $V_3 = V_b + \beta \ln(\alpha I_f)$ and $\dot{V}_3 = \beta \dot{I}_f / I_f$, so the adaptation rule becomes $\dot{e}_2 \propto \beta (I_p - I_f)\, \dot{I}_f / I_f$.

The time constant learning rule requires a four quadrant multiplication, also implemented using a MITE circuit with inputs Id1, Id2, Ip and If and outputs Im1 and Im2. Schematics for the learning rules and summing nodes are shown in Figure 4: panel (a) shows the integrator for gain adaptation; panel (b) shows the cascode arrangement used in all current mirrors to minimize the Early effect and increase trans-amp gain; and panel (c) shows the integrator and differential pair for time constant adaptation.

Figure 3: Circuit for computing the temporal derivative.
Figure 4: MITE implementation of learning rules for gain
and time constant.
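Returning to the derivative circuit of Figure 3, the approximation holds while g_m12 stays well above ω·C_d. The sketch below compares the follower-plus-capacitor response with an ideal differentiator using assumed, illustrative component values and the reconstructed transfer expression given above.

import numpy as np

# Follower-with-capacitor differentiator: |I_d / V_in| versus the ideal s*C_d.
# C_d and g_m12 are assumed values; the roll-off shows where g_m12 >> w*C_d fails.
Cd, gm12 = 0.5e-12, 10e-6            # assumed 0.5 pF and 10 uS
for f in (1e3, 10e3, 100e3, 1e6):
    w = 2 * np.pi * f
    ideal = w * Cd
    actual = w * Cd / np.sqrt(1 + (w * Cd / gm12) ** 2)
    print(f"f = {f / 1e3:7.1f} kHz:  |I_d/V_in| = {actual:.3e} S  "
          f"(ideal {ideal:.3e} S, ratio {actual / ideal:.3f})")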
4. SIMULATION RESULTS
We simulate the circuit with HSPICE using BSIM3v3
models for a 0.35 μm technology. We use the technique in
[7] to avoid floating-node problems in the simulator. The
floating gate voltages are initialized to -1.6V with all
other nodes grounded in order to bias the log-domain
signal nodes (V1, V2, and V3) at Vdd/2 to ensure maximum
operating range. We use a square wave (Figure 5),
harmonic sine waves (Figure 6 a, b, and c), and geometrically spaced sine wave frequencies (Figure 6 d, e, and f) as inputs.
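For reference, the three stimulus classes can be generated as in the sketch below; the amplitudes and frequencies follow the text, while the sample rate, record length, and the exact geometric ratio are assumptions.

import numpy as np

# Test inputs used for the system identification experiments (illustrative).
fs = 10e6                                     # assumed 10 MS/s sample rate
t = np.arange(0, 2e-3, 1 / fs)

# 10 kHz square wave pulsing between 20 nA and 160 nA (Figure 5 input)
square = np.where((t * 10e3) % 1.0 < 0.5, 160e-9, 20e-9)

# sum of harmonically related sines at 10, 20, 40 and 80 kHz (Figure 6 a-c)
harmonic = sum(np.sin(2 * np.pi * f * t) for f in (10e3, 20e3, 40e3, 80e3))

# 14 geometrically spaced sines spanning 5 kHz to 97 kHz (Figure 6 d-f);
# in the current-mode circuit these sums would ride on a positive bias current.
freqs = 5e3 * (97e3 / 5e3) ** (np.arange(14) / 13)
mixed = sum(np.sin(2 * np.pi * f * t) for f in freqs)

print(square.shape, harmonic.shape, mixed.shape)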
Figure 5 shows adaptation with a 10 kHz square wave. The square wave pulses from 20 nA to 160 nA. Figure 5(a) is the error Ie between the plant and filter outputs. Figure 5(b) shows Vτ and VτBest. We intentionally vary the time constant of the plant (by a factor of 16) to see how well the filter adapts. The different Vτ values correspond to Iτ of 40 nA from 0-2 ms, 80 nA from 2-3 ms, 20 nA from 3-4 ms, 160 nA from 4-6 ms, and 10 nA from 6-8 ms. For all changes in Vτ, VτBest accurately tracks the new value, and Ie → 0 as VτBest → Vτ. Vgain is not shown here; it is fixed at 1.4 V, and Vg_est → Vgain at 0.65 ms. The adaptation rate depends on signal strength, the currents IgΔ and IτΔ, and the capacitors CG and CT.

Next, we show the adaptation when the input is a mixture of sine waves. In Figure 6 (a)-(c) we use a combination of sine waves at 10 kHz, 20 kHz, 40 kHz, and 80 kHz as input. In Figure 6 (d)-(f) the input is a summation of 14 sine waves whose frequencies are geometrically spaced by an irrational ratio, spanning from 5 kHz to 97 kHz. For these two very different inputs, VτBest accurately tracks Vτ [Figure 6 (b),(e)] and Vg_est tracks Vgain [Figure 6 (c),(f)], and Ie approaches zero once adaptation is complete.

Figure 5: Adapting with a 10 kHz square wave input signal.

Figure 6: Adapting with (a)-(c) 4 harmonic sine waves; (d)-(f) 14 geometrically spaced sine waves from 5-97 kHz.
5. CONCLUSIONS
The circuit design approach we have developed is novel
in that it utilizes log domain filters implemented with
MITE circuits to integrate learning rules for system
identification. We chose to implement adaptive filters
using a log domain topology because log domain filters
are compact current mode IIR filters that operate with low
power, have wide tuning range and large dynamic range,
and are capable of high frequency operation. Further, we have developed robust learning rules based on Lyapunov
stability. These learning rules are implemented using
MITE structures, highlighting the elegance and symbiotic
nature of the design methodology.
An earlier design of this adaptive system, with the derivative approximated by a high pass filter, has been fabricated in 0.5 μm and 0.35 μm technologies and is currently being tested. The plant, model, and learning rules occupy 260 μm by 150 μm in the 0.5 μm technology.
We are in the process of extending this work to higher
order adaptive filter structures.
ACKNOWLEDGEMENTS
We thank the MOSIS service for providing chip
fabrication through their Educational Research Program.
We thank Brad Minch, Paul Hasler, and Chris Diorio for
stimulating discussions at the Telluride Neuromorphic
Eng Workshop 1998. We thank Gert Cauwenberghs for
his guidance as advisor at JHU. P.A. is supported by an
NSF CAREER Award (NSF-EIA-0238061).
REFERENCES
[1] Hasler, P., Minch, B.A. and Diorio, C., "An Autozeroing
Floating-Gate Amplifier," IEEE Transactions on Circuits and
Systems II: Analog and Digital Signal Processing, vol. 48, pp.
74-82, 2001.
[2] Minch, B.A., Hasler, P. and Diorio, C., "Multiple-Input
Translinear Element Networks," IEEE Transactions on Circuits
and Systems II, vol. 48, pp. 20-28, 2001.
[3] Minch, B.A., "Multiple-Input Translinear Element Log-Domain Filters," IEEE Transactions on Circuits and Systems II:
Analog and Digital Signal Processing, vol. 48, pp. 29-36, 2001.
[4] Juan, J.-K., Harris, J.G. and Principe, J.C., "Analog
Hardware Implementation of Adaptive Filter Structures,"
presented at International Conference on Neural Networks,
1997.
[5] Stanacevic, M. and Cauwenberghs, G., "Charge-Based CMOS FIR Adaptive Filter," presented at Midwest Symposium on
Circuits and Systems, 2000.
[6] Narendra, K.S. and Annaswamy, A.M., Stable Adaptive
Systems. Prentice-Hall, New Jersey, 1989.
[7] Rahimi, K., Diorio, C., Hernandez, C., et al., "A Simulation Model for Floating-Gate MOS Synapse Transistors," presented at
ISCAS, 2002.