JOURNAL OF ADVANCED INSTRUMENTATION AND MEASUREMENT, 2010
Self-Tuning Time–Delay Compensation and
Equalization for Audio Entertainment Devices
T.E. Gibson, Y. Matton, J.W. Morin, D.E. Klenk and V.A. Hong
Abstract—There are many applications where audio tuning can enhance a user's listening experience. Tuning an audio setup can be challenging, especially when multiple speakers are involved. To minimize placement difficulties when installing or modifying a speaker setup, an automated system has been conceived. The system automatically adjusts speaker time delay, volume, and frequency content to achieve optimal listening conditions for the specific location of the user. The system sends a binary stochastic signal from each speaker, which is then received by two microphones in a remote control. The remote then transmits the recorded microphone data to the receiver, which calculates the sound lag and gain for each speaker from the user's position. The delay and volume of each speaker are then adjusted so that the optimal listening experience may be achieved.
Index Terms—time–delay, sound level, equalization, cross-correlation, impulse response, system identification
I. Introduction
THE position of speakers relative to a listener's location can dramatically alter sound experiences. Surround sound systems, for example, are designed to stimulate the 3D sphere of human hearing with audio channels above and below the listener. One issue commonly seen with speaker systems is that the intended sound experience can be significantly degraded depending on the location of the listener with respect to the speakers. The reason is that unless the suggested speaker placement is reproduced, the time for sound to travel to the user's ears varies, altering the intended audio profile of the speakers. Additionally, the shape, materials, and existing objects in a room can drastically affect the room's acoustics. As the audio signal travels from the speaker to the ear, reflection and damping due to a room's acoustics may introduce unwanted distortion. With this system, the intent is to minimize the degradation of sound quality through the use of electronics employing signal processing and system identification.
The proposed design automatically adjusts the acoustic properties of a speaker system to achieve optimal listening conditions. The design incorporates MEMS microphones into the system remote control to achieve a portable and convenient tuning device. By integrating the microphones into the remote, the system can tailor its sound properties to the listener's position in the room. While there have been other sound calibration devices in the past, none were designed with the microphones incorporated into the system remote. Another advantage of this design is that it can be used with virtually any type of speaker setup. Existing speaker calibration systems such as the Audyssey® MultiEQ™ require microphones in each speaker, which increases cost and cannot adjust for a variable listener location.

All authors are with the Mechanical Engineering Department, Massachusetts Institute of Technology, Cambridge MA, 02139, USA.
This paper focuses on the system setup and the acoustic
characterization methods that are to be performed by the
device. Presently, the system prototype utilizes a single
speaker and single detection microphone.
II. Mathematical Preliminaries: System
Identification
System identification is a robust field in engineering, devised by control engineers and mathematicians looking to
answer the following question. Given the input to a Linear Time Invariant (LTI) system, x(t) where t is time, and
knowing the output of that system, y(t), can the dynamics
of said system be characterized? This question is illustrated in Figure 1, where h(τ) is defined as the impulse response function, which completely characterizes the dynamics of the LTI system. The relationship between the input, the impulse response, and the output of a linear system is described below:
y(t) = (h ∗ x)(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ,   (1)
where the operation (·) ∗ (·) is convolution; in this formulation, h is convolved with x in order to obtain y. In this work the input and output of the system will be digitally analyzed, and for that reason the discrete forms of x(t) and y(t) are represented as x_i and y_i respectively, where the subscript notation indexes the i-th sampled component of the signals.
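In discrete time, (1) becomes the sum y_i = Σ_j h_j x_{i−j}. As an illustrative sketch (assuming NumPy; the three-tap impulse response and input below are made up, not the system identified in this work):

```python
import numpy as np

# Hypothetical 3-tap impulse response and a short input sequence
h = np.array([0.5, 0.3, 0.1])
x = np.array([1.0, 0.0, 0.0, 2.0, 0.0])

# Discrete convolution: y_i = sum_j h_j * x_{i-j}
y = np.convolve(h, x)

# A unit impulse in x reproduces h in y; the impulse of height 2
# at x_3 reproduces 2h starting at y_3.
print(y)
```

Because the input contains isolated impulses, the output simply replays scaled copies of h, which is exactly the sense in which h "completely characterizes" the system.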
The process by which the impulse response function is extracted from the known input and output of a system is not straightforward, and relies on the concepts of autocorrelation and cross-correlation of signals. The autocorrelation of a discrete time sequence x_i of N samples indexed with i = 0 to N − 1 is defined as

C_xx,j = (1/N) Σ_{i=0}^{N−j−1} (x_i − x̄) · (x_{i+j} − x̄)   (2)
where x̄ denotes the mean of x, j = 0, 1, . . . , m, and C_xx ∈ R^{m+1} is a column vector indexed by j, with m representing the total number of lags. The lag spacing is the sampling period, i.e. the inverse of the sampling rate in seconds. As an example, if data were sampled at 1 kHz and m = 1000, then the autocorrelation would contain 1001 data points starting at 0 seconds and spaced at 0.001 second intervals.

Fig. 1. Impulse response function in relation to input and output of a linear system, with discrete approximations for input and output.

The cross-correlation of two finite sequences is defined as
C_xy,j = (1/N) Σ_{i=0}^{N−j−1} (x_i − x̄) · (y_{i+j} − ȳ)   (3)
where j = 0, 1, . . . , m and C_xy ∈ R^{m+1} is a column vector indexed by j. The cross-correlation and autocorrelation are then related to the discrete impulse response function h through the following representation of discrete convolution:

C_xy = Toeplitz{C_xx} h   (4)
where h_j is the j-th lag of the discrete impulse response function h ∈ R^{m+1} and the Toeplitz operation constructs a symmetric matrix of the following form:

               ⎡ a_0  a_1  a_2   ⋯   a_m ⎤
               ⎢ a_1  a_0  a_1   ⋱    ⋮  ⎥
Toeplitz{a} =  ⎢ a_2  a_1  a_0   ⋱   a_2 ⎥   (5)
               ⎢  ⋮    ⋱    ⋱    ⋱   a_1 ⎥
               ⎣ a_m   ⋯   a_2  a_1  a_0 ⎦
with a ∈ R^{m+1} a dummy variable used to illustrate the operation [1]. The discrete impulse response function is then obtained as
h = Toeplitz{C_xx}^{−1} C_xy.   (6)
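The chain (2)–(6) can be sketched end to end: simulate an LTI system driven by a binary stochastic signal, build the correlations, and deconvolve by solving the Toeplitz system. This is an illustrative reconstruction assuming NumPy/SciPy with a made-up impulse response, not the authors' code (their MATLAB scripts are in Appendix A):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
N, m = 20_000, 8
h_true = np.array([0.0, 0.8, 0.3])       # hypothetical impulse response
x = rng.choice([-1.0, 1.0], size=N)      # binary stochastic input
x_full = np.convolve(x, h_true)          # full convolution
y = x_full[:N]                           # simulated LTI output

def corr(a, b, m):
    """Biased correlation C_ab,j = (1/n) sum_i (a_i - abar)(b_{i+j} - bbar)."""
    ac, bc = a - a.mean(), b - b.mean()
    n = len(a)
    return np.array([np.dot(ac[:n - j], bc[j:]) / n for j in range(m + 1)])

Cxx = corr(x, x, m)                      # Eq. (2)
Cxy = corr(x, y, m)                      # Eq. (3)
# Eq. (6): h = Toeplitz{Cxx}^{-1} Cxy, i.e. deconvolve Cxx from Cxy
h_est = np.linalg.solve(toeplitz(Cxx), Cxy)
print(np.round(h_est[:3], 2))
```

With a white binary input, Toeplitz{C_xx} is close to the identity (scaled by the signal variance), so the estimate recovers the taps of h_true to within the noise floor of the finite-length correlations.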
Note that inverting the Toeplitz matrix is just the discrete operation for deconvolving C_xx from C_xy. The process by which the discrete impulse response function is obtained from the discrete inputs and outputs of a linear system has now been fully described; however, the computations just described are computationally burdensome. For that reason the dual of convolution in the frequency domain is introduced, but first the Discrete Fourier Transform (DFT) is formalized. The DFT operation F : C^N → C^N is defined as X = F{x} where
X_k = Σ_{n=0}^{N−1} x_n e^{−2πi kn/N},   k = 0, . . . , N − 1   (7)

where i is the imaginary unit. The inverse Discrete Fourier Transform (iDFT) F^{−1} : C^N → C^N is defined as x = F^{−1}{X} where

x_n = (1/N) Σ_{k=0}^{N−1} X_k e^{2πi kn/N},   n = 0, . . . , N − 1.   (8)

Letting b and c represent finite dimensional complex column vectors, the dual of convolution in the frequency domain is

F{b ∗ c} = F{b} · F{c}^∗   (9)

where (·)^∗ is the complex conjugate operator. Recalling that the autocorrelation and cross-correlation are convolution operations in the discrete time domain, the following operations are equivalent to the previously defined correlation operations:

C_xx = Real( F^{−1}{ F{x − x̄} · F{x − x̄}^∗ } )   (10)

C_xy = Real( F^{−1}{ F{x − x̄} · F{y − ȳ}^∗ } )   (11)

where the function Real(·) returns the real component of a finite dimensional complex vector. The operations defined above differ from the previous definitions in the scaling of the correlation vectors as well as in their length. In the discrete time domain representation the total number of lags in the correlations was m + 1; using the Fourier Transform to obtain these correlations, the lengths of C_xx and C_xy are N, which is equal to the dimension of x and y. The specific functions used in this work to autocorrelate, cross-correlate and deconvolve are contained in Appendix A.

Earlier it was mentioned that the discrete time domain operations were costly to compute. Using the DFT as described above will not decrease the computation time required to obtain the impulse response function. However, the Fast Fourier Transform (FFT) is equivalent to the DFT and reduces the number of mathematical operations from O(N²) to O(N log N). As an example, the FFT of a data set with 10,000 points incurs 0.4% of the computational cost of the DFT. Notice in Appendix A that the FFT function is used and not the DFT function. For more details on the FFT refer to [2] and [3].

III. Hardware and Experimental Setup

A block diagram of the experimental setup is shown in Figure 2. First, a computer is used to generate a stochastic binary signal that is written to an analog output channel of a data acquisition unit (DAQ). The analog output signal is fed to an audio power amplifier that drives a speaker. Simultaneously, analog input channels on the DAQ sample both the analog output excitation signal and the response signal produced by the microphone. Note, the direct connection between the analog in and analog out is for investigating a delay inherent in the DAQ itself, described below. A power supply provides the power input to the speaker and microphone amplifiers. After the stochastic signal is completed, the computer reads in the recorded sound data from the DAQ's buffer for processing.

Fig. 2. Experimental setup.

TABLE I
Experimental Equipment.

Item              Model          Manufacturer
MATLAB            R2009A         MathWorks
LabVIEW           V9.0f3         Nat. Instruments
DAQ               USB-6215       Nat. Instruments
Power Supply      E3631A         Hewlett-Packard
Speaker (15 W)    TR600-CXi      JL Audio
Audio Amp         LM4755         National Semi.
Electret Mic.     MD9745APZ-F    Knowles Acoustics
Electret Amp      BOB-08669      Sparkfun.com
MEMS Mic.*        SPM0404HE5H    Knowles Acoustics
Op-Amp*           LM741CN        National Semi.
Audio Amp*        LM386          National Semi.
Speaker (0.5 W)*  COM-09151      Sparkfun.com
*not used in final system
The experimental apparatus underwent two iterations. The final setup was designed for long range experiments, where the distance between the speakers and receiving microphone approached ten meters. The initial hardware ultimately served as a proof of concept, as it did not meet range and sensitivity requirements. The initial setup is outlined below to show briefly the design considerations before focusing on the final setup. Hardware from both the initial and final apparatus designs is listed in Table I.
The initial test setup used a Knowles Acoustics MEMS microphone, shown in Figure 3, and a low cost 8 Ω, 0.5 W speaker (not shown). The MEMS microphone output was amplified by a simple non-inverting amplifier using an LM741 op-amp. The 0.5 W speaker was driven by a low power LM386 audio amplifier. A LabVIEW Virtual Instrument (VI) was created to interface with the DAQ, send an excitation signal to the speaker, and read in the microphone's response signal. This setup provided a functional test bed for designing and refining the VI, but it lacked both the power and the microphone sensitivity needed for long range tests, even given the low power output of the speaker.

Fig. 4. Breadboard with MEMS microphone, electret microphone, low-power audio amp, high-power stereo amp and IR sensor.
The second and final hardware setup used an electret microphone with its associated signal conditioning circuit, seen in Figure 12. Additionally, a new 15 W speaker and audio amplifier were chosen. The speaker, designed for car stereos, is 4 Ω and has a significantly improved frequency response, 59 Hz - 22 kHz, as compared to the previous speaker. To drive the speaker, an LM4755 stereo audio amplifier was used. A photo of the circuits built is shown in Figure 4.
IV. Experimental Results
Three different experiments were carried out in this work. Originally two experiments were planned; however, there was an artifact in the data that precipitated a third. The three experiments are presented as follows: DAQ delay investigation, speed of sound verification, and frequency shaping.

Fig. 3. Microphones. Left: Knowles Acoustics MEMS surface mount microphone SPM0404HE5H-PB. Right: Knowles Acoustics electret condenser microphone MD9745APZ-F on breakout board, sparkfun.com.
A. DAQ Delay
While performing experiments to determine the speed
of sound in air, a spike in the impulse response function
Fig. 5. Analog I/O connections for characterization of DAQ delay.
appeared at 1 lag. This was repeatable and suggested that
there was a delay in the system of 1 lag. In order to
confirm this, the analog output from the DAQ was connected directly to the first two analog input channels as
shown in Figure 5. This would short circuit the dynamics
of the speaker and the microphone and allow the impulse
response function of just the DAQ’s D/A and A/D conversion to be extracted. A two second binary stochastic signal
was sent from the analog out of the DAQ while sampling
on all three channels was performed at 100 kHz. For the
first DAQ delay system identification Analog Out 1 was
chosen as the input x and Analog In 1 as the output y.
The results of this experiment are contained in Figure 6.
The autocorrelation function has a value of 1 at 0 lag
and negligible magnitude for all other lag values. The autocorrelation function illustrates the memory property of
a signal. A signal with an autocorrelation value of 0 for all
non–zero lags is a signal whose values have no dependence
on previous values. Therefore it is confirmed that the input signal is a random data stream. The cross–correlation
function shows a spike at a lag of 1, which is then captured
as a spike at a lag of 1 in the impulse response function
as well. This confirms the 1 lag delay in the internal dynamics of the DAQ. This experiment was repeated at a
sampling rate of 1 kHz with identical results; thus the delay is independent of sampling rate. The experiment was also repeated using Analog Input 2 of the DAQ as y, and the 1 lag delay was still present.
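The artifact is easy to reproduce in simulation: delaying a binary stochastic signal by one sample puts the cross-correlation spike at lag 1. A sketch with synthetic data (assuming NumPy; not the actual DAQ measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000
x = rng.choice([-1.0, 1.0], size=N)   # analog-out excitation
y = np.roll(x, 1)                     # analog-in with a 1-sample delay
y[0] = 0.0

def corr(a, b, m):
    """Biased cross-correlation over lags 0..m."""
    ac, bc = a - a.mean(), b - b.mean()
    return np.array([np.dot(ac[:N - j], bc[j:]) / N for j in range(m + 1)])

Cxy = corr(x, y, 10)
print(int(np.argmax(np.abs(Cxy))))    # lag of the spike: 1
```

Because the excitation is white, the cross-correlation is (up to scale) the impulse response itself, so a pure transport delay shows up as a single spike at the delaying lag.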
Fig. 7. Impulse response function for distance of 0.5 m.
TABLE II
Experimental Data.

Distance [m]   Time [s]
4.835          0.0136
5.317          0.0150
5.504          0.0153
One explanation of the delay could be the choice of triggering method used in the construction of the LabVIEW Virtual Instrument (VI). The triggering methods for timing the data acquisition were not all judiciously explored, and the true origin of this artifact was never determined.
B. Speed of Sound
The time delay of the system will be estimated as the
lag time of the largest peak in the impulse response function from the audio amp input to microphone amp output
as measured by the DAQ. The experimental set up is as
depicted in Figure 2 with sampling occurring at 100 kHz.
An experimental result of the impulse response function
for a speaker and microphone that were 0.5 meters apart
is shown in Figure 7. Notice the artifact spike at 1 lag as
discussed in the previous section. That spike is ignored
and the true delay of the system is estimated as 157 lags
(0.00157 seconds).
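Converting the lag of the peak to a distance is a one-liner: divide by the sampling rate to get the delay, then multiply by the speed of sound (346.658 m/s is the fitted value reported in this section):

```python
fs = 100_000           # sampling rate [Hz]
lag = 157              # lag of the largest peak in the impulse response
c = 346.658            # fitted speed of sound [m/s]

delay = lag / fs       # 0.00157 s
distance = c * delay   # ~0.54 m, on the order of the 0.5 m separation
print(delay, distance)
```

The small excess over 0.5 m is consistent with the residual offset the linear fit absorbs.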
This procedure was repeated at 3 different distances centered around 5 meters; see Table II. A linear least squares fit was performed on this data to determine the speed of sound. The curve fit is shown in Figure 8. A value of 346.658 m/s was found for the speed of sound, with an offset of 0.1205 m and a standard deviation for the linear fit of 0.0001435. Using the ideal gas law, the speed of sound in air can be approximated as

c = √(γRT/M),   (12)

where γ = 1.4 is the adiabatic index of air, R = 8.314510 J·mol⁻¹·K⁻¹ is the molar gas constant, M = 0.0289645 kg·mol⁻¹ is the mean molar mass of air and T is the temperature. Assuming room temperature (293 K), the speed of sound is 346.583 m/s. While the exact temperature of the room was not recorded during the experiment, the low standard deviation, together with agreement with the T = 293 K prediction to within 1%, suggests that this method accurately estimates the time it takes for sound to travel in air.

Fig. 6. Data analysis to determine DAQ delay.

Fig. 8. Curve fit to determine the speed of sound: d = 346.658t + 0.1205.

C. Equalization: Frequency Shaping

Consider the dynamical block diagram shown in Figure 9. This schematic is similar to Figure 2; however, a pre-filter H_p has been placed in front of the plant dynamics H. Note the upper case notation, which simply denotes the equivalent dynamics in the frequency domain instead of the time domain. The goal of the experiment in this section is to design the filter so that the shaped output Y_p has a pre-determined shape in the frequency domain. The dynamics described above are represented as

Y_p(f) = H(f) H_p(f) X(f),   (13)

with the discrete representation

Y_p = H · H_p · X,   (14)

where Y_p, H_p, H and X ∈ C^N.

For this experiment a desired output,

Y_desired = G · ω₂^3.5 (s² + 2ζω₁s + ω₁²) / (ω₁² (s² + 2ζω₂s + ω₂²)^1.75)   (15)

was chosen, where G = Y₁, ζ = 0.707, ω₁ = 150 and ω₂ = 2000 are free design parameters pre-defined to suit the end user's desired listening experience, and s = fi with f the frequency and i the imaginary unit. The pre-filter was then selected as

H_p ≜ Y_desired / Y   (16)

where the / symbol denotes point-wise division of two finite dimensional vectors.

The frequency shaping experiment was carried out in two steps. First, an unfiltered binary stochastic signal was sent to the audio amp with the microphone 0.3 meters from the speaker. The results of this experiment are shown in Figure 10. The light gray lines show the frequency content from the microphone amp, Y. The next step was to determine the pre-filter as illustrated in Equation 16. Before the pre-filter could be computed, the noise had to be removed from Y. The black line shows the result of log-spaced averaging of Y, denoted Y_mean. The pre-filter dynamics were then redefined as

H_p ≜ Y_desired / Y_mean.   (17)

The second part of the experiment was carried out by passing the binary stochastic signal through the pre-filter before it entered the audio amp. The results from this experiment are also contained in Figure 10. The dash-dot line represents the desired frequency content of the microphone output, and the dark gray line the actual filtered output. From the figure it is apparent that the unfiltered and filtered microphone outputs are distinctly different, and the filtered system response is similar to the desired frequency shape.

Fig. 9. Pre-filter for desired frequency content.

Fig. 10. Transfer function and model fit: |H(f)| versus frequency [1/s] for the unfiltered, shaped, log-mean and desired responses.
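The pre-filter arithmetic of (15)–(17) reduces to element-wise array operations. In the sketch below (assuming NumPy) Y_mean is a fabricated smoothed magnitude response, not the measured one, and the gain is set to G = 1 for illustration; the remaining design parameters follow the text (ζ = 0.707, ω₁ = 150, ω₂ = 2000):

```python
import numpy as np

zeta, w1, w2 = 0.707, 150.0, 2000.0
f = np.logspace(1, 4, 512)      # frequency grid [1/s]
s = 1j * f                      # s = fi as in the text

# Desired shape, Eq. (15), with G = 1 for illustration
Yd = (w2**3.5 * (s**2 + 2*zeta*w1*s + w1**2)) \
     / (w1**2 * (s**2 + 2*zeta*w2*s + w2**2)**1.75)

# Stand-in for the measured, log-mean-smoothed response Y_mean
Ymean = 1.0 / (1.0 + (f / 3000.0)**2)

# Pre-filter, Eq. (17): point-wise division
Hp = Yd / Ymean

# Applying the pre-filter to the smoothed response recovers the target
print(np.allclose(Hp * Ymean, Yd))  # True
```

In practice the match is only as good as the smoothed estimate Y_mean, which is why the measured shaped response in Figure 10 approaches, but does not exactly equal, the desired curve.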
V. Hand-Held Interface Design
The experiments just described were a proof of concept for a handheld device. The final product would center around a remote control with two microphones embedded in the side, Figure 11. A low-power RF transceiver (e.g. TI-CC2500) would serve as a data link between the microphone and audio receiver. An ARM® microcontroller interfaced with a transceiver would be integrated inside the remote, and a similar microcontroller-transceiver architecture would reside in the audio receiver. In this configuration: (1) the audio receiver would transmit a binary stochastic signal to each speaker; (2) the acoustic profile of each speaker would be received by the microphones; (3) the recorded information would then be transmitted back to the receiver, where the time delay and the frequency signature of each speaker would be determined; (4) each
speaker would then have a unique time delay and pre-filter setting within the receiver, ensuring an optimal listening experience.

Fig. 11. Concept remote with two electret microphones.
VI. Conclusions and Future Work
The audio calibration system can achieve optimized sound conditions with features not currently available in existing products. To design this system, software packages such as LabVIEW® and MATLAB® have provided quick and efficient environments for testing signal processing algorithms, which can be ported over to specific embedded hardware in future iterations. While the system currently employs a single microphone and speaker, it can easily be adapted for listening environments containing multiple speakers. With a user-friendly interface connected to high-quality audio components, the SpeakerBox audio adjustment system will be capable of delivering an excellent listening experience through the use of delay and frequency adjustment.
VII. Acknowledgements
This work was completed in fulfillment of the final project in course 2.131 Advanced Instrumentation and Measurement at the Massachusetts Institute of Technology. The authors would like to thank Professor Ian Hunter for illustrating the techniques for proper system identification, with specific recognition for his help in developing the frequency shaping techniques used in this work. The authors are also extremely grateful for the help that the teaching assistant, Adam J. Wahab, offered. Adam was instrumental in the formulation of the project idea and helped with the purchase and design of all the hardware components. A special thanks to National Instruments for the DAQ and LabVIEW software that was used.
References
[1] I. W. Hunter, "2.131 Advanced Instrumentation and Measurement," Class Notes, Spring 2010.
[2] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, vol. 19, 1965.
[3] P. Duhamel and M. Vetterli, "Fast Fourier transforms: A tutorial review and a state of the art," Signal Processing, vol. 19, 1990.
[4] G. E. P. Box, G. M. Jenkins, and G. C. Reinsel, Time Series Analysis: Forecasting and Control, Prentice-Hall, 3rd edition, 1994.
Appendices

A. MATLAB Scripts Used

In the following MATLAB scripts, the frequency techniques for convolution were utilized following [4].

A. Autocorrelation function
-------------------------------------------------------
function ACF = autocorr(Series, nLags)
% FFT-based autocorrelation, normalized to 1 at zero lag.
nFFT = 2^(nextpow2(length(Series)) + 1);
F    = fft(Series - mean(Series), nFFT);
F    = F .* conj(F);
ACF  = ifft(F);
ACF  = ACF(1:(nLags + 1));  % keep lags 0..nLags
ACF  = ACF ./ ACF(1);       % normalize by the zero-lag value
ACF  = real(ACF);
-------------------------------------------------------
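The listing above can be cross-checked against Eq. (10) outside of MATLAB; the following Python equivalent (assuming NumPy) is an editorial aid, not part of the original appendix:

```python
import numpy as np

def autocorr_fft(x, nlags):
    """FFT-based autocorrelation, normalized to 1 at zero lag (cf. Eq. 10)."""
    x = np.asarray(x, dtype=float)
    nfft = 2**(int(np.ceil(np.log2(len(x)))) + 1)   # nextpow2 + 1, as above
    F = np.fft.fft(x - x.mean(), nfft)
    acf = np.fft.ifft(F * np.conj(F)).real[:nlags + 1]
    return acf / acf[0]

rng = np.random.default_rng(3)
x = rng.choice([-1.0, 1.0], size=4096)   # binary stochastic signal
acf = autocorr_fft(x, 5)
print(acf[0])   # zero lag is exactly 1 after normalization
```

Padding to more than twice the signal length before the FFT avoids the wrap-around (circular correlation) error that would otherwise contaminate the low lags.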
B. Crosscorrelation function
-------------------------------------------------------
function XCF = crosscorr(Series1, Series2, nLags)
Series1 = Series1 - mean(Series1);
Series2 = Series2 - mean(Series2);
L1   = length(Series1);
L2   = length(Series2);
nFFT = 2^(nextpow2(max([L1 L2])) + 1);
F    = fft([Series1(:) Series2(:)], nFFT);
XCF  = ifft(F(:,1) .* conj(F(:,2)));
% Zero-lag autocorrelations, used to normalize the cross-correlation
ACF1 = ifft(F(:,1) .* conj(F(:,1)));
ACF2 = ifft(F(:,2) .* conj(F(:,2)));
XCF  = XCF([(nLags+1:-1:1) (nFFT:-1:(nFFT-nLags+1))]);
XCF  = real(XCF) / (sqrt(real(ACF1(1))) * sqrt(real(ACF2(1))));
-------------------------------------------------------
C. Deconvolution function
------------------------------------------------------function [q,r]=deconv(b,a)
[mb,nb] = size(b);
nb = max(mb,nb);
na = length(a);
if na > nb
q = zeros(superiorfloat(b,a));
r = cast(b,class(q));
else
[q,zf] = filter(b, a, [1 zeros(1,nb-na)]);
if mb ~= 1
q = q(:);
end
if nargout > 1
r = zeros(size(b),class(q));
lq = length(q);
r(lq+1:end) = a(1)*zf(1:nb-lq);
end
end
-------------------------------------------------------
B. Circuit Diagrams

Fig. 12. Electret microphone breakout board amp arrangement using OPA344.

Fig. 13. Low power audio amp using LM386.

Fig. 14. High power stereo amp arrangement using LM4755.

Travis E. Gibson (B.S.'06, M.S.'08) was born in Jacksonville, FL, in 1984. He received the B.S. degree in mechanical engineering from Georgia Institute of Technology, Atlanta, in 2006 and the M.S. degree in mechanical engineering from Massachusetts Institute of Technology, Cambridge, in 2008. He is currently a Ph.D. candidate at MIT, where he is researching adaptive control under the supervision of Dr. Anuradha Annaswamy. In the summers of 2003 to 2006 he worked at Vistakon, Jacksonville FL, the summers of 2008 and 2009 at NASA Langley Research Center, Hampton VA, and the summer of 2010 at Boeing, Huntington Beach CA. He is also a student member of the AIAA and IEEE.

Yves Matton (B.S.'08) was born in Paris, France, in 1986. He received the B.Sc. degree in technological innovation from Ecole Polytechnique (Paris) in 2008, and is currently an M.S. degree candidate in mechanical engineering at MIT. He is currently a research assistant working on non-Newtonian fluids under the supervision of Professor L.J. Gibson and Professor G.H. McKinley.

Jeffrey W. Morin (B.S.'09) was born in Manchester, New Hampshire, in 1986. He received the B.S. degree in mechanical engineering from the University of New Hampshire, Durham, in 2009. He is currently an M.S. candidate at Massachusetts Institute of Technology, Cambridge, where he is researching active fluids and robotic locomotion under the supervision of Dr. Anette Hosoi. In the summers of 2005 to 2009 he worked at Advanced Combustion Technology, Hooksett NH, where he developed ultra-low NOx burners for large scale power plants. He is also a student member of Tau Beta Pi and ASME.

Dan E. Klenk (S.B.'09) received an S.B. in mechanical engineering from the Massachusetts Institute of Technology, Cambridge, in 2009. He is currently a master's student, also at MIT, researching walking dynamics and biomechanics.

Vu A. Hong (S.B.'10) was born in San Diego, California, in 1988. He is currently an S.B. degree candidate in mechanical engineering at the Massachusetts Institute of Technology, Cambridge, where he is researching bacteria mediated ice nucleation under the supervision of Professor Evelyn Wang. He will be working at Palo Alto Research Center, Palo Alto, CA, this summer, and will be pursuing graduate studies at Stanford University thereafter.