Submitted to the Acoustical Society of America First Pan-American/Iberian Meeting on Acoustics, Cancun,
Mexico, December 2002
Solution of Student Challenge Problem
Technical Committee on Signal Processing in Acoustics, ASA
Mark D. Skowronski
Computational Neuro-Engineering Laboratory
University of Florida, Gainesville, FL, USA
Problem Statement
We are given three time-domain sequences that can be modelled as sine waves embedded
in additive noise. The samples can be modelled as follows:
x1(t) = A1 sin(2π f1 t) + N1(t)
x2(t) = A2 sin(2π f2 t) + N2(t)
x3(t) = A3 sin(2π f3 t) + A4 sin(2π f4 t) + N3(t)
We need to find the frequencies f1 – f4 and the amplitudes A1 – A4, as well as the noise
signals N1(t) – N3(t). The following is known about the unknown parameters:
50 < A1 < 200
50 < A2 < 200
50 < A3 < 200
10 < A4 < 50
5 < f1 < 8 Hz
5 < f2 < 8 Hz
5 < f3 < 8 Hz
0.5 < f4 < 2 Hz
No information is provided about the noise signals. Note that all sinusoids are assumed
to have zero-phase.
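As a concrete illustration of these models, the following sketch generates synthetic versions of x1 and x3 (the parameter values and Gaussian noise levels here are arbitrary choices within the stated bounds, not the true ones):

```python
import numpy as np

fs = 80.0                    # sampling rate, Hz
n = np.arange(161)           # 2 s of data at 80 Hz (161 samples)
t = n / fs
rng = np.random.default_rng(0)

# Illustrative values only: 50 < A1 < 200, 5 < f1 < 8 Hz
A1, f1 = 120.0, 6.5
x1 = A1 * np.sin(2 * np.pi * f1 * t) + rng.normal(0.0, 15.0, t.size)

# Two-sinusoid model for x3, with 10 < A4 < 50 and 0.5 < f4 < 2 Hz
A3, f3, A4, f4 = 140.0, 7.0, 20.0, 1.4
x3 = (A3 * np.sin(2 * np.pi * f3 * t)
      + A4 * np.sin(2 * np.pi * f4 * t)
      + rng.normal(0.0, 10.0, t.size))
```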
The Data
The signals are provided in Matlab data files. Each signal is sampled at 80 Hz with 16-bit
resolution and is 2 seconds in duration (161 samples). For a noise-free sinusoid, only
three samples of infinite precision are necessary to exactly determine the sinusoid
amplitude, frequency, and phase (given that the Nyquist criterion for sampling is met). When
additive noise is present, or when the samples are of finite precision, the problem
becomes one of spectral estimation. For any finite-length sample of a sinusoid, the
method employed for estimation of signal parameters (amplitude, frequency) should
address two issues: 1) accuracy of estimation and 2) robustness to noise. Several methods
for parameter estimation are proposed and compared as to how they address parameter
estimation accuracy and noise robustness.
Parameter Estimation Methods
The various methods must estimate both amplitude and frequency accurately. Some
methods (time domain: peak picking, autocorrelation, zero-crossing) have coarse
resolution of frequency, since a small integer (period, lag) is used to compute the
frequency estimate. Other methods are highly sensitive to noise. Linear prediction (LP)
models the signal with an all-pole filter, yet additive noise can produce zeros in the
signal, weakening the effectiveness of the all-pole model [1]. Other methods only
determine frequency: multiple signal classification (MUSIC), estimation of signal
parameters via rotational invariance techniques (ESPRIT), and the method of Chan et al. [1].
These pseudo-spectral methods require a second technique for estimating amplitude
(typically linear regression), and they assume Gaussian noise [2]. These methods do not
find parameters of lowest variance for the Gaussian noise assumption, though they are
computationally tractable. The discrete Fourier transform (DFT) can be used to find a
sinusoid basis function that is maximally correlated with the given signal, and arbitrary
frequency and amplitude accuracy can be achieved by zero-padding the discrete-time
signal before computing the signal’s DFT. For signals with only one sinusoid embedded
in noise, the highest peak in the DFT-based periodogram provides the maximum
likelihood (ML) parameter estimates, which are the parameters most likely to describe a
finite-length sequence (lowest estimate variance) [3]. The periodogram, however, does
not produce ML estimates for signals with more than one sinusoid embedded in noise.
An algorithm which finds the ML estimates for the general case of several sinusoids
embedded in Gaussian noise is the method of nonlinear least-squares (LS) [3]. We
employ nonlinear LS for this problem for three reasons: 1) The measure is
mathematically tractable (vertical offset method). 2) Most real-world noise sources are
Gaussian. Since LS concentrates on second-order statistics, it readily handles Gaussian
noise [4]. 3) LS provides the ML estimates for signals composed of more than one
sinusoid, which is the case for this problem.
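For the single-sinusoid case, the zero-padded periodogram-peak estimate mentioned above can be sketched as follows; the test signal and the zero-padding length are arbitrary choices for illustration:

```python
import numpy as np

fs = 80.0
t = np.arange(161) / fs
A_true, f_true = 100.0, 6.3        # illustrative values within the bounds
x = A_true * np.sin(2 * np.pi * f_true * t)

# Zero-pad to refine the DFT frequency grid to fs/nfft spacing
nfft = 2 ** 16
X = np.fft.rfft(x, nfft)
freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
k = np.argmax(np.abs(X))
f_est = freqs[k]
A_est = 2.0 * np.abs(X[k]) / len(x)  # rescale the |DFT| peak to an amplitude
```

The estimates are close but not exact: leakage from the negative-frequency image biases both values slightly, even in the noise-free case.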
Nonlinear least-squares method
The method consists of finding the parameters that satisfy the following cost function:

[Ai, fi] = arg min over [Ai, fi] of  Σn [ x(n) − Ai sin(2π fi n / fs) ]²
for the single-sinusoid signals x1 and x2, where fs is the sampling rate (80 Hz). For the
two-sinusoid signal x3, the above equation would contain a second sinusoid with
amplitude and frequency [Aj,fj] independent from [Ai,fi]. Because of the low
dimensionality of parameter space (2 dimensions for x1 and x2, 4 dimensions for x3) as
well as the given constraints on the parameters of interest, we use the coarse-to-fine mesh
method. This ‘brute force’ method is effective for low-dimensional constrained
problems, while other optimization techniques may be preferred in higher-dimensional
spaces (Monte Carlo methods, conjugate gradient descent, Newton methods).
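The cost function can be transcribed directly; a minimal sketch (function and variable names are this example's own):

```python
import numpy as np

fs = 80.0
n = np.arange(161)

def ls_cost(x, A, f):
    """LS cost for a single zero-phase sinusoid model."""
    return np.sum((x - A * np.sin(2 * np.pi * f * n / fs)) ** 2)

def ls_cost2(x, Ai, fi, Aj, fj):
    """LS cost for the two-sinusoid model used for x3."""
    model = (Ai * np.sin(2 * np.pi * fi * n / fs)
             + Aj * np.sin(2 * np.pi * fj * n / fs))
    return np.sum((x - model) ** 2)
```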
Coarse-to-fine Method
The coarse-to-fine method is an iterative procedure, choosing parameters after each
iteration directly from evaluation of the cost function. At each iteration, points are
chosen along a coarse grid, equally spaced in a constrained parameter space. The cost
function is evaluated at every point along the grid, and the point which produces the
minimum cost is determined. This point is then used as the center of a hypercube which
spans a smaller region of the previous constrained parameter space (usually the
hypercube vertices are the nearest neighbor grid points along each dimension from the
previous constrained parameter space). This ‘zooming in’ is repeated for each iteration
until sufficient precision in the estimated parameters is achieved. Figure 1 shows the LS
error surface for x1 across the constrained parameter space. The surface appears
well-behaved (a single global minimum with a wide attraction domain) and suitable for
coarse-to-fine optimization. A similar error surface for x2 can be seen in Figure 2. Error from
the coarse-to-fine method can occur if the mesh is too coarse. That is, if the initial grid
points of the first iteration do not fall near the global minimum, the method may
converge to a local minimum. Judging from the error surfaces for x1 and x2, initial
spacing along the frequency axis should be finer than 0.25 Hz, while spacing along the
amplitude axis should be finer than 50.
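A sketch of this procedure for one sinusoid (the grid size, zoom rule, and iteration count are choices of this sketch; the initial spacings of 0.15 Hz and 7.5 amplitude units respect the limits noted above):

```python
import numpy as np

fs = 80.0
n = np.arange(161)

def coarse_to_fine(x, A_lo=50.0, A_hi=200.0, f_lo=5.0, f_hi=8.0,
                   n_grid=21, n_iter=8):
    """Iteratively zoom a rectangular grid onto the LS cost minimum."""
    for _ in range(n_iter):
        A_grid = np.linspace(A_lo, A_hi, n_grid)
        f_grid = np.linspace(f_lo, f_hi, n_grid)
        # Evaluate the LS cost at every (A, f) grid point at once
        model = (A_grid[:, None, None]
                 * np.sin(2 * np.pi * f_grid[None, :, None] * n / fs))
        cost = np.sum((x - model) ** 2, axis=-1)
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        # Zoom: new bounds are the nearest-neighbor grid points
        dA, df = A_grid[1] - A_grid[0], f_grid[1] - f_grid[0]
        A_lo, A_hi = A_grid[i] - dA, A_grid[i] + dA
        f_lo, f_hi = f_grid[j] - df, f_grid[j] + df
    return A_grid[i], f_grid[j]
```

For the four-dimensional x3 search, the same zooming rule applies over a hypercube of (Ai, fi, Aj, fj) grid points.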
[Figure image: LS cost surface over Amplitude (50–200) and Frequency (5–8 Hz); cost values on the order of 10^6]
Figure 1. LS cost function for x1(n) over the given constrained parameter space.
[Figure image: LS cost surface over Amplitude (50–200) and Frequency (5–8 Hz); cost values on the order of 10^6]
Figure 2. LS cost function for x2(n) over the given constrained parameter space.
Results
Table 1 summarizes the estimated parameters for the sinusoidal models. The MSE for
each model is proportional to the variance of the noise signals N1 – N3. Observation of
the noisy signals x1 – x3 confirms that x2 should have a higher MSE than x1, while x3
should have the smallest MSE.
Table 1. Estimated sinusoid parameters.
x1: A1 = 120.9, f1 = 6.64 Hz, MSE = 207.5
x2: A2 = 93.52, f2 = 5.85 Hz, MSE = 1611
x3: A3 = 143.2, f3 = 7.04 Hz; A4 = 21.71, f4 = 1.38 Hz, MSE = 116.9
The 16-bit data limits amplitude accuracy to four significant digits, while the 161-point
sequence, sampled at 80 Hz, limits frequency resolution to three significant digits. Figures
3–5 show plots of the estimated noise signals N1 – N3. Note the integer discrete-time
index n = [0:N-1], where N = 161 samples (all models are zero at n = 0, since there is no
phase term to estimate).
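The noise estimates and MSE values follow directly from the fitted models; a sketch using a synthetic stand-in for x1 (the true data files are not reproduced here, and the noise level is an arbitrary choice):

```python
import numpy as np

fs = 80.0
n = np.arange(161)

A1, f1 = 120.9, 6.64                   # estimates from Table 1
model1 = A1 * np.sin(2 * np.pi * f1 * n / fs)

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 14.0, n.size)  # stand-in noise sequence
x1 = model1 + noise                    # synthetic stand-in for x1

N1 = x1 - model1                       # estimated noise signal
mse = np.mean(N1 ** 2)
```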
[Figure image: Amplitude (−50 to 40) vs. Time (0–2 s)]
Figure 3. Plot of N1(n) = x1(n) – A1 sin(2π f1 n / fs).
[Figure image: Amplitude (−150 to 150) vs. Time (0–2 s)]
Figure 4. Plot of N2(n) = x2(n) – A2 sin(2π f2 n / fs).
[Figure image: Amplitude (−40 to 30) vs. Time (0–2 s)]
Figure 5. Plot of N3(n) = x3(n) – [A3 sin(2π f3 n / fs) + A4 sin(2π f4 n / fs)].
Remarks
The noise signals N1 – N3 are stored in the wav files n1.wav – n3.wav. To keep the
signals from clipping during the writing process (.wav file samples are limited to ±1), each
sequence is scaled by 200 before writing the .wav file.
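The scaling step can be sketched as follows (the function name and the clipping check are this sketch's own; dividing by 200 keeps every sample inside the ±1 range expected by .wav writers):

```python
import numpy as np

def to_wav_samples(noise, scale=200.0):
    """Scale a noise sequence into the (-1, 1) range for .wav output."""
    y = noise / scale
    if np.max(np.abs(y)) >= 1.0:
        raise ValueError("scale too small: samples would clip")
    return y
```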
Bibliography
1. Chan, Y. T., Lavoie, J. M. M., and Plant, J. B., “A Parameter Estimation Approach to
   Estimation of Frequencies of Sinusoids,” IEEE Transactions on Acoustics, Speech
   and Signal Processing, Vol. ASSP-29, No. 2, pp. 214-219, April 1981.
2. Kusuma, J., “Parametric Frequency Estimation: ESPRIT and MUSIC,” MIT tutorial,
   http://web.mit.edu/kusuma/www/Papers/parametric.pdf, 2000.
3. Stoica, P. and Moses, R., Introduction to Spectral Analysis, Prentice Hall, 1997.
4. Erdogmus, D. and Principe, J., “Comparison of entropy and mean square error criteria
   in adaptive system training using higher order statistics,” Proc. 2nd Intl. Workshop on
   Independent Component Analysis (ICA'00), Helsinki, Finland, 2000.