Computer Exercises

1. Generate a stationary AR(2) process s(n) satisfying
   s(n) = a1 s(n−1) + a2 s(n−2) + w(n)
where the parameters a1, a2 and the driving-noise variance σ² are chosen by yourself. Then generate a white noise v(n) with variance σv². The received signal is
   x(n) = s(n) + v(n)
with a variable SNR. Pass the received signal through a Wiener filter of length N, and denote the output by y(n).
① Study the relationship between the cost function and the SNR of the signal, for a given Wiener-filter length.
② Study the relationship between the cost function and the filter length, for a given SNR.
③ For one-step prediction, study how the cost function varies with the SNR and with the Wiener-filter length.
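A minimal NumPy sketch of this experiment. The AR(2) parameters a1 = 0.5, a2 = −0.3 and the filter length N = 8 are illustrative choices (the exercise leaves them open); the Wiener filter is designed from sample correlations via the Wiener-Hopf equations, and the minimum cost is evaluated at several SNRs:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar2_process(a1, a2, sigma2, n_samples, n_burn=500):
    """Generate s(n) = a1*s(n-1) + a2*s(n-2) + w(n), discarding a burn-in."""
    w = rng.normal(0.0, np.sqrt(sigma2), n_samples + n_burn)
    s = np.zeros(n_samples + n_burn)
    for n in range(2, len(s)):
        s[n] = a1 * s[n - 1] + a2 * s[n - 2] + w[n]
    return s[n_burn:]

def wiener_filter(x, s, N):
    """Length-N FIR Wiener filter estimating s(n) from x(n), ..., x(n-N+1)."""
    M = len(x)
    r = np.array([x[: M - k] @ x[k:] / M for k in range(N)])   # autocorr of x
    p = np.array([s[k:] @ x[: M - k] / M for k in range(N)])   # cross-corr
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    w = np.linalg.solve(R, p)          # Wiener-Hopf equations R w = p
    return w, np.var(s) - p @ w        # filter and minimum cost J_min

a1, a2, sigma2 = 0.5, -0.3, 1.0        # illustrative AR(2) parameters
s = ar2_process(a1, a2, sigma2, 20000)
jmins = []
for snr_db in (0, 10, 20):
    sigma_v = np.sqrt(np.var(s) / 10 ** (snr_db / 10))
    x = s + rng.normal(0.0, sigma_v, len(s))
    _, jmin = wiener_filter(x, s, N=8)
    jmins.append(jmin)
    print(f"SNR = {snr_db:2d} dB -> J_min = {jmin:.4f}")
```

Repeating this over a grid of SNRs and filter lengths N gives the curves asked for in ① and ②.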
2. Examine the transient behavior of the steepest-descent algorithm applied to a predictor that operates on a real-valued autoregressive (AR) process. Fig. P2 shows the structure of the predictor, assumed to contain two tap weights denoted by w1(n) and w2(n); the dependence of these tap weights on the number of iterations, n, emphasizes the transient condition of the predictor. The AR process x(n) is described by the second-order difference equation
   x(n) + a1 x(n−1) + a2 x(n−2) = v(n)
where the sample v(n) is drawn from a white-noise process of zero mean and variance σv². The AR parameters a1 and a2 are chosen so that the roots of the characteristic equation
   1 + a1 z⁻¹ + a2 z⁻² = 0
are complex; that is, a1² < 4a2. The particular values assigned to a1 and a2 are determined by the desired eigenvalue spread χ(R). For specified values of a1 and a2, the variance σv² of the white noise v(n) is chosen to make the process x(n) have variance σx² = 1.
[Figure: tapped-delay-line predictor — the input x(n) passes through unit delays z⁻¹ to form x(n−1) and x(n−2), which are weighted by w1(n) and w2(n) and summed to give the output y(n).]
Fig. P2 Two-tap predictor for real-valued input
The requirement is to evaluate the transient behavior of the steepest-descent algorithm for the following conditions:
· Varying eigenvalue spread χ(R) and fixed step-size parameter μ;
· Varying step-size parameter μ and fixed eigenvalue spread χ(R).
Plot the learning curves.
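Since steepest descent uses the exact statistics R and p, the learning curve can be computed deterministically from the Yule-Walker equations. A sketch (the values a1 = −0.195, a2 = 0.95 and μ = 0.3 are illustrative choices):

```python
import numpy as np

def sd_learning_curve(a1, a2, mu, n_iter=200):
    """Steepest descent for a two-tap predictor of the AR(2) process
    x(n) + a1*x(n-1) + a2*x(n-2) = v(n), scaled so that var(x) = 1."""
    # autocorrelations of x for unit variance, from the Yule-Walker equations
    r0 = 1.0
    r1 = -a1 * r0 / (1.0 + a2)
    r2 = -a1 * r1 - a2 * r0
    R = np.array([[r0, r1], [r1, r0]])
    p = np.array([r1, r2])
    eigs = np.linalg.eigvalsh(R)
    chi = eigs.max() / eigs.min()          # eigenvalue spread chi(R)
    w = np.zeros(2)
    J = np.empty(n_iter)
    for n in range(n_iter):
        J[n] = r0 - 2 * p @ w + w @ R @ w  # mean-square prediction error
        w = w + mu * (p - R @ w)           # steepest-descent update
    return chi, J, w

chi, J, w = sd_learning_curve(a1=-0.195, a2=0.95, mu=0.3)
print(f"chi(R) = {chi:.3f}, final J = {J[-1]:.4f}, w = {w}")
```

The weights converge to the optimum (−a1, −a2) and J(n) decays to J_min = σv²; sweeping χ(R) at fixed μ, and μ at fixed χ(R), produces the two requested families of curves.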
3. By adaptive filtering, study the extraction of a single-frequency signal.
① Suppose that x(t) = s(t) + cos(2πft), where s(t) is a wideband signal and f is arbitrarily chosen. You are required to extract cos(2πft).
② Suppose that x(t) = s(t) + A cos(2πf1t + φ) + B cos(2πf2t + θ), where s(t) is a wideband signal and A, B, f1, f2, φ, θ can be selected arbitrarily. You are required:
· To extract both single-frequency components;
· Supposing f2 − f1 = Δf, to extract cos(2πf2t) and to study the effects of the SNR (signal-to-noise ratio), of Δf, and of N, where N is the length of the data.
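One standard way to do part ① is an adaptive line enhancer (ALE): an LMS predictor driven by a delayed copy of the input can predict the narrowband (sinusoidal) component but not the wideband one, so its output converges to the sinusoid. A discrete-time sketch, taking white noise as the wideband signal and f = 0.05 (normalized) as an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def ale_extract(x, L=32, delay=1, mu=0.001):
    """Adaptive line enhancer: LMS predictor on a delayed input.
    The predictor output tracks the narrowband (sinusoidal) part of x."""
    w = np.zeros(L)
    y = np.zeros(len(x))
    for n in range(delay + L, len(x)):
        u = x[n - delay: n - delay - L: -1]   # delayed tap-input vector
        y[n] = w @ u
        e = x[n] - y[n]                       # enhancer error
        w += 2 * mu * e * u                   # LMS update
    return y, w

f = 0.05                                      # normalized frequency (assumed)
n = np.arange(20000)
s = rng.normal(0.0, 1.0, len(n))              # wideband signal: white noise here
x = s + np.cos(2 * np.pi * f * n)
y, _ = ale_extract(x)
# after convergence, the output should be close to the sinusoid
err = np.mean((y[-5000:] - np.cos(2 * np.pi * f * n[-5000:])) ** 2)
print(f"steady-state MSE between ALE output and sinusoid: {err:.3f}")
```

For part ②, the same structure with a longer filter separates two tones; shrinking Δf or N, or lowering the SNR, degrades the separation, which is exactly the effect to be studied.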
4. For this computer experiment involving the LMS algorithm, use a first-order,
autoregressive (AR) process to study the effects of ensemble averaging on the
transient characteristics of the LMS algorithm for real data.
Consider an AR process of order one, described by the difference equation
   x(n) = a x(n−1) + v(n)
where a is the (one and only) parameter of the process and v(n) is a zero-mean white-noise process of variance σv². To estimate the parameter a, use an adaptive predictor of order one, as depicted in Fig. P4.
xn 
z
-1
xn - 1
ŵn
-

f(n)
+
Fig. P4 Adaptive first-ordor predictor
Given different sets of AR parameters, a fixed step-size parameter, and the initial condition ŵ(0) = 0:
· Plot the transient behavior of the weight ŵ(n), including a single realization and the ensemble-averaged result;
· Plot the transient behavior of the squared prediction error;
· Draw the experimental learning curves. What is the result when the step-size parameter is reduced?
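A sketch of this experiment, with a = 0.9, σv² = 0.1, μ = 0.02 and 100 ensemble members as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def lms_ar1(a, sigma_v2, mu, n_samples):
    """One realization: LMS first-order predictor for x(n) = a*x(n-1) + v(n).
    Returns the weight trajectory w_hat(n) and the squared error f(n)^2."""
    v = rng.normal(0.0, np.sqrt(sigma_v2), n_samples)
    x = np.zeros(n_samples)
    for n in range(1, n_samples):
        x[n] = a * x[n - 1] + v[n]
    w = 0.0                          # initial condition w_hat(0) = 0
    w_traj = np.zeros(n_samples)
    f2 = np.zeros(n_samples)
    for n in range(1, n_samples):
        f = x[n] - w * x[n - 1]      # prediction error
        f2[n] = f * f
        w += mu * f * x[n - 1]       # LMS update
        w_traj[n] = w
    return w_traj, f2

a, mu, N = 0.9, 0.02, 2000
runs = [lms_ar1(a, 0.1, mu, N) for _ in range(100)]
w_avg = np.mean([w for w, _ in runs], axis=0)    # ensemble-averaged weight
J_avg = np.mean([f2 for _, f2 in runs], axis=0)  # experimental learning curve
print(f"single-run final w: {runs[0][0][-1]:.3f}, ensemble final w: {w_avg[-1]:.3f}")
```

Plotting a single `w_traj` against `w_avg` shows how ensemble averaging smooths the gradient noise; reducing μ slows convergence but lowers the steady-state excess error.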
5. Study the use of the LMS algorithm for adaptive equalization of a linear dispersive
channel that produces (unknown) distortion. Assume that the data are all real
valued. Fig. P5 shows the block diagram of the system used to carry out the study.
Random-number generator 1 provides the test signal x(n), used for probing the channel, whereas random-number generator 2 serves as the source of additive white noise v(n) that corrupts the channel output. These two random-number
generators are independent of each other. The adaptive equalizer has the task of
correcting for the distortion produced by the channel in the presence of the
additive noise. Random-number generator 1, after suitable delay, also supplies the
desired response applied to the adaptive equalizer in the form of the training
sequence.
The experiment requires you:
· To evaluate the response of the adaptive equalizer using the LMS algorithm to changes in the eigenvalue spread χ(R) and in the step-size parameter μ;
· To study the effect on the squared error when the delay and the length of the data are changed.
[Figure: random-noise generator 1 produces x(n), which drives the channel and, via a delay, supplies the desired response; noise v(n) from random-noise generator 2 corrupts the channel output; the adaptive transversal equalizer processes the noisy channel output and forms the error e(n).]
Fig. P5 Block diagram of adaptive equalizer experiment
6. Generate 100 samples of a zero-mean white noise sequence w(n) with variance σ² = 1/12, by using a uniform random number generator.
(a) Compute the autocorrelation of w(n) for 0 ≤ m ≤ 15.
(b) Compute the periodogram estimate P(f) and plot it.
(c) Generate 10 different realizations of w(n), and compute the corresponding sample autocorrelation sequences rk(m), 1 ≤ k ≤ 10 and 0 ≤ m ≤ 15. Compute the average autocorrelation sequence rav(m) = (1/10) Σ_{k=1}^{10} rk(m) and the corresponding periodogram for rav(m).
(d) Compute and plot the average periodogram using the Bartlett method.
(e) Comment on the results in parts (a) through (d).
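A sketch of parts (a)–(c); a uniform variable on (−1/2, 1/2) has the required variance 1/12:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_autocorr(x, max_lag):
    """Biased sample autocorrelation r(m), 0 <= m <= max_lag."""
    N = len(x)
    return np.array([x[: N - m] @ x[m:] / N for m in range(max_lag + 1)])

# (a)-(b): one realization of uniform white noise on (-1/2, 1/2), variance 1/12
x = rng.uniform(-0.5, 0.5, 100)
r = sample_autocorr(x, 15)
P = np.abs(np.fft.rfft(x, 256)) ** 2 / len(x)   # periodogram estimate

# (c): average the autocorrelation over 10 realizations
r_av = np.mean([sample_autocorr(rng.uniform(-0.5, 0.5, 100), 15)
                for _ in range(10)], axis=0)
print(f"r(0) = {r[0]:.4f} (theory 1/12 = {1/12:.4f}), mean periodogram = {P.mean():.4f}")
```

For part (d), split the data into segments, take the periodogram of each, and average (the Bartlett method); averaging reduces the variance of the estimate at the cost of resolution.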
7. A random signal is generated by passing zero-mean white Gaussian noise through a filter with the transfer function
   H(z) = 1 / [(1 − a z⁻¹ + 0.99 z⁻²)(1 + a z⁻¹ + 0.98 z⁻²)]
(a) Sketch a typical plot of the theoretical power spectrum Γxx(f) for a small value of the parameter a (i.e., 0 < a < 0.1). Pay careful attention to the values at the two spectral peaks and the value of Pxx(ω) at ω = π/2.
(b) Let a = 0.1. Determine the section length M required to resolve the spectral peaks of Γxx(f) when using Bartlett's method.
(c) Consider the Blackman-Tukey method of smoothing the periodogram. How many lags of the correlation estimate must be used to obtain resolution comparable to that of the Bartlett estimate considered in part (b)? How many data points must be used if the variance of the estimate is to be comparable to that of a four-section Bartlett estimate?
(d) Generate a data sequence x(n) by passing white Gaussian noise through H(z)
and compute the spectral estimates based on the Bartlett and Blackman-Tukey
methods and, thus, confirm the results obtained in parts (b) and (c).
(e) For a=0.05, fit an AR(4) model to 100 samples of the data based on the
Yule-Walker method, and plot the power spectrum. Avoid transient effects by
discarding the first 200 samples of the data.
(f) Repeat part (e) with the Burg method.
(g) Repeat parts (e) and (f) for 50 data samples, and comment on similarities and
differences in the results.
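A sketch of part (d), assuming the fourth-order all-pole model with denominator factors (1 − a z⁻¹ + 0.99 z⁻²)(1 + a z⁻¹ + 0.98 z⁻²) reconstructed above (for small a this places two pole pairs just inside the unit circle near ω = π/2, about a radians apart):

```python
import numpy as np

rng = np.random.default_rng(5)

# denominator of the assumed H(z): product of the two quadratic factors, a = 0.1
den = np.polymul([1.0, -0.1, 0.99], [1.0, 0.1, 0.98])

def filter_ar(den, v):
    """All-pole filtering x(n) = v(n) - sum_k den[k]*x(n-k), den[0] = 1."""
    p = len(den) - 1
    x = np.zeros(len(v))
    for n in range(len(v)):
        acc = v[n]
        for k in range(1, p + 1):
            if n - k >= 0:
                acc -= den[k] * x[n - k]
        x[n] = acc
    return x

def bartlett_psd(x, M):
    """Bartlett estimate: average periodograms of non-overlapping length-M blocks."""
    K = len(x) // M
    segs = x[: K * M].reshape(K, M)
    return np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / M

x = filter_ar(den, rng.normal(0, 1, 20000))[1000:]   # drop the transient
M = 128                                              # section length
P = bartlett_psd(x, M)
w = np.arange(len(P)) * 2 * np.pi / M                # frequency grid (rad)
peak = w[np.argmax(P)]
print(f"Bartlett peak near {peak:.3f} rad (poles are near pi/2 = {np.pi/2:.3f})")
```

Reducing M below the resolution limit found in part (b) merges the two peaks into one, which is the effect to verify.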
8. Study the performance of AR power-spectrum estimates obtained from artificially generated data. The objective is to compare the spectral estimation methods on the basis of their frequency resolution, bias, and robustness in the presence of additive noise. The data consist of either one or two sinusoids plus additive Gaussian noise; the two sinusoids are spaced Δf apart. The underlying process is ARMA(p,q), with p and q decided by yourself.
· Estimate the PSD by the Yule-Walker method, and study the effects of varying the data length, the SNR, and the order of the AR model.
· Estimate the PSD by the Burg method, and study the effects of varying the data length, the SNR, and the order of the AR model.
· Estimate the PSD by the LS method, and study the effects of varying the data length, the SNR, and the order of the AR model.
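The Burg recursion is short enough to sketch directly; the test signal below (two sinusoids at normalized frequencies 0.2 and 0.25, noise level 0.1, AR order 4, 200 samples) is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(6)

def burg(x, order):
    """Burg's method for AR coefficients a_k in the convention
    x(n) + a_1 x(n-1) + ... + a_p x(n-p) = e(n)."""
    a = np.array([])
    f = x[1:].astype(float)    # forward prediction errors
    b = x[:-1].astype(float)   # backward prediction errors (delayed)
    for _ in range(order):
        k = -2.0 * (f @ b) / (f @ f + b @ b)      # reflection coefficient
        a = np.concatenate((a + k * a[::-1], [k]))  # Levinson order update
        f, b = f + k * b, b + k * f               # update both error sequences
        f, b = f[1:], b[:-1]                      # shift for the next order
    return a

# two closely spaced sinusoids in light noise
n = np.arange(200)
x = np.sin(2 * np.pi * 0.2 * n) + np.sin(2 * np.pi * 0.25 * n) \
    + 0.1 * rng.normal(size=len(n))
a = burg(x, 4)
A = np.fft.rfft(np.concatenate(([1.0], a)), 1024)  # A(e^{j*omega}) on a grid
P = 1.0 / np.abs(A) ** 2                           # AR spectrum (unnormalized)
f_peak = np.argmax(P) / 1024
print(f"AR(4) Burg spectrum: dominant peak near f = {f_peak:.3f}")
```

Replacing `burg` with a Yule-Walker (autocorrelation) or least-squares fit, and sweeping the data length, SNR, and model order, yields the comparison the exercise asks for.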
9. Consider the adaptive predictor shown in Fig. P9.
Fig. P9
(a) Determine the quadratic performance index and the optimum parameters for the signal
   x(n) = sin(πn/4) + w(n)
where w(n) is white noise with variance σ² = 0.1.
(b) Generate a sequence of 1000 samples of x(n), and use the LMS algorithm to adaptively obtain the predictor coefficients. Compare the experimental results with the theoretical values obtained in part (a). Use a step size of μ = 1/(10 λmax).
(c) Repeat the experiment in part (b) for N = 10 trials with different noise sequences,
and compute the average values of the predictor coefficients. Comment on how
these results compare with the theoretical values in part (a).
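A sketch of parts (a)–(b), assuming a two-tap predictor (the figure is not reproduced here). The autocorrelation of x(n) is r(m) = ½cos(πm/4) + 0.1δ(m), from which both the theoretical optimum and λmax follow:

```python
import numpy as np

rng = np.random.default_rng(7)

# theoretical statistics of x(n) = sin(pi*n/4) + w(n), var(w) = 0.1:
# r(m) = 0.5*cos(pi*m/4) + 0.1*delta(m)
def r(m):
    return 0.5 * np.cos(np.pi * m / 4) + (0.1 if m == 0 else 0.0)

R = np.array([[r(0), r(1)], [r(1), r(0)]])
p = np.array([r(1), r(2)])
w_opt = np.linalg.solve(R, p)              # optimum two-tap predictor
mu = 1.0 / (10.0 * np.linalg.eigvalsh(R).max())

# (b) LMS adaptation over 1000 samples
n = np.arange(1000)
x = np.sin(np.pi * n / 4) + rng.normal(0.0, np.sqrt(0.1), len(n))
w = np.zeros(2)
for k in range(2, len(x)):
    u = np.array([x[k - 1], x[k - 2]])
    e = x[k] - w @ u                       # prediction error
    w += mu * e * u                        # LMS update
print(f"w_opt = {w_opt}, LMS estimate = {w}")
```

Part (c) repeats the adaptation loop for 10 noise sequences and averages the resulting coefficient estimates.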
10. An autoregressive process is described by the difference equation
   x(n) = −1.26 x(n−1) − 0.81 x(n−2) + w(n)
(a) Generate a sequence of N = 1000 samples of x(n), where w(n) is a white noise sequence with variance σ² = 0.1. Use the LMS algorithm to determine the parameters of a second-order (p = 2) linear predictor. Begin with a1(0) = a2(0) = 0. Plot the coefficients a1(n) and a2(n) as a function of the iteration number.
(b) Repeat part (a) for 10 trials, using different noise sequences, and superimpose the 10 plots of a1(n) and a2(n).
(c) Plot the learning curve for the average (over the 10 trials) MSE for the data in
part (b).
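A sketch covering all three parts, with the signs of the difference equation taken as written above and μ = 0.04 as an illustrative step size:

```python
import numpy as np

rng = np.random.default_rng(8)

def trial(n_samples=1000, mu=0.04):
    """One trial: generate x(n) = -1.26 x(n-1) - 0.81 x(n-2) + w(n) and
    adapt a second-order LMS predictor starting from a1(0) = a2(0) = 0."""
    wnoise = rng.normal(0.0, np.sqrt(0.1), n_samples)
    x = np.zeros(n_samples)
    for n in range(n_samples):
        # for n < 2 the negative indices pick up still-zero entries
        x[n] = -1.26 * x[n - 1] - 0.81 * x[n - 2] + wnoise[n]
    a = np.zeros(2)
    a_traj = np.zeros((n_samples, 2))
    e2 = np.zeros(n_samples)
    for n in range(2, n_samples):
        u = np.array([x[n - 1], x[n - 2]])
        e = x[n] - a @ u
        e2[n] = e * e
        a += mu * e * u               # LMS update
        a_traj[n] = a
    return a_traj, e2

trials = [trial() for _ in range(10)]
a_avg = np.mean([t[0] for t in trials], axis=0)   # averaged coefficient paths
J = np.mean([t[1] for t in trials], axis=0)       # 10-trial learning curve
print(f"averaged final coefficients: {a_avg[-1]}")
```

Plotting each `a_traj` from `trials` gives the superimposed curves of part (b), and `J` is the averaged MSE learning curve of part (c).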
11. A random process x(n) is given as
   x(n) = s(n) + w(n) = sin(ω0 n + φ) + w(n),  with ω0 = π/4 and φ = 0,
where w(n) is an additive white noise sequence with variance σ² = 0.1.
(a) Generate N = 1000 samples of x(n) and simulate an adaptive line enhancer of length L = 4. Use the LMS algorithm to adapt the ALE.
(b) Plot the output of the ALE.
(c) Compute the autocorrelation γxx(m) of the sequence x(n).
(d) Determine the theoretical values of the ALE coefficients and compare them
with the experimental values.
(e) Compute and plot the frequency response of the linear predictor (ALE).
(f) Compute and plot the frequency response of the prediction-error filter.
(g) Compute and plot the experimental values of the autocorrelation ree(m) of the output error sequence for 0 ≤ m ≤ 10.
(h) Repeat the experiment for 10 trials, using different noise sequences, and
superimpose the frequency response plots on the same graph.
(i) Comment on the results in parts (a) through (h).
12. Consider an AR process x(n) defined by the difference equation
   x(n) + a1 x(n−1) + a2 x(n−2) = v(n)
where v(n) is an additive white noise of zero mean and variance σv². The AR parameters a1 and a2 are both real valued: a1 = 0.1, a2 = −0.8.
(a) Calculate the noise variance σv² such that the AR process x(n) has unit variance. Hence generate different realizations of the process x(n).
(b) Given the input x(n), an LMS filter of length M = 2 is used to estimate the unknown AR parameters a1 and a2. The step-size parameter μ is assigned the value 0.05. Justify the use of this design value in the application of the small-step-size theory.
(c) For one realization of the LMS filter, compute the prediction error
   f(n) = x(n) − x̂(n)
and the two tap-weight errors
   ε1(n) = −a1 − ŵ1(n)  and  ε2(n) = −a2 − ŵ2(n)
Using power spectral plots of f(n), ε1(n), and ε2(n), show that f(n) behaves as white noise, whereas ε1(n) and ε2(n) behave as low-pass processes.
(d) Compute the ensemble-average learning curve of the LMS filter by averaging the squared value of the prediction error f(n) over an ensemble of 100 different realizations of the filter.
(e) Using the small-step-size statistical theory, compute the theoretical learning
curve of the LMS filter and compare your result against the measured result of
part (d).
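A sketch of parts (a) and (d). From the Yule-Walker equations, unit process variance requires σv² = (1 − a2)[(1 + a2)² − a1²]/(1 + a2) = 0.27 for the given a1, a2:

```python
import numpy as np

rng = np.random.default_rng(9)

a1, a2 = 0.1, -0.8
# (a) noise variance giving unit process variance (Yule-Walker equations)
sigma_v2 = (1 - a2) * ((1 + a2) ** 2 - a1 ** 2) / (1 + a2)   # = 0.27

def realization(n_samples=3000):
    v = rng.normal(0.0, np.sqrt(sigma_v2), n_samples)
    x = np.zeros(n_samples)
    for n in range(n_samples):
        x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + v[n]   # zero initial conditions
    return x

# (d) ensemble-average learning curve of the 2-tap LMS predictor, mu = 0.05
mu = 0.05
f2_sum = np.zeros(3000)
vars_ = []
for _ in range(100):                    # ensemble of 100 realizations
    x = realization()
    vars_.append(x.var())
    w = np.zeros(2)
    for n in range(2, len(x)):
        u = np.array([x[n - 1], x[n - 2]])
        f = x[n] - w @ u                # prediction error f(n)
        f2_sum[n] += f * f
        w += mu * f * u                 # LMS update
J = f2_sum / 100                        # ensemble-average learning curve
print(f"sigma_v^2 = {sigma_v2:.4f}, mean var(x) = {np.mean(vars_):.3f}, "
      f"steady-state J = {J[-1000:].mean():.3f} (J_min = sigma_v^2)")
```

The steady-state value of `J` sits slightly above J_min = σv² = 0.27; the gap is the misadjustment predicted by the small-step-size theory of part (e).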
13. Consider a linear communication channel whose transfer function may take one of three possible forms:
(i) H(z) = 0.25 + z⁻¹ + 0.25 z⁻²
(ii) H(z) = 0.25 + z⁻¹ − 0.25 z⁻²
(iii) H(z) = −0.25 + z⁻¹ + 0.25 z⁻²
The channel output, in response to the input x(n), is defined by
   y(n) = Σk h(k) x(n−k) + v(n)
where h(n) is the impulse response of the channel and v(n) is additive white Gaussian noise with zero mean and variance σv² = 0.01. The channel input x(n) consists of a Bernoulli sequence with x(n) = ±1.
The purpose of the experiment is to design an adaptive equalizer trained by using the LMS algorithm with step-size parameter μ = 0.001. In structural terms, the equalizer is built around a transversal filter with 21 taps. For the desired response, a delayed version of the channel input, namely x(n−Δ), is supplied to the equalizer. For each of the possible transfer functions listed under (i), (ii), and (iii), do the following:
(a) Determine the optimum value of the delay Δ that minimizes the mean-square error at the equalizer output.
(b) For the optimum delay Δ determined in part (a), plot the learning curve of the equalizer by averaging the squared value of the error signal over an ensemble of 100 independent trials of the experiment.
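Part (a) can be solved in closed form: for each Δ, the Wiener MMSE is σx² − pᵀR⁻¹p, where R is the tap-input correlation matrix and p the cross-correlation with x(n−Δ). A sketch (the channel coefficients follow the forms listed above):

```python
import numpy as np

def mmse_vs_delay(h, n_taps=21, sigma_v2=0.01):
    """Minimum MSE of a Wiener equalizer as a function of the delay Delta,
    for channel h and a +/-1 (unit-variance) white input."""
    L = len(h)
    # H maps the input sequence onto the n_taps noiseless channel outputs
    H = np.zeros((n_taps, n_taps + L - 1))
    for i in range(n_taps):
        H[i, i: i + L] = h
    R = H @ H.T + sigma_v2 * np.eye(n_taps)   # tap-input correlation matrix
    J = []
    for delta in range(n_taps + L - 1):
        p = H[:, delta]                       # cross-correlation E[u(n) x(n-delta)]
        J.append(1.0 - p @ np.linalg.solve(R, p))
    return np.array(J)

best, jm = [], []
for h in ([0.25, 1.0, 0.25], [0.25, 1.0, -0.25], [-0.25, 1.0, 0.25]):
    J = mmse_vs_delay(np.array(h))
    best.append(int(np.argmin(J)))
    jm.append(J.min())
    print(f"h = {h}: best delay = {best[-1]}, J_min = {jm[-1]:.4f}")
```

The LMS simulation of part (b) then uses the best Δ found here, averaging e²(n) over 100 independent trials.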
14. To illustrate the optimum filtering theory developed in the preceding sections, consider a regression model of order m = 3 with its parameter vector denoted by
   a = [a0, a1, a2]ᵀ
The statistical characterization of the model, assumed to be real valued, is as follows:
①. The correlation matrix of the input vector x(n) is the symmetric Toeplitz matrix
   R = [  1.1    0.5    0.1   −0.05
          0.5    1.1    0.5    0.1
          0.1    0.5    1.1    0.5
         −0.05   0.1    0.5    1.1  ]
whose leading principal submatrices correspond to the varying filter lengths.
②. The cross-correlation vector between the input vector x(n) and the observable data d(n) is
   p = [0.5272, −0.4458, −0.1003, −0.0126]ᵀ
where the value of the fourth entry ensures that the model parameter a3 is zero.
③. The variance of the observable data is σd² = 0.9486.
④. The variance of the additive white noise is σv² = 0.1066.
The requirement is to do three things:
· Investigate the variation of the minimum mean-square error Jmin produced by a Wiener filter of varying length M = 1, 2, 3, 4;
· Display the error-performance surface of a Wiener filter of length M = 2;
· Compute the canonical form of the error-performance surface.
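Because R, p, and σd² are fully specified, the first requirement reduces to evaluating Jmin(M) = σd² − p_Mᵀ R_M⁻¹ p_M for the leading M×M and M×1 blocks. A short NumPy sketch (the signs in R and p are reconstructed from the source; they are self-consistent, since Jmin levels off at σv² = 0.1066 once M reaches the model order):

```python
import numpy as np

# statistics given in the exercise
R = np.array([[ 1.10,  0.50,  0.10, -0.05],
              [ 0.50,  1.10,  0.50,  0.10],
              [ 0.10,  0.50,  1.10,  0.50],
              [-0.05,  0.10,  0.50,  1.10]])
p = np.array([0.5272, -0.4458, -0.1003, -0.0126])
sigma_d2 = 0.9486

jmins = []
for M in (1, 2, 3, 4):
    w = np.linalg.solve(R[:M, :M], p[:M])   # Wiener-Hopf solution, length M
    jmins.append(sigma_d2 - p[:M] @ w)      # J_min(M) = sigma_d^2 - p^T w
    print(f"M = {M}: J_min = {jmins[-1]:.4f}")
```

For the second and third requirements, evaluate J(w) = σd² − 2pᵀw + wᵀRw on a grid for M = 2, and diagonalize R = QΛQᵀ to write J in the canonical form J = Jmin + νᵀΛν with ν = Qᵀ(w − w_opt).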