# Homework 2 SOLUTION

Homework 2, Fall 2016, AERE 432, Due 9/16 (F)
Problem 1. (20pts) Oftentimes, the noise that corrupts a given signal is assumed to be white noise. Consider a bandlimited white noise random process $N(t)$ that has mean $\mu_N = 0$ and standard deviation $\sigma_N = 0.1$ volts. Its bandwidth (BW) extends to 10 MHz.

(a)(10pts) Plot its power spectral density (psd), including numbers related to the above given information. Moreover, explain how you arrived at those numbers. [Also, plot the 2-sided psd, and be sure to include units.]
Solution: The noise power is $\sigma_N^2 = 0.01$ volts². This must equal the area associated with $S_N(f)$. Hence,

$$S_N(f) = \frac{\sigma_N^2}{2\,BW} = \frac{.01}{2(10^7)} = 0.5(10^{-9}) \ \text{volts}^2/\text{Hz}.$$

[Plot: $S_N(f) = 0.5(10^{-9})$ volts²/Hz for $|f| \le BW = 10^7$ Hz, zero otherwise.]
Figure 1.1 Plot of the noise psd.
(b)(10pts) Recall that $R_N(\tau) = \int_{-\infty}^{\infty} S_N(f)\, e^{i 2\pi f \tau}\, df$. Use this formula to compute an explicit expression for $R_N(\tau)$.
Solution: Let $c = 0.5(10^{-9})$ denote the psd level. Then

$$R_N(\tau) = \int_{-BW}^{BW} c\, e^{i 2\pi f \tau}\, df = \left[ \frac{c\, e^{i 2\pi f \tau}}{i 2\pi \tau} \right]_{-BW}^{BW} = \frac{c\, e^{i 2\pi BW \tau} - c\, e^{-i 2\pi BW \tau}}{i 2\pi \tau} = \frac{c}{\pi \tau} \left[ \frac{e^{i 2\pi BW \tau} - e^{-i 2\pi BW \tau}}{2i} \right].$$

Now, recall that $e^{i\theta} = \cos\theta + i \sin\theta$ and $e^{-i\theta} = \cos\theta - i \sin\theta$. Hence, $\frac{e^{i\theta} - e^{-i\theta}}{2i} = \sin\theta$. And so, we have:

$$R_N(\tau) = \frac{c}{\pi \tau} \sin(2\pi BW \tau) = c\, 2BW \left[ \frac{\sin(2\pi BW \tau)}{2\pi BW \tau} \right] = \sigma_N^2\, \mathrm{Sa}(2\pi BW \tau).$$

Inserting the numbers gives: $R_N(\tau) = .01\, \mathrm{Sa}(2\pi BW \tau) = .01\, \mathrm{Sa}(2\pi 10^7 \tau)$.
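As a quick cross-check of this closed form (a Python sketch; the homework's own Matlab code is in the Appendix): the sampling function satisfies $\mathrm{Sa}(0) = 1$, so $R_N(0)$ must recover the noise power $\sigma_N^2 = .01$, and the first zero of $\mathrm{Sa}(2\pi BW \tau)$ falls at $\tau = 1/(2\,BW) = 5 \times 10^{-8}$ s.

```python
import math

def Sa(x):
    # Sampling function Sa(x) = sin(x)/x, with the limiting value Sa(0) = 1.
    return 1.0 if x == 0 else math.sin(x) / x

def R_N(tau, var_N=0.01, BW=1e7):
    # R_N(tau) = sigma_N^2 * Sa(2*pi*BW*tau), per the derivation above.
    return var_N * Sa(2 * math.pi * BW * tau)

print(R_N(0.0))        # 0.01 = sigma_N^2
print(R_N(5e-8))       # ~0: first zero crossing, Sa(pi) = 0
```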
(c)(5pts) Extra Credit: Generate a plot of your expression for $R_N(\tau)$. [Include your code in the Appendix.] Then validate your plot by noting the value of $R_N(0)$.
[Plot: Noise Autocorrelation Function; vertical axis in volts² (×10⁻³), horizontal axis Time (sec) (×10⁻⁶).]
Figure 1.2 Plot of $R_N(\tau)$. Note that $R_N(0) = .01 = \sigma_N^2$, which validates (in part) the plot.
RY ( )
Problem 2. (15pts) It is suggested that a certain real
process, Y (t ) , has the autocorrelation function shown at the right. Is this
 Y2

1
1
Figure 2.1 Plot of the process autocorrelation.
Solution:

$$S_Y(\omega) = \int_{-1}^{1} \sigma_Y^2\, e^{-i\omega\tau}\, d\tau = \sigma_Y^2 \left[ \frac{e^{-i\omega\tau}}{-i\omega} \right]_{-1}^{1} = \sigma_Y^2\, \frac{e^{i\omega} - e^{-i\omega}}{i\omega} = 2\sigma_Y^2\, \frac{\sin\omega}{\omega} = 2\sigma_Y^2\, \mathrm{Sa}(\omega).$$

The plot of $\mathrm{Sa}(\omega)$ at the right includes negative values. Since any psd must have no negative values, the suggested autocorrelation function is impossible.

[Plot: Sa(w) versus frequency w, for $-20 \le w \le 20$.]
Figure 2.2 Plot of the process psd for $\sigma_Y^2 = .5$.
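The conclusion can be verified numerically. This Python sketch evaluates the candidate psd $2\sigma_Y^2\,\mathrm{Sa}(\omega)$ on the same $\omega \in [-20, 20]$ range as the figure and confirms that it takes negative values, which no valid psd may do.

```python
import math

def S_Y(w, var_Y=0.5):
    # Candidate psd 2*sigma_Y^2*Sa(w), from the integral above.
    sa = 1.0 if w == 0 else math.sin(w) / w
    return 2 * var_Y * sa

# Scan the plotted range and find the smallest value.
grid = [-20 + 0.01 * k for k in range(4001)]
smallest = min(S_Y(w) for w in grid)
print(smallest < 0)   # True: the candidate psd goes negative
```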
Problem 3. (15pts) [Book Problem 2.12] A wss process, $X(t)$, has $R_X(\tau) = \sigma_X^2 e^{-\alpha|\tau|}$. For $Y(t) = aX(t) + b$, obtain the expression for $R_Y(\tau)$.

Solution: Using the linearity of E(*):

$$E[Y(t)Y(t+\tau)] = E\{[aX(t)+b][aX(t+\tau)+b]\} = a^2 E[X(t)X(t+\tau)] + ab\,E[X(t)] + ab\,E[X(t+\tau)] + b^2.$$

Since $\mu_X^2 = \lim_{\tau \to \infty} R_X(\tau) = \lim_{\tau \to \infty} \sigma_X^2 e^{-\alpha|\tau|} = 0$, we have $\mu_X = E(X) = 0$. Hence,

$$E[Y(t)Y(t+\tau)] = a^2 E[X(t)X(t+\tau)] + b^2, \quad \text{or} \quad R_Y(\tau) = a^2 R_X(\tau) + b^2 = a^2 \sigma_X^2 e^{-\alpha|\tau|} + b^2.$$
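The algebra can be spot-checked by simulation. This Python sketch uses arbitrary example values ($\alpha = 0.5$, $a = 2$, $b = 3$, $\sigma_X^2 = 4$, $\tau = 1$; none come from the problem), draws zero-mean Gaussian pairs with the prescribed correlation $e^{-\alpha|\tau|}$, and compares the sample average of $Y(t)Y(t+\tau)$ against $a^2 \sigma_X^2 e^{-\alpha|\tau|} + b^2$.

```python
import math
import random

random.seed(0)
alpha, a, b, var_X, tau = 0.5, 2.0, 3.0, 4.0, 1.0
rho = math.exp(-alpha * abs(tau))   # correlation of X(t) and X(t+tau)

n = 200_000
acc = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = math.sqrt(var_X) * z1                                       # X(t)
    x2 = math.sqrt(var_X) * (rho * z1 + math.sqrt(1 - rho**2) * z2)  # X(t+tau)
    acc += (a * x1 + b) * (a * x2 + b)   # Y(t) * Y(t+tau)

estimate = acc / n
theory = a**2 * var_X * rho + b**2
print(estimate, theory)   # the two should agree to within Monte Carlo error
```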
Problem 4. (30pts) When using an atomic force microscope, it is essential that the scope base be as stable as possible. This problem addresses two wss discrete-time random process models for the vibration, $X(k)$, of the base.

(a)(5pts) Assume that $X(k)$ is zero mean white noise with variance $\sigma_X^2 = 9$. Compute (i) $R_X(m)$, and from your expression compute (ii) $S_X(\omega)$. [This is a typical model choice of researchers not familiar with random processes.]

Solution:
(i) $R_X(m) = E(X_k X_{k+m}) = \sigma_X^2$ for $m = 0$, and is zero, otherwise.
(ii) $S_X(\omega) = \sum_{m=-\infty}^{\infty} R_X(m)\, e^{-i\omega m} = \sigma_X^2$ for any $\omega \in [-\pi, \pi)$.
(b)(10pts) Assume that $X(k)$ is zero mean non-white noise with variance $\sigma_X^2 = 9$. Specifically, assume that:

$$X(k) = 0.8\, X(k-1) + W(k) \qquad (1)$$

where $W(k)$ is a white noise process. Find the numerical value for $\sigma_W^2$ in the following manner: First, multiply (1) by $X(k)$, and then take the expected value of this equation. Second, multiply (1) by $X(k-m)$ for $m \ge 1$, and then take the expected value of this equation. For $m = 1$ you should end up with two equations in the unknowns $R_X(1)$ and $\sigma_W^2$, from which you can easily arrive at a numerical value for $\sigma_W^2$.
Solution: Let $\alpha = 0.8$.
Step 1: $E(X_k^2) = \alpha E(X_k X_{k-1}) + E(X_k W_k)$, where $E(X_k W_k) = E[(\alpha X_{k-1} + W_k) W_k] = \alpha E(W_k X_{k-1}) + E(W_k^2) = \sigma_W^2$. Hence,

$$R_X(0) = \sigma_X^2 = \alpha R_X(1) + \sigma_W^2. \qquad (2a)$$

Step 2: $E(X_k X_{k-m}) = \alpha E(X_{k-1} X_{k-m}) + E(X_{k-m} W_k)$. Hence, $R_X(m) = \alpha R_X(m-1)$. In particular:

$$R_X(1) = \alpha\, \sigma_X^2. \qquad (2b)$$

Substituting (2b) into (2a) gives: $\sigma_X^2 = \alpha^2 \sigma_X^2 + \sigma_W^2$. Hence: $\sigma_W^2 = (1 - \alpha^2)\sigma_X^2 = 0.36(9) = 3.24$.
(c)(15pts) Overlay plots of a sample realization of $\{X(k)\}_{k=1}^{1000}$ for each model. Then comment on how they visually differ. [Note: Include your code in the Appendix. Also, choose the initial condition so that the process is, indeed, wss.]
Solution:

[Plot: Sample Partial Realizations of X1 and X2; amplitude roughly −15 to 15, versus k = 0 to 1000.]
Figure 4.1 Plots of an n=1000 partial realization for X1 (model 1) and X2 (model 2).
COMMENT: The model 2 data varies more slowly than the model 1 data.
Problem 5. (20pts) This problem addresses the data generated in Problem 4 in greater detail. Recall that the autocorrelation function for a wss process is defined as: $R_X(m) \triangleq E(X_k X_{k+m})$. The process is said to be ergodic if:

$$\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} X_k X_{k+m} \;\overset{pr}{=}\; R_X(m) \qquad (5.1)$$

where the equality in (5.1) is in probability. The finite-$n$ quantity $\widehat{R}_X(m) = \frac{1}{n} \sum_{k=1}^{n-m} X_k X_{k+m}$ is called the lagged-product estimator of $R_X(m)$.

(a)(15pts) Write your own Matlab code for computing $\widehat{R}_X(m)$. Then use it to obtain plots of $\{\widehat{R}_X(m)\}_{m=-20}^{20}$ for each data set in Problem 4. Then overlay plots of $\{R_X(m)\}_{m=-20}^{20}$.
Solution:

[Plot: Plots of True (dashed) & Estimated (solid) Acorrs versus lag, $-20 \le m \le 20$.]
Figure 5.1 Plots of the lagged-product autocorrelations (solid lines) and true autocorrelations (dashed lines) for the models in Problem 4.
(b)(5pts) Discuss how the plots give a more rigorous basis to your comment in Problem 4(c).
Discussion: The autocorrelation plot for Model #2 dies out slowly, indicating that nearby pairs of random variables are more correlated with each other. This correlation is evidenced by a more slowly varying time series.
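The lagged-product estimator used above can be mirrored in Python (the homework's own Matlab version is the LaggedAcorr routine in the Appendix). This sketch implements the same formula $\widehat{R}_X(m) = \frac{1}{n}\sum_{k=1}^{n-m} z_k z_{k+m}$, with the negative lags filled in by symmetry, and checks it against a short hand-computed sequence.

```python
def lagged_acorr(z, nlags):
    # Lagged-product estimate of R(m) for m = -nlags..nlags, using the same
    # convention as the Matlab LaggedAcorr routine: divide by n, not n - m.
    n = len(z)
    R0 = sum(v * v for v in z) / n
    Rp = [sum(z[k] * z[k + m] for k in range(n - m)) / n
          for m in range(1, nlags + 1)]
    mvec = list(range(-nlags, nlags + 1))
    return list(reversed(Rp)) + [R0] + Rp, mvec

# Hand check on z = [1, 2, 3, 4]:
#   R(0) = (1+4+9+16)/4 = 7.5,  R(1) = (2+6+12)/4 = 5.0,  R(2) = (3+8)/4 = 2.75
Rhat, mvec = lagged_acorr([1, 2, 3, 4], 2)
print(Rhat)   # [2.75, 5.0, 7.5, 5.0, 2.75]
```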
Appendix

```matlab
%PROGRAM NAME: hw2.m
%PROBLEM 1:
varN=.01; BW=10^7;   % BW = 10 MHz, per the problem statement
taup=10^-9:10^-9:5*10^-6;
taun = -fliplr(taup);
tau = [taun , taup];
d = 2*pi*BW*tau;
R = varN*sin(d)./d;
figure(1)
plot(tau,R)
title('Noise Autocorrelation Function')
xlabel('Time (sec)')
grid 'minor'
pause
%================================
%PROBLEM 2:
w=-20:.015:20;
Sa = sin(w)./w;
figure(2)
plot(w,Sa)
title('Plot of Sa(w)')
xlabel('Frequency, w')
grid
pause
%================================
%PROBLEM 4:
varX=9; a=.8; varW=(1-a^2)*varX;
X1 = normrnd(0,varX^.5,1000,1);
X2 = zeros(1000,1);
X2(1) = normrnd(0,varX^.5,1,1);   % wss initial condition
W = normrnd(0,varW^.5,1000,1);
for k = 2:1000
    X2(k) = a*X2(k-1) + W(k);
end
K = 1:1000; K=K';
figure(3)
plot(K,[X1,X2])
legend('x1','x2')
xlabel('k')
title('Sample Partial Realizations of X1 and X2')
grid
pause
%================================
%PROBLEM 5:
nlags = 20;
[Rhat1,mvec]=LaggedAcorr(X1,nlags);
[Rhat2,mvec]=LaggedAcorr(X2,nlags);
figure(4)
plot(mvec,[Rhat1 , Rhat2])
R1p = zeros(nlags,1);
R1 = [R1p;varX;R1p];
mpwr = 1:nlags; mpwr = mpwr';
R2p = varX*(a*ones(nlags,1)).^mpwr;
R2 = [flipud(R2p);varX;R2p];
hold on
plot(mvec,[R1 , R2],'--')
title('Plots of True (dashed) & Estimated (solid) Acorrs')
xlabel('Lag')
grid
hold off

function [Rhat,mvec]=LaggedAcorr(z,nlags)
%Lagged-Product Estimate of R:
n = length(z);
M = nlags;
mp = 1:M;
mvec = [-fliplr(mp),0,mp]';
R0 = mean(z.^2);
Rp = zeros(M,1);
for m = 1:M
    z1=z(1:n-m); zm = z(m+1:n);
    Rp(m)=((n-m)/n)*mean(z1.*zm);
end
Rhat = [flipud(Rp);R0;Rp];
end
```