
Exam #3 Review
Chapter 2: Random Variables
Mean, Moments, Variance, Expected Value Operator
Chapter 3: Several Random Variables
Multiple random variables, Independence, Correlation Coefficient
Chapter 5: Random Processes
Random Processes, Wide Sense Stationary
Chapter 6: Correlation Functions
Sections
6-1 Introduction
6-2 Example: Autocorrelation Function of a Binary Process
6-3 Properties of Autocorrelation Functions
6-4 Measurement of Autocorrelation Functions
6-5 Examples of Autocorrelation Functions
6-6 Crosscorrelation Functions
6-7 Properties of Crosscorrelation Functions
6-8 Example and Applications of Crosscorrelation Functions
6-9 Correlation Matrices for Sampled Functions
Chapter 7: Spectral Density
Sections
7-1 Introduction
7-2 Relation of Spectral Density to the Fourier Transform
    Wiener-Khinchine Relationship
7-3 Properties of Spectral Density
7-4 Spectral Density and the Complex Frequency Plane
7-5 Mean-Square Values From Spectral Density
7-6 Relation of Spectral Density to the Autocorrelation Function
7-7 White Noise
    Contour Integration (Appendix I)
7-8 Cross-Spectral Density
7-9 Autocorrelation Function Estimate of Spectral Density
7-10 Periodogram Estimate of Spectral Density
7-11 Examples and Application of Spectral Density
Homework problems
Previous homework problem solutions as examples – Dr. Severance’s Skill Examples
Skills #6
Skills #7
Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System
Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford Press, 1999. ISBN: 0-19-512354-9.
B.J. Bazuin, Spring 2015
1 of 20
ECE 3800
Chapter 6: Correlation Functions
6-1. The Autocorrelation Function
For a sample function defined by samples in time of a random process, how alike are the
different samples?
Define: $X_1 = X(t_1)$ and $X_2 = X(t_2)$
The autocorrelation is defined as:
$$R_{XX}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 \; x_1 x_2 \, f(x_1, x_2)$$
The above function is valid for all processes, stationary and non-stationary.
For WSS processes:
R XX t1 , t 2   E X t X t     R XX  
If the process is ergodic, the time average is equivalent to the probabilistic expectation, or
$$\Re_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,x(t+\tau)\,dt = \langle x(t)\,x(t+\tau) \rangle$$
and
$$\Re_{XX}(\tau) = R_{XX}(\tau)$$
As a note for things you've been computing, the "zeroth lag of the autocorrelation" is
$$R_{XX}(t_1, t_1) = R_{XX}(0) = E[X_1 X_1] = E[X_1^2] = \int_{-\infty}^{\infty} x_1^2\,f(x_1)\,dx_1 = \sigma_X^2 + \bar{X}^2$$
$$\Re_{XX}(0) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2\,dt = \langle x(t)^2 \rangle$$
6-3. Properties of Autocorrelation Functions
1)
$$R_{XX}(0) = E[X^2] = \overline{X^2} \quad\text{or}\quad \Re_{XX}(0) = \langle x(t)^2 \rangle$$
The mean squared value of the random process can be obtained by observing the zeroth lag of
the autocorrelation function.
2)
$$R_{XX}(\tau) = R_{XX}(-\tau)$$
The autocorrelation function is an even function in time. Only positive (or negative) lags need
to be computed for an ergodic WSS random process.
3)
$$|R_{XX}(\tau)| \le R_{XX}(0)$$
The autocorrelation function is a maximum at the zeroth lag. For periodic functions, other lags
may equal the zeroth lag, but never exceed it.
4)
If X has a DC component, then $R_{XX}$ has a constant term.
$$X(t) = \bar{X} + N(t)$$
$$R_{XX}(\tau) = \bar{X}^2 + R_{NN}(\tau)$$
Note that the mean value can be computed from the autocorrelation function constants!
5)
If X has a periodic component, then Rxx will also have a periodic component of the same
period.
Think of:
$$X(t) = A \cos(\omega t + \theta), \quad 0 \le \theta \le 2\pi$$
where A and ω are known constants and θ is a uniform random variable.
$$R_{XX}(\tau) = E[X(t)\,X(t+\tau)] = \frac{A^2}{2} \cos(\omega \tau)$$
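As a quick numerical sketch (not from the text, assuming NumPy is available), the ensemble average over the uniform phase can be checked by Monte Carlo; the values of A, ω, t, and τ below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of R_XX(tau) = (A^2/2) cos(w*tau) for
# X(t) = A cos(w t + theta), theta ~ Uniform[0, 2pi).
rng = np.random.default_rng(0)
A, w = 2.0, 2 * np.pi * 5.0
t, tau = 0.3, 0.1

theta = rng.uniform(0.0, 2 * np.pi, size=200_000)
samples = (A * np.cos(w * t + theta)) * (A * np.cos(w * (t + tau) + theta))
R_est = samples.mean()                       # ensemble average over theta
R_theory = (A**2 / 2) * np.cos(w * tau)      # the result stated above
assert abs(R_est - R_theory) < 0.05
```

Note that the result does not depend on the absolute time t, only on the lag τ, which is exactly the WSS behavior claimed above.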
5b)
For signals that are the sum of independent random processes, the autocorrelation is the
sum of the individual autocorrelation functions (plus a constant when the means are nonzero).
$$W(t) = X(t) + Y(t)$$
$$R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2\,\bar{X}\,\bar{Y}$$
For non-zero mean functions (let w, x, y be zero mean and W, X, Y have a mean):
$$R_{WW}(\tau) = R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + \bar{X}^2 + R_{yy}(\tau) + \bar{Y}^2 + 2\,\bar{X}\,\bar{Y}$$
$$R_{WW}(\tau) = R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + R_{yy}(\tau) + (\bar{X} + \bar{Y})^2$$
Then we have
$$\bar{W}^2 = (\bar{X} + \bar{Y})^2 \quad\text{and}\quad R_{ww}(\tau) = R_{xx}(\tau) + R_{yy}(\tau)$$
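A minimal numerical sketch (not from the text, assuming NumPy) of the zero-lag case: for independent X and Y with nonzero means, the second moment of W = X + Y splits exactly as derived above. The means and variances chosen are arbitrary.

```python
import numpy as np

# Check R_WW(0) = R_XX(0) + R_YY(0) + 2*mx*my for W = X + Y,
# X and Y independent with means mx and my.
rng = np.random.default_rng(1)
mx, my = 1.5, -0.5
x = mx + rng.normal(0, 1.0, 500_000)   # mean mx, variance 1
y = my + rng.normal(0, 2.0, 500_000)   # mean my, variance 4
w = x + y

Rxx0 = np.mean(x * x)                  # zero-lag autocorrelations
Ryy0 = np.mean(y * y)
Rww0 = np.mean(w * w)
assert abs(Rww0 - (Rxx0 + Ryy0 + 2 * mx * my)) < 0.05
```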
6)
If X is ergodic and zero mean and has no periodic component, then we expect
$$\lim_{|\tau| \to \infty} R_{XX}(\tau) = 0$$
7)
Autocorrelation functions cannot have an arbitrary shape. One way of specifying the permissible
shapes is in terms of the Fourier transform of the autocorrelation function. That is, if
$$\mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau$$
then the restriction states that
$$\mathcal{F}\{R_{XX}(\tau)\} \ge 0 \quad \text{for all } \omega$$
Additional concept:
$$X(t) = a \cdot N(t)$$
$$R_{XX}(\tau) = a^2\,E[N(t)\,N(t+\tau)] = a^2\,R_{NN}(\tau)$$
6-6. The Crosscorrelation Function
For two sample functions defined by samples in time of two random processes, how alike are
the different samples?
X 1  X t1  and Y2  Y t 2 
Define:
The cross-correlation is defined as:
R XY t1 , t 2   E  X 1Y2  
RYX t1 , t 2   E Y1 X 2  








 dx1  dy2  x1 y2 f x1, y2 
 dy1  dx2  y1x2 f  y1, x2 
The above function is valid for all processes, jointly stationary and non-stationary.
For jointly WSS processes:
R XY t1 , t 2   E X t Y t     R XY  
RYX t1 , t 2   EY t X t     RYX  
Note: the order of the subscripts is important for cross-correlation!
If the processes are jointly ergodic, the time average is equivalent to the probabilistic
expectation, or
$$\Re_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,y(t+\tau)\,dt = \langle x(t)\,y(t+\tau) \rangle$$
$$\Re_{YX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y(t)\,x(t+\tau)\,dt = \langle y(t)\,x(t+\tau) \rangle$$
and
$$\Re_{XY}(\tau) = R_{XY}(\tau), \qquad \Re_{YX}(\tau) = R_{YX}(\tau)$$
6-7. Properties of Crosscorrelation Functions
1)
The properties of the zeroth lag have no particular significance and do not represent
mean-square values. It is true that the "ordered" crosscorrelations must be equal at 0:
$$R_{XY}(0) = R_{YX}(0) \quad\text{or}\quad \Re_{XY}(0) = \Re_{YX}(0)$$
2)
Crosscorrelation functions are not generally even functions. However, there is an
antisymmetry to the ordered crosscorrelations:
R XY    RYX   
For
$$\Re_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,y(t+\tau)\,dt = \langle x(t)\,y(t+\tau) \rangle$$
substitute $\eta = t + \tau$:
$$\Re_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T+\tau}^{T+\tau} x(\eta - \tau)\,y(\eta)\,d\eta = \langle x(\eta - \tau)\,y(\eta) \rangle = \langle y(\eta)\,x(\eta - \tau) \rangle = \Re_{YX}(-\tau)$$
3)
The crosscorrelation does not necessarily have its maximum at the zeroth lag. This makes
sense if you are correlating a signal with a time-delayed version of itself. The crosscorrelation
should be a maximum when the lag equals the time delay! It can be shown, however, that
$$|R_{XY}(\tau)| \le \sqrt{R_{XX}(0)\,R_{YY}(0)}$$
As a note, the crosscorrelation may not achieve the maximum anywhere ...
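The delayed-copy case can be sketched numerically (an illustration, not from the text, assuming NumPy): the crosscorrelation of a signal with a delayed copy of itself peaks at the lag equal to the delay, not at lag 0.

```python
import numpy as np

# Crosscorrelate x(t) with y(t) = x(t - delay); the peak lag locates
# the delay. N and delay are arbitrary illustrative values.
rng = np.random.default_rng(3)
N, delay = 2000, 37
s = rng.normal(0, 1, N + 100)
x = s[100:100 + N]                       # x(t)
y = s[100 - delay:100 - delay + N]       # y(t) = x(t - delay)

lags = np.arange(0, 100)
Rxy = np.array([np.mean(x[:N - k] * y[k:]) for k in lags])
assert lags[np.argmax(Rxy)] == delay
```

This is the basic idea behind correlation-based time-delay (and range) estimation.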
4)
If X and Y are statistically independent, then the ordering is not important:
$$R_{XY}(\tau) = E[X(t)\,Y(t+\tau)] = E[X(t)]\,E[Y(t+\tau)] = \bar{X}\,\bar{Y}$$
and
$$R_{XY}(\tau) = \bar{X}\,\bar{Y} = R_{YX}(\tau)$$
5)
If X is a stationary random process and is differentiable with respect to time, the
crosscorrelation of the signal and its derivative is given by
$$R_{X\dot{X}}(\tau) = \frac{d R_{XX}(\tau)}{d\tau}$$
Defining the derivative as a limit:
$$\dot{X}(t) = \lim_{\varepsilon \to 0} \frac{X(t+\varepsilon) - X(t)}{\varepsilon}$$
and the crosscorrelation
$$R_{X\dot{X}}(\tau) = E[X(t)\,\dot{X}(t+\tau)] = E\left[ X(t) \cdot \lim_{\varepsilon \to 0} \frac{X(t+\tau+\varepsilon) - X(t+\tau)}{\varepsilon} \right]$$
$$R_{X\dot{X}}(\tau) = \lim_{\varepsilon \to 0} \frac{E[X(t)\,X(t+\tau+\varepsilon)] - E[X(t)\,X(t+\tau)]}{\varepsilon}$$
$$R_{X\dot{X}}(\tau) = \lim_{\varepsilon \to 0} \frac{R_{XX}(\tau+\varepsilon) - R_{XX}(\tau)}{\varepsilon} = \frac{d R_{XX}(\tau)}{d\tau}$$
Similarly,
$$R_{\dot{X}\dot{X}}(\tau) = -\frac{d^2 R_{XX}(\tau)}{d\tau^2}$$
6-4. Measurement of the Autocorrelation Function
We love to use time averages for everything. For wide-sense stationary, ergodic random
processes, time averages are equivalent to statistical or probability-based values.
$$\Re_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,x(t+\tau)\,dt = \langle x(t)\,x(t+\tau) \rangle$$
Using this fact, how can we use short-term time averages to generate auto- or cross-correlation
functions?
An estimate of the autocorrelation is defined as:
$$\hat{R}_{XX}(\tau) = \frac{1}{T - \tau} \int_{0}^{T - \tau} x(t)\,x(t+\tau)\,dt$$
Note that the time average is performed across as much of the signal as is available after the
time shift by τ.
For τ based on the available time step, $k\,\Delta t$, with N equating to the available time interval, we
have:
$$\hat{R}_{XX}(k\,\Delta t) = \frac{1}{(N+1)\Delta t - k\,\Delta t} \sum_{i=0}^{N-k} x(i\,\Delta t)\,x(i\,\Delta t + k\,\Delta t)\,\Delta t$$
$$\hat{R}_{XX}(k\,\Delta t) = \hat{R}_{XX}(k) = \frac{1}{N+1-k} \sum_{i=0}^{N-k} x(i)\,x(i+k)$$
In computing this autocorrelation, the initial weighting term approaches 1 when k=N. At this
point the entire summation consists of one point and is therefore a poor estimate of the
autocorrelation. For useful results, k<<N!
As noted, the validity of each of the summed autocorrelation lags can and should be brought into
question as k approaches N. As a result, a biased estimate of the autocorrelation is commonly
used. The biased estimate is defined as:
$$\tilde{R}_{XX}(k) = \frac{1}{N+1} \sum_{i=0}^{N-k} x(i)\,x(i+k)$$
Here, a constant weight instead of one based on the number of elements summed is used. This
estimate has the property that the estimated autocorrelation should decrease as k approaches N.
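The two estimators above can be sketched directly (an illustration, not from the text, assuming NumPy): both divide the same lag sums, the unbiased one by the number of summed terms and the biased one by a constant, so they agree at lag 0 and the biased one shrinks the unreliable high lags.

```python
import numpy as np

# Unbiased (1/(N+1-k)) and biased (1/(N+1)) autocorrelation
# estimates from the text, for N+1 samples x[0..N].
def autocorr_estimates(x):
    Np1 = len(x)                          # N + 1 samples
    k = np.arange(Np1)
    sums = np.array([np.sum(x[:Np1 - kk] * x[kk:]) for kk in k])
    unbiased = sums / (Np1 - k)           # divide by number of terms
    biased = sums / Np1                   # divide by a constant weight
    return unbiased, biased

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 200)
u, b = autocorr_estimates(x)

assert abs(u[0] - b[0]) < 1e-12                 # identical at lag 0
assert abs(b[-1]) <= abs(u[-1]) + 1e-12         # biased shrinks high lags
assert np.all(np.abs(b) <= b[0] + 1e-12)        # biased keeps |R| <= R(0)
```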
6-2. Autocorrelation of a Binary Process
Binary communications signals are discrete, non-deterministic signals with plenty of random
variables involved in their generation.
Nominally, a binary bit lasts for a defined period of time, $t_a$, but initially occurs at a random
delay (uniformly distributed across one bit interval, $0 \le t_0 < t_a$). The amplitude may be a fixed
constant or a random variable.
[Figure: a bipolar binary waveform taking values +A and -A, with bit duration $t_a$, random offset $t_0$, and bits labeled bit(0) through bit(4).]
The bit values are independent, identically distributed with probability p that amplitude is A and
q=1-p that amplitude is –A.
$$f(t_0) = \frac{1}{t_a}, \quad \text{for } 0 \le t_0 \le t_a$$
Determine the autocorrelation of the bipolar binary sequence, assuming p=0.5.
$$x(t) = \sum_{k=-\infty}^{\infty} a_k \, \mathrm{rect}\!\left( \frac{t - t_0 - \frac{t_a}{2} - k\,t_a}{t_a} \right)$$
$$R_{XX}(t_1, t_2) = E[X(t_1)\,X(t_2)]$$
For samples more than one period apart, $|t_1 - t_2| > t_a$, we must consider
$$E[a_k a_j] = p^2 A^2 + p(1-p)(-A^2) + (1-p)p(-A^2) + (1-p)^2 A^2$$
$$E[a_k a_j] = A^2 \left( p^2 - 2p(1-p) + (1-p)^2 \right)$$
$$E[a_k a_j] = A^2 \left( 4p^2 - 4p + 1 \right) = A^2 (2p - 1)^2$$
For p = 0.5
$$E[a_k a_j] = A^2 \left( 4p^2 - 4p + 1 \right) = 0$$
For samples within one period, $|t_1 - t_2| < t_a$,
$$E[a_k a_k] = p \cdot A^2 + (1-p)(-A)^2 = A^2$$
$$E[a_k a_{k+1}] = A^2 \left( 4p^2 - 4p + 1 \right) = 0$$
There are two regions to consider: the overlap with the same bit and the overlap with the
adjacent bit. Averaged over the uniform offset $t_0$, the same-bit overlap is triangular. Therefore
$$R_{XX}(\tau) = \frac{1}{t_a} \int_{-t_a/2}^{t_a/2+\tau} E[a_k a_k]\,dt + \frac{1}{t_a} \int_{t_a/2+\tau}^{t_a/2} E[a_k a_{k\pm1}]\,dt, \quad \text{for } -t_a \le \tau \le 0$$
$$R_{XX}(\tau) = \frac{1}{t_a} \int_{\tau - t_a/2}^{t_a/2} E[a_k a_k]\,dt + \frac{1}{t_a} \int_{t_a/2}^{\tau + t_a/2} E[a_k a_{k\pm1}]\,dt, \quad \text{for } 0 \le \tau \le t_a$$
For p = 0.5 the adjacent-bit terms are zero, so
$$R_{XX}(\tau) = A^2 \cdot \frac{1}{t_a} \int_{-t_a/2}^{t_a/2+\tau} dt = A^2 \, \frac{t_a + \tau}{t_a}, \quad \text{for } -t_a \le \tau \le 0$$
$$R_{XX}(\tau) = A^2 \cdot \frac{1}{t_a} \int_{\tau - t_a/2}^{t_a/2} dt = A^2 \, \frac{t_a - \tau}{t_a}, \quad \text{for } 0 \le \tau \le t_a$$
or, recognizing the structure,
$$R_{XX}(\tau) = A^2 \left( 1 - \frac{|\tau|}{t_a} \right), \quad \text{for } -t_a \le \tau \le t_a$$
This is simply a triangular function with maximum $A^2$, extending for a full bit period in both
time directions.
For unequal bit probability
$$R_{XX}(\tau) = A^2 \left[ (4p^2 - 4p + 1) + 4p(1-p)\,\frac{t_a - |\tau|}{t_a} \right], \quad \text{for } -t_a \le \tau \le t_a$$
$$R_{XX}(\tau) = A^2 \, (4p^2 - 4p + 1), \quad \text{for } t_a \le |\tau|$$
When there are more of one bit than the other, there is always a positive correlation between
bits (the curve is a minimum for p = 0.5), and it peaks to $A^2$ at τ = 0.
Note that if the amplitude is a random variable, the expected values of the bit products must be
further evaluated, such as
$$E[a_k a_k] = \sigma_A^2 + \mu_A^2$$
$$E[a_k a_{k+1}] = \mu_A^2$$
In general, the autocorrelation of communications signal waveforms is important, particularly
when we discuss the power spectral density later in the textbook.
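The triangular result can be checked by simulation (an illustrative sketch, not from the text, assuming NumPy); the samples-per-bit and bit-count values are arbitrary simulation choices.

```python
import numpy as np

# Monte Carlo check of R_XX(tau) = A^2 (1 - |tau|/ta) for the
# bipolar binary process with p = 0.5.
rng = np.random.default_rng(5)
A, spb, nbits = 1.0, 20, 50_000          # bit duration ta = spb samples
bits = rng.choice([A, -A], size=nbits)   # equiprobable +A / -A bits
x = np.repeat(bits, spb)
x = x[rng.integers(0, spb):]             # random offset t0 in [0, ta)

N = len(x)
checks = []
for k, expected in [(0, 1.0), (spb // 2, 0.5), (spb, 0.0)]:
    R = np.mean(x[:N - k] * x[k:])       # sample autocorrelation at lag k
    checks.append(abs(R - A**2 * expected))
assert max(checks) < 0.03
```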
Example Cross-correlation (Fig_6_9)
A signal is measured in the presence of Gaussian noise having a bandwidth of 50 Hz and a
standard deviation of sqrt(10).
y t   xt   nt 
For
xt   2  sin2    500  t   
where theta is uniformly distributed from [0,2pi].
The corresponding signal-to-noise ratio, SNR, (in terms of power) can be computed as
SNR 
A2
S Power
2 

N Power  N 2
2
2
2  1  0 .1
2
10
10
SNRdB  10  log10 0.1  10 dB
(a) Form the autocorrelation
(b) Form the crosscorrelation with a pure sinusoid, LOt   2  sin2    500  t 
[Figures: the Signal Plus Noise waveform over 0 to 0.5 s; its autocorrelation over lags -0.5 to 0.5 s; and its crosscorrelation with LO(t) over lags -0.5 to 0.5 s.]
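A simulation sketch of this example (not from the text, assuming NumPy). The sinusoid amplitude is taken as sqrt(2) to match the stated -10 dB SNR; the sample rate, record length, and the use of white (rather than 50 Hz band-limited) Gaussian noise are simulation assumptions.

```python
import numpy as np

# Sinusoid buried in noise, recovered by crosscorrelation with a
# noise-free local oscillator at the same frequency.
rng = np.random.default_rng(6)
fs, dur, f0 = 5000.0, 2.0, 500.0
t = np.arange(0, dur, 1 / fs)
theta = rng.uniform(0, 2 * np.pi)
y = np.sqrt(2) * np.sin(2 * np.pi * f0 * t + theta) \
    + rng.normal(0, np.sqrt(10), t.size)
lo = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)

N = t.size
Ryy0 = np.mean(y * y)                     # signal power + noise power
assert abs(Ryy0 - 11.0) < 1.0             # about 1 + 10

lags = np.arange(0, int(fs / f0))         # one period of lags
Rylo = np.array([np.mean(y[:N - k] * lo[k:]) for k in lags])
assert 0.8 < np.max(np.abs(Rylo)) < 1.2   # sinusoid of amplitude ~1
```

The crosscorrelation suppresses the noise (which is uncorrelated with the LO) and leaves a clean sinusoid, which is why the crosscorrelation plot looks so much cleaner than the raw record.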
Chapter 7: Spectral Density
Much of engineering employs frequency-domain methods for signal and system analysis.
Therefore, we are interested in frequency-domain concepts and analysis related to probability
and statistics!
7-2. Relation of Spectral Density to the Fourier Transform
This function is also defined as the spectral density function (or power spectral density) and is
defined for both ω and f as:
$$S_{YY}(\omega) = \lim_{T \to \infty} \frac{E\left[ |Y(\omega)|^2 \right]}{2T}$$
or
$$S_{YY}(f) = \lim_{T \to \infty} \frac{E\left[ |Y(f)|^2 \right]}{2T}$$
The 2nd moment based on the spectral densities is defined as:
$$\overline{Y^2} = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{YY}(\omega)\,d\omega$$
and
$$\overline{Y^2} = \int_{-\infty}^{\infty} S_{YY}(f)\,df$$
Note: The result is a power spectral density (in Watts/Hz), not a voltage spectrum as (in V/Hz)
that you would normally compute for a Fourier transform.
The Wiener-Khinchine Relationship
For WSS random processes, the autocorrelation function is time based and, for ergodic
processes, describes all sample functions in the ensemble! In these cases the Wiener-Khinchine
relation is valid, which allows us to perform the following.
$$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} E[X(t)\,X(t+\tau)]\,e^{-j\omega\tau}\,d\tau$$
For an ergodic process, we can use time-based processing to arrive at an equivalent result ...
$$\Re_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,x(t+\tau)\,dt = \langle x(t)\,x(t+\tau) \rangle$$
For $x(t) \leftrightarrow X(\omega)$, taking the Fourier transform of the time-average autocorrelation,
$$\mathcal{F}\{\Re_{XX}(\tau)\} = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \left[ \int_{-\infty}^{\infty} x(t+\tau)\,e^{-j\omega\tau}\,d\tau \right] dt = \lim_{T \to \infty} \frac{1}{2T}\,X(\omega) \int_{-T}^{T} x(t)\,e^{+j\omega t}\,dt$$
$$\mathcal{F}\{\Re_{XX}(\tau)\} = \lim_{T \to \infty} \frac{1}{2T}\,X(\omega)\,X(-\omega) = \lim_{T \to \infty} \frac{|X(\omega)|^2}{2T}$$
We can define a power spectral density for the ensemble as:
$$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau$$
Based on this definition, we also have
$$R_{XX}(\tau) = \mathcal{F}^{-1}\{S_{XX}(\omega)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega$$
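In discrete form, the Wiener-Khinchine sketch above becomes an exact identity: the DFT of the circular sample autocorrelation equals $|X(\omega)|^2/N$. A small check (illustrative, not from the text, assuming NumPy):

```python
import numpy as np

# DFT of the circular sample autocorrelation equals the periodogram.
rng = np.random.default_rng(7)
N = 128
x = rng.normal(0, 1, N)

X = np.fft.fft(x)
# circular sample autocorrelation r[k] = (1/N) sum_n x[n] x[n+k mod N]
r = np.array([np.mean(x * np.roll(x, -k)) for k in range(N)])
S = np.fft.fft(r)                      # transform of the autocorrelation
periodogram = np.abs(X) ** 2 / N       # |X(w)|^2 / N

assert np.allclose(S.real, periodogram, atol=1e-9)
assert np.max(np.abs(S.imag)) < 1e-9
```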
7-3. Properties of the Power Spectral Density
The power spectral density as a function is always
• real,
• positive,
• and an even function in ω.
As an even function, the PSD may be expected to have a polynomial form:
$$S_{XX}(\omega) = S_0 \, \frac{\omega^{2n} + a_{2n-2}\,\omega^{2n-2} + a_{2n-4}\,\omega^{2n-4} + \cdots + a_2\,\omega^2 + a_0}{\omega^{2m} + b_{2m-2}\,\omega^{2m-2} + b_{2m-4}\,\omega^{2m-4} + \cdots + b_2\,\omega^2 + b_0}$$
where m > n.
Notice the squared terms; any odd power would define an antisymmetric element that, by
definition and proof, cannot exist!
Finite property in frequency: the power spectral density must also approach zero as ω
approaches infinity. Therefore,
$$\lim_{\omega \to \infty} S_{XX}(\omega) = \lim_{\omega \to \infty} S_0 \, \frac{\omega^{2n} + a_{2n-2}\,\omega^{2n-2} + \cdots + a_0}{\omega^{2m} + b_{2m-2}\,\omega^{2m-2} + \cdots + b_0} = \lim_{\omega \to \infty} S_0 \, \frac{\omega^{2n}}{\omega^{2m}} = \lim_{\omega \to \infty} \frac{S_0}{\omega^{2(m-n)}} = 0$$
For m > n, the condition will be met.
7-6. Relation of Spectral Density to the Autocorrelation Function
The power spectral density as a function is always
• real,
• positive,
• and an even function in ω (or f).
You can convert between the domains using:
The Fourier transform in ω:
$$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau \qquad R_{XX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega$$
The Fourier transform in f:
$$S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j 2\pi f \tau}\,d\tau \qquad R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f)\,e^{j 2\pi f \tau}\,df$$
The 2-sided Laplace transform:
$$S_{XX}(s) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-s\tau}\,d\tau \qquad R_{XX}(\tau) = \frac{1}{j 2\pi} \int_{-j\infty}^{j\infty} S_{XX}(s)\,e^{s\tau}\,ds$$
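A standard transform pair consistent with the tables above is $R_{XX}(\tau) = e^{-\beta|\tau|} \leftrightarrow S_{XX}(\omega) = 2\beta/(\beta^2 + \omega^2)$. A numerical sketch (not from the text, assuming NumPy) confirms that integrating the f-domain density over all frequencies recovers $R_{XX}(0) = 1$:

```python
import numpy as np

# S_XX(f) = 2b / (b^2 + (2 pi f)^2); integrate over f to get R_XX(0).
b = 3.0
f = np.linspace(-2000.0, 2000.0, 4_000_001)
S = 2 * b / (b**2 + (2 * np.pi * f) ** 2)
R0 = np.sum(S) * (f[1] - f[0])        # Riemann-sum approximation
assert abs(R0 - 1.0) < 1e-3
```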
Deriving the Mean-Square Values from the Power Spectral Density
The mean squared value of a random process is equal to the zeroth lag of the autocorrelation:
$$E[X^2] = R_{XX}(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega \cdot 0}\,d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega)\,d\omega$$
$$E[X^2] = R_{XX}(0) = \int_{-\infty}^{\infty} S_{XX}(f)\,e^{j 2\pi f \cdot 0}\,df = \int_{-\infty}^{\infty} S_{XX}(f)\,df$$
Therefore, to find the second moment, integrate the PSD over all frequencies.
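The discrete analog of this rule is Parseval's relation: summing the per-bin spectral density over all DFT bins returns the mean-square value exactly. A small check (illustrative, not from the text, assuming NumPy):

```python
import numpy as np

# Sum of the per-bin density |X[m]|^2 / N^2 equals the mean-square value.
rng = np.random.default_rng(8)
x = rng.normal(0, 2, 1024)
Sxx = np.abs(np.fft.fft(x)) ** 2 / len(x) ** 2
assert abs(np.sum(Sxx) - np.mean(x**2)) < 1e-9
```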
7-8. The Cross-Spectral Density
The Fourier transform in ω:
$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\,e^{-j\omega\tau}\,d\tau \qquad\text{and}\qquad S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\,e^{-j\omega\tau}\,d\tau$$
$$R_{XY}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XY}(\omega)\,e^{j\omega\tau}\,d\omega \qquad\text{and}\qquad R_{YX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{YX}(\omega)\,e^{j\omega\tau}\,d\omega$$
Properties of the functions:
$$S_{XY}(\omega) = S_{YX}^{*}(\omega)$$
Since the cross-correlation is real,
• the real portion of the spectrum is even
• the imaginary portion of the spectrum is odd
7-7. White Noise
Noise is inherently defined as a random process. You may be familiar with “thermal” noise,
based on the energy of an atom and the mean-free path that it can travel.
• As a random process, whenever "white noise" is measured, the values are uncorrelated
with each other, no matter how close together the samples are taken in time.
• Further, we envision "white noise" as containing all spectral content, with no explicit
peaks or valleys in the power spectral density.
As a result, we define "white noise" as
$$R_{XX}(\tau) = S_0\,\delta(\tau)$$
$$S_{XX}(\omega) = S_0 = \frac{N_0}{2}$$
This is an approximation or simplification because the area of the power spectral density is
infinite!
For typical applications, we are interested in band-limited white noise, where
$$S_{XX}(f) = \begin{cases} S_0 = \dfrac{N_0}{2}, & |f| \le W \\ 0, & W < |f| \end{cases}$$
The equivalent noise power is then:
$$E[X^2] = R_{XX}(0) = \int_{-W}^{W} S_0\,df = 2\,W\,S_0 = 2\,W \cdot \frac{N_0}{2} = N_0\,W$$
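A direct numerical check of this result (illustrative, not from the text, assuming NumPy): integrating the rectangular density over frequency returns exactly $N_0 W$.

```python
import numpy as np

# Integrate S(f) = N0/2 over |f| <= W; the result should be N0 * W.
N0, W = 4.0, 50.0
f = np.linspace(-100.0, 100.0, 200_001)
S = np.where(np.abs(f) <= W, N0 / 2, 0.0)
power = np.sum(S) * (f[1] - f[0])        # Riemann-sum approximation
assert abs(power - N0 * W) < 0.1
```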
Determine the autocorrelation of the binary sequence.
$$x(t) = \sum_{k=-\infty}^{\infty} A_k\,p(t - t_0 - kT)$$
$$x(t) = p(t) * \sum_{k=-\infty}^{\infty} A_k\,\delta(t - t_0 - kT)$$
Determine the autocorrelation of the discrete-time sequence (leaving out the pulse for now):
$$y(t) = \sum_{k=-\infty}^{\infty} A_k\,\delta(t - t_0 - kT)$$
$$R_{yy}(\tau) = E[y(t)\,y(t+\tau)] = E\left[ \sum_{k=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} A_j A_k\,\delta(t - t_0 - kT)\,\delta(t + \tau - t_0 - jT) \right]$$
Separating the j = k terms from the j ≠ k terms:
$$R_{yy}(\tau) = \sum_{k=-\infty}^{\infty} E[A_k^2]\,E[\delta(t - t_0 - kT)\,\delta(t + \tau - t_0 - kT)] + \sum_{k=-\infty}^{\infty} \sum_{\substack{j=-\infty \\ j \ne k}}^{\infty} E[A_j A_k]\,E[\delta(t - t_0 - kT)\,\delta(t + \tau - t_0 - jT)]$$
Averaging over the uniformly distributed offset $t_0$:
$$R_{yy}(\tau) = E[A_k^2]\,\frac{1}{T}\,\delta(\tau) + E[A_k]^2\,\frac{1}{T} \sum_{\substack{m=-\infty \\ m \ne 0}}^{\infty} \delta(\tau - mT)$$
$$R_{yy}(\tau) = \left( E[A_k^2] - E[A_k]^2 \right) \frac{1}{T}\,\delta(\tau) + E[A_k]^2\,\frac{1}{T} \sum_{m=-\infty}^{\infty} \delta(\tau - mT) = \sigma_A^2\,\frac{1}{T}\,\delta(\tau) + \mu_A^2\,\frac{1}{T} \sum_{m=-\infty}^{\infty} \delta(\tau - mT)$$
Taking the Fourier transform:
$$S_{yy}(f) = \sigma_A^2\,\frac{1}{T} + \mu_A^2\,\frac{1}{T^2} \sum_{m=-\infty}^{\infty} \delta\!\left( f - \frac{m}{T} \right)$$
From here, it can be shown that
$$S_{xx}(f) = |P(f)|^2\,S_{yy}(f)$$
$$S_{xx}(f) = \frac{\sigma_A^2}{T}\,|P(f)|^2 + \frac{\mu_A^2}{T^2} \sum_{m=-\infty}^{\infty} \left| P\!\left( \frac{m}{T} \right) \right|^2 \delta\!\left( f - \frac{m}{T} \right)$$
This is a magnitude-scaled version of the power spectral density of the pulse shape, plus numerous
impulses with magnitudes shaped by the pulse at regular frequency intervals based on
the signal periodicity.
The result was pictured in the textbook.
Generic Example of a Discrete Spectral Density (p. 267)
X t   A  B  sin2  f1  t  1   C  cos2  f 2  t   2 
where the phase angles are uniformly distributed R.V from 0 to 2π.
R XX    E X t X t   
 X t   A  B  sin 2  f1  t  1   C  cos2  f 2  t   2 

 E

  X t   A  B  sin 2  f 1  t     1   C  cos2  f 2  t      2 
 A 2  AB  sin 2  f1  t  1   AB  sin 2  f 1  t     1   

 2

 B  sin 2  f 1  t  1   sin 2  f1  t     1 
 AC  cos2  f  t     AC  cos2  f  t        
2
2
2
2

R XX    E 
2

C  cos2  f 2  t   2   cos2  f 2  t      2  



 BC  sin 2  f1  t  1   cos2  f 2  t      2  

 BC  cos2  f 2  t   2   sin 2  f 1  t      1 

 A2 

 2
B  sin 2  f 1  t   1   sin 2  f 1  t     1  

R XX    E 
C 2  cos2  f 2  t   2   cos2  f 2  t      2   



 BC  sin 2  f1  t  1  2  f 2  t      2 
With practice, we can see that
1
1

R XX    A 2  B 2  E   cos2  f1    cos2  f1 2t     21 
2
2

1
1

 C 2  E   cos2  f 2    cos2  f 2 2t     2 2 
2
2

which lead to
B2
C2
R XX    A 
 cos2  f1    
 cos2  f 2   
2
2
2
Forming the PSD
And then taking the Fourier transform
$$S_{XX}(f) = A^2 \delta(f) + \frac{B^2}{2}\left[ \frac{1}{2}\delta(f - f_1) + \frac{1}{2}\delta(f + f_1) \right] + \frac{C^2}{2}\left[ \frac{1}{2}\delta(f - f_2) + \frac{1}{2}\delta(f + f_2) \right]$$
$$S_{XX}(f) = A^2 \delta(f) + \frac{B^2}{4}\big[ \delta(f - f_1) + \delta(f + f_1) \big] + \frac{C^2}{4}\big[ \delta(f - f_2) + \delta(f + f_2) \big]$$
We also know from before
$$\overline{X^2} = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega)\,d\omega = \int_{-\infty}^{\infty} S_{XX}(f)\,df$$
Therefore, the 2nd moment can be immediately computed as
$$\overline{X^2} = \int_{-\infty}^{\infty} \left( A^2 \delta(f) + \frac{B^2}{4}\big[ \delta(f - f_1) + \delta(f + f_1) \big] + \frac{C^2}{4}\big[ \delta(f - f_2) + \delta(f + f_2) \big] \right) df$$
$$\overline{X^2} = A^2 + \frac{B^2}{4} \cdot 2 + \frac{C^2}{4} \cdot 2 = A^2 + \frac{B^2}{2} + \frac{C^2}{2}$$
We can also see that
$$\bar{X} = E[A + B \sin(2\pi f_1 t + \theta_1) + C \cos(2\pi f_2 t + \theta_2)] = A$$
So,
$$\sigma^2 = \overline{X^2} - \bar{X}^2 = A^2 + \frac{B^2}{2} + \frac{C^2}{2} - A^2 = \frac{B^2}{2} + \frac{C^2}{2}$$
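The moments derived in this example can be confirmed by time-averaging a sampled realization (an illustrative sketch, not from the text, assuming NumPy); the amplitudes and frequencies below are arbitrary choices spanning many whole periods.

```python
import numpy as np

# For X(t) = A + B sin(2 pi f1 t + th1) + C cos(2 pi f2 t + th2):
# mean = A, mean-square = A^2 + B^2/2 + C^2/2.
rng = np.random.default_rng(10)
A, B, C, f1, f2 = 1.0, 2.0, 3.0, 5.0, 8.0
t = np.arange(0, 100, 0.001)              # many whole periods of both tones
th1, th2 = rng.uniform(0, 2 * np.pi, 2)
X = A + B * np.sin(2 * np.pi * f1 * t + th1) \
      + C * np.cos(2 * np.pi * f2 * t + th2)

assert abs(np.mean(X) - A) < 0.01
assert abs(np.mean(X**2) - (A**2 + B**2 / 2 + C**2 / 2)) < 0.05
```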
Example Exam Questions
1. [40 pts]
A random process is defined by X(t) = ___.
a.(10)
Derive the formula for the autocorrelation of X(t).
b.(10)
Find the spectral density of X(t).
c.(10)
Perform the crosscorrelation of X and Y for Y(t) = ___.
d.(10)
Interpret the results.
2. [40 pts]
Consider the random process with the spectral density graph shown below.
a.(10)
What is the autocorrelation function of this process?
b.(10)
What is the total average power of this process?
c.(10)
What is the average DC power of this process?
d.(10)
What is the power of the signal over the frequency range from (1) a to b Hz and (2) c
to d Hz?
3. [30 pts]
Consider the following system block diagram with input signal X(t) and output
signal Y(t). Further suppose that the input signal autocorrelation is $R_{XX}(\tau)$.
a.(10)
Estimate the mean and the standard deviation of the input X(t) from the
autocorrelation.
b.(10)
What is the autocorrelation of the output signal Y(t) in terms of
(1) the autocorrelation of X(t), $R_{XX}(\tau)$, and
(2) purely as a function of τ?
c.(10)
Estimate the mean and the standard deviation of the output Y(t).
4. [40 pts]
A stationary random process has a spectral density of $S_{XX}(\omega) = $ ___.
a.(10)
Find the autocorrelation function of the process.
b.(10)
Compute the total mean power of the signal.
c.(10)
Write an expression for the spectral density of a bandlimited white noise that has the
same power spectral density at w=0 and the same total mean power of the resulting
process.
d.(10)
Write an expression for the autocorrelation function of the bandlimited white noise
spectral density defined in part (c).