Chapter 6: Correlation Functions

6-1 Introduction
6-2 Example: Autocorrelation Function of a Binary Process
6-3 Properties of Autocorrelation Functions
6-4 Measurement of Autocorrelation Functions
6-5 Examples of Autocorrelation Functions
6-6 Crosscorrelation Functions
6-7 Properties of Crosscorrelation Functions
6-8 Example and Applications of Crosscorrelation Functions
6-9 Correlation Matrices for Sampled Functions
Concepts

Autocorrelation
  o of a Binary/Bipolar Process
  o of a Sinusoidal Process
Linear Transformations
Properties of Autocorrelation Functions
Measurement of Autocorrelation Functions
An Anthology of Signals
Crosscorrelation
Properties of Crosscorrelation
Examples and Applications of Crosscorrelation
Correlation Matrices For Sampled Functions
Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System
Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford Press, 1999. ISBN: 0-19-512354-9.
B.J. Bazuin, Spring 2015
The Autocorrelation Function
For a sample function defined by samples in time of a random process, how alike are the
different samples?
Define:
X 1  X t1  and X 2  X t 2 
The autocorrelation is defined as:
R XX t1 , t 2   E  X 1 X 2  




 dx1  dx2  x1x2 f x1, x2 
The above function is valid for all processes, stationary and non-stationary.
For WSS processes:
R XX t1 , t 2   E X t X t     R XX  
If the process is ergodic, the time average is equivalent to the probabilistic expectation, or
ℛ_XX(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t)·x(t+τ) dt = <x(t)·x(t+τ)>

and

ℛ_XX(τ) = R_XX(τ)
The mathematics require correlation, as compared to convolution. Note the sign of the shift inside the integral and the sum:

Convolution:

y(t) = ∫_{−∞}^{∞} h(λ)·x(t − λ) dλ   or   y(n) = Σ_{k=−∞}^{∞} h(k)·x(n − k)

Correlation:

y(t) = ∫_{−∞}^{∞} x(λ)·h(t + λ) dλ   or   y(n) = Σ_{k=−∞}^{∞} h(k)·x(n + k)
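The sign difference can be checked numerically. A short Python sketch (the arrays are illustrative, not from the text): numpy's convolve flips its second argument while correlate does not, so a delayed unit impulse pulls the filter taps out either forward or reversed.

```python
import numpy as np

# Illustrative arrays.  np.convolve flips its second argument
# (y(n) = sum_k h(k) x(n-k)); np.correlate slides without flipping
# (c(k) = sum_n x(n+k) h(n)).  A delayed unit impulse shows the difference.
h = np.array([1.0, 2.0, 3.0])
x = np.array([0.0, 1.0, 0.0, 0.0])        # unit impulse delayed by one sample

y_conv = np.convolve(h, x)                # h shifted to the impulse position
y_corr = np.correlate(x, h, mode='full')  # h reversed about the impulse
print(y_conv)   # [0. 1. 2. 3. 0. 0.]
print(y_corr)   # [0. 3. 2. 1. 0. 0.]
```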
Properties of Autocorrelation Functions
1) R_XX(0) = E[X²]   or   ℛ_XX(0) = <x(t)²>

The mean squared value of the random process can be obtained by observing the zeroth lag of the autocorrelation function.
R XX    R XX   
2)
The autocorrelation function is an even function in time. Only positive (or negative) needs to be
computed for an ergodic, WSS random process.
R XX    R XX 0
The autocorrelation function is a maximum at 0. For periodic functions, other values may equal
the zeroth lag, but never be larger.
4) If X has a DC component, then R_XX has a constant term:

X(t) = X̄ + N(t)
R_XX(τ) = X̄² + R_NN(τ)

Note that the mean value can be computed from the constant term of the autocorrelation function!
5) If X has a periodic component, then R_XX will also have a periodic component of the same period. For

X(t) = A·cos(ωt + θ),   0 ≤ θ ≤ 2π

where A and ω are known constants and θ is a uniform random variable,

R_XX(τ) = E[X(t)·X(t+τ)] = (A²/2)·cos(ωτ)
5b) For signals that are the sum of independent random processes, the autocorrelation is the sum of the individual autocorrelation functions plus a cross term:

W(t) = X(t) + Y(t)
R_WW(τ) = R_XX(τ) + R_YY(τ) + 2·X̄·Ȳ

For non-zero mean processes (let w, x, y be the zero-mean parts and W, X, Y have means),

W̄² = (X̄ + Ȳ)²
R_ww(τ) = R_xx(τ) + R_yy(τ)
6) If X is ergodic, zero mean, and has no periodic component, then

lim_{|τ|→∞} R_XX(τ) = 0

No formal proof is given. The logical argument is that samples of a non-deterministic random process eventually become uncorrelated as the separation grows, and once the samples are uncorrelated (and zero mean) the autocorrelation goes to zero.
7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying the permissible shapes is in terms of the Fourier transform of the autocorrelation function. That is, if

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ)·exp(−jωτ) dτ

then the restriction states that

S_XX(ω) ≥ 0   for all ω
Additional useful concepts, dealing with constants:

X(t) = a + N(t), with N zero mean:

R_XX(τ) = E[X(t)·X(t+τ)] = E[(a + N(t))·(a + N(t+τ))]
R_XX(τ) = a² + E[N(t)·N(t+τ)] = a² + R_NN(τ)

X(t) = a + Y(t):

R_XX(τ) = E[(a + Y(t))·(a + Y(t+τ))]
R_XX(τ) = E[a² + a·Y(t) + a·Y(t+τ) + Y(t)·Y(t+τ)]
R_XX(τ) = a² + a·E[Y(t)] + a·E[Y(t+τ)] + E[Y(t)·Y(t+τ)]
R_XX(τ) = a² + 2·a·E[Y] + R_YY(τ)
Autocorrelation of a Binary Process
Binary communications signals are discrete, non-deterministic signals with plenty of random variables involved in their generation.

Nominally, a binary bit lasts for a defined period of time, ta, but the sequence begins at a random delay t0 (uniformly distributed across one bit interval, 0 ≤ t0 < ta). The amplitude may be a fixed constant or a random variable.
[Figure: bipolar waveform bit(0)…bit(4) with amplitudes ±A, bit duration ta, and random offset t0]
The bit values are independent, identically distributed with probability p that amplitude is A and
q=1-p that amplitude is –A.
pdf(t0) = 1/ta,   for 0 ≤ t0 ≤ ta
Determine the autocorrelation of the bipolar binary sequence, assuming p = 0.5.

x(t) = Σ_{k=−∞}^{∞} a_k · Rect( (t − t0 − ta/2 − k·ta) / ta )

R_XX(t1, t2) = E[X(t1)·X(t2)]
For p = 0.5, the cross-bit expectation vanishes:

E[a_k·a_j] = A²·(4p² − 4p + 1) = 0,   for k ≠ j

and

R_XX(τ) = A²·(1 − |τ|/ta),   for −ta ≤ τ ≤ ta   (and 0 otherwise)

This is simply a triangular function with maximum A², extending for a full bit period in both time directions.
For unequal bit probabilities,

R_XX(τ) = A²·[ (1 − |τ|/ta) + (|τ|/ta)·(4p² − 4p + 1) ],   for −ta ≤ τ ≤ ta
R_XX(τ) = A²·(4p² − 4p + 1),   for ta ≤ |τ|

As there are more of one bit than the other, there is always a positive correlation between bits (the curve is a minimum for p = 0.5) that peaks to A² at τ = 0.
Note that if the amplitude is a random variable, the expected values of the bit products must be further evaluated. For example,

E[a_k·a_k] = σ_A² + μ_A²
E[a_k·a_{k±1}] = μ_A²
In general, the autocorrelation of communications signal waveforms is important, particularly
when we discuss the power spectral density later in the textbook.
Examples of discrete waveforms used for communications, signal processing, controls, etc.
(a) Unipolar RZ & NRZ, (b) Polar RZ & NRZ , (c) Bipolar NRZ , (d) Split-phase Manchester,
(e) Polar quaternary NRZ.
From Chap. 11: A. Bruce Carlson, P.B. Crilly, Communication Systems,
5th ed., McGraw-Hill, 2010. ISBN: 978-0-07-338040-7
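The triangular result can be reproduced by simulation. A Python sketch (bit period in samples, amplitude A = 1, and record length are assumed values) builds a bipolar NRZ wave with p = 0.5 and time-averages it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte-Carlo sketch (bit period in samples, A = 1, p = 0.5 are assumptions):
# the autocorrelation of a bipolar NRZ wave should be the triangle
# A**2 * (1 - |tau|/ta) inside one bit period and ~0 beyond it.
samples_per_bit = 50
nbits = 20000
bits = rng.choice([-1.0, 1.0], size=nbits)   # equally likely +/-A with A = 1
x = np.repeat(bits, samples_per_bit)         # NRZ waveform

def acorr(x, lag):
    n = len(x) - lag
    return np.mean(x[:n] * x[lag:lag + n])

r_zero = acorr(x, 0)                         # expect A**2 = 1
r_half = acorr(x, samples_per_bit // 2)      # expect 1 - 0.5 = 0.5
r_two  = acorr(x, 2 * samples_per_bit)       # expect ~0 (uncorrelated bits)
print(r_zero, r_half, r_two)
```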
Exercise 6-3.1
a) An ergodic random process has an autocorrelation function of the form
R XX    9  exp  4     16  cos 10     16
Find the mean-square value, the mean value, and the variance of the process.
The mean-square (2nd moment) is

E[X²] = R_XX(0) = 9 + 16 + 16 = 41 = σ² + μ²

The constant portion of the autocorrelation represents the square of the mean. Therefore

E[X]² = μ² = 16   and   μ = ±4

Finally, the variance can be computed as

σ² = E[X²] − E[X]² = R_XX(0) − μ² = 41 − 16 = 25
b) An ergodic random process has an autocorrelation function of the form
R_XX(τ) = (4τ² + 6) / (τ² + 1)
Find the mean-square value, the mean value, and the variance of the process.
The mean-square (2nd moment) is

E[X²] = R_XX(0) = 6/1 = 6 = σ² + μ²

The constant portion of the autocorrelation represents the square of the mean. Therefore

E[X]² = μ² = lim_{τ→∞} (4τ² + 6)/(τ² + 1) = 4   and   μ = ±2

Finally, the variance can be computed as

σ² = E[X²] − E[X]² = R_XX(0) − μ² = 6 − 4 = 2
Another autocorrelation example, using a time average

Example: Rect function

From Dr. Severance's notes:

x(t) = Σ_k A(k) · Rect( (t − t0 − kT) / T )
where the A(k) are zero-mean, identically distributed, independent R.V.s, t0 is a R.V. uniformly distributed over one bit period T, and T is a constant. k indexes the bit positions.

R_XX(t, t+τ) = E[ Σ_k Σ_j A(k)·A(j) · Rect( (t − t0 − kT)/T ) · Rect( (t + τ − t0 − jT)/T ) ]

R_XX(t, t+τ) = Σ_k Σ_j E[A(k)·A(j)] · E[ Rect( (t − t0 − kT)/T ) · Rect( (t + τ − t0 − jT)/T ) ]

but

E[A(k)·A(j)] = E[A²] = σ_A² + Ā²,   for k = j
E[A(k)·A(j)] = Ā²,                  for k ≠ j

If we assume that the symbol has a zero mean, Ā = 0, then the k ≠ j terms vanish. Therefore (one summation goes away)

R_XX(t, t+τ) = E[A²] · Σ_k E[ Rect( (t − t0 − kT)/T ) · Rect( (t + τ − t0 − kT)/T ) ]
Now using the time average concept,

ℛ_XX(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t)·x(t+τ) dt = <x(t)·x(t+τ)>
Establishing the time average as the average over the kth bit (notice that the delay is incorporated into the integral, and assuming that the signal is wide-sense stationary):
T t0
2
  T1  
 XX    E A 2 
T  t 0
2

 t 
1  Re ct  T



  dt

for T>>=0
T t0
2
  T1   1 1 dt  EA  T1  t 
 XX    E A 2 
2
 T  t 0 
2
T t0
2
 T  t 0 
2
  T1  T 2  t   T 2  t     EA  T1  T   
 
    E A  1  ,
for   0
 T
 XX    E A 2 
0
0
2
2
XX
Due to symmetry,  XX     XX    , we have
 
  
 XX    E A 2  1  
 T
Note: if the symbol has a mean, there are additional "triangles" at all the offset bit times nT. When the "triangles" are all summed, a DC level based on the square of the symbol mean results. Therefore, with

E[A_k·A_j] = E[A²] = σ_A² + Ā²,   for k = j
E[A_k·A_j] = Ā²,                  for k ≠ j

we have

ℛ_XX(τ) = (E[A²] − Ā²)·(1 − |τ|/T) + Ā² · Σ_n (1 − |τ − nT|/T)

which, for |τ| ≤ T, can be written as

ℛ_XX(τ) = (E[A²] − Ā²)·(1 − |τ|/T) + Ā²
Example: Rect function that does not extend for the full period

x(t) = Σ_k A(k) · Rect( (t − t0 − kT) / (αT) )

where the A(k) are zero-mean, identically distributed, independent R.V.s, t0 is a R.V. uniformly distributed over one bit period T, and T and α are constants. k indexes the bit positions.
From the previous example, the time average over one period is

ℛ_XX(τ) = E[A²] · (1/T) ∫_{−T/2+t0}^{T/2+t0} Rect(t/(αT)) · Rect((t+τ)/(αT)) dt

For this example, the period remains the same, but the integration limits set by the Rect functions change to ±αT/2 + t0. For αT ≥ τ ≥ 0, the rectangles overlap for αT − τ seconds, so

ℛ_XX(τ) = E[A²] · (1/T)·(αT − τ) = E[A²] · α·(1 − τ/(αT)),   for τ ≥ 0

Due to symmetry, we have

ℛ_XX(τ) = E[A²] · α·(1 − |τ|/(αT)),   for |τ| ≤ αT

Note: if the symbol has a mean, there are additional "triangles" at all the offset bit times nT:

ℛ_XX(τ) = (E[A²] − Ā²)·α·(1 − |τ|/(αT)) + Ā²·α · Σ_n (1 − |τ − nT|/(αT))
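The α-scaled triangle can also be checked by simulation. A Python sketch (duty cycle, period, and record length are assumed values) with zero-mean ±1 amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte-Carlo sketch (all parameters are illustrative assumptions): zero-mean
# +/-1 pulses occupying a fraction alpha of each bit period T.  The
# time-average autocorrelation should be E[A^2]*alpha*(1 - |tau|/(alpha*T))
# for |tau| <= alpha*T: a triangle of peak alpha and half-width alpha*T.
T = 100                        # samples per bit period
alpha = 0.4
width = int(alpha * T)         # pulse occupies 40 of the 100 samples
nbits = 5000
bits = rng.choice([-1.0, 1.0], size=nbits)
template = np.concatenate([np.ones(width), np.zeros(T - width)])
x = np.kron(bits, template)    # pulse train with duty cycle alpha

def acorr(x, lag):
    n = len(x) - lag
    return np.mean(x[:n] * x[lag:lag + n])

r0 = acorr(x, 0)               # expect E[A^2]*alpha = 0.4
r_half = acorr(x, width // 2)  # expect 0.4 * (1 - 0.5) = 0.2
r_edge = acorr(x, width)       # expect 0 at |tau| = alpha*T
print(r0, r_half, r_edge)
```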
The Crosscorrelation Function
For two sample functions defined by samples in time of two random processes, how alike are the different samples?
Define:

X1 = X(t1) and Y2 = Y(t2)

The crosscorrelation is defined as:

R_XY(t1, t2) = E[X1·Y2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1·y2·f(x1, y2) dx1 dy2
R_YX(t1, t2) = E[Y1·X2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y1·x2·f(y1, x2) dy1 dx2
The above function is valid for all processes, jointly stationary and non-stationary.
For jointly WSS processes:

R_XY(t1, t2) = E[X(t)·Y(t+τ)] = R_XY(τ)
R_YX(t1, t2) = E[Y(t)·X(t+τ)] = R_YX(τ)
Note: the order of the subscripts is important for cross-correlation!
If the processes are jointly ergodic, the time average is equivalent to the probabilistic
expectation, or
ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t)·y(t+τ) dt = <x(t)·y(t+τ)>
ℛ_YX(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} y(t)·x(t+τ) dt = <y(t)·x(t+τ)>

and

ℛ_XY(τ) = R_XY(τ)
ℛ_YX(τ) = R_YX(τ)
Properties of Crosscorrelation Functions
1) The value at the zeroth lag has no particular significance and does not represent a mean-square value. It is true, however, that the "ordered" crosscorrelations must be equal at τ = 0:

R_XY(0) = R_YX(0)   or   ℛ_XY(0) = ℛ_YX(0)
2) Crosscorrelation functions are not generally even functions. However, there is an antisymmetry to the ordered crosscorrelations:

R_XY(τ) = R_YX(−τ)

For

ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t)·y(t+τ) dt

substitute λ = t + τ:

ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(λ − τ)·y(λ) dλ = lim_{T→∞} (1/2T) ∫_{−T}^{T} y(λ)·x(λ − τ) dλ = ℛ_YX(−τ)
3) The crosscorrelation does not necessarily have its maximum at the zeroth lag. This makes sense if you are correlating a signal with a time-delayed version of itself: the crosscorrelation should be a maximum when the lag equals the time delay! It can be shown, however, that

|R_XY(τ)| ≤ sqrt( R_XX(0)·R_YY(0) )

As a note, the crosscorrelation may not achieve this maximum anywhere.
4) If X and Y are statistically independent, then the ordering is not important:

R_XY(τ) = E[X(t)·Y(t+τ)] = E[X(t)]·E[Y(t+τ)] = X̄·Ȳ

and

R_XY(τ) = X̄·Ȳ = R_YX(τ)
5) If X is a stationary random process and is differentiable with respect to time, the crosscorrelation of the signal and its derivative is given by

R_XẊ(τ) = dR_XX(τ)/dτ

Defining the derivative as a limit:

Ẋ(t) = lim_{ε→0} [X(t+ε) − X(t)] / ε

the crosscorrelation is

R_XẊ(τ) = E[X(t)·Ẋ(t+τ)] = E[ X(t) · lim_{ε→0} (X(t+τ+ε) − X(t+τ))/ε ]

R_XẊ(τ) = lim_{ε→0} E[X(t)·X(t+τ+ε) − X(t)·X(t+τ)] / ε

R_XẊ(τ) = lim_{ε→0} [R_XX(τ+ε) − R_XX(τ)] / ε

R_XẊ(τ) = dR_XX(τ)/dτ

Similarly,

R_ẊẊ(τ) = −d²R_XX(τ)/dτ²

That is all of the properties for the crosscorrelation function.
Example: a delayed signal in noise, cross-correlated with the original signal.

Given X(t) and

Y(t) = a·X(t − td) + N(t)

find

R_XY(τ) = E[X(t)·Y(t+τ)]

Then

R_XY(τ) = E[ X(t) · (a·X(t + τ − td) + N(t+τ)) ]
R_XY(τ) = a·R_XX(τ − td) + R_XN(τ)

For noise independent of the signal,

R_XN(τ) = X̄·N̄ = R_NX(τ)

and therefore

R_XY(τ) = a·R_XX(τ − td) + X̄·N̄

For either X or N a zero-mean process, we would have

R_XY(τ) = a·R_XX(τ − td)
Does this suggest a way to determine the time delay based on what you know of the autocorrelation?

The maximum of the autocorrelation is at zero; therefore, the maximum of the crosscorrelation can reveal the time delay! This applies to any electronic system based on the time delay of a known, transmitted signal pulse. Applications: radar, sonar, digital communications, etc.
Note: in communications, one form of “optimal” detector for a digital communications
symbol is a correlation receiver/detector. One detector “perfectly matched” to each
symbol transmitted is defined. The detector with the maximum output at the correct time
delay is selected as the symbol that was sent!
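The delay-recovery idea can be demonstrated in a few lines. A Python sketch (delay, gain, and noise level are illustrative assumptions, not the textbook's numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Delay estimation sketch: y(t) = a*x(t - td) + n(t).  The lag that maximizes
# the crosscorrelation estimate of x and y recovers the delay td.
n = 20000
x = rng.standard_normal(n)
td = 250                           # delay in samples (an assumed value)
a = 0.5
y = np.zeros(n)
y[td:] = a * x[:n - td]            # attenuated, delayed copy of x
y += 0.3 * rng.standard_normal(n)  # plus independent noise

def xcorr_at(a_sig, b_sig, k):
    m = len(a_sig) - k
    return np.mean(a_sig[:m] * b_sig[k:k + m])

lags = np.arange(0, 1000)
rxy = np.array([xcorr_at(x, y, k) for k in lags])
print(lags[np.argmax(rxy)])        # recovers td = 250
```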
Measurement of the Autocorrelation Function
We love to use time averages for everything. For wide-sense stationary, ergodic random processes, time averages are equivalent to statistical or probability-based values:

ℛ_XX(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t)·x(t+τ) dt = <x(t)·x(t+τ)>
Using this fact, how can we use short-term time averages to generate auto- or cross-correlation
functions?
An estimate of the autocorrelation is defined as:

R̂_XX(τ) = 1/(T − τ) · ∫_0^{T−τ} x(t)·x(t+τ) dt,   for 0 ≤ τ < T
Note that the time average is performed across as much of the signal as is available after the time shift by τ.

In most practical cases, the function is computed from digital samples taken at a fixed time interval Δt. For a lag of k time steps, with N + 1 samples spanning the available time interval, we have:
R̂_XX(kΔt) = 1/((N + 1 − k)·Δt) · Σ_{i=0}^{N−k} x(iΔt)·x(iΔt + kΔt)·Δt

R̂_XX(kΔt) = R̂_XX(k) = 1/(N + 1 − k) · Σ_{i=0}^{N−k} x(i)·x(i + k)
In computing this autocorrelation, the initial weighting term approaches 1 when k=N. At this
point the entire summation consists of one point and is therefore a poor estimate of the
autocorrelation. For useful results, k<<N!
It can be shown (below) that the expected value of this estimate equals the actual autocorrelation; therefore, this is an unbiased estimate.

E[R̂_XX(kΔt)] = E[R̂_XX(k)] = E[ 1/(N + 1 − k) · Σ_{i=0}^{N−k} x(i)·x(i + k) ]

E[R̂_XX(kΔt)] = 1/(N + 1 − k) · Σ_{i=0}^{N−k} E[x(i)·x(i + k)]

E[R̂_XX(kΔt)] = 1/(N + 1 − k) · Σ_{i=0}^{N−k} R_XX(kΔt) = 1/(N + 1 − k) · (N − k + 1)·R_XX(kΔt)

E[R̂_XX(kΔt)] = R_XX(kΔt)
As noted, the validity of the summed autocorrelation lags can and should be questioned as k approaches N. As a result, a biased estimate of the autocorrelation is commonly used. The biased estimate is defined as:

R̃_XX(k) = 1/(N + 1) · Σ_{i=0}^{N−k} x(i)·x(i + k)

Here, a constant weight is used instead of one based on the number of elements summed. This estimate has the property that the estimated autocorrelation decreases as k approaches N. The expected value of this estimate can be shown to be

E[R̃_XX(k)] = (1 − k/(N + 1)) · R_XX(kΔt)
The variance of this estimate can be shown to be (math not done)

Var[R̃_XX(k)] ≈ (2/N) · Σ_{k=−M}^{M} R_XX(kΔt)²

This equation can be used to estimate the number of time samples needed for a useful estimate.
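The difference between the two estimators is easiest to see on a deterministic signal. A Python sketch using x[i] = 1, whose true autocorrelation is 1 at every lag:

```python
import numpy as np

# Deterministic illustration (a sketch): for x[i] = 1 the true autocorrelation
# is 1 at every lag.  The unbiased estimate stays at 1, while the biased
# estimate is scaled down by (N + 1 - k)/(N + 1), matching its expected value.
N = 99                       # samples indexed 0..N
x = np.ones(N + 1)

def r_unbiased(x, k):
    return np.sum(x[:len(x) - k] * x[k:]) / (len(x) - k)

def r_biased(x, k):
    return np.sum(x[:len(x) - k] * x[k:]) / len(x)

k = 40
print(r_unbiased(x, k))      # 1.0
print(r_biased(x, k))        # (100 - 40)/100 = 0.6
```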
Matlab Example
%%
% Figure 6_2
%
clear;
close all;
nsamples = 1000;
n=(0:1:nsamples-1)';
%%
% rect
rect_length = 100;
x = zeros(nsamples,1);
x(100+(1:rect_length))=1;
% Scale to make the maximum unity
Rxx1 = xcorr(x)/rect_length;
Rxx2 = xcorr(x,'unbiased');
nn = -(nsamples-1):1:(nsamples-1);
figure
plot(n,x)
figure
plot(nn,Rxx1,nn,Rxx2)
legend('Raw','Unbiased')
[Figure: the rect pulse and its 'Raw' and 'Unbiased' autocorrelation estimates plotted versus lag, over lags −1000 to 1000]
Example: Excel (Skill 26.1)

Consider the random process generated by the difference equation

x(k) = 0.90·x(k−1) − 0.81·x(k−2) + 3·(2·RND − 1)
x(0) = 1
x(1) = 1

where RND is a uniformly distributed random number between 0 and 1. This is an autoregressive process in that future values depend upon past values of the signal. Note the values that follow if there were no random input: (1, 1, 0.09, −0.729, −0.729, −0.06561, …)

Assume that the signal is sampled 5 times a second (0.2 sec. interval). Create a spreadsheet to generate the signal and compute the autocorrelation of the result.
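A Python version of the spreadsheet recursion (the use of Python's `random` module and the seed are assumptions) makes the noise-free start-up values easy to check:

```python
import random

# Python sketch of the spreadsheet recursion:
# x(k) = 0.90*x(k-1) - 0.81*x(k-2) + 3*(2*RND - 1), x(0) = x(1) = 1.
def ar_sequence(n, noisy=True, seed=0):
    rnd = random.Random(seed).random              # RND in [0, 1)
    x = [1.0, 1.0]                                # x(0) = x(1) = 1
    for k in range(2, n):
        drive = 3.0 * (2.0 * rnd() - 1.0) if noisy else 0.0
        x.append(0.90 * x[k - 1] - 0.81 * x[k - 2] + drive)
    return x

# With the random input turned off, the start-up values quoted above appear:
print(ar_sequence(6, noisy=False))
# approximately [1.0, 1.0, 0.09, -0.729, -0.729, -0.06561]
```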
Note: the frequency response of the "filter" is (based on ECE 4550 information):

[Figure: magnitude (dB) and phase (degrees) of the filter response versus normalized frequency (×π rad/sample)]
Example: MATLAB Figure 6-3 and 6-4 – What you see when doing a simulation!
%%
% Figure 6_3
%
clear;
close all;
rng(1000); % Replaces rand('seed',1000)
x=10*randn(1,1001);
t1=0:0.001:1;
tlag=(-1:0.001:1);
Rxx=xcorr(x)/(1002);
figure(1)
subplot(2,1,1)
plot(t1,x);
xlabel('time'); ylabel('X');
subplot(2,1,2)
plot(tlag,Rxx);
xlabel('Lag time'); ylabel('Rxx');
[Figure 6-3: white-noise sample function X versus time (top) and its autocorrelation estimate Rxx versus lag time (bottom), showing a sharp peak at zero lag]
%%
% Figure 6_4
%
clear;
close all;
rng(1000); % Replaces rand('seed',1000)
x1=10*randn(1,1001);
h=ones(1,51)/51;
x=conv(h,x1);
x2=x(25:25+1000);
t1=0:0.001:1;
tlag=(-1:0.001:1);
Rx1=xcorr(x1)/(1002);
Rx2=xcorr(x2)/(1002);
figure(1)
subplot(2,1,1)
plot(t1,x1);
xlabel('time'); ylabel('X');
subplot(2,1,2)
plot(tlag,Rx1);
xlabel('Lag time'); ylabel('Rxx');
figure(2)
subplot(2,1,1)
plot(t1,x2);
xlabel('time'); ylabel('X');
subplot(2,1,2)
plot(tlag,Rx2);
xlabel('Lag time'); ylabel('Rxx');
%%
% Variance
%
M=length(Rx1);
V1=(2/M)*sum(Rx1.^2);
V2=(2/M)*sum(Rx2.^2);
S1=sqrt(V1);
S2=sqrt(V2);
fprintf('Data Variance %g\n',V1);
fprintf('Data Standard Deviation %g\n',S1);
fprintf('Filter Variance %g\n',V2);
fprintf('Filter Standard Deviation %g\n',S2);
[Figure 6-4: the unfiltered noise x1 with its autocorrelation estimate, and the moving-average-filtered noise x2 with its broadened, smaller autocorrelation estimate]
Data Variance 22.6494
Data Standard Deviation 4.75913
Filter Variance 0.110392
Filter Standard Deviation 0.332252
Example: cross-correlation (Fig_6_9)

A signal is measured in the presence of Gaussian noise having a bandwidth of 50 Hz and a standard deviation of sqrt(10):

y(t) = x(t) + n(t)

for

x(t) = sqrt(2)·sin(2π·500·t + θ)

where θ is uniformly distributed over [0, 2π].

The corresponding signal-to-noise ratio, SNR (in terms of power), can be computed as

SNR = S_Power / N_Power = (A²/2) / σ_N² = (2/2) / 10 = 0.1

SNR_dB = 10·log10(0.1) = −10 dB
(a) Form the autocorrelation of y(t).
(b) Form the crosscorrelation with a pure sinusoid, LO(t) = sqrt(2)·sin(2π·500·t).
[Figure: signal-plus-noise time record (top), its autocorrelation estimate (middle), and its crosscorrelation with the local oscillator (bottom), plotted over lags −0.5 to 0.5 s]
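The noise rejection of the crosscorrelation can be demonstrated numerically. A Python sketch (sample rate, record length, white noise in place of the 50 Hz band-limited noise, and the seed are assumptions) in which the crosscorrelation with the clean local oscillator recovers a lag-domain sinusoid of amplitude near 1 despite the −10 dB SNR:

```python
import numpy as np

rng = np.random.default_rng(3)

# Crosscorrelating the noisy record with the clean local oscillator recovers
# a sinusoid in the lag variable with amplitude A*B/2 = 1, even though the
# noise power is ten times the signal power (-10 dB SNR).
fs = 10000.0
t = np.arange(0.0, 2.0, 1 / fs)
theta = 1.2
x = np.sqrt(2) * np.sin(2 * np.pi * 500 * t + theta)
y = x + np.sqrt(10) * rng.standard_normal(len(t))  # white-noise stand-in
lo = np.sqrt(2) * np.sin(2 * np.pi * 500 * t)

def xcorr_at(a, b, k):
    n = len(a) - k
    return np.mean(a[:n] * b[k:k + n])

lags = np.arange(0, 200)                 # spans ten carrier periods
r = np.array([xcorr_at(lo, y, k) for k in lags])
print(r.max())                           # close to 1
```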
Correlation Matrices for Sampled Functions
Collect a vector of time samples:

X = [ X(t1), X(t2), …, X(tN) ]^T

Forming the autocorrelation matrix:

R_XX = E[X·X^T] = E of

| X(t1)·X(t1)   X(t1)·X(t2)   …   X(t1)·X(tN) |
| X(t2)·X(t1)   X(t2)·X(t2)   …   X(t2)·X(tN) |
|      …             …        …        …       |
| X(tN)·X(t1)   X(tN)·X(t2)   …   X(tN)·X(tN) |
If the random process is wide-sense stationary, the autocorrelation is related only to the lag times. Assuming that the data consists of consecutive time samples at a constant interval,

t2 = t1 + Δt
tk = t1 + (k − 1)·Δt

Then

        | R_XX(0)          R_XX(−Δt)        …  R_XX(−(N−1)·Δt) |
R_XX =  | R_XX(Δt)         R_XX(0)          …  R_XX(−(N−2)·Δt) |
        |    …                …             …        …          |
        | R_XX((N−1)·Δt)   R_XX((N−2)·Δt)   …  R_XX(0)          |

but since the autocorrelation function is symmetric,

        | R_XX(0)          R_XX(Δt)         …  R_XX((N−1)·Δt) |
R_XX =  | R_XX(Δt)         R_XX(0)          …  R_XX((N−2)·Δt) |
        |    …                …             …        …         |
        | R_XX((N−1)·Δt)   R_XX((N−2)·Δt)   …  R_XX(0)         |
Note that this matrix takes on a banded form along the diagonal, a Toeplitz Matrix. It has
“useful” properties when performing matrix computations!
The covariance matrix can also be described:

C_XX = E[ (X − X̄)·(X − X̄)^T ]

with entries

        | σ1²           ρ12·σ1·σ2   …  ρ1N·σ1·σN |
C_XX =  | ρ21·σ2·σ1     σ2²         …  ρ2N·σ2·σN |
        |    …             …        …      …      |
        | ρN1·σN·σ1     ρN2·σN·σ2   …  σN²        |

For a WSS process, all the variances are identical and the correlation coefficients depend only on the lag interval:

                 | 1         ρ1        …  ρ_{N−1} |
C_XX = σ_XX² ·   | ρ1        1         …  ρ_{N−2} |
                 |  …         …        …     …     |
                 | ρ_{N−1}   ρ_{N−2}   …  1        |
An Example

Let

R_XX(τ) = 10·exp(−|τ|) + 9

Generating a 3 x 3 matrix based on time samples that are "1 unit" apart yields:

        | R_XX(0)     R_XX(Δt)   R_XX(2Δt) |   | 19     12.68  10.35 |
R_XX =  | R_XX(Δt)    R_XX(0)    R_XX(Δt)  | = | 12.68  19     12.68 |
        | R_XX(2Δt)   R_XX(Δt)   R_XX(0)   |   | 10.35  12.68  19    |

The covariance matrix removes the squared mean, C_XX = R_XX − X̄² (with X̄² = 9 subtracted from every entry):

        | 10    3.68  1.35 |        | 1      0.368  0.135 |        | 1      e^−1   e^−2 |
C_XX =  | 3.68  10    3.68 | = 10 · | 0.368  1      0.368 | = 10 · | e^−1   1      e^−1 |
        | 1.35  3.68  10   |        | 0.135  0.368  1     |        | e^−2   e^−1   1    |
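The worked 3 x 3 example can be rebuilt in a few lines; a Python sketch:

```python
import numpy as np

# Rebuilding the worked example: R_XX(tau) = 10*exp(-|tau|) + 9 sampled at
# lags |i - j| gives a symmetric Toeplitz matrix, and subtracting the squared
# mean (9) leaves 10 times the correlation-coefficient matrix.
R = lambda tau: 10.0 * np.exp(-np.abs(tau)) + 9.0

idx = np.arange(3)
lags = np.abs(idx[:, None] - idx[None, :])   # |i - j| in sample units
Rxx = R(lags)                                # Toeplitz autocorrelation matrix
Cxx = Rxx - 9.0                              # covariance: subtract X_bar**2

print(np.round(Rxx, 2))                      # [[19, 12.68, 10.35], ...]
print(np.round(Cxx / 10.0, 3))               # [[1, 0.368, 0.135], ...]
```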
Welcome to concepts that you can discover more about in graduate school …
For an alternate interpretation: if you could measure multiple random processes, computing the autocorrelation matrix would provide the second moment of each process and the correlation of each process with every other process!

X(t) = [ X1(t), X2(t), …, XN(t) ]^T

R_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t)·X(t)^T dt

        | <X1(t)·X1(t)>   <X1(t)·X2(t)>   …   <X1(t)·XN(t)> |
R_XX =  | <X2(t)·X1(t)>   <X2(t)·X2(t)>   …   <X2(t)·XN(t)> |
        |      …               …          …         …        |
        | <XN(t)·X1(t)>   <XN(t)·X2(t)>   …   <XN(t)·XN(t)> |

or, for sampled data,

R_XX = 1/(2N+1) · Σ_{i=−N}^{N} X(i)·X(i)^T

whose (m, n) entry is 1/(2N+1) · Σ_i Xm(i)·Xn(i).
This concept is particularly useful when multiple sensors measure the same physical phenomenon. With multiple sensors, the noise in each sensor is independent, but the signal-of-interest (SOI) is not. The cross terms describe the relative strength of the SOI between sensors, while the diagonal autocorrelation terms give the signal-plus-noise power observed in each sensor.
This concept may also be applied to cross-correlation, except the matrix may no longer be square. Let

X(t) = [ X1(t), X2(t), …, XN(t) ]^T

and

Y(t) = [ Y1(t), Y2(t), …, YM(t) ]^T

The resulting cross-correlation matrix is now N x M:

R_XY = E[X·Y^T] = E of

| X1(t)·Y1(t)   X1(t)·Y2(t)   …   X1(t)·YM(t) |
| X2(t)·Y1(t)   X2(t)·Y2(t)   …   X2(t)·YM(t) |
|      …             …        …        …       |
| XN(t)·Y1(t)   XN(t)·Y2(t)   …   XN(t)·YM(t) |
As an example, imagine that there are N systems, each consisting of noise plus a signal at a unique frequency. If the signals are cross-correlated with a reference signal at a desired frequency, you would expect only the element containing that frequency to be non-zero. Let

       | A·exp(j2πf1·t + jθ1) + N1 |
X(t) = | A·exp(j2πf2·t + jθ2) + N2 |
       | A·exp(j2πf3·t + jθ3) + N3 |

and

Y(t) = exp(j2πfd·t)

Then

R_XY = 1/(2N+1) · Σ_{i=−N}^{N} X(i)·conj(Y(i))

                               | A·exp(j2π(f1 − fd)·i/fs + jθ1) + N1' |
R_XY = 1/(2N+1) · Σ_{i=−N}^{N} | A·exp(j2π(f2 − fd)·i/fs + jθ2) + N2' |
                               | A·exp(j2π(f3 − fd)·i/fs + jθ3) + N3' |

Where the frequency "cancels," a value will remain; where it doesn't, the element should sum to zero in time (assuming the noise is zero mean).
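The frequency-sorting behavior can be demonstrated numerically. A Python sketch (frequencies, sample rate, noise level, and record length are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three complex tones at different frequencies are crosscorrelated against the
# reference exp(j*2*pi*fd*t).  Only the channel whose frequency matches fd
# keeps a large average; the others mix to non-zero difference frequencies
# and average toward zero.
fs = 1000.0
i = np.arange(2000)
freqs = [50.0, 120.0, 210.0]
A = 1.0
noise = lambda: 0.2 * (rng.standard_normal(len(i))
                       + 1j * rng.standard_normal(len(i)))
X = np.stack([A * np.exp(1j * (2 * np.pi * f * i / fs + 0.3)) + noise()
              for f in freqs])
fd = 120.0
Y = np.exp(1j * 2 * np.pi * fd * i / fs)

Rxy = (X * np.conj(Y)).mean(axis=1)   # one averaged product per channel
print(np.abs(Rxy))                    # large only for the matching channel
```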
As another alternative, if the same signal arrives with a different delay at each of multiple antennas, this turns into (notice that the phase and frequency for X are now the same for all signals)

                               | A·exp(j2π(f1 − fd)·i/fs − j2πf1·τ1 + jθ1) + N1' |
R_XY = 1/(2N+1) · Σ_{i=−N}^{N} | A·exp(j2π(f1 − fd)·i/fs − j2πf1·τ2 + jθ1) + N2' |
                               | A·exp(j2π(f1 − fd)·i/fs − j2πf1·τ3 + jθ1) + N3' |

If you "tune" correctly (fd = f1), you get

                               | A·exp(−j2πf1·τ1 + jθ1) + N1' |
R_XY = 1/(2N+1) · Σ_{i=−N}^{N} | A·exp(−j2πf1·τ2 + jθ1) + N2' |
                               | A·exp(−j2πf1·τ3 + jθ1) + N3' |

and the relative phases between the antenna elements are related to the differential arrival times between each antenna and the reference. What would happen if you could change the phase of each antenna input to perfectly align the signals, and then sum them together?

This is the concept of beamforming as applied in "smart antenna" systems.
More Examples (MATLAB)

Generation and correlation of a BPSK signal: is the autocorrelation really a triangle?