Exam #3 Review

Chapter 4. Expectation and Moments
  Mean, Moments, Variance, Expected Value Operator
Chapter 5. Random Processes
  Random Processes, Wide Sense Stationary
Chapter 8. Random Sequences
  8.1 Basic Concepts 442
      Infinite-length Bernoulli Trials 447
      Continuity of Probability Measure 452
      Statistical Specification of a Random Sequence 454
  8.2 Basic Principles of Discrete-Time Linear Systems 471
  8.3 Random Sequences and Linear Systems 477
      WSS Random Sequences 486
      Power Spectral Density 489
      Interpretation of the psd 490
  8.4 Synthesis of Random Sequences and Discrete-Time Simulation 493
      Decimation 496
      Interpolation 497
  8.5 Markov Random Sequences 500
  8.6 Vector Random Sequences and State Equations 511
  8.7 Convergence of Random Sequences 513
  8.8 Laws of Large Numbers 521
Chapter 9. Random Processes
  9.1 Basic Definitions 544
  9.2 Some Important Random Processes 548
  9.3 Continuous-Time Linear Systems with Random Inputs 572
      White Noise 577
  9.4 Some Useful Classifications of Random Processes 578
      Stationarity 579
  9.5 Wide-Sense Stationary Processes and LSI Systems 581
      Wide-Sense Stationary Case 582
      Power Spectral Density 584
      An Interpretation of the Power Spectral Density 586
      More on White Noise 590
      Stationary Processes and Differential Equations 596
  9.6 Periodic and Cyclostationary Processes 600
  9.7 Vector Processes and State Equations 606
      State Equations 608
Homework problems
B.J. Bazuin, Spring 2016
1 of 63
ECE 3800
Previous homework problem solutions as examples – Dr. Severance’s Skill Examples
Skills #6
Skills #7
This exam is likely to be four problems similar in nature to the 2015 exam:
1. You will be given a random sequence or process. Determine the autocorrelation.
Determine the power spectral density. Perform a cross-correlation.
2. Filtering of a random sequence or random process. Determine the input autocorrelation.
Determine the output autocorrelation. Determine the input power spectral density.
Determine the output power spectral density.
3. Given a power spectral density, determine the random process or sequence mean, 2nd
moment (total power), variance. Determine the power in a frequency band.
And now for a quick chapter review … the important information without the rest!
Autocorrelation Function Basics
The Autocorrelation Function
For a sample function defined by samples in time of a random process, how alike are the
different samples?
Define: $X_1 = X(t_1)$ and $X_2 = X(t_2)$
The autocorrelation is defined as:
$$R_{XX}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 \; x_1 \, x_2 \, f(x_1, x_2)$$
The above function is valid for all processes, stationary and non-stationary.
For WSS processes:
$$R_{XX}(t_1, t_2) = E[X(t) \, X(t+\tau)] = R_{XX}(\tau)$$
If the process is ergodic, the time average is equivalent to the probabilistic expectation, or
$$\phi_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt = \overline{x(t) \, x(t+\tau)}$$
and
$$\phi_{XX}(\tau) = R_{XX}(\tau)$$
Define: $x_k = X[k]$ and $x_l = X[l]$
$$R_{KK}[k, l] = E\{X[k] \, X^*[l]\} = \sum_{x_k} \sum_{x_l} x_k \, x_l^* \, \mathrm{pmf}_X(x_k, x_l; k, l)$$
For WSS:
$$R_{KK}[k] = E\{X[k] \, X^*[0]\} = E\{X[k+n] \, X^*[0+n]\} = \sum_{x_k} \sum_{x_0} x_k \, x_0^* \, \mathrm{pmf}_X(x_k, x_0; k, 0)$$
If the process is ergodic, the sample average is equivalent to the probabilistic expectation, or
$$\phi_{KK}[k] = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} X[n+k] \, X^*[n]$$
As a note for things you've been computing, the "zeroth lag of the autocorrelation" is
$$R_{XX}(t_1, t_1) = R_{XX}(0) = E[X_1 X_1] = E[X_1^2] = \int_{-\infty}^{\infty} x_1^2 \, f(x_1) \, dx_1 = \sigma_X^2 + \bar{X}^2$$
$$\phi_{XX}(0) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2 \, dt = \overline{x(t)^2}$$
Properties of Autocorrelation Functions
1) $R_{XX}(0) = E[X^2] = \overline{X^2}$
The mean squared value of the random process can be obtained by observing the zeroth lag of
the autocorrelation function.
2) $R_{XX}(\tau) = R_{XX}(-\tau)$ or $R_{XX}[k] = R_{XX}[-k]$
The autocorrelation function is an even function in time. Only positive (or negative) lags need to
be computed for an ergodic WSS random process.
3) $|R_{XX}(\tau)| \le R_{XX}(0)$ or $|R_{XX}[k]| \le R_{XX}[0]$
The autocorrelation function is a maximum at 0. For periodic functions, other values may equal
the zeroth lag, but never be larger.
4) If X has a DC component, then Rxx has a constant factor.
$$X(t) = \bar{X} + N(t)$$
$$R_{XX}(\tau) = \bar{X}^2 + R_{NN}(\tau)$$
Note that the mean value can be computed from the autocorrelation function constants!
5) If X has a periodic component, then Rxx will also have a periodic component of the same
period.
Think of:
$$X(t) = A \cos(\omega t + \theta), \quad 0 \le \theta \le 2\pi$$
where A and $\omega$ are known constants and $\theta$ is a uniform random variable.
$$R_{XX}(\tau) = E[X(t) \, X(t+\tau)] = \frac{A^2}{2} \cos(\omega \tau)$$
5b) For signals that are the sum of independent random processes, the autocorrelation is the
sum of the individual autocorrelation functions plus a cross term involving the means.
$$W(t) = X(t) + Y(t)$$
$$R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2 \bar{X} \bar{Y}$$
For non-zero mean functions (let w, x, y be zero mean and W, X, Y have a mean),
$$R_{WW}(\tau) = R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + \bar{X}^2 + R_{yy}(\tau) + \bar{Y}^2 + 2 \bar{X} \bar{Y}$$
$$R_{WW}(\tau) = R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + R_{yy}(\tau) + (\bar{X} + \bar{Y})^2$$
Then we have
$$\bar{W}^2 = (\bar{X} + \bar{Y})^2 \quad \text{and} \quad R_{ww}(\tau) = R_{xx}(\tau) + R_{yy}(\tau)$$
6) If X is ergodic and zero mean and has no periodic component, then we expect
$$\lim_{\tau \to \infty} R_{XX}(\tau) = 0$$
7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying the
permissible shapes is in terms of the Fourier transform of the autocorrelation function. That is, if
$$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau) \exp(-j\omega\tau) \, d\tau$$
then the restriction states that
$$S_{XX}(\omega) \ge 0 \text{ for all } \omega$$
Additional concept:
$$X(t) = a \cdot N(t)$$
$$R_{XX}(\tau) = a^2 \, E[N(t) \, N(t+\tau)] = a^2 \, R_{NN}(\tau)$$
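Property 5 can be sanity-checked numerically. The following is a minimal sketch (assuming Python with NumPy; the values of A, w0, t, and tau are arbitrary illustrative choices, not from the notes): it averages the product X(t)·X(t+τ) over many realizations of the uniform phase and compares against (A²/2)·cos(ω τ), independent of t.

```python
import numpy as np

rng = np.random.default_rng(5)
A, w0 = 2.0, 3.0                  # illustrative amplitude and frequency
t, tau = 0.7, 0.4                 # arbitrary observation time and lag
theta = rng.uniform(0, 2 * np.pi, 300_000)   # uniform random phase

# Ensemble average of X(t) X(t+tau) over the random phase
products = (A * np.cos(w0 * t + theta)) * (A * np.cos(w0 * (t + tau) + theta))
r_est = products.mean()

# Property 5: R_XX(tau) = (A^2 / 2) cos(w0 tau)
r_theory = (A ** 2 / 2) * np.cos(w0 * tau)
```

Repeating with a different t should give the same estimate, which is the WSS claim in property 5.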
The Crosscorrelation Function
For two sample functions defined by samples in time of two random processes, how alike are
the different samples?
Define: $X_1 = X(t_1)$ and $Y_2 = Y(t_2)$
The cross-correlation is defined as:
$$R_{XY}(t_1, t_2) = E[X_1 Y_2] = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dy_2 \; x_1 \, y_2 \, f(x_1, y_2)$$
$$R_{YX}(t_1, t_2) = E[Y_1 X_2] = \int_{-\infty}^{\infty} dy_1 \int_{-\infty}^{\infty} dx_2 \; y_1 \, x_2 \, f(y_1, x_2)$$
The above functions are valid for all processes, jointly stationary and non-stationary.
For jointly WSS processes:
$$R_{XY}(t_1, t_2) = E[X(t) \, Y(t+\tau)] = R_{XY}(\tau)$$
$$R_{YX}(t_1, t_2) = E[Y(t) \, X(t+\tau)] = R_{YX}(\tau)$$
Note: the order of the subscripts is important for cross-correlation!
If the processes are jointly ergodic, the time average is equivalent to the probabilistic
expectation, or
$$\phi_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, y(t+\tau) \, dt = \overline{x(t) \, y(t+\tau)}$$
$$\phi_{YX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y(t) \, x(t+\tau) \, dt = \overline{y(t) \, x(t+\tau)}$$
and
$$\phi_{XY}(\tau) = R_{XY}(\tau), \quad \phi_{YX}(\tau) = R_{YX}(\tau)$$
Properties of Crosscorrelation Functions
1) The properties of the zeroth lag have no particular significance and do not represent
mean-square values. It is true that the "ordered" crosscorrelations must be equal at 0:
$$R_{XY}(0) = R_{YX}(0) \quad \text{or} \quad \phi_{XY}(0) = \phi_{YX}(0)$$
2) Crosscorrelation functions are not generally even functions. However, there is an
antisymmetry to the ordered crosscorrelations:
$$R_{XY}(\tau) = R_{YX}(-\tau)$$
For
$$\phi_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, y(t+\tau) \, dt = \overline{x(t) \, y(t+\tau)}$$
substitute $\lambda = t + \tau$:
$$\phi_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T+\tau}^{T+\tau} x(\lambda - \tau) \, y(\lambda) \, d\lambda = \overline{y(\lambda) \, x(\lambda - \tau)} = \phi_{YX}(-\tau)$$
3) The crosscorrelation does not necessarily have its maximum at the zeroth lag. This makes
sense if you are correlating a signal with a time delayed version of itself. The crosscorrelation
should be a maximum when the lag equals the time delay!
It can be shown, however, that
$$|R_{XY}(\tau)| \le \sqrt{R_{XX}(0) \, R_{YY}(0)}$$
As a note, the crosscorrelation may not achieve the maximum anywhere.
4) If X and Y are statistically independent, then the ordering is not important:
$$R_{XY}(\tau) = E[X(t) \, Y(t+\tau)] = E[X(t)] \cdot E[Y(t+\tau)] = \bar{X} \bar{Y}$$
and
$$R_{XY}(\tau) = \bar{X} \bar{Y} = R_{YX}(\tau)$$
5) If X is a stationary random process and is differentiable with respect to time, the
crosscorrelation of the signal and its derivative is given by
$$R_{X\dot{X}}(\tau) = \frac{d R_{XX}(\tau)}{d\tau}$$
Defining the derivative as a limit:
$$\dot{X}(t) = \lim_{\epsilon \to 0} \frac{X(t+\epsilon) - X(t)}{\epsilon}$$
and the crosscorrelation
$$R_{X\dot{X}}(\tau) = E[X(t) \, \dot{X}(t+\tau)] = E\left[X(t) \cdot \lim_{\epsilon \to 0} \frac{X(t+\tau+\epsilon) - X(t+\tau)}{\epsilon}\right]$$
$$R_{X\dot{X}}(\tau) = \lim_{\epsilon \to 0} \frac{E[X(t) \, X(t+\tau+\epsilon)] - E[X(t) \, X(t+\tau)]}{\epsilon}$$
$$R_{X\dot{X}}(\tau) = \lim_{\epsilon \to 0} \frac{R_{XX}(\tau+\epsilon) - R_{XX}(\tau)}{\epsilon} = \frac{d R_{XX}(\tau)}{d\tau}$$
Similarly,
$$R_{\dot{X}\dot{X}}(\tau) = -\frac{d^2 R_{XX}(\tau)}{d\tau^2}$$
Measurement of the Autocorrelation Function (Ergodic, WSS)
We love to use time averages for everything. For wide-sense stationary, ergodic random
processes, time averages are equivalent to statistical or probability based values.
$$\phi_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt = \overline{x(t) \, x(t+\tau)}$$
Using this fact, how can we use short-term time averages to generate auto- or cross-correlation
functions?
An estimate of the autocorrelation is defined as:
$$\hat{R}_{XX}(\tau) = \frac{1}{T - \tau} \int_{0}^{T - \tau} x(t) \, x(t+\tau) \, dt$$
Note that the time average is performed across as much of the signal as is available after the
time shift by tau.
For tau based on the available time step, $k \Delta t$, with $N$ defining the available time interval, we have:
$$\hat{R}_{XX}(k \Delta t) = \frac{1}{(N+1)\Delta t - k \Delta t} \sum_{i=0}^{N-k} x(i \Delta t) \, x(i \Delta t + k \Delta t) \cdot \Delta t$$
$$\hat{R}_{XX}(k \Delta t) = \hat{R}_{XX}[k] = \frac{1}{N+1-k} \sum_{i=0}^{N-k} x[i] \, x[i+k]$$
In computing this autocorrelation, the initial weighting term approaches 1 when k = N. At this
point the entire summation consists of one point and is therefore a poor estimate of the
autocorrelation. For useful results, k << N!
As noted, the validity of each of the summed autocorrelation lags can and should be brought into
question as k approaches N. As a result, a biased estimate of the autocorrelation is commonly
used. The biased estimate is defined as:
$$\tilde{R}_{XX}[k] = \frac{1}{N+1} \sum_{i=0}^{N-k} x[i] \, x[i+k]$$
Here, a constant weight instead of one based on the number of elements summed is used. This
estimate has the property that the estimated autocorrelation decreases as k approaches N.
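The two estimators above can be sketched directly (assuming Python with NumPy; the white-noise test signal is an illustrative choice). For unit-variance white noise, both estimates of lag 0 should approach 1 and all other lags should approach 0.

```python
import numpy as np

def autocorr_estimates(x, max_lag):
    """Unbiased and biased autocorrelation estimates of a real sequence x.

    R_hat[k]   = 1/(N+1-k) * sum_{i=0}^{N-k} x[i] x[i+k]   (unbiased)
    R_tilde[k] = 1/(N+1)   * sum_{i=0}^{N-k} x[i] x[i+k]   (biased)
    where N = len(x) - 1, matching the summation limits in the text.
    """
    N = len(x) - 1
    r_unbiased = np.empty(max_lag + 1)
    r_biased = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        s = np.dot(x[:N - k + 1], x[k:])   # sum of lagged products
        r_unbiased[k] = s / (N + 1 - k)
        r_biased[k] = s / (N + 1)
    return r_unbiased, r_biased

# White noise with unit variance: theory gives R_XX[k] = delta[k]
rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)
ru, rb = autocorr_estimates(x, 5)
```

For k << N the two estimates are nearly identical; the biased one simply tapers toward zero as k approaches N.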
Relation of Spectral Density to the Autocorrelation Function
For WSS random processes, the autocorrelation function is time based and, for ergodic
processes, describes all sample functions in the ensemble! In these cases the Wiener-Khinchine
relation is valid, which allows us to perform the following.
$$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} E[X(t) \, X(t+\tau)] \exp(-j\omega\tau) \, d\tau$$
For an ergodic process, we can use time-based processing to arrive at an equivalent result. For
$x(\tau) \leftrightarrow X(\omega)$, transforming
$$\phi_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt = \overline{x(t) \, x(t+\tau)}$$
over $\tau$ gives
$$\mathcal{F}\{\phi_{XX}(\tau)\} = X(\omega) \cdot \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, \exp(+j\omega t) \, dt \propto X(\omega) \, X(-\omega) = |X(\omega)|^2$$
We can define a power spectral density for the ensemble as:
$$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} R_{XX}(\tau) \exp(-j\omega\tau) \, d\tau$$
Based on this definition, we also have
$$R_{XX}(\tau) = \mathcal{F}^{-1}\{S_{XX}(\omega)\} = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega) \exp(j\omega\tau) \, d\omega$$
Properties of the Power Spectral Density
The power spectral density as a function is always
• real,
• positive,
• and an even function in $\omega$.
As an even function, a rational PSD contains only even powers of $\omega$.
Finite property in frequency: the power spectral density must also approach zero as $\omega$
approaches infinity.
Relation of Spectral Density to the Autocorrelation Function
The power spectral density as a function is always
• real,
• positive,
• and an even function in $\omega$ (or $f$).
You can convert between the domains using:
The Fourier Transform in $\omega$
$$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau) \exp(-j\omega\tau) \, d\tau$$
$$R_{XX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega) \exp(j\omega\tau) \, d\omega$$
The Fourier Transform in $f$
$$S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau) \exp(-j 2\pi f \tau) \, d\tau$$
$$R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) \exp(j 2\pi f \tau) \, df$$
The 2-sided Laplace Transform
$$S_{XX}(s) = \int_{-\infty}^{\infty} R_{XX}(\tau) \exp(-s\tau) \, d\tau$$
$$R_{XX}(\tau) = \frac{1}{j 2\pi} \int_{-j\infty}^{j\infty} S_{XX}(s) \exp(s\tau) \, ds$$
Deriving the Mean-Square Values from the Power Spectral Density
The mean squared value of a random process is equal to the 0th lag of the autocorrelation:
$$E[X^2] = R_{XX}(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega) \exp(j\omega \cdot 0) \, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XX}(\omega) \, d\omega$$
$$E[X^2] = R_{XX}(0) = \int_{-\infty}^{\infty} S_{XX}(f) \exp(j 2\pi f \cdot 0) \, df = \int_{-\infty}^{\infty} S_{XX}(f) \, df$$
Therefore, to find the second moment, integrate the PSD over all frequencies.
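As a numerical illustration (a hypothetical example, not one from the notes; Python with NumPy assumed): for $R_{XX}(\tau) = e^{-|\tau|}$, the PSD in $f$ is $S_{XX}(f) = 2/(1 + (2\pi f)^2)$, and integrating it over all frequencies should return $E[X^2] = R_{XX}(0) = 1$.

```python
import numpy as np

# Hypothetical example: R_XX(tau) = exp(-|tau|)  <->  S_XX(f) = 2/(1 + (2 pi f)^2)
# The second moment is E[X^2] = R_XX(0) = 1, which must equal integral of S_XX(f) df.
f = np.linspace(-200.0, 200.0, 2_000_001)
S = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)

# Trapezoidal integration over the (truncated) frequency axis
second_moment = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f))
```

The small discrepancy from 1 is the tail of the PSD beyond the truncated integration limits.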
The Cross-Spectral Density
The Fourier Transform in $\omega$
$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau) \exp(-j\omega\tau) \, d\tau \quad \text{and} \quad S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau) \exp(-j\omega\tau) \, d\tau$$
$$R_{XY}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{XY}(\omega) \exp(j\omega\tau) \, d\omega \quad \text{and} \quad R_{YX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{YX}(\omega) \exp(j\omega\tau) \, d\omega$$
Properties of the functions
$$S_{XY}(\omega) = \mathrm{conj}\{S_{YX}(\omega)\}$$
Since the cross-correlation is real,
• the real portion of the spectrum is even
• the imaginary portion of the spectrum is odd
Generic Example of a Discrete Spectral Density
X t   A  B  sin2  f1  t  1   C  cos2  f 2  t   2 
where the phase angles are uniformly distributed R.V from 0 to 2π.
R XX    E X t X t   
 X t   A  B  sin 2  f 1  t  1   C  cos2  f 2  t   2 

 E

  X t   A  B  sin 2  f 1  t     1   C  cos2  f 2  t      2 
 A 2  AB  sin 2  f1  t  1   AB  sin 2  f 1  t     1   
 2

 B  sin 2  f 1  t  1   sin 2  f1  t     1 

 AC  cos2  f  t     AC  cos2  f  t        
2
2
2
2

R XX    E 
2
C  cos2  f 2  t   2   cos2  f 2  t      2  



 BC  sin 2  f1  t  1   cos2  f 2  t      2  

 BC  cos2  f 2  t   2   sin 2  f 1  t      1 

 A2 

 2

B  sin 2  f1  t   1   sin 2  f 1  t      1  

R XX    E 
C 2  cos2  f 2  t   2   cos2  f 2  t      2   


 BC  sin 2  f 1  t  1  2  f 2  t      2 

With practice, we can see that
1
1

R XX    A 2  B 2  E   cos2  f1    cos2  f1 2t     21 
2
2

1
1

 C 2  E   cos2  f 2    cos2  f 2 2t     2 2 
2
2

which lead to
B2
C2
R XX    A 
 cos2  f1    
 cos2  f 2   
2
2
2
Forming the PSD
And then taking the Fourier transform
S XX  f   A 2    f  
2
B2  1
1
1
 C 1

     f  f 1      f  f 1  
     f  f 2      f  f 2 
2 2
2
2
 2 2

S XX  f   A 2    f  
B.J. Bazuin, Spring 2016
B2
C2
   f  f1     f  f1  
   f  f 2     f  f 2 
4
4
12 of 63
ECE 3800
We also know from the before
1
X 
2
2


 S w  dw   S  f   df
XX

XX

Therefore, the 2nd moment can be immediately computed as

X2 
 2

B2
C2









A


f



f

f


f

f

   f  f 2     f  f 2   df
1
1

4
4

X 2  A2 
B2
C2
B2 C 2
 2 
 2  A 2 

4
4
2
2
We can also see that
X  E A  B  sin 2  f 1  t   1   C  cos2  f 2  t   2   A
So,
 2  A2 
B.J. Bazuin, Spring 2016
B2 C 2
B2 C 2

 A2 

2
2
2
2
13 of 63
ECE 3800
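The moment results of this example can be cross-checked by simulation (a sketch assuming Python with NumPy; the particular values of A, B, C, f1, f2, and t are illustrative only). With A = 2, B = 3, C = 1, the theory above gives a mean of A = 2 and a second moment of A² + B²/2 + C²/2 = 9.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = 2.0, 3.0, 1.0          # illustrative amplitudes
f1, f2, t = 5.0, 11.0, 0.3       # illustrative frequencies and observation time

# Draw independent uniform phases and evaluate X(t) over the ensemble
th1 = rng.uniform(0, 2 * np.pi, 200_000)
th2 = rng.uniform(0, 2 * np.pi, 200_000)
X = A + B * np.sin(2 * np.pi * f1 * t + th1) + C * np.cos(2 * np.pi * f2 * t + th2)

mean_est = X.mean()                   # theory: A = 2
second_moment_est = np.mean(X ** 2)   # theory: A^2 + B^2/2 + C^2/2 = 9
```

Because the phases are uniform, the estimates are independent of the chosen t, consistent with the WSS result.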
Chapter 8 Random Sequences
8.1 Basic Concepts
Random (Stochastic) Sequence
Definition 8.1-1. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\zeta \in \Omega$. Let $X[n, \zeta]$ be a mapping of
the sample space $\Omega$ into a space of complex-valued sequences on some index set $Z$. If, for each
fixed integer $n \in Z$, $X[n, \zeta]$ is a random variable, then $X[n, \zeta]$ is a random (stochastic)
sequence. The index set $Z$ is all integers, $-\infty < n < \infty$, padded with zeros if necessary.
Example sets of random sequences:
Figure 8.1-1 Illustration of the concept of random sequence X(n,ζ), where the ζ domain
(i.e., the sample space Ω) consists of just ten values. (Samples connected only for plot.)
The sequences can be thought of as "realizations" of the random sequence or sample sequences.
The observed sequence is the realization of individual random variables in time.
• Once the realization exists, it becomes statistical data related to one instantiation of the
Random Sequence.
• Prior to collecting a realization, the Random Sequence can be defined probabilistically.
Statistical Specification of a Random Sequence
In general, we are looking to develop properties for random processes where:
(1) The statistical specification for the random sequence matches the probabilistic (or axiomatic)
specification for the random variables used to generate the sequence.
(2) We will be interested in stationary sequences where the statistics do not change in time. We
will be defining a wide-sense stationary random process definition where only the mean and the
variance need to be constant in time.
A random sequence X[n] is said to be statistically specified by knowing its Nth-order CDFs for
all integers N >= 1. That states that we know
$$F_X(x_n, x_{n+1}, \ldots, x_{n+N-1}; n, n+1, \ldots, n+N-1) = P[X[n] \le x_n, X[n+1] \le x_{n+1}, \ldots, X[n+N-1] \le x_{n+N-1}]$$
If we specify all these finite-order joint distributions at all finite times, using continuity of the
probability measure, we can calculate the probabilities of events involving an infinite number of
random variables via limiting operations involving the finite-order CDFs.
Consistency can be guaranteed by construction … constructing models of stochastic sequences
and processes.
Moments play an important role and, for Ergodic Sequences, they can be estimated from a single
sample sequence of the infinite number that may be possible.
Therefore,
$$\mu_X[n] = E\{X[n]\} = \int_{-\infty}^{\infty} x \, f_X(x; n) \, dx = \int_{-\infty}^{\infty} x_n \, f_X(x_n) \, dx_n$$
and for discrete-valued random sequences
$$\mu_X[n] = E\{X[n]\} = \sum_{k=-\infty}^{\infty} x_k \, P[X[n] = x_k]$$
The Autocorrelation Function
The expected value of a random sequence evaluated at offset times can be determined:
$$R_{KK}[k, l] = E\{X[k] \, X^*[l]\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_k \, x_l^* \, f_X(x_k, x_l) \, dx_k \, dx_l$$
For sequences of finite average power, $E[|X[k]|^2] < \infty$, the correlation function will exist.
We can also describe the "centered" autocorrelation sequence as the autocovariance:
$$K_{KK}[k, l] = E\{(X[k] - \mu_X[k])(X[l] - \mu_X[l])^*\}$$
Note that
$$K_{KK}[k, l] = E\{X[k] X^*[l] - X[k] \mu_X^*[l] - \mu_X[k] X^*[l] + \mu_X[k] \mu_X^*[l]\}$$
$$K_{KK}[k, l] = E\{X[k] X^*[l]\} - \mu_X[k] \, \mu_X^*[l]$$
$$K_{KK}[k, l] = R_{KK}[k, l] - \mu_X[k] \, \mu_X^*[l]$$
Basic properties of the functions:
Hermitian symmetry: $R_{KK}[k, l] = R_{KK}^*[l, k]$
Hermitian symmetry: $K_{KK}[k, l] = K_{KK}^*[l, k]$
Deriving other functions:
$$R_{KK}[k, k] = E[|X[k]|^2] \quad \text{and} \quad K_{KK}[k, k] = \sigma_X^2[k]$$
Example 8.1-1 & 10: functions consisting of a R.V. and deterministic sequences
Let
$$X[n, \zeta] = X(\zeta) \cdot f[n]$$
where X is a random variable and f is a deterministic function in sample time n.
Note then,
$$E\{X[n, \zeta]\} = E\{X(\zeta)\} \cdot f[n] = \mu_X \cdot f[n]$$
The autocorrelation function becomes
$$R_{KK}[k, l] = E\{X[k, \zeta] \, X^*[l, \zeta]\} = \int_{-\infty}^{\infty} (X(\zeta) f[k]) \, (X(\zeta) f[l])^* \, f_X(x) \, dx$$
$$R_{KK}[k, l] = f[k] \, f^*[l] \int_{-\infty}^{\infty} X(\zeta) \, X^*(\zeta) \, f_X(x) \, dx = f[k] \, f^*[l] \, E\{X(\zeta) \, X^*(\zeta)\}$$
If X is a real R.V.,
$$R_{KK}[k, l] = f[k] \, f^*[l] \, (\sigma_X^2 + \mu_X^2)$$
Similarly,
$$K_{XX}[k, l] = f[k] \, f^*[l] \, E\{(X(\zeta) - \mu_X)(X(\zeta) - \mu_X)^*\} = f[k] \, f^*[l] \, \sigma_X^2$$
Example 8.1-11 Waiting times in a line, creating a random sequence
Consider the random sequence of IID "exponential random variable" waiting times in a line.
Assume that each of the waiting times per individual τ(k) is based on the exponential:
$$f(t; n) = f(t) = \begin{cases} 0, & t < 0 \\ \lambda \exp(-\lambda t), & t \ge 0 \end{cases}$$
The waiting time is then described as
$$T(n) = \sum_{k=1}^{n} \tau(k)$$
where
$$T(1) = \tau(1), \quad T(2) = \tau(1) + \tau(2), \; \ldots, \; T(n) = \tau(1) + \tau(2) + \cdots + \tau(n)$$
T(n) is the random sequence!
This calls for a summation of random variables, where the new pdf for each new sum is the
convolution of the exponential pdf with the previous pdf, or
$$f(t; 2) = f(t) * f(t)$$
$$f(t; 3) = f(t; 2) * f(t) = f(t) * f(t) * f(t)$$
Exam #1 derived the first convolution:
$$f(t; 2) = \int_0^t \lambda \exp(-\lambda \chi) \cdot \lambda \exp(-\lambda (t - \chi)) \, d\chi = \lambda^2 \exp(-\lambda t) \int_0^t 1 \, d\chi = \lambda^2 \, t \, \exp(-\lambda t)$$
Repeating:
$$f(t; 3) = \int_0^t \lambda^2 \chi \exp(-\lambda \chi) \cdot \lambda \exp(-\lambda (t - \chi)) \, d\chi = \lambda^3 \exp(-\lambda t) \int_0^t \chi \, d\chi = \lambda^3 \, \frac{t^2}{2} \, \exp(-\lambda t)$$
If you see the pattern, we can jump to the nth summation, where
$$f(t; n) = \lambda^n \, \frac{t^{n-1}}{(n-1)!} \, \exp(-\lambda t) = \frac{(\lambda t)^{n-1}}{(n-1)!} \, \lambda \exp(-\lambda t)$$
This is called the Erlang probability density function.
It is used to determine waiting times in lines and software queues … how long until your internet
request can be processed!
The mean and variance of the Erlang pdf used to define the random sequence T(n) are
$$\mu_T(n) = \frac{n}{\lambda} \quad \text{and} \quad \sigma_T^2(n) = \mathrm{Var}(T(n)) = \frac{n}{\lambda^2}$$
Note that for every element of the sequence both the mean and variance are dependent upon the
sample number (definitely not stationary or even WSS).
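The Erlang mean and variance can be checked by summing simulated exponential waiting times (a sketch assuming Python with NumPy; λ and n below are illustrative values, not from the notes).

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0        # illustrative rate parameter
n = 5            # which element of the sequence T(n) to examine

# T(n) = tau(1) + ... + tau(n) with iid Exponential(lam) waiting times
taus = rng.exponential(scale=1.0 / lam, size=(100_000, n))
T_n = taus.sum(axis=1)

mean_est = T_n.mean()    # theory: n/lam = 2.5
var_est = T_n.var()      # theory: n/lam^2 = 1.25
```

Both statistics grow with n, which is the non-stationarity noted above.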
A random sequence based on the Gaussian.
Assume iid Gaussian R.V.s with zero mean and a variance:
$$W[n] \sim N(0, \sigma_W^2)$$
For
$$E\{W[n]\} = \mu_W = 0 \quad \text{and} \quad E\{W[n]^2\} = \sigma_W^2$$
What about the autocorrelation?
$$R_{KK}[k, l] = E\{W[k] \, W^*[l]\} = \begin{cases} E\{W[k]^2\} = \sigma_W^2, & k = l \\ 0, & k \ne l \end{cases}$$
or
$$R_{KK}[k, l] = \sigma_W^2 \, \delta[k - l]$$
or, recognizing a WSS random sequence,
$$R_{KK}[k] = \sigma_W^2 \, \delta[k]$$
A random sequence based on the sum of two Gaussians.
Assume iid Gaussian R.V.s with zero mean and a variance. For
$$X[n] = W[n] + W[n-1]$$
Then,
$$E\{X[n]\} = 2 \mu_W = 0$$
and
$$E\{X[n]^2\} = E\{(W[n] + W[n-1])^2\} = E\{W[n]^2 + 2 W[n] W[n-1] + W[n-1]^2\}$$
$$E\{X[n]^2\} = \sigma_W^2 + 2 E\{W[n]\} E\{W[n-1]\} + \sigma_W^2 = 2 \sigma_W^2$$
Also,
$$R_{KK}[k, l] = E\{(W[k] + W[k-1])(W[l] + W[l-1])^*\}$$
$$R_{KK}[k, l] = E\{W[k] W^*[l] + W[k] W^*[l-1] + W[k-1] W^*[l] + W[k-1] W^*[l-1]\}$$
But then
$$R_{KK}[k, l] = \sigma_W^2 \, \delta[k-l] + \sigma_W^2 \, \delta[k-l+1] + \sigma_W^2 \, \delta[k-1-l] + \sigma_W^2 \, \delta[k-l]$$
$$R_{KK}[k, l] = 2 \sigma_W^2 \, \delta[k-l] + \sigma_W^2 \, \delta[k-l+1] + \sigma_W^2 \, \delta[k-l-1]$$
and, recognizing WSS,
$$R_{KK}[k] = 2 \sigma_W^2 \, \delta[k] + \sigma_W^2 \, \delta[k+1] + \sigma_W^2 \, \delta[k-1]$$
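The three nonzero autocorrelation lags of this example can be verified by simulation (a sketch assuming Python with NumPy; σ_W is an illustrative value).

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_w = 1.5
w = sigma_w * rng.standard_normal(200_001)   # iid N(0, sigma_w^2)
x = w[1:] + w[:-1]                           # X[n] = W[n] + W[n-1]
N = len(x)

def r_hat(x, k):
    """Biased sample autocorrelation at lag k >= 0."""
    return np.dot(x[:N - k], x[k:]) / N

# Theory: R[0] = 2 sigma_w^2 = 4.5, R[1] = sigma_w^2 = 2.25, R[2] = 0
r0, r1, r2 = r_hat(x, 0), r_hat(x, 1), r_hat(x, 2)
```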
Stationary vs. Nonstationary Random Sequences and Processes
The probability density functions for random variables in time have been discussed, but what is
the dependence of the density function on the value of time, t or n, when it is taken?
If all marginal and joint density functions of a process do not depend upon the choice of the time
origin, the process is said to be stationary (that is, it doesn't change with time). All the mean
values and moments are constants and not functions of time!
For nonstationary processes, the probability density functions change based on the time origin or
in time. For these processes, the mean values and moments are functions of time.
In general, we always attempt to deal with stationary processes … or approximate stationarity by
assuming that the process probability distribution, means, and moments do not change
significantly during the period of interest.
The requirement that all marginal and joint density functions be independent of the choice of
time origin is frequently more stringent (tighter) than is necessary for system analysis. A more
relaxed requirement is called stationary in the wide sense: the mean value of any random
variable is independent of the choice of time, t, and the correlation of two random variables
depends only upon the time difference between them. That is,
$$E[X(t)] = \bar{X} \quad \text{and} \quad E[X(t_1) \, X(t_2)] = E[X(0) \, X(t_2 - t_1)] = R_{XX}(\tau) \quad \text{for } \tau = t_2 - t_1$$
You will typically deal with Wide-Sense Stationary Signals.
Stationary Systems Properties
Mean Value
$$\mu_X[n] = E\{X[n]\} = \int_{-\infty}^{\infty} x \, f_X(x; n) \, dx = \int_{-\infty}^{\infty} x \, f_X(x; 0) \, dx = \mu_X[0]$$
The mean value is not dependent upon the sample in time.
Autocorrelation
$$R_{KK}[k, l] = E\{X[k] \, X^*[l]\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_k \, x_l^* \, f_X(x_k, x_l; k, l) \, dx_k \, dx_l$$
$$= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_{k+n} \, x_{l+n}^* \, f_X(x_{k+n}, x_{l+n}; k+n, l+n) \, dx_{k+n} \, dx_{l+n} = E\{X[k+n] \, X^*[l+n]\} = R_{KK}[k+n, l+n]$$
And in particular,
$$R_{KK}[k, l] = R_{KK}[k-l, 0] = R_{KK}[k-l]$$
Autocovariance
$$K_{KK}[k, l] = E\{(X[k] - \mu_X[k])(X[l] - \mu_X[l])^*\} = E\{(X[k] - \mu_X)(X[l] - \mu_X)^*\}$$
$$= E\{(X[k+n] - \mu_X)(X[l+n] - \mu_X)^*\} = K_{KK}[k+n, l+n]$$
And in particular,
$$K_{KK}[k, l] = K_{KK}[k-l, 0] = K_{KK}[k-l]$$
The autocorrelation and autocovariance are functions of the time difference and not the absolute
time.
8.2 Basic Principles of Discrete-Time Linear Systems
We get to do convolutions some more … in the discrete time domain!
Note: if you are in ECE 3710, this should be normal; otherwise, ECE 3100 probably talked about
linear systems being a convolution.
For a "causal" discrete finite impulse response linear system we will have
$$y[n] = \sum_{k=0}^{\infty} h[k] \, x[n-k] = \sum_{m=-\infty}^{n} h[n-m] \, x[m]$$
For a "non-causal" discrete linear system we will have
$$y[n] = \sum_{k=-\infty}^{\infty} h[k] \, x[n-k] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]$$
For a linear system, superposition applies:
$$y[n] = \sum_{k=0}^{\infty} h[k] \, (a_1 x_1[n-k] + a_2 x_2[n-k]) = a_1 \sum_{k=0}^{\infty} h[k] \, x_1[n-k] + a_2 \sum_{k=0}^{\infty} h[k] \, x_2[n-k] = y_1[n] + y_2[n]$$
For a filter with poles and zeros, the filter may be autoregressive as well:
$$y[n] = \sum_{k=0}^{\infty} b_k \, x[n-k] + \sum_{k=1}^{\infty} a_k \, y[n-k]$$
This is called a linear constant coefficient difference equation (LCCDE) in the text.
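A direct implementation of an LCCDE can be sketched as follows (Python assumed; the feedback sign convention matches the plus on the $a_k$ terms used here, and the coefficients in the usage line are illustrative).

```python
def lccde(x, b, a):
    """y[n] = sum_k b[k] x[n-k] + sum_{k>=1} a[k] y[n-k], zero initial conditions.

    a[0] is unused; it is kept only so indices match the equation above.
    """
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

# Impulse response of y[n] = x[n] + 0.5 y[n-1] is h[n] = 0.5^n
h = lccde([1.0, 0.0, 0.0, 0.0, 0.0], b=[1.0], a=[0.0, 0.5])
```

The nonzero $a_k$ terms make the impulse response infinite in length, which is why the filter is called autoregressive.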
Linear time invariant and linear shift invariant
$$y[n-k] = L\{x[n-k]\}, \text{ for all } n$$
A time offset in the input will not change the response at the output! This is key to the
convolution theory!
System Impulse Response
The response to a unit impulse is the impulse response:
$$y[n] = L\{\delta[n]\} = h[n]$$
8.3 Random Sequences and Linear Systems
For a "non-causal" discrete linear system we will have
$$E\{y[n]\} = E\left\{\sum_{k=-\infty}^{\infty} h[k] \, x[n-k]\right\} = E\left\{\sum_{m=-\infty}^{\infty} h[n-m] \, x[m]\right\}$$
$$E\{y[n]\} = \sum_{k=-\infty}^{\infty} h[k] \, E\{x[n-k]\} = \sum_{m=-\infty}^{\infty} h[n-m] \, E\{x[m]\}$$
If WSS,
$$E\{y[n]\} = \mu_Y = \sum_{k=-\infty}^{\infty} h[k] \, \mu_X = \mu_X \sum_{k=-\infty}^{\infty} h[k]$$
The mean times the coherent gain of the filter.
For a filter with poles and zeros, the filter may be autoregressive as well:
$$E\{y[n]\} = E\left\{\sum_{k=0}^{\infty} b_k \, x[n-k]\right\} + E\left\{\sum_{k=1}^{\infty} a_k \, y[n-k]\right\}$$
$$E\{y[n]\} = \sum_{k=0}^{\infty} b_k \, E\{x[n-k]\} + \sum_{k=1}^{\infty} a_k \, E\{y[n-k]\}$$
If WSS,
$$\mu_Y = \mu_X \sum_{k=0}^{\infty} b_k + \mu_Y \sum_{k=1}^{\infty} a_k$$
$$\mu_Y \left(1 - \sum_{k=1}^{\infty} a_k\right) = \mu_X \sum_{k=0}^{\infty} b_k \quad \text{and} \quad \mu_Y = \mu_X \, \frac{\displaystyle\sum_{k=0}^{\infty} b_k}{1 - \displaystyle\sum_{k=1}^{\infty} a_k}$$
Auto- and Cross-Correlation
For a "causal" discrete finite impulse response linear system we will have (impulse response
based)
$$y[n] = \sum_{k=0}^{\infty} h[k] \, x[n-k] = \sum_{m=-\infty}^{n} h[n-m] \, x[m]$$
Performing a cross-correlation (assuming real R.V.s and processing),
$$E\{x[n_1] \, y[n_2]\} = E\left\{x[n_1] \sum_{k=0}^{\infty} h[k] \, x[n_2 - k]\right\} = \sum_{k=0}^{\infty} h[k] \, E\{x[n_1] \, x[n_2 - k]\}$$
$$E\{x[n_1] \, y[n_2]\} = \sum_{k=0}^{\infty} h[k] \, R_{XX}[n_1, n_2 - k]$$
For x[n] WSS,
$$E\{x[n] \, y[n+m]\} = R_{XY}[m] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[(n + m - k) - n] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[m - k]$$
$$E\{x[n] \, y[n+m]\} = R_{XY}[m] = h[m] * R_{XX}[m]$$
What about the other way … YX instead of XY?
$$E\{y[n_1] \, x[n_2]\} = E\left\{\sum_{k=0}^{\infty} h[k] \, x[n_1 - k] \cdot x[n_2]\right\} = \sum_{k=0}^{\infty} h[k] \, R_{XX}[n_1 - k, n_2]$$
For x[n] WSS,
$$E\{y[n] \, x[n+m]\} = R_{YX}[m] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[(n + m) - (n - k)] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[m + k]$$
Perform a change of variable from k to -l (assuming h is real; see the text for complex):
$$E\{y[n] \, x[n+m]\} = R_{YX}[m] = \sum_{l} h[-l] \, R_{XX}[m - l]$$
Therefore,
$$E\{y[n] \, x[n+m]\} = R_{YX}[m] = h[-m] * R_{XX}[m]$$
What about the auto-correlation of y[n]? Performing an auto-correlation (assuming real R.V.s
and processing),
$$E\{y[n_1] \, y[n_2]\} = E\left\{\sum_{k_1=0}^{\infty} h[k_1] \, x[n_1 - k_1] \sum_{k_2=0}^{\infty} h[k_2] \, x[n_2 - k_2]\right\}$$
$$E\{y[n_1] \, y[n_2]\} = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] \, h[k_2] \, E\{x[n_1 - k_1] \, x[n_2 - k_2]\}$$
$$E\{y[n_1] \, y[n_2]\} = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] \, h[k_2] \, R_{XX}[n_1 - k_1, n_2 - k_2]$$
For x[n] WSS,
$$E\{y[n] \, y[n+m]\} = R_{YY}[m] = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] \, h[k_2] \, R_{XX}[(n + m - k_2) - (n - k_1)]$$
$$E\{y[n] \, y[n+m]\} = R_{YY}[m] = \sum_{k_1=0}^{\infty} h[k_1] \sum_{k_2=0}^{\infty} h[k_2] \, R_{XX}[m + k_1 - k_2]$$
Summary: For x[n] WSS and a real filter,
$$E\{x[n] \, y[n+m]\} = R_{XY}[m] = h[m] * R_{XX}[m]$$
$$E\{y[n] \, x[n+m]\} = R_{YX}[m] = h[-m] * R_{XX}[m]$$
$$E\{y[n] \, y[n+m]\} = R_{YY}[m] = R_{XX}[m] * h[m] * h[-m]$$
Example: White Noise Inputs to a causal filter
R XX n  
Let


E y n   RYY 0 

2

E y n   RYY 0 
2




 hk    hk   R k
1
2
k1  0
k2 0


k1  0
k2 0
 hk1    hk 2  

E y n   RYY 0  
2
N0
  n 
2

1
 k2 
N0
  k1  k 2 
2
N0 
  hk1   hk1 
2 k1 0
E y n   RYY 0  
2
XX
N0 
2
  hk 
2 k 0
For a white noise process, the mean squared (or 2nd moment) is proportional to the filter power.
Typically, there are similar derivations for sampled systems and continuous systems. .
The power spectral density output of linear systems
The discrete Power Spectral Density is defined as:
$$S_{XX}(\omega) = \sum_{n=-\infty}^{\infty} R_{XX}[n] \, \exp(-j\omega n)$$
The inverse transform is defined as
$$R_{XX}[n] = \mathcal{F}^{-1}\{S_{XX}(\omega)\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{XX}(\omega) \, \exp(j\omega n) \, d\omega$$
Properties:
1. Sxx(w) is purely real, as Rxx(n) is conjugate symmetric.
2. If X(n) is a real-valued WSS process, then Sxx(w) is an even function, as Rxx(n) is real
and even.
3. Sxx(w) >= 0 for all w.
4. Rxx(m) = 0 for all m > N for some finite integer N. This is a condition for the Fourier
transform to exist … finite energy.
Cross-Spectral Density
Since we have already shown the convolution formula, we can progress to the cross-spectral
density functions:
$$S_{XY}(\omega) = \sum_{n=-\infty}^{\infty} R_{XY}[n] \, \exp(-j\omega n) = \sum_{n=-\infty}^{\infty} (h[n] * R_{XX}[n]) \, \exp(-j\omega n)$$
Then
$$S_{XY}(\omega) = S_{XX}(\omega) \, H(\omega)$$
And for the other cross-spectral density,
$$S_{YX}(\omega) = \sum_{n=-\infty}^{\infty} R_{YX}[n] \, \exp(-j\omega n) = \sum_{n=-\infty}^{\infty} (h[-n] * R_{XX}[n]) \, \exp(-j\omega n)$$
Then, for a real filter,
$$S_{YX}(\omega) = S_{XX}(\omega) \, H(-\omega) = S_{XX}(\omega) \, H^*(\omega)$$
The output power spectral density becomes

S_YY(ω) = Σ_{n=−∞}^{∞} R_YY(n) · exp(−j·ω·n)

S_YY(ω) = Σ_{n=−∞}^{∞} [R_XX(n) * h(n) * h*(−n)] · exp(−j·ω·n)

For all systems

S_YY(ω) = S_XX(ω) · H(ω) · H*(ω) = S_XX(ω) · |H(ω)|²
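The relation S_YY(ω) = S_XX(ω) · |H(ω)|² can be checked deterministically for a white input, since then R_YY(m) = (N0/2) · Σ_k h(k)·h(k+m). The filter below is an arbitrary illustrative choice:

```python
import numpy as np

h = np.array([1.0, -0.4, 0.2])      # illustrative real causal filter
N0 = 2.0                             # white input: S_XX(w) = N0/2

# Output autocorrelation for a white input: R_YY(m) = (N0/2) * sum_k h(k) h(k+m)
R_YY = (N0 / 2) * np.correlate(h, h, mode="full")   # lags m = -2..2
m = np.arange(-(len(h) - 1), len(h))

w = 0.7
S_YY_from_R = np.sum(R_YY * np.exp(-1j * w * m)).real          # DTFT of R_YY
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))            # H(w)
S_YY_from_H = (N0 / 2) * abs(H) ** 2                           # S_XX(w) |H(w)|^2
print(S_YY_from_R, S_YY_from_H)      # equal: S_YY(w) = S_XX(w) |H(w)|^2
```

Both routes give the same number at any frequency ω, which is exactly the statement S_YY = S_XX · |H|².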
Synthesis of Random Sequences and Discrete-Time Simulations

We can generate a transfer function to provide a random sequence with a specified psd or correlation function. Starting with a digital filter

y(n) = Σ_{k=0}^{∞} b_k · x(n−k) − Σ_{k=1}^{∞} a_k · y(n−k)

The Fourier transform is

H(ω) = Y(ω)/X(ω) = B(ω)/A(ω)

where

B(ω) = Σ_{n=0}^{∞} b_n · exp(−j·ω·n)

and

A(ω) = 1 + Σ_{n=1}^{∞} a_n · exp(−j·ω·n)

The signal input to the filter is white noise, which has a constant (flat) power spectral density. Therefore,

S_YY(ω) = (N0/2) · H(ω) · H*(ω) = (N0/2) · |H(ω)|²

For real, causal coefficients, this is also equivalent to

S_YY(ω) = (N0/2) · H(ω) · H(−ω)

Or, using z-transform notation with z = exp(j·ω) and z⁻¹ = exp(−j·ω), this can be written as

S_YY(z) = (N0/2) · H(z) · H(z⁻¹)

In the z-domain, this implies that the poles and zeros occur in mirror-image pairs about the unit circle. As an added point of interest, minimum-phase, stable filters have all their poles and zeros inside the unit circle. The mirror-image elements form the "inverse filter".
Example 8.4-5 Filter Generation

If a desired psd can be stated as

S_XX(ω) = σ_N² / (1 − 2·α·cos(ω) + α²)

For

2·cos(ω) = exp(j·ω) + exp(−j·ω) and 1 = exp(j·ω) · exp(−j·ω)

The equivalent z-transform representation is

2·cos(ω) → z + z⁻¹ and 1 → z · z⁻¹

Then

S_XX(z) = σ_N² / (1 − α·(z + z⁻¹) + α²)

S_XX(z) = σ_N² · [1 / (1 − α·z⁻¹)] · [1 / (1 − α·z)] = σ_N² · H(z) · H(z⁻¹)

The desired (causal, stable) filter is then

H(z) = 1 / (1 − α·z⁻¹)

The inverse transform becomes

h(n) = αⁿ · u(n)

or

y(n) = x(n) + α · y(n−1)

Note: this should look similar to Example 8.4-6.
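A simulation sketch of this synthesis (parameter values are illustrative): drive the recursion y(n) = x(n) + α·y(n−1) with white noise and compare the sample autocorrelation of the output against the theoretical R_XX(m) = σ_N² · α^|m| / (1 − α²) implied by the psd above:

```python
import numpy as np

# Illustrative parameters for the AR(1) synthesis filter H(z) = 1/(1 - alpha z^-1)
alpha, sigma_N = 0.5, 1.0
rng = np.random.default_rng(1)
w = rng.normal(0.0, sigma_N, size=400_000)   # white noise input

x = np.empty_like(w)
prev = 0.0
for n in range(len(w)):                       # x(n) = w(n) + alpha * x(n-1)
    prev = w[n] + alpha * prev
    x[n] = prev

# Theory: R_XX(m) = sigma_N^2 * alpha^|m| / (1 - alpha^2)
R0_theory = sigma_N**2 / (1 - alpha**2)       # 4/3
R1_theory = alpha * R0_theory                 # 2/3
R0_est = np.mean(x * x)
R1_est = np.mean(x[:-1] * x[1:])
print(R0_est, R0_theory, R1_est, R1_theory)
```

The estimated lag-0 and lag-1 correlations converge to the theoretical values as the sequence length grows.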
Chapter 9 Random Processes

9.1 Basic Definitions

Random (Stochastic) Processes

Definition 9.1-1. Let (Ω, F, P) be a probability space. Then define a mapping X from the sample space Ω to a space of continuous-time functions. The elements in this space will be called sample functions. This mapping is called a random process if, at each fixed time, the mapping is a random variable, that is, X(t, ω) for each fixed t on the real line −∞ < t < ∞.

Figure 9.1-1 A random process for a continuous sample space Ω = [0,10].

The autocorrelation is defined as:

R_XX(t1, t2) = E[X1 · X2] = ∫_{−∞}^{∞} dx1 ∫_{−∞}^{∞} dx2 · x1 · x2 · f(x1, x2)

For WSS processes:

R_XX(t1, t2) = E[X(t) · X(t+τ)] = R_XX(τ)

If the process is ergodic, the time average is equivalent to the probabilistic expectation, or

ℛ_XX(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · x(t+τ) · dt = ⟨x(t) · x(t+τ)⟩

and

ℛ_XX(τ) = R_XX(τ)
The Application of the Expected Value Operator

Moments play an important role and, for ergodic processes, they can be estimated from a single realization in time of the infinite number that may be possible.

Therefore,

μ_X(t) = E[X(t)]

and the correlation functions (auto- and cross-correlation)

R_XX(t1, t2) = E[X(t1) · X*(t2)]
R_XY(t1, t2) = E[X(t1) · Y*(t2)]

and the covariance functions (auto- and cross-covariance)

K_XX(t1, t2) = E[(X(t1) − μ_X(t1)) · (X(t2) − μ_X(t2))*]
K_XY(t1, t2) = E[(X(t1) − μ_X(t1)) · (Y(t2) − μ_Y(t2))*]

with

K_XX(t1, t2) = R_XX(t1, t2) − μ_X(t1) · μ_X*(t2)

Note that the variance can be computed from the auto-covariance as

K_XX(t, t) = E[(X(t) − μ_X(t)) · (X(t) − μ_X(t))*] = σ_X²(t)

and the "power" function can be computed from the auto-correlation

R_XX(t, t) = E[X(t) · X*(t)] = E[|X(t)|²]

For real X(t)

R_XX(t, t) = E[X(t) · X(t)] = E[X(t)²]
Example: xt   A  sin2  f  t  for A a uniformly distributed random variable A   2,2
R XX t1 , t 2   E X t1   X t 2   EA  sin2  f  t1   A  sin2  f  t 2 
1


R XX t1 , t 2   E X t1   X t 2   E  A 2   cos2  f  t1  t 2   cos2  f  t1  t 2 
2


R XX t1 , t 2  
 
1
 E A 2  cos2  f  t1  t 2   cos2  f  t1  t 2 
2
for   t 2  t1
R XX t1 , t 2  
1 2
  cos2  f     cos2  f  t1  t 2 
2 12
R XX t1 , t 2  
16
 cos2  f     cos2  f  t1  t 2 
24
A non-stationary process
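A Monte Carlo check of this result (the time points and frequency below are arbitrary illustrative values): averaging over many draws of A reproduces both equivalent forms of R_XX(t1, t2):

```python
import numpy as np

# Draw the random amplitude A ~ Uniform[-2, 2]
rng = np.random.default_rng(2)
A = rng.uniform(-2.0, 2.0, size=1_000_000)

f, t1, t2 = 1.0, 0.15, 0.40          # arbitrary illustrative values
R_est = np.mean(A * np.sin(2*np.pi*f*t1) * A * np.sin(2*np.pi*f*t2))

# E[A^2] = (b - a)^2 / 12 = 16/12 for Uniform[-2, 2]
R_theory = (16/12) * np.sin(2*np.pi*f*t1) * np.sin(2*np.pi*f*t2)
# Equivalent product-to-sum form:
R_alt = (16/24) * (np.cos(2*np.pi*f*(t2 - t1)) - np.cos(2*np.pi*f*(t1 + t2)))
print(R_est, R_theory, R_alt)
```

Repeating this with different (t1, t2) pairs having the same difference t2 − t1 gives different values, confirming the non-stationarity.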
Example 9.1-5 Auto-correlation of a Sinusoid with Random Phase

Think of:

X(t) = A · sin(ω·t + θ), with θ uniform on (−π, π]

where A and ω are known constants, and θ has a uniform pdf covering the unit circle.

The mean is computed as

μ_X(t) = E[X(t)] = E[A · sin(ω·t + θ)] = A · E[sin(ω·t + θ)]

μ_X(t) = A · (1/2π) ∫_{−π}^{π} sin(ω·t + θ) · dθ

μ_X(t) = (A/2π) · [−cos(ω·t + θ)]_{−π}^{π}

μ_X(t) = (A/2π) · [cos(ω·t − π) − cos(ω·t + π)] = 0

The auto-correlation is computed as

R_XX(t1, t2) = E[X(t1) · X*(t2)] = E[A · sin(ω·t1 + θ) · A · sin(ω·t2 + θ)]

R_XX(t1, t2) = A² · E[(1/2) · cos(ω·(t1 − t2)) − (1/2) · cos(ω·(t1 + t2) + 2θ)]

R_XX(t1, t2) = (A²/2) · cos(ω·(t1 − t2)) − (A²/2) · E[cos(ω·(t1 + t2) + 2θ)]

R_XX(t1, t2) = (A²/2) · cos(ω·(t1 − t2)) − 0 = (A²/2) · cos(ω·(t1 − t2))

R_XX(t1, t2) = R_XX(τ) = (A²/2) · cos(2π·f·τ)

Note that if A were a random variable (independent of the phase) we would have

R_XX(t1, t2) = (E[A²]/2) · cos(ω·(t1 − t2)), so R_XX(τ) = (E[A²]/2) · cos(ω·τ)

and

μ_X(t) = E[X(t)] = (E[A]/2π) · 0 = 0

Note: this random process is wide-sense stationary (the mean is constant and the autocorrelation depends only on τ, not on absolute time).
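The phase-averaged autocorrelation can be estimated by simulation (amplitude, frequency, and time values below are arbitrary illustrative choices); the estimate matches (A²/2) · cos(ω·τ) regardless of the absolute time t, as WSS requires:

```python
import numpy as np

# Random phase uniform on (-pi, pi]
rng = np.random.default_rng(3)
theta = rng.uniform(-np.pi, np.pi, size=1_000_000)

A, w0, t, tau = 2.0, 3.0, 0.7, 0.25           # arbitrary illustrative values
R_est = np.mean(A*np.sin(w0*t + theta) * A*np.sin(w0*(t + tau) + theta))
R_theory = (A**2 / 2) * np.cos(w0 * tau)      # (A^2/2) cos(w tau), independent of t
print(R_est, R_theory)
```

Changing t leaves R_est unchanged (to within Monte Carlo error), while changing τ moves it along the cosine.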
Example: xt   A  sin2  f  t    for  a uniformly distributed random variable   0,2 
The time based formulation:
1
 XX    lim
T   2T
1
T   2T
 XX    lim
 XX   
T
 xt   xt     dt 
xt   xt   
T
T
 A  sin 2  f  t     A  sin 2  f  t        dt
T
A2
1
 lim
2 T  2T
T
 cos2  f     cos2  f  2t     2  dt
T
A2
 XX   
 cos2  f   
2
It also ergodic if A is a constant! If A is an R.V. it may not be ergodic. (based on the R.V.)
B.J. Bazuin, Spring 2016
36 of 63
ECE 3800
Example: x(t) = B · rect((t − t0)/T), for B = ±A with probabilities p and (1−p), and t0 a uniformly distributed random variable, t0 ∈ [−T/2, T/2]. Assume B and t0 are independent.

R_XX(t1, t2) = E[X(t1) · X(t2)] = E[B · rect((t1 − t0)/T) · B · rect((t2 − t0)/T)]

R_XX(t1, t2) = E[B² · rect((t1 − t0)/T) · rect((t2 − t0)/T)]

As the random variables are independent,

R_XX(t1, t2) = E[B²] · E[rect((t1 − t0)/T) · rect((t2 − t0)/T)]

R_XX(t1, t2) = (A² · p + (−A)² · (1 − p)) · E[rect((t1 − t0)/T) · rect((t2 − t0)/T)]

R_XX(t1, t2) = A² · ∫_{−T/2}^{T/2} rect((t1 − t0)/T) · rect((t2 − t0)/T) · (1/T) · dt0

For t1 = 0 and t2 = τ,

R_XX(0, τ) = A² · ∫_{−T/2}^{T/2} 1 · rect((τ − t0)/T) · (1/T) · dt0

The integral can be recognized as being a triangle, extending from −T to T and zero everywhere else.

R_XX(τ) = A² · tri(τ/T)

R_XX(τ) = 0, for τ < −T
R_XX(τ) = A² · (1 + τ/T), for −T ≤ τ ≤ 0
R_XX(τ) = A² · (1 − τ/T), for 0 ≤ τ ≤ T
R_XX(τ) = 0, for T < τ
Some Important Random Processes

Asynchronous Binary Signaling

The pulse values are independent, identically distributed with probability p that the amplitude is a and q = 1 − p that the amplitude is −a. The start of the "zeroth" pulse is uniformly distributed from −T/2 to T/2:

pdf_D(d) = 1/T, for −T/2 ≤ d ≤ T/2

Determine the autocorrelation of the bipolar binary sequence, assuming p = 0.5.

X(t) = Σ_{k=−∞}^{∞} X_k · rect((t − D − k·T)/T)

Note: the rect function is defined as

rect(t/T) = 1 for −T/2 ≤ t ≤ T/2, and 0 else.

Determine the autocorrelation:

R_XX(t1, t2) = E[X(t1) · X(t2)]

R_XX(t1, t2) = E[ Σ_{n=−∞}^{∞} X_n · rect((t1 − D − n·T)/T) · Σ_{k=−∞}^{∞} X_k · rect((t2 − D − k·T)/T) ]

R_XX(t1, t2) = E[ Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} X_n · X_k · rect((t1 − D − n·T)/T) · rect((t2 − D − k·T)/T) ]
R XX t1 , t 2  


  EX
n   k  
n

 t  D  n T 
 t  D  k  T 
 X k   E rect 1
  X k  rect 2

T
T





For samples more than one period apart, t1  t 2  T , we must consider


E X k  X j  p  a  p  a  p  a  1  p    a   1  p    a   p  a  1  p    a   1  p    a 



E X k  X j  a 2  p 2 2  p  1  p   1  p 



2


E X k  X j  a 2  4  p 2 4  p  1




E X k  X j  a 2  4  p 2 4  p  1  0
For p=0.5
For samples within one period, t1  t 2  T ,
 
EX k  X k   E X k  p  a 2  1  p    a   a 2
2
2


EX k  X k 1   a 2  4  p 2 4  p  1  0
For samples within one period, |t1 − t2| < T, there are two regions to consider: the overlapping part of the same bit and the area of the adjacent bit.

R_XX(t1, t2) = a² · Σ_{k=−∞}^{∞} E[rect((t1 − D − k·T)/T) · rect((t2 − D − k·T)/T)]

The same bit overlaps for a fraction (T − |τ|)/T of the possible pulse start positions, and the adjacent bit for the remaining fraction |τ|/T. Averaging over the uniformly distributed start time,

R_XX(τ) = (1/T) ∫_{−T/2}^{T/2 − |τ|} E[X_k · X_k] · dt0 + (1/T) ∫_{T/2 − |τ|}^{T/2} E[X_k · X_{k±1}] · dt0, for |τ| ≤ T

With E[X_k · X_{k±1}] = 0 for p = 0.5,

R_XX(τ) = a² · (1/T) ∫_{−T/2}^{T/2 − |τ|} 1 · dt0 = a² · (T − |τ|)/T

Therefore

R_XX(τ) = a² · (1 + τ/T), for −T ≤ τ ≤ 0
R_XX(τ) = a² · (1 − τ/T), for 0 ≤ τ ≤ T

or, recognizing the structure,

R_XX(τ) = a² · (1 − |τ|/T), for −T ≤ τ ≤ T

This is simply a triangular function with maximum value a², extending for a full bit period in both time directions.
For unequal bit probability, the adjacent-bit term no longer vanishes:

R_XX(τ) = a² · (1 − |τ|/T) + a² · (4·p² − 4·p + 1) · (|τ|/T), for −T ≤ τ ≤ T
R_XX(τ) = a² · (4·p² − 4·p + 1), for T ≤ |τ|

As there are more of one bit than the other, there is always a positive correlation between bits (the curve is a minimum for p = 0.5), and it peaks to a² at τ = 0.
Note that if the amplitude is a random variable, the expected values of the bit products must be further evaluated. For example,

E[X_k · X_k] = σ² + μ²
E[X_k · X_{k±1}] = μ²

In general, the autocorrelation of communications signal waveforms is important, particularly when we discuss the power spectral density later in the textbook.

If the signal takes on the two levels a and b instead of a and −a, the result would be

E[X_k · X_j] = p·a·p·a + p·a·(1−p)·b + (1−p)·b·p·a + (1−p)·b·(1−p)·b

For p = 1/2,

E[X_k · X_j] = (1/4)·a² + (1/2)·a·b + (1/4)·b² = ((a + b)/2)²

And

E[X_k · X_k] = E[X_k²] = p·a² + (1−p)·b²

For p = 1/2,

E[X_k · X_k] = E[X_k²] = (1/2)·(a² + b²) = ((a + b)/2)² + ((a − b)/2)²

Therefore,

R_XX(τ) = ((a + b)/2)² + ((a − b)/2)² · (1 − |τ|/T), for −T ≤ τ ≤ T
R_XX(τ) = ((a + b)/2)², for |τ| ≥ T

For a = 1, b = 0 and T = 1, we have

R_XX(τ) = 1/4 + (1/4)·(1 − |τ|), for −1 ≤ τ ≤ 1
R_XX(τ) = 1/4, for |τ| ≥ 1

Figure 9.2-2 Autocorrelation function of the ABS random process for a = 1, b = 0 and T = 1.
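The two-level result above can be packaged as a small helper (an illustration, not from the text) and checked at the limiting lags: at τ = 0 it returns E[X_k²] = (a² + b²)/2, and beyond one bit period it returns the squared mean:

```python
# Piecewise autocorrelation of the two-level asynchronous binary signal, p = 1/2
def r_abs(tau, a, b, T):
    mean_sq = ((a + b) / 2) ** 2                 # squared mean: floor of the curve
    if abs(tau) >= T:
        return mean_sq
    return mean_sq + ((a - b) / 2) ** 2 * (1 - abs(tau) / T)

# a = 1, b = 0, T = 1: R(0) = 1/2 = E[X_k^2]; R(tau >= 1) = 1/4 = mean^2
print(r_abs(0.0, 1, 0, 1), r_abs(0.5, 1, 0, 1), r_abs(2.0, 1, 0, 1))
# -> 0.5 0.375 0.25
```

Setting b = −a recovers the bipolar case: the floor drops to zero and the peak is a².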
Exercise 6-3.1 – Cooper and McGillem

a) An ergodic random process has an autocorrelation function of the form

R_XX(τ) = 9·exp(−4·|τ|) + 16·cos(10·τ) + 16

Find the mean-square value, the mean value, and the variance of the process.

The mean-square (2nd moment) is

E[X²] = R_XX(0) = 9 + 16 + 16 = 41 = σ² + μ²

The constant portion of the autocorrelation represents the square of the mean. Therefore

E[X]² = μ² = 16 and μ = ±4

Finally, the variance can be computed as

σ² = E[X²] − E[X]² = R_XX(0) − μ² = 41 − 16 = 25

b) An ergodic random process has an autocorrelation function of the form

R_XX(τ) = (4·τ² + 6) / (τ² + 1)

Find the mean-square value, the mean value, and the variance of the process.

The mean-square (2nd moment) is

E[X²] = R_XX(0) = 6/1 = 6 = σ² + μ²

The constant portion of the autocorrelation represents the square of the mean. Therefore

E[X]² = μ² = lim_{τ→∞} (4·τ² + 6)/(τ² + 1) = 4 and μ = ±2

Finally, the variance can be computed as

σ² = E[X²] − E[X]² = R_XX(0) − μ² = 6 − 4 = 2
Exercise 6-6.1

Find the cross-correlation of the two functions:

X(t) = 2·cos(2π·f·t + θ) and Y(t) = 10·sin(2π·f·t + θ)

Using the time-average functions,

ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · y(t+τ) · dt = ⟨x(t) · y(t+τ)⟩

ℛ_XY(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} 2·cos(2π·f·t + θ) · 10·sin(2π·f·(t+τ) + θ) · dt

Using the identity cos(A)·sin(B) = (1/2)·[sin(A + B) − sin(A − B)] and averaging over one full period,

ℛ_XY(τ) = 20·f · ∫_{0}^{1/f} (1/2)·[sin(2π·f·(2t+τ) + 2θ) + sin(2π·f·τ)] · dt

ℛ_XY(τ) = 10·f · ∫_{0}^{1/f} sin(2π·f·(2t+τ) + 2θ) · dt + 10·f · ∫_{0}^{1/f} sin(2π·f·τ) · dt

The first integral spans two full cycles of the double-frequency term and is zero. Therefore

ℛ_XY(τ) = 10·sin(2π·f·τ)

Using the probabilistic functions,

R_XY(τ) = E[x(t) · y(t+τ)]

R_XY(τ) = E[2·cos(2π·f·t + θ) · 10·sin(2π·f·(t+τ) + θ)]

R_XY(τ) = 20 · E[cos(2π·f·t + θ) · sin(2π·f·(t+τ) + θ)]

R_XY(τ) = 10 · E[sin(2π·f·(2t+τ) + 2θ)] + 10·sin(2π·f·τ)

From prior understanding of the uniform random phase, the expectation of the 2θ term is zero, so

R_XY(τ) = 10·sin(2π·f·τ)
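The time-average route can be checked numerically (frequency, phase, and lag values below are arbitrary): averaging x(t)·y(t+τ) over exactly one period gives 10·sin(2π·f·τ) for any fixed phase θ:

```python
import numpy as np

# Arbitrary illustrative values; any fixed theta gives the same answer
f, theta, tau = 2.0, 0.9, 0.13
N = 200_000
t = np.arange(N) / (N * f)                 # uniform grid over one period [0, 1/f)

x = 2 * np.cos(2*np.pi*f*t + theta)
y = 10 * np.sin(2*np.pi*f*(t + tau) + theta)

R_num = np.mean(x * y)                     # (1/P) * integral over one period
R_theory = 10 * np.sin(2*np.pi*f*tau)
print(R_num, R_theory)
```

The double-frequency term averages to zero exactly over a full period, so the numeric result matches the closed form essentially to machine precision.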
Section 9.4 Classifications of Random Processes

Definition 9.4-1. Let X and Y be random processes.

(a) They are uncorrelated if

R_XY(t1, t2) = E[X(t1) · Y*(t2)] = μ_X(t1) · μ_Y*(t2), for all t1 and t2

(b) They are orthogonal if

R_XY(t1, t2) = E[X(t1) · Y*(t2)] = 0, for all t1 and t2

(c) They are independent if, for all positive integers n, the nth-order CDF of X and Y factors. That is,

F_XY(x1, y1, x2, y2, …, xn, yn; t1, t2, …, tn)
  = F_X(x1, x2, …, xn; t1, t2, …, tn) · F_Y(y1, y2, …, yn; t1, t2, …, tn)

Note that if two processes are uncorrelated and one of the means is zero, they are orthogonal as well!

Stationarity

A random process is stationary when its statistics do not change with the continuous time parameter:

F_X(x1, x2, …, xn; t1, t2, …, tn) = F_X(x1, x2, …, xn; t1 + T, t2 + T, …, tn + T)

Overall, the CDF and pdf do not change with absolute time. They may have time characteristics, as long as the elements are based on time differences and not absolute time.

F_X(x1, x2; t1, t2) = F_X(x1, x2; t1 − t2, 0)
f_X(x1, x2; t1, t2) = f_X(x1, x2; t1 − t2, 0)

This implies that

R_XX(t1, t2) = E[X(t1) · X*(t2)] = R_XX(t1 − t2, 0) = R_XX(τ, 0)

Definition 9.4-3. Wide-Sense Stationary

A random process is wide-sense stationary (WSS) when its mean does not change with the continuous time parameter. We also include the autocorrelation being a function of one variable:

E[X(t)] = μ_X and E[X(t) · X*(t+τ)] = R_XX(τ), for −∞ < τ < ∞, independent of t
Power Spectral Density

Definition 9.1-1: PSD

Let R_XX(τ) be the autocorrelation function of a WSS random process. The power spectral density is defined as the Fourier transform of the autocorrelation function:

S_XX(ω) = F{R_XX(τ)} = ∫_{−∞}^{∞} R_XX(τ) · exp(−i·ω·τ) · dτ

The inverse exists in the form of the inverse transform

R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · exp(i·ω·τ) · dω

Properties:
1. S_XX(ω) is purely real, as R_XX(τ) is conjugate symmetric.
2. If X(t) is a real-valued WSS process, then S_XX(ω) is an even function, as R_XX(τ) is real and even.
3. S_XX(ω) ≥ 0 for all ω.

Wiener–Khinchin Theorem

For WSS random processes, the autocorrelation function is time based and has a spectral decomposition given by the power spectral density.

Also see: http://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem

Why this is very important: the Fourier transform of a single instantiation of a random process may be meaningless or even impossible to generate. But if the random process can be described in terms of its autocorrelation function (all ergodic, WSS processes), then the power spectral density can be defined.

We can then know what the expected frequency spectrum of the output looks like, and we can design a system to keep the required frequencies and filter out the unneeded frequencies (e.g., noise and interference).
Relation of Spectral Density to the Autocorrelation Function

For "the right" random processes, the power spectral density is the Fourier transform of the autocorrelation:

S_XX(ω) = F{R_XX(τ)} = ∫_{−∞}^{∞} E[X(t) · X(t+τ)] · exp(−i·ω·τ) · dτ

For an ergodic process, we can use time-based processing to arrive at an equivalent result:

E[X(t) · X(t+τ)] = ℛ_XX(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · x(t+τ) · dt

Taking the Fourier transform of this time average and interchanging the order of integration,

F{ℛ_XX(τ)} = ∫_{−∞}^{∞} [ lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · x(t+τ) · dt ] · exp(−i·ω·τ) · dτ

F{ℛ_XX(τ)} = lim_{T→∞} (1/2T) ∫_{−T}^{T} [ ∫_{−∞}^{∞} x(t) · x(t+τ) · exp(−i·ω·τ) · dτ ] · dt

F{ℛ_XX(τ)} = lim_{T→∞} (1/2T) ∫_{−T}^{T} [ ∫_{−∞}^{∞} x(t) · x(t+τ) · exp(−i·ω·(t+τ)) · exp(+i·ω·t) · dτ ] · dt

F{ℛ_XX(τ)} = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · exp(+i·ω·t) · [ ∫_{−∞}^{∞} x(t+τ) · exp(−i·ω·(t+τ)) · dτ ] · dt

If there exists X(ω) = F{x(t)}, the inner integral is X(ω), so

F{ℛ_XX(τ)} = X(ω) · lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) · exp(+i·ω·t) · dt

F{ℛ_XX(τ)} = X(ω) · X(−ω) = X(ω) · X*(ω) = |X(ω)|²

(For real x(t), X(−ω) = X*(ω).)
Property:

Since R_XX is symmetric, we must have

R_XX(τ) = R_XX(−τ) and E_X(ω) + i·O_X(ω) = E_X(ω) − i·O_X(ω)

where E_X and O_X are the even and odd portions of the transform. For this to be true, −i·O_X(ω) = +i·O_X(ω), which can only occur if the odd portion of the Fourier transform is zero: O_X(ω) = 0.

This provides information about the power spectral density:

S_XX(ω) = F{R_XX(τ)} = E_X(ω)

S_XX(ω) ≥ 0

The power spectral density necessarily contains no phase information!
Example 9.5-3

Find the psd of the following autocorrelation function (the random telegraph):

R_XX(τ) = exp(−α·|τ|), for α > 0

Find a good Fourier transform table; otherwise,

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) · exp(−j·ω·τ) · dτ

S_XX(ω) = ∫_{−∞}^{∞} exp(−α·|τ|) · exp(−j·ω·τ) · dτ

S_XX(ω) = ∫_{−∞}^{0} exp(+α·τ) · exp(−j·ω·τ) · dτ + ∫_{0}^{∞} exp(−α·τ) · exp(−j·ω·τ) · dτ

S_XX(ω) = [ exp((α − j·ω)·τ) / (α − j·ω) ]_{−∞}^{0} + [ exp(−(α + j·ω)·τ) / (−(α + j·ω)) ]_{0}^{∞}

S_XX(ω) = 1/(α − j·ω) + 1/(α + j·ω)

S_XX(ω) = (α + j·ω + α − j·ω) / ((α − j·ω)·(α + j·ω))

S_XX(ω) = 2·α / (α² + ω²)

For α = 3:

Figure 9.5-2 Plot of psd for exponential autocorrelation function.
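The transform pair can be verified by direct numerical integration (α and ω values are illustrative); since exp(−α·|τ|) is even, only the cosine part of the kernel contributes:

```python
import numpy as np

# Illustrative values; the tail exp(-alpha*20) is negligible at the truncation edge
alpha, w = 3.0, 2.0
tau = np.linspace(-20.0, 20.0, 400_001)
dtau = tau[1] - tau[0]

# Imaginary (sine) part integrates to zero by symmetry, so keep the cosine part
integrand = np.exp(-alpha * np.abs(tau)) * np.cos(w * tau)
S_num = np.sum(integrand) * dtau

S_theory = 2 * alpha / (alpha**2 + w**2)    # 6/13 here
print(S_num, S_theory)
```

The numeric integral reproduces 2α/(α² + ω²) to well within the discretization error.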
Example 9.5-4

Find the psd of the triangle autocorrelation function (the autocorrelation of a rect):

R_XX(τ) = tri(τ/T), or R_XX(τ) = 1 − |τ|/T, for |τ| ≤ T

S_XX(ω) = ∫_{−T}^{T} (1 − |τ|/T) · exp(−j·ω·τ) · dτ

S_XX(ω) = ∫_{−T}^{0} (1 + τ/T) · exp(−j·ω·τ) · dτ + ∫_{0}^{T} (1 − τ/T) · exp(−j·ω·τ) · dτ

Carrying out the integration by parts and combining terms, the exponentials collect into 2 − exp(j·ω·T) − exp(−j·ω·T) = 2·(1 − cos(ω·T)), giving

S_XX(ω) = (2/T) · (1 − cos(ω·T)) / ω²

Using 1 − cos(ω·T) = 2·sin²(ω·T/2),

S_XX(ω) = T · [ sin(ω·T/2) / (ω·T/2) ]²
Deriving the Mean-Square Value from the Power Spectral Density

Using the Fourier transform relation between the autocorrelation and the psd,

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) · exp(−i·ω·τ) · dτ

R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · exp(i·ω·τ) · dω

The mean-squared value of a random process is equal to the 0th lag of the autocorrelation:

E[X²] = R_XX(0) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · exp(i·ω·0) · dω = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · dω

E[X²] = R_XX(0) = ∫_{−∞}^{∞} S_XX(f) · exp(i·2π·f·0) · df = ∫_{−∞}^{∞} S_XX(f) · df

Therefore, to find the second moment, integrate the psd over all frequencies.

As a note, since the psd is real and symmetric, the integral can be performed as

E[X²] = R_XX(0) = 2 · (1/2π) ∫_{0}^{∞} S_XX(ω) · dω

E[X²] = R_XX(0) = 2 · ∫_{0}^{∞} S_XX(f) · df
Converting between Autocorrelation and Power Spectral Density

Using the properties of the functions, we can use several different variations of the transforms.

The power spectral density as a function is always
- real,
- positive,
- and an even function in ω (or f).

You can convert between the domains using any of the following.

The Fourier transform in ω:

S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) · exp(−i·ω·τ) · dτ

R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · exp(i·ω·τ) · dω

The Fourier transform in f:

S_XX(f) = ∫_{−∞}^{∞} R_XX(τ) · exp(−i·2π·f·τ) · dτ

R_XX(τ) = ∫_{−∞}^{∞} S_XX(f) · exp(i·2π·f·τ) · df

The two-sided Laplace transform (the jω axis of the s-plane):

S_XX(s) = ∫_{−∞}^{∞} R_XX(τ) · exp(−s·τ) · dτ

R_XX(τ) = (1/j2π) ∫_{−j∞}^{j∞} S_XX(s) · exp(s·τ) · ds
Example: Inverse Laplace Transform

S_XX(ω) = A² · [ 1/(α − i·ω) + 1/(α + i·ω) ] = 2·A²·α / (α² + ω²)

Substitute s for i·ω (so that ω² → −s²):

S_XX(s) = 2·A²·α / (α² − s²) = 2·A²·α / ((α + s)·(α − s))

Partial fraction expansion:

S_XX(s) = k0/(α + s) + k1/(α − s) = (k0·(α − s) + k1·(α + s)) / ((α + s)·(α − s))

Matching terms: (k1 − k0)·s = 0 and (k0 + k1)·α = 2·A²·α, so k0 = k1 = A².

S_XX(s) = A²/(α + s) + A²/(α − s)

Taking the LHP (causal) term:

L⁻¹{A²/(α + s)} = A² · exp(−α·t), for t ≥ 0

Taking the RHP term with −s and then −t:

L⁻¹{A²/(α − s)} = A² · exp(+α·t), for t ≤ 0

Combining, we have

R_XX(τ) = A² · exp(−α·|τ|)
7-6.3 A stationary random process has a spectral density of

S_XX(ω) = 5, for 10 ≤ |ω| ≤ 20
S_XX(ω) = 0, else

(a) Find the mean-square value of the process.

R_XX(0) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · dω = (2/2π) ∫_{0}^{∞} S_XX(ω) · dω

R_XX(0) = (1/2π) ∫_{−20}^{−10} 5 · dω + (1/2π) ∫_{10}^{20} 5 · dω = (2/2π) ∫_{10}^{20} 5 · dω

R_XX(0) = (10/2π) · (20 − 10) = 100/2π = 50/π

(b) Find the auto-correlation function of the process.

R_XX(t) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) · exp(j·ω·t) · dω

R_XX(t) = (5/2π) · [ ∫_{−20}^{−10} exp(j·ω·t) · dω + ∫_{10}^{20} exp(j·ω·t) · dω ]

R_XX(t) = (5/2π) · [ (exp(−j·10·t) − exp(−j·20·t))/(j·t) + (exp(j·20·t) − exp(j·10·t))/(j·t) ]

R_XX(t) = (5/2π) · [ (exp(j·20·t) − exp(−j·20·t))/(j·t) − (exp(j·10·t) − exp(−j·10·t))/(j·t) ]

R_XX(t) = (5/2π) · (2·j·sin(20·t) − 2·j·sin(10·t))/(j·t) = (5/(π·t)) · (sin(20·t) − sin(10·t))

Using sin(A) − sin(B) = 2·sin((A − B)/2)·cos((A + B)/2),

R_XX(t) = (5/(π·t)) · 2·sin(5·t)·cos(15·t) = (10/(π·t)) · sin(5·t)·cos(15·t)

R_XX(t) = (50/π) · (sin(5·t)/(5·t)) · cos(15·t) = (50/π) · sinc(5·t/π) · cos(15·t)

(c) Find the value of the auto-correlation function at t = 0.

R_XX(0) = (50/π) · (sin(5·t)/(5·t))|_{t→0} · cos(0) = (50/π) · 1 · 1 = 50/π

It must produce the same result as part (a)!
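The answers to parts (a) through (c) can be cross-checked by numerically evaluating the inverse-transform integral on a frequency grid (the grid resolution is an arbitrary choice):

```python
import numpy as np

# S(w) = 5 on 10 <= |w| <= 20; use symmetry: R(t) = (5/pi) * Int_{10}^{20} cos(w t) dw
w = np.linspace(10.0, 20.0, 100_001)
dw = w[1] - w[0]

def R(t):
    return (5 / np.pi) * np.sum(np.cos(w * t)) * dw

R0_theory = 50 / np.pi                                          # part (a) and (c)
Rt_theory = (50/np.pi) * (np.sin(5*0.3)/(5*0.3)) * np.cos(15*0.3)  # part (b) at t = 0.3
print(R(0.0), R0_theory)     # both near 15.9155
print(R(0.3), Rt_theory)
```

Both the t = 0 value and an off-zero sample agree with the closed-form sinc-times-cosine expression.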
White Noise

Noise is inherently defined as a random process. You may be familiar with "thermal" noise, based on the energy of an atom and the mean free path that it can travel.

- As a random process, whenever "white noise" is measured, the values are uncorrelated with each other, no matter how close together the samples are taken in time.
- Further, we envision "white noise" as containing all spectral content, with no explicit peaks or valleys in the power spectral density.

As a result, we define "white noise" as

R_XX(τ) = S0 · δ(τ)

S_XX(ω) = S0 = N0/2

Band-Limited White Noise

S_XX(f) = S0 = N0/2, for |f| ≤ W
S_XX(f) = 0, for W < |f|

The equivalent noise power is then:

E[X²] = R_XX(0) = ∫_{−W}^{W} S0 · df = 2·W·S0 = N0·W

But what about the autocorrelation?

R_XX(t) = ∫_{−W}^{W} S0 · exp(i·2π·f·t) · df

R_XX(t) = S0 · [ exp(i·2π·f·t)/(i·2π·t) ]_{−W}^{W} = S0 · (exp(i·2π·W·t) − exp(−i·2π·W·t)) / (i·2π·t)

R_XX(t) = S0 · 2·i·sin(2π·W·t) / (i·2π·t) = S0 · sin(2π·W·t)/(π·t)

For sinc(x) = sin(π·x)/(π·x),

R_XX(t) = 2·W·S0 · sinc(2·W·t)
The Cross-Spectral Density

Why not form the power-spectral response of the cross-correlation function?

The Fourier transform in ω:

S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) · exp(−i·ω·τ) · dτ and S_YX(ω) = ∫_{−∞}^{∞} R_YX(τ) · exp(−i·ω·τ) · dτ

R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) · exp(i·ω·τ) · dω and R_YX(τ) = (1/2π) ∫_{−∞}^{∞} S_YX(ω) · exp(i·ω·τ) · dω

Properties of the functions:

S_XY(ω) = conj(S_YX(ω))

Since the cross-correlation is real,
- the real portion of the spectrum is even,
- the imaginary portion of the spectrum is odd.

There are no other important (assumed) properties to describe.
Section 9.3 Continuous-Time Linear Systems with Random Inputs

Linear system requirements:

Definition 9.3-1. Let x1(t) and x2(t) be two deterministic time functions and let a1 and a2 be two scalar constants. Let the linear system be described by the operator equation

y(t) = L[x(t)]

Then the system is linear if linear superposition holds:

L[a1·x1(t) + a2·x2(t)] = a1·L[x1(t)] + a2·L[x2(t)]

for all admissible functions x1 and x2 and all scalars a1 and a2.

For x(t) a random process, y(t) will also be a random process.

Linear transformation of signals: convolution in the time domain

y(t) = h(t) * x(t)    [x(t) → h(t) → y(t)]

Linear transformation of signals: multiplication in the Laplace domain

Y(s) = H(s) · X(s)    [X(s) → H(s) → Y(s)]

The convolution integrals (applying a causal filter):

y(t) = ∫_{0}^{∞} x(t − λ) · h(λ) · dλ

or

y(t) = ∫_{−∞}^{t} h(t − λ) · x(λ) · dλ

where, for physical realizability (causality) and stability, we require

h(t) = 0 for t < 0 and ∫_{−∞}^{∞} |h(t)| · dt < ∞
Example: Applying a Linear Filter to a Random Process

h(t) = 5 · exp(−3·t), for t ≥ 0

X(t) = M + 4 · cos(2·t + θ)

where M and θ are independent random variables, with θ uniformly distributed on [0, 2π).

We can perform the filtering directly, since an explicit formula for the random process is known.

y(t) = ∫_{−∞}^{t} h(t − λ) · x(λ) · dλ

y(t) = ∫_{−∞}^{t} 5 · exp(−3·(t − λ)) · (M + 4 · cos(2·λ + θ)) · dλ

y(t) = 5·M · ∫_{−∞}^{t} exp(−3·(t − λ)) · dλ + 20 · ∫_{−∞}^{t} exp(−3·(t − λ)) · cos(2·λ + θ) · dλ

y(t) = (5·M)/3 + 10 · ∫_{−∞}^{t} exp(−3·(t − λ)) · [exp(i·2·λ + i·θ) + exp(−i·2·λ − i·θ)] · dλ

y(t) = (5·M)/3 + 10 · [ exp(i·2·t + i·θ)/(3 + i·2) + exp(−i·2·t − i·θ)/(3 − i·2) ]

y(t) = (5·M)/3 + 10 · [ (3 − i·2)·exp(i·2·t + i·θ) + (3 + i·2)·exp(−i·2·t − i·θ) ] / (9 + 4)

y(t) = (5·M)/3 + (20/13) · [3·cos(2·t + θ) + 2·sin(2·t + θ)]

y(t) = (5/3)·M + (5/√13) · 4 · cos(2·t + θ − φ), with φ = 33.69°

Linear filtering changes the magnitude and phase of sinusoidal signals (DC too!):

X(t) = M + 4·cos(2·t + θ) → y(t) = (5/3)·M + (5/√13) · 4·cos(2·t + θ − 33.69°)
Expected value operator with linear systems
For a causal linear system we would have

y t    xt     h   d
0
and taking the expected value


E  y t   E   xt     h   d 
0


E  y t    E xt     h   d
0

E  y t     t     h   d
0
For x(t) WSS


0
0
E  y t      h   d     h   d
Notice the condition fop a physically realizable system!
The coherent gain of a filter is defined as:

h gain   ht   dt  H 0 
0
Therefore,
E Y t   E  X   hgain  E X   H 0 
Note that:
Hf 

 ht   exp i  2  f  t   dt


For a causal filter
H  f    ht   exp i  2  f  t   dt
0

At f=0
H 0    ht   dt
0
And
E  y t     H 0 
What about a cross-correlation?
(Converting an auto-correlation to cross-correlation)
For a linear system we would have
y t  

 xt     h   d

And performing a cross-correlation (assuming real R.V. and processing)



E xt1   y t 2   E  xt1    xt 2     h   d 






E xt1   y t 2   E   xt1   xt 2     h   d 


E xt1   y t 2  

 Ext   xt
1
2
    h   d


E xt1   y t 2  
 R t , t
XX
1
2
    h   d

For x(t) WSS

E xt   y t     R XY   
 R      h   d
XX

Ext   yt     R XY    R XX    h 
What about the other way … YX instead of XY?

And performing a cross-correlation (assuming real R.V.s and processing)

$$E[y(t_1)\, x(t_2)] = E\left[\int_{-\infty}^{\infty} x(t_1-\lambda)\, h(\lambda)\, d\lambda \;\; x(t_2)\right]$$

$$E[y(t_1)\, x(t_2)] = E\left[\int_{-\infty}^{\infty} x(t_1-\lambda)\, x(t_2)\, h(\lambda)\, d\lambda\right]$$

$$E[y(t_1)\, x(t_2)] = \int_{-\infty}^{\infty} E[x(t_1-\lambda)\, x(t_2)]\, h(\lambda)\, d\lambda$$

$$E[y(t_1)\, x(t_2)] = \int_{-\infty}^{\infty} R_{XX}(t_1-\lambda,\, t_2)\, h(\lambda)\, d\lambda$$

For x(t) WSS

$$E[y(t)\, x(t+\tau)] = R_{YX}(\tau) = \int_{-\infty}^{\infty} R_{XX}\big(t+\tau-(t-\lambda)\big)\, h(\lambda)\, d\lambda$$

E  y t   xt     RYX   
 R      h   d
XX

Perform a change of variable for lamba to “-kappa” (assuming h(t) is real, see text for complex0
E  y t   xt     RYX   

 R      h    d
XX

Therefore
E  y t   xt     RYX   

 R      h    d
XX

E yt   xt     RYX    R XX    h  
What about the auto-correlation of y(t)?

And performing an auto-correlation (assuming real R.V.s and processing)

$$E[y(t_1)\, y(t_2)] = R_{YY}(t_1,t_2) = E\left[\int_{-\infty}^{\infty} x(t_1-\lambda_1)\, h(\lambda_1)\, d\lambda_1 \int_{-\infty}^{\infty} x(t_2-\lambda_2)\, h(\lambda_2)\, d\lambda_2\right]$$

$$R_{YY}(t_1,t_2) = E\left[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x(t_1-\lambda_1)\, x(t_2-\lambda_2)\, h(\lambda_2)\, d\lambda_2\; h(\lambda_1)\, d\lambda_1\right]$$

$$R_{YY}(t_1,t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} E[x(t_1-\lambda_1)\, x(t_2-\lambda_2)]\, h(\lambda_2)\, d\lambda_2\; h(\lambda_1)\, d\lambda_1$$

$$R_{YY}(t_1,t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_{XX}(t_1-\lambda_1,\, t_2-\lambda_2)\, h(\lambda_2)\, d\lambda_2\; h(\lambda_1)\, d\lambda_1$$

For x(t) WSS

$$E[y(t)\, y(t+\tau)] = R_{YY}(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_{XX}(\tau-\lambda_2+\lambda_1)\, h(\lambda_1)\, h(\lambda_2)\, d\lambda_2\, d\lambda_1$$

$$E[y(t)\, y(t+\tau)] = R_{YY}(\tau) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} R_{XX}(\tau+\lambda_1-\lambda_2)\, h(\lambda_2)\, d\lambda_2\right] h(\lambda_1)\, d\lambda_1$$
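The double-integral expression for $R_{YY}$ can be checked in discrete time: with white input ($R_{XX}[k] = \sigma^2\delta[k]$) it collapses to $R_{YY}[k] = \sigma^2 \sum_n h[n]\,h[n+k]$, the deterministic autocorrelation of the impulse response. A sketch with an assumed filter:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500_000
x = rng.standard_normal(N)              # white input, sigma^2 = 1
h = np.array([1.0, 0.5, -0.25])         # assumed causal FIR filter
y = np.convolve(x, h)[:N]

for k in range(4):
    r_yy = np.mean(y[: N - k] * y[k:])  # empirical R_YY[k]
    pred = np.sum(h[: len(h) - k] * h[k:]) if k < len(h) else 0.0
    print(k, round(r_yy, 3), round(pred, 4))
```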
Example: White noise input to a causal filter

Let

$$R_{XX}(t) = \frac{N_0}{2}\, \delta(t)$$

$$E[Y^2(t)] = R_{YY}(0) = \int_0^{\infty} h(\lambda_1)\left[\int_0^{\infty} R_{XX}(\lambda_1-\lambda_2)\, h(\lambda_2)\, d\lambda_2\right] d\lambda_1$$

$$E[Y^2(t)] = R_{YY}(0) = \int_0^{\infty} h(\lambda_1)\left[\int_0^{\infty} \frac{N_0}{2}\, \delta(\lambda_1-\lambda_2)\, h(\lambda_2)\, d\lambda_2\right] d\lambda_1$$

$$E[Y^2(t)] = R_{YY}(0) = \frac{N_0}{2}\int_0^{\infty} h(\lambda_1)\, h(\lambda_1)\, d\lambda_1$$

$$E[Y^2(t)] = R_{YY}(0) = \frac{N_0}{2}\int_0^{\infty} h^2(\lambda_1)\, d\lambda_1$$

For a white noise process, the mean square (or 2nd moment) of the output is proportional to the energy of the filter impulse response.
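A discrete-time check of the white-noise result: for white input samples of variance $N_0/2$ (a loose discrete analog), $E[Y^2] = (N_0/2)\sum_k h^2[k]$. The filter and noise level below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N0_over_2 = 4.0                          # assumed white-noise level
x = np.sqrt(N0_over_2) * rng.standard_normal(1_000_000)

h = 0.8 ** np.arange(30)                 # assumed causal filter
y = np.convolve(x, h, mode="valid")

predicted = N0_over_2 * np.sum(h**2)     # (N0/2) * energy of h
print(np.mean(y**2), predicted)          # the two values agree closely
```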
The power spectral density output of linear systems

The first cross-spectral density:

$$R_{XY}(\tau) = R_{XX}(\tau) * h(\tau)$$

$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\, \exp(-i\omega\tau)\, d\tau = \int_{-\infty}^{\infty} \big[R_{XX}(\tau) * h(\tau)\big]\, \exp(-i\omega\tau)\, d\tau$$

Using the convolution identities of the Fourier transform (if you want the proof, it isn't bad, just tedious):

$$S_{XY}(\omega) = S_{XX}(\omega)\cdot H(\omega)$$

The second cross-spectral density:

$$R_{YX}(\tau) = R_{XX}(\tau) * h(-\tau)$$

$$S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\, \exp(-i\omega\tau)\, d\tau$$
S YX w 

 R    h  
*
XX
 exp iw   d

Using convolution identities of the Fourier Transform
(if you want the proof it isn’t bad, just tedious)
*
S YX w  S XX w  H w 
The output power spectral density becomes
RYY    R XX    h   h  
S YY w 

 R    exp iw   d
YY

S YY w 

 R    h   h   exp iw   d
XX

Using convolution identities of the Fourier Transform
*
S YY w   S XX w   H w   H w
S YY w   S XX w   H w 
2
This is a very significant result that provides a similar advantage for the power spectral density
computation as the Fourier transform does for the convolution.
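The relation $S_{YY}(\omega) = S_{XX}(\omega)\,|H(\omega)|^2$ can be verified with an averaged-periodogram (Welch-style) PSD estimate; the FIR filter below is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(3)
N, nfft = 1_000_000, 128
x = rng.standard_normal(N)                 # white input: S_XX = 1 at all frequencies
h = np.array([1.0, -0.5, 0.25])            # assumed short FIR filter
y = np.convolve(x, h)[:N]

# averaged-periodogram (Welch-style, no overlap, no window) estimate of S_YY
segs = y[: (N // nfft) * nfft].reshape(-1, nfft)
S_yy = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / nfft

H = np.fft.rfft(h, n=nfft)                 # H(w) on the same frequency grid
rel_err = np.max(np.abs(S_yy - np.abs(H) ** 2) / np.abs(H) ** 2)
print(rel_err)                             # small: estimate tracks S_XX * |H|^2
```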