ECE 3800 Probabilistic Methods of Signal and System Analysis, Spring 2015

Course Topics:
1. Probability
2. Random variables
3. Multiple random variables
4. Elements of Statistics
5. Random processes
6. Correlation Functions
7. Spectral Density
8. Response of Linear Systems
Course Objectives:
This course seeks to develop a mathematical understanding of basic statistical tools and
processes as applied to inferential statistics and statistical signal processing (ABET
outcomes are listed after each course objective).
1. The student will learn how to convert a problem description into a precise mathematical
probabilistic statement (a)
2. The student will use the general properties of random variables to solve a probabilistic
problem (a, e)
3. The student will be able to use a set of standard probability distribution functions suitable
for engineering applications (a)
4. The student will be able to calculate standard statistics from probability mass, distribution
and density functions (a)
5. The student will learn how to calculate confidence intervals for a population mean (a)
6. The student will be able to recognize and interpret a variety of deterministic and
nondeterministic random processes that occur in engineering (a, b, e)
7. The student will learn how to calculate the autocorrelation and spectral density of an
arbitrary random process (a)
8. The student will understand stochastic phenomena such as white, pink and black noise (a)
9. The student will learn how to relate the correlation of and between inputs and outputs
based on the autocorrelation and spectral density (a)
10. The student will understand the mathematical characteristics of standard frequency
isolation filters (a, e)
11. The student will be exposed to the signal-to-noise optimization principle as applied to
filter design (a, e, k)
12. The student will be exposed to Wiener filters and matched filters (a, c, e)
Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System
Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford Press, 1999. ISBN: 0-19-512354-9.
B.J. Bazuin, Spring 2015
1 of 34
ECE 3800
Exam
Tuesday, 10:15 a.m.-12:15 p.m.
Those with permission to take it early: Monday, 12:30- , Room B-211.
Expect approximately 6-8 multipart problems.
- Problems will be similar or even identical to those from previous exams and homework.
- I will try to ensure that an element from every chapter is tested.

The exam is open paper notes. No electronic devices except for a calculator (and I intentionally make calculator use minimal, possibly even useless, or even providing incorrect or incomplete results).

Be familiar with or bring the appropriate math tables as needed (Appendix A, p. 419-424, is a good set).
- Appendix A: Trig. Identities, Indefinite and Definite Integrals, Fourier Transforms, Laplace Transforms
- Appendix D: Normal Probability Distribution Function
- Appendix E: The Q-Function
- Appendix F: Student's t Distribution Function

Minimal calculator work is required; using rational fractions may be easier.
Additional Materials for studying
All homework solution sets have been posted and returned.
Skills are available on the web site.
Table of Contents
1. Introduction to Probability
1.1. Engineering Applications of Probability
1.2. Random Experiments and Events
1.3. Definitions of Probability
Experiment
Possible Outcomes
Trials
Event
Equally Likely Events/Outcomes
Objects
Attribute
Sample Space
With Replacement and Without Replacement
1.4. The Relative-Frequency Approach

r(A) = N_A / N

Pr(A) = lim_{N→∞} r(A)

where Pr(A) is defined as the probability of event A.
1. 0 ≤ Pr(A) ≤ 1
2. Pr(A) + Pr(B) + Pr(C) + ... = 1, for mutually exclusive events
3. An impossible event, A, can be represented as Pr(A) = 0.
4. A certain event, A, can be represented as Pr(A) = 1.
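The limiting relative-frequency definition can be illustrated with a short simulation; a minimal sketch, where the die experiment and the "even roll" event are hypothetical choices, not from the text:

```python
import random

def relative_frequency(experiment, event, n):
    """Estimate Pr(A) as r(A) = N_A / N over n repetitions of an experiment."""
    hits = sum(1 for _ in range(n) if event(experiment()))
    return hits / n

random.seed(1)
# Hypothetical experiment: roll a fair die; event A = "the roll is even".
roll = lambda: random.randint(1, 6)
is_even = lambda outcome: outcome % 2 == 0

estimate = relative_frequency(roll, is_even, 100_000)
# As N grows, r(A) approaches Pr(A) = 1/2.
```

Re-running with larger n shows the estimate settling toward the true probability, which is the content of the limit above.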
1.5. Elementary Set Theory
Set
Subset
Space
Null Set or Empty Set
Venn Diagram
Equality
Sum or Union
Products or Intersection
Mutually Exclusive or Disjoint Sets
Complement
Differences
Proofs of Set Algebra
1.6. The Axiomatic Approach
1.7. Conditional Probability

Pr(A ∩ B) = Pr(A|B) · Pr(B), for Pr(B) ≠ 0

Pr(A|B) = Pr(A ∩ B) / Pr(B), for Pr(B) ≠ 0

Joint Probability
Pr(A, B) is the joint probability of A and B; Pr(A|B) is the probability of A when A follows B.

Pr(A, B) = Pr(B, A) = Pr(A|B) · Pr(B) = Pr(B|A) · Pr(A)

Marginal Probabilities:
Total Probability

Pr(B) = Pr(B|A1)·Pr(A1) + Pr(B|A2)·Pr(A2) + ... + Pr(B|An)·Pr(An)

Bayes Theorem

Pr(Ai|B) = Pr(B|Ai)·Pr(Ai) / [Pr(B|A1)·Pr(A1) + Pr(B|A2)·Pr(A2) + ... + Pr(B|An)·Pr(An)]
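Total probability and Bayes' theorem translate directly into code; a minimal sketch with hypothetical machine/defect numbers (the three-machine scenario is an illustration, not from the text):

```python
# Total probability and Bayes' theorem over a partition {A_i}.
def total_probability(prior, likelihood):
    """Pr(B) = sum_i Pr(B|A_i) * Pr(A_i)."""
    return sum(likelihood[i] * prior[i] for i in range(len(prior)))

def bayes(prior, likelihood, i):
    """Pr(A_i|B) = Pr(B|A_i) * Pr(A_i) / Pr(B)."""
    return likelihood[i] * prior[i] / total_probability(prior, likelihood)

# Hypothetical example: three machines produce 50%, 30%, 20% of all parts,
# with defect rates 1%, 2%, 3% respectively.
prior = [0.5, 0.3, 0.2]
defect_rate = [0.01, 0.02, 0.03]

p_defect = total_probability(prior, defect_rate)   # Pr(defective part)
p_m1_given_defect = bayes(prior, defect_rate, 0)   # Pr(machine 1 | defective)
```

Here Pr(defective) = 0.017, and observing a defect lowers the posterior probability of machine 1 below its 0.5 prior, as expected from its low defect rate.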
1.8. Independence

Pr(A, B) = Pr(B, A) = Pr(A) · Pr(B)
1.9. Combined Experiments
1.10. Bernoulli Trials

Pr(A occurring k times in n trials) = p_n(k) = C(n, k) · p^k · q^(n-k),  where q = 1 - p
1.11. Applications of Bernoulli Trials
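The Bernoulli-trials pmf can be evaluated with the standard-library binomial coefficient; a minimal sketch (the coin-flip numbers are a hypothetical illustration):

```python
from math import comb

def bernoulli_pmf(k, n, p):
    """Pr(A occurs exactly k times in n independent trials), with q = 1 - p."""
    q = 1.0 - p
    return comb(n, k) * p**k * q**(n - k)

# Exactly 2 heads in 4 flips of a fair coin: C(4,2)*(1/2)^4 = 6/16 = 0.375.
p2 = bernoulli_pmf(2, 4, 0.5)

# Sanity check: the pmf sums to 1 over k = 0..n.
total = sum(bernoulli_pmf(k, 10, 0.3) for k in range(11))
```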
2. Random Variables
2.1. Concept of a Random Variable
2.2. Distribution Functions
Probability Distribution Function (PDF)
- 0 ≤ F_X(x) ≤ 1, for -∞ < x < ∞
- F_X(-∞) = 0 and F_X(∞) = 1
- F_X is non-decreasing as x increases
- Pr(x1 < X ≤ x2) = F_X(x2) - F_X(x1)
For discrete events
For continuous events
2.3. Density Functions
Probability Density Function (pdf)
- f_X(x) ≥ 0, for -∞ < x < ∞
- ∫_{-∞}^{∞} f_X(x) dx = 1
- F_X(x) = ∫_{-∞}^{x} f_X(u) du
- Pr(x1 < X ≤ x2) = ∫_{x1}^{x2} f_X(x) dx
Probability Mass Function (pmf)
- f_X(x) ≥ 0, for -∞ < x < ∞
- Σ_{x=-∞}^{∞} f_X(x) = 1
- F_X(x) = Σ_{u=-∞}^{x} f_X(u)
- Pr(x1 ≤ X ≤ x2) = Σ_{x=x1}^{x2} f_X(x)
Functions of random variables

Y = fn(X) and X = fn^{-1}(Y)

f_Y(y) = f_X(fn^{-1}(y)) · |dx/dy|
2.4. Mean Values and Moments
1st, general, nth Moments

X̄ = E[X] = ∫_{-∞}^{∞} x · f_X(x) dx   or   X̄ = E[X] = Σ_{x=-∞}^{∞} x · Pr(X = x)

E[g(X)] = ∫_{-∞}^{∞} g(x) · f_X(x) dx   or   E[g(X)] = Σ_{x=-∞}^{∞} g(x) · Pr(X = x)

E[X^n] = ∫_{-∞}^{∞} x^n · f_X(x) dx   or   E[X^n] = Σ_{x=-∞}^{∞} x^n · Pr(X = x)

Central Moments

E[(X - X̄)^n] = ∫_{-∞}^{∞} (x - X̄)^n · f_X(x) dx   or   E[(X - X̄)^n] = Σ_{x=-∞}^{∞} (x - X̄)^n · Pr(X = x)

Variance and Standard Deviation

σ² = E[(X - X̄)²] = ∫_{-∞}^{∞} (x - X̄)² · f_X(x) dx   or   σ² = E[(X - X̄)²] = Σ_{x=-∞}^{∞} (x - X̄)² · Pr(X = x)

σ² = E[X²] - X̄²
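The moment definitions for a discrete random variable reduce to weighted sums over the pmf; a minimal sketch (the pmf values are a hypothetical example):

```python
# Moments of a discrete random variable from its pmf:
# X̄ = E[X] = Σ x·Pr(X=x), E[X²] = Σ x²·Pr(X=x), σ² = E[X²] - X̄².
xs = [0, 1, 2, 3]
pmf = [0.1, 0.4, 0.3, 0.2]   # hypothetical pmf; the weights must sum to 1

mean = sum(x * p for x, p in zip(xs, pmf))
second_moment = sum(x**2 * p for x, p in zip(xs, pmf))
variance = second_moment - mean**2
```

The same pattern handles any E[g(X)] by replacing x or x² with g(x).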
2.5. The Gaussian Random Variable

f_X(x) = [1 / (√(2π)·σ)] · exp( -(x - X̄)² / (2σ²) ),  for -∞ < x < ∞

where X̄ is the mean and σ² is the variance (σ is the standard deviation).

F_X(x) = [1 / (√(2π)·σ)] · ∫_{v=-∞}^{x} exp( -(v - X̄)² / (2σ²) ) dv

Unit Normal (Appendix D)

Φ(x) = [1 / √(2π)] · ∫_{u=-∞}^{x} exp( -u²/2 ) du

Φ(-x) = 1 - Φ(x)

F_X(x) = Φ( (x - X̄)/σ )   or   F_X(x) = 1 - Φ( (X̄ - x)/σ )

The Q-function is the complement of the normal distribution function, Φ: (Appendix E)

Q(x) = [1 / √(2π)] · ∫_{u=x}^{∞} exp( -u²/2 ) du = 1 - Φ(x)
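In place of the Appendix D/E tables, Φ and Q can be evaluated from the error function in the standard library; a minimal sketch (the mean and sigma in the last line are hypothetical values):

```python
from math import erf, sqrt

def phi(x):
    """Unit-normal distribution function Φ(x) (what Appendix D tabulates)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def q_function(x):
    """Q(x) = 1 - Φ(x), the complement of the normal distribution function."""
    return 1.0 - phi(x)

def gaussian_cdf(x, mean, sigma):
    """F_X(x) = Φ((x - X̄)/σ) for X Gaussian with mean X̄ and std dev σ."""
    return phi((x - mean) / sigma)

p = gaussian_cdf(6.0, 5.0, 2.0)   # Pr(X <= 6) for X̄ = 5, σ = 2, i.e. Φ(0.5)
```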
2.6. Density Functions Related to Gaussian
Rayleigh
Maxwell
2.7. Other Probability Density Functions
Exponential Distribution

f_T(τ) = (1/M) · exp(-τ/M),  for 0 ≤ τ
f_T(τ) = 0,  for τ < 0

F_T(τ) = 1 - exp(-τ/M),  for 0 ≤ τ
F_T(τ) = 0,  for τ < 0

T̄ = E[T] = M

E[T²] = 2M²

E[(T - T̄)²] = σ_T² = E[T²] - T̄² = 2M² - M² = M²
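The exponential moments above can be checked by direct numerical integration; a minimal sketch (M = 2 is a hypothetical parameter, and the Riemann sum with a truncated upper limit is an approximation):

```python
from math import exp

M = 2.0                                   # hypothetical mean parameter
f = lambda t: (1.0 / M) * exp(-t / M)     # exponential pdf for t >= 0

def moment(n, upper=40.0 * M, steps=100_000):
    """Riemann-sum approximation of E[T^n] = ∫ t^n f(t) dt over [0, upper]."""
    h = upper / steps
    return sum((i * h) ** n * f(i * h) * h for i in range(steps))

m1 = moment(1)        # approaches E[T] = M
m2 = moment(2)        # approaches E[T^2] = 2 M^2
var = m2 - m1 ** 2    # approaches M^2
```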
Binomial Distribution

f_B(x) = Σ_{k=0}^{n} C(n, k) · p^k · (1-p)^(n-k) · δ(x - k)

F_B(x) = Σ_{k=0}^{n} C(n, k) · p^k · (1-p)^(n-k) · u(x - k)
2.8. Conditional Probability Distribution and Density Functions

Pr(A ∩ B) = Pr(A|B) · Pr(B), for Pr(B) ≠ 0

Pr(A|B) = Pr(A ∩ B) / Pr(B) = Pr(A, B) / Pr(B), for Pr(B) ≠ 0

It can be shown that F(x|M) is a valid probability distribution function with all the
expected characteristics:
- 0 ≤ F(x|M) ≤ 1, for -∞ < x < ∞
- F(-∞|M) = 0 and F(∞|M) = 1
- F(x|M) is non-decreasing as x increases
- Pr(x1 < X ≤ x2 | M) = F(x2|M) - F(x1|M)
2.9. Examples and Applications
3. Several Random Variables
3.1. Two Random Variables
Joint Probability Distribution Function (PDF)

F(x, y) = Pr(X ≤ x, Y ≤ y)

- 0 ≤ F(x, y) ≤ 1, for -∞ < x < ∞ and -∞ < y < ∞
- F(-∞, y) = F(x, -∞) = F(-∞, -∞) = 0
- F(∞, ∞) = 1
- F(x, y) is non-decreasing as either x or y increases
- F(x, ∞) = F_X(x) and F(∞, y) = F_Y(y)

Joint Probability Density Function (pdf)

f(x, y) = ∂²F(x, y) / ∂x∂y

- f(x, y) ≥ 0, for -∞ < x < ∞ and -∞ < y < ∞
- ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) dx dy = 1
- F(x, y) = ∫_{-∞}^{y} ∫_{-∞}^{x} f(u, v) du dv
- f_X(x) = ∫_{-∞}^{∞} f(x, y) dy  and  f_Y(y) = ∫_{-∞}^{∞} f(x, y) dx
- Pr(x1 < X ≤ x2, y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} f(x, y) dx dy

Expected Values

E[g(X, Y)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} g(x, y) · f(x, y) dx dy

Correlation

E[X·Y] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x·y · f(x, y) dx dy
3.2. Conditional Probability--Revisited

F_X(x|M) = Pr(X ≤ x, M) / Pr(M)

F_X(x | Y ≤ y) = F(x, y) / F_Y(y)

F_X(x | y1 < Y ≤ y2) = [F(x, y2) - F(x, y1)] / [F_Y(y2) - F_Y(y1)]

f(x | Y = y) = f(x, y) / f_Y(y)

f(y | X = x) = f(x, y) / f_X(x)

f(x, y) = f(x | Y = y) · f_Y(y) = f(y | X = x) · f_X(x)

f(x | Y = y) = f(y | X = x) · f_X(x) / f_Y(y)

f(y | X = x) = f(x | Y = y) · f_Y(y) / f_X(x)

3.3. Statistical Independence

f(x, y) = f_X(x) · f_Y(y)

E[X·Y] = E[X] · E[Y] = X̄ · Ȳ
3.4. Correlation between Random Variables

E[X·Y] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x·y · f(x, y) dx dy

covariance

C_XY = E[(X - E[X]) · (Y - E[Y])] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x - X̄)·(y - Ȳ) · f(x, y) dx dy

correlation coefficient or normalized covariance,

ρ = E[ ((X - X̄)/σ_X) · ((Y - Ȳ)/σ_Y) ] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} ((x - X̄)/σ_X) · ((y - Ȳ)/σ_Y) · f(x, y) dx dy

ρ = (E[X·Y] - X̄·Ȳ) / (σ_X·σ_Y)
3.5. Density Function of the Sum of Two Random Variables

Z = X + Y

F_Z(z) = ∫_{-∞}^{∞} f_Y(y) · [ ∫_{-∞}^{z-y} f_X(x) dx ] dy

f_Z(z) = ∫_{-∞}^{∞} f_X(x) · f_Y(z - x) dx = ∫_{-∞}^{∞} f_Y(y) · f_X(z - y) dy
3.6. Probability Density Function of a Function of Two Random Variables
Define the function as

Z = φ1(X, Y) and W = φ2(X, Y)

and the inverse as

X = ψ1(Z, W) and Y = ψ2(Z, W)

The original pdf is f(x, y), with the derived pdf in the transform space of g(z, w).
Then it can be proven that:

Pr(z1 < Z ≤ z2, w1 < W ≤ w2) = Pr(x1 < X ≤ x2, y1 < Y ≤ y2)

or equivalently

∫_{w1}^{w2} ∫_{z1}^{z2} g(z, w) dz dw = ∫_{y1}^{y2} ∫_{x1}^{x2} f(x, y) dx dy

Using an advanced calculus theorem to perform a transformation of coordinates:

g(z, w) = f(ψ1(z, w), ψ2(z, w)) · |J|,  where J = det[ ∂x/∂z  ∂x/∂w ; ∂y/∂z  ∂y/∂w ]

∫_{w1}^{w2} ∫_{z1}^{z2} g(z, w) dz dw = ∫_{w1}^{w2} ∫_{z1}^{z2} f(ψ1(z, w), ψ2(z, w)) · |J| dz dw

If only one output variable is desired (letting W = X), integrate over all w to obtain the density of Z:

g(z) = ∫_{-∞}^{∞} g(z, w) dw = ∫_{-∞}^{∞} f(ψ1(z, w), ψ2(z, w)) · |J| dw
3.7. The Characteristic Function (Not covered in class or homework)

φ(u) = E[exp(j·u·X)] = ∫_{-∞}^{∞} f(x) · exp(j·u·x) dx

The inverse of the characteristic function is then defined as:

f(x) = (1/2π) · ∫_{-∞}^{∞} φ(u) · exp(-j·u·x) du

Computing other moments is performed similarly, where:

d^n φ(u)/du^n = ∫_{-∞}^{∞} f(x) · (j·x)^n · exp(j·u·x) dx

d^n φ(u)/du^n |_{u=0} = ∫_{-∞}^{∞} (j·x)^n · f(x) dx = j^n · ∫_{-∞}^{∞} x^n · f(x) dx = j^n · E[X^n]
4. Elements of Statistics
4.1. Introduction
4.2. Sampling Theory--The Sample Mean
Sample Mean

X̂ = (1/n) · Σ_{i=1}^{n} X_i,  where the X_i are random variables with a common pdf.

Variance of the sample mean

Var(X̂) = (1/n) · (E[X²] - X̄²) = σ² / n

For sampling without replacement from a finite population of size N:

Var(X̂) = (σ²/n) · (N - n)/(N - 1)
4.3. Sampling Theory--The Sample Variance

S² = (1/n) · Σ_{i=1}^{n} (X_i - X̂)²

E[S²] = ((n - 1)/n) · σ²

For a finite population of size N:

E[S²] = (N/(N - 1)) · ((n - 1)/n) · σ²

To make the estimate unbiased, scale by n/(n - 1):

S̃² = (n/(n - 1)) · S² = (1/(n - 1)) · Σ_{i=1}^{n} (X_i - X̂)²

E[S̃²] = (n/(n - 1)) · E[S²] = σ²

When the population is not large, the biased and unbiased estimates become

E[S̃²] = (n/(n - 1)) · E[S²] = (N/(N - 1)) · σ²
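The biased/unbiased distinction is exactly numpy's `ddof` argument; a minimal sketch (the simulated sample is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=400)     # hypothetical sample

n = len(data)
xbar = data.mean()                                  # sample mean X̂
s2_biased = np.sum((data - xbar) ** 2) / n          # S², with E[S²] = ((n-1)/n)·σ²
s2_unbiased = np.sum((data - xbar) ** 2) / (n - 1)  # S̃², with E[S̃²] = σ²
# numpy's ddof argument selects the divisor: ddof=0 gives n, ddof=1 gives n-1.
```

The unbiased estimate is always the biased one scaled by n/(n - 1), matching the derivation above.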
4.4. Sampling Distributions and Confidence Intervals
Gaussian

Z = (X̂ - X̄) / (σ/√n)

Student's t distribution

T = (X̂ - X̄) / (S̃/√n) = (X̂ - X̄) / (S/√(n - 1))

where the t density involves Γ(·), the gamma function, with Γ(k + 1) = k·Γ(k) for any k, and Γ(k + 1) = k! for k an integer.

Confidence Intervals based on the Gaussian and Student's t distributions
Gaussian (σ known):

X̂ - k·σ/√n ≤ X̄ ≤ X̂ + k·σ/√n

Student's t distribution (σ estimated by S̃):

X̂ - t·S̃/√n ≤ X̄ ≤ X̂ + t·S̃/√n
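A Gaussian confidence interval with known σ follows directly from the interval formula; a minimal sketch using the common 95% critical value k = 1.96 (the simulated data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
data = rng.normal(loc=10.0, scale=sigma, size=100)  # hypothetical sample

n = len(data)
xbar = data.mean()

k = 1.96                             # two-sided 95% point of the unit normal
lo = xbar - k * sigma / np.sqrt(n)
hi = xbar + k * sigma / np.sqrt(n)
width = hi - lo                      # 2·1.96·σ/√n = 0.784 for these values
```

With σ unknown, the same construction uses S̃ and a critical value from the Student's t table (Appendix F) instead of k.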
4.5. Hypothesis Testing
The null hypothesis
Accept H0 if the computed value "passes" the significance test.
Reject H0 if the computed value "fails" the significance test.
One-tail or two-tail testing
Using confidence interval value computations
4.6. Curve Fitting and Linear Regression
For y = a + b·x:

a = (1/n) · ( Σ_{i=1}^{n} y_i - b · Σ_{i=1}^{n} x_i )

and

b = [ n·Σ_{i=1}^{n} x_i·y_i - (Σ_{i=1}^{n} x_i)·(Σ_{i=1}^{n} y_i) ] / [ n·Σ_{i=1}^{n} x_i² - (Σ_{i=1}^{n} x_i)² ]

Or, using 2nd moment, correlation, and covariance values:

a = (R_XX·Ŷ - R_XY·X̂) / (R_XX - X̂²) = (R_XX·Ŷ - R_XY·X̂) / C_XX

b = (R_XY - X̂·Ŷ) / (R_XX - X̂²) = C_XY / C_XX
4.7. Correlation Between Two Sets of Data

R_XX = E[X²] ≈ (1/n) · Σ_{i=1}^{n} x_i²

C_XX = σ_X² = R_XX - X̄²,  with X̄ = E[X] ≈ (1/n) · Σ_{i=1}^{n} x_i

R_XY = E[X·Y] ≈ (1/n) · Σ_{i=1}^{n} x_i·y_i

C_XY = E[(X - X̄)·(Y - Ȳ)] ≈ (1/n) · Σ_{i=1}^{n} x_i·y_i - X̄·Ȳ

r = ρ_XY = E[ ((X - X̄)/σ_X) · ((Y - Ȳ)/σ_Y) ] = [ (1/n)·Σ_{i=1}^{n} x_i·y_i - X̄·Ȳ ] / (σ_X·σ_Y) = C_XY / (σ_X·σ_Y)
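The moment-form regression coefficients of Section 4.6 and the correlation coefficient of Section 4.7 can be cross-checked against numpy's built-ins; a minimal sketch with hypothetical data near y = 1 + 2x:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # hypothetical data, roughly y = 1 + 2x

# Sample moments: C_XX = R_XX - X̂², C_XY = R_XY - X̂·Ŷ, C_YY likewise.
xbar, ybar = x.mean(), y.mean()
c_xx = np.mean(x * x) - xbar ** 2
c_yy = np.mean(y * y) - ybar ** 2
c_xy = np.mean(x * y) - xbar * ybar

# Regression line y = a + b·x in the Section 4.6 moment form.
b = c_xy / c_xx
a = ybar - b * xbar

# Correlation coefficient r = C_XY / (σ_X·σ_Y); np.corrcoef computes the same.
r = c_xy / np.sqrt(c_xx * c_yy)
r_np = np.corrcoef(x, y)[0, 1]
```

`np.polyfit(x, y, 1)` returns the same slope and intercept, since both are least-squares solutions.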
5. Random Processes
5.1. Introduction
A random process is a collection of time functions and an associated probability
description.
The entire collection of possible time functions is an ensemble, designated as {x(t)},
where one particular member of the ensemble, designated as x(t), is a sample function of
the ensemble. In general, only one sample function of a random process can be observed!
5.2. Continuous and Discrete Random Processes
5.3. Deterministic and Nondeterministic Random Processes
A nondeterministic random process is one where future values of the ensemble cannot be
predicted from previously observed values.
A deterministic random process is one where one or more observed samples allow all
future values of the sample function to be predicted (or pre-determined).
5.4. Stationary and Nonstationary Random Processes
If all marginal and joint density functions of a process do not depend upon the choice of
the time origin, the process is said to be stationary.
Wide-Sense Stationary: the mean value of any random variable is independent of the
choice of time, t, and that the correlation of two random variables depends only upon the
time difference between them.
5.5. Ergodic and Nonergodic Random Processes
The probability generated means and moments are equivalent to the time averaged means
and moments.
A Process for Determining Stationarity and Ergodicity
a) Find the mean and the 2nd moment based on the probability
b) Find the time sample mean and time sample 2nd moment based on time
averaging.
c) If the means or 2nd moments are functions of time, the process is non-stationary.
d) If the time-averaged mean and moments are not equal to the probabilistic mean
and moments, or if the process is not stationary, then it is nonergodic.
5.6. Measurement of Process Parameters
The process of taking discrete time measurements of a continuous process in order to
compute the desired statistics and probabilities.
5.7. Smoothing Data with a Moving Window Average
An example of using a "FIR filter" to smooth high-frequency noise from a slowly time-varying signal of interest.
6. Correlation Functions
6.1 Introduction
The Autocorrelation Function
For a sample function defined by samples in time of a random process, how alike are the
different samples?
Define: X1 = X(t1) and X2 = X(t2)
The autocorrelation is defined as:

R_XX(t1, t2) = E[X1·X2] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x1·x2 · f(x1, x2) dx1 dx2

The above function is valid for all processes, stationary and non-stationary.
For WSS processes:

R_XX(t1, t2) = E[X(t)·X(t + τ)] = R_XX(τ)

If the process is ergodic, the time average is equivalent to the probabilistic expectation, or

ℛ_XX(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)·x(t + τ) dt = ⟨x(t)·x(t + τ)⟩

and

ℛ_XX(τ) = R_XX(τ)

As a note, for things you have been computing, the "zeroth lag" of the autocorrelation is

R_XX(t1, t1) = R_XX(0) = E[X1·X1] = E[X1²] = ∫_{-∞}^{∞} x1² · f(x1) dx1 = σ_X² + X̄²

ℛ_XX(0) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)² dt = ⟨x(t)²⟩
6.2 Example: Autocorrelation Function of a Binary Process
6.3. Properties of Autocorrelation Functions

1) R_XX(0) = E[X²], or ℛ_XX(0) = ⟨x(t)²⟩.
The mean-squared value of the random process can be obtained by observing the zeroth lag
of the autocorrelation function.

2) R_XX(τ) = R_XX(-τ)
The autocorrelation function is an even function in time. Only positive (or negative) lags need
to be computed for an ergodic, WSS random process.

3) |R_XX(τ)| ≤ R_XX(0)
The autocorrelation function is a maximum at the zeroth lag. For periodic functions, other lags may
equal the zeroth lag, but never exceed it.

4) If X has a DC component, then R_XX has a constant term.
For X(t) = X̄ + N(t), with N(t) zero mean:

R_XX(τ) = X̄² + R_NN(τ)

Note that the mean value can be computed from the autocorrelation function's constant term!

5) If X has a periodic component, then R_XX will also have a periodic component of the same
period.
Think of: X(t) = A·cos(ω·t + θ), 0 ≤ θ ≤ 2π, where A and ω are known constants and θ is a uniform random variable. Then

R_XX(τ) = E[X(t)·X(t + τ)] = (A²/2)·cos(ω·τ)

5b) For signals that are the sum of independent random variables, the autocorrelation is the
sum of the individual autocorrelation functions (plus mean cross-terms when the means are non-zero).
For W(t) = X(t) + Y(t):

R_WW(τ) = R_XX(τ) + R_YY(τ) + 2·X̄·Ȳ

For non-zero mean functions (let w, x, y be the zero-mean parts of W, X, Y):

R_WW(τ) = R_ww(τ) + W̄² = R_xx(τ) + X̄² + R_yy(τ) + Ȳ² + 2·X̄·Ȳ

R_WW(τ) = R_ww(τ) + W̄² = R_xx(τ) + R_yy(τ) + (X̄ + Ȳ)²

Then we have

W̄² = (X̄ + Ȳ)²  and  R_ww(τ) = R_xx(τ) + R_yy(τ)

6) If X is ergodic and zero mean and has no periodic component, then we expect

lim_{τ→∞} R_XX(τ) = 0

7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying shapes
permissible is in terms of the Fourier transform of the autocorrelation function. That is, if

F[R_XX(τ)] = ∫_{-∞}^{∞} R_XX(τ) · exp(-jωτ) dτ

then the restriction states that

F[R_XX(τ)] ≥ 0, for all ω

Additional concept: for X(t) = a + N(t),

R_XX(τ) = a² + E[N(t)·N(t + τ)] = a² + R_NN(τ)
6.4. Measurement of Autocorrelation Functions
We love to use time averages for everything. For wide-sense stationary, ergodic random
processes, time averages are equivalent to statistical or probability-based values.

ℛ_XX(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)·x(t + τ) dt = ⟨x(t)·x(t + τ)⟩

Using this fact, how can we use short-term time averages to generate auto- or cross-correlation
functions?
An estimate of the autocorrelation is defined as:

R̂_XX(τ) = (1/(T - τ)) · ∫_{0}^{T-τ} x(t)·x(t + τ) dt

Note that the time average is performed across as much of the signal as is available after the
time shift by tau.
For tau based on the available time step, k·Δt, with N·Δt equating to the available time interval, we
have:

R̂_XX(k·Δt) = (1/((N + 1)·Δt - k·Δt)) · Σ_{i=0}^{N-k} x(i·Δt)·x(i·Δt + k·Δt) · Δt

R̂_XX(k·Δt) = R̂_XX(k) = (1/(N + 1 - k)) · Σ_{i=0}^{N-k} x(i)·x(i + k)

In computing this autocorrelation, the initial weighting term approaches 1 when k = N. At this
point the entire summation consists of one point and is therefore a poor estimate of the
autocorrelation. For useful results, k << N!
As noted, the validity of each of the summed autocorrelation lags can and should be brought into
question as k approaches N. As a result, a biased estimate of the autocorrelation is commonly
used. The biased estimate is defined as:

R̃_XX(k) = (1/(N + 1)) · Σ_{i=0}^{N-k} x(i)·x(i + k)

Here, a constant weight is used instead of one based on the number of elements summed. This
estimate has the property that the estimated autocorrelation decreases as k approaches N.
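The two discrete estimators differ only in the divisor; a minimal sketch (the white-noise input is a hypothetical test signal):

```python
import numpy as np

def autocorr_unbiased(x, k):
    """R̂_XX(k) = (1/(N+1-k)) · Σ_{i=0}^{N-k} x[i]·x[i+k], with N = len(x) - 1."""
    N = len(x) - 1
    return np.sum(x[: N - k + 1] * x[k:]) / (N + 1 - k)

def autocorr_biased(x, k):
    """R̃_XX(k): the same sum with the constant weight 1/(N+1)."""
    N = len(x) - 1
    return np.sum(x[: N - k + 1] * x[k:]) / (N + 1)

rng = np.random.default_rng(7)
x = rng.standard_normal(10_000)   # hypothetical input: white, zero mean, unit variance

r0 = autocorr_unbiased(x, 0)      # ≈ E[X²] = 1 for this input
r5 = autocorr_unbiased(x, 5)      # ≈ 0 at nonzero lags for white noise
```

For k << N the two estimates are nearly identical; the biased form simply tapers the large-lag values where few products are summed.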
6.5. Examples of Autocorrelation Functions
6.6. Crosscorrelation Functions
For two sample functions defined by samples in time of two random processes, how alike are
the different samples?
Define: X1 = X(t1) and Y2 = Y(t2)
The cross-correlation is defined as:

R_XY(t1, t2) = E[X1·Y2] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x1·y2 · f(x1, y2) dx1 dy2

R_YX(t1, t2) = E[Y1·X2] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} y1·x2 · f(y1, x2) dy1 dx2

The above functions are valid for all processes, jointly stationary and non-stationary.
For jointly WSS processes:

R_XY(t1, t2) = E[X(t)·Y(t + τ)] = R_XY(τ)

R_YX(t1, t2) = E[Y(t)·X(t + τ)] = R_YX(τ)

Note: the order of the subscripts is important for cross-correlation!
If the processes are jointly ergodic, the time average is equivalent to the probabilistic
expectation, or

ℛ_XY(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)·y(t + τ) dt = ⟨x(t)·y(t + τ)⟩

ℛ_YX(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} y(t)·x(t + τ) dt = ⟨y(t)·x(t + τ)⟩

and

ℛ_XY(τ) = R_XY(τ)

ℛ_YX(τ) = R_YX(τ)
6.7. Properties of Crosscorrelation Functions

1) The properties of the zeroth lag have no particular significance and do not represent
mean-square values. It is true that the "ordered" crosscorrelations must be equal at 0:

R_XY(0) = R_YX(0) or ℛ_XY(0) = ℛ_YX(0)

2) Crosscorrelation functions are not generally even functions. However, there is an
antisymmetry to the ordered crosscorrelations:

R_XY(τ) = R_YX(-τ)

For

ℛ_XY(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)·y(t + τ) dt

substitute t = λ - τ:

ℛ_XY(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(λ - τ)·y(λ) dλ = ⟨y(λ)·x(λ - τ)⟩ = ℛ_YX(-τ)

3) The crosscorrelation does not necessarily have its maximum at the zeroth lag. This makes
sense if you are correlating a signal with a time-delayed version of itself. The crosscorrelation
should be a maximum when the lag equals the time delay! It can be shown, however, that

|R_XY(τ)| ≤ √( R_XX(0) · R_YY(0) )

As a note, the crosscorrelation may not achieve this maximum anywhere.
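The delay-finding use of the crosscorrelation can be sketched numerically; a minimal example (the circular shift via `np.roll` is a simplification, and all numbers are hypothetical):

```python
import numpy as np

# y(t) = x(t - d): R_XY(τ) peaks at τ = d, not at the zeroth lag.
rng = np.random.default_rng(3)
x = rng.standard_normal(2_000)
delay = 25
y = np.roll(x, delay)                     # circularly delayed copy of x

lags = np.arange(100)
rxy = np.array([np.mean(x * np.roll(y, -k)) for k in lags])
estimated_delay = int(lags[np.argmax(rxy)])
```

The peak of the crosscorrelation estimate recovers the imposed delay, which is the basis of correlation-based time-delay estimation.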
4) If X and Y are statistically independent, then the ordering is not important:

R_XY(τ) = E[X(t)·Y(t + τ)] = E[X(t)]·E[Y(t + τ)] = X̄·Ȳ

and

R_XY(τ) = X̄·Ȳ = R_YX(τ)

5) If X is a stationary random process and is differentiable with respect to time, the
crosscorrelation of the signal and its derivative is given by

R_XẊ(τ) = dR_XX(τ)/dτ

Defining the derivative as a limit:

Ẋ(t) = lim_{ε→0} [X(t + ε) - X(t)] / ε

and the crosscorrelation

R_XẊ(τ) = E[X(t)·Ẋ(t + τ)] = E[ X(t) · lim_{ε→0} (X(t + τ + ε) - X(t + τ))/ε ]

R_XẊ(τ) = lim_{ε→0} [ E[X(t)·X(t + τ + ε)] - E[X(t)·X(t + τ)] ] / ε

R_XẊ(τ) = lim_{ε→0} [ R_XX(τ + ε) - R_XX(τ) ] / ε = dR_XX(τ)/dτ

Similarly,

R_ẊẊ(τ) = -d²R_XX(τ)/dτ²
6.8. Examples and Applications of Crosscorrelation Functions
6.9. Correlation Matrices For Sampled Functions
7. Spectral Density
7.1. Introduction
7.2. Relation of Spectral Density to the Fourier Transform
This function is also defined as the spectral density function (or power spectral density) and is
defined for both f and ω as:

S_YY(ω) = lim_{T→∞} E[|Y_T(ω)|²] / 2T   or   S_YY(f) = lim_{T→∞} E[|Y_T(f)|²] / 2T

The 2nd moment based on the spectral densities is defined as:

Ȳ² = (1/2π) · ∫_{-∞}^{∞} S_YY(ω) dω   and   Ȳ² = ∫_{-∞}^{∞} S_YY(f) df

Note: The result is a power spectral density (in Watts/Hz), not a voltage spectrum (in V/Hz) as
you would normally compute for a Fourier transform.
Wiener-Khinchine relation
For WSS random processes, the autocorrelation function is time based and, for ergodic
processes, describes all sample functions in the ensemble! In these cases the Wiener-Khinchine
relation is valid, which allows us to perform the following:

S_XX(ω) = F[R_XX(τ)] = ∫_{-∞}^{∞} E[X(t)·X(t + τ)] · exp(-jωτ) dτ

For an ergodic process, we can use time-based processing to arrive at an equivalent result. With

ℛ_XX(τ) = lim_{T→∞} (1/2T) · ∫_{-T}^{T} x(t)·x(t + τ) dt = ⟨x(t)·x(t + τ)⟩

and x(t) ↔ X(ω), taking the Fourier transform of the time-averaged autocorrelation gives

F[ℛ_XX(τ)] = lim_{T→∞} (1/2T) · X(ω)·X(-ω) = lim_{T→∞} |X(ω)|² / 2T

We can define a power spectral density for the ensemble as:

S_XX(ω) = F[R_XX(τ)] = ∫_{-∞}^{∞} R_XX(τ) · exp(-jωτ) dτ
Based on this definition, we also have the transform pair

S_XX(ω) = F[R_XX(τ)]

R_XX(τ) = F⁻¹[S_XX(ω)] = (1/2π) · ∫_{-∞}^{∞} S_XX(ω) · exp(jωτ) dω
7.3. Properties of Spectral Density
The power spectral density as a function is always
- real,
- positive,
- and an even function in ω.
As an even function, the PSD may be expected to have a polynomial form:

S_XX(ω) = S0 · [ω^(2n) + a_(2n-2)·ω^(2n-2) + a_(2n-4)·ω^(2n-4) + ... + a_2·ω² + a_0] / [ω^(2m) + b_(2m-2)·ω^(2m-2) + b_(2m-4)·ω^(2m-4) + ... + b_2·ω² + b_0]

where m > n.
Notice the squared terms; any odd power would define an anti-symmetric element that, by
definition and proof, cannot exist!
Finite property in frequency: the power spectral density must also approach zero as ω
approaches infinity. Therefore,

lim_{ω→∞} S_XX(ω) = lim_{ω→∞} S0 · ω^(2n) / ω^(2m) = lim_{ω→∞} S0 / ω^(2(m-n)) = 0

For m > n, the condition will be met.
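Returning to the Wiener-Khinchine relation of Section 7.2, a sampled process's PSD can be estimated as the DFT of an autocorrelation estimate; a sketch using the circular FFT shortcut (the white-noise input is a hypothetical test signal, and the circular estimate is an approximation of the linear one):

```python
import numpy as np

rng = np.random.default_rng(11)
x = rng.standard_normal(4096)               # approximately white test signal
N = len(x)

X = np.fft.fft(x)
r = np.fft.ifft(np.abs(X) ** 2).real / N    # circular (biased) autocorrelation estimate
S = np.fft.fft(r).real                      # PSD estimate: S[k] ≈ |X[k]|² / N

# Consistency checks implied by the transform pair:
# r[0] is the mean-square value, and the average of S equals r[0].
mean_square = r[0]
```

For white noise the estimated S is approximately flat, consistent with an autocorrelation concentrated at the zeroth lag.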
7.4. Spectral Density and the Complex Frequency Plane
7.5. Mean-Square Values From Spectral Density
7.6. Relation of Spectral Density to the Autocorrelation Function
The power spectral density as a function is always
 real,
 positive,
 and an even function in ω (or f).
You can convert between the domains using:

The Fourier Transform in ω:

$$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-i\omega\tau}\,d\tau \qquad R_{XX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{i\omega\tau}\,d\omega$$

The Fourier Transform in f:

$$S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-i2\pi f\tau}\,d\tau \qquad R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f)\,e^{i2\pi f\tau}\,df$$

The 2-sided Laplace Transform:

$$S_{XX}(s) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-s\tau}\,d\tau \qquad R_{XX}(\tau) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} S_{XX}(s)\,e^{s\tau}\,ds$$
Deriving the Mean-Square Value from the Power Spectral Density

The mean-squared value of a random process is equal to the autocorrelation at the 0th lag:

$$E\!\left[X^2\right] = R_{XX}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{i\omega\cdot 0}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,d\omega$$

$$E\!\left[X^2\right] = R_{XX}(0) = \int_{-\infty}^{\infty} S_{XX}(f)\,e^{i2\pi f\cdot 0}\,df = \int_{-\infty}^{\infty} S_{XX}(f)\,df$$

Therefore, to find the second moment, integrate the PSD over all frequencies.
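As a numerical sanity check, this property can be verified with a known transform pair. The pair $R_{XX}(\tau) = e^{-\beta|\tau|} \leftrightarrow S_{XX}(\omega) = 2\beta/(\omega^2+\beta^2)$ and the grid limits below are illustrative choices, not from the notes:

```python
import numpy as np

# Assumed transform pair for illustration:
#   R_XX(tau) = exp(-beta*|tau|)  <->  S_XX(w) = 2*beta / (w^2 + beta^2),
# so the second moment should be E[X^2] = R_XX(0) = 1.
beta = 2.0
w = np.linspace(-1000.0, 1000.0, 2_000_001)
S = 2.0 * beta / (w**2 + beta**2)

# E[X^2] = (1/2pi) * integral of S_XX(w) over all w (rectangle-rule sum)
dw = w[1] - w[0]
mean_square = np.sum(S) * dw / (2.0 * np.pi)
print(mean_square)   # close to 1.0; the tails beyond |w| = 1000 are truncated
```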
7.7. White Noise
Noise is inherently defined as a random process. You may be familiar with "thermal" noise, based on the energy of an atom and the mean free path that it can travel.
 As a random process, whenever "white noise" is measured, the values are uncorrelated with each other, no matter how close together the samples are taken in time.
 Further, we envision "white noise" as containing all spectral content, with no explicit peaks or valleys in the power spectral density.
As a result, we define "white noise" as

$$R_{XX}(\tau) = S_0\,\delta(\tau) \qquad S_{XX}(\omega) = S_0 = \frac{N_0}{2}$$

This is an approximation or simplification because the area of the power spectral density is infinite!
For typical applications, we are interested in band-limited white noise, where

$$S_{XX}(f) = \begin{cases} S_0 = \dfrac{N_0}{2}, & |f| \le W \\[4pt] 0, & |f| > W \end{cases}$$

The equivalent noise power is then:

$$E\!\left[X^2\right] = R_{XX}(0) = \int_{-W}^{W} S_0\,df = 2\,W\,S_0 = 2\,W\,\frac{N_0}{2} = N_0\,W$$
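A quick simulation illustrates the result. The sample rate, bandwidth, and N₀ below are arbitrary choices for this sketch, and the brick-wall FFT filter is an idealization:

```python
import numpy as np

rng = np.random.default_rng(0)
N0, W, fs, n = 2.0, 100.0, 10_000.0, 2**20   # assumed values for the sketch

# Discrete white noise with flat PSD N0/2 over (-fs/2, fs/2) has variance (N0/2)*fs
x = rng.normal(0.0, np.sqrt(N0 / 2 * fs), n)

# Ideal (brick-wall) band limiting to |f| <= W via the FFT
f = np.fft.fftfreq(n, d=1 / fs)
X = np.fft.fft(x)
X[np.abs(f) > W] = 0.0
xb = np.real(np.fft.ifft(X))

print(np.mean(xb**2), N0 * W)   # measured power vs. the predicted N0*W = 200
```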
7.8. Cross-Spectral Density
The Fourier Transform in ω:

$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\,e^{-i\omega\tau}\,d\tau \quad\text{and}\quad S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\,e^{-i\omega\tau}\,d\tau$$

$$R_{XY}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,e^{i\omega\tau}\,d\omega \quad\text{and}\quad R_{YX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{YX}(\omega)\,e^{i\omega\tau}\,d\omega$$

Properties of the functions:

$$S_{XY}(\omega) = \operatorname{conj}\{S_{YX}(\omega)\}$$

Since the cross-correlation is real,
 the real portion of the spectrum is even
 the imaginary portion of the spectrum is odd
7.9. Autocorrelation Function Estimate of Spectral Density
7.10. Periodogram Estimate of Spectral Density
7.11. Examples and Applications of Spectral Density
Determine the autocorrelation of the binary sequence

$$x(t) = \sum_{k=-\infty}^{\infty} A_k\,p(t - t_0 - k\,T)$$

$$x(t) = p(t) * \sum_{k=-\infty}^{\infty} A_k\,\delta(t - t_0 - k\,T)$$

Determine the autocorrelation of the discrete-time sequence (leaving out the pulse for now):

$$y(t) = \sum_{k=-\infty}^{\infty} A_k\,\delta(t - t_0 - k\,T)$$

$$E[y(t)\,y(t+\tau)] = E\!\left[\sum_{k=-\infty}^{\infty} A_k\,\delta(t - t_0 - k\,T)\,\sum_{j=-\infty}^{\infty} A_j\,\delta(t + \tau - t_0 - j\,T)\right]$$

$$R_{yy}(\tau) = E\!\left[\sum_{k=-\infty}^{\infty}\sum_{j=-\infty}^{\infty} A_j\,A_k\,\delta(t - t_0 - k\,T)\,\delta(t + \tau - t_0 - j\,T)\right]$$

Separating the j = k and j ≠ k terms:

$$R_{yy}(\tau) = E\!\left[\sum_{k=-\infty}^{\infty} A_k^2\,\delta(t - t_0 - k\,T)\,\delta(t + \tau - t_0 - k\,T)\right] + E\!\left[\sum_{k=-\infty}^{\infty}\sum_{\substack{j=-\infty\\ j\ne k}}^{\infty} A_j\,A_k\,\delta(t - t_0 - k\,T)\,\delta(t + \tau - t_0 - j\,T)\right]$$

$$R_{yy}(\tau) = \sum_{k=-\infty}^{\infty} E\!\left[A_k^2\right]\,E\!\left[\delta(t - t_0 - k\,T)\,\delta(t + \tau - t_0 - k\,T)\right] + \sum_{k=-\infty}^{\infty}\sum_{\substack{j=-\infty\\ j\ne k}}^{\infty} E\!\left[A_j\,A_k\right]\,E\!\left[\delta(t - t_0 - k\,T)\,\delta(t + \tau - t_0 - j\,T)\right]$$

With the epoch $t_0$ uniformly distributed over one symbol period, averaging over $t_0$ yields

$$R_{yy}(\tau) = E\!\left[A_k^2\right]\frac{1}{T}\,\delta(\tau) + E\!\left[A_k\right]^2\,\frac{1}{T}\sum_{\substack{m=-\infty\\ m\ne 0}}^{\infty}\delta(\tau - m\,T)$$

$$R_{yy}(\tau) = \left(E\!\left[A_k^2\right] - E\!\left[A_k\right]^2\right)\frac{1}{T}\,\delta(\tau) + E\!\left[A_k\right]^2\,\frac{1}{T}\sum_{m=-\infty}^{\infty}\delta(\tau - m\,T)$$

$$R_{yy}(\tau) = \sigma_A^2\,\frac{1}{T}\,\delta(\tau) + \mu_A^2\,\frac{1}{T}\sum_{m=-\infty}^{\infty}\delta(\tau - m\,T)$$

Taking the Fourier transform,

$$S_{yy}(f) = \sigma_A^2\,\frac{1}{T} + \mu_A^2\,\frac{1}{T^2}\sum_{m=-\infty}^{\infty}\delta\!\left(f - \frac{m}{T}\right)$$

From here, it can be shown that

$$S_{xx}(f) = \left|P(f)\right|^2\,S_{yy}(f)$$
$$S_{xx}(f) = \left|P(f)\right|^2\left[\sigma_A^2\,\frac{1}{T} + \mu_A^2\,\frac{1}{T^2}\sum_{m=-\infty}^{\infty}\delta\!\left(f - \frac{m}{T}\right)\right]$$

$$S_{xx}(f) = \left|P(f)\right|^2\,\frac{\sigma_A^2}{T} + \frac{\mu_A^2}{T^2}\sum_{m=-\infty}^{\infty}\left|P\!\left(\frac{m}{T}\right)\right|^2\,\delta\!\left(f - \frac{m}{T}\right)$$

This is a magnitude-scaled version of the power spectral density of the pulse shape, plus numerous impulses whose magnitudes are shaped by the pulse spectrum at regular frequency intervals set by the signal periodicity. The result is pictured in the textbook.
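The continuous part of this result can be checked by simulation. Below is a sketch with assumed parameters: a rectangular full-width pulse and zero-mean amplitudes $A_k = \pm 1$ (so $\mu_A = 0$ and the spectral lines vanish), comparing an averaged periodogram against $\sigma_A^2\,|P(f)|^2/T = T\,\mathrm{sinc}^2(fT)$:

```python
import numpy as np

rng = np.random.default_rng(1)
T, M, nsym, ntrials = 1.0, 16, 256, 500   # symbol period, samples/symbol, symbols, averages
dt, n = T / M, M * nsym
freqs = np.fft.rfftfreq(n, d=dt)

psd = np.zeros(freqs.size)
for _ in range(ntrials):
    A = rng.choice([-1.0, 1.0], nsym)     # zero-mean amplitudes: sigma_A^2 = 1, mu_A = 0
    x = np.repeat(A, M)                   # rectangular full-width pulse shaping
    X = np.fft.rfft(x) * dt               # approximate continuous-time Fourier transform
    psd += np.abs(X) ** 2 / (nsym * T)    # periodogram of this realization
psd /= ntrials

# Continuous part of the theory: sigma_A^2 * |P(f)|^2 / T = T * sinc(f*T)^2,
# with no spectral lines because mu_A = 0.
theory = T * np.sinc(freqs * T) ** 2
print(psd[0], theory[0])                  # both near T = 1
```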
8. Response of Linear Systems to Random Inputs
8.1. Introduction
Linear transformation of signals: convolution in the time domain.

$$y(t) = h(t) * x(t) \qquad Y(s) = H(s)\,X(s)$$

[Block diagram: input x(t), X(s) → system h(t), H(s) → output y(t), Y(s)]

The convolution integrals (applying a causal filter):

$$y(t) = \int_{0}^{\infty} x(t-\lambda)\,h(\lambda)\,d\lambda \quad\text{or}\quad y(t) = \int_{-\infty}^{t} h(t-\lambda)\,x(\lambda)\,d\lambda$$
8.2. Analysis in the Time Domain
8.3. Mean and Mean-Square Value of System Output
The Mean Value at a System Output

$$E[Y(t)] = E\!\left[\int_{0}^{\infty} X(t-\lambda)\,h(\lambda)\,d\lambda\right] = \int_{0}^{\infty} E[X(t-\lambda)]\,h(\lambda)\,d\lambda$$

For a wide-sense stationary process, this results in

$$E[Y(t)] = \int_{0}^{\infty} E[X]\,h(\lambda)\,d\lambda = E[X]\int_{0}^{\infty} h(\lambda)\,d\lambda$$

The coherent gain of a filter is defined as:

$$h_{gain} = \int_{0}^{\infty} h(t)\,dt$$

Therefore,

$$E[Y(t)] = E[X]\,h_{gain}$$
The Mean-Square Value at a System Output

$$E\!\left[Y(t)^2\right] = E\!\left[\left(\int_{0}^{\infty} X(t-\lambda)\,h(\lambda)\,d\lambda\right)^2\right]$$

$$E\!\left[Y(t)^2\right] = \int_{0}^{\infty}\int_{0}^{\infty} h(\lambda_1)\,R_{XX}(\lambda_1-\lambda_2)\,h(\lambda_2)\,d\lambda_1\,d\lambda_2$$

Therefore, take the autocorrelation function, weight it by the filter at both lags, and integrate from 0 to infinity.
8.4. Autocorrelation Function of System Output
$$R_{YY}(\tau) = E[Y(t)\,Y(t+\tau)] = E[(h*x)(t)\cdot(h*x)(t+\tau)]$$

$$R_{YY}(\tau) = E\!\left[\int_{0}^{\infty} x(t-\lambda_1)\,h(\lambda_1)\,d\lambda_1\int_{0}^{\infty} x(t+\tau-\lambda_2)\,h(\lambda_2)\,d\lambda_2\right]$$

$$R_{YY}(\tau) = \int_{0}^{\infty} h(\lambda_1)\left[\int_{0}^{\infty} R_{XX}(\tau+\lambda_1-\lambda_2)\,h(\lambda_2)\,d\lambda_2\right]d\lambda_1$$

$$R_{YY}(\tau) = R_{XX}(\tau) * h(\tau) * h(-\tau)$$

Notice that first you convolve the input autocorrelation function with the filter function, and then you convolve the result with a time-reversed version of the filter!
8.5. Crosscorrelation between Input and Output
$$R_{XY}(\tau) = E[X(t)\,Y(t+\tau)] = E[x(t)\cdot(h*x)(t+\tau)]$$

$$R_{XY}(\tau) = \int_{0}^{\infty} R_{XX}(\tau-\lambda_1)\,h(\lambda_1)\,d\lambda_1$$

This is the convolution of the autocorrelation with the filter.

What about the other cross-correlation?

$$R_{YX}(\tau) = E[Y(t)\,X(t+\tau)] = E[(h*x)(t)\cdot x(t+\tau)]$$

$$R_{YX}(\tau) = \int_{0}^{\infty} R_{XX}(\tau+\lambda_1)\,h(\lambda_1)\,d\lambda_1$$

This is the correlation of the autocorrelation with the filter, inherently different from the previous result, but equal at τ = 0.
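With a discrete white-noise input ($R_{XX}[m] = \delta[m]$) the first relation predicts $R_{XY}[m] = h[m]$, which is easy to check numerically. The FIR filter below is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000
x = rng.normal(0.0, 1.0, n)        # discrete white noise: R_XX[m] = delta[m]
h = 0.9 ** np.arange(20)           # assumed causal FIR filter
y = np.convolve(x, h)[:n]          # system output

# Sample estimate of R_XY[m] = E[ x[k] * y[k+m] ] for m = 0..19
lags = np.arange(20)
Rxy = np.array([np.mean(x[: n - m] * y[m:]) for m in lags])

# With a white input the convolution collapses: R_XY[m] = h[m]
print(np.max(np.abs(Rxy - h)))     # small estimation error
```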
8.6. Example of Time-Domain System Analysis
8.7. Analysis in the Frequency Domain
8.8. Spectral Density at the System Output

$$S_{YY}(\omega) = \mathcal{F}\{R_{YY}(\tau)\} = \int_{0}^{\infty} d\lambda_1\int_{0}^{\infty} d\lambda_2\,h(\lambda_1)\,h(\lambda_2)\int_{-\infty}^{\infty} R_{XX}(\tau+\lambda_1-\lambda_2)\,e^{-i\omega\tau}\,d\tau$$

$$S_{YY}(\omega) = \mathcal{F}\{R_{YY}(\tau)\} = S_{XX}(\omega)\,H(\omega)\,H(-\omega)$$

Therefore,

$$S_{YY}(\omega) = S_{XX}(\omega)\,\left|H(\omega)\right|^2$$

8.9. Cross-Spectral Densities between Input and Output

$$S_{XY}(s) = S_{XX}(s)\,H(s) \qquad S_{YX}(s) = S_{XX}(s)\,H(-s)$$
8.10. Examples of Frequency-Domain Analysis
Noise in a linear feedback system loop.

[Block diagram: X(s) enters a summing junction with unity feedback from Y(s); the forward path is A/(s(s+1)); the noise N(s) is added at the output to form Y(s).]

Linear superposition of X to Y and N to Y:

$$\frac{A}{s(s+1)}\left[X(s) - Y(s)\right] + N(s) = Y(s)$$

$$Y(s)\left[1 + \frac{A}{s(s+1)}\right] = \frac{A}{s(s+1)}\,X(s) + N(s)$$

$$Y(s)\left[\frac{s^2+s+A}{s(s+1)}\right] = \frac{A}{s(s+1)}\,X(s) + N(s)$$

$$Y(s) = \frac{A}{s^2+s+A}\,X(s) + \frac{s^2+s}{s^2+s+A}\,N(s)$$

There are effectively two filters, one applied to X and a second applied to N.
$$H_X(s) = \frac{A}{s^2+s+A} \quad\text{and}\quad H_N(s) = \frac{s^2+s}{s^2+s+A}$$

$$Y(s) = H_X(s)\,X(s) + H_N(s)\,N(s)$$

Generic definition of output power spectral density:

$$S_{YY}(\omega) = \left|H_X(\omega)\right|^2 S_{XX}(\omega) + \left|H_N(\omega)\right|^2 S_{NN}(\omega)$$
8.11. Numerical Computation of System Output
9. Optimum Linear Systems
9.1. Introduction
9.2. Criteria of Optimality
9.3. Restrictions on the Optimum System
9.4. Optimization by Parameter Adjustment
9.5. Systems That Maximize Signal-to-Noise Ratio

SNR is defined as

$$SNR = \frac{P_{Signal}}{P_{Noise}} = \frac{E\!\left[s(t)^2\right]}{N_0\,B_{EQ}}$$

For a linear system, we have:

$$s_o(t) + n_o(t) = \int_{0}^{\infty} h(\lambda)\left[s(t-\lambda) + n(t-\lambda)\right]d\lambda$$

The output SNR can be described as

$$SNR_{out} = \frac{P_{Signal}}{P_{Noise}} = \frac{E\!\left[s_o(t)^2\right]}{E\!\left[n_o(t)^2\right]} = \frac{E\!\left[s_o(t)^2\right]}{N_0\,B_{EQ}}$$

or, substituting,

$$SNR_{out} = \frac{E\!\left[\left(\displaystyle\int_{0}^{\infty} h(\lambda)\,s(t-\lambda)\,d\lambda\right)^2\right]}{\dfrac{1}{2}\,N_0\displaystyle\int_{0}^{\infty} h(t)^2\,dt}$$

To achieve the maximum SNR, the equality condition of the Schwarz inequality must hold, or
$$\left(\int_{0}^{\infty} h(\lambda)\,s(t-\lambda)\,d\lambda\right)^2 \le \int_{0}^{\infty} h(\lambda)^2\,d\lambda\cdot\int_{0}^{\infty} s(t-\lambda)^2\,d\lambda$$

This condition can be met for

$$h(\lambda) = K\,s(t-\lambda)\,u(\lambda)$$

where K is an arbitrary gain constant.

The desired impulse response is simply the time reverse of the signal waveform about time t, a fixed moment chosen for optimality. If this is done, the maximum filter power (with K = 1) can be computed as

$$\int_{0}^{\infty} h(\lambda)^2\,d\lambda = \int_{0}^{\infty} s(t-\lambda)^2\,d\lambda = \int_{-\infty}^{t} s(\sigma)^2\,d\sigma = E(t)$$

$$\max\left(SNR_{out}\right) = \frac{2\,E(t)}{N_0}$$

This filter concept is called a matched filter.
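A discrete sketch confirms that the time-reversed signal attains the 2E/N₀ bound while another filter falls short. The signal shape, duration, and N₀ below are assumed for illustration:

```python
import numpy as np

dt, N0 = 0.001, 0.01
t = np.arange(0.0, 1.0, dt)
s = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # assumed known signal shape
E = np.sum(s**2) * dt                            # signal energy E(t) at t = 1

def snr_out(h):
    # output signal^2 at the sampling instant over output noise power,
    # for white input noise of spectral density N0/2
    sig = np.sum(h * s[::-1]) * dt
    noise = (N0 / 2) * np.sum(h**2) * dt
    return sig**2 / noise

h_matched = s[::-1]               # matched filter: time-reversed signal
h_other = np.ones_like(s)         # an arbitrary competing filter

print(snr_out(h_matched), 2 * E / N0)          # matched filter attains 2E/N0
print(snr_out(h_other) < snr_out(h_matched))   # True, by the Schwarz inequality
```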
9.6. Systems That Minimize Mean-Square Error
Err t   X t   Y t 
The error function
Y t  
where

 h   X t     N t    d
0
Computing the error power
j
1
2
E Err 
 S XX s   1  H s   1  H  s   S NN s   H s   H  s  ds
j 2 j



E Err
2

j
S XX s   S NN s   H s   H  s 

1

  
  ds
j 2  j  S XX s   H s   S XX s   H  s   S XX s 
Defining
FC s   FC  s   S XX s   S NN s 
Step 1:
So, let’s see about some interpretations. First, let H1 s   1
, which should be causal.
FC s 
Note that for this filter
S XX s   S NN s   1  1  1
FC s  FC - s 
This is called a whitening filter as it forces the signal plus noise PSD to unity (white noise).
Step 2:

Letting

$$H(s) = H_1(s)\,H_2(s)$$

and completing the square in the error power,

$$E\!\left[Err^2\right] = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\left\{\left[H_2(s) - \frac{S_{XX}(s)\,F_C(s)}{F_C(s)\,F_C(-s)}\right]\left[H_2(-s) - \frac{S_{XX}(s)\,F_C(-s)}{F_C(s)\,F_C(-s)}\right] + \frac{S_{XX}(s)\,S_{NN}(s)}{F_C(s)\,F_C(-s)}\right\}ds$$

$$E\!\left[Err^2\right] = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\left\{\left[H_2(s) - \frac{S_{XX}(s)}{F_C(-s)}\right]\left[H_2(-s) - \frac{S_{XX}(s)}{F_C(s)}\right] + \frac{S_{XX}(s)\,S_{NN}(s)}{F_C(s)\,F_C(-s)}\right\}ds$$

Minimizing the terms containing H₂(s), we must focus on

$$H_2(s) = \frac{S_{XX}(s)}{F_C(-s)} \quad\text{and}\quad H_2(-s) = \frac{S_{XX}(s)}{F_C(s)}$$

Letting H₂ be defined for the appropriate left- or right-half-plane poles,

$$H_2(s) = \left[\frac{S_{XX}(s)}{F_C(-s)}\right]_{LHP} \quad\text{and}\quad H_2(-s) = \left[\frac{S_{XX}(s)}{F_C(s)}\right]_{RHP}$$

The composite filter is then

$$H(s) = H_1(s)\,H_2(s) = \frac{1}{F_C(s)}\left[\frac{S_{XX}(s)}{F_C(-s)}\right]_{LHP}$$

This solution is often called a Wiener filter and is widely applied when the signal and noise statistics are known a priori!
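The causal solution above requires factoring F_C(s). As a purely numerical illustration, the unconstrained (non-causal) version of the same minimum-MSE idea reduces to H(f) = S_XX/(S_XX + S_NN), which can be demonstrated with assumed spectra:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2**16
f = np.fft.fftfreq(n)                     # normalized frequency grid

# Assumed example spectra: a low-pass signal in white noise
S_XX = 1.0 / (1.0 + (f / 0.02) ** 2)      # first-order signal PSD shape
S_NN = 0.25 * np.ones(n)                  # white noise PSD

# Shape white Gaussian noise to realize each spectrum
X = np.real(np.fft.ifft(np.fft.fft(rng.normal(0, 1, n)) * np.sqrt(S_XX)))
N = np.real(np.fft.ifft(np.fft.fft(rng.normal(0, 1, n)) * np.sqrt(S_NN)))
obs = X + N

# Unconstrained (non-causal) Wiener filter
H = S_XX / (S_XX + S_NN)
est = np.real(np.fft.ifft(np.fft.fft(obs) * H))

mse_raw = np.mean((obs - X) ** 2)
mse_wiener = np.mean((est - X) ** 2)
print(mse_raw, mse_wiener)                # the filter reduces the mean-square error
```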
Appendices
A. Mathematical Tables
A.1. Trigonometric Identities
A.2. Indefinite Integrals
A.3. Definite Integrals
A.4. Fourier Transform Operations
A.5. Fourier Transforms
A.6. One-Sided Laplace Transforms
B. Frequently Encountered Probability Distributions
B.1. Discrete Probability Functions
B.2. Continuous Distributions
C. Binomial Coefficients
D. Normal Probability Distribution Function
E. The Q-Function
F. Student's t Distribution Function
G. Computer Computations
H. Table of Correlation Function--Spectral Density Pairs
I. Contour Integration