ECE 3800 Probabilistic Methods of Signal and System Analysis, Spring 2015

Course Topics:
1. Probability
2. Random variables
3. Multiple random variables
4. Elements of statistics
5. Random processes
6. Correlation functions
7. Spectral density
8. Responses of linear systems

Course Objectives: This course seeks to develop a mathematical understanding of basic statistical tools and processes as applied to inferential statistics and statistical signal processing (ABET outcomes are listed per course objective).
1. The student will learn how to convert a problem description into a precise mathematical probabilistic statement (a)
2. The student will use the general properties of random variables to solve a probabilistic problem (a, e)
3. The student will be able to use a set of standard probability distribution functions suitable for engineering applications (a)
4. The student will be able to calculate standard statistics from probability mass, distribution, and density functions (a)
5. The student will learn how to calculate confidence intervals for a population mean (a)
6. The student will be able to recognize and interpret a variety of deterministic and nondeterministic random processes that occur in engineering (a, b, e)
7. The student will learn how to calculate the autocorrelation and spectral density of an arbitrary random process (a)
8. The student will understand stochastic phenomena such as white, pink, and black noise (a)
9. The student will learn how to relate the correlation of and between inputs and outputs based on the autocorrelation and spectral density (a)
10. The student will understand the mathematical characteristics of standard frequency isolation filters (a, e)
11. The student will be exposed to the signal-to-noise optimization principle as applied to filter design (a, e, k)
12.
The student will be exposed to Wiener and matched filters (a, c, e)

Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford Press, 1999. ISBN: 0-19-512354-9. B.J. Bazuin, Spring 2015

ECE 3800 Exam
Tuesday, 10:15-12:15. Those with permission to take it early: Monday, 12:30- , Room B-211.
Expect approximately 6-8 multipart problems. Problems will be similar or even identical to those from previous exams and homework. I will try to ensure that an element from every chapter is tested.
The exam is open paper notes. No electronic devices except for a calculator (and I intentionally make calculator use minimal, possibly even useless, or even such that it provides incorrect or incomplete results). Be familiar with or bring the appropriate math tables as needed (Appendix A, pp. 419-424, is a good set).
Appendix A: Trig. Identities, Indefinite and Definite Integrals, Fourier Transforms, Laplace Transforms
Appendix D: Normal Probability Distribution Function
Appendix E: The Q-Function
Appendix F: Student's t Distribution Function
Minimal calculator work is required; using rational fractions may be easier.
Additional materials for studying: all homework solution sets have been posted and returned. Skills are available on the web site.

ECE 3800 Table of Contents

1. Introduction to Probability
1.1. Engineering Applications of Probability
1.2. Random Experiments and Events
1.3.
Definitions of Probability: experiment, possible outcomes, trials, event, equally likely events/outcomes, objects, attribute, sample space, with replacement and without replacement.

1.4. The Relative-Frequency Approach
The relative frequency of event A in N trials is $r(A) = N_A/N$, and the probability of event A is defined as
$\Pr(A) = \lim_{N \to \infty} \dfrac{N_A}{N}$
where:
1. $0 \le \Pr(A) \le 1$
2. $\Pr(A) + \Pr(B) + \Pr(C) + \dots = 1$, for mutually exclusive, collectively exhaustive events
3. An impossible event A can be represented as $\Pr(A) = 0$.
4. A certain event A can be represented as $\Pr(A) = 1$.

1.5. Elementary Set Theory: set, subset, space, null set or empty set, Venn diagram, equality, sum or union, product or intersection, mutually exclusive or disjoint sets, complement, differences, proofs of set algebra.

1.6. The Axiomatic Approach

1.7. Conditional Probability
$\Pr(A \cap B) = \Pr(A|B)\,\Pr(B)$, for $\Pr(B) > 0$
$\Pr(A|B) = \dfrac{\Pr(A \cap B)}{\Pr(B)}$, for $\Pr(B) > 0$
Joint probability: $\Pr(A,B) = \Pr(B,A) = \Pr(A|B)\,\Pr(B) = \Pr(B|A)\,\Pr(A)$
Marginal probabilities (total probability):
$\Pr(B) = \Pr(B|A_1)\Pr(A_1) + \Pr(B|A_2)\Pr(A_2) + \dots + \Pr(B|A_n)\Pr(A_n)$
Bayes' theorem:
$\Pr(A_i|B) = \dfrac{\Pr(B|A_i)\,\Pr(A_i)}{\Pr(B|A_1)\Pr(A_1) + \Pr(B|A_2)\Pr(A_2) + \dots + \Pr(B|A_n)\Pr(A_n)}$

1.8. Independence
$\Pr(A,B) = \Pr(B,A) = \Pr(A)\,\Pr(B)$

1.9. Combined Experiments

1.10. Bernoulli Trials
$\Pr(A \text{ occurring } k \text{ times in } n \text{ trials}) = p_n(k) = \dbinom{n}{k} p^k q^{n-k}$

1.11. Applications of Bernoulli Trials

2. Random Variables
2.1. Concept of a Random Variable
2.2.
Distribution Functions
Probability Distribution Function (PDF):
$0 \le F_X(x) \le 1$, for $-\infty < x < \infty$
$F_X(-\infty) = 0$ and $F_X(\infty) = 1$
$F_X$ is non-decreasing as x increases
$\Pr(x_1 < X \le x_2) = F_X(x_2) - F_X(x_1)$
(defined for both discrete and continuous events)

2.3. Density Functions
Probability Density Function (pdf):
$f_X(x) \ge 0$, for $-\infty < x < \infty$
$\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du$
$\Pr(x_1 < X \le x_2) = \int_{x_1}^{x_2} f_X(x)\,dx$
Probability Mass Function (pmf):
$f_X(x_i) \ge 0$, for all $x_i$
$\sum_{x_i} f_X(x_i) = 1$
$F_X(x) = \sum_{x_i \le x} f_X(x_i)$
$\Pr(x_1 < X \le x_2) = \sum_{x_1 < x_i \le x_2} f_X(x_i)$
Functions of random variables: for $Y = g(X)$ with inverse $X = g^{-1}(Y)$,
$f_Y(y) = f_X\!\left(g^{-1}(y)\right) \left|\dfrac{dx}{dy}\right|$

2.4. Mean Values and Moments
Mean (1st moment): $\bar X = E[X] = \int x\,f_X(x)\,dx$ or $\sum_x x\,\Pr(X = x)$
General: $E[g(X)] = \int g(x)\,f_X(x)\,dx$ or $\sum_x g(x)\,\Pr(X = x)$
nth moment: $E[X^n] = \int x^n\,f_X(x)\,dx$ or $\sum_x x^n\,\Pr(X = x)$
nth central moment: $E[(X - \bar X)^n] = \int (x - \bar X)^n f_X(x)\,dx$ or $\sum_x (x - \bar X)^n \Pr(X = x)$
Variance and standard deviation:
$\sigma_X^2 = E[(X - \bar X)^2] = E[X^2] - \bar X^2$

2.5. The Gaussian Random Variable
$f_X(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\dfrac{(x - \bar X)^2}{2\sigma^2}\right)$, for $-\infty < x < \infty$
where $\bar X$ is the mean and $\sigma^2$ is the variance, and
$F_X(x) = \int_{-\infty}^{x} \dfrac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\dfrac{(v - \bar X)^2}{2\sigma^2}\right) dv$
Unit normal (Appendix D):
$\Phi(x) = \int_{-\infty}^{x} \dfrac{1}{\sqrt{2\pi}}\exp\!\left(-\dfrac{u^2}{2}\right) du$, with $\Phi(-x) = 1 - \Phi(x)$
so that $F_X(x) = \Phi\!\left(\dfrac{x - \bar X}{\sigma}\right)$.
The Q-function is the complement of the normal function Φ (Appendix E):
$Q(x) = 1 - \Phi(x) = \int_{x}^{\infty} \dfrac{1}{\sqrt{2\pi}}\exp\!\left(-\dfrac{u^2}{2}\right) du$

2.6. Density Functions Related to Gaussian: Rayleigh, Maxwell
2.7.
Other Probability Density Functions
Exponential Distribution:
$f_T(\tau) = \dfrac{1}{M}\exp\!\left(-\dfrac{\tau}{M}\right)$ for $\tau \ge 0$, and $0$ for $\tau < 0$
$F_T(\tau) = 1 - \exp\!\left(-\dfrac{\tau}{M}\right)$ for $\tau \ge 0$, and $0$ for $\tau < 0$
$E[T] = M$, $E[T^2] = 2M^2$, $\sigma_T^2 = E[T^2] - E[T]^2 = 2M^2 - M^2 = M^2$
Binomial Distribution:
$f_B(x) = \sum_{k=0}^{n} \dbinom{n}{k} p^k (1-p)^{n-k}\,\delta(x - k)$
$F_B(x) = \sum_{k=0}^{n} \dbinom{n}{k} p^k (1-p)^{n-k}\,u(x - k)$

2.8. Conditional Probability Distribution and Density Functions
$\Pr(A \cap B) = \Pr(A|B)\,\Pr(B)$, for $\Pr(B) > 0$
$\Pr(A|B) = \dfrac{\Pr(A \cap B)}{\Pr(B)} = \dfrac{\Pr(A,B)}{\Pr(B)}$, for $\Pr(B) > 0$
It can be shown that $F(x|M)$ is a valid probability distribution function with all the expected characteristics:
$0 \le F(x|M) \le 1$, for $-\infty < x < \infty$
$F(-\infty|M) = 0$ and $F(\infty|M) = 1$
$F(x|M)$ is non-decreasing as x increases
$\Pr(x_1 < X \le x_2 | M) = F(x_2|M) - F(x_1|M)$

2.9. Examples and Applications

3. Several Random Variables
3.1. Two Random Variables
Joint Probability Distribution Function (PDF):
$F(x,y) = \Pr(X \le x,\, Y \le y)$, for all x and y
$0 \le F(x,y) \le 1$; $F(-\infty, y) = F(x, -\infty) = F(-\infty, -\infty) = 0$; $F(\infty, \infty) = 1$
$F(x,y)$ is non-decreasing as either x or y increases
$F(x, \infty) = F_X(x)$ and $F(\infty, y) = F_Y(y)$
Joint Probability Density Function (pdf):
$f(x,y) = \dfrac{\partial^2 F(x,y)}{\partial x\,\partial y}$
$f(x,y) \ge 0$ for all x and y; $\iint f(x,y)\,dx\,dy = 1$
$F(x,y) = \int_{-\infty}^{y}\!\int_{-\infty}^{x} f(u,v)\,du\,dv$
Marginals: $f_X(x) = \int f(x,y)\,dy$ and $f_Y(y) = \int f(x,y)\,dx$
$\Pr(x_1 < X \le x_2,\, y_1 < Y \le y_2) = \int_{y_1}^{y_2}\!\int_{x_1}^{x_2} f(x,y)\,dx\,dy$
Expected values: $E[g(X,Y)] = \iint g(x,y)\,f(x,y)\,dx\,dy$
Correlation: $E[XY] = \iint x\,y\,f(x,y)\,dx\,dy$
3.2.
Conditional Probability--Revisited
Conditioning on an event M, $F_X(x|M) = \Pr(X \le x | M)$. For $M = \{Y \le y\}$:
$F_X(x \mid Y \le y) = \dfrac{F(x,y)}{F_Y(y)}$
$F_X(x \mid y_1 < Y \le y_2) = \dfrac{F(x, y_2) - F(x, y_1)}{F_Y(y_2) - F_Y(y_1)}$
Point conditioning:
$f(x \mid Y = y) = \dfrac{f(x,y)}{f_Y(y)}$ and $f(y \mid X = x) = \dfrac{f(x,y)}{f_X(x)}$
so that $f(x,y) = f(x|y)\,f_Y(y) = f(y|x)\,f_X(x)$, and (Bayes' rule for densities)
$f(x|y) = \dfrac{f(y|x)\,f_X(x)}{f_Y(y)}$

3.3. Statistical Independence
$f(x,y) = f_X(x)\,f_Y(y)$ and $E[XY] = E[X]\,E[Y] = \bar X\,\bar Y$

3.4. Correlation between Random Variables
Correlation: $E[XY] = \iint x\,y\,f(x,y)\,dx\,dy$
Covariance: $E[(X - \bar X)(Y - \bar Y)] = \iint (x - \bar X)(y - \bar Y)\,f(x,y)\,dx\,dy$
Correlation coefficient (normalized covariance):
$\rho = E\!\left[\left(\dfrac{X - \bar X}{\sigma_X}\right)\left(\dfrac{Y - \bar Y}{\sigma_Y}\right)\right] = \dfrac{E[XY] - \bar X\,\bar Y}{\sigma_X\,\sigma_Y}$

3.5. Density Function of the Sum of Two Random Variables
For $Z = X + Y$ with X and Y independent:
$F_Z(z) = \int_{-\infty}^{\infty} f_Y(y) \int_{-\infty}^{z-y} f_X(x)\,dx\,dy$
$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z - x)\,dx = \int_{-\infty}^{\infty} f_Y(y)\,f_X(z - y)\,dy$ (a convolution)

3.6. Probability Density Function of a Function of Two Random Variables
Define the functions as $Z = \phi_1(X,Y)$ and $W = \phi_2(X,Y)$, with inverses $X = \psi_1(Z,W)$ and $Y = \psi_2(Z,W)$. The original pdf is $f(x,y)$, with the derived pdf in the transform space written $g(z,w)$. Then it can be proven that
$\Pr(z_1 < Z \le z_2,\, w_1 < W \le w_2) = \Pr(x_1 < X \le x_2,\, y_1 < Y \le y_2)$
or equivalently
$\int_{w_1}^{w_2}\!\int_{z_1}^{z_2} g(z,w)\,dz\,dw = \int_{y_1}^{y_2}\!\int_{x_1}^{x_2} f(x,y)\,dx\,dy$
Using an advanced-calculus theorem to perform a transformation of coordinates, with the Jacobian
$J = \begin{vmatrix} \partial x/\partial z & \partial x/\partial w \\ \partial y/\partial z & \partial y/\partial w \end{vmatrix}$
$g(z,w) = f(\psi_1(z,w),\, \psi_2(z,w))\,|J|$
If only one output variable is desired (letting $W = X$), integrate over all w to find the pdf of Z (do not integrate over z):
$g(z) = \int g(z,w)\,dw = \int f(\psi_1(z,w),\, \psi_2(z,w))\,|J|\,dw$

3.7.
The Characteristic Function (Not covered in class or homework)
$\phi(u) = E[\exp(juX)] = \int_{-\infty}^{\infty} f(x)\,e^{jux}\,dx$
The inverse of the characteristic function is then defined as:
$f(x) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} \phi(u)\,e^{-jux}\,du$
Moments are computed from derivatives at the origin:
$\left.\dfrac{d^n \phi(u)}{du^n}\right|_{u=0} = \int (jx)^n f(x)\,dx = j^n\,E[X^n]$

4. Elements of Statistics
4.1. Introduction

4.2. Sampling Theory--The Sample Mean
Sample mean: $\hat X = \dfrac{1}{n}\sum_{i=1}^{n} X_i$, where the $X_i$ are random variables with a common pdf.
Variance of the sample mean:
$\mathrm{Var}(\hat X) = \dfrac{E[X^2] - \bar X^2}{n} = \dfrac{\sigma_X^2}{n}$
For sampling from a finite population of size N:
$\mathrm{Var}(\hat X) = \dfrac{\sigma_X^2}{n}\cdot\dfrac{N - n}{N - 1}$

4.3. Sampling Theory--The Sample Variance
$S^2 = \dfrac{1}{n}\sum_{i=1}^{n} (X_i - \hat X)^2$, with $E[S^2] = \dfrac{n-1}{n}\,\sigma^2$ (biased)
To make it unbiased:
$\tilde S^2 = \dfrac{n}{n-1}\,S^2 = \dfrac{1}{n-1}\sum_{i=1}^{n} (X_i - \hat X)^2$, with $E[\tilde S^2] = \sigma^2$
When the population is not large (size N), the biased and unbiased estimates pick up finite-population correction factors:
$E[S^2] = \dfrac{N}{N-1}\cdot\dfrac{n-1}{n}\,\sigma^2$ and $E[\tilde S^2] = \dfrac{N}{N-1}\,\sigma^2$

4.4. Sampling Distributions and Confidence Intervals
Gaussian: $Z = \dfrac{\hat X - \bar X}{\sigma/\sqrt{n}}$
Student's t distribution: $T = \dfrac{\hat X - \bar X}{\tilde S/\sqrt{n}} = \dfrac{\hat X - \bar X}{S/\sqrt{n-1}}$
Its density involves the gamma function, where $\Gamma(k+1) = k\,\Gamma(k)$ for any k, and $\Gamma(k+1) = k!$ for k an integer.
Confidence intervals based on the Gaussian and Student's t distributions:
Gaussian: $\hat X - z\,\dfrac{\sigma}{\sqrt{n}} \le \bar X \le \hat X + z\,\dfrac{\sigma}{\sqrt{n}}$
Student's t: $\hat X - t\,\dfrac{\tilde S}{\sqrt{n}} \le \bar X \le \hat X + t\,\dfrac{\tilde S}{\sqrt{n}}$

4.5. Hypothesis Testing
The null hypothesis H0:
Accept H0 if the computed value "passes" the significance test.
Reject H0 if the computed value "fails" the significance test.
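The Gaussian confidence-interval computation of Section 4.4 can be sketched numerically. This is a minimal sketch with hypothetical data; z = 1.96 is the two-sided 95% Gaussian critical value (strictly, a Student's t value should be used with the sample standard deviation at small n).

```python
import math

# Hypothetical sample data (n = 8 measurements)
samples = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]

n = len(samples)
x_hat = sum(samples) / n                                # sample mean
s2 = sum((x - x_hat) ** 2 for x in samples) / (n - 1)   # unbiased sample variance
s = math.sqrt(s2)

z = 1.96  # two-sided 95% Gaussian critical value
half_width = z * s / math.sqrt(n)
lo, hi = x_hat - half_width, x_hat + half_width
print(x_hat, (lo, hi))  # interval intended to contain the population mean
```

The interval shrinks as $1/\sqrt{n}$, matching the $\sigma/\sqrt{n}$ factor in the formulas above.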
One-tail or two-tail testing; using confidence-interval value computations.

4.6. Curve Fitting and Linear Regression
For $y = a + bx$:
$a = \dfrac{1}{n}\left(\sum_{i=1}^{n} y_i - b\sum_{i=1}^{n} x_i\right)$
and
$b = \dfrac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}$
Or, using 2nd moment, correlation, and covariance values:
$a = \dfrac{\hat Y\,\hat R_{XX} - \hat X\,\hat R_{XY}}{\hat C_{XX}} = \dfrac{\hat Y\,\hat R_{XX} - \hat X\,\hat R_{XY}}{\hat R_{XX} - \hat X^2}$
$b = \dfrac{\hat C_{XY}}{\hat C_{XX}} = \dfrac{\hat R_{XY} - \hat X\,\hat Y}{\hat R_{XX} - \hat X^2}$

4.7. Correlation Between Two Sets of Data
$\hat R_{XX} = \dfrac{1}{n}\sum_{i=1}^{n} x_i^2 \approx E[X^2]$, and $\hat C_{XX} = \dfrac{1}{n}\sum_{i=1}^{n} (x_i - \hat X)^2 = \hat R_{XX} - \hat X^2 \approx \sigma_X^2$
(a $1/(n-1)$ normalization may also be used)
$\hat R_{XY} = \dfrac{1}{n}\sum_{i=1}^{n} x_i y_i \approx E[XY]$, and $\hat C_{XY} = \dfrac{1}{n}\sum_{i=1}^{n} (x_i - \hat X)(y_i - \hat Y) = \hat R_{XY} - \hat X\,\hat Y$
Correlation coefficient: $\hat r = \dfrac{\hat C_{XY}}{\hat\sigma_X\,\hat\sigma_Y}$

5. Random Processes
5.1. Introduction
A random process is a collection of time functions and an associated probability description. The entire collection of possible time functions is an ensemble, designated $\{x(t)\}$; one particular member of the ensemble, designated $x(t)$, is a sample function of the ensemble. In general, only one sample function of a random process can be observed!

5.2. Continuous and Discrete Random Processes

5.3. Deterministic and Nondeterministic Random Processes
A nondeterministic random process is one where future values of the ensemble cannot be predicted from previously observed values. A deterministic random process is one where one or more observed samples allow all future values of the sample function to be predicted (or pre-determined).

5.4.
Stationary and Nonstationary Random Processes
If all marginal and joint density functions of a process do not depend upon the choice of the time origin, the process is said to be stationary.
Wide-sense stationary: the mean value of any random variable is independent of the choice of time t, and the correlation of two random variables depends only upon the time difference between them.

5.5. Ergodic and Nonergodic Random Processes
Ergodic: the probability-based means and moments are equivalent to the time-averaged means and moments.
A process for determining stationarity and ergodicity:
a) Find the mean and the 2nd moment based on the probability description.
b) Find the time-sample mean and time-sample 2nd moment based on time averaging.
c) If the means or 2nd moments are functions of time, the process is non-stationary.
d) If the time-averaged mean and moments are not equal to the probabilistic mean and moments, or if the process is not stationary, then it is nonergodic.

5.6. Measurement of Process Parameters
The process of taking discrete-time measurements of a continuous process in order to compute the desired statistics and probabilities.

5.7. Smoothing Data with a Moving Window Average
An example of using an FIR filter to smooth high-frequency noise from a slowly time-varying signal of interest.

6. Correlation Functions
6.1. Introduction
The Autocorrelation Function: for a sample function defined by samples in time of a random process, how alike are the different samples? Define $X_1 = X(t_1)$ and $X_2 = X(t_2)$. The autocorrelation is defined as:
$R_{XX}(t_1, t_2) = E[X_1 X_2] = \iint x_1\,x_2\,f(x_1, x_2)\,dx_1\,dx_2$
The above function is valid for all processes, stationary and non-stationary.
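The Section 5.7 moving-window average can be sketched as a simple FIR smoother. This is a minimal sketch; the window length N and the handling of the first few samples (a shorter window at the start) are illustrative choices, not the course's prescribed ones.

```python
def moving_average(x, N):
    """Length-N moving-window average: y[n] = mean of the last N samples up to x[n]."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - N + 1): n + 1]   # shorter window near the start
        y.append(sum(window) / len(window))
    return y

# Smoothing a short ramp with a length-3 window
print(moving_average([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], 3))
# -> [0.0, 0.5, 1.0, 2.0, 3.0, 4.0]
```

The smoother trades noise reduction for lag: a longer window suppresses more high-frequency noise but delays the slowly varying signal of interest.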
For WSS processes:
$R_{XX}(t, t+\tau) = E[X(t)\,X(t+\tau)] = R_{XX}(\tau)$
If the process is ergodic, the time average is equivalent to the probabilistic expectation:
$\mathcal{R}_{XX}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt = \langle x(t)\,x(t+\tau)\rangle$
and $\mathcal{R}_{XX}(\tau) = R_{XX}(\tau)$.
As a note for things you've been computing, the "zeroth lag of the autocorrelation" is
$R_{XX}(t_1, t_1) = R_{XX}(0) = E[X_1^2] = \int x_1^2\,f(x_1)\,dx_1 = \bar X^2 + \sigma_X^2$
$\mathcal{R}_{XX}(0) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)^2\,dt = \langle x(t)^2\rangle$

6.2. Example: Autocorrelation Function of a Binary Process

6.3. Properties of Autocorrelation Functions
1) $R_{XX}(0) = E[X^2] = \overline{X^2}$, or $\mathcal{R}_{XX}(0) = \langle x(t)^2\rangle$. The mean-squared value of the random process can be obtained by observing the zeroth lag of the autocorrelation function.
2) $R_{XX}(\tau) = R_{XX}(-\tau)$. The autocorrelation function is an even function in time. Only positive (or negative) τ needs to be computed for an ergodic WSS random process.
3) $|R_{XX}(\tau)| \le R_{XX}(0)$. The autocorrelation function is a maximum at τ = 0. For periodic functions, other values may equal the zeroth lag, but never be larger.
4) If X has a DC component, then $R_{XX}$ has a constant term. For $X(t) = \bar X + N(t)$, with N zero mean: $R_{XX}(\tau) = \bar X^2 + R_{NN}(\tau)$. Note that the mean value can be computed from the autocorrelation function constants!
5) If X has a periodic component, then $R_{XX}$ will also have a periodic component of the same period. Think of $X(t) = A\cos(\omega t + \theta)$, where A and ω are known constants and θ is a uniform random variable on $(0, 2\pi)$. Then
$R_{XX}(\tau) = E[X(t)\,X(t+\tau)] = \dfrac{A^2}{2}\cos(\omega\tau)$
5b) For signals that are the sum of independent random variables, the autocorrelation is the sum of the individual autocorrelation functions.
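Property 5b can be checked numerically at the zeroth lag. This is a sketch assuming two independent zero-mean white sequences; the sample sizes and variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)   # independent zero-mean sequence, variance 1
y = rng.normal(0.0, 2.0, n)   # independent zero-mean sequence, variance 4
w = x + y                     # W(t) = X(t) + Y(t)

def lag0(v):
    """Zeroth autocorrelation lag, i.e. the mean-square value."""
    return float(np.mean(v * v))

# For independent zero-mean processes: R_WW(0) ~ R_XX(0) + R_YY(0)
print(lag0(w), lag0(x) + lag0(y))
```

The two printed values agree to within the sampling error of the cross term $2\langle x(t)y(t)\rangle$, which vanishes in expectation for independent zero-mean processes.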
For $W(t) = X(t) + Y(t)$ with X and Y independent:
$R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2\bar X\,\bar Y$
For non-zero-mean functions (let w, x, y be zero mean and W, X, Y have means):
$R_{WW}(\tau) = R_{ww}(\tau) + \bar W^2 = R_{xx}(\tau) + \bar X^2 + R_{yy}(\tau) + \bar Y^2 + 2\bar X\,\bar Y$
Then we have
$\bar W^2 = (\bar X + \bar Y)^2$ and $R_{ww}(\tau) = R_{xx}(\tau) + R_{yy}(\tau)$
6) If X is ergodic and zero mean and has no periodic component, then we expect
$\lim_{\tau\to\infty} R_{XX}(\tau) = 0$
7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying permissible shapes is in terms of the Fourier transform of the autocorrelation function. That is, if
$\mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau$
then the restriction states that $\mathcal{F}\{R_{XX}(\tau)\} \ge 0$ for all ω.
Additional concept: $X(t) = a\,N(t)$ gives $R_{XX}(\tau) = a^2\,E[N(t)\,N(t+\tau)] = a^2\,R_{NN}(\tau)$.

6.4. Measurement of Autocorrelation Functions
We love to use time averages for everything. For wide-sense stationary, ergodic random processes, time averages are equivalent to statistical or probability-based values:
$\mathcal{R}_{XX}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt = \langle x(t)\,x(t+\tau)\rangle$
Using this fact, how can we use short-term time averages to generate auto- or cross-correlation functions? An estimate of the autocorrelation is defined as:
$\hat R_{XX}(\tau) = \dfrac{1}{T - \tau}\int_{0}^{T-\tau} x(t)\,x(t+\tau)\,dt$
Note that the time average is performed across as much of the signal as is available after the time shift by τ. For τ based on the available time step Δt (lag index k), with N+1 samples spanning the available time interval, we have:
$\hat R_{XX}(k\,\Delta t) = \dfrac{1}{(N+1-k)\,\Delta t}\sum_{i=0}^{N-k} x(i\,\Delta t)\,x((i+k)\,\Delta t)\,\Delta t$
$\hat R_{XX}(k\,\Delta t) = \hat R_{XX}(k) = \dfrac{1}{N+1-k}\sum_{i=0}^{N-k} x_i\,x_{i+k}$
In computing this autocorrelation, the weighting term's denominator shrinks as k grows; when k = N the entire summation consists of one point and is therefore a poor estimate of the autocorrelation. For useful results, k << N! As noted, the validity of each of the summed autocorrelation lags can and should be brought into question as k approaches N. As a result, a biased estimate of the autocorrelation is commonly used. The biased estimate is defined as:
$\tilde R_{XX}(k) = \dfrac{1}{N+1}\sum_{i=0}^{N-k} x_i\,x_{i+k}$
Here, a constant weight is used instead of one based on the number of elements summed. This estimate has the property that the estimated autocorrelation decreases as k approaches N.

6.5. Examples of Autocorrelation Functions
Addition: $W(t) = X(t) + Y(t)$
$R_{WW}(\tau) = E[W(t)\,W(t+\tau)] = E[(X(t)+Y(t))(X(t+\tau)+Y(t+\tau))]$
$= R_{XX}(\tau) + R_{YY}(\tau) + E[X(t)\,Y(t+\tau)] + E[Y(t)\,X(t+\tau)]$
If X and Y are independent: $R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2\bar X\,\bar Y$
If either X or Y is zero mean (along with being independent): $R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau)$
Subtraction: $W(t) = X(t) - Y(t)$
$R_{WW}(\tau) = E[W(t)\,W(t+\tau)] = R_{XX}(\tau) + R_{YY}(\tau) - E[X(t)\,Y(t+\tau)] - E[Y(t)\,X(t+\tau)]$
If X and Y are independent: $R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) - 2\bar X\,\bar Y$
If either X or Y is zero mean (along with being independent): $R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau)$

6.6. Crosscorrelation Functions
For two sample functions defined by samples in time of two random processes, how alike are the different samples? Define $X_1 = X(t_1)$ and $Y_2 = Y(t_2)$. The cross-correlations are defined as:
$R_{XY}(t_1, t_2) = E[X_1 Y_2] = \iint x_1\,y_2\,f(x_1, y_2)\,dx_1\,dy_2$
$R_{YX}(t_1, t_2) = E[Y_1 X_2] = \iint y_1\,x_2\,f(y_1, x_2)\,dy_1\,dx_2$
The above functions are valid for all processes, jointly stationary and non-stationary.
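The unbiased and biased lag estimates of Section 6.4 can be sketched for a sampled record. One common convention is used here: N samples indexed 0..N-1, so the normalizations become 1/(N-k) and 1/N; the data vector is illustrative.

```python
import numpy as np

def autocorr_estimates(x, k):
    """Return (unbiased, biased) estimates of R_XX[k] from one sampled record."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    s = float(np.sum(x[: N - k] * x[k:]))   # sum of x[i] * x[i+k], i = 0 .. N-k-1
    return s / (N - k), s / N               # 1/(N-k): unbiased, 1/N: biased

x = [1.0, 2.0, 3.0, 4.0]
unbiased, biased = autocorr_estimates(x, 1)
print(unbiased, biased)   # 20/3 and 20/4
```

The two estimates coincide at k = 0 and diverge as k grows, with the biased estimate tapering toward zero as fewer products remain in the sum.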
For jointly WSS processes:
$R_{XY}(t, t+\tau) = E[X(t)\,Y(t+\tau)] = R_{XY}(\tau)$
$R_{YX}(t, t+\tau) = E[Y(t)\,X(t+\tau)] = R_{YX}(\tau)$
Note: the order of the subscripts is important for cross-correlation!
If the processes are jointly ergodic, the time average is equivalent to the probabilistic expectation:
$\mathcal{R}_{XY}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)\,y(t+\tau)\,dt = \langle x(t)\,y(t+\tau)\rangle = R_{XY}(\tau)$
$\mathcal{R}_{YX}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} y(t)\,x(t+\tau)\,dt = \langle y(t)\,x(t+\tau)\rangle = R_{YX}(\tau)$

6.7. Properties of Crosscorrelation Functions
1) The properties of the zeroth lag have no particular significance and do not represent mean-square values. It is true that the "ordered" crosscorrelations must be equal at τ = 0:
$R_{XY}(0) = R_{YX}(0)$, or $\mathcal{R}_{XY}(0) = \mathcal{R}_{YX}(0)$
2) Crosscorrelation functions are not generally even functions. However, there is an antisymmetry to the ordered crosscorrelations:
$R_{XY}(\tau) = R_{YX}(-\tau)$
For
$\mathcal{R}_{XY}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)\,y(t+\tau)\,dt$
substitute $t = t' - \tau$:
$\mathcal{R}_{XY}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} y(t')\,x(t' - \tau)\,dt' = \mathcal{R}_{YX}(-\tau)$
3) The crosscorrelation does not necessarily have its maximum at the zeroth lag. This makes sense if you are correlating a signal with a time-delayed version of itself. The crosscorrelation should be a maximum when the lag equals the time delay!
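Property 3 is the basis of time-delay estimation: the crosscorrelation of a signal with a delayed copy of itself peaks at the delay. A minimal numerical sketch with synthetic white-noise data and an illustrative delay of 25 samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delay = 2048, 25
x = rng.normal(size=n)
y = np.concatenate([np.zeros(delay), x[: n - delay]])   # y(t) = x(t - delay)

# Estimate R_XY(k) = E[x(t) y(t+k)] for k = 0..99 and locate the peak
rxy = [float(np.mean(x[: n - k] * y[k:])) for k in range(100)]
k_hat = int(np.argmax(rxy))
print(k_hat)   # the peak lag recovers the delay: 25
```

At the true delay, each product term reduces to $x(t)^2$, so the estimate approaches the mean-square value; at other lags the products are uncorrelated and average toward zero.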
It can be shown, however, that
$|R_{XY}(\tau)| \le \sqrt{R_{XX}(0)\,R_{YY}(0)}$
As a note, the crosscorrelation may not achieve this maximum anywhere.
4) If X and Y are statistically independent, then the ordering is not important:
$R_{XY}(\tau) = E[X(t)\,Y(t+\tau)] = E[X(t)]\,E[Y(t+\tau)] = \bar X\,\bar Y$
and $R_{XY}(\tau) = \bar X\,\bar Y = R_{YX}(\tau)$
5) If X is a stationary random process that is differentiable with respect to time, the crosscorrelation of the signal and its derivative is given by
$R_{X\dot X}(\tau) = \dfrac{dR_{XX}(\tau)}{d\tau}$
Defining the derivative as a limit:
$\dot X(t) = \lim_{\varepsilon\to 0}\dfrac{X(t+\varepsilon) - X(t)}{\varepsilon}$
the crosscorrelation is
$R_{X\dot X}(\tau) = E\!\left[X(t)\,\lim_{\varepsilon\to 0}\dfrac{X(t+\tau+\varepsilon) - X(t+\tau)}{\varepsilon}\right] = \lim_{\varepsilon\to 0}\dfrac{R_{XX}(\tau+\varepsilon) - R_{XX}(\tau)}{\varepsilon} = \dfrac{dR_{XX}(\tau)}{d\tau}$
Similarly,
$R_{\dot X\dot X}(\tau) = -\dfrac{d^2 R_{XX}(\tau)}{d\tau^2}$

6.8. Examples and Applications of Crosscorrelation Functions
RF signal receiver mixing and down-conversion. A and B are random variables, and θ is a uniformly distributed random variable on $(0, 2\pi)$ that is independent of both A and B:
$X(t) = A\cos(2\pi f_0 t + \theta) + B\sin(2\pi f_0 t + \theta)$, $Y(t) = \sin(2\pi f_0 t + \theta)$
Forming the cross-correlation and averaging over θ (the terms at twice the carrier frequency average to zero):
$R_{XY}(\tau) = \dfrac{E[A]}{2}\sin(2\pi f_0\tau) + \dfrac{E[B]}{2}\cos(2\pi f_0\tau)$
A mixing phase φ, as in $Y(t) = \sin(2\pi f_0 t + \theta + \phi)$, can modify the output:
$R_{XY}(\tau) = \dfrac{E[A]}{2}\sin(2\pi f_0\tau + \phi) + \dfrac{E[B]}{2}\cos(2\pi f_0\tau + \phi)$
For $\phi = 0$ (or $n\pi$), the cross-correlation becomes
$R_{XY}(\tau) = \dfrac{E[A]}{2}\sin(2\pi f_0\tau) + \dfrac{E[B]}{2}\cos(2\pi f_0\tau)$
For $\phi = \pi/2$, the cross-correlation becomes
$R_{XY}(\tau) = \dfrac{E[A]}{2}\cos(2\pi f_0\tau) - \dfrac{E[B]}{2}\sin(2\pi f_0\tau)$

6.9. Correlation Matrices For Sampled Functions
7. Spectral Density
7.1. Introduction

7.2. Relation of Spectral Density to the Fourier Transform
The spectral density function (or power spectral density) is defined for both f and ω as:
$S_{YY}(\omega) = \lim_{T\to\infty}\dfrac{E\big[|Y_T(\omega)|^2\big]}{2T}$ or $S_{YY}(f) = \lim_{T\to\infty}\dfrac{E\big[|Y_T(f)|^2\big]}{2T}$
The 2nd moment based on the spectral densities is:
$\overline{Y^2} = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{YY}(\omega)\,d\omega = \int_{-\infty}^{\infty} S_{YY}(f)\,df$
Note: the result is a power spectral density (in W/Hz), not a voltage spectrum (in V/Hz) as you would normally compute for a Fourier transform.
Wiener-Khinchine relation: For WSS random processes, the autocorrelation function is time based and, for ergodic processes, describes all sample functions in the ensemble. In these cases the Wiener-Khinchine relation is valid, allowing us to write
$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} E[X(t)\,X(t+\tau)]\,e^{-j\omega\tau}\,d\tau$
For an ergodic process, we can use time-based processing to arrive at an equivalent result: the Fourier transform of the time autocorrelation
$\mathcal{R}_{XX}(\tau) = \lim_{T\to\infty}\dfrac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\,dt$
is
$\mathcal{F}\{\mathcal{R}_{XX}(\tau)\} = \lim_{T\to\infty}\dfrac{X_T(\omega)\,X_T^*(\omega)}{2T} = \lim_{T\to\infty}\dfrac{|X_T(\omega)|^2}{2T}$
We can therefore define a power spectral density for the ensemble as:
$S_{XX}(\omega) = \mathcal{F}\{R_{XX}(\tau)\} = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau$
Based on this definition, we also have the inverse:
$R_{XX}(\tau) = \mathcal{F}^{-1}\{S_{XX}(\omega)\} = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega$

7.3. Properties of Spectral Density
The power spectral density is always real, non-negative, and an even function of ω. As an even function, the PSD may be expected to have a rational polynomial form:
$S_{XX}(\omega) = S_0\,\dfrac{\omega^{2n} + a_{2n-2}\,\omega^{2n-2} + a_{2n-4}\,\omega^{2n-4} + \dots + a_2\,\omega^2 + a_0}{\omega^{2m} + b_{2m-2}\,\omega^{2m-2} + b_{2m-4}\,\omega^{2m-4} + \dots + b_2\,\omega^2 + b_0}$
where m > n.
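The statement that the mean-square value is the integral of the spectral density has an exact discrete analogue (Parseval's relation): the average of the periodogram $|X[k]|^2/N$ over the N FFT bins equals the time-domain mean square. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1024)

mean_square_time = float(np.mean(x ** 2))           # R_XX(0) estimate from the samples
periodogram = np.abs(np.fft.fft(x)) ** 2 / len(x)   # discrete power spectral density estimate
mean_square_freq = float(np.mean(periodogram))      # "integrate" the PSD over frequency

print(mean_square_time, mean_square_freq)   # equal up to floating-point roundoff
```

This equality holds for any finite record, not just in the limit, because the DFT is an orthogonal transform up to the 1/N scaling.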
Notice the squared terms: any odd power would define an anti-symmetric element that, by definition and proof, cannot exist!
Finite property in frequency: the power spectral density must also approach zero as ω approaches infinity. Therefore,
$\lim_{\omega\to\infty} S_{XX}(\omega) = \lim_{\omega\to\infty} S_0\,\dfrac{\omega^{2n} + a_{2n-2}\,\omega^{2n-2} + \dots + a_2\,\omega^2 + a_0}{\omega^{2m} + b_{2m-2}\,\omega^{2m-2} + \dots + b_2\,\omega^2 + b_0} = \lim_{\omega\to\infty}\dfrac{S_0}{\omega^{2(m-n)}} = 0$
For m > n, the condition will be met.

7.4. Spectral Density and the Complex Frequency Plane

7.5. Mean-Square Values From Spectral Density
$R_{XX}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega$
$R_{XX}(0) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,d\omega$
Works for discrete time as well, but watch the integral bounds.

7.6. Relation of Spectral Density to the Autocorrelation Function
The power spectral density is always real, non-negative, and an even function in ω (or f). You can convert between the domains using:
The Fourier transform in ω:
$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j\omega\tau}\,d\tau$ and $R_{XX}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{j\omega\tau}\,d\omega$
The Fourier transform in f:
$S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-j2\pi f\tau}\,d\tau$ and $R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f)\,e^{j2\pi f\tau}\,df$
The 2-sided Laplace transform:
$S_{XX}(s) = \int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-s\tau}\,d\tau$ and $R_{XX}(\tau) = \dfrac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} S_{XX}(s)\,e^{s\tau}\,ds$
Deriving the mean-square value from the power spectral density: the mean-squared value of a random process equals the 0th lag of the autocorrelation:
$E[X^2] = R_{XX}(0) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\,d\omega = \int_{-\infty}^{\infty} S_{XX}(f)\,df$
Therefore, to find the second moment, integrate the PSD over all frequencies.

7.7. White Noise
Noise is inherently defined as a random process.
You may be familiar with "thermal" noise, based on the energy of an atom and the mean free path that it can travel. As a random process, whenever "white noise" is measured, the values are uncorrelated with each other, no matter how close together the samples are taken in time. Further, we envision "white noise" as containing all spectral content, with no explicit peaks or valleys in the power spectral density. As a result, we define white noise as
$R_{XX}(\tau) = S_0\,\delta(\tau)$ and $S_{XX}(\omega) = S_0 = \dfrac{N_0}{2}$
This is an approximation or simplification because the area under the power spectral density is infinite! For typical applications, we are interested in band-limited white noise, where
$S_{XX}(f) = \dfrac{N_0}{2}$ for $-W \le f \le W$, and $0$ otherwise.
The equivalent noise power is then:
$E[X^2] = R_{XX}(0) = \int_{-W}^{W} \dfrac{N_0}{2}\,df = \dfrac{N_0}{2}\cdot 2W = N_0\,W$

7.8. Cross-Spectral Density
The Fourier transform in ω:
$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\,e^{-j\omega\tau}\,d\tau$ and $S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\,e^{-j\omega\tau}\,d\tau$
$R_{XY}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,e^{j\omega\tau}\,d\omega$ and $R_{YX}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{YX}(\omega)\,e^{j\omega\tau}\,d\omega$
Properties of the functions:
$S_{XY}(\omega) = S_{YX}^*(\omega)$
Since the crosscorrelation is real, the real portion of the spectrum is even and the imaginary portion of the spectrum is odd.

Generic Example of a Correlation and Power Spectral Density
$X(t) = A + B\sin(2\pi f_1 t + \theta_1) + C\cos(2\pi f_2 t + \theta_2)$
where the phase angles are uniformly distributed random variables from 0 to 2π.
Form $R_{XX}(\tau) = E[X(t)\,X(t+\tau)]$ and expand the product. Assuming A, B, C, θ₁, and θ₂ are mutually independent and the phases are uniform on $(0, 2\pi)$, all cross terms have zero mean, and the expectations over the phases leave
$R_{XX}(\tau) = E[A^2] + \dfrac{E[B^2]}{2}\cos(2\pi f_1\tau) + \dfrac{E[C^2]}{2}\cos(2\pi f_2\tau)$
which leads to (assuming A, B, and C take on fixed values)
$R_{XX}(\tau) = A^2 + \dfrac{B^2}{2}\cos(2\pi f_1\tau) + \dfrac{C^2}{2}\cos(2\pi f_2\tau)$
Forming the PSD by taking the Fourier transform:
$S_{XX}(f) = A^2\,\delta(f) + \dfrac{B^2}{4}\big[\delta(f - f_1) + \delta(f + f_1)\big] + \dfrac{C^2}{4}\big[\delta(f - f_2) + \delta(f + f_2)\big]$

7.9. Autocorrelation Function Estimate of Spectral Density
7.10. Periodogram Estimate of Spectral Density
7.11. Examples and Applications of Spectral Density
Determine the autocorrelation of the binary sequence.
	x(t) = Σ_k A_k p(t − t_0 − kT)

Determine the autocorrelation of the discrete-time sequence (leaving out the pulse for now):

	y(t) = Σ_k A_k δ(t − t_0 − kT)

	R_yy(τ) = E[y(t) y(t+τ)] = E[ Σ_k A_k δ(t − t_0 − kT) Σ_j A_j δ(t + τ − t_0 − jT) ]

Separating the j = k terms from the j ≠ k terms:

	R_yy(τ) = Σ_k E[A_k²] E[δ(t − t_0 − kT) δ(t + τ − t_0 − kT)]
		+ Σ_k Σ_{j≠k} E[A_j] E[A_k] E[δ(t − t_0 − kT) δ(t + τ − t_0 − jT)]

(the amplitudes are independent of the timing, and A_j and A_k are independent for j ≠ k). Averaging over the epoch t_0, uniform on a period T:

	R_yy(τ) = (1/T) E[A_k²] δ(τ) + (1/T) E[A_k]² Σ_{m≠0} δ(τ − mT)

or equivalently

	R_yy(τ) = (1/T) (E[A_k²] − E[A_k]²) δ(τ) + (1/T) E[A_k]² Σ_{m=−∞}^{∞} δ(τ − mT)

Taking the Fourier transform (using Σ_m δ(τ − mT) ↔ (1/T) Σ_m δ(f − m/T)):

	S_yy(f) = (1/T) (E[A_k²] − E[A_k]²) + (1/T²) E[A_k]² Σ_{m=−∞}^{∞} δ(f − m/T)

From here, for the binary sequence with equally likely amplitudes ±A and a rectangular pulse of width T, it can be shown that

	R_xx(τ) = A² (1 − |τ|/T) for |τ| ≤ T, and 0 otherwise

Restoring the pulse shape in general:

	S_xx(f) = |P(f)|² S_yy(f)

	S_xx(f) = (1/T) (E[A_k²] − E[A_k]²) |P(f)|² + (1/T²) E[A_k]² Σ_m |P(m/T)|² δ(f − m/T)

This is a magnitude-scaled version of the power spectral density of the pulse shape, plus numerous impulses with magnitudes shaped by the pulse spectrum at regular frequency intervals based on the signal periodicity. The result is pictured in the textbook.

8. Response of Linear Systems to Random Inputs

8.1. Introduction

Linear transformation of signals: convolution in the time domain.

	y(t) = h(t) * x(t)	↔	Y(s) = H(s) X(s)

	x(t), X(s) → [ h(t), H(s) ] → y(t), Y(s)

The convolution integrals (applying a causal filter):

	y(t) = ∫_0^∞ x(t − λ) h(λ) dλ	or	y(t) = ∫_{−∞}^{t} h(t − λ) x(λ) dλ

8.2. Analysis in the Time Domain

8.3.
Mean and Mean-Square Value of System Output

The mean value at a system output:

	E[Y(t)] = E[ ∫_0^∞ X(t − λ) h(λ) dλ ] = ∫_0^∞ E[X(t − λ)] h(λ) dλ

For a wide-sense stationary process, this results in

	E[Y(t)] = ∫_0^∞ E[X] h(λ) dλ = E[X] ∫_0^∞ h(λ) dλ

The coherent gain of a filter is defined as

	h_gain = ∫_0^∞ h(t) dt

Therefore,

	E[Y(t)] = E[X] h_gain

The mean-square value at a system output:

	E[Y(t)²] = E[ ( ∫_0^∞ X(t − λ) h(λ) dλ )² ]

	E[Y(t)²] = ∫_0^∞ ∫_0^∞ h(λ_1) R_XX(λ_1 − λ_2) h(λ_2) dλ_2 dλ_1

Therefore, convolve the autocorrelation function with the filter and then sum the filter-weighted result from 0 to infinity.

8.4. Autocorrelation Function of System Output

	R_YY(τ) = E[Y(t) Y(t+τ)] = E[ (h * x)(t) · (h * x)(t+τ) ]

	R_YY(τ) = E[ ∫_0^∞ x(t − λ_1) h(λ_1) dλ_1 ∫_0^∞ x(t + τ − λ_2) h(λ_2) dλ_2 ]

	R_YY(τ) = ∫_0^∞ ∫_0^∞ h(λ_1) R_XX(τ + λ_1 − λ_2) h(λ_2) dλ_2 dλ_1

	R_YY(τ) = ∫_0^∞ h(λ_1) (R_XX * h)(τ + λ_1) dλ_1 = h(−τ) * h(τ) * R_XX(τ)

Notice that first you convolve the input autocorrelation function with the filter function, and then you convolve the result with a time-inverted version of the filter!

8.5. Cross-Correlation between Input and Output

	R_XY(τ) = E[X(t) Y(t+τ)] = E[ x(t) · (h * x)(t+τ) ]

	R_XY(τ) = ∫_0^∞ R_XX(τ − λ_1) h(λ_1) dλ_1

This is the convolution of the autocorrelation with the filter. What about the other cross-correlation?

	R_YX(τ) = E[Y(t) X(t+τ)] = E[ (h * x)(t) · x(t+τ) ]

	R_YX(τ) = ∫_0^∞ R_XX(τ + λ_1) h(λ_1) dλ_1

This is the correlation of the autocorrelation with the filter, inherently different from the previous result, but equal at τ = 0.

8.6. Example of Time-Domain System Analysis

8.7. Analysis in the Frequency Domain

8.8. Spectral Density at the System Output

	S_YY(ω) = F{R_YY(τ)} = ∫_0^∞ ∫_0^∞ h(λ_1) h(λ_2) ∫_{−∞}^{∞} R_XX(τ + λ_1 − λ_2) exp(−jωτ) dτ dλ_2 dλ_1

	S_YY(ω) = S_XX(ω) H(ω) H(−ω)

Therefore

	S_YY(ω) = S_XX(ω) |H(ω)|²

8.9.
Cross-Spectral Densities between Input and Output

	S_XY(s) = S_XX(s) H(s)	and	S_YX(s) = S_XX(s) H(−s)

8.10. Examples of Frequency-Domain Analysis

Noise in a linear feedback system loop: the forward path is A/(s(s+1)), the loop is driven by the error X(s) − Y(s), and the noise N(s) enters at the output. Linear superposition of X to Y and N to Y:

	Y(s) = [A/(s(s+1))] [X(s) − Y(s)] + N(s)

	Y(s) [1 + A/(s(s+1))] = [A/(s(s+1))] X(s) + N(s)

	Y(s) [(s² + s + A)/(s² + s)] = [A/(s² + s)] X(s) + N(s)

	Y(s) = [A/(s² + s + A)] X(s) + [(s² + s)/(s² + s + A)] N(s)

There are effectively two filters, one applied to X and a second applied to N:

	H_X(s) = A/(s² + s + A)	and	H_N(s) = (s² + s)/(s² + s + A)

	Y(s) = H_X(s) X(s) + H_N(s) N(s)

Generic definition of the output power spectral density:

	S_YY(ω) = |H_X(ω)|² S_XX(ω) + |H_N(ω)|² S_NN(ω)

8.11. Numerical Computation of System Output

9. Optimum Linear Systems

9.1. Introduction

9.2. Criteria of Optimality

9.3. Restrictions on the Optimum System

9.4. Optimization by Parameter Adjustment

9.5. Systems That Maximize Signal-to-Noise Ratio

SNR is defined as

	SNR = P_Signal / P_Noise = E[s(t)²] / (N_0 B_EQ)

For a linear system, we have

	s_o(t) + n_o(t) = ∫_0^∞ h(λ) [s(t − λ) + n(t − λ)] dλ

The output SNR can be described as

	SNR_out = P_Signal / P_Noise = E[s_o(t)²] / E[n_o(t)²] = E[s_o(t)²] / (N_o B_EQ)

or, substituting,

	SNR_out = [ ∫_0^∞ h(λ) s(t − λ) dλ ]² / [ (N_o/2) ∫_0^∞ h(t)² dt ]

To achieve the maximum SNR, the equality condition of Schwarz's inequality must hold:

	[ ∫_0^∞ h(λ) s(t − λ) dλ ]² ≤ ∫_0^∞ h(λ)² dλ · ∫_0^∞ s(t − λ)² dλ

This condition can be met for

	h(λ) = K s(t − λ), λ ≥ 0

where K is an arbitrary gain constant. The desired impulse response is simply the time inverse of the signal waveform about time t, a fixed moment chosen for optimality.
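A discrete-time sketch of this result (the pulse shape and noise level N_0 below are illustrative assumptions): with the filter taps set to the time-reversed signal, the output sampled at the right instant equals the signal energy E, and the output SNR reaches 2E/N_0.

```python
import numpy as np

# Discrete matched-filter sketch: h[k] = s[n0 - k] (time-reversed signal).
# The pulse samples and N0 are illustrative assumptions.
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # known signal samples
N0 = 0.5                                   # white-noise level

h = s[::-1]                        # matched filter impulse response
E = float(np.sum(s**2))            # signal energy

peak = float(np.sum(h[::-1] * s))  # filter output at the sampling instant = E
snr_out = peak**2 / ((N0 / 2.0) * float(np.sum(h**2)))

print(peak, snr_out, 2 * E / N0)   # peak = E; snr_out = 2E/N0
```

Any other choice of h with the same energy gives a strictly smaller SNR, which is exactly the Schwarz equality condition at work.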
If this is done, the maximum filter output SNR (with K = 1) can be computed using

	∫_0^∞ h(λ)² dλ = ∫_0^∞ s(t − λ)² dλ = ∫_{−∞}^{t} s(λ)² dλ

so that

	max SNR_out = (2/N_o) ∫_{−∞}^{t} s(λ)² dλ

This filter concept is called a matched filter.

9.6. Systems That Minimize Mean-Square Error

The error function:

	Err(t) = X(t) − Y(t)	where	Y(t) = ∫_0^∞ h(λ) [X(t − λ) + N(t − λ)] dλ

Computing the error power:

	E[Err²] = (1/2πj) ∫_{−j∞}^{j∞} [ S_XX(s) (1 − H(s)) (1 − H(−s)) + S_NN(s) H(s) H(−s) ] ds

	E[Err²] = (1/2πj) ∫_{−j∞}^{j∞} [ (S_XX(s) + S_NN(s)) H(s) H(−s) − S_XX(s) H(s) − S_XX(s) H(−s) + S_XX(s) ] ds

Defining the spectral factorization

	F_C(s) F_C(−s) = S_XX(s) + S_NN(s)

Step 1: So, let's see about some interpretations. First, let H_1(s) = 1/F_C(s), which should be causal. Note that for this filter

	(S_XX(s) + S_NN(s)) · (1/F_C(s)) · (1/F_C(−s)) = 1

This is called a whitening filter, as it forces the signal-plus-noise PSD to unity (white noise).

Step 2: Letting H(s) = H_1(s) H_2(s) = H_2(s)/F_C(s),

	E[Err²] = (1/2πj) ∫_{−j∞}^{j∞} [ H_2(s) H_2(−s) − H_2(s) S_XX(s)/F_C(−s) − H_2(−s) S_XX(s)/F_C(s) + S_XX(s) ] ds

Completing the square:

	E[Err²] = (1/2πj) ∫_{−j∞}^{j∞} [ (H_2(s) − S_XX(s)/F_C(−s)) (H_2(−s) − S_XX(s)/F_C(s)) + S_XX(s) S_NN(s)/(F_C(s) F_C(−s)) ] ds

Minimizing the terms containing H_2(s), we must focus on

	H_2(s) = S_XX(s)/F_C(−s)	and	H_2(−s) = S_XX(s)/F_C(s)

Letting H_2 be defined for the appropriate left- or right-half-plane poles:

	H_2(s) = [ S_XX(s)/F_C(−s) ]_LHP	and	H_2(−s) = [ S_XX(s)/F_C(s) ]_RHP

The composite filter is then

	H(s) = H_1(s) H_2(s) = (1/F_C(s)) [ S_XX(s)/F_C(−s) ]_LHP

This solution is often called a Wiener filter and is widely applied when the signal and noise statistics are known a priori!

Appendices

A. Mathematical Tables
A.1. Trigonometric Identities
A.2. Indefinite Integrals
A.3.
Definite Integrals
A.4. Fourier Transform Operations
A.5. Fourier Transforms
A.6. One-Sided Laplace Transforms
B. Frequently Encountered Probability Distributions
B.1. Discrete Probability Functions
B.2. Continuous Distributions
C. Binomial Coefficients
D. Normal Probability Distribution Function
E. The Q-Function
F. Student's t Distribution Function
G. Computer Computations
H. Table of Correlation Function–Spectral Density Pairs
I. Contour Integration