Chapter 6: Correlation Functions



6-1 Introduction

6-2 Example: Autocorrelation Function of a Binary Process

6-3 Properties of Autocorrelation Functions

6-4 Measurement of Autocorrelation Functions

6-5 Examples of Autocorrelation Functions

6-6 Crosscorrelation Functions

6-7 Properties of Crosscorrelation Functions

6-8 Examples and Applications of Crosscorrelation Functions

6-9 Correlation Matrices for Sampled Functions

Concepts

Autocorrelation of a Binary/Bipolar Process and of a Sinusoidal Process

Linear Transformations

Properties of Autocorrelation Functions

Measurement of Autocorrelation Functions

An Anthology of Signals

Crosscorrelation

Properties of Crosscorrelation

Examples and Applications of Crosscorrelation

Correlation Matrices For Sampled Functions

Notes and figures are based on or taken from materials in the course textbook: Probabilistic Methods of Signal and System Analysis (3rd ed.) by George R. Cooper and Clare D. McGillem; Oxford University Press, 1999. ISBN: 0-19-512354-9.

B.J. Bazuin, Spring 2015, ECE 3800


The Autocorrelation Function

The autocorrelation is defined as:

$$R_{XX}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1\, x_2\, f(x_1, x_2)\; dx_1\, dx_2$$

where $X_1 = X(t_1)$ and $X_2 = X(t_2)$.

The above function is valid for all processes, stationary and non-stationary.

For WSS processes:

$$R_{XX}(t_1, t_2) = E[X(t)\,X(t+\tau)] = R_{XX}(\tau), \qquad \tau = t_2 - t_1$$

If the process is ergodic, the time average is equivalent to the probabilistic expectation:

$$\mathcal{R}_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\; dt = \langle x(t)\,x(t+\tau)\rangle$$

and

$$\mathcal{R}_{XX}(\tau) = R_{XX}(\tau)$$

Properties of Autocorrelation Functions

1) $R_{XX}(0) = E[X^2] = \overline{X^2}$, or $\mathcal{R}_{XX}(0) = \langle x^2(t) \rangle$.

2) $R_{XX}(\tau)$ is an even function: $R_{XX}(-\tau) = R_{XX}(\tau)$.

3) $|R_{XX}(\tau)| \le R_{XX}(0)$: the largest magnitude occurs at the zeroth lag.

4) If X has a DC component, then $R_{XX}$ has a constant component:

$$X(t) = \bar{X} + N(t) \quad\Rightarrow\quad R_{XX}(\tau) = \bar{X}^2 + R_{NN}(\tau)$$

5) If X has a periodic component, then $R_{XX}$ will also have a periodic component of the same period.

$$X(t) = A\cos(\omega t + \theta), \qquad \theta \sim \text{uniform}(0, 2\pi)$$

$$R_{XX}(\tau) = E[X(t)\,X(t+\tau)] = \frac{A^2}{2}\cos(\omega\tau)$$

6) If X is ergodic and zero mean and has no periodic component, then

$$\lim_{|\tau| \to \infty} R_{XX}(\tau) = 0$$

7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying the permissible shapes is in terms of the Fourier transform of the autocorrelation function, which must be non-negative:

$$\mathcal{F}\{R_{XX}(\tau)\} \ge 0 \quad \text{for all } \omega$$
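As a quick illustration of property 7 (not from the text), the following Python sketch samples two candidate functions on a lag grid and inspects the real part of the FFT as a stand-in for the Fourier transform; the candidate functions and grid values are illustrative choices only.

```python
import numpy as np

# Sample candidate autocorrelation functions on a symmetric grid of lags.
dtau = 0.01
k = np.arange(-5000, 5001)          # lag indices
tau = k * dtau                      # lags from -50 to +50

candidates = {
    "exp(-|tau|)":     np.exp(-np.abs(tau)),        # even, peak at 0
    "exp(-|tau - 2|)": np.exp(-np.abs(tau - 2.0)),  # shifted: not even, peak not at 0
}

for name, r in candidates.items():
    # FFT of the lag-domain samples approximates the Fourier transform of R_XX(tau);
    # ifftshift places the tau = 0 sample at index 0, as the FFT expects.
    S = np.fft.fft(np.fft.ifftshift(r)) * dtau
    valid = np.all(S.real > -1e-6)   # allow tiny numerical negatives
    print(f"{name}: min Re S(w) = {S.real.min(): .4f} -> "
          f"{'consistent with' if valid else 'violates'} property 7")
```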


The Crosscorrelation Function

The crosscorrelation is defined as:

$$R_{XY}(t_1, t_2) = E[X_1 Y_2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1\, y_2\, f(x_1, y_2)\; dx_1\, dy_2$$

$$R_{YX}(t_1, t_2) = E[Y_1 X_2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y_1\, x_2\, f(y_1, x_2)\; dy_1\, dx_2$$

where $X_1 = X(t_1)$, $Y_2 = Y(t_2)$, $Y_1 = Y(t_1)$, and $X_2 = X(t_2)$.

The above functions are valid for all processes, jointly stationary and non-stationary.

For jointly WSS processes:

$$R_{XY}(t_1, t_1 + \tau) = E[X(t)\,Y(t+\tau)] = R_{XY}(\tau)$$

$$R_{YX}(t_1, t_1 + \tau) = E[Y(t)\,X(t+\tau)] = R_{YX}(\tau)$$

Note: the order of the subscripts is important for cross-correlation!

If the processes are jointly ergodic, the time average is equivalent to the probabilistic expectation, or

$$\mathcal{R}_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} x(t)\,y(t+\tau)\; dt = \langle x(t)\,y(t+\tau)\rangle$$

$$\mathcal{R}_{YX}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} y(t)\,x(t+\tau)\; dt = \langle y(t)\,x(t+\tau)\rangle$$

and

$$\mathcal{R}_{XY}(\tau) = R_{XY}(\tau), \qquad \mathcal{R}_{YX}(\tau) = R_{YX}(\tau)$$

Properties of Crosscorrelation Functions

1) The values at the zeroth lag have no particular significance and do not represent mean-square values. It is true, however, that the "ordered" crosscorrelations must be equal at 0:

$$R_{XY}(0) = R_{YX}(0) \qquad \text{or} \qquad \mathcal{R}_{XY}(0) = \mathcal{R}_{YX}(0)$$

2) Crosscorrelation functions are not generally even functions. However, there is an antisymmetry to the ordered crosscorrelations:

$$R_{XY}(-\tau) = R_{YX}(\tau)$$

3) The crosscorrelation does not necessarily have its maximum at the zeroth lag.

It can be shown, however, that

$$|R_{XY}(\tau)| \le \sqrt{R_{XX}(0)\,R_{YY}(0)}$$

As a note, the crosscorrelation may not achieve this maximum anywhere …

4) If X and Y are statistically independent, then the ordering is not important:

$$R_{XY}(\tau) = E[X(t)\,Y(t+\tau)] = E[X(t)]\,E[Y(t+\tau)] = \bar{X}\,\bar{Y}$$

and

$$R_{XY}(\tau) = \bar{X}\,\bar{Y} = R_{YX}(\tau)$$


5) If X is a stationary random process that is differentiable with respect to time, the crosscorrelation of the signal and its derivative is given by

$$R_{X\dot{X}}(\tau) = \frac{dR_{XX}(\tau)}{d\tau}$$

Similarly,

$$R_{\dot{X}\dot{X}}(\tau) = -\frac{d^2 R_{XX}(\tau)}{d\tau^2}$$


Measurement of The Autocorrelation Function

We love to use time averages for everything. For wide-sense stationary, ergodic random processes, time averages are equivalent to the statistical (probability-based) values.

$$\mathcal{R}_{XX}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} x(t)\,x(t+\tau)\; dt = \langle x(t)\,x(t+\tau)\rangle$$

Using this fact, how can we use short-term time averages to generate auto- or cross-correlation functions?

An estimate of the autocorrelation is defined as:

$$\hat{R}_{XX}(\tau) = \frac{1}{T - \tau}\int_{0}^{T - \tau} x(t)\,x(t+\tau)\; dt$$

Note that the time average is performed across as much of the signal as is available after the time shift by $\tau$.

Digital Time Sample Correlation

In most practical cases, the operation is performed on digital samples taken at a fixed time interval $\Delta t$. For a lag $\tau = k\,\Delta t$, with $N + 1$ samples spanning the available record, we have:

$$\hat{R}_{XX}(k\,\Delta t) = \frac{1}{N - k + 1}\sum_{i=0}^{N-k} x(i\,\Delta t)\,x(i\,\Delta t + k\,\Delta t)$$

or, in sample-index notation,

$$\hat{R}_{XX}(k) = \frac{1}{N - k + 1}\sum_{i=0}^{N-k} x(i)\,x(i + k)$$

In computing this autocorrelation, the weighting term $1/(N - k + 1)$ becomes 1 when $k = N$. At that point the entire summation consists of one product and is therefore a poor estimate of the autocorrelation. For useful results, $k \ll N$!


It can be shown that the expected value of the estimated autocorrelation equals the actual autocorrelation; therefore, this is an unbiased estimate.

$$E\big[\hat{R}_{XX}(k\,\Delta t)\big] = E\left[\frac{1}{N - k + 1}\sum_{i=0}^{N-k} x(i\,\Delta t)\,x(i\,\Delta t + k\,\Delta t)\right]$$

$$E\big[\hat{R}_{XX}(k\,\Delta t)\big] = \frac{1}{N - k + 1}\sum_{i=0}^{N-k} E\big[x(i\,\Delta t)\,x(i\,\Delta t + k\,\Delta t)\big] = \frac{1}{N - k + 1}\sum_{i=0}^{N-k} R_{XX}(k\,\Delta t)$$

$$E\big[\hat{R}_{XX}(k\,\Delta t)\big] = \frac{N - k + 1}{N - k + 1}\, R_{XX}(k\,\Delta t) = R_{XX}(k\,\Delta t)$$

As noted, the validity of each of the summed autocorrelation lags can and should be brought into question as k approaches N.

Biased Estimate of Autocorrelation

As a result, a biased estimate of the autocorrelation is commonly used. The biased estimate is defined as:

$$\tilde{R}_{XX}(k\,\Delta t) = \frac{1}{N + 1}\sum_{i=0}^{N-k} x(i\,\Delta t)\,x(i\,\Delta t + k\,\Delta t)$$

Here, a constant weight is used instead of one based on the number of elements summed. This estimate has the property that the estimated autocorrelation is pulled toward zero as k approaches N.

The expected value of this estimate can be shown to be

$$E\big[\tilde{R}_{XX}(k\,\Delta t)\big] = \left(1 - \frac{k}{N + 1}\right) R_{XX}(k\,\Delta t)$$

The variance of this estimate can be shown to be (math not done at this level)

$$\mathrm{Var}\big[\tilde{R}_{XX}(k\,\Delta t)\big] \approx \frac{2}{N}\sum_{m=-M}^{M} R_{XX}^{2}(m\,\Delta t)$$

where $M\,\Delta t$ is a lag beyond which $R_{XX}$ is negligible.

This equation can be used to estimate the number of time samples needed for a useful estimate.
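A small sketch of the two estimators above applied to a simulated record; the data, record length, and lags are illustrative choices only.

```python
import numpy as np

def autocorr_estimates(x, max_lag):
    """Unbiased and biased autocorrelation estimates of a sampled record x[0..N]."""
    x = np.asarray(x, dtype=float)
    n = len(x)                                 # n = N + 1 samples
    R_unbiased = np.empty(max_lag + 1)
    R_biased = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        prod = x[: n - k] * x[k:]              # x(i*dt) * x(i*dt + k*dt), i = 0 .. N-k
        R_unbiased[k] = prod.sum() / (n - k)   # 1/(N - k + 1): unbiased
        R_biased[k] = prod.sum() / n           # 1/(N + 1): biased, pulled toward zero
    return R_unbiased, R_biased

# Example: zero-mean white noise, whose true R_XX(k) is sigma^2 at k = 0 and 0 elsewhere.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=5000)
Ru, Rb = autocorr_estimates(x, max_lag=5)
print("unbiased:", np.round(Ru, 3))
print("biased:  ", np.round(Rb, 3))
```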


6-5. Examples of Autocorrelation Functions

Differential (bipolar) binary data function already discussed:

$$x(t) = \sum_{k} a_k\, \mathrm{rect}\!\left(\frac{t - kT - t_0}{T}\right)$$

where the bits are independent with

$$a_k = \begin{cases} +A, & \Pr = p \\ -A, & \Pr = 1 - p \end{cases} \qquad E[a_k] = A(2p - 1)$$

and the epoch $t_0$ is uniformly distributed over one bit period: $f(t_0) = 1/T$ for $0 \le t_0 \le T$.

Then

$$R_{XX}(\tau) = A^2\,(4p - 4p^2)\left(1 - \frac{|\tau|}{T}\right) + A^2\,(4p^2 - 4p + 1), \qquad \text{for } -T \le \tau \le T$$

$$R_{XX}(\tau) = A^2\,(4p^2 - 4p + 1), \qquad \text{for } |\tau| > T$$

For $p = \tfrac{1}{2}$ this reduces to the triangle $R_{XX}(\tau) = A^2(1 - |\tau|/T)$ for $|\tau| \le T$ and 0 elsewhere.
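As a sanity check on the $p = \tfrac{1}{2}$ case, the sketch below simulates one long realization of the bipolar wave and compares the time-average autocorrelation with the triangle above; the amplitude, bit period, sample rate, and record length are arbitrary illustrative values.

```python
import numpy as np

# Simulate the bipolar binary wave (+A / -A, bit period T, random epoch t0), p = 1/2,
# and compare the estimated autocorrelation with the triangle A^2 (1 - |tau|/T).
rng = np.random.default_rng(2)
A, T, fs = 1.0, 1.0, 100.0                          # amplitude, bit period (s), sample rate (Hz)
n_bits = 5000
bits = rng.choice([A, -A], size=n_bits)             # equally likely +A / -A, independent bits
x = np.repeat(bits, int(T * fs))                    # piecewise-constant waveform
x = x[int(rng.uniform(0.0, T) * fs):]               # random starting epoch t0

max_lag = int(2 * T * fs)
lags = np.arange(max_lag + 1) / fs
Rxx = np.array([np.mean(x[: len(x) - k] * x[k:]) for k in range(max_lag + 1)])
theory = np.where(lags <= T, A**2 * (1.0 - lags / T), 0.0)

for k in (0, 50, 100, 150):
    print(f"tau = {lags[k]:.2f}   estimate = {Rxx[k]: .3f}   theory = {theory[k]: .3f}")
```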

Single-sided binary data function already discussed:

$$y(t) = \sum_{k} a_k\, \mathrm{rect}\!\left(\frac{t - kT - t_0}{T}\right)$$

where the bits are independent with

$$a_k = \begin{cases} A, & \Pr = p \\ 0, & \Pr = 1 - p \end{cases} \qquad E[a_k] = A\,p$$

and $t_0$ is again uniform over $(0, T)$ with pdf $1/T$.

Note: this is the same as $y(t) = \tfrac{1}{2}\big[x(t) + A\big]$, the previous signal scaled by 0.5 and offset by A/2, so that

$$R_{YY}(\tau) = E\!\left[\tfrac{1}{4}\big(x(t) + A\big)\big(x(t+\tau) + A\big)\right] = \tfrac{1}{4} R_{XX}(\tau) + \tfrac{A}{2}\bar{X} + \tfrac{A^2}{4}$$

Then

$$R_{YY}(\tau) = A^2\, p(1-p)\left(1 - \frac{|\tau|}{T}\right) + A^2 p^2, \qquad \text{for } -T \le \tau \le T$$

$$R_{YY}(\tau) = A^2 p^2, \qquad \text{for } |\tau| > T$$

For $p = \tfrac{1}{2}$ this is $\tfrac{A^2}{4}(1 - |\tau|/T) + \tfrac{A^2}{4}$: see Figure 6-5, the triangle on a pedestal.


Random Telegraph Autocorrelation

A binary signal with randomly spaced switching times.

$$R_{XX}(\tau) = A^2 \exp(-2a|\tau|)$$

where $a$ is the average number of switchings per unit time.

See Figure 6-6.

Sinusoidal Oscillations and Autocorrelation

The waveforms already observed can be "modulated" functions, where the described function is "mixed" with a sinusoid. Then the following autocorrelation functions may occur.

$$R_{XX}(\tau) = \frac{A^2}{2}\cos(2\pi f_0 \tau)$$

$$R_{XX}(\tau) = \frac{A^2}{2}\exp(-2a|\tau|)\cos(2\pi f_0 \tau)$$

$$R_{XX}(\tau) = \frac{A^2}{2}\left(1 - \frac{|\tau|}{T}\right)\cos(2\pi f_0 \tau), \qquad \text{for } -T \le \tau \le T$$

It is also possible to derive the sinc function ….

$$R_{XX}(\tau) = A^2\, \frac{\sin(W\tau)}{W\tau} = A^2\, \mathrm{sinc}\!\left(\frac{W\tau}{\pi}\right)$$

where $W$ is set by the bandwidth of the process.

See figure 6-7.


Exercise 6-5.2

(a) $R_{XX}(\tau) = A^2 e^{-\tau^2}$

In general, it is an appropriate function: differentiable for all $\tau$, goes to zero at infinity, and its maximum is at 0.

(b) $R_{XX}(\tau) = \tau^2 e^{-\tau^2}$

It is not an appropriate function: the zeroth lag is not the maximum.

(c) $R_{XX}(\tau) = 10\, e^{-(\tau - 2)^2}$

It is not an appropriate function: it is not symmetric about 0 and the zeroth lag is not the maximum.

(d) $R_{XX}(\tau) = A^2\left(\dfrac{\sin\tau}{\tau}\right)^{2}$

In general, it is an appropriate function: differentiable for all $\tau$, goes to zero at infinity, and its maximum is at 0.

(e) $R_{XX}(\tau) = \dfrac{\tau^2 + 2}{\tau^2 + 4}$

It is not an appropriate function: the maximum is not at 0, since the function approaches 1 as $|\tau| \to \infty$ while its value at $\tau = 0$ is 0.5.


Exercise 6-6.1

Find the cross-correlation of the two functions …

$$X(t) = 2\cos(2\pi f t + \theta) \qquad \text{and} \qquad Y(t) = 10\sin(2\pi f t + \theta)$$

Using the time average functions

$$\mathcal{R}_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} x(t)\,y(t+\tau)\; dt$$

$$\mathcal{R}_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} 2\cos(2\pi f t + \theta)\cdot 10\sin\!\big(2\pi f (t+\tau) + \theta\big)\; dt$$

Since the integrand is periodic, the time average can be taken over one period:

$$\mathcal{R}_{XY}(\tau) = 20\, f \int_{0}^{1/f} \cos(2\pi f t + \theta)\,\sin\!\big(2\pi f (t+\tau) + \theta\big)\; dt$$

Using $\cos A\,\sin B = \tfrac{1}{2}\big[\sin(A+B) - \sin(A-B)\big]$,

$$\mathcal{R}_{XY}(\tau) = 10\, f \int_{0}^{1/f} \Big[\sin\!\big(4\pi f t + 2\pi f\tau + 2\theta\big) + \sin(2\pi f \tau)\Big]\; dt$$

The first term integrates to zero over a full period (its cosine antiderivative returns to the same value at both limits), leaving

$$\mathcal{R}_{XY}(\tau) = 10\, f\, \sin(2\pi f \tau)\cdot \frac{1}{f} = 10\sin(2\pi f \tau)$$


Using the probabilistic functions

$$R_{XY}(\tau) = E[X(t)\,Y(t+\tau)] = E\Big[2\cos(2\pi f t + \theta)\cdot 10\sin\!\big(2\pi f (t+\tau) + \theta\big)\Big]$$

$$R_{XY}(\tau) = 20\, E\Big[\tfrac{1}{2}\sin\!\big(4\pi f t + 2\pi f\tau + 2\theta\big) + \tfrac{1}{2}\sin(2\pi f \tau)\Big]$$

$$R_{XY}(\tau) = 10\, E\Big[\sin\!\big(4\pi f t + 2\pi f\tau + 2\theta\big)\Big] + 10\sin(2\pi f \tau)$$

From prior understanding of the uniform random phase ….

The expectation of the double-frequency term over the uniformly distributed phase $\theta$ is zero, so

$$R_{XY}(\tau) = 10\sin(2\pi f \tau)$$
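A quick numerical confirmation of this result using time averages on one realization (a sketch; the frequency, sample rate, and lags below are arbitrary choices).

```python
import numpy as np

# x(t) = 2 cos(2 pi f t + theta), y(t) = 10 sin(2 pi f t + theta);
# the result above says R_XY(tau) = 10 sin(2 pi f tau).
rng = np.random.default_rng(3)
f, fs, T = 2.0, 1000.0, 500.0
t = np.arange(0.0, T, 1.0 / fs)
theta = rng.uniform(0.0, 2.0 * np.pi)
x = 2.0 * np.cos(2.0 * np.pi * f * t + theta)
y = 10.0 * np.sin(2.0 * np.pi * f * t + theta)

for tau in (0.0, 0.05, 0.125, 0.25):
    k = int(round(tau * fs))
    Rxy = np.mean(x[: len(x) - k] * y[k:])     # time-average <x(t) y(t + tau)>
    print(f"tau = {tau:.3f}   estimate = {Rxy: .3f}   "
          f"10*sin(2*pi*f*tau) = {10.0 * np.sin(2.0 * np.pi * f * tau): .3f}")
```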

Exercise 6-6.2

If the random phase is removed, the time-average result does not change, and the time average is the more appropriate computation since there are no random variables left to take an expectation over.


Correlation Matrices for Sampled Functions

Collect a vector of time samples:

$$\mathbf{X} = \begin{bmatrix} X(t_1) \\ X(t_2) \\ \vdots \\ X(t_N) \end{bmatrix}$$

Forming the autocorrelation matrix:

$$R_{XX} = E\big[\mathbf{X}\,\mathbf{X}^{T}\big] = E\begin{bmatrix}
X(t_1)X(t_1) & X(t_1)X(t_2) & \cdots & X(t_1)X(t_N) \\
X(t_2)X(t_1) & X(t_2)X(t_2) & \cdots & X(t_2)X(t_N) \\
\vdots & \vdots & \ddots & \vdots \\
X(t_N)X(t_1) & X(t_N)X(t_2) & \cdots & X(t_N)X(t_N)
\end{bmatrix}$$

If the random process is wide-sense stationary, the autocorrelation matrix depends only on the "lag times".

Assuming that the data consists of consecutive time samples at a constant interval $\Delta t$, so that $t_2 = t_1 + \Delta t$ and, in general, $t_k = t_1 + (k-1)\Delta t$.

Then

$$R_{XX} = \begin{bmatrix}
R_{XX}(0) & R_{XX}(\Delta t) & \cdots & R_{XX}\big((N-1)\Delta t\big) \\
R_{XX}(-\Delta t) & R_{XX}(0) & \cdots & R_{XX}\big((N-2)\Delta t\big) \\
\vdots & \vdots & \ddots & \vdots \\
R_{XX}\big(-(N-1)\Delta t\big) & R_{XX}\big(-(N-2)\Delta t\big) & \cdots & R_{XX}(0)
\end{bmatrix}$$

but since the autocorrelation function is symmetric,

$$R_{XX} = \begin{bmatrix}
R_{XX}(0) & R_{XX}(\Delta t) & \cdots & R_{XX}\big((N-1)\Delta t\big) \\
R_{XX}(\Delta t) & R_{XX}(0) & \cdots & R_{XX}\big((N-2)\Delta t\big) \\
\vdots & \vdots & \ddots & \vdots \\
R_{XX}\big((N-1)\Delta t\big) & R_{XX}\big((N-2)\Delta t\big) & \cdots & R_{XX}(0)
\end{bmatrix}$$


Note that this matrix takes on a banded form along the diagonal; it is a Toeplitz matrix. It has "useful" properties when performing matrix computations!

The covariance matrix can also be described:

$$C_{XX} = E\big[(\mathbf{X} - \bar{\mathbf{X}})(\mathbf{X} - \bar{\mathbf{X}})^{T}\big], \qquad \bar{\mathbf{X}} = \begin{bmatrix} \bar{X}_1 & \bar{X}_2 & \cdots & \bar{X}_N \end{bmatrix}^{T}$$

$$C_{XX} = \begin{bmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \cdots & \rho_{1N}\sigma_1\sigma_N \\
\rho_{21}\sigma_2\sigma_1 & \sigma_2^2 & \cdots & \rho_{2N}\sigma_2\sigma_N \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{N1}\sigma_N\sigma_1 & \rho_{N2}\sigma_N\sigma_2 & \cdots & \sigma_N^2
\end{bmatrix}$$

For a WSS process, all the variances are identical and the correlation coefficients are based on the lag interval.

$$C_{XX} = \sigma^2\begin{bmatrix}
1 & \rho_1 & \cdots & \rho_{N-1} \\
\rho_1 & 1 & \cdots & \rho_{N-2} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{N-1} & \rho_{N-2} & \cdots & 1
\end{bmatrix}$$

where $\rho_k$ is the correlation coefficient at lag $k\,\Delta t$.

An Example

Let

$$R_{XX}(\tau) = 10\, e^{-|\tau|} + 9$$

Generating a 3 x 3 matrix based on time samples that are "1 unit" apart ($\Delta t = 1$) yields:

$$R_{XX} = \begin{bmatrix}
R_{XX}(0) & R_{XX}(\Delta t) & R_{XX}(2\Delta t) \\
R_{XX}(\Delta t) & R_{XX}(0) & R_{XX}(\Delta t) \\
R_{XX}(2\Delta t) & R_{XX}(\Delta t) & R_{XX}(0)
\end{bmatrix} = \begin{bmatrix}
19 & 12.68 & 10.35 \\
12.68 & 19 & 12.68 \\
10.35 & 12.68 & 19
\end{bmatrix}$$

The covariance matrix subtracts the squared mean, $\bar{X}^2 = 9$ (since $R_{XX}(\tau) \to 9$ as $|\tau| \to \infty$):

$$C_{XX} = R_{XX} - \bar{X}^2 = \begin{bmatrix}
10 & 3.68 & 1.35 \\
3.68 & 10 & 3.68 \\
1.35 & 3.68 & 10
\end{bmatrix} = 10\begin{bmatrix}
1 & e^{-1} & e^{-2} \\
e^{-1} & 1 & e^{-1} \\
e^{-2} & e^{-1} & 1
\end{bmatrix} = 10\begin{bmatrix}
1 & 0.368 & 0.135 \\
0.368 & 1 & 0.368 \\
0.135 & 0.368 & 1
\end{bmatrix}$$
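The short sketch below reproduces these two matrices numerically; scipy's `toeplitz` helper is used only as a convenience for the banded structure.

```python
import numpy as np
from scipy.linalg import toeplitz

# Reproduce the 3 x 3 example: R_XX(tau) = 10 exp(-|tau|) + 9, samples 1 unit apart.
R_of = lambda tau: 10.0 * np.exp(-np.abs(tau)) + 9.0
lags = np.arange(3.0)                  # 0, dt, 2*dt with dt = 1
R = toeplitz(R_of(lags))               # symmetric Toeplitz autocorrelation matrix
C = R - 9.0                            # subtract Xbar^2 = 9 to get the covariance matrix

np.set_printoptions(precision=2, suppress=True)
print(R)   # [[19.   12.68 10.35] [12.68 19.   12.68] [10.35 12.68 19.  ]]
print(C)   # 10 * [[1, e^-1, e^-2], [e^-1, 1, e^-1], [e^-2, e^-1, 1]]
```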


Welcome to concepts that you can discover more about in graduate school …

For an alternate interpretation, if you could measure multiple random processes, computation of the Autocorrelation matrix would provide the second moments for each of the processes and the correlation of each process to every other process!

$$\mathbf{X}(t) = \begin{bmatrix} X_1(t) \\ X_2(t) \\ \vdots \\ X_N(t) \end{bmatrix}$$

$$R_{XX} = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} \mathbf{X}(t)\,\mathbf{X}(t)^{T}\; dt
= \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} \begin{bmatrix}
X_1 X_1 & X_1 X_2 & \cdots & X_1 X_N \\
X_2 X_1 & X_2 X_2 & \cdots & X_2 X_N \\
\vdots & \vdots & \ddots & \vdots \\
X_N X_1 & X_N X_2 & \cdots & X_N X_N
\end{bmatrix} dt$$

Sampling and averaging at time steps $t_i = i\,\Delta t = i/f_s$,

$$\hat{R}_{XX} = \frac{1}{2N + 1}\sum_{i=-N}^{N} \mathbf{X}(t_i)\,\mathbf{X}^{H}(t_i)$$

where each element of the matrix is the sample average of $X_j(t_i)\,X_k(t_i)$.

This concept is particularly useful when multiple sensors are measuring the same physical phenomenon. Using multiple sensors, the noise in each sensor is independent, but the signal-of-interest (SOI) is not. The correlations formed can be used to describe the relative strength of the SOI between sensors, while the autocorrelations give the signal-plus-noise power observed in each sensor.
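A small sketch of this idea: three simulated sensors share one signal-of-interest with different gains plus independent noise, and the sample correlation matrix is formed from the snapshots. The signals, gains, and noise levels are made-up illustrative values.

```python
import numpy as np

# Three sensors see the same signal-of-interest (SOI) with different gains,
# plus independent noise; form the sample correlation matrix of the snapshots.
rng = np.random.default_rng(4)
fs, n = 1000.0, 20000
t = np.arange(n) / fs
soi = np.sin(2.0 * np.pi * 7.0 * t)                        # common signal of interest
gains = np.array([1.0, 0.8, 0.5])                          # SOI strength at each sensor
X = gains[:, None] * soi + rng.normal(0.0, 1.0, (3, n))    # 3 sensors x n samples

R_hat = (X @ X.T) / n        # sample average of X(t_i) X(t_i)^T over the record
print(np.round(R_hat, 3))
# Diagonal ~ gain_i^2/2 + noise variance; off-diagonal ~ gain_i*gain_j/2 (the shared SOI).
```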


This concept may also be applied for cross correlation, except the matrix dimensions may no longer be square.

Let

$$\mathbf{X} = \begin{bmatrix} X_1 & X_2 & \cdots & X_N \end{bmatrix}^{T} \qquad \text{and} \qquad \mathbf{Y} = \begin{bmatrix} Y_1 & Y_2 & \cdots & Y_M \end{bmatrix}^{T}$$

The resulting cross-correlation matrix is now N x M.

$$R_{XY} = E\big[\mathbf{X}\,\mathbf{Y}^{T}\big] = E\begin{bmatrix}
X_1 Y_1 & X_1 Y_2 & \cdots & X_1 Y_M \\
X_2 Y_1 & X_2 Y_2 & \cdots & X_2 Y_M \\
\vdots & \vdots & \ddots & \vdots \\
X_N Y_1 & X_N Y_2 & \cdots & X_N Y_M
\end{bmatrix}$$

Sampling and averaging at time steps $t_i = i\,\Delta t = i/f_s$,

$$\hat{R}_{XY} = \frac{1}{2N + 1}\sum_{i=-N}^{N} \mathbf{X}(t_i)\,\mathbf{Y}^{H}(t_i)$$

Now that you have the auto- and cross-correlation matrices, what do you do with them?

Concepts from more advanced courses …


Noisy data sampling

Using these matrices, examples based on "Estimation Theory" (a graduate-level class). The measurement model is

$$\mathbf{y} = H\,\mathbf{x} + \mathbf{v}$$

where x is a signal or symbol of interest, H is a vector or matrix operator, and v is typically a noise or interfering process.

A linear (affine) estimator would be formed as

$$\hat{\mathbf{x}} = K\,\mathbf{y}$$

The estimation error is defined as

$$\tilde{\mathbf{x}} = \mathbf{x} - \hat{\mathbf{x}}$$

Cost function for optimization (minimum mean square error):

$$J = E\big[\tilde{\mathbf{x}}^{H} A\, \tilde{\mathbf{x}}\big]$$

where A is a weighting matrix (A = I for the ordinary mean-square error).

Properties of optimal estimators: the estimation error is orthogonal to any function of the data,

$$E\big[(\mathbf{x} - \hat{\mathbf{x}})\, g(\mathbf{y})^{H}\big] = 0 \qquad \text{and in particular} \qquad E\big[\tilde{\mathbf{x}}\, \hat{\mathbf{x}}^{H}\big] = 0$$

For linear estimation,

$$\tilde{\mathbf{x}} = \mathbf{x} - \hat{\mathbf{x}} = \mathbf{x} - K\,\mathbf{y} = \mathbf{x} - K(H\,\mathbf{x} + \mathbf{v})$$

The mmse is

$$J = E\big[\tilde{\mathbf{x}}^{H}\tilde{\mathbf{x}}\big] = \mathrm{Tr}\big(E\big[\tilde{\mathbf{x}}\,\tilde{\mathbf{x}}^{H}\big]\big)$$

Optimal criterion solution:

$$K_{opt} = R_{xy}\, R_{yy}^{-1}$$

For x, v independent,

$$R_{yy} = H\, R_{xx}\, H^{H} + R_{vv}, \qquad R_{xy} = R_{xx}\, H^{H}$$

Then

$$\hat{\mathbf{x}} = R_{xx} H^{H}\big(H R_{xx} H^{H} + R_{vv}\big)^{-1}\mathbf{y}
\qquad \text{or equivalently} \qquad
\hat{\mathbf{x}} = \big(R_{xx}^{-1} + H^{H} R_{vv}^{-1} H\big)^{-1} H^{H} R_{vv}^{-1}\,\mathbf{y}$$

And the cost function value becomes

$$J_{opt} = \mathrm{Tr}\big(R_{xx} - R_{xy}\, R_{yy}^{-1}\, R_{xy}^{H}\big)
\qquad \text{or equivalently} \qquad
J_{opt} = \mathrm{Tr}\Big[\big(R_{xx}^{-1} + H^{H} R_{vv}^{-1} H\big)^{-1}\Big]$$
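A minimal sketch of the linear MMSE estimator above, with made-up values for H, R_xx, and R_vv; it forms K_opt from the formulas and checks the empirical mean-square error against J_opt.

```python
import numpy as np

# y = H x + v with x, v independent and zero mean; H, R_xx, R_vv are illustrative values.
rng = np.random.default_rng(5)
H = np.array([[1.0, 0.3],
              [0.2, 1.0],
              [0.5, 0.5]])                 # 3 measurements of a 2-element x
R_xx = np.diag([4.0, 1.0])                 # signal covariance
R_vv = 0.5 * np.eye(3)                     # noise covariance

R_yy = H @ R_xx @ H.T + R_vv
R_xy = R_xx @ H.T
K_opt = R_xy @ np.linalg.inv(R_yy)         # K_opt = R_xy R_yy^-1

# Try the estimator on simulated data and compare the empirical MSE with J_opt.
n = 100000
x = rng.multivariate_normal(np.zeros(2), R_xx, size=n).T
v = rng.multivariate_normal(np.zeros(3), R_vv, size=n).T
y = H @ x + v
x_hat = K_opt @ y
mse = np.mean(np.sum((x - x_hat) ** 2, axis=0))
J_opt = np.trace(R_xx - R_xy @ np.linalg.inv(R_yy) @ R_xy.T)
print(f"empirical MSE = {mse:.4f}   J_opt = {J_opt:.4f}")
```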


Correlation of Sinusoidal signals

As an example, imagine that there are N channels, each consisting of noise plus a signal at a unique frequency. If the channels are cross-correlated with a reference signal at a desired frequency, you would expect only the element containing that frequency to be non-zero.

Let

$$\mathbf{X}(t) = \begin{bmatrix}
A_1 \exp\!\big(j(2\pi f_1 t + \theta_1)\big) + N_1(t) \\
A_2 \exp\!\big(j(2\pi f_2 t + \theta_2)\big) + N_2(t) \\
A_3 \exp\!\big(j(2\pi f_3 t + \theta_3)\big) + N_3(t)
\end{bmatrix}
\qquad \text{and} \qquad
Y(t) = \exp\!\big(j\,2\pi f_d t\big)$$

Sampling and averaging at time steps $t_i = i\,\Delta t = i/f_s$,

$$\hat{R}_{XY} = \frac{1}{2N + 1}\sum_{i=-N}^{N} \mathbf{X}(t_i)\; \mathrm{conj}\big(Y(t_i)\big)$$

$$\hat{R}_{XY} = \frac{1}{2N + 1}\sum_{i=-N}^{N} \begin{bmatrix}
A_1 \exp\!\big(j(2\pi (f_1 - f_d)\, i/f_s + \theta_1)\big) + N_1' \\
A_2 \exp\!\big(j(2\pi (f_2 - f_d)\, i/f_s + \theta_2)\big) + N_2' \\
A_3 \exp\!\big(j(2\pi (f_3 - f_d)\, i/f_s + \theta_3)\big) + N_3'
\end{bmatrix}$$

Where the frequencies differ, the averages "cancel" and only a single value will remain. Letting $f_d = f_2$:

$$\hat{R}_{XY} = \frac{1}{2N + 1}\sum_{i=-N}^{N} \begin{bmatrix}
A_1 \exp\!\big(j(2\pi (f_1 - f_2)\, i/f_s + \theta_1)\big) + N_1' \\
A_2 \exp(j\theta_2) + N_2' \\
A_3 \exp\!\big(j(2\pi (f_3 - f_2)\, i/f_s + \theta_3)\big) + N_3'
\end{bmatrix}
\approx \begin{bmatrix} 0 \\ A_2 \exp(j\theta_2) \\ 0 \end{bmatrix}$$

Uses: a network/spectrum analyzer for filter measurement, if $f_d$ is slowly time varying; rapid frequency estimation.
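A short sketch of the idea (not from the text): three complex tones plus noise are cross-correlated with a reference exponential at $f_d$, and only the channel whose frequency matches survives. All frequencies, amplitudes, and noise levels are arbitrary illustrative values.

```python
import numpy as np

# Three complex tones plus noise, cross-correlated with a reference exp(j 2 pi f_d t);
# only the channel whose frequency matches f_d survives the averaging.
rng = np.random.default_rng(6)
fs, n = 1000.0, 20000
i = np.arange(n)
freqs = np.array([50.0, 120.0, 210.0])
f_d = 120.0                                         # reference matches the second channel
A = np.array([1.0, 2.0, 1.5])
theta = rng.uniform(0.0, 2.0 * np.pi, 3)

X = A[:, None] * np.exp(1j * (2.0 * np.pi * freqs[:, None] * i / fs + theta[:, None]))
X = X + rng.normal(0.0, 0.5, X.shape) + 1j * rng.normal(0.0, 0.5, X.shape)
Y = np.exp(1j * 2.0 * np.pi * f_d * i / fs)

R_xy = (X @ np.conj(Y)) / n                         # average of X_k(t_i) * conj(Y(t_i))
print(np.round(np.abs(R_xy), 3))                    # ~ [0, A_2, 0]: only the match remains
```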


Spatial Antenna Processing

Distance based on angle of arrival at an antenna array.

A linear antenna array.

For a linear antenna array with element spacing $d$, the extra propagation path to successive elements for a plane wave arriving at angle $\theta$ is $d\cos(\theta)$, $2\,d\cos(\theta)$, and so on.

An arbitrary antenna array

An arbitrary antenna array has elements at positions $(x_0, y_0)$, $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and so on. For a plane wave arriving from angle $\theta$, the differential path distance to element $i$ is

$$d_i = x_i \cos(\theta) + y_i \sin(\theta)$$


Speed of light relation to distance:

$$d = c\, t_d \qquad \text{so the time delay is} \qquad t_d = \frac{d}{c}$$

Sinusoidal phase delay is related to this as

$$\phi_d = 2\pi \frac{d}{\lambda} = 2\pi f\, t_d$$

Therefore each signal is received at

$$\mathbf{X}(t) = \begin{bmatrix}
A(t - t_{d1}) \exp\!\big(j 2\pi f_1 t - j\phi_{d1}\big) + N_1(t) \\
A(t - t_{d2}) \exp\!\big(j 2\pi f_1 t - j\phi_{d2}\big) + N_2(t) \\
\vdots \\
A(t - t_{dn}) \exp\!\big(j 2\pi f_1 t - j\phi_{dn}\big) + N_n(t)
\end{bmatrix}
\qquad \text{for signals of the same frequency.}$$

If

$$Y(t) = \exp\!\big(j 2\pi f_1 t\big)$$

then

$$\mathbf{X}(t)\,\mathrm{conj}\big(Y(t)\big) = \begin{bmatrix}
A(t - t_{d1}) \exp(-j\phi_{d1}) + N_1'(t) \\
A(t - t_{d2}) \exp(-j\phi_{d2}) + N_2'(t) \\
\vdots \\
A(t - t_{dn}) \exp(-j\phi_{dn}) + N_n'(t)
\end{bmatrix}$$

When the envelope varies slowly compared to the array delays, $A(t - t_{d1}) \approx A(t - t_{d2}) \approx \cdots \approx A(t - t_{dn}) \approx A(t)$, so

$$\mathbf{Z}(t) = \mathbf{X}(t)\,\mathrm{conj}\big(Y(t)\big) = A(t)\begin{bmatrix}
\exp(-j\phi_{d1}) \\
\exp(-j\phi_{d2}) \\
\vdots \\
\exp(-j\phi_{dn})
\end{bmatrix} + \begin{bmatrix} N_1'(t) \\ N_2'(t) \\ \vdots \\ N_n'(t) \end{bmatrix}$$

Now find the optimal K to form a single output to extract the “signal of interest” ….

$$S(t) = K\,\mathbf{Z}(t) = K\left\{ A(t)\begin{bmatrix}
\exp(-j\phi_{d1}) \\
\exp(-j\phi_{d2}) \\
\vdots \\
\exp(-j\phi_{dn})
\end{bmatrix} + \begin{bmatrix} N_1'(t) \\ N_2'(t) \\ \vdots \\ N_n'(t) \end{bmatrix} \right\}$$
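The sketch below is a separate Python illustration of this step, not the course's ANT_AP_GEN.m script referenced next: it builds steering vectors for a small made-up planar array and computes one simple choice of K (a pseudo-inverse) that passes each arrival angle while nulling the others. Positions, wavelength, and angles are arbitrary.

```python
import numpy as np

# Steering vectors for an arbitrary planar array and a simple choice of weights K
# that extract one angle of arrival while nulling the others (illustrative values).
lam = 1.0                                                            # wavelength
pos = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])     # element (x_i, y_i)
angles = np.deg2rad([20.0, 75.0, 130.0])                             # arrival angles

def steering(theta):
    """a_i = exp(-j*phi_di) with phi_di = (2*pi/lambda) * (x_i cos(theta) + y_i sin(theta))."""
    d = pos[:, 0] * np.cos(theta) + pos[:, 1] * np.sin(theta)
    return np.exp(-1j * 2.0 * np.pi * d / lam)

A = np.column_stack([steering(th) for th in angles])   # columns are steering vectors
K = np.linalg.pinv(A)        # rows of K satisfy K @ A ~ I: pass one signal, null the rest
print(np.round(np.abs(K @ A), 3))
```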


Matlab Aperture Generation Example: ANT_AP_GEN.m

Optimal weights


The appropriate K to extract each of the individual signals would be (read the column) aoa_weights =

-0.1381 + 0.0989i 0.3498 + 0.0918i 0.2350 - 0.1427i

-0.2395 + 0.2568i 0.3906 + 0.0591i 0.0999 - 0.2660i

-0.1019 + 0.1418i -0.0585 - 0.1032i 0.0035 - 0.1888i

0.2208 + 0.2095i -0.1121 + 0.0546i -0.0935 - 0.2342i

The optimal K nulls the signals not of interest and optimizes the signal of interest. With these weights, the SINR (signal-to-interference-plus-noise ratio) shown can be achieved, all while the signals operate at exactly the same frequency!

