
Summary of binary detection with vector observation in iid Gaussian noise:

First remove center point from signal and its effect on observation.

Then the signal is ±a and v = ±a + Z.

Find ⟨v, a⟩ and compare with a threshold (0 for the ML case).

This does not depend on the vector basis; it becomes trivial if a, normalized, is a basis vector.

Received components orthogonal to signal are irrelevant.
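A minimal numerical sketch of this recipe (the signal vector a, the noise level N0, and the trial count are arbitrary stand-ins, not values from the lecture):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
N0 = 2.0                       # noise density; each Z_j ~ N(0, N0/2)
a = np.array([3.0, 1.0, 2.0])  # signal vector, center point already removed

# Transmit +a or -a with equal probability; observe v = (+/-)a + Z.
n_trials = 100_000
signs = rng.choice([-1.0, 1.0], size=n_trials)
Z = rng.normal(0.0, sqrt(N0 / 2), size=(n_trials, a.size))
V = signs[:, None] * a + Z

# ML rule: compare <v, a> with the threshold 0.
decisions = np.where(V @ a >= 0, 1.0, -1.0)
print("simulated Pr(e):", np.mean(decisions != signs))

# Theory: Pr(e) = Q(||a|| / sqrt(N0/2)), with Q(x) = erfc(x/sqrt(2))/2.
print("theoretical Pr(e):", 0.5 * erfc(np.linalg.norm(a) / sqrt(N0 / 2) / sqrt(2)))
```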


QAM DETECTION

[Block diagram] Transmitter: Input → Signal encoder (a ∈ A) → Baseband modulator (u(t)) → Baseband→passband (x(t)) → channel adds WGN. Receiver: y(t) → Passband→baseband (v(t)) → Baseband demodulator (v) → Detector → Output.

We have seen how to design a MAP or ML detector from observing v. We now generalize to an arbitrary complex signal set A. We also ask what the entire receiver should do from observation of y(t) or v(t).


The baseband waveform is u(t) = a p(t), where a = a_1 + i a_2 with a_1 = Re{a}, a_2 = Im{a}.

The passband transmitted waveform is

x(t) = a_1 Ψ_1(t) + a_2 Ψ_2(t), where

Ψ_1(t) = Re{2 p(t) e^{2πi f_c t}};  Ψ_2(t) = −Im{2 p(t) e^{2πi f_c t}}.

These two waveforms are orthogonal (in real vector space), each with energy 2. For p ( t ) real, they are just p ( t ) modulated by cosine and sine.
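These claims are easy to check numerically. A sketch, assuming a unit-energy rectangular pulse and an arbitrary carrier; the sign convention for Ψ_2 follows the one above:

```python
import numpy as np

fc, fs, T = 50.0, 10_000.0, 1.0            # carrier, sample rate, duration (arbitrary)
t = np.arange(0.0, T, 1.0 / fs)
p = np.where(t < 0.5, np.sqrt(2.0), 0.0)   # real pulse with unit energy

carrier = np.exp(2j * np.pi * fc * t)
psi1 = np.real(2 * p * carrier)            #  2 p(t) cos(2 pi fc t)
psi2 = -np.imag(2 * p * carrier)           # -2 p(t) sin(2 pi fc t)

dt = 1.0 / fs
print("energy of psi1:", np.sum(psi1**2) * dt)      # ~2
print("energy of psi2:", np.sum(psi2**2) * dt)      # ~2
print("<psi1, psi2>:  ", np.sum(psi1 * psi2) * dt)  # ~0
```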

The received waveform is

Y(t) = (a_1 + Z_1) Ψ_1(t) + (a_2 + Z_2) Ψ_2(t) + Z̃(t),

where Z̃(t) is real passband WGN in the other degrees of freedom.



Since Ψ_1 and Ψ_2 are orthogonal and of equal energy, Z_1 and Z_2 are iid Gaussian.

After translation of the passband signal and noise to baseband,

V(t) = [a_1 + Z_1 + i(a_2 + Z_2)] p(t) + Z″(t),

where Z″(t) is the noise orthogonal (in complex vector space) to p(t).

First consider detection in real vector space. Here (a_1, a_2) represents the hypothesis, and Z_1, Z_2 are iid Gaussian, N(0, N_0/2).



Let Y_1 = a_1 + Z_1 and Y_2 = a_2 + Z_2.

Note that ∫ V(t) p*(t − τ) dt is the output of a complex matched filter. Sampling this at τ = 0 yields Y_1 + iY_2 (assuming p has unit energy).

The components Z_3, Z_4, ... in an orthonormal expansion of Z″(t) are also observable, although we will not need them.
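A discrete-time sketch of the matched-filter statement; the unit-energy pulse, the noise level, and the QAM point are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
N0, n = 0.5, 1000
p = rng.normal(size=n)
p /= np.linalg.norm(p)        # unit-energy discrete "pulse"

a = 1.0 + 2.0j                # transmitted point a = a1 + i*a2
Zc = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(N0 / 2)
v = a * p + Zc                # baseband received samples V

# Matched-filter output sampled at tau = 0: the projection of V onto p.
y = np.vdot(p, v)             # = Y1 + i*Y2 = a + noise along p
print("a =", a, "  matched-filter sample =", y)
```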


Z_3, Z_4, ... are real Gaussian rv's and can also be viewed as passband rv's. Assume a finite number of them, so that the observation has 2k real components in all. For any two hypotheses, a and a′, the likelihoods are

f_{Y|H}(y | a) = (πN_0)^{−k} exp( −Σ_{j=1}^{2} (y_j − a_j)²/N_0 − Σ_{j=3}^{2k} z_j²/N_0 )

LLR(y) = ln [ f_{Y|H}(y | a) / f_{Y|H}(y | a′) ]

= Σ_{j=1}^{2} [ −(y_j − a_j)² + (y_j − a_j′)² ] / N_0

= Σ_{j=1}^{2} [ 2 y_j (a_j − a_j′) − a_j² + a_j′² ] / N_0 .

The z_j² terms, j = 3, ..., 2k, are identical under both hypotheses and cancel in the LLR.

As usual, ML decoding is minimum distance decoding.
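A sketch verifying that the sign of this LLR agrees with the minimum-distance rule (the two hypothesis points a, a′, the noise level, and the observation are arbitrary):

```python
import numpy as np

N0 = 1.0
a  = np.array([1.0,  1.0])    # hypothesis a = (a_1, a_2)
ap = np.array([-1.0, 2.0])    # hypothesis a' (arbitrary second point)
y  = np.array([0.3,  1.4])    # observation (y_1, y_2)

# LLR(y) = sum_j [ -(y_j - a_j)^2 + (y_j - a'_j)^2 ] / N0
llr = (-np.sum((y - a) ** 2) + np.sum((y - ap) ** 2)) / N0
print("LLR:", llr)

# LLR >= 0  <=>  ||y - a|| <= ||y - a'||, i.e., minimum-distance decoding.
print("min distance picks a:", np.sum((y - a)**2) <= np.sum((y - ap)**2))
```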


We can rewrite the likelihood as

f_{Y|H}(y | a) = f(y_1, y_2 | a) f(z_3, z_4, ...).

So long as Z_3, Z_4, ... are independent of Y_1, Y_2 and H, they cancel out in the LLR.

This is why the WGN model makes sense: all we need in detection is independence from the relevant rv's.

In other words, (Y_1, Y_2) is a sufficient statistic, and Z_3, Z_4, ... are irrelevant (so long as they are independent of Y_1, Y_2, and H).

This is true for all pairwise comparisons between input signals.
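The irrelevance claim can also be seen numerically: including observed z_3, z_4, ... in both likelihoods multiplies numerator and denominator by the same factor, so the LLR is unchanged. A sketch with arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(2)
N0 = 1.0
a, ap = np.array([1.0, 1.0]), np.array([-1.0, 2.0])
y = np.array([0.3, 1.4])
z = rng.normal(0.0, np.sqrt(N0 / 2), size=6)   # observed Z_3, Z_4, ...

def log_lik(y_obs, z_obs, sig):
    # log f(y, z | sig) for iid N(0, N0/2) components; z has mean 0 under both hypotheses.
    n = y_obs.size + z_obs.size
    return (-(n / 2) * np.log(np.pi * N0)
            - np.sum((y_obs - sig) ** 2) / N0
            - np.sum(z_obs ** 2) / N0)

llr_with    = log_lik(y, z, a) - log_lik(y, z, ap)
llr_without = log_lik(y, np.array([]), a) - log_lik(y, np.array([]), ap)
print(llr_with, llr_without)   # identical: the z terms cancel
```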


Now view this detection problem in terms of complex rv's. Let α = a − a′.

The y-dependent part of the LLR (the remaining signal-energy terms are constants that go into the threshold) is

Σ_{j=1}^{2} 2 y_j (a_j − a_j′)/N_0 = [ 2 Re{y} Re{α} + 2 Im{y} Im{α} ] / N_0 = (2/N_0) Re{y α*} = (2/N_0) Re{⟨y, α⟩},

where y = y_1 + i y_2 is now viewed as a complex number.

In real vector space, we project y onto α .

In complex space, 2-vectors become scalars, and the real part of the inner product must be taken.

The real part is a “further projection” of a complex number to a real number.
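The identity y_1 α_1 + y_2 α_2 = Re{y α*} behind this "further projection" is a one-line check (arbitrary complex values):

```python
import numpy as np

y, alpha = 0.7 - 1.2j, 2.0 + 0.5j
real_dot = y.real * alpha.real + y.imag * alpha.imag  # projection in the real plane
complex_form = (y * np.conj(alpha)).real              # Re{ y * conj(alpha) }
print(real_dot, complex_form)                         # equal
```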


Now let’s look at the general case in WGN.

Must consider problem of real signals and noise for arbitrary modulation.

The signal set A = {a_1, ..., a_M} is a set of k-tuples, a_m = (a_{m,1}, ..., a_{m,k})^T.

Each a_m is then modulated to

b_m(t) = Σ_{j=1}^{k} a_{m,j} φ_j(t),

where {φ_1(t), ..., φ_k(t)} is a set of k orthonormal waveforms.

Successive signals are independent and mapped arbitrarily, using orthogonal spaces.
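A sketch of this mapping, using sampled sinusoids as stand-ins for the orthonormal waveforms φ_j (any orthonormal set would serve):

```python
import numpy as np

fs, T, k = 1000.0, 1.0, 3
t = np.arange(0.0, T, 1.0 / fs)

# Orthonormal stand-ins on [0, T]: sqrt(2/T) sin(pi (j+1) t / T)
phi = np.array([np.sqrt(2 / T) * np.sin(np.pi * (j + 1) * t / T) for j in range(k)])

a_m = np.array([1.0, -2.0, 0.5])   # one signal point, a k-tuple
b_m = a_m @ phi                    # b_m(t) = sum_j a_{m,j} phi_j(t)

# Projecting back onto the basis recovers the coefficients:
print(phi @ b_m / fs)              # ~ [ 1.0 -2.0  0.5]
```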


Let X(t) ∈ {b_1(t), ..., b_M(t)}. Then

X(t) = Σ_{j=1}^{k} X_j φ_j(t),

where, under hypothesis m, X_j = a_{m,j} for 1 ≤ j ≤ k.

Let φ_{k+1}(t), φ_{k+2}(t), ... be an additional set of orthonormal functions such that the entire set {φ_j(t); j ≥ 1} spans the space of real L_2 waveforms. Then

Y(t) = Σ_{j≥1} Y_j φ_j(t) = Σ_{j=1}^{k} (X_j + Z_j) φ_j(t) + Σ_{j=k+1}^{∞} Z_j φ_j(t).



Assume Z = (Z_1, ..., Z_k) are iid Gaussian and that Z̃ = (Z_{k+1}, ...) is independent of Z and of a.

Then

f_{Y,Z̃|H}(y, z̃ | m) = f_Z(y − a_m) f_{Z̃}(z̃),

and the likelihood ratio is

Λ_{m,m′}(y) = f_Z(y − a_m) / f_Z(y − a_{m′}).

The MAP detector depends only on Y. The other signals and the other noise variables are irrelevant.
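A sketch of the resulting finite-dimensional MAP rule; the 4-point signal set, the priors, and N_0 are hypothetical placeholders:

```python
import numpy as np

N0 = 1.0
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])  # signal set, k = 2
priors = np.array([0.4, 0.3, 0.2, 0.1])

def map_detect(y):
    # Maximize log p_m - ||y - a_m||^2 / N0; components beyond k never enter.
    log_post = np.log(priors) - np.sum((y - A) ** 2, axis=1) / N0
    return int(np.argmax(log_post))

print(map_detect(np.array([0.9, 0.8])))   # -> 0, the point (1, 1)
```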


This says that detection problems reduce to finite-dimensional vector problems; that is, signal space and observation space are for all practical purposes finite dimensional.

The assumption of independent noise and independent other signals is essential here.

With dependence, the error probability could be lowered by using those extra observations; with independence, what you don't know can't hurt you.


Detection for Orthogonal Signal Sets

We are looking at an alphabet of size M, mapping letter j into √E φ_j(t), where {φ_j(t)} is an orthonormal set.

WGN of spectral density N_0/2 is added to the transmitted waveform.

The receiver gets

Y(t) = Σ_j Y_j φ_j(t) = Σ_j (X_j + Z_j) φ_j(t).

Only {Y_1, ..., Y_M} is relevant.

Under hypothesis k, Y_k = √E + Z_k, and Y_j = Z_j for j ≠ k.


Use ML detection: choose the k for which ‖y − a_k‖ is smallest.

‖y − a_k‖² = Σ_{j≠k} y_j² + (y_k − √E)² = Σ_{j=1}^{M} y_j² + E − 2√E y_k.

ML: choose the k for which y_k is largest.
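A simulation sketch of this largest-output rule (M, E, and N_0 below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
M, E, N0 = 8, 4.0, 1.0
n_trials = 50_000

m = rng.integers(M, size=n_trials)                       # transmitted letter
Y = rng.normal(0.0, np.sqrt(N0 / 2), size=(n_trials, M)) # Z_j in every coordinate
Y[np.arange(n_trials), m] += np.sqrt(E)                  # sqrt(E) in coordinate m

m_hat = np.argmax(Y, axis=1)                             # ML: largest y_k wins
print("simulated Pr(e):", np.mean(m_hat != m))
```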

By symmetry, the probability of error is the same for all hypotheses, so we look at hypothesis 1.

First scale the outputs by √(2/N_0), i.e., W_j = Y_j √(2/N_0).

Under H_1, W_1 ∼ N(√(2E/N_0), 1) and W_j ∼ N(0, 1) for j ≠ 1.

Pr(e) = ∫_{−∞}^{∞} f_{W_1|H}(w_1 | 1) Pr( ∪_{j=2}^{M} {W_j ≥ w_1} | 1 ) dw_1
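Since the W_j are independent under H_1, Pr(∪_{j=2}^{M} {W_j ≥ w_1}) = 1 − Φ(w_1)^{M−1}, and the integral can be evaluated numerically. A sketch with the same hypothetical M, E, N_0 as the simulation above:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

M, E, N0 = 8, 4.0, 1.0
mu = np.sqrt(2 * E / N0)                  # mean of W_1 under H_1

w = np.linspace(-10.0, mu + 10.0, 20_001)
integrand = norm.pdf(w - mu) * (1.0 - norm.cdf(w) ** (M - 1))
print("Pr(e):", trapezoid(integrand, w))  # matches the simulation estimate
```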


MIT OpenCourseWare http://ocw.mit.edu

6.450 Principles of Digital Communication I

Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms .
