Notes on PAM

EE 451
Spring 2011
Lecture Notes on Baseband Digital Signaling
1 Binary Signaling
Assume the following model:
a. Let φ(t) be a unit-energy pulse, localized in time to (approximately) [0, T] sec.
b. A binary data sequence u(n) is mapped to the waveform
\[
x(t) = \sum_{n=-\infty}^{\infty} a_n \phi(t - nT),
\]
where
\[
a_n = \begin{cases} A, & u(n) = 1; \\ -A, & u(n) = 0. \end{cases}
\]
c. A matched filter is used at the receiver with impulse response h(t) =
φ(−t).
d. The received waveform is y(t) = x(t) + z(t), where z(t) is a zero-mean,
white, Gaussian noise process. The output of the matched filter, sampled at time kT , is
\[
y(kT) = \sum_{n=-\infty}^{\infty} a_n s(kT - nT) + w(kT),
\]
where s(t) = φ(t) ∗ φ(−t) and w(t) = z(t) ∗ h(t). We assume that s(t)
is a zero intersymbol interference (zero ISI) pulse, so
\[
s(nT) = \begin{cases} 1, & n = 0; \\ 0, & n \neq 0. \end{cases}
\]
Assuming that the white Gaussian noise has power spectral density
N_0/2, it follows that samples of w(t) at times t = nT are independent
and identically distributed, zero-mean, Gaussian random variables of
variance σ_w² = N_0/2.
e. The discrete-time equivalent model is
\[
y_n = a_n + w_n \tag{1}
\]
where y_n = y(nT) is the sample at the output of the matched filter,
a_n ∈ {−A, A} is a transmitted symbol, and w_n is a zero-mean, variance
σ_w² = N_0/2, Gaussian random variable (denoted N(0, σ_w²)).
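A minimal Matlab sketch of this discrete-time model may help fix the notation; the values of A, N0, and the number of symbols below are illustrative choices, not specified in the notes.

```matlab
% Minimal sketch of the discrete-time model (1); the values of A, N0,
% and Nsym are illustrative, not specified in the notes.
A = 1; N0 = 0.5; Nsym = 10;
u = randi([0 1], 1, Nsym);          % binary data sequence u(n)
a = A*(2*u - 1);                    % map u = 1 -> +A, u = 0 -> -A
w = sqrt(N0/2)*randn(1, Nsym);      % i.i.d. N(0, N0/2) noise samples
y = a + w;                          % matched-filter output samples y_n
```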
There are several questions to answer.
1. What is the bandwidth of the signal x(t)?
2. How should the receiver process the matched filter output sequence,
yn , to form estimates of the transmitted bits?
3. What is the probability of receiver bit error?
4. How can this be generalized for transmission of multiple bits per symbol?
2 MAP and ML Receiver
Based on (1), consider the following discrete-time equivalent channel model:
The symbol amplitude an ∈ {−A, A} is transmitted; the channel adds the
additive, white Gaussian noise (AWGN) sample wn , and the receiver matched
filter output is
\[
y_n = a_n + w_n \tag{2}
\]
where w_n is a zero-mean Gaussian random variable with variance σ_w². The
detector selects the symbol â_n that maximizes the a posteriori probability (MAP)
P(a_n | y_n). That is, given the received value y_n, the MAP detector selects
the symbol a_n that maximizes P(a_n | y_n). Using Bayes' formula for mixture
densities, this becomes
\[
\max_{a_n} P(a_n \mid y_n) = \max_{a_n} \frac{p(y_n \mid a_n)\, P(a_n)}{p(y_n)}. \tag{3}
\]
Since the denominator in (3) is independent of an , it can be ignored in the
maximization. Further, suppose that P (an ) is constant for all possible symbols. Then it, too, can be ignored in the maximization, and (3) simplifies to
the maximum likelihood (ML) decision rule: choose a_n to
\[
\max_{a_n} p(y_n \mid a_n). \tag{4}
\]
Since an ∈ {−A, A}, this implies the following decision rule: Choose ân = −A
if
\[
p(y_n \mid a_n = -A) > p(y_n \mid a_n = A); \tag{5}
\]
otherwise, choose ân = A.
Since the logarithm is a monotonically increasing function, selecting a_n to maximize p(y_n | a_n) has the same solution as selecting a_n to maximize log_e[p(y_n | a_n)].
The optimum (binary) decision rule can then be expressed as follows: Choose
â_n = −A if log_e[p(y_n | a_n = −A)] > log_e[p(y_n | a_n = A)]; otherwise, choose
â_n = A. Manipulation of this decision rule leads to the log-likelihood criterion: Choose â_n = −A if
\[
\log_e\!\left[\frac{p(y_n \mid a_n = -A)}{p(y_n \mid a_n = A)}\right] > 0; \tag{6}
\]
otherwise, choose ân = A.
From (2), p(y_n | a_n) is a Gaussian probability density function (pdf), with
mean a_n and variance σ_w²; that is,
\[
p(y_n \mid a_n) = \frac{1}{\sqrt{2\pi}\,\sigma_w}\, e^{-(y_n - a_n)^2 / 2\sigma_w^2}. \tag{7}
\]
Using this pdf in (5) leads to the ML decision rule: Choose ân = −A if
yn < 0; otherwise choose ân = A. That is, the ML detector output is
\[
\hat{a}_n = \begin{cases} -A, & \text{if } y_n < 0; \\ A, & \text{if } y_n \ge 0. \end{cases} \tag{8}
\]
Using the pdf in (6) leads to the same conclusion.
Problem 1. Use (7) in (5) and (6), and verify that the decision rule in (8)
is correct.
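As a complement to Problem 1, the threshold rule (8) can be exercised numerically. A minimal Matlab sketch, again with illustrative values of A and N0:

```matlab
% Monte Carlo sketch of the ML threshold detector (8);
% A, N0, and Nsym are illustrative values.
A = 1; N0 = 0.5; Nsym = 1e5;
a = A*(2*randi([0 1], 1, Nsym) - 1);   % random +/-A symbols
y = a + sqrt(N0/2)*randn(1, Nsym);     % channel model (2)
a_hat = A*ones(1, Nsym);               % choose +A where y_n >= 0
a_hat(y < 0) = -A;                     % choose -A where y_n < 0
err_rate = mean(a_hat ~= a)            % empirical error probability
```

With these values the empirical rate should be near Q(2) ≈ 0.023, anticipating the Q-function expression derived below.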
Figure 1 shows p(yn |an ) for an = A = 1 and an = −A = −1. The
optimum detection threshold is at T = 0, and corresponds to the point
at which the two conditional pdfs intersect. Figure 2 shows p(yn |an ) for
an = A = 1 and an = −A = −1, but for different symbol probabilities. The
ML decision threshold (which is independent of the symbol probabilities)
remains at T = 0; however, the MAP detection threshold corresponds to
the point where the two scaled densities P(a)p(y|a) intersect, which in the
figure is a threshold T < 0 since P(a_n = −A) < P(a_n = A).
[Figure 1 here: plots of p(y|a = 1) and p(y|a = −1) versus the receiver sample y.]
Figure 1: Conditional densities for binary PAM.
Problem 2. Determine the MAP detector threshold, T, as a function of A,
σ_w², and P(a_n = A), for the binary signaling modeled in (2), with unequal
symbol probabilities, as in (3).
Continuing with the ML decision rule in (8), and assuming equally probable binary symbols, the probability of detector error is given by
\[
P_e = P(e \mid a_n = -A)P(a_n = -A) + P(e \mid a_n = A)P(a_n = A).
\]
The conditional probability of error, P(e | a_n = −A), is the probability that
y_n ≥ 0, given that the transmitted symbol is a_n = −A. This is
\[
P(e \mid a_n = -A) = \int_0^{\infty} p(y \mid -A)\, dy = \int_0^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_w}\, e^{-(y+A)^2 / 2\sigma_w^2}\, dy.
\]
[Figure 2 here: plots of the scaled densities P(a = 1)p(y|a = 1), with P(a = 1) = 0.7, and P(a = −1)p(y|a = −1), with P(a = −1) = 0.3, versus the receiver sample y.]
Figure 2: Conditional densities for binary PAM with P(a_n = −A) = 0.3,
P(a_n = A) = 0.7.
Using the change of variables u = (y + A)/σ_w, this becomes
\[
P(e \mid a_n = -A) = \int_{A/\sigma_w}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du = Q\!\left(\frac{A}{\sigma_w}\right),
\]
where Q(·) is the “Q-function” defined as
\[
Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du.
\]
There is no closed-form expression for the Q-function. However, it has been
extensively tabulated, and is readily available numerically, for example, using
the Matlab "qfunc" command, or the Matlab complementary error function.
Specifically, Q(x) = qfunc(x) = 0.5 erfc(x/√2), where erfc(x) is the complementary error function.
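For example, a Q-function handle built on the base-Matlab erfc (no Communications Toolbox needed):

```matlab
% The Q-function from base-MATLAB erfc; qfunc requires the
% Communications Toolbox, while erfc does not.
Qf = @(x) 0.5*erfc(x/sqrt(2));
Qf(1)        % = 0.1587, matching the tabulated value of Q(1)
```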
By symmetry, P(e | a_n = A) = Q(A/σ_w), and the binary PAM probability
of bit error is
\[
P_{e,\mathrm{2PAM}} = Q\!\left(\frac{A}{\sigma_w}\right). \tag{9}
\]
It is common to express the bit error probability in terms of signal-to-noise ratio. Define the energy per symbol, E_s, as
\[
E_s = \sum_{\text{symbols } a} |a|^2 P(a)
\]
and the energy per bit, E_b, as the energy per symbol divided by the number
of bits per symbol (so E_b = E_s for binary PAM). Using E_s = A² in (9) yields
\[
P_{e,\mathrm{2PAM}} = Q\!\left(\sqrt{\frac{E_s}{\sigma_w^2}}\right). \tag{10}
\]
Alternatively, using σ_w² = N_0/2 and E_b = E_s, (9) becomes
\[
P_{e,\mathrm{2PAM}} = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right). \tag{11}
\]
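As a worked example, (11) can be inverted numerically to find the E_b/N_0 required for a target error rate; the sketch below uses the base-Matlab fzero and erfc.

```matlab
% Example: Eb/N0 required for Pe = 1e-5, by inverting (11) numerically.
Qf = @(x) 0.5*erfc(x/sqrt(2));
x = fzero(@(x) Qf(x) - 1e-5, 4);     % Q(x) = 1e-5 near x = 4.27
EbN0_dB = 10*log10(x^2/2)            % about 9.6 dB
```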
[Figure 3 here: semilog plot of the binary PAM bit error probability P_e, from 10^0 down to 10^{−7}, versus the signal-to-noise ratio E_s/σ_w² in dB, from 2 to 14 dB.]
Figure 3: Maximum likelihood detector bit error probability for binary PAM
with P(a_n = −A) = P(a_n = A) = 0.5.
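A curve like Figure 3 can be regenerated directly from (10); the sweep below simply mirrors the figure's axis range.

```matlab
% Sketch reproducing Figure 3 from (10): Pe = Q(sqrt(Es/sigma_w^2)).
Qf = @(x) 0.5*erfc(x/sqrt(2));
snr_dB = 2:0.1:14;                   % SNR = Es/sigma_w^2 in dB
semilogy(snr_dB, Qf(sqrt(10.^(snr_dB/10)))); grid on;
xlabel('Signal-to-Noise Ratio, E_s/\sigma_w^2, dB');
ylabel('Binary PAM Bit Error Probability, P_e');
```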
3 M-ary PAM
To generalize the binary PAM analysis of the preceding section, assume R bits (R an integer) are to be transmitted per PAM symbol using M = 2^R symbols.
Typically, the symbol levels are selected as the set
\[
\mathcal{A} = \{\pm\Delta, \pm 3\Delta, \ldots, \pm(M-1)\Delta\}.
\]
This set is called a (one-dimensional) signal constellation. Each R-tuple of
input bits is mapped to a symbol, a ∈ A, and the PAM waveform is generated
as
\[
x(t) = \sum_{n=-\infty}^{\infty} a_n \phi(t - nT).
\]
The output of the matched filter is of the form in (1), with a_n ∈ A. The
maximum likelihood detector selects the symbol â_n that maximizes p(y_n | a_n), and
this yields the minimum Euclidean distance detection rule: Map the received
value y_n to the closest symbol level in A. This detection rule can be
implemented using an M-level, uniform scalar quantizer.
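As a sketch, the uniform slicer can be written as one rounding-and-clipping expression; the values of M and Δ below are illustrative.

```matlab
% Minimal sketch of the minimum-distance detector as a uniform M-level
% slicer; the values of M and Delta are illustrative.
M = 4; Delta = 1;                  % constellation: {-3,-1,1,3}*Delta
slicer = @(y) Delta*min(max(2*round((y/Delta + 1)/2) - 1, -(M-1)), M-1);
slicer(0.4)                        % -> 1, the closest level to 0.4
slicer(-2.6)                       % -> -3, edge levels absorb the tails
```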
The probability of symbol detection error is determined as follows. Assume all M symbols are equally probable. Then the ML detector partitions
the real line (of possible receiver samples y_n) into M disjoint regions. Detection region D_i is
\[
D_i = \{\, y : (y - a_i)^2 < (y - a_k)^2, \ \forall k \neq i \,\}.
\]
The probability of error is
\[
P_e = \sum_{i=1}^{M} P(e \mid a_i)\, P(a_i). \tag{12}
\]
Now, suppose a given symbol is transmitted, say the symbol a_{M/2} = −Δ.
Then the probability of detector error, conditioned on symbol −Δ being
transmitted, is the probability that the received y is not in the corresponding
decision region. This is
\[
P(e \mid a = -\Delta) = \int_{y \notin D_{M/2}} p(y \mid a = -\Delta)\, dy.
\]
Since the decision region for the symbol −Δ is the interval [−2Δ, 0), this
conditional probability of error is
\[
P(e \mid a = -\Delta) = \int_{-\infty}^{-2\Delta} \frac{1}{\sqrt{2\pi}\,\sigma_w}\, e^{-(y+\Delta)^2 / 2\sigma_w^2}\, dy + \int_0^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_w}\, e^{-(y+\Delta)^2 / 2\sigma_w^2}\, dy.
\]
Evaluating, this becomes
\[
P(e \mid a = -\Delta) = 2Q\!\left(\frac{\Delta}{\sigma_w}\right). \tag{13}
\]
This calculation can be done for each symbol in the alphabet A. For all
symbols in the interior of the signal constellation (that is, all except the
symbols ±(M − 1)Δ), the conditional probability of symbol detection error
is the same as in (13). For the two signal points at the edges of the signal
constellation, the result is
\[
P(e \mid a = \pm(M-1)\Delta) = Q\!\left(\frac{\Delta}{\sigma_w}\right). \tag{14}
\]
Using (12), and assuming the probability of each symbol is 1/M, the probability of symbol error is
\[
P_{e,M\text{-PAM}} = 2\left(1 - \frac{1}{M}\right) Q\!\left(\frac{\Delta}{\sigma_w}\right). \tag{15}
\]
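Formula (15) is easy to check by simulation; a Monte Carlo sketch follows, with illustrative parameter values (M, Δ, N0, and the run length are not from the notes).

```matlab
% Monte Carlo check of (15); M, Delta, N0, and Nsym are illustrative.
M = 4; Delta = 1; N0 = 0.2; Nsym = 1e6;
levels = Delta*(-(M-1):2:(M-1));               % M-PAM constellation
a = levels(randi(M, 1, Nsym));                 % equally probable symbols
y = a + sqrt(N0/2)*randn(1, Nsym);             % AWGN channel samples
a_hat = Delta*min(max(2*round((y/Delta + 1)/2) - 1, -(M-1)), M-1);
Pe_sim = mean(a_hat ~= a)                      % simulated symbol error rate
Qf = @(x) 0.5*erfc(x/sqrt(2));
Pe_thy = 2*(1 - 1/M)*Qf(Delta/sqrt(N0/2))      % formula (15)
```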
Now, (15) can be interpreted as follows. The signal constellation is "regular," with every signal point at distance 2Δ from another signal point, so
that the minimum distance between signal points is d_min = 2Δ. Additionally, the term multiplying the Q-function in (15) can be interpreted as the
average number of nearest neighbors in the signal constellation,
\[
N_{\mathrm{ave}} = \sum_{x \in \mathcal{A}} N(x)\, P(x),
\]
where N(x) is the number of signal points at distance d_min from signal point
x, and P(x) is the probability of sending signal point x. Equation (15) is
then of the form
\[
P_e = N_{\mathrm{ave}}\, Q\!\left(\frac{d_{\min}}{2\sigma_w}\right). \tag{16}
\]
Equation (16) is a good approximation to the probability of symbol error for
a variety of modulation methods used over an AWGN channel.
Finally, express (15) in terms of signal-to-noise ratio (SNR). The energy
per symbol for M-ary PAM is
\[
E_s = \sum_{x \in \mathcal{A}} |x|^2 P(x) = \frac{M^2 - 1}{3}\, \Delta^2, \tag{17}
\]
and the energy per bit is E_b = E_s/R, so that
\[
\frac{d_{\min}}{2} = \Delta = \sqrt{\frac{3E_s}{M^2 - 1}} = \sqrt{\frac{3RE_b}{M^2 - 1}}.
\]
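A quick numeric check of (17), assuming equally probable symbols (M and Δ illustrative):

```matlab
% Numeric check of (17) for equally probable symbols.
M = 8; Delta = 1;
A_set = Delta*(-(M-1):2:(M-1));     % the M-PAM constellation
Es_avg  = mean(A_set.^2)            % sample average: 21
Es_form = (M^2 - 1)/3*Delta^2       % closed form (17): 21
```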
Using this in (15), along with σ_w² = N_0/2, yields
\[
P_{e,M\text{-PAM}} = 2\left(1 - \frac{1}{M}\right) Q\!\left(\sqrt{\frac{3E_s}{(M^2 - 1)\sigma_w^2}}\right) \tag{18}
\]
and
\[
P_{e,M\text{-PAM}} = 2\left(1 - \frac{1}{M}\right) Q\!\left(\sqrt{\frac{6RE_b}{(M^2 - 1)N_0}}\right). \tag{19}
\]
Define the signal-to-noise ratio as SNR = E_s/σ_w². From (18) it can then be
seen that it takes about a factor of four increase in SNR (6 dB) to compensate
for each additional bit represented by the signal constellation (each doubling
of M). So, at least for large M, PAM requires 6 dB of SNR per bit in
order for the probability of symbol error to be maintained at a (roughly)
constant level. Figure 4 shows the M-ary PAM probability of symbol error
as a function of SNR. Note that, for small error probability, it takes about 6
dB more SNR for each additional bit per symbol.
The bit error probability of M-ary PAM, P_b, can be bounded as
\[
\frac{1}{R}\, P_{e,M\text{-PAM}} \le P_b \le P_{e,M\text{-PAM}},
\]
since (lower bound) if there is a symbol error, at least one of the R bits must
be in error, and (upper bound) at most all of the R bits are in error. If a
Gray code is used, then the lower bound is a good approximation and
\[
P_b \simeq \frac{1}{R}\, P_{e,M\text{-PAM}} = \frac{2}{R}\left(1 - \frac{1}{M}\right) Q\!\left(\sqrt{\frac{6RE_b}{(M^2 - 1)N_0}}\right). \tag{20}
\]
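The notes do not specify the Gray map itself; as one concrete possibility, the binary-reflected Gray code g = b XOR ⌊b/2⌋ labels the M levels so that nearest-neighbor symbol errors corrupt exactly one bit:

```matlab
% Sketch of a binary-reflected Gray map for the M-PAM levels, so that
% nearest-neighbor symbol errors flip only one bit (M illustrative).
M = 8;
b = 0:M-1;                      % natural index of each level, low to high
g = bitxor(b, floor(b/2));      % Gray code: g = b XOR (b >> 1)
dec2bin(g)                      % adjacent rows differ in exactly one bit
```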
[Figure 4 here: semilog plot of the symbol error probability P_e, from 10^0 down to 10^{−7}, versus the signal-to-noise ratio E_s/σ_w² in dB, from 0 to 35 dB, with one curve each for 2-PAM, 4-PAM, 8-PAM, and 16-PAM.]
Figure 4: Maximum likelihood detector symbol error probability for M-ary
PAM.
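A plot like Figure 4 follows directly from (18); this sketch sweeps the same SNR range as the figure axes.

```matlab
% Sketch reproducing Figure 4 from (18) for M = 2, 4, 8, 16.
Qf = @(x) 0.5*erfc(x/sqrt(2));
snr_dB = 0:0.1:35; snr = 10.^(snr_dB/10);    % SNR = Es/sigma_w^2
figure; hold on;
for M = [2 4 8 16]
    semilogy(snr_dB, 2*(1 - 1/M)*Qf(sqrt(3*snr/(M^2 - 1))));
end
set(gca, 'YScale', 'log'); ylim([1e-7 1]); grid on;
legend('2-PAM', '4-PAM', '8-PAM', '16-PAM');
xlabel('Signal-to-Noise Ratio, E_s/\sigma_w^2, dB');
ylabel('Probability of Symbol Error, P_e');
```

The curves are spaced roughly 6 dB apart at small error probabilities, illustrating the 6 dB-per-bit rule discussed above.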