Signal Constellations, Optimum Receivers and Error Probabilities

EE810: Communication Theory I
Page 83
[Figure: system model — Source → mi → Modulator (Transmitter) → si(t) → + w(t) → r(t) → Demodulator (Receiver) → Estimate of mi.]
• There are M messages. For example, each message may carry λ = log2 M bits. If the bit duration is Tb, then the message (or symbol) duration is Ts = λTb. The messages in two different time slots are statistically independent.

• The M signals si(t), i = 1, 2, . . . , M, can be arbitrary but of finite energies Ei. The a priori message probabilities are Pi. Unless stated otherwise, it is assumed that the messages are equally probable.

• The noise w(t) is modeled as a stationary, zero-mean, white Gaussian process with power spectral density N0/2.

• The receiver observes the received signal r(t) = si(t) + w(t) over one symbol duration and decides which message was transmitted. The criterion for the optimum receiver is to minimize the probability of making an error.

• Optimum receivers and their error probabilities shall be considered for various signal constellations.

[Figure: N-dimensional observation space (r1, r2, . . . , rN) partitioned into decision regions ℜ1, ℜ2, . . . , ℜM; in region ℜi the receiver chooses si(t), i.e., mi.]
[Figure: generic correlation receiver — r(t) → ∫ from (k−1)Ts to kTs (·) dt, sampled at t = kTs, → r1 → Decision Device → decision.]

University of Saskatchewan
Binary Signalling
Received signal:

            On-Off                 Orthogonal               Antipodal
H1 (0):     r(t) = w(t)            r(t) = s1(t) + w(t)      r(t) = −s2(t) + w(t)
H2 (1):     r(t) = s2(t) + w(t)    r(t) = s2(t) + w(t)      r(t) = s2(t) + w(t)

where, for orthogonal signalling, ∫ from 0 to Ts of s1(t)s2(t) dt = 0.
Decision space and receiver implementation:

(a) On-Off: with φ1(t) = s2(t)/√E, compute r1 = ∫ from 0 to Tb of r(t)φ1(t) dt. Decide 1 (choose s2(t)) if r1 ≥ √E/2; decide 0 (choose s1(t)) if r1 < √E/2.

(b) Orthogonal: compute r1 = ∫ from 0 to Tb of r(t)φ1(t) dt and r2 = ∫ from 0 to Tb of r(t)φ2(t) dt. Decide 1 if r2 ≥ r1; decide 0 if r2 < r1.

(c) Antipodal: with φ1(t) = s2(t)/√E, compute r1 = ∫ from 0 to Tb of r(t)φ1(t) dt. Decide 1 if r1 ≥ 0; decide 0 if r1 < 0.
Error performance:

                          On-Off           Orthogonal       Antipodal
From decision space:      Q(√(E/(2N0)))    Q(√(E/N0))       Q(√(2E/N0))
With the same Eb:         Q(√(Eb/N0))      Q(√(Eb/N0))      Q(√(2Eb/N0))
The above shows that, with the same average energy per bit Eb = (E1 + E2)/2, antipodal signalling is 3 dB more efficient than orthogonal signalling, which has the same performance as that of on-off signalling. This is graphically shown below.
[Figure: Pr[error] versus Eb/N0 (dB) from 0 to 14 dB, on a log scale from 10^−1 down to 10^−6; the antipodal curve lies 3 dB to the left of the common on-off/orthogonal curve.]
Example 1: In passband transmission, the digital information is encoded as a variation of the amplitude, frequency and phase (or their combinations) of a sinusoidal carrier signal. The carrier frequency is much higher than the highest frequency of the modulating signals (messages). Binary amplitude-shift keying (BASK), binary phase-shift keying (BPSK) and binary frequency-shift keying (BFSK) are examples of on-off, antipodal and orthogonal signalling, respectively.
[Figure: (a) binary data 0 1 1 1 0 0 1 0 1 1; (b) modulating signal; (c) ASK signal; (d) PSK signal; (e) FSK signal, shown over 0 to 9Tb with amplitudes ±V.]

Signal definitions (0 ≤ t ≤ Tb; m and n are integers):

          BASK               BFSK                   BPSK
s1(t):    0                  V cos(2πf1 t)          −V cos(2πfc t)
s2(t):    V cos(2πfc t)      V cos(2πf2 t)          V cos(2πfc t)
          fc = n/Tb          f2 + f1 = n/(2Tb)      fc = n/Tb
                             f2 − f1 = m/(2Tb)
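The BFSK frequency conditions can be checked numerically; a small sketch (the bit duration and tone frequencies below are illustrative values satisfying f2 ± f1 = integer/(2Tb)), approximating the correlation integral by a midpoint sum:

```python
import math

Tb = 1e-3          # bit duration (illustrative value)
f1 = 4 / (2 * Tb)  # tone frequencies spaced by multiples of 1/(2*Tb)
f2 = 6 / (2 * Tb)
N = 100000         # samples per bit interval
dt = Tb / N

# Correlation of the two tones over one bit: should be ~0 (orthogonal).
corr = sum(
    math.cos(2 * math.pi * f1 * (k + 0.5) * dt)
    * math.cos(2 * math.pi * f2 * (k + 0.5) * dt)
    for k in range(N)
) * dt

# Energy of one tone over one bit with unit amplitude: should be ~Tb/2.
energy = sum(
    math.cos(2 * math.pi * f2 * (k + 0.5) * dt) ** 2 for k in range(N)
) * dt
```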
Example 2: Various binary baseband signalling schemes are shown below. The optimum receiver and its error performance follow easily once the two signals s1(t) and s2(t) used in each scheme are identified.
[Figure: binary data 1 0 1 1 0 0 1 0 1 with clock (period Tb), and the corresponding waveforms for (a) NRZ code, (b) NRZ-L, (c) RZ code, (d) RZ-L, (e) Bi-Phase, (f) Bi-Phase-L, (g) Miller code, (h) Miller-L; amplitudes 0/V or ±V.]
M-ary Pulse Amplitude Modulation

si(t) = (i − 1)∆ φ1(t),   i = 1, 2, . . . , M        (1)

[Figure: one-dimensional constellation along φ1(t) with points s1(t), s2(t), s3(t), . . . , sk(t), . . . , sM−1(t), sM(t) at 0, ∆, 2∆, . . . , (k − 1)∆, . . . , (M − 2)∆, (M − 1)∆, together with the conditional pdf f(r1|sk(t)) centered at (k − 1)∆.]
Decision rule:

choose sk(t) if (k − 3/2)∆ < r1 < (k − 1/2)∆,   k = 2, 3, . . . , M − 1;
choose s1(t) if r1 < ∆/2 and choose sM(t) if r1 > (M − 3/2)∆.

⇒ The optimum receiver computes r1 = ∫ from (k−1)Ts to kTs of r(t)φ1(t) dt and determines which signal point r1 is closest to:
[Figure: receiver — r(t) → ∫ from (k−1)Ts to kTs (·) dt, sampled at t = kTs, → r1 → Decision Device; the decision thresholds lie midway between adjacent signal points.]
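The threshold rule above amounts to rounding r1 to the nearest multiple of ∆ and clipping to the two end regions; a minimal sketch (the function name is illustrative):

```python
def pam_decide(r1: float, delta: float, M: int) -> int:
    """Minimum-distance decision for the PAM constellation
    {0, delta, 2*delta, ..., (M-1)*delta}: return the 1-based index k
    of the chosen signal s_k(t) = (k-1)*delta*phi1(t)."""
    k = round(r1 / delta) + 1   # nearest level, converted to 1-based index
    return min(max(k, 1), M)    # the two end regions extend to +/- infinity
```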
Probability of error:

[Figure: conditional pdfs f(r1|sk(t)) and decision regions — choose s1(t) below ∆/2, sk(t) in the interval of width ∆ around (k − 1)∆, and sM(t) above (M − 3/2)∆.]

• For the M − 2 inner signals: Pr[error|si(t)] = 2Q(∆/√(2N0)).

• For the two end signals: Pr[error|si(t)] = Q(∆/√(2N0)).

• Since Pr[si(t)] = 1/M, then

  Pr[error] = Σ from i=1 to M of Pr[si(t)] Pr[error|si(t)] = (2(M − 1)/M) Q(∆/√(2N0))
• The above is the symbol (or message) error probability. If each message carries λ = log2 M bits, the bit error probability depends on the mapping and is often tedious to compute exactly. If the mapping is a Gray mapping (i.e., the mappings of the nearest signals differ in only one bit), a good approximation is Pr[bit error] ≈ Pr[error]/λ.
• A modified signal set that minimizes the average transmitted energy (M is even) shifts the constellation to be symmetric about the origin:

  si(t) = (2i − 1 − M)(∆/2) φ1(t),   i = 1, 2, . . . , M

[Figure: symmetric constellation with points at ±∆/2, ±3∆/2, . . . , ±(M − 1)∆/2 along φ1(t), and the corresponding receiver computing r1 = ∫ from 0 to Ts of r(t)φ1(t) dt, sampled at t = Ts.]

• If φ1(t) is a sinusoidal carrier, i.e., φ1(t) = √(2/Ts) cos(2πfc t), 0 ≤ t ≤ Ts, with fc = k/Ts for an integer k, the scheme is also known as M-ary amplitude shift keying (M-ASK).
To express the probability in terms of Eb/N0, compute the average transmitted energy per message (or symbol) as follows:

Es = (Σ from i=1 to M of Ei)/M = (∆²/(4M)) Σ from i=1 to M of (2i − 1 − M)²
   = (∆²/(4M)) · (1/3)M(M² − 1) = (M² − 1)∆²/12        (2)

Thus the average transmitted energy per bit is

Eb = Es/log2 M = (M² − 1)∆²/(12 log2 M)  ⇒  ∆ = √(12(log2 M)Eb/(M² − 1))        (3)

⇒ Pr[error] = (2(M − 1)/M) Q(√((6 log2 M)Eb/((M² − 1)N0)))        (4)
[Figure: Pr[symbol error] versus Eb/N0 (dB) from 0 to 20 dB for M = 2, 4, 8, 16, on a log scale from 10^−1 down to 10^−6.]
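Equation (4) is easy to evaluate directly; a sketch (illustrative names), with the sanity check that M = 2 reduces to the antipodal result Q(√(2Eb/N0)):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_symbol_error(M: int, EbN0: float) -> float:
    """Symbol error probability of M-ary PAM/ASK, Eq. (4)."""
    lam = math.log2(M)
    arg = math.sqrt(6.0 * lam * EbN0 / (M * M - 1.0))
    return 2.0 * (M - 1.0) / M * Q(arg)
```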
M-ary Phase Shift Keying (M-PSK)

The messages are distinguished by M phases of a sinusoidal carrier:

si(t) = V cos(2πfc t − (i − 1)2π/M),   0 < t < Ts; i = 1, 2, . . . , M
      = √E cos((i − 1)2π/M) φ1(t) + √E sin((i − 1)2π/M) φ2(t)        (5)

where si1 = √E cos((i − 1)2π/M), si2 = √E sin((i − 1)2π/M), E = V²Ts/2, and the two orthonormal basis functions are:

φ1(t) = V cos(2πfc t)/√E ;   φ2(t) = V sin(2πfc t)/√E        (6)
M = 8:

[Figure 1: Signal space for 8-PSK with Gray mapping — points s1(t) ↔ 000, s2(t) ↔ 001, s3(t) ↔ 011, s4(t) ↔ 010, s5(t) ↔ 110, s6(t) ↔ 111, s7(t) ↔ 101, s8(t) ↔ 100 on a circle of radius √E in the (φ1(t), φ2(t)) plane, spaced π/4 apart.]
Decision regions and receiver implementation:

[Figure: decision regions — region Z1 (choose s1(t)) is the wedge of angular width 2π/M around s1(t), region Z2 (choose s2(t)) the wedge around s2(t), and so on. Receiver: r(t) is correlated with φ1(t) and φ2(t) over [0, Ts] to give r1 and r2; compute (r1 − si1)² + (r2 − si2)² for i = 1, 2, . . . , M and choose the smallest.]
Probability of error:

Pr[error] = (1/M) Σ from i=1 to M of Pr[error|si(t)] = Pr[error|s1(t)]
          = 1 − Pr[(r1, r2) ∈ Z1 | s1(t)]
          = 1 − ∫∫ over (r1, r2) ∈ Z1 of p(r1, r2|s1(t)) dr1 dr2        (7)

where p(r1, r2|s1(t)) = (1/(πN0)) exp{−[(r1 − √E)² + r2²]/N0}.
Changing the variables to V = √(r1² + r2²) and Θ = tan⁻¹(r2/r1) (polar coordinates), the joint pdf of V and Θ is

p(V, Θ) = (V/(πN0)) exp{−(V² + E − 2√E V cos Θ)/N0}        (8)

Integration of p(V, Θ) over the range of V yields the pdf of Θ:

p(Θ) = ∫ from 0 to ∞ of p(V, Θ) dV
     = (1/(2π)) e^(−(E/N0) sin²Θ) ∫ from 0 to ∞ of V e^(−(V − √(2E/N0) cos Θ)²/2) dV        (9)
With the above pdf of Θ, the error probability can be computed as:

Pr[error] = 1 − Pr[−π/M ≤ Θ ≤ π/M | s1(t)]
          = 1 − ∫ from −π/M to π/M of p(Θ) dΘ        (10)

In general, the integral of p(Θ) as above does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4.
An approximation to the error probability for large values of M and for large symbol signal-to-noise ratio (SNR) γs = E/N0 can be obtained as follows. First the error probability is lower bounded by

Pr[error] = Pr[error|s1(t)]
          > Pr[(r1, r2) is closer to s2(t) than s1(t) | s1(t)]
          = Q(sin(π/M) √(2E/N0))        (11)

The upper bound is obtained by the following union bound:

Pr[error] = Pr[error|s1(t)]
          < Pr[(r1, r2) is closer to s2(t) OR sM(t) than s1(t) | s1(t)]
          < 2Q(sin(π/M) √(2E/N0))        (12)

Since the lower and upper bounds differ only by a factor of two, they are very tight at high SNR. Thus a good approximation to the error probability of M-PSK is:

Pr[error] ≈ 2Q(sin(π/M) √(2E/N0)) = 2Q(√(λ(2Eb/N0)) sin(π/M))        (13)
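A quick consistency check of approximation (13) (Python sketch; illustrative function names): for M = 4 the approximation reduces to 2Q(√(E/N0)), which exceeds the exact QPSK symbol error probability 2Q − Q² by exactly Q², a negligible gap at high SNR.

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def psk_approx(M: int, EsN0: float) -> float:
    """Approximation (13): Pr[error] ~= 2*Q(sin(pi/M)*sqrt(2*Es/N0))."""
    return 2.0 * Q(math.sin(math.pi / M) * math.sqrt(2.0 * EsN0))

def qpsk_exact(EsN0: float) -> float:
    """Exact QPSK symbol error probability: 2q - q^2, q = Q(sqrt(Es/N0))."""
    q = Q(math.sqrt(EsN0))
    return 2.0 * q - q * q
```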
Quadrature Phase Shift Keying (QPSK):
[Figure: QPSK signal space — s1(t), s2(t), s3(t), s4(t) mapped to m1 = 00, m2 = 01, m3 = 11, m4 = 10 (Gray mapping); the decision regions are the four quadrants of the (φ̂1(t), φ̂2(t)) plane, with observation (r̂1, r̂2).]
"
Pr[correct] = Pr[r̂1 ≥ 0 and r̂2 ≥ 0|s1 (t)] = 1 − Q
"
⇒ Pr[error] = 1 − Pr[correct] = 1 − 1 − Q
= 2Q
University of Saskatchewan
r
E
N0
!
− Q2
r
E
N0
!
r
r
E
N0
E
N0
!#2
(14)
!#2
(15)
The bit error probability of QPSK with Gray mapping can be obtained by considering the different conditional message error probabilities:

Pr[m2|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]        (16)

Pr[m3|m1] = Q²(√(E/N0))        (17)

Pr[m4|m1] = Q(√(E/N0)) [1 − Q(√(E/N0))]        (18)
The bit error probability is therefore

Pr[bit error] = 0.5 Pr[m2|m1] + 0.5 Pr[m4|m1] + 1.0 Pr[m3|m1]
             = Q(√(E/N0)) = Q(√(2Eb/N0))        (19)

where the viewpoint is taken that one of the two bits is chosen at random, i.e., with a probability of 0.5. The above shows that QPSK with Gray mapping has exactly the same bit-error-rate (BER) performance as BPSK, while its bit rate can be doubled for the same bandwidth.
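The algebra in (19) collapses because q(1 − q) + q(1 − q) counted with weight 0.5 each, plus q², equals q. A numerical check (illustrative function name):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qpsk_bit_error(EN0: float) -> float:
    """Assemble Pr[bit error] from the conditional message error
    probabilities (16)-(18); it should collapse to Q(sqrt(E/N0))."""
    q = Q(math.sqrt(EN0))
    p21 = q * (1.0 - q)   # Pr[m2|m1]: one of the two bits wrong
    p41 = q * (1.0 - q)   # Pr[m4|m1]: one of the two bits wrong
    p31 = q * q           # Pr[m3|m1]: both bits wrong
    return 0.5 * p21 + 0.5 * p41 + 1.0 * p31
```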
In general, the bit error probability of M-PSK is difficult to obtain for an arbitrary mapping. For Gray mapping, again a good approximation is Pr[bit error] ≈ Pr[symbol error]/λ. The exact calculation of the bit error probability can be found in the following paper:

J. Lassing, E. G. Ström, E. Agrell and T. Ottosson, "Computation of the Exact Bit-Error Rate of Coherent M-ary PSK With Gray Code Bit Mapping," IEEE Trans. on Communications, vol. 51, Nov. 2003, pp. 1758–1760.
Comparison of M -PSK and BPSK
Pr[error]M-ary ≈ 2Q(√(λ(2Eb/N0)) sin(π/M)),   (M > 4)

Pr[error]QPSK = 2Q(√(2Eb/N0)) − Q²(√(2Eb/N0))

Pr[error]BPSK = Q(√(2Eb/N0))
[Figure: Pr[symbol error] versus Eb/N0 (dB) from 0 to 20 dB for M = 2, 4, 8, 16, 32, with lower- and upper-bound curves.]

λ   M    M-ary BW/binary BW   λ sin²(π/M)   M-ary energy/binary energy
3   8    1/3                  0.44          3.6 dB
4   16   1/4                  0.15          8.2 dB
5   32   1/5                  0.05          13.0 dB
6   64   1/6                  0.014         17.0 dB
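The middle columns of the comparison can be reproduced numerically; a sketch (variable names are illustrative), where the energy penalty is taken as 10·log10 of the reciprocal of λ sin²(π/M):

```python
import math

# Bandwidth and energy comparison of M-ary PSK against BPSK.
rows = []
for lam in (3, 4, 5, 6):
    M = 2 ** lam
    bw_ratio = 1.0 / lam                       # M-ary BW / binary BW
    factor = lam * math.sin(math.pi / M) ** 2  # lambda * sin^2(pi/M)
    penalty_db = 10.0 * math.log10(1.0 / factor)
    rows.append((lam, M, bw_ratio, factor, penalty_db))
```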
M -ary Quadrature Amplitude Modulation
• In QAM modulation, the messages are encoded into both the amplitudes and
phases of the carrier:
si(t) = Vc,i √(2/Ts) cos(2πfc t) + Vs,i √(2/Ts) sin(2πfc t)        (20)
      = √(2Ei/Ts) cos(2πfc t − θi),   0 ≤ t ≤ Ts; i = 1, 2, . . . , M        (21)

where Ei = Vc,i² + Vs,i² and θi = tan⁻¹(Vs,i/Vc,i).
• Examples of QAM constellations are shown on the next page.

• Rectangular QAM constellations, shown below, are the most popular. For rectangular QAM, Vc,i, Vs,i ∈ {(2i − 1 − √M)∆/2, i = 1, 2, . . . , √M}.
Examples of QAM Constellations:

[Figure: examples of QAM constellations.]
Another example: The 16-QAM signal constellation shown below is an international standard for telephone-line modems (called V.29). The decision regions of
the minimum distance receiver are also drawn.
[Figure: V.29 16-QAM constellation — signal points in the (r1, r2) plane on a grid spanning −5 to 5 in each coordinate, each point labeled with a 4-bit pattern, together with the decision regions of the minimum-distance receiver.]
Receiver implementation (same as for M-PSK):

[Figure: receiver — r(t) is correlated with φ1(t) and φ2(t) over [0, Ts] to give r1 and r2; compute (r1 − si1)² + (r2 − si2)² for i = 1, 2, . . . , M and choose the smallest.]
Error performance of rectangular M-QAM: For M = 2^λ, where λ is even, QAM consists of two ASK signals on quadrature carriers, each having √M = 2^(λ/2) signal points. Thus the probability of symbol error can be computed as

Pr[error] = 1 − Pr[correct] = 1 − (1 − Pr_√M[error])²        (22)

where Pr_√M[error] is the probability of error of √M-ary ASK:

Pr_√M[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))        (23)

Note that in the above equation, Es is the average energy per symbol of the QAM constellation.
[Figure: Pr[symbol error] versus Eb/N0 (dB) from 0 to 20 dB for M = 4, 16, 64 QAM, on a log scale from 10^−1 down to 10^−6.]
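Equations (22)–(23) in code (illustrative names); for M = 4 the expression reduces to the QPSK result 2Q − Q² with argument √(Es/N0):

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qam_symbol_error(M: int, EsN0: float) -> float:
    """Rectangular M-QAM symbol error probability, Eqs. (22)-(23);
    EsN0 is the average symbol energy divided by N0."""
    p_ask = 2.0 * (1.0 - 1.0 / math.sqrt(M)) * Q(math.sqrt(3.0 * EsN0 / (M - 1.0)))
    return 1.0 - (1.0 - p_ask) ** 2
```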
Comparison of M -QAM and M -PSK:
Pr[error]PSK ≈ 2Q(√(2Es/N0) sin(π/M))        (24)

Pr[error]QAM ≈ Pr_√M−ASK[error] = 2(1 − 1/√M) Q(√(3Es/((M − 1)N0)))        (25)

Since the error probability is dominated by the argument of the Q-function, one may simply compare the arguments of Q for the two modulation schemes. The ratio of the squared arguments is

R_M = (3/(M − 1)) / (2 sin²(π/M))        (26)
M     10 log10 R_M
8     1.65 dB
16    4.20 dB
32    7.02 dB
64    9.95 dB
Thus M -ary QAM yields a better performance than M -ary PSK for the same bit
rate (i.e., same M ) and the same transmitted energy (Eb /N0 ).
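Equation (26) in code (illustrative function name); the assertions below reproduce the dB column of the table within rounding:

```python
import math

def qam_psk_gain_db(M: int) -> float:
    """10*log10 of R_M = (3/(M-1)) / (2*sin^2(pi/M)), Eq. (26):
    the SNR advantage of M-QAM over M-PSK."""
    rm = (3.0 / (M - 1.0)) / (2.0 * math.sin(math.pi / M) ** 2)
    return 10.0 * math.log10(rm)
```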
M-ary Orthogonal Signals

si(t) = √E φi(t),   i = 1, 2, . . . , M        (27)

[Figure: orthogonal constellations — (a) M = 2: s1(t), s2(t) at distance √E from the origin along φ1(t) and φ2(t); (b) M = 3: s1(t), s2(t), s3(t) along three orthogonal axes φ1(t), φ2(t), φ3(t).]
The decision rule of the minimum distance receiver follows easily:

Choose si(t) if ri > rj,   j = 1, 2, . . . , M; j ≠ i        (28)

Minimum distance receiver:

[Figure: bank of M correlators — r(t) correlated with φ1(t) = s1(t)/√E, . . . , φM(t) = sM(t)/√E over [0, Ts], sampled at t = Ts to give r1, . . . , rM; choose the largest → decision.]

(Note: Instead of φi(t) one may use si(t).)
When the M orthonormal functions are chosen to be orthogonal sinusoidal carriers, the signal set is known as M-ary frequency shift keying (M-FSK):

si(t) = V cos(2πfi t),   0 ≤ t ≤ Ts; i = 1, 2, . . . , M        (29)

where the frequencies are chosen so that the signals are orthogonal over the interval [0, Ts] seconds:

fi = (k ± (i − 1)) · 1/(2Ts),   i = 1, 2, . . . , M (k an integer)        (30)
Error performance of M orthogonal signals:

To determine the message error probability, consider that message s1(t) was transmitted. Due to the symmetry of the signal space and because the messages are equally likely,

Pr[error] = Pr[error|s1(t)] = 1 − Pr[correct|s1(t)]
          = 1 − Pr[all rj < r1, j ≠ 1 | s1(t)]
          = 1 − ∫ from R1=−∞ to ∞ of { ∏ from j=2 to M of Pr[rj < R1 | s1(t)] } p(R1|s1(t)) dR1
          = 1 − ∫ from R1=−∞ to ∞ of [ ∫ from R2=−∞ to R1 of (πN0)^(−1/2) exp(−R2²/N0) dR2 ]^(M−1)
                × (πN0)^(−1/2) exp{−(R1 − √E)²/N0} dR1        (31)
The above integral can only be evaluated numerically. It can be normalized so that only two parameters, namely M (the number of messages) and E/N0 (the signal-to-noise ratio), enter into the numerical integration:

Pr[error] = (1/√(2π)) ∫ from −∞ to ∞ of { 1 − [ (1/√(2π)) ∫ from −∞ to y of e^(−x²/2) dx ]^(M−1) }
            · exp{ −(1/2)(y − √(2E/N0))² } dy        (32)
The relationship between probability of bit error and probability of symbol error for an M-ary orthogonal signal set can be found as follows. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

Pr[symbol error]/(M − 1) = Pr[symbol error]/(2^λ − 1)

There are (λ choose k) ways in which k bits out of λ may be in error. Hence the average number of bit errors per λ-bit symbol is

Σ from k=1 to λ of k (λ choose k) Pr[symbol error]/(2^λ − 1) = λ · 2^(λ−1)/(2^λ − 1) · Pr[symbol error]        (33)
The probability of bit error is the above result divided by λ. Thus

Pr[bit error]/Pr[symbol error] = 2^(λ−1)/(2^λ − 1)        (34)

Note that the above ratio approaches 1/2 as λ → ∞.

Figure 2: Probability of bit error for M-orthogonal signals versus γb = Eb/N0.
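The combinatorial identity behind (33)–(34) is easy to verify (illustrative function name):

```python
from math import comb

def bit_to_symbol_ratio(lam: int) -> float:
    """Pr[bit error]/Pr[symbol error] for M = 2**lam orthogonal signals,
    built from the combinatorial sum in (33) and divided by lam."""
    M = 2 ** lam
    avg_bit_errors = sum(k * comb(lam, k) for k in range(1, lam + 1)) / (M - 1)
    return avg_bit_errors / lam   # should equal 2**(lam-1) / (2**lam - 1)
```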
Union Bound: To provide insight into the behavior of Pr[error], consider upper bounding (the union bound) the error probability as follows.

Pr[error] = Pr[(r1 < r2) or (r1 < r3) or . . . or (r1 < rM) | s1(t)]
          < Pr[(r1 < r2)|s1(t)] + . . . + Pr[(r1 < rM)|s1(t)]
          = (M − 1) Q(√(E/N0)) < M Q(√(E/N0)) < M e^(−E/(2N0))        (35)

where a simple bound on Q(x), namely Q(x) < exp(−x²/2), has been used. Let M = 2^λ; then

Pr[error] < e^(λ ln 2) e^(−λEb/(2N0)) = e^(−λ(Eb/N0 − 2 ln 2)/2)        (36)

The above equation shows that there is a definite threshold effect with orthogonal signalling. As M → ∞, the probability of error approaches zero exponentially, provided that:

Eb/N0 > 2 ln 2 = 1.39 = 1.42 dB        (37)
A different interpretation of the upper bound in (36) can be obtained as follows. Since E = λEb = Ps Ts (Ps is the transmitted power) and the bit rate is rb = λ/Ts, (36) can be rewritten as:

Pr[error] < e^(λ ln 2) e^(−Ps Ts/(2N0)) = e^(−Ts[−rb ln 2 + Ps/(2N0)])        (38)

The above implies that if −rb ln 2 + Ps/(2N0) > 0, i.e., rb < Ps/(2N0 ln 2), the probability of error tends to zero as Ts or M becomes larger and larger. This behaviour of the error probability is quite surprising, since it shows that, provided the bit rate is small enough, the error probability can be made arbitrarily small even though the transmitter power is finite. The obvious disadvantage of the approach, however, is that the bandwidth requirement increases with M. As M → ∞, the transmitted bandwidth goes to infinity.

The union bound is not very tight at sufficiently low SNR because the bound Q(x) < exp(−x²/2) on the Q-function is loose. Using more elaborate bounding techniques it can be shown (see the texts by Proakis, and by Wozencraft and Jacobs) that Pr[error] → 0 as λ → ∞, provided that Eb/N0 > ln 2 = 0.693 (−1.6 dB).
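The threshold effect in (36)–(37) can be seen by evaluating the bound (illustrative function name): above Eb/N0 = 2 ln 2 the bound decays as λ grows, below it the bound grows.

```python
import math

def union_bound(lam: int, EbN0: float) -> float:
    """Upper bound (36): Pr[error] < exp(-lam*(EbN0 - 2*ln 2)/2)."""
    return math.exp(-lam * (EbN0 - 2.0 * math.log(2.0)) / 2.0)
```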
Biorthogonal Signals

[Figure: biorthogonal constellations — (a) two dimensions: ±s1(t), ±s2(t) at distance √E from the origin along φ1(t) and φ2(t); (b) three dimensions: ±s1(t), ±s2(t), ±s3(t) along φ1(t), φ2(t), φ3(t).]
A biorthogonal signal set can be obtained from an original orthogonal set of N signals by augmenting it with the negative of each signal. Obviously, for the biorthogonal set M = 2N . Denote the additional signals by −si (t), i = 1, 2, . . . , N and
assume that each signal has energy E.
The received signal r(t) is closer to si(t) than to −si(t) if and only if (iff)

∫ from 0 to Ts of [r(t) − √E φi(t)]² dt < ∫ from 0 to Ts of [r(t) + √E φi(t)]² dt,

which happens iff ri = ∫ from 0 to Ts of r(t)φi(t) dt > 0. Similarly, r(t) is closer to si(t) than to sj(t) iff ri > rj, j ≠ i, and r(t) is closer to si(t) than to −sj(t) iff ri > −rj, j ≠ i. It follows that the decision rule of the minimum-distance receiver for biorthogonal signalling can be implemented as

Choose ±si(t) if |ri| > |rj| and ±ri > 0,   ∀j ≠ i        (39)
The conditional probability of a correct decision for equally likely messages, given that s1(t) is transmitted and that

r1 = √E + w1 = R1 > 0        (40)

is just

Pr[correct|s1(t), R1 > 0] = Pr[−R1 < all rj < R1, j ≠ 1 | s1(t), R1 > 0]
                          = (Pr[−R1 < rj < R1 | s1(t), R1 > 0])^(N−1)
                          = [ ∫ from R2=−R1 to R1 of (πN0)^(−1/2) exp(−R2²/N0) dR2 ]^(N−1)        (41)
Averaging over the pdf of R1 we have

Pr[correct|s1(t)] = ∫ from R1=0 to ∞ of [ ∫ from R2=−R1 to R1 of (πN0)^(−1/2) exp(−R2²/N0) dR2 ]^(N−1)
                    × (πN0)^(−1/2) exp{−(R1 − √E)²/N0} dR1        (42)
Again, by virtue of symmetry and the equal a priori probabilities of the messages, the above equation is also Pr[correct]. Finally, by noting that N = M/2 we obtain

Pr[error] = 1 − ∫ from R1=0 to ∞ of [ ∫ from R2=−R1 to R1 of (πN0)^(−1/2) exp(−R2²/N0) dR2 ]^(M/2 − 1)
            × (πN0)^(−1/2) exp{−(R1 − √E)²/N0} dR1        (43)
The difference in error performance for M biorthogonal and M orthogonal signals
is negligible when M and E/N0 are large, but the number of dimensions (i.e.,
bandwidth) required is reduced by one half in the biorthogonal case.
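Equation (43) can be evaluated with a simple midpoint rule after normalizing the correlator outputs to unit variance, so that the inner integral becomes erf(u/√2) (a sketch; names and grid parameters are illustrative). For M = 4 the biorthogonal set is a rotated QPSK constellation, so the result should match 1 − [1 − Q(√(E/N0))]²:

```python
import math

def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def biorthogonal_symbol_error(M: int, EN0: float, n: int = 4000, span: float = 10.0) -> float:
    """Evaluate Eq. (43) numerically. With u = R/sqrt(N0/2) the transmitted
    correlator output has mean a = sqrt(2*E/N0) and unit variance, and the
    inner integral Pr[-R1 < rj < R1] becomes erf(u/sqrt(2))."""
    a = math.sqrt(2.0 * EN0)
    h = (a + span) / n          # outer integral runs over u in [0, a + span]
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        inner = math.erf(u / math.sqrt(2.0)) ** (M // 2 - 1)
        total += inner * math.exp(-0.5 * (u - a) ** 2)
    return 1.0 - total * h / math.sqrt(2.0 * math.pi)
```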
Hypercube Signals (Vertices of a Hypercube)
[Figure: hypercube constellations — (a) M = 2: s1(t), s2(t) on φ1(t); (b) M = 2² = 4: s1(t), . . . , s4(t) in the (φ1, φ2) plane; (c) M = 2³ = 8: s1(t), . . . , s8(t) at the vertices of a cube in (φ1, φ2, φ3); adjacent vertices are 2√Eb apart.]
Here the M = 2^λ signals are located on the vertices of a λ-dimensional hypercube centered at the origin. This configuration is shown geometrically above for λ = 1, 2, 3. The hypercube signals can be formed as follows:

si(t) = √Eb Σ from j=1 to λ of bij φj(t),   bij ∈ {±1}, i = 1, 2, . . . , M = 2^λ        (44)

Thus the components of the signal vector si = [si1, si2, . . . , siλ]ᵀ are ±√Eb.
To evaluate the error probability, assume that the signal

s1 = (−√Eb, −√Eb, . . . , −√Eb)        (45)

is transmitted. First claim that no error is made if the noise components along φj(t) satisfy

wj < √Eb,   for all j = 1, 2, . . . , λ        (46)
The proof is immediate. When r = x = s1 + w is received, the jth component of x − si is

xj − sij = wj,           if sij = −√Eb
         = wj − 2√Eb,    if sij = +√Eb        (47)

Since Equation (46) implies

2√Eb − wj > wj   for all j,        (48)
EE810: Communication Theory I
Page 109
it follows that

|x − si|² = Σ from j=1 to λ of (xj − sij)² > Σ from j=1 to λ of wj² = |x − s1|²        (49)

for all si ≠ s1 whenever (46) is satisfied.
Next claim that an error is made if, for at least one j,

wj > √Eb        (50)

This follows from the fact that x is closer to sj than to s1 whenever (50) is satisfied, where sj denotes the signal with component +√Eb along the jth direction and −√Eb in all other directions. (Of course, x may still be closer to some signal other than sj, but it cannot be closest to s1.)
Equations (49) and (50) together imply that a correct decision is made if and only if (46) is satisfied. The probability of this event, given that m = m1, is therefore:

Pr[correct|m1] = Pr[all wj < √Eb; j = 1, 2, . . . , λ]
              = ∏ from j=1 to λ of Pr[wj < √Eb] = (1 − Pr[wj ≥ √Eb])^λ
              = (1 − p)^λ        (51)

in which

p = Pr[wj ≥ √Eb] = Q(√(2Eb/N0))        (52)

is again the probability of error for two equally likely signals separated by distance 2√Eb. Finally, from symmetry:

Pr[correct|mi] = Pr[correct|m1]   for all i,

hence,

Pr[correct] = (1 − p)^λ = [1 − Q(√(2Eb/N0))]^λ        (53)
In order to express this result in terms of signal energy, we again recognize that the squared distance from the origin to each signal si is the same. The transmitted energy is therefore independent of i and can be designated by E. Clearly

|si|² = Σ from j=1 to λ of sij² = λEb = E        (54)
Thus

p = Q(√(2E/(λN0)))        (55)
The simple form of the result Pr[correct] = (1 − p)^λ suggests that a more immediate derivation may exist. Indeed one does. Note that the jth coordinate of the random signal s is a priori equally likely to be +√Eb or −√Eb, independent of all other coordinates. Moreover, the noise wj disturbing the jth coordinate is independent of the noise in all other coordinates. Hence, by the theory of sufficient statistics, a decision may be made on the jth coordinate without examining any other coordinate. This single-coordinate decision corresponds to the problem of binary signals separated by distance 2√Eb, for which the probability of correct decision is 1 − p. Since in the original hypercube problem a correct decision is made if and only if a correct decision is made on every coordinate, and since these decisions are independent, it follows immediately that Pr[correct] = (1 − p)^λ.