University of Illinois
ECE 361, Spring 2001
Problem Set #2: Solutions

Problems and Solutions:
1.
Q(x) = p for x ≈ √(–2 ln p) minus a correction term as given by (26.2.22) or (26.2.23) of Abramowitz and
Stegun. Note that if we approximate Q(x) by its upper bound 0.5·exp(–x²/2), we get x ≈ √(–2 ln 2p), which
is slightly smaller than √(–2 ln p) (Prove this!). The approximations in Abramowitz and Stegun are not
too useful in the sense that evaluating Q(·) at their approximate solution can give a p-value that is off by as
much as 4.5×10⁻⁴; not too helpful if p = 10⁻⁵!! Note that Q(y) = 10⁻⁵ for y ≈ 4.8 using y ≈ √(–2 ln p), or
y ≈ 4.65 using y ≈ √(–2 ln 2p). Both numbers are larger than the “actual” value Q(4.26504) = 10⁻⁵ which I
found iteratively using the approximation Q(x) ≈ [1/(x√(2π))]·exp(–x²/2). Hence, Q(√(2x)) = 10⁻⁵ for
x = 9.10, or 9.6 dB. Since binary PSK has error probability Q(√(2Eb/N0)) as you learned in ECE 359, we have
just discovered that an SNR of 9.6 dB is required to achieve an error probability of 10⁻⁵.
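If you want to reproduce these numbers yourself, here is a minimal Python sketch; SciPy's normal tail routines stand in for the iterative solution described above, so this is a numerical check rather than part of the original solution:

    import numpy as np
    from scipy.stats import norm

    p = 1e-5
    x_exact = norm.isf(p)                        # solves Q(x) = p; Q(x) = norm.sf(x)
    x_from_exp = np.sqrt(-2 * np.log(p))         # from Q(x) ~ exp(-x^2/2)
    x_from_bound = np.sqrt(-2 * np.log(2 * p))   # from the bound Q(x) <= 0.5*exp(-x^2/2)
    print(x_exact, x_from_exp, x_from_bound)     # ~4.265, ~4.80, ~4.65

    # Binary PSK: Q(sqrt(2*Eb/N0)) = p  =>  Eb/N0 = x_exact^2 / 2
    snr = x_exact ** 2 / 2
    print(snr, 10 * np.log10(snr))               # ~9.10, ~9.6 dB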
2.(a)
By duality, we have that

(2T/π)·cos(πtT)/(1 – (2tT)²) ↔ rect(f/T)·cos(πf/T)

(what happened to s(–f)?). This is not quite what we need: replace the constant T by the constant α/T and rearrange to get

x(t) = cos(απt/T)/(1 – (2αt/T)²) ↔ X(f) = (Tπ/2α)·rect(fT/α)·cos(πfT/α).

X(f) is a cosinusoidal function of amplitude X(0) = Tπ/2α > T (because π > 2 and α ≤ 1) and support
[–α/2T, α/2T] ⊂ [–1/2T, 1/2T] for α < 1. The area under X(f) equals x(0) = 1.
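As a quick numerical sanity check on this transform pair (not part of the original solution; the values T = 1 and α = 0.5 are arbitrary), one can recover x(t) by numerically inverting X(f):

    import numpy as np
    from scipy.integrate import quad

    T, alpha = 1.0, 0.5

    def X(f):
        # X(f) = (pi*T/(2*alpha)) * cos(pi*f*T/alpha) on [-alpha/2T, alpha/2T], zero elsewhere
        if abs(f) > alpha / (2 * T):
            return 0.0
        return (np.pi * T / (2 * alpha)) * np.cos(np.pi * f * T / alpha)

    def x(t):
        return np.cos(np.pi * alpha * t / T) / (1 - (2 * alpha * t / T) ** 2)

    def x_from_X(t):
        # X(f) is even, so the inverse Fourier transform reduces to a cosine integral
        val, _ = quad(lambda f: 2 * X(f) * np.cos(2 * np.pi * f * t), 0, alpha / (2 * T))
        return val

    for t in [0.0, 0.3, 1.7]:          # avoid the removable singularity at t = T/(2*alpha)
        print(x(t), x_from_X(t))       # the two columns should agree

    print(quad(X, -alpha / (2 * T), alpha / (2 * T))[0])   # area under X(f) = x(0) = 1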
(b),(c) The two functions are as shown in the leftmost figure below. Now, x(t)·y(t) ↔ X(f)✳Y(f) = Z(f), where ✳
denotes convolution. Z(f) = ∫_{–∞}^{∞} X(λ)Y(f–λ) dλ = ∫_{–∞}^{∞} X(λ)Y(λ–f) dλ since Y(f) is an even function. Thus, we
see that “flipping” is not necessary; just “sliding” Y to the right by f suffices. Since X and Y are even
functions of f, so must Z be an even function of f (why?). Now, if 0 ≤ f ≤ (1–α)/2T, then Y(λ–f) is as
shown in the second figure below, and we get

Z(f) = ∫_{–α/2T}^{+α/2T} (Tπ/2α)·cos(πλT/α)·T dλ = (T/2)·∫_{–π/2}^{π/2} cos x dx = T.

On the other hand, if (1–α)/2T ≤ f ≤ (1+α)/2T, then Y(λ–f) is as shown in the third figure below, where
the left end point of Y(λ–f) is f – 1/2T (be sure you understand why this is so!), and thus we get

Z(f) = ∫_{f–1/2T}^{+α/2T} (Tπ/2α)·cos(πλT/α)·T dλ = (T/2)·∫_{β}^{π/2} cos x dx = (T/2)·(1 – sin β), where β = π(fT/α – 1/2α).

Finally, if f > (1+α)/2T, then Y(λ–f) is as shown in the right-hand figure below and hence Z(f) = 0.
[Figures: X(f), of height Tπ/2α and support [–α/2T, α/2T], and Y(f), of height T and support [–1/2T, 1/2T], together with Y(λ–f) slid to the three positions discussed above.]
Putting the results together, we get the following result (cf. Blahut, p. 28):

Z(f) = T                                       for 0 ≤ |f| ≤ (1–α)/2T,
Z(f) = (T/2)·[1 – sin( πT(|f| – 1/2T)/α )]     for (1–α)/2T ≤ |f| ≤ (1+α)/2T,
Z(f) = 0                                       for |f| > (1+α)/2T.
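A short sketch (again with arbitrary T and α; not part of the original solution) that evaluates this piecewise expression and compares it against a brute-force numerical convolution of X(f) and Y(f) = T·rect(fT):

    import numpy as np

    T, alpha = 1.0, 0.5

    def X(f):
        f = np.asarray(f, dtype=float)
        return np.where(np.abs(f) <= alpha / (2 * T),
                        (np.pi * T / (2 * alpha)) * np.cos(np.pi * f * T / alpha), 0.0)

    def Y(f):
        f = np.asarray(f, dtype=float)
        return np.where(np.abs(f) <= 1 / (2 * T), T, 0.0)

    def Z_formula(f):
        af = abs(f)
        if af <= (1 - alpha) / (2 * T):
            return T
        if af <= (1 + alpha) / (2 * T):
            return (T / 2) * (1 - np.sin(np.pi * T * (af - 1 / (2 * T)) / alpha))
        return 0.0

    # brute-force convolution Z(f) = integral of X(lam) * Y(f - lam) over lam
    lam = np.linspace(-1 / T, 1 / T, 20001)
    dlam = lam[1] - lam[0]
    for f in [0.0, 0.3 / T, 0.5 / T, 0.7 / T, 0.9 / T]:
        z_num = np.sum(X(lam) * Y(f - lam)) * dlam
        print(f, Z_formula(f), z_num)    # the last two columns should agree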
3.(a)
∫_{–∞}^{∞} |∑_{i=1}^{M} si(t)|² dt = ∫_{–∞}^{∞} [∑_{i=1}^{M} si(t)]·[∑_{j=1}^{M} sj(t)]* dt = ∑_{i=1}^{M} ∫_{–∞}^{∞} |si(t)|² dt + ∑_{i=1}^{M} ∑_{j≠i} ∫_{–∞}^{∞} si(t)[sj(t)]* dt
= M·Es + ∑_{i=1}^{M} ∑_{j≠i} ρij = M·Es + M(M–1)·ρ̄, where ρ̄ is the average of the M(M–1) inner products. But
the integral must be nonnegative (why?) and hence M·Es + M(M–1)·ρ̄ ≥ 0, that is, ρ̄ ≥ –Es/(M–1).
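The bound is easy to test numerically; the following sketch (discrete-time vectors standing in for the signals, with arbitrary dimensions; not part of the original solution) draws random equal-energy signal sets and checks that the average inner product never falls below –Es/(M–1):

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, Es = 8, 64, 1.0                 # M signals, N samples each (dimensions are arbitrary)

    for trial in range(1000):
        s = rng.standard_normal((M, N))
        s *= np.sqrt(Es) / np.linalg.norm(s, axis=1, keepdims=True)   # give every signal energy Es
        G = s @ s.T                                                   # matrix of inner products
        rho_bar = (G.sum() - np.trace(G)) / (M * (M - 1))             # average of the M(M-1) cross terms
        assert rho_bar >= -Es / (M - 1) - 1e-12
    print("rho_bar >= -Es/(M-1) held in every trial")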
(b)
∫_{–∞}^{∞} |∑_{i=1}^{M} ai·si(t)|² dt = ∫_{–∞}^{∞} [∑_{i=1}^{M} ai·si(t)]·[∑_{j=1}^{M} aj·sj(t)]* dt = ∑_{i=1}^{M} ∫_{–∞}^{∞} |ai·si(t)|² dt > 0

because the cross terms vanish by orthogonality, each remaining integral is nonnegative, and at least one is
strictly positive. Hence, the sum ∑_{i=1}^{M} ai·si(t) cannot be identically zero.
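In discrete time the same argument reads as follows (an illustration with arbitrary dimensions, not part of the original solution): the energy of ∑ ai·si(t) equals Es·∑|ai|², which is zero only if every ai is zero.

    import numpy as np

    rng = np.random.default_rng(1)
    M, N, Es = 4, 32, 2.0
    Q, _ = np.linalg.qr(rng.standard_normal((N, M)))
    s = np.sqrt(Es) * Q.T                            # M orthogonal rows, each with energy Es

    a = rng.standard_normal(M)                       # arbitrary, not-all-zero coefficients
    combo = a @ s                                    # the linear combination sum_i a_i * s_i
    print(np.sum(combo ** 2), Es * np.sum(a ** 2))   # equal, and > 0 unless every a_i is 0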
(c)
∫_{–∞}^{∞} |qi(t)|² dt = ∫_{–∞}^{∞} |si(t) – s̄(t)|² dt = ∫_{–∞}^{∞} |si(t)|² dt + ∫_{–∞}^{∞} |s̄(t)|² dt – ∫_{–∞}^{∞} (si(t)[s̄(t)]* + [si(t)]*·s̄(t)) dt
= Es + (1/M²)·M·Es – (2/M)·Es = Es·(1 – 1/M)

because of the orthogonality of the signals si(t). It follows from part (a) that some inner product of the qi(t)’s
must be at least as large as –[Es(M–1)/M]/(M–1) = –Es/M. It is easy to show that the sum of the qi(t) is 0, and
thus the signals cannot be linearly independent.
(d)
σij = ∫_{–∞}^{∞} qi(t)[qj(t)]* dt = ∫_{–∞}^{∞} [si(t) – s̄(t)][sj(t) – s̄(t)]* dt = 0 + (1/M²)·M·Es – (2/M)·Es = –Es/M for all i ≠ j.

Hence, all the inner products have value equal to the minimum possible average value of the inner product.
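A small sketch of parts (c) and (d) (using M orthogonal one-sample-per-dimension “signals”; an illustration only, not part of the original solution) that builds qi = si – s̄ and checks the claimed energies and inner products:

    import numpy as np

    M, Es = 5, 3.0
    s = np.sqrt(Es) * np.eye(M)            # M orthogonal "signals" with energy Es, one sample per dimension
    q = s - s.mean(axis=0)                 # q_i = s_i - s_bar

    G = q @ q.T                            # inner products of the q_i
    print(np.diag(G), Es * (1 - 1 / M))    # every energy equals Es*(1 - 1/M)
    print(G[0, 1], -Es / M)                # every cross inner product equals -Es/M
    print(np.abs(q.sum(axis=0)).max())     # the q_i sum to zero, so they are linearly dependent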
4.(a)
E0 = ∫_{–T/2}^{T/2} |s0(t)|² dt = ||s0||² = A²T, and E1 = ∫_{–T/2}^{T/2} |s1(t)|² dt = ||s1||² = A²T. This latter is obtained via a standard
integral that occurs so often that you should just memorize the following result! The energy
delivered by a sinusoidal signal over any time interval that is a multiple of one quarter of the period is
0.5×(amplitude)²×(time interval). Note also that ⟨s0, s1⟩ = ∫_{–T/2}^{T/2} √2·A²·cos(πt/T) dt = 2√2·A²T/π. The signal
outputs are thus A²T – 2√2·A²T/π if a 0 is transmitted and 2√2·A²T/π – A²T if a 1 is transmitted.
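Taking s0(t) = A and s1(t) = √2·A·cos(πt/T) on [–T/2, T/2], which is consistent with the numbers above (the exact problem statement is assumed here), the energies and the inner product can be checked numerically:

    import numpy as np
    from scipy.integrate import quad

    A, T = 2.0, 1.0                                  # arbitrary amplitude and duration

    def s0(t):
        return A

    def s1(t):
        return np.sqrt(2) * A * np.cos(np.pi * t / T)

    E0 = quad(lambda t: s0(t) ** 2, -T / 2, T / 2)[0]
    E1 = quad(lambda t: s1(t) ** 2, -T / 2, T / 2)[0]
    ip = quad(lambda t: s0(t) * s1(t), -T / 2, T / 2)[0]

    print(E0, E1, A ** 2 * T)                        # both energies equal A^2 * T
    print(ip, 2 * np.sqrt(2) * A ** 2 * T / np.pi)   # <s0, s1> = 2*sqrt(2)*A^2*T/pi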
(b)
σ² = (N0/2)·∫_{–T/2}^{T/2} |s0(t) – s1(t)|² dt = (N0/2)·||s0 – s1||² = (N0/2)·[||s0||² + ||s1||² – 2⟨s0, s1⟩] = N0·A²T·(1 – 2√2/π).

If a 0 is transmitted, z is a Gaussian random variable with mean A²T – 2√2·A²T/π = A²T(1 – 2√2/π) and
variance N0·A²T(1 – 2√2/π). Since the threshold is (E0 – E1)/2 = 0, Pe,0 = P{z < 0 | 0 transmitted}

= Q( √(A²T(1 – 2√2/π)/N0) ) = Q( ||s0 – s1||/√(2N0) )

exactly as obtained in class. Similarly, Pe,1 = Q( √(A²T(1 – 2√2/π)/N0) ) = Q( ||s0 – s1||/√(2N0) ) also, so Pe = Q( ||s0 – s1||/√(2N0) ).
(c)
The difference now is that ⟨s0, s1⟩ = ∫_{–T/2}^{T/2} √2·A²·sin(πt/T) dt = 0, that is, the signals are orthogonal. Hence,
the signal outputs are ±A²T while the noise variance is N0·A²T. Hence, Pe = Q( √(A²T/N0) ) = Q( √(Eb/N0) )
as you might remember from a previous course (or Problem Set #1).

(d)
Since 1 – 2√2/π < 1, the error probability in part (b) is larger than the error probability in part (c).
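A short numerical comparison of the two error probabilities (SNR values chosen arbitrarily; not part of the original solution):

    import numpy as np
    from scipy.stats import norm

    Q = norm.sf                               # Q(x) = P{N(0,1) > x}
    for snr in [4.0, 9.0, 16.0]:              # snr = A^2 * T / N0
        pe_b = Q(np.sqrt(snr * (1 - 2 * np.sqrt(2) / np.pi)))   # part (b), antipodal-ish but correlated signals
        pe_c = Q(np.sqrt(snr))                                  # part (c), orthogonal signals
        print(snr, pe_b, pe_c)                # pe_b is always the larger, as argued in part (d)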