CHAPTER 8
SOLUTIONS TO PROBLEMS
8.1 SOLUTIONS TO PROBLEMS OF CHAPTER 1
Solution to Problem 1.1 From Tab. 1.2, using the scaling, time shift and time reversal properties and writing y(t) = x(−2(t − t0/2)), we obtain

Y(f) = (1/2) X(−f/2) e^{−j2π t0 f/2} .
Solution to Problem 1.2 From Tab. 1.2, using the conjugation, frequency shift and linearity properties and writing Y(f) = −2X*[−(f − f0)], we obtain

y(t) = −2 x*(t) e^{j2π f0 t} .
Solution to Problem 1.3 Using Tab. 1.3 and the linearity property of Tab. 1.2, we obtain

X(f) = (3/2) [δ(f − f0) + δ(f + f0)] + (1/2) [δ(f − f1) + δ(f + f1)] .

For the computation of z(t) we first compute Z(f) and then compute its inverse Ftf. From the convolution properties of Tab. 1.2 and the Ftf of sinc(t) in Tab. 1.3 we obtain

Z(f) = X(f) rect(f) .

Since f1 > 0.5, the second term of X(f), related to f1, is multiplied by zero in Z(f), while the first term (with f0 < 0.5) is multiplied by one. Hence we obtain

Z(f) = (3/2) [δ(f − f0) + δ(f + f0)]

which is the transform of a scaled cosine function, i.e. z(t) = 3 cos(2πf0 t).
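As a quick numerical sanity check, the ideal lowpass operation above can be reproduced with an FFT. This is a minimal sketch; the tone frequencies f0 = 0.25 and f1 = 0.75 are hypothetical values chosen so that f0 < 0.5 < f1 and both fall exactly on FFT bins.

```python
import numpy as np

# Hypothetical values with f0 < 0.5 < f1, chosen to fall on FFT bins
f0, f1 = 0.25, 0.75
N, dt = 4096, 1.0 / 16                  # record of N*dt = 256 s, tones are bin-aligned
t = np.arange(N) * dt
x = 3 * np.cos(2 * np.pi * f0 * t) + np.cos(2 * np.pi * f1 * t)

# Ideal lowpass with cutoff 0.5 (the rect(f) filter): zero out |f| >= 0.5
X = np.fft.fft(x)
f = np.fft.fftfreq(N, dt)
z = np.fft.ifft(X * (np.abs(f) < 0.5)).real

err = np.max(np.abs(z - 3 * np.cos(2 * np.pi * f0 * t)))
print(f"max |z(t) - 3 cos(2 pi f0 t)| = {err:.2e}")   # ~1e-13
```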
Solution to Problem 1.4
a) We have

X(0) = ∫_{−∞}^{+∞} x(t) dt ≥ 0

since x(t) ≥ 0 for all t.
b) A well-known property of the Lebesgue integral states that for any complex-valued function y(t)

|∫_{−∞}^{+∞} y(t) dt| ≤ ∫_{−∞}^{+∞} |y(t)| dt .

We apply it to y(t) = x(t) e^{−j2πf t} and obtain

|X(f)| = |∫_{−∞}^{+∞} x(t) e^{−j2πf t} dt| ≤ ∫_{−∞}^{+∞} |x(t) e^{−j2πf t}| dt = ∫_{−∞}^{+∞} x(t) dt = X(0)

where in the last step we exploited the fact that |e^{−j2πf t}| = 1 and that |x(t)| = x(t), since x(t) is real-valued and nonnegative.
Solution to Problem 1.5
a) By writing

x(t) = A2 rect(t/(2t2)) + (A1 − A2) rect(t/(2t1))

and making use of the signal/transform pair rect(t) —F→ sinc(f), we get

X(f) = 2 A2 t2 sinc(2 t2 f) + 2 (A1 − A2) t1 sinc(2 t1 f) .

b) From the signal/transform pair e^{−πt²} —F→ e^{−πf²} in Tab. 1.3 and the linearity rule we get

y(t) = A e^{−3π} e^{−πt²}  —F→  Y(f) = A e^{−3π} e^{−πf²} = A e^{−π(f²+3)} .

c) From the same signal/transform pair in Tab. 1.3 and the time shift rule we get

Z(f) = A e^{−πf²} e^{−j2πf(−3)} = A e^{−π(f² − j6f)} .

d) By the inverse Fourier transform formula

x(t) = ∫_{−∞}^{+∞} X(f) e^{j2πf t} df = ∫_{−5/(2T)}^{5/(2T)} e^{2(T+jπt)f} df
     = [ e^{2(T+jπt)f} / (2(T + jπt)) ]_{−5/(2T)}^{5/(2T)}
     = ( e^{5(1+jπt/T)} − e^{−5(1+jπt/T)} ) / ( 2(T + jπt) ) ,     t ∈ R .
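The closed form found in part a) can be checked against a direct Riemann-sum evaluation of the Ftf integral (1.10). A minimal sketch, assuming hypothetical parameter values A1 = 2, A2 = 1, t1 = 0.5, t2 = 1.5 (the actual values come from the problem statement):

```python
import numpy as np

A1, A2, t1, t2 = 2.0, 1.0, 0.5, 1.5
dt = 1e-3
t = np.arange(-4, 4, dt)
# staircase pulse: A1 on |t| < t1, A2 on t1 < |t| < t2
x = A2 * (np.abs(t) < t2) + (A1 - A2) * (np.abs(t) < t1)

for f in (0.0, 0.3, 1.0, 2.7):
    X_num = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt   # Riemann sum of (1.10)
    # np.sinc(u) = sin(pi u)/(pi u), matching the sinc used in the text
    X_ana = 2*A2*t2*np.sinc(2*t2*f) + 2*(A1 - A2)*t1*np.sinc(2*t1*f)
    print(f"f = {f:4.1f}:  numeric {X_num.real:+.3f}   analytic {X_ana:+.3f}")
```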
Solution to Problem 1.6
a) The pair

x(t) = ( t² / (t² + T²) ) cos(2πt/T)  —F→  X(f) = 2fT² / (1 + f⁴T⁴)

violates the even signal rule, since x(t) is even while the Ftf is odd. It also violates the real signal rule, since the Ftf is not Hermitian.
b) In the given pair the Ftf is strictly positive, X(f) > 0 for all f, while x(0) = 0. By the inverse Ftf formula (1.11) it should be

x(0) = ∫_{−∞}^{+∞} X(f) df > 0 ,

which contradicts x(0) = 0, so the pair is not valid.
c) The pair

x(t) = 2j sin[ 2π( t/T + t³/T³ ) ]  —F→  X(f) = T (1 + j) log(1 + f²T²)

violates the odd signal rule, since x(t) is odd while the Ftf is even.
Solution to Problem 1.7
[Plots of X(f) and of its components X1(f) and X2(f).]

From the "house-shape" plot of X(f), it can be written as the sum of a rectangular function and a triangular function, X(f) = X1(f) + X2(f), with

X1(f) = A (1 − F2/(2F1)) rect(f/F2) ,     X2(f) = A (F2/(2F1)) triang(2f/F2) .

Then from the signal/transform pairs

rect(f/F2)  —F⁻¹→  F2 sinc(F2 t) ,     triang(f/(F2/2))  —F⁻¹→  (F2/2) sinc²((F2/2) t)

we get the expression of x(t).
Solution to Problem 1.8
a) By the trigonometric identity sin²x = (1 − cos 2x)/2 we get

Xℓ = { A/2 ,    ℓ = 0
      −A/4 ,   ℓ = ±2
       0 ,     elsewhere

By direct integration in (1.14) we get

Yℓ = (A/(jπℓ)) [1 − (−1)^ℓ] = { −j 2A/(πℓ) ,  ℓ odd
                                0 ,           ℓ even

By the identity 2 sin x cos x = sin(2x) we write

z(t) = sin(4πt/Tp) + 1 = 1 + (1/(2j)) e^{j4πt/Tp} − (1/(2j)) e^{−j4πt/Tp} .

Then the only nonzero Fourier coefficients are

Z0 = 1 ,     Z−2 = (1/2) j ,     Z2 = −(1/2) j .

b) x(t) is real-valued and nonnegative, and so must be its zero-order Fourier coefficient X0.
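The coefficients of z(t) can be verified numerically by approximating the Fourier-series integral (1.14) with a mean over one uniformly sampled period; a minimal sketch (Tp = 1 assumed):

```python
import numpy as np

# Z_l = (1/Tp) * integral over one period of z(t) exp(-j 2 pi l t / Tp)
Tp = 1.0
t = np.linspace(0, Tp, 4000, endpoint=False)
z = 1 + np.sin(4 * np.pi * t / Tp)

for l in range(-3, 4):
    Zl = np.mean(z * np.exp(-2j * np.pi * l * t / Tp))
    print(f"Z_{l:+d} = {Zl:.3f}")
# Expected: Z_0 = 1, Z_2 = -0.5j, Z_-2 = +0.5j, all the others = 0
```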
Solution to Problem 1.9
a) By explicitly calculating the sum (1.17), which in this case has a finite number of terms, we get

X(f) = Σ_{n=−3}^{6} A e^{(−2−j2πfT)n} = A ( e^{(−2−j2πfT)(−3)} − e^{(−2−j2πfT)·7} ) / ( 1 − e^{−2−j2πfT} )
     = A ( e^{6+j6πfT} − e^{−14−j14πfT} ) / ( 1 − e^{−2−j2πfT} )

where we have used the general formula

Σ_{n=n1}^{n2} z^n = ( z^{n2+1} − z^{n1} ) / ( z − 1 ) .

By writing the series (1.17) as

Y(f) = Σ_{n=0}^{+∞} ( (1/2) e^{−j2πfT} )^n + Σ_{n=−∞}^{−1} ( (1/2) e^{j2πfT} )^{−n}
     = Σ_{n=0}^{+∞} ( (1/2) e^{−j2πfT} )^n + Σ_{k=1}^{+∞} ( (1/2) e^{j2πfT} )^k

we observe that both series converge and

Y(f) = 1 / ( 1 − (1/2) e^{−j2πfT} ) + ( (1/2) e^{j2πfT} ) / ( 1 − (1/2) e^{j2πfT} ) = 3 / ( 5 − 4 cos(2πfT) ) .

b) X(f) is a periodic function of f with period F. Then it is the Ftf of a discrete-time signal x with quantum T = 1/F. Within the interval [−F/2, F/2) its expression is X(f) = 2Af/F. By the inverse Ftf (1.18)

x(nT) = (1/F) ∫_{−F/2}^{F/2} (2Af/F) e^{j2πfnT} df
      = (2A/F²) [ ( f/(j2πnT) − 1/(j2πnT)² ) e^{j2πfnT} ]_{f=−F/2}^{f=F/2}
      = −j (A/(nπ)) (−1)^n .

c) The supposed Ftf is not a periodic function of f, hence it cannot be the Ftf of a discrete-time signal.
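Both closed forms of part a) lend themselves to a direct numerical check against the defining series; a minimal sketch, with A = 1 and T = 1 assumed:

```python
import numpy as np

A, T = 1.0, 1.0
f = np.array([0.05, 0.2, 0.37])
q = np.exp(-2 - 2j * np.pi * f * T)

# finite sum n = -3..6 versus its closed form
X_sum = sum(A * q**n for n in range(-3, 7))
X_cf  = A * (q**(-3) - q**7) / (1 - q)
print(np.max(np.abs(X_sum - X_cf)))          # ~1e-15

# two-sided geometric series versus 3/(5 - 4 cos(2 pi f T))
n = np.arange(0, 200)
Y_sum = (np.sum(0.5**n[:, None] * np.exp(-2j*np.pi*f*T*n[:, None]), axis=0)
         + np.sum(0.5**n[1:, None] * np.exp(2j*np.pi*f*T*n[1:, None]), axis=0))
Y_cf = 3 / (5 - 4 * np.cos(2 * np.pi * f * T))
print(np.max(np.abs(Y_sum - Y_cf)))          # ~1e-15
```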
Solution to Problem 1.10 The signal can also be written as

x(t) = { 0 ,             t < −T
         A e^{−t/T} ,    −T ≤ t < T
         2A e^{−t/T} ,   t ≥ T

[Plot of x(t): the signal decays from A, reaches A/e at t = T⁻, jumps to 2A/e at t = T⁺ and then decays again.]

whose plot is shown above.
By the above expression and (1.20) we can calculate the signal energy as

E_x = ∫_{−T}^{T} A² e^{−2t/T} dt + ∫_{T}^{+∞} 4A² e^{−2t/T} dt
    = A² (T/2) (e² − e^{−2}) + 2A² T e^{−2} = A² (T/2) (e² + 3/e²) = 3.9·10⁻³ V²s

To determine the spectral density E_x(f) = |X(f)|² we must first find the Ftf X, which can be obtained by evaluating the integral (1.10)

X(f) = ∫_{−T}^{T} A e^{−t(1/T + j2πf)} dt + ∫_{T}^{+∞} 2A e^{−t(1/T + j2πf)} dt .

However, we prefer to start from the signal/transform pair

y(t) = e^{−t/T} 1(t)  —F→  Y(f) = T/(1 + j2πfT)

given in Tab. 1.3 and write

x(t) = A e · e^{−(t+T)/T} 1(t + T) + A e^{−1} · e^{−(t−T)/T} 1(t − T) = A e y(t + T) + A e^{−1} y(t − T) .

By the time shift rule (see Tab. 1.2) we get

X(f) = A e Y(f) e^{j2πfT} + A e^{−1} Y(f) e^{−j2πfT} = AT ( e^{1+j2πfT} + e^{−(1+j2πfT)} ) / ( 1 + j2πfT ) .

Hence we get

E_x(f) = A²T² ( e² + e^{−2} + 2 cos(4πfT) ) / ( 1 + 4π²f²T² ) .
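A useful consistency check is Parseval's theorem: the integral of E_x(f) above must reproduce E_x = A²(T/2)(e² + 3e⁻²). A minimal numerical sketch, with A = T = 1 assumed as placeholders:

```python
import numpy as np

A, T = 1.0, 1.0
f = np.linspace(-2000, 2000, 4_000_001)
Exf = A**2 * T**2 * (np.e**2 + np.e**-2 + 2*np.cos(4*np.pi*f*T)) / (1 + 4*np.pi**2*f**2*T**2)
print("integral of E_x(f):   ", np.trapz(Exf, f))
print("A^2 (T/2)(e^2 + 3/e^2):", A**2 * (T/2) * (np.e**2 + 3*np.e**-2))
```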
Solution to Problem 1.11
a) In the frequency domain we have (see the plots below)

X(f) = (A/F) triang( (f − F)/F ) ,     Y(f) = (A/(2F)) rect( f/(2F) )

Z(f) = X(f) Y(f) = { A² f/(2F³) ,   0 < f < F
                     0 ,            elsewhere

[Plots of X(f), Y(f) and Z(f).]

b) By the Parseval Theorem 1.1, and since Z(f) is real-valued,

E_z = ∫_{0}^{F} (A⁴/(4F⁶)) f² df = A⁴ F³/(12 F⁶) = A⁴/(12 F³)

c) Again by the Parseval Theorem 1.1 we get

E_xy = ∫_{0}^{F} (A²/(2F³)) f df = A² F²/(4F³) = A²/(4F)

E_yz = ∫_{0}^{F} (A/(2F)) (A²/(2F³)) f df = A³ F²/(8F⁴) = A³/(8F²)
Solution to Problem 1.12
a) Since it is

x(t) = { j2πt/T − 1 ,    |t| < T/2
         jπt/T − 1/2 ,   |t| = T/2
         0 ,             |t| > T/2

by (1.20) we get

E_x = ∫_{−T/2}^{T/2} |j2πt/T − 1|² dt = ∫_{−T/2}^{T/2} [ (2πt/T)² + 1 ] dt
    = (4π²/T²) ∫_{−T/2}^{T/2} t² dt + ∫_{−T/2}^{T/2} dt = T ( π²/3 + 1 ) ≈ 8.58 µs

The cross energy between x and x* is obtained by (1.21)

E_{xx*} = ∫_{−∞}^{+∞} x(t) [x*(t)]* dt = ∫_{−∞}^{+∞} x²(t) dt = ∫_{−T/2}^{T/2} (j2πt/T − 1)² dt
        = ∫_{−T/2}^{T/2} [ −(2πt/T)² − j4πt/T + 1 ] dt = T ( −π²/3 + 1 ) ≈ −4.58 µs

b) To find the energy spectral densities we must first find the Ftf of x. We consider the signal y(t) = rect(t/T), with Y(f) = T sinc(fT). Then, since

x(t) = −(1/T) (−j2πt) y(t) − y(t) ,

by the linearity and differentiation rules (see Tab. 1.2) we get

X(f) = −(1/T) Ẏ(f) − Y(f) = −T [ d(fT) + sinc(fT) ]

where d(u) is the derivative of sinc(u) and has the expression

d(u) = d/du ( sin πu / (πu) ) = ( cos πu − sinc(u) ) / u

yielding

X(f) = −T [ ( 1 − 1/(fT) ) sinc(fT) + cos(πfT)/(fT) ] .

Observe that X is real-valued, hence

E_x(f) = X²(f) = ( (1 + f²T² − 2fT)/f² ) sinc²(fT) − 2 ( (1 − fT)/f² ) sinc(fT) cos(πfT) + cos²(πfT)/f²

and by the conjugate signal rule (see Tab. 1.2)

E_{xx*}(f) = X(f) [X*(−f)]* = X(f) X(−f) .
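The closed form of X(f) derived in part b) can be cross-checked against a direct numerical evaluation of the Ftf integral over the support of x(t); a minimal sketch, with T = 1 assumed:

```python
import numpy as np

# x(t) = (j 2 pi t / T - 1) on |t| < T/2, zero elsewhere
T = 1.0
dt = 1e-4
t = np.arange(-T/2, T/2, dt) + dt/2            # midpoint rule over the support
x = 1j * 2 * np.pi * t / T - 1

for f in (0.3, 1.2, 2.5):
    X_num = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    X_ana = -T * ((1 - 1/(f*T)) * np.sinc(f*T) + np.cos(np.pi*f*T) / (f*T))
    print(f"f = {f}:  numeric {X_num:.5f}   closed form {X_ana:.5f}")
```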
Solution to Problem 1.13 To have zero average power it is sufficient that the signal x(t) (and so |x(t)|²) vanishes for t → ∞. However, to have also infinite energy, the decay of |x(t)|² must be slow enough that its integral is infinite. For example we can choose

|x(t)|² = ( 1/(αt + 1) ) 1(t) ,     with α > 0

that is x(t) = (αt + 1)^{−1/2} · 1(t). We check that its energy is

E_x = lim_{u→∞} ∫_{0}^{u} 1/(αt + 1) dt = lim_{u→∞} (1/α) ln(αu + 1) = +∞

and its power is

M_x = lim_{u→∞} (1/(2u)) ∫_{0}^{u} 1/(αt + 1) dt = lim_{u→∞} ln(αu + 1)/(2αu) = 0 .
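A short numerical illustration (with α = 1 assumed) of the two limits: the partial energy grows without bound while the power estimate goes to zero.

```python
import numpy as np

alpha = 1.0
for u in (1e2, 1e4, 1e6, 1e8):
    Eu = np.log(alpha * u + 1) / alpha            # integral of 1/(alpha t + 1) over [0, u]
    print(f"u = {u:8.0e}   E(u) = {Eu:7.2f}   M(u) = {Eu / (2*u):.2e}")
```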
Solution to Problem 1.14
a) x(t) is periodic, and since cos(2πt/Tp) > 0 for kTp − Tp/4 < t < kTp + Tp/4 and cos(2πt/Tp) ≤ 0 for kTp + Tp/4 ≤ t ≤ kTp + 3Tp/4, we can write

x(t) = { 2 cos(2πt/Tp) ,   kTp − Tp/4 < t < kTp + Tp/4
         0 ,               kTp + Tp/4 ≤ t ≤ kTp + 3Tp/4

Thus we get

M_x = (1/Tp) ∫_{−Tp/4}^{Tp/4} 4 cos²(2πt/Tp) dt = 1 .

The signal y(t) is expressed in the form of a Fourier series with coefficients Yℓ = 1/(3 + j)^ℓ, ℓ ≥ 0. Then by Theorem 1.3 its average power is

M_y = Σ_{ℓ=0}^{+∞} 1/|3 + j|^{2ℓ} = Σ_{ℓ=0}^{+∞} 1/10^ℓ = 10/9 .

b) By the Parseval theorem, the signal energy is finite,

E_x = ∫_{−B}^{B} f⁴ df = (2/5) B⁵ .

Then, by Proposition 1.2, the average power of the signal is zero.
Solution to Problem 1.15
a) The circuit splits the voltage between the resistance and the capacitor, hence

H1(f) = ( 1/(j2πfC) ) / ( R + 1/(j2πfC) ) = 1/( 1 + j2πfRC ) = 1/( 1 + j2πf·10⁻⁵ ) .

b) For a constant input signal, the output signal is the constant scaled by the frequency response of the filter evaluated at f = 0, i.e.

vo(t) = V0 H1(0) = V0 = 2 V.

c) For a sinusoidal input signal, the output is a sinusoidal signal

vo(t) = |H1(10000)| 10 cos( 2π10000t + π/8 + arg H1(10000) ) + |H1(5000)| 5 sin( 2π5000t + π/5 + arg H1(5000) ) .

The values of the frequency response at the frequencies of the input signal are

H1(10000) = 1/( 1 + j2π·10⁴·10⁻⁵ ) ≈ 1/( 1 + j0.628 )

with amplitude and phase

|H1(10000)| = 1/|1 + j0.628| ≈ 0.85
arg H1(10000) = arg 1/(1 + j0.628) = −arctan(0.628) ≈ −0.18π

and

H1(5000) = 1/( 1 + j2π·5·10³·10⁻⁵ ) ≈ 1/( 1 + j0.314 )

with amplitude and phase

|H1(5000)| = 1/|1 + j0.314| ≈ 0.95
arg H1(5000) = arg 1/(1 + j0.314) = −arctan(0.314) ≈ −0.096π .
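The amplitude and phase values quoted above follow directly from evaluating H1(f); a minimal check:

```python
import numpy as np

# H1(f) = 1/(1 + j 2 pi f R C) with R C = 1e-5 s, at the two input frequencies
RC = 1e-5
for f in (10_000, 5_000):
    H = 1 / (1 + 2j * np.pi * f * RC)
    print(f"f = {f:5d} Hz:  |H1| = {abs(H):.3f}   arg H1 = {np.angle(H)/np.pi:+.3f} pi")
# Expected: |H1| ~ 0.85, arg ~ -0.18 pi at 10 kHz;  |H1| ~ 0.95, arg ~ -0.096 pi at 5 kHz
```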
Solution to Problem 1.16 The circuit splits the current between the resistance and the inductance, hence

H(f) = j2πfL/( j2πfL + R ) = j2πf·10⁻⁴/( j2πf·10⁻⁴ + 50 ) = j6.28·10⁻⁴ f/( j6.28·10⁻⁴ f + 50 ) .

For a sinusoidal input signal, the output is a sinusoidal signal

io(t) = |H(1000)| 20 sin( 2π1000t + π/8 + arg H(1000) ) + |H(2000)| 13 sin( 2π2000t + π/4 + arg H(2000) ) .

The values of H(f) at the frequencies of the input signal are

H(1000) ≈ j0.628/( j0.628 + 50 )

with amplitude and phase

|H(1000)| = 12.56·10⁻³ ,     arg H(1000) = π/2 − arctan(0.628/50) = 1.558

and

H(2000) ≈ j1.256/( j1.256 + 50 )

with amplitude and phase

|H(2000)| = 25.1·10⁻³ ,     arg H(2000) = π/2 − arctan(1.256/50) = 1.546 .
Solution to Problem 1.17 For a sinusoidal input signal, the output is a sinusoidal signal, scaled by the amplitude of the frequency response and phase shifted by the phase of the frequency response, evaluated at the input sinusoid frequency. Note that the first sinusoid will pass through the filter, since f0 < B, while the second is suppressed, since f1 > B. Hence we obtain

y(t) = 3 |H(f0)| cos( 2πf0 t + arg H(f0) ) ,

where

|H(f0)| = (4/5) C = 4.8

and arg H(f0) = 0. Then

y(t) = 14.4 cos( 2πf0 t + arg H(f0) ) ,

with power My = (14.4)²/2 = 103.68 V².
Solution to Problem 1.18 The properties of the various transformations are shown in the table below

transformation     a)    b)    c)    d)
causal             yes   no    yes   yes
memoryless         yes   no    yes   no
time invariant     no    no    yes   no
linear             yes   yes   no    no

Some comments
• Transformation b) is not time invariant. Let t0 be an arbitrary time shift of the input signal; the corresponding output signal will be x(t³ − t0), whereas a shifted version of the output signal would yield y(t − t0) = x((t − t0)³) = x(t³ − 3t²t0 + 3t t0² − t0³). It is also not causal (and hence not even memoryless) since for t > 1, t³ > t, so that y(t) depends on a "future" value of x.
• Transformation d) is neither linear nor time invariant, due to the B sin 2πt/T term (recall x is the only input signal to the system).
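The failure of time invariance for transformation b) is easy to exhibit numerically: shifting the input and then applying y(t) = x(t³) does not give the shifted output. A small sketch with an arbitrary test input (a Gaussian pulse is assumed here purely for illustration):

```python
import numpy as np

def x(t):                       # an arbitrary test input
    return np.exp(-t**2)

t0 = 0.7
t = np.linspace(-2, 2, 9)
out_of_shifted_input = x(t**3 - t0)        # input shifted by t0, then transformed
shifted_output       = x((t - t0)**3)      # transformed, then output shifted by t0
print(np.max(np.abs(out_of_shifted_input - shifted_output)))   # clearly nonzero
```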
Solution to Problem 1.19
a) Let x1(t) = x(t − Tp). Then by the shift-invariance property in Tab. 1.4, we have y ∗ x1 = z1, with z1(t) = z(t − Tp). On the other hand, by the periodicity of x, we have x1 = x, and hence y ∗ x1 = y ∗ x = z. Thus it must be z1(t) = z(t − Tp) = z(t) for all t. A simpler proof can be derived in the frequency domain by making use of the Ftf of a periodic signal (1.15).
b) Suppose the amplitude of x is bounded by A > 0 and let K = ∫_{−∞}^{+∞} |y(t)| dt. Then we can write the amplitude of z as

|z(t)| = | ∫_{−∞}^{+∞} x(t − u) y(u) du | ≤ ∫_{−∞}^{+∞} |x(t − u) y(u)| du ≤ A ∫_{−∞}^{+∞} |y(u)| du = AK

which is therefore bounded by AK.
c) An immediate counterexample is obtained by choosing the step function x(t) = y(t) = 1(t). Their convolution yields z(t) = t · 1(t), whose amplitude is not bounded.
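The counterexample of part c) can also be seen numerically: on any finite window, the convolution of two unit steps grows linearly with t, hence it is unbounded. A minimal sketch:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 50, dt)
step = np.ones_like(t)                       # 1(t) restricted to the simulated window
z = np.convolve(step, step)[:len(t)] * dt    # z(t) = t * 1(t) on the window
print(np.allclose(z, t, atol=0.02))          # True: z grows linearly with t
```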
Solution to Problem 1.20 By making use of the results in Tab. 1.3 and the duality and scaling rules it is

x(t) = (A/T1²) · 1/( 1 + (t/T1)² )  —F→  X(f) = (Aπ/T1) e^{−2πT1|f|}

y(t) = B sinc(t/T1)  —F→  Y(f) = B T1 rect(T1 f)

Hence from (1.45)

Z(f) = ABπ e^{−2πT1|f|} rect(T1 f) = { ABπ e^{−2πT1|f|} ,   |f| < 1/(2T1)
                                       0 ,                  elsewhere

and for the inverse Ftf of Z(f) we get

z(t) = ABπ ∫_{−1/(2T1)}^{1/(2T1)} e^{2π(jft − T1|f|)} df
     = ABπ ∫_{−1/(2T1)}^{0} e^{2πf(jt + T1)} df + ABπ ∫_{0}^{1/(2T1)} e^{2πf(jt − T1)} df
     = ABπ [ e^{2πf(jt + T1)} / (2π(jt + T1)) ]_{−1/(2T1)}^{0} + ABπ [ e^{2πf(jt − T1)} / (2π(jt − T1)) ]_{0}^{1/(2T1)}
     = ABπ ( 1 − e^{−π−jπt/T1} ) / ( 2π(jt + T1) ) + ABπ ( e^{−π+jπt/T1} − 1 ) / ( 2π(jt − T1) ) .

As regards the cross energy, since Y(f) is real, we get

E_xy = ∫_{−∞}^{+∞} X(f) Y(f) df = ∫_{−∞}^{+∞} Z(f) df = z(0) = (AB/T1) (1 − e^{−π}) .
Solution to Problem 1.21
a) Let τ = RC = 100 ns; the filter frequency response can be written as

G(f) = 1/( 1 + [R + 1/(j2πfC)] j2πfC ) = 1/( 2 + j2πfτ )

and the impulse response is obtained by inverse Ftf as

g(t) = (1/τ) e^{−2t/τ} 1(t) .

b) In this case the frequency response is

G(f) = 1/( 1 + [R + 1/(j2πfC)] (1/R + j2πfC) ) = j2πfτ/( 1 − (2πfτ)² + 3 j2πfτ ) .

By the linearity of the filter we can find the filter output to the input signal vi(t) as the sum vo(t) = vo,0(t) + vo,1(t) + vo,2(t) of the responses to each of the following inputs, respectively,

e0(t) = V0 ,     e1(t) = V0 cos 2πf0 t ,     e2(t) = V0 sin 2πf1 t .

Since the filter is real, its response to a sinusoid with frequency fi ≥ 0 is as given in Example 1.3 B. Thus, by observing that 2πf0τ = 1 and 2πf1τ = 100, it is

G(0) = 0     and     vo,0(t) = 0

G(f0) = j/( 1 − 1 + 3j ) = 1/3 ,     |G(f0)| = 1/3 ,     arg G(f0) = 0 ,
vo,1(t) = (V0/3) cos 2πf0 t

G(f1) = 100j/( 1 − 10⁴ + 300j ) ≈ −(1/100) j ,     |G(f1)| ≈ 1/100 ,     arg G(f1) ≈ −π/2 ,
vo,2(t) ≈ (V0/100) sin( 2πf1 t − π/2 ) = −(V0/100) cos 2πf1 t

The output signal is

vo(t) ≈ V0 [ (1/3) cos 2πf0 t − (1/100) cos 2πf1 t ] .
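The values of G(f) used in part b) can be checked by direct evaluation at 2πfτ = 1 and 2πfτ = 100; a minimal sketch:

```python
import numpy as np

# G(f) = j 2 pi f tau / (1 - (2 pi f tau)^2 + 3 j 2 pi f tau), evaluated at w = 2 pi f tau
for w in (1.0, 100.0):
    G = 1j * w / (1 - w**2 + 3j * w)
    print(f"2 pi f tau = {w:5.1f}:  G = {G:.4f}   |G| = {abs(G):.4f}   arg G = {np.angle(G)/np.pi:+.3f} pi")
# Expected: G = 1/3 (zero phase) at f0, G ~ -j/100 at f1
```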
Solution to Problem 1.22
a) By Theorem 1.4 we get

Y(f) = Σ_{ℓ=−∞}^{+∞} X2( f − ℓ/Ts ) .

On the other hand, by (1.45) we get

X2(f) = G1(f) X1(f) = G1(f) G2(f) X(f) = G1(f) G2(f) = rect(2Ts f) cos(2πf Ts)

and Y has the plot shown below.

[Plot of Y(f): periodic of period 1/Ts, with lobes of width 1/(2Ts) centered at the multiples of 1/Ts.]

b) The cascade of the two filters g1 and g2 is equivalent to a single filter with frequency response G(f) = G1(f) G2(f). Then we can apply the results of Example 1.3 B, as the input signal is the sum of two sinusoids with frequencies f0 and 2f0, respectively. Since 2f0 > 1/(4Ts), we have G1(2f0) = 0, i.e. the filter g1 removes the sinusoid at frequency 2f0. For the sinusoid at frequency f0 < 1/(4Ts) we get

G(f0) = G2(f0) = cos(π/3) = 1/2

and therefore

x2(t) = (1/2) cos(2πf0 t) .

By sampling x2 we obtain

yn = x2(nTs) = (1/2) cos(nπ/3) .
Solution to Problem 1.23 By Proposition 1.6 we need g to satisfy (1.58) for a correct interpolation. If we consider the sampled version of the given impulse response we get

g(nT) = (1/T) cos(2πf0 nT) rect(nT/τ) .

Observe that the condition at n = 0 is always met, for all values of f0 and τ. We therefore require the interpolate filter to satisfy g(nT) = 0, for all n ≠ 0.
a) To have g(nT) = 0, for all n ≠ 0, independently of the value of f0, it must be rect(nT/τ) = 0. Since rect(a) = 0 if and only if |a| > 1/2, the condition is met for |nT/τ| > 1/2, with n = ±1, ±2, . . ., that is for T/τ > 1/2, i.e. τ < 2T.
b) In the case τ = 3T, as τ > 2T, condition (1.58) is not met for all values of f0. We have

g(nT) = (1/T) cos(2πf0 nT) rect(n/3) = { (1/T) cos(2πf0 T) ,   n = ±1
                                          0 ,                  n = ±2, ±3, . . .

so that the condition is satisfied if and only if

cos(2πf0 T) = 0  ⇔  2πf0 T = π/2 + kπ  ⇔  f0 = 1/(4T) + k/(2T) = (2k + 1)/(4T)

that is, f0 must be an odd multiple of 1/(4T).
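The two conditions on τ and f0 can be verified numerically on the samples g(nT); a minimal sketch with T = 1 assumed and rect defined as in the text (rect(a) = 1 for |a| < 1/2, 0 otherwise):

```python
import numpy as np

T = 1.0
rect = lambda a: (np.abs(a) < 0.5).astype(float)
n = np.arange(1, 6)

def samples(f0, tau):
    return (1/T) * np.cos(2*np.pi*f0*n*T) * rect(n*T/tau)

print(samples(f0=0.37, tau=1.5*T))        # tau < 2T: all zero, for any f0
print(samples(f0=0.37, tau=3*T))          # tau = 3T, generic f0: nonzero at n = 1
print(samples(f0=0.75, tau=3*T))          # f0 = 3/(4T), odd multiple of 1/(4T): ~zero
```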
Solution to Problem 1.24 The sampled version of g has the expression

g(nT) = { 1 ,                 n = 0
          A sinc²(nT/τ) ,     n ≠ 0

The interpolate filter satisfies condition (1.58) if

A sinc²(nT/τ) = 0 ,     n ∈ Z, n ≠ 0

or equivalently

A = 0   or   sinc(nT/τ) = 0 , n ∈ Z, n ≠ 0

that is

A = 0   or   τ = T/n .

Thus we require either A = 0 or τ a submultiple of T.
Concerning the bandwidth of g, in the case A = 0 we get G(f) = T rect(Tf), i.e. the filter with rectangular frequency response and minimum bandwidth B = 1/(2T). In the case A ≠ 0, τ = T/n, with n a positive integer, we get

G(f) = (1 − A) T rect(Tf) + (AT/n) triang(Tf/n) .

Thus the band of g is

Bg = [0, 1/(2T)] ∪ [0, n/T] = [0, n/T]

and the bandwidth is Bg = n/T, so g has bandwidth greater than 1/(2T), as shown below.

[Plots of G(f) for A = −1, τ = T/2 and for A = 1/2, τ = T.]
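The requirement τ = T/n is easy to check numerically on the samples g(mT) = A sinc²(mT/τ), m ≠ 0; a minimal sketch with T = 1 and A = 1/2 assumed:

```python
import numpy as np

T, A = 1.0, 0.5
m = np.arange(1, 8)
for tau in (T/2, T/3, 0.7*T):
    # samples vanish only when T/tau is an integer (tau a submultiple of T)
    print(f"tau = {tau:.3f} T :", np.round(A * np.sinc(m*T/tau)**2, 6))
```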
Solution to Problem 1.25
a) Since f0 < B, the bandwidth of y1 is By1 = f0 + B = 120 Hz.
b) Since A + x(t) has the same bandwidth as x(t), we have By2 = By1 = 120 Hz.
c) Let y4(t) = x(t) cos(2πf1 t), y5(t) = sinc(t/T) e^{jt/T} cos(2πf1 t). Then it is

By4 = [−B − f1, B + f1] = [−130, 130] Hz
By5 = [1/(2T) − f1, 3/(2T) + f1] = [20, 180] Hz

Hence y3 = y4 ∗ y5 has full band By3 = [20, 130] Hz.
Solution to Problem 1.26 Since the phase of the filter is zero, the signal is not distorted if the filter
frequency response has the same amplitude at f3 and f4 , hence
0 ≤ f3, f4 ≤ f1     or     f1 ≤ f3, f4 ≤ f2 .
Solution to Problem 1.27
a) From the figure, it is

H(f) = Vo(f)/Vi(f) = j2πfL/( R + j2πfL ) ,

so that

|H(f)| = |2πfL|/√( R² + (2πfL)² ) ,     arg H(f) = π/2 − arctan( 2πfL/R ) .

b) For f0 = 30 kHz, the output signal is

vo(t) = 6 |H(f0)| sin( 2πf0 t + π/3 + arg H(f0) ) = A vi(t − t0)

with A = |H(f0)| and t0 = −arg H(f0)/(2πf0), and there is no distortion according to Heaviside's conditions. We have

A = |H(f0)| = 60π/√( 10⁴ + (60π)² ) = 0.88
t0 = −(1/(2πf0)) arg[H(f0)] = −(1/(2πf0)) [ π/2 − arctan( 2π·30/100 ) ] = −0.16π/(2πf0) = −2.66 µs .

c) In this case, it is

vo(t) = 6 |H(f0)| sin( 2πf0 t + π/3 + arg H(f0) ) + 42 |H(f1)| cos( 2πf1 t + π/4 + arg H(f1) )

and

vo(t) ≠ A vi(t − t0) .

Hence there is distortion.
d) The band for which the Heaviside conditions are satisfied is represented by all the intervals composed of a single frequency, i.e. only pure sinusoidal signals are not distorted.
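The amplitude and delay of part b) can be reproduced numerically. The sketch below assumes R = 100 Ω and 2πf0L = 60π (i.e. f0 = 30 kHz with L = 1 mH), which are the values the computation above appears to use; note that the −2.66 µs quoted in the text follows from rounding arg H(f0) to 0.16π.

```python
import numpy as np

R = 100.0
w0L = 60 * np.pi                       # 2 pi f0 L
H = 1j * w0L / (R + 1j * w0L)
argH = np.angle(H)
print(f"A = |H(f0)| = {abs(H):.2f}")                 # ~0.88
print(f"arg H(f0)   = {argH/np.pi:.3f} pi")          # ~0.16 pi
f0 = 30e3
print(f"t0 = {-argH/(2*np.pi*f0)*1e6:.2f} us")       # ~ -2.6 us
```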
Solution to Problem 1.28 By the frequency shift and scaling rules we find the Ftf
X (f ) = T triang [T (f − 1/T )]
and the full band is therefore B x = (0, 2/T ), as shown below.
X (f ) 6
T
0
1
T
2
T
The energy is found by the Parseval theorem
Ex =
Z
2/T
2
0
2
T triang (T f − 1) df = T
2
Z
Solution to Problem 1.29
a)
f − F/4
A
X (f ) =
rect
,
F
F
1/T
2
(T f ) df + T
0
2
Z
2/T
1/T
Bx = [−F/4, 3F/4] ,
(2 − T f )2 df =
Bx = F
2
T .
3
524
SOLUTIONS TO PROBLEMS
b) Let y1 (t) = sinc(t/T1 ), with Y1 (f ) = T1 rect(T1 f ) and By1 =
Y(f ) = A Y1 ∗ · · · ∗ Y1 ,
|
k
{z
terms
1
2T1
By = kBy1 =
}
it is
k
2T1
c) Let s1 (nT ) = e−n 10 (n), with S1 (f ) = T /(1 − e−1−j2πf T ), it is
s(nT ) = A [s1 (nT ) + s1 (−nT ) − δn ]
and
S(f ) = [S1 (f ) + S1 (−f ) − 1] = A
so that S(f ) > 0. Hence by (1.61)
h
Bs = −
d)
Z(f ) =
1 1
,
2T 2T
i
,
e2
e2 − 1
+ 1 − 2e cos 2πf T
Bs =
1
.
2T
jA
[δ(f − 3F ) − 3δ(f − F ) + 3δ(f + F ) − δ(f + 3F )] .
8
Bz = {F, 3F } .
Although strictly speaking the measure of the set Bz is null, it is customary to consider its bandwidth
as its supremum, that is Bz = 3F .
Solution to Problem 1.30
a) From Tab. 1.3 and the duality and frequency shift rules (see Tab. 1.2)
X (f ) = T e−T (f +1/T ) 1(f + 1/T )
Hence
Bx = [−1/T, +∞)
and
Bx = ∞ .
b) From the complex conjugate and convolution rules (see Tab. 1.2)
Y(f ) = X (f )X ∗ (−f )
Y(f ) is nonzero for f such that both X (f ) and X (−f ) are nonzero, hence
By = B x ∩ (−B x ) = [−1/T, 1/T ] ,
B y = 2/T .
c) Observe that z(t) has a limited support. Hence, by Theorem 1.7 its band is nonlimited. Here
Bz = [0, +∞) and Bz = ∞.
Solution to Problem 1.31
a) We find from Tab. 1.3
X (f ) =
T
e−j2πf t0 ,
1 + j2πf T
|X (f )| = p
T
1 + (2πf T )2
SOLUTIONS TO PROBLEMS OF CHAPTER 1
525
|X (f )|
T 6
0
− T1
1
T
f
Hence the maximum amplitude is max |X (f )| = |X (0)| = T and by solving the inequality
1
p
|X (f )| ≤ ε|X (0)|
1 + (2πf T )2
≤ε
1 p
1 − ε2
ε2πT
|f | ≥
we get
B=
√
1 − ε2
ε2πT
For the given parameter values it results
B'
b) Since
Ex =
Z
1
' 15.92 kHz
2πεT
+∞
−∞
|x(t)|2 dt =
Z
+∞
e2(t0 −t)/T dt =
t0
T
2
from the integral in (1.64) and choosing f1 = 0
Z
f2
0
|Ex (f )|2 df =
Z
f2
0
T2
T2
df =
[arctan(2πf T )]f02
2
1 + (2πf T )
2πT
T
=
arctan(2πf2 T )
2π
we get the inequality
T
T
arctan(2πf2 T ) ≥ (1 − ε)
π
2
1
f2 ≥
tan[π/2(1 − ε)]
2πT
so that
B=
tan[π/2(1 − ε)]
2πT
and for the given values it results
B'
1
' 10.13 kHz
π 2 εT
Observe that the bandwidths do not depend on t0 , since a time shift does not alter the Ftf amplitude.
526
SOLUTIONS TO PROBLEMS
Solution to Problem 1.32 We proceed as in Problem 1.31. Here
X (f ) =
πA −2πT |f |
e
T
a)
B=−
ln ε
' 366 Hz
2πT
B=−
ln ε
' 366 Hz
4πT
b)
c) We can write
Y(f ) =
=
with cosh(α) =
1
2
πA −2πT |f −f0 |
e
+ e−2πT |f +f0 |
2T
( πA −2πT f
0
e
cosh(2πT |f |) , 0 ≥ |f | < f0
T
eα + e
πA −2πT |f |
e
T
−α
cosh(2πT f0 ) , |f | ≥ f0
the hyperbolic cosine function.
πA
2T
Y(f )
6
−f0
f0
f
As can be seen in the plot above, y will result either baseband or passband, depending on the
chosen value for ε. For ε ≤ Y(0)/Ymax = 1/ cosh(2πT f0 ) the signal is baseband with
B = f 2 = f0 −
ln ε
.
2πT
For ε > 1/ cosh(2πT f0 ) the signal is passband with f2 as given above and
f1 =
where cosh−1 α = ln α +
bandwidth is
cosh−1 [ε cosh(2πT f0 )]
,
2πT
√
α2 − 1 is the inverse hyperbolic cosine function, with α ≥ 1. The
ln ε + cosh−1 [ε cosh(2πT f0 )]
2πT
In particular, if ε e−2πf0 T , f1 is accurately approximated by
B = f0 −
f1 ' f 0 +
ln ε
2πT
,
B=−
ln ε
,
πT
with a bandwidth that is doubled with respect to the baseband case. For the assigned values of
f0 , T and ε, it is e−2πf0 T = e−20π ' 10−27 ε, so by using the above approximation we get
B ' 733 Hz.
SOLUTIONS TO PROBLEMS OF CHAPTER 1
527
Solution to Problem 1.33 Let hi (t) = Fi sinc [Fi (t − ti )], for i = 1, 2. Then
Hi (f ) = rect(f /Fi ) e−j2πf ti
and by the product rule, since h(t) =
1
h (t)h2 (t)
F1 1
H(f ) =
we get
1
[H1 ∗ H2 ] (f )
F1
a) In particular for t1 = t2 it is easy to calculate the above convolution which results
 −j2πf t
1

 e
|f | −j2πf t1
1
H(f ) =
− −
e

2
F1

0
, |f | ≤ 21 F1
,
1
F
2 1
< |f | < 23 F1
, |f | ≥
.
3
F
2 1
Hence the filter satisfies the Heaviside conditions in the band [−2.5 kHz, 2.5 kHz] with amplitude
A = 1 and delay t1 = 2 ms.
b) In the more general case the convolution is more cumbersome. However to prove the statement
1 F2 −F1
we do not need to calculate the convolution for all f ∈ R, but only for f ∈ − F2 −F
.
, 2
2
We get
H(f ) =
1
F1
Z
+∞
−∞
H2 (f − u)H1 (u) du
1 −j2πf t2
e
=
F1
=
1 −j2πf t2
e
F1
Z
Z
F1 /2
rect
−F1 /2
F1 /2
f − u j2πu(t2 −t1 )
e
du
F2
ej2πu(t2 −t1 ) du
−F1 /2
= sinc [F1 (t2 − t1 )] e−j2πf t2
1
where we used the fact that, for − F2 −F
<f <
2
1
1
− F2 < f − u < F2 ,
2
2
F2 −F1
2
rect
and − 12 F1 < u < 12 F1 , we have
f −u
F2
=1.
Hence the statement is proved, and the values of amplitude and delay are A = sinc [F 1 (t2 − t1 )]
and t2 , respectively.
Solution to Problem 1.34 In the frequency domain the input-output relationship of the given system
can be written as
Y(f ) = 2X (f ) + j2πf τ X (f ) ,
so that the transformation is a filter with frequency response
G(f ) = 2 + j2πf τ .
528
SOLUTIONS TO PROBLEMS
The required equivalent frequency response for the cascade of the two filters is G 1 (f ) = 21 e−j2πf t0 , so
the frequency response of h must be
H(f ) =
G1 (f )
1 e−j2πf t0
=
G(f )
4 1 + jπf τ
which yields the impulse response
h(t) =
1 −2(t−t0 )/τ
e4 −2t/τ
e
1(t − t0 ) =
e
1(t − 2τ )
2τ
2τ
that is causal.
Solution to Problem 1.35 We start by specifying the filter in the frequency domain. By requirement
i) we must have
H(f ) = Ae−j2πf t0 , |f | < B1 ,
with A > 0 so that Bh ≥ B1 . Then to meet also requirement ii) we set Bh = B1 and
H(f ) = Ae−j2πf t0 rect
f
2B1
=
Ae−j2πf t0
0
, |f | < B1
, |f | ≥ B1
In the time domain
h(t) = 2B1 A sinc [2B1 (t − t0 )]
To meet the requirement iii) we first find the energy of h
Eh =
then use the inequality |h(t)| ≤
Z
A
π(t−t0 )
Z
B1
A2 df = 2B1 A2
−B1
and get
0
A2
|h(t)| dt ≤ 2
π
−∞
2
Z
0
−∞
A2
1
t − t0 )2 dt = 2
(
π t0
By solving the inequality
A2
≤ ε2B1 A2
π 2 t0
with respect to t0 , we obtain
1
= 5.07 µs .
ε2π 2 B1
Hence we guarantee that the requirement iii) is met, at the expense of a delay of 5.07 µs.
t0 ≥
Solution to Problem 1.36 By the time shift rule and the rect → sinc signal/transform pair we get
X (f ) = rect(2τ f ) cos(2πf τ ) ,
f ∈R.
SOLUTIONS TO PROBLEMS OF CHAPTER 1
529
X (f )
4τ
6
1
− 4τ
1
4τ
f
1
The real-valued signal x is therefore baseband with Bx = 4τ
.
a) By the sampling theorem the minimum sampling frequency allowed for perfect reconstruction is
1
. In this case we must choose the ideal interpolate filter with frequency response
Fs = 2τ
G(f ) = 2τ rect(2τ f )
and impulse response
t
.
2τ
b) In this case Fs < 1/(2τ ), the nonaliasing condition is not satisfied. By using ideal antialiasing and
interpolate filters with bandwidth Fs /2, the energy of the reconstruction error is given by (1.84)
X̃ (f ) − X (f )
6
1
1
1
1
− 4τ
− 6τ
6τ 4τ
g(t) = sinc
-
f
Ee = 2
=
Z
1/(4τ )
16τ 2 cos2 (2πf τ ) df = 16τ 2
1/(6τ )
4
2
τ − τ ' 76 · 10−3 [V2 s] .
3
π
Z
1/(4τ )
1 + cos(4πf τ ) df
1/(6τ )
Solution to Problem 1.37 Since the signal xi is bandlimited with
Xi (f ) =
f
1
triang
F1
F1
Bx = F 1
perfect reconstruction is achieved if Fs ≥ 2F1 , while for Fs < 2F1 we have an out of band error. Since
we seek to minimize Fs and allow an error. By (1.84)
Ee = 2
=
Z
+∞
Fs /2
1
triang2
F12
2
F1
−
3
F12
1−
f
F1
f
F1
3 F 1
2
F12
Z
2
3F1
df =
=
Fs /2
F1
1−
1−
f
F1
Fs /2
1−
Fs
2F1
f
F1
3
2
df
The energy of xi is
E xi = 2
Z
+∞
0
1
triang2
F12
f
F1
df =
2
F12
Z
0
F1
2
df =
2
3F1
530
SOLUTIONS TO PROBLEMS
By imposing the condition Ee ≤ Exi /100 we have
1−
Fs
2F1
3
≤
1
,
100
⇒
1
Fs ≥ 2 1 − √
3
100
Solution to Problem 1.38
a) Since
X (f ) =
A −|f |/B
e
,
B
Y(f ) =
F1 ' 1.57 F1 .
f
1
triang
2B
2B
it is
A −|f |/B
f
e
triang
.
2B 2
2B
The bandwidth of z is Bz = 2B. Hence by the nonaliasing condition in the Sampling Theorem it
must be Fs ≥ 4B. We can choose the minimum sampling rate Fs = 4B provided the interpolate
filter is ideal over [−Bz , Bz ].
b) In this case, it must be
Fs /4 ≥ Bz , 3Fs /4 ≤ Fs − Bz
Z(f ) = X (f )Y(f ) =
and hence Fs ≥ 4Bz = 8B.
Solution to Problem 1.39
a) A bandlimited signal x, with Bx ≤ (1 − ρ)F1 /2, is not distorted by the filter h, since
Y(f ) = H(f )X (f ) =
1
X (f ) for f ∈ [−Bx , Bx ]
2
and y(t) = 12 x(t). By sampling with Ts = 0.1 ms, we observe that the signal y meets the
nonaliasing condition ii) of Theorem 1.8, since
By = B x <
Fs
= 5 kHz .
2
Hence we can perfectly recover y from its samples y(nTs ). The interpolate filter yields the signal
x̃ with Ftf
X̃ (f ) = G(f )Fs
+∞
X
k=−∞
Y(f − kFs )
where G(f ) = 0, for |f | > (1 + ρ)F1 /2 = 5625 Hz, and G(f ) = 1/2, for |f | ≤ 3375 Hz. By
considering the band of {yn }, we get
X̃ (f ) =
1
1
Y(f ) and x̃(t) =
y(t) .
2Ts
2Ts
The overall input-output relation is
x̃(t) =
1
x(t)
4Ts
and Heaviside conditions are met with attenuation A = 41 Fs and delay t0 = 0.
SOLUTIONS TO PROBLEMS OF CHAPTER 1
531
b) We show it by a counterexample. Let the input signal x have bandwidth B x such that Fs − Bg <
Bx ≤ Fs /2, for example
X (f ) = rect(f Ts ) .
Then the output y of the filter h will have the same bandwidth of x, i.e. B y = Bx , as shown below.
X (f )
Y(f )
6
1
6
1/2
−5
−5
5 f [kHz]
5 f [kHz]
The sampled version of y will therefore have frequency components in the interval [F s /2, Bg ] that
are not rejected by the interpolate filter g. Hence the Ftf of the reconstructed signal x̃ will be
X̃ (f ) =
Fs G(f )Y(f )
Fs G(f )Y(f ± Fs )
, f ≤ Fs /2
, f > Fs /2
with a bandwidth Bx̃ > Bx , as shown below. Therefore x̃ cannot be thought as obtained from x
through a filter.
P
`
Y(f − `Fs )
X̃ (f )
6
1
F
4 s
1/2
6
−5
-
5 f [kHz]
−5.6
5.6 f [kHz]
c) With Bx = 4.5 kHz, the periodic Ftf of {yn } vanishes over the intervals [`Fs +Bx , (`+1)Fs −Bx ],
with ` ∈ Z. Thus, if G(f ) = 0, for f ≥ Fs − Bx = 5.5 kHz, the overall input-output relationship
is
X̃ (f ) = X (f )H(f )G(f ) .
For the Heaviside conditions to be met, it must be
Fs H(f )G(f ) =
A0 e−j2πf t0
arbitrary
, |f | ≤ F1 /2
, |f | > F1 /2
Therefore the solution with minimum bandwidth is
G(f ) =
(
A0 Ts e−j2πf t0
H(f )
0
, |f | ≤ F1 /2
, |f | > F1 /2
whereas the solution with maximum bandwidth is
G(f ) =

A T e−j2πf t0

 0 s
H(f )
arbitrary


0
Both solutions are shown in the plot below
, |f | ≤ F1 /2
, F1 /2 < |f | < Fs − F1 /2
, |f | ≥ Fs − F1 /2
532
SOLUTIONS TO PROBLEMS
|G(f )|
|G(f )|
6
2A0 Ts
6
2A0 Ts
−4.5
4.5 −5.5
5.5 f [kHz]
maximum bandwidth
−4.5
4.5 f [kHz]
minimum bandwidth
Solution to Problem 1.40 We follow the procedure outlined on page 41. Since g has the minimum
1
bandwidth Bg = 2T
, we must choose the only possible equivalent filter g1 that satisifes (1.91) with
1
, that is G1 (f ) = rect(T f ).
Bg1 = 2T
The corresponding filter h has frequency response, according to (1.92)
H(f ) =
2
e−π(T f )
arbitrary
, |f | <
, |f | ≥
1
2T
1
2T
2
A particular choice of H that satisfies the above requirements is H(f ) = e −π(T f ) , which corresponds
2
to the impulse response h(t) = T1 e−π(t/T ) .
Solution to Problem 1.41
a) x(h) (t) = − cos(2πf0 t + π/4);
b) x(h) (t) = sin(2πBt + π/8) sin(2πf0 t);
c) x(h) (t) = sinc(Bt) sin(πBt).
Solution to Problem 1.42
a)
∆ϕ(t) = 0.2 cos 2πfa t .
b)
Ps =
8
42 1
=
= 80 mW
2 R
100
=⇒
(Ps )dBm = 19.03 dBm .
Solution to Problem 1.43
a)
∆ϕ(t) = 0.2 cos 2πf1 t + 0.3 cos 2πf2 t + 0.3 cos 2πf3 t .
b)
1
(0.2)2 + (0.3)2 + (0.4)2 = 2.9 · 10−2 rad2 .
2
c) The condition on the carrier frequency assures that the analytic signal associated to s(t) is
M∆ϕ =
s(a) (t) = 4 exp {j(2πf0 t + 0.2 cos 2πf1 t + 0.3 cos 2πf2 t + 0.3 cos 2πf3 t)} ,
The envelope of s results
es (t) = s(a) (t) = 4 .
Solution to Problem 1.44 We can write
x(t) = < Aej2π[f0 +λ(t)]t
SOLUTIONS TO PROBLEMS OF CHAPTER 1
533
with the signal Aej2π[f0 +λ(t)]t having only positive frequency components. Hence, by Proposition 1.12
x(a) (t) = Aej2π[f0 +λ(t)]t
φx (t) = 2π [f0 + λ(t)] t
fx (t) = f0 + λ(t) + tλ0 (t)
∆fx (t) = λ(t) + tλ0 (t)
Observe that the frequency deviation of x is not λ(t) as one could assume at a first glance.
Solution to Problem 1.45
a) x(bb) = e−jπ/4 + 2 ejπ/8 ;
b) x(bb) = ejπ/8 sinc2 (Bt);
c) x(bb) = e−jπ/4 rect(Bt).
Solution to Problem 1.46
a) In the frequency domain
A
e−|f −f0 |/B + e−|f +f0 |/B
2B
A
(a)
X (f ) = e−f0 /B e−f /B 1(f )
B
f − f0 /2
A −f0 /B f /B
A
e
rect
+ e
+ e−(f −f0 )/B 1(f − f0 )
B
f0
B
X (f ) =
By inverse Ftf we get
x
(a)
Ae−f0 /B
A
(t) =
+ e−f0 /B
1 − j2πBt
B
Z
f0
e(j2πt+1/B)f df +
0
Aej2πf0 t
1 − j2πBt
A(ej2πf0 t − e−f0 /B )
Ae−f0 /B
Aej2πf0 t
=
+
+
1 − j2πBt
1 + j2πBt
1 − j2πBt
and
A(1 − e−f0 (j2πt+1/B) )
Ae−f0 (j2πt+1/B)
A
+
+
1 − j2πBt
1 + j2πBt
1 − j2πBt
2A
=
1 + j2πBte−f0 (j2πt+1/B) .
2
1 + (2πBt)
x(bb) (t) =
b) We make use of the Parseval Theorem. By writing
X (bb) (f ) =
so that with A(f ) =
A −|f |/B
e
B
A −|f |/B
e
+ e−|f +2f0 |/B 1(f + f0 )
B
we get
A(f ) − X (bb) (f ) =

 A ef /B
B
 A e−2f0 /B e−f /B
B
, f < −f0
, f > −f0
534
SOLUTIONS TO PROBLEMS
The frequency domain integration yields
Ee =
Z
+∞
−∞
2
A(f ) − X (bb) (f )2 df = 2A e−f0 /B
B
With Ea = 2A2 /B, the energy ratio results
Ee
= e−f0 /B ,
Ea
so that for f0 B the error is negligible.
Solution to Problem 1.47
X (f ) =
1 −(f −f0 )T1
e
1(f − f0 ) + e(f +f0 )T1 1(−f − f0 )
2T1
X (f )
1
6
2t
−f0
j2πf0 t
f0
f
e
has only positive frequency components it is the analytical signal x (a)
Since the signal 1−j2πt/T
1
associated with x, by Proposition 1.12.
a) The Hilbert transform of x is
x
(h)
ej2πf0 t
(t) = =
1 − j2πt/T1
b) The envelope of x is
ej2πf0 t 1
= p
ex (t) = 1 − j2πt/T1 1 + 4π 2 t2 /T12
and the instantaneous frequency of x is fx (t) = f0
c) If we choose as reference frequency f0 we would get x(bb) with full band B = [0, +∞), which
is not baseband. It is therefore more convenient to assume a practical bandwidth B for x (a) (e.g.
B = 5/T1 ) and choose as reference frequency f1 = f0 + B/2. In this case we get
x(bb) (t) =
e−jπBt
.
1 − j2πt/T1
Solution to Problem 1.48
P [y > 0|x = −1] = P [x + w > 0|x = −1]
= P [−1 + w > 0|x = −1] = P [w > 1] = Q
1
σw
.
Solution to Problem 1.49 The probability of being in the upper right sub-plane having lower left corner
(1, 1) is
P [x1 > 1, x2 > 1] .
SOLUTIONS TO PROBLEMS OF CHAPTER 1
535
Since x1 and x2 are independent we obtain
P [x1 > 1, x2 > 1] = P [x1 > 1] P [x2 > 1] = Q
1
σ1
Q
1
σ2
.
For the computation of the probability of being in the complementary sub-plane we resort to the total
probability theorem and obtain
P [x1 < 1 or x2 < 1] = 1 − P [x1 > 1, x2 > 1] .
Solution to Problem 1.50
a) PMD functions must obey to both the condition 0 < p(a) ≤ 1 and the normalization (1.150),
therefore proper values of K and M for each of the given functions are
p1 :
p2 :
p3 :
log(1 − K)
log K
there are no suitable values for K and M
1
K2 + K
< K < 1, M = −
2
3
0 < K < 1, M =
b) PDF functions must obey to both the nonnegative condition (1.155) and the normalization (1.156),
therefore suitable values of K and M for each of the given functions are
p1 :
p2 :
p3 :
K=M >0
K, M ≥ 0, K + M = 1
M = 0, K = −π .
Solution to Problem 1.51
a) Observe that
2
1
px (a) = p √ e−(a−1) ,
π 2
Hence by the linearity of expectation
g(a) = −2(a − 1) .
E [g(x)] = E [−2(x − 1)] = −2 (E [x] − 1) = −2(mx − 1) = 0 .
b) In general, with the given hypotheses on px (a), we find that
g(a) =
1
p0x (a) .
px (a)
Thus by applying Theorem 1.16
E [g(x)] =
Z
+∞
−∞
p0x (a) da = [px (a)]+∞
−∞ = lim px (a) − lim px (a) .
a→+∞
a→−∞
Since px (a) must have a finite integral, it also must vanish for a → ±∞, so the expectation is
null.
536
SOLUTIONS TO PROBLEMS
Solution to Problem 1.52
a) By differentiating px (2) with respect to Λ
px (2) = e−Λ
we find that
Λ2
,
2
∂px (2)
Λ
= e−Λ (2 − Λ)
∂Λ
2

∂px (2)


>0
, 0<Λ<2
∂Λ

 ∂px (2) < 0
∂Λ
hence its maximum is attained at Λ = 2.
b) We have
, Λ>2
P [x > 2] = 1 − [px (0) + px (1) + px (2)] = 1 − e
that is a continuous function of Λ. Since
∂ P [x > 2]
Λ2
= e−Λ
>0 ,
∂Λ
2
−Λ
1+Λ+
for all Λ > 0
we prove the statement
c) By theorem 1.16
x
−Λ
E [e ] = e
∞
X
Λn en
n!
n=0
and making use of the exponential series
P∞
n=0
αn /n! = eα , we get
x
Λ(e−1)
.
E [e ] = e
Solution to Problem 1.53 The PDF of x is
2
1
−1 a
px (a) = √
e 2 σ2
2πσ
a) By writing
P [y ≤ b] = P |x| ≤
=
=
we get by differentiation
py (b) =
(
Z
√
√
1−b
1−b
√
− 1−b
0
px (a) da
1 − 2Q
√
b−1
σ
, b≤1
, b>1
0
, b<1
1
− 1 b−1
√
e 2 σ2
√
2πσ b − 1
, b>1
1 2
Λ
2
SOLUTIONS TO PROBLEMS OF CHAPTER 1
537
while py (1) is unspecified.
b) By the linearity of expectation
2
3
2
2
E x (x + 1 + sin x) = E x + E x + E x sin x
and applying Theorem1.16 to each term
2
E x (x + 1 + sin x) =
Z
+∞
−∞
+
Z
2
1
−1 a
a3 √
e 2 σ2 da +
2πσ
+∞
Z
+∞
−∞
2
1
−1 a
a2 √
e 2 σ2 da
2πσ
2
1
−1 a
a sin a √
e 2 σ2 da
2πσ
2
−∞
All integrals above are finite. The first and the third are
function is odd,
null since the integrate
whereas the second integral yields Mx = σ 2 . Hence E x2 (x + 1 + sin x) = σ 2 .
Solution to Problem 1.54 Let R be the maximum cell radius.
a) The PDF of x is
k , a ∈ CR
px (a) =
0 , a 6∈ CR
with CR the circle centered at the origin with radius R. By the normalization condition
Z
k da = 1
CR
it must be k = 1/(πR2 ).
b) To find the PDF of r we first find
P [r ≤ b] = P [x ∈ Cb ] =
Z
k da =
Cb
then by taking its derivative we get
pr (b) =
(
2b
R2
0

 0
b2 /R2
 1
, b≤0
, 0<b<R
, b≥R
, 0<b<R
, elsewhere.
c) Since τ = r/c, from P [τ ≤ a] = P [r ≤ ca], we find the PDF
pτ (a) = c pr (ca) =
(
2a
2
τmax
0
, 0 < a < τmax
, elsewhere.
with a maximum propagation delay τmax = R/c ' 117 µs. Hence we get
mτ =
Mτ =
2
2
τmax
Z
2
2
τmax
τmax
0
Z
τmax
a2 da =
0
a3 da =
1 2
τmax ,
2
2
τmax
3
στ2 =
1 2
τmax
18
538
SOLUTIONS TO PROBLEMS
(2τmax /3)2
5
=
2
9
τmax
In GSM the guard period between transmission of different terminals is set higher than the maximum delay Tg ≥ τmax . Had it be set equal to the average delay mτ one would save 1/3 (33%) of
the guard period, however this would introduce interference among different transmissions with
probability 5/9 (56%).
P [τ > mτ ] = 1 − P [r ≤ cmτ ] = 1 −
Solution to Problem 1.55 The alphabet of x is shown below
a2
6
1
a1
0 1
a) From the bounds on px (a1 , a2 ) we get K > 0, and by (1.150)
K
∞ ∞
X
X
1 a1 +2a2
2
a1 =0 a2 =a1
=1
The double sum yields
∞
∞
X
X
a1 =0 a2 =a1
1
=
2a1 +2a2
∞
X
1
a1 =0
=
2 a1
a2 =a1
∞
4 X 1
3
a1 =0
∞
X
8 a1
∞
X
1 1/4a1
1
=
a
2
4
2a1 1 − 1/4
a1 =0
1
4
32
=
3 1 − 1/8
21
=
and hence K = 21/32.
b) We first find px1 (a1 ) by the marginal rule
px1 (a1 ) =
∞
X
px (a1 , a2 ) = K
a2 =a1
∞ a1 X
1
1 a2
2
a2 =a1
4
=K
4
3
a 1
1
8
=
7
8
a 1
1
8
then by (1.174)
px2 |x1 (a2 |a1 ) =
21/32(1/2)a1 +2a2
3
=
7/8(1/8)a1
4
a2 −a1
1
4
,
0 ≤ a 1 ≤ a2 .
−1
Solution to Problem 1.56
a) The covariance matrix is given by
k x = r x − mx mH
x =
5
−1
−1
2
−
1
−1
1
=
4
0
0
1
Since kx1 x2 = 0, x1 and x2 are uncorrelated. By Proposition 1.20 they are also statistically
independent. We write the set C as
C = (a1 , a2 ) ∈ R2 : 0 ≤ a1 ≤ 2, a2 ≥ −3/2
SOLUTIONS TO PROBLEMS OF CHAPTER 1
539
and by the statistical independence, as in Example 1.6 E
P [x ∈ C] = P [0 ≤ x1 ≤ 2] P [x2 ≥ −3/2] .
Moreover, from
σx21 = 4 ,
m x1 = 1 ,
we can write
P [x1 ≤ a1 ] = 1 − Q
P [x2 ≤ a2 ] = 1 − Q
h
mx2 = −1 ,
a1 − m x 1
σx 1
a2 − m x 2
σx 2
P [x ∈ C] = 1 − 2 Q
i
1
2
σx22 = 1
=1−Q
a1 − 1
2
= 1 − Q(a2 + 1)
Q −
1
2
' 0.265 .
Solution to Problem 1.57 Since kx is diagonal, x1 and x2 are statistically independent, as seen in the
proof of Proposition 1.21. As regards z, being a linear transformation of z as in Proposition 1.20 with
A = [1, 1] and b = 0, it is itself Gaussian with
m z = m x1 + m x2 = 1 ,
Hence we write
σz2 = σx21 + σx22 = 4
(a−1)2
1
pz (a) = √ e− 8
.
2 2π
Solution to Problem 1.58 The support of px (a1 , a2 ) shown below can be written as
(a1 , a2 ) ∈ R2 : −1 ≤ a2 ≤ 1, |a2 | − 1 ≤ a1 ≤ 1 − |a2 |
a2
16
-
1 a1
−1
−1
a) For x1 and x2 to be statistically independent, it must be px (a1 , a2 ) = px1 (a1 )px2 (a2 ). If this
were the case, the support of px (a1 , a2 ) would be the Cartesian product of the supports of px1 (a1 )
and px2 (a2 ), which is not. Hence x1 , x2 are statistically dependent.
We calculate px2 via the marginal rule
px2 (a2 ) =
Z


1−|a2 |
|a2 |−1
eλ(a1 +a2 )+c da1 , |a2 | ≤ 1
, |a2 | > 1
2 λa2 +c
a2
= e
sinh [λ(1 − |a2 |)] rect
λ
2
0
540
SOLUTIONS TO PROBLEMS
with sinh α = 12 eα − e−α the hyperbolic sine function. Then the conditional PDF is straightfrowardly obtained by (1.174)
px1 |x2 (a1 |a2 ) =
(
λeλa1
2 sinh [λ(1 − |a2 |)]
0
, |a1 | + |a2 | ≤ 1
, |a1 | + |a2 | > 1
b) By applying the normalization condition (1.156) to x2 we get
e
c
Z
0
e
2a2 +1
−1
e
c
and hence
−e
−1
da2 + e
h 1
1
e−
2
e
c
Z
1
0
e − e2a2 −1 da2 = 1
1
1
1
− +e−
e−
e
2
e
i
=1
1
e
c) The answer is no, and we prove it by contradiction. If such a transformation existed, its inverse
A−1 would be itself linear and we would have x = A−1 y. Then, if y were a Gaussian rve, so
would be x by Proposition 1.20, which is contradicted by the expression of its PDF p x .
c = − ln e −
Solution to Problem 1.59 A few realizations of x are shown below, labeled with the corresponding
value of A
x(t) [V]
16
A = 0.58 V
A = 0.83 V
A = 0.25 V
1
2
3
4
5
t [s]
a) The statistical power is obtained as
Mx (t) = E x2 (t) = E A2 e−2t/T1 = MA e−2t/T1 =
while
h
−2T1 /T1
>
P [x(2T1 ) > 0.1 V] = P Ae
e2 Amax
=P A>
10
1
Amax
10
=1−
1 2
Amax e−2t/T1
3
i
e2
' 0.26
10
b) For t < 0, the rv x(t) is almost surely the constant 0.
For t ≥ 0 we first find the expression of P [x(t) ≤ a]
P [x(t) ≤ a] = P A ≤ ae
t/T1
=

 0
aet/T1

1
, a≤0
, 0 < a < Amax e−t/T1
, a ≥ Amax e−t/T1
541
SOLUTIONS TO PROBLEMS OF CHAPTER 1
Hence the rv x(t) for t ≥ 0 is uniform in the interval 0, Amax e−t/T1 . In general we get
px (a; t) =

 δ(a)
et/T1
rect
 A
max
6
6
-
aet/T1
1
−
Amax
2
, t<0
, t≥0
px (a; 21 T1 ) [V−1 ]
px (a; −T1 ) px (a; 0) [V−1 ]
4
px (a; T1 ) [V−1 ]
6
2.72 6
1.65
1
-
a
1
0.61 a [V]
a [V]
0.37
a [V]
Solution to Problem 1.60 The second-order description of A and B is
mA = mB = 1/2 ,
MA = MB = 1/3 ,
rAB = ma mB = 1/4
Then we proceed similarly to Example 1.7 D by writing the time-varying mean and autocorrelation of
x(t)
1 + cos 2πf0 t
mx (t) = mA + mB cos 2πf0 t =
2
rx (t, τ ) = MA + rAB [cos 2πf0 t + cos 2πf0 (t − τ )] + MB cos 2πf0 t cos 2πf0 (t − τ )
1
1
1
1
1
= + cos 2πf0 t + cos 2πf0 (t + τ ) + cos 2πf0 τ + cos 2πf0 (2t + τ )
3
4
4
6
6
From the above expressions we see that mean and correlation are both periodic in t with period T c =
1/f0 , hence x(t) is cyclostationary with period Tc . The average second order description is then
obtained as
Z Tc
Z Tc
1
1
1
mx =
1 dt +
cos 2πf0 t dt =
2Tc 0
2Tc 0
2
1
rx (τ ) =
3Tc
Z
Tc
0
Z
1
1 dt +
4Tc
T
Z
Tc
0
1
cos 2πf0 t dt +
4Tc
c
1
1
cos 2πf0 τ dt +
6Tc 0
6Tc
1
1
= + cos 2πf0 τ
3
6
+
Z
Tc
Z
Tc
cos 2πf0 (t + τ ) dt
0
cos 2πf0 (2t + τ ) dt
0
By taking the Ftf of the average rx we find the average PSD
Px (f ) =
1
1
1
δ(f ) +
δ(f − f0 ) +
δ(f + f0 ) .
3
12
12
Solution to Problem 1.61 We check whether each function satisfies the properties of PSDs (see page 75)
542
SOLUTIONS TO PROBLEMS
a) We can write, assuming T > 0,
P1 (f ) =
(
A + B(1 − |T f /2|)
B(1 − |T f /2|)
0
1
, |f | ≤ 2T
1
, 2T < |f | ≤
, |f | > T2
2
T
so P1 is a PSD if and only if B ≥ 0 and 34 B + A ≥ 0, that is A ≥ − 34 B.
In order for P2 to be nonnegative and yield a finite integral it must be B ≤ A < 0.
b) By taking the Ftf of r3 (mT ) we get
P3 (f ) = AT + 2BT cos 2πf T
and thus T (A − 2|B|) ≤ P3 (f ) ≤ T (A + 2|B|). Hence we require A ≥ 0 and |B| ≤ A/2.
By taking the Ftf of r4 (τ ) we get
P4 (f ) = AT rect(T f )e−j2πf BT
so that P4 (f ) is real-valued and non negative for A ≥ 0 and B = 0.
Solution to Problem 1.62 We first derive the second order description of x
mx = 0 ,
Mx = 1 .
Since x has iid rvs, from the results in Example 1.7 C we get
rx (mT ) =
1
0
, m=0
, m=
6 0
,
Px (f ) = T
a) Since y is a linear transformation of x, by the linearity of expectation we get
my (nT ) = E
x(nT ) + x(nT − T )
2
=
mx (nT ) + mx (nT − T )
=0
2
x(nT ) + x(nT − T ) x(nT − mT ) + x(nT − T − mT )
ry (mT ) = E
2
2
1
= [2rx (mT ) + rx (mT + T ) + rx (mT − T )]
4
(
1/2 , m = 0
= 1/4 , m = ±1
0
, |m| ≥ 2
and by taking the Ftf
1
T (1 + cos 2πf T )
2
As regards the first order statistical description we see that yn is a discrete rv with alphabet
Ay = {−1, 0, 1}. Its PMD can be found as
Py (f ) =
py (1; nT ) = P [xn = 1, xn−1 = 1] = px (1; nT )px (1; nT − T ) =
1
4
SOLUTIONS TO PROBLEMS OF CHAPTER 1
543
and analogously
1
1
, py (0; nT ) = .
4
2
The rvs of y are all identically distributed since their PMDs do not depend on n. However, since
ry (T ) 6= m2y , the variables yn and yn−1 are not uncorrelated and hence they are not statistically
independent either.
b) The alphabet of z is Az = {−1, 1} and its first order PMD is given by
py (−1; nT ) =
pz (1; nT ) = P [xn = 1, xn−1 = 1] + P [xn = −1, xn−1 = −1] =
1
2
1
2
so that its rvs are identically distributed. We also observe that zn and zn+m are statistically
independent for m ≥ 2, since xn , xn−1 , xn+m , xn+m−1 are all statistically independent. Thus
we only need to check for statistical independence between zn and zn+1 . Since
pz (−1; nT ) = P [xn = 1, xn−1 = −1] + P [xn = −1, xn−1 = 1] =
pzn zn+1 (1, 1) = P [zn = 1, zn+1 = 1]
= P [xn = 1, xn−1 = 1, xn+1 = 1] + P [xn = −1, xn−1 = −1, xn+1 = −1] =
1
4
= pzn (1)pzn+1 (1)
and analogously
pzn zn+1 (−1, 1) = pzn (−1)pzn+1 (1)
pzn zn+1 (−1, −1) = pzn (−1)pzn+1 (−1)
pzn zn+1 (1, −1) = pzn (1)pzn+1 (−1)
the statistical independence between zn and zn+1 is proved.
Solution to Problem 1.63 By using the independence of a, b and λ we get
mx (t) = E [x(t)] = E [a cos 2πλt] + E [b sin 2πλt] = ma E [cos 2πλt] + mb E [sin 2πλt] = 0
since ma = mb = 0. Hence the process x is stationary in its mean. Analogously
rx (t, τ ) = E [x(t)x(t − τ )]
= E a2 cos 2πλt cos 2πλ(t − τ ) + E b2 sin 2πλt sin 2πλ(t − τ )
+ E [ab sin 2πλt cos 2πλ(t − τ )] + E [ab cos 2πλt sin 2πλ(t − τ )]
Ma
{E [cos 2πλτ ] + E [cos 2πλ(2t − τ )]}
=
2
Mb
+
{E [cos 2πλτ ] − E [cos 2πλ(2t − τ )]} + ma mb E [sin 2πλ(2t − τ )]
2
1
= E [cos 2πλτ ]
3
since Ma = Mb = 1/3. As rx (t, τ ) does not depend on t, the process x is also stationary in its
autocorrelation and hence WSS.
544
SOLUTIONS TO PROBLEMS
We evaluate the statistical power of x as
Mx = rx (0) =
1
1
E [cos 0] =
3
3
For the PSD we first evaluate rx (τ ) through Theorem 1.16:
rx (τ ) =
=
Z
1
1
E [cos 2πλτ ] =
3
3
1
6
sin(2πuτ )
2πτ
1
+∞
cos(2πuτ )fλ (u) du =
−∞
1
3
Z
1
−1
1
cos(2πuτ ) du
2
1
sinc(2τ )
3
=
−1
then by taking the Ftf
Px (f ) =
1
rect(f /2) .
6
Solution to Problem 1.64
a)
Px (f ) = δ(f ) + T sinc2 (T f )
b) The covariance between the rvs x(0), x(T ) is
k = rx (T ) − m2x = 0
so they are uncorrelated, and by Proposition 1.21 statistically independent.
c) Consider the rv y = x(0)+x(T ). Being y a linear transformation of the Gaussian rve [x(0), x(T )],
it is itself Gaussian with
my = 2mx = 2 ,
My = 2Mx + 2rx (T ) = 6 ,
σy2 = 2
and hence
P [x(0) + x(T ) < 1] = P [y < 1] = 1 − Q
1 − my
σy
=Q
1
√
2
' 0.24 .
Solution to Problem 1.65 From Example 1.7 A we know that
rx (t, τ ) = ra (τ )c(t)c∗ (t − τ )
and by expanding c(t) into its Fourier series we can write
rx (t, τ ) = ra (τ )
+∞
X
`=−∞
= ra (τ )
+∞
X
C` e
+∞
X
j2π`Fc t
`=−∞ m=−∞
!
+∞
X
m=−∞
∗ −j2πmFc (t−τ )
Cm
e
∗ j2π(`−m)Fc t j2πmFc τ
C` Cm
e
e
!
SOLUTIONS TO PROBLEMS OF CHAPTER 1
545
Then its average autocorrelation is obtained as
rx (τ ) =
+∞
X
+∞
X
1
ra (τ )
Tc
`=−∞ m=−∞
and since
Z
Tc
∗ j2πmFc τ
C` Cm
e
ej2π(`−m)Fc t dt =
0
the cross terms in the double sum vanish and
rx (τ ) = ra (τ )
+∞
X
`=−∞
Tc
0
Z
Tc
ej2π(`−m)Fc t dt
0
, m=`
, m=
6 `
|C` |2 ej2π`Fc τ .
Then by the frequency shift rule of the Ftf we get the result for Px .
By integrating Px we also find
Mx =
+∞
X
`=−∞
|C` |2
Z
+∞
Pa (f − `Fc ) df =
−∞
+∞
X
`=−∞
|C` |2 Ma
and the result for Mx is finally obtained by Theorem 1.3.
Observe that this problem is a generalization of Example 1.7 D. By applying it to the given case we
find
(
1/8 , ` = ±3
3/8 , ` = ±1
C` =
0
, ` 6= ±3, ±1
and the average PSD is illustrated below.
Px (f )
4
4
−3Fc
6
9
64F1
4
−Fc
1
64F1
4
Fc
-
3Fc
f
Since
Mc =
X
`
|C` |2 = 5/16 ,
Ma =
Z
+∞
δ(f ) df +
−∞
1
F1
Z
+∞
triang
−∞
f
F1
df = 2
the resulting average statistical power is Mx = 85 .
Solution to Problem 1.66 The autocorrelation of x is periodic in t with period T p =
process is cyclostationary.
a) x(T ) is a Gaussian rv with zero mean and variance
1
σx2 (T1 ) = Mx (T1 ) = rx (T1 , 0) = |cos 2πf0 T1 | = √ .
2
1
.
2f0
Hence the
546
SOLUTIONS TO PROBLEMS
Then
√
4
2 ' 0.116 .
P [x(T1 ) > 1] = Q
To find the second probability, we introduce the rv y = x(T1 ) − x(2T1 ). Being y a linear
transformation of a Gaussian rve, it is itself Gaussian by Proposition 1.20, with zero mean. Then
P [x(T1 ) > x(2T1 )] = P [y > 0] = Q (0) =
1
.
2
b) The average autocorrelation is
rx (τ ) =
1 −f0 |τ |
e
Tp
Z
Tp /2
cos 2πf0 t dt =
−Tp /2
2 −f0 |τ |
e
π
and by the Ftf in Tab. 1.3 we get
Px (f ) =
4f0
.
πf02 + 4π 3 f 2
Solution to Problem 1.67 From its probability density function, the statistical power of v i (t) is,
52
' 8.3 V2 .
3
M vi =
The power can be obtained also by integrating the PSD and we have
B
q
52
K2
=
.
B
3
2
5
' 2.89. The output voltage is obtained by splitting the input voltage between the
Hence K =
3
resistance and the inductor. The filter frequency response is
H(f ) =
Hence, the PSD of vo (t) is
j4.71 · 10−3 f
j2πf L
=
.
R + j2πf L
100 + j4.71 · 10−3 f
Pvo (f ) = |H(f )|2 Pvi (f ) .
Solution to Problem 1.68
a) The PSD of the output is
Py (f ) = |H(f )|2 Px (f )
where the PSD of x(t) is the Ftf of the autocorrelation,
Px (f ) =
Hence
Py (f ) =
N0
.
2
N0
rect(f /2B) .
2
b) From Tab. 1.3 we obtain
H(f ) =
A
α + j2πf
SOLUTIONS TO PROBLEMS OF CHAPTER 1
and
Py (f ) =
A2 B
A2
P
(f
)
=
.
x
|α + j2πf |2
[α2 + (2πf )2 ][1 + (2πβf )2 ]
Solution to Problem 1.69 The variance of s(t) is
σs2 =
Z
∞
−∞
Ps (f ) = T = 2 .
The probability is then obtained as
P [s(t) > 1] = Q
1
√
2
.
Solution to Problem 1.70 Since BCh > B we obtain
Py (f ) = P(f ) |GCh (f )|2 = G02
f
10−2
rect
2B
2B
Since y(t) is Gaussian, with zero mean and variance
=9
10−4
f
rect
2B
2B
σy2 = G02 10−2 = 9 · 10−4 V2
it is
P [−0.6 < y(t) < 0.6] = 1 − 2 Q
0.6
σy
= 1 − 2 Q (20) .
Solution to Problem 1.71 The PSD of x is illustrated below
Px (f )
P0 /B
6
4
4
-
0
−B
B
f
√
a) In this case G0 = 1/ 2. From Theorem 1.27, y is itself WSS with mean
my = mx G0 ' 1.41 V.
To find the statistical power of y we first write its PSD
Py (f ) =
(
1
Px (f )
2
0
1
P /B
2 0
− B2
, |f | < B/2
, |f | > B/2
Py (f )
6
1
P /B
4 0
4
0
B
2
f
.
547
548
SOLUTIONS TO PROBLEMS
Observe that the input spectral line at the origin is still present, while the input spectral line at
f = B has been filtered out. Finally
My =
P0
2B
Z
B/2
triang(f /B) df +
−B/2
1
m2x
=
2
2
3
P0 + m2x
4
= 5 V2 .
b) The process z can be obtained from x through a filter with frequency response G z (f ) = 1 − G(f ).
Therefore z is WSS. Its mean is easily calculated
mz = mx (1 − G0 ) = 0
as G0 = 1. Concerning the PSD we have
(
2
|Gz (f )| =
and
Pz (f ) =
(
, |f | < B/2
, |f | = B/2
, |f | > B/2
0
1/4
1
P0
triang(f /B) + P0 δ(f − B)
B
0
Pz (f )
6
1 P
, |f | > B/2
, |f | < B/2
4
0
2 B
−B
0
− B2
B
B
2
f
so that in this case the input spectral line at the origin is filtered out, while the input spectral line
at f = B is still present. The statistical power is
Mz = 2
Z
B
B/2
P0
5
triang(f /B) df + P0 = P0 = 10 V2
B
4
To answer the last question we observe that Pz is not an even function of f (there is a spectral line
at B but not one at −B), hence by property 4 on page 76 z (and similarly x) is not real-valued.
Solution to Problem 1.72 From Theorem 1.27 we get
my = m x
and since
1
T1
Z
+∞
e−t/T1 dt = mx = 1 V
0
Px (f ) = m2x δ(f ) +
and
G(f ) =
1
,
1 + j2πT1 f
Py (f ) =
m2x
T1
1 + (2πT1 f )2
|G(f )|2 =
1
1 + (2πT1 f )2
we get the PSD
T1
|G(0)| δ(f ) + |G(f )|
1 + (2πT f )2
2
2
SOLUTIONS TO PROBLEMS OF CHAPTER 1
= m2x δ(f ) +
T1
(1 + 4π 2 T12 f 2 )2
549
To calculate My we observe that in this case it is more convenient to operate in the time domain through
the output correlation. Indeed we have
∗
(g ∗ g−
) = F −1 |G|2
and
∗
(g ∗ g−
) (t) =
,
∗
My = ry (0) = [rx ∗ (g ∗ g−
)] (0) =
=
m2x
1
T1
Z
+∞
e
−|u|/T1
du +
m2x
−∞
Z
1 −|t|/T1
e
T
+∞
−∞
1
T1
= m2x (1 + 1/2) = 1.5 V2 .
Z
∗
rx (u) (g ∗ g−
) (−u) du
+∞
e−2|u|/T1 du
−∞
Solution to Problem 1.73 From the input-ouput relationship of the interpolate filter we can write
2 #
#
" +∞ +∞
" +∞
X
X X
∗
∗
xn xm g(t − nT )g (t − mT )
xn g(t − nT ) = E
My (t) = E n=−∞
n=−∞ m=−∞
then by the linearity of the expectation
+∞
X
+∞
X
My (t) =
n=−∞ m=−∞
∗
E [xn xm ]
Since x is white,
vanish and we get
∗
∗
E [xn xm ] g(t − nT )g (t − mT )
= rx (n − m) = 0 for n 6= m. Hence the cross terms in the double sum
My (t) =
+∞
X
n=−∞
2
2
E |xn | |g(t − nT )|
and by the stationarity of x we prove the first result.
To prove the statement for the average statistical power we can take the time average of the above
result and obtain (see proof of Theorem 1.39)
My =
1
T
Z
T
My (t) dt =
0
1
Mx
T
Z
+∞
−∞
|g(u)|2 du
Alternatively we can integrate (1.296) and obtain, since Px (f ) = Mx T ,
My =
Z
+∞
−∞
1
1
Eg (f )Mx df = Eg Mx .
T
T
550
SOLUTIONS TO PROBLEMS
Solution to Problem 1.74 We can write
y(t) =
+∞
X
xn rect
n=−∞
t − nT − T /2
T
/2
, frequency response
hence y is the output of an interpolator with impulse response g(t) = rect t−T
T
−jπf T
G(f ) = T sinc(f T )e
, and input x. Therefore, from the PSD of x (see Example 1.7 C)
Px (f ) =
+∞
X
2
δ(f − `/T )
T+
3
`=−∞
and (1.296) we get
2
T sinc2 (T f ) + δ(f )
3
Observe that the input spectral lines at `/T with ` 6= 0 have been canceled by the nulls of G(f ). The
average statistical power is obtained by integrating Py and it results My = 23 + 1 = Mx .
Py (f ) =
Solution to Problem 1.75
a) For perfect reconstruction, according to Theorem 1.8, we must have
• non aliasing condition ii) ⇒ Fx ≤ 1/(2Ts )
• irrelevance of the antialiasing filter d for xi , Fd /2 ≥ Fx ⇒ Fd ≥ 2Fx
• the interpolate filter must obey iii) ⇒ Fx ≤ Fg /2 ≤ Fs − Fx
b) In this case the nonaliasing condition does not hold for xi but through the insertion of the ideal
anti-aliasing filter and ideal interpolate filter as discussed on page 39. From (1.261) we have
Me = 2
Z
Fs
Fs /2
P0 (1 − Ts f ) df =
R0
4Ts
c) For the interpolated rp x̃ (which is in general cyclostationary with period T s ) to be WSS there
are no constraints on the input rp or the anti-aliasing filter. It suffices that the interpolate filter be
lowpass with bandwidth smaller than Fs /2 as discussed on page 94, hence we require Fg ≤ Fs .
For the given values the above requirement is met, and we get
Px (f ) = P0 triang(Ts f )
since d is irrelevant for xi . Moreover
Py (f ) = P0
and
+∞
X
k=−∞
1
Px̃ (f ) = Py (f ) Ts
triang(Ts f − k) = P0
2
G(f ) = P0 rect(2Ts f )
SOLUTIONS TO PROBLEMS OF CHAPTER 2
551
8.2 SOLUTIONS TO PROBLEMS OF CHAPTER 2
Solution to Problem 2.1
a) (P1 )dBm = −10 dBm.
b) (P2 )W = 10 W.
c) (P3 )dBrn = 60 dBrn.
d) (P4 )dBW = −70 dBW.
e) (P5 )dBrn = 14.8 dBrn.
Solution to Problem 2.2
a) From (2.2) we have
H(f ) =
1
1
j2πf C
j2πf C
1
=
=
1 + j2πf RC
R
+R
1
1−
1 + j2πf RC
so that
|H(f )|2 =
(2πf C)2
,
1 + (2πf RC)2
arg[H(f )] =
π
2
− arctan(2πf RC) .
b) By Fourier inversion we have
h(t) =
1
R
h
δ(t) −
i
1 −t/(RC)
e
1(t) .
RC
c) By evaluating the convolution, we obtain
iL (t) = (h ∗ vi )(t) =
V0 −t/(RC)
e
1(t) .
R
d) From (1.50) we have
V0
iL (t) = V0 |H(f0 )| cos(2πf0 t + arg[H(f0 )]) = √
cos(2πf0 t +
2R
π
)
4
.
Since vL (t) = R · iL (t), from (2.3) the average power at the load is
Pv = lim
u→+∞
1
2u
Z
+u
−u
V02
cos2 (2πf0 t +
2R
π
)
4
dt =
V02
.
4R
Solution to Problem 2.3 For the load voltage and current we have vL (t) = vi (t)/2 and iL (t) =
vL (t)/RL . So, from (2.3) the average power at the load is
Pv = lim
u→+∞
that is
A≤
1
2u
Z
r
16RL Pmax
= 1.3 V .
3
+u
−u
3
A2
1
vi2 (t) dt = 4
4 RL
4 RL
552
SOLUTIONS TO PROBLEMS
Solution to Problem 2.4
ZL (f ) = ZS∗ (f ) ,
Rs (f ) constant over [f0 −
B
; f0
2
+
B
]
2
.
Solution to Problem 2.5 Since, in this case, it is iL (t) = vL (t)/RL , from (2.3) we have
Pv = lim
u→+∞
1
2u
Z
+u
−u
V2
V02
sin2 (2πf0 t) dt = 0
RL
2RL
from which we obtain (V0 )dBV = (Pv )dBW + 10 log10 (2RL ) and so
(V0 )dBm = 30 + (Pv )dBm + 10 log10 (2RL ) = 30 − 16 + 30 = 44 dBm .
So, (V0 )mV = 160 mV and (V0 )V = 0.16 V which could be also derived as
√
V0 = 2RL Pv = 0.16 V
where Pv = 2.5 · 10−5 W. Note that f0 is irrelevant.
Solution to Problem 2.6 The reference signal is
vL (t) = v1 (t) + v2 (t) = V1 sin(2πf1 t) + V2 sin(2πf2 t)
whose power is
(Pv )W = P1 + P2 = 10−6 W + 10−9 W ' 10−6 W
or, equivalently, (Pv )dBm = −30 dBm.
Solution to Problem 2.7 Since the system is matched we have ZL = ZS∗ = 100 − j50. As a
consequence, from (2.12), the power density becomes
pv (f ) =
and so
Pv =
V0 f f
rect
4RS B
2B
V0 B
= 3.75 · 10−3 W ,
4RS
(Pv )dBm = 5.74 dBm .
Solution to Problem 2.8 The equivalent source resistance is
RS =
R1 + R2 = 500 Ω , series
R1 R2
= 120 Ω , parallel
(R1 +R2 )
respectively, for a connection in series and in parallel. So, from (2.26) we have
Pwi (f ) = 2 kT RS =
4.3 · 10−18 V2 /Hz , series
1.0 · 10−18 V2 /Hz , parallel
SOLUTIONS TO PROBLEMS OF CHAPTER 2
553
Since the statistical power is
2
σw
i
=
Z
B
−B
Pwi (f ) df = 4 kT B R =
4.3 · 10−12 V2 , series
1.0 · 10−12 V2 , parallel
for the standard deviation we obtain
σw i =
2.1 · 10−6 V , series
1.0 · 10−6 V , parallel
Solution to Problem 2.9 From
wL (t) = w1 (t)
R2
R1
+ w2 (t)
R1 + R 2
R1 + R 2
it is
σw L =
r
4k (T1 R2 + T2 R1 )
p
R1 R2
B = k (T1 + T2 ) R1 B = 9.8 · 10−7 V .
2
(R1 + R2 )
Solution to Problem 2.10 The parallel of a resistance and a capacitor gives a source impedance of
Z_S(f) = \frac{1}{\frac{1}{R} + j2\pi f C} = \frac{R}{1 + j2\pi f RC} = \frac{R\,(1 - j2\pi f RC)}{1 + (2\pi f RC)^2}
so that, from (2.33), the power spectral density of the noise voltage at the load is
P_{w_L}(f) = \frac{1}{2}\, k T R
with statistical power in the full band [-B; B],
\sigma_{w_L}^2 = \int_{-B}^{B} \frac{1}{2}\, k T R\, df = k T R B = 4.4 \cdot 10^{-11} \ \mathrm{V^2} .
For the electrical power, from (2.29) we simply have
P_w = \int_{-B}^{B} \frac{1}{2}\, k T\, df = k T B = 2.2 \cdot 10^{-17} \ \mathrm{W}
that is (P_w)_{dBrn} = -46.6 dBrn. Given P_w, \sigma_{w_L}^2 could have been derived from (2.25).
Solution to Problem 2.11
a) The amplifier noise temperature follows from (2.62), giving
T_A = T_0 (F_A - 1) = 627 \ \mathrm{K} .
The effective receiver input noise temperature is T_{eff,in} = T_S + T_A = 677 K.
b) From (2.83), the average noise power at the amplifier output is
P_{w,out} = k\, T_{eff,in}\, g_A B = 4.67 \cdot 10^{-10} \ \mathrm{W}
that is (P_{w,out})_{dBm} = -63.3 dBm.
c) From (2.76), we derive the PSD of the noise voltage at the load
P_{w_L}(f) = p_{w,out}(f)\, \frac{|Z_L|^2}{R_L} = \frac{1}{2}\, k\, T_{eff,in}\, g(f)\, R_L
from which we have
\sigma_{w_L}^2 = 2 \int_B \frac{1}{2}\, k\, T_{eff,in}\, g_A R_L\, df = k\, T_{eff,in}\, g_A R_L B = 2.34 \cdot 10^{-7} \ \mathrm{V^2}
and so
\sigma_{w_L} = 4.8 \cdot 10^{-4} \ \mathrm{V} \quad \Longrightarrow \quad (\sigma_{w_L})_{mV} = 0.48 \ \mathrm{mV} .
See also (2.25).
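The chain of computations above is easy to reproduce; a sketch (Python), where (F_A)_dB = 5 dB, T_S = 50 K, g_A = 40 dB, B = 5 MHz and R_L = 500 Ω are assumed problem data, chosen because they are consistent with the figures quoted above:

import math

k, T0 = 1.38e-23, 290.0
FA_dB, TS  = 5.0, 50.0          # assumed data, consistent with TA = 627 K and Teff,in = 677 K
gA, B, RL  = 1e4, 5e6, 500.0    # assumed: 40 dB gain, 5 MHz bandwidth, 500 ohm load

FA   = 10**(FA_dB / 10)
TA   = T0 * (FA - 1)                      # amplifier noise temperature       ~ 627 K
Teff = TS + TA                            # effective input noise temperature ~ 677 K
Pout = k * Teff * gA * B                  # output noise power                ~ 4.67e-10 W
sig  = math.sqrt(k * Teff * gA * RL * B)  # noise voltage std at the load     ~ 0.48 mV
print(TA, Teff, Pout, 10 * math.log10(Pout / 1e-3), sig)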
Solution to Problem 2.12
a) We assume that the noise temperature of the source is T_S = T_0. From (2.83) and (2.56) the noise figure is
F = \frac{P_{w,out}}{k T_0\, g B} \quad \Longrightarrow \quad (F)_{dB} = 7 \ \mathrm{dB}
and the amplifier noise temperature T_A = T_0 (F - 1) = 1160 K.
b) To determine the standard deviation of the noise voltage at the load we use (2.57) and (2.76), to obtain
P_{w_L}(f) = \frac{1}{2}\, k\, T_{eff,in}\, g(f)\, R_L
from which we have
\sigma_{w_L} = \sqrt{k\, T_{eff,in}\, g\, R_L B} = \sqrt{P_{w,out}\, R_L} = 6.3 \cdot 10^{-5} \ \mathrm{V} .
See also (2.25).
Solution to Problem 2.13 We consider the source at the standard temperature T 0 , so that from (2.56)
we have Teff,in = F T0 . Thus, from (2.85) we obtain
(Ps,in )dBm = (Λ)dB − 114 + (F)dB + 10 log10 (B)MHz = −81 dBm .
For the statistical power we can use (2.24) to obtain
Mvi = 4RS Ps,in = 1.6 · 10−9 V2 .
Note that if the signal average is zero, then Mvi = σv2i , and the standard deviation of the input voltage
is (σvi )µV = 40 µV.
Solution to Problem 2.14
a) The antenna is the system source. So, the noise figure must be calculated for the cascade of the remaining devices. From (2.72) we have
F = 1 + \frac{T_1}{T_0} + \frac{F_2 - 1}{g_1} + \frac{F_3 - 1}{g_1 g_2} \quad \Longrightarrow \quad (F)_{dB} = 0.67 \ \mathrm{dB} .
b) For the output SNR, from (2.85) and T_{eff,in} = T_a + T_0 (F - 1) = 108.6 K we have
(P_{s,in})_{dBm} = -98.6 \ \mathrm{dBm} .
c) If the second amplifier is removed, the noise figure becomes
F' = 1 + \frac{T_1}{T_0} + \frac{F_3 - 1}{g_1} \quad \Longrightarrow \quad (F')_{dB} = 1.7 \ \mathrm{dB} ,
and the required average power becomes
(P'_{s,in})_{dBm} = -96.1 \ \mathrm{dBm} .
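The cascade noise-figure formula (2.72) used in a) and c) is straightforward to script; a generic sketch (Python) with placeholder gains and noise figures in linear units, since the actual problem data are not repeated here:

def cascade_noise_figure(stages):
    """stages: list of (F, g) pairs in linear units, in cascade order (Friis formula, (2.72))."""
    F_tot, g_tot = 0.0, 1.0
    for i, (F, g) in enumerate(stages):
        F_tot += (F - 1.0) / g_tot if i else F   # first stage contributes F1, the others (Fi-1)/(g1...g_{i-1})
        g_tot *= g
    return F_tot

# placeholder example: three stages (values are illustrative only)
import math
stages = [(1.1, 100.0), (2.0, 10.0), (5.0, 1.0)]
print(10 * math.log10(cascade_noise_figure(stages)), "dB")   # -> about 0.47 dB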
Solution to Problem 2.15
a) From (2.71) and (2.56) we have
T_{eff,in} = T_S + T_0 (a_W - 1) + a_W T_A + \frac{a_W T_0 (F_T - 1)}{g_A} = 93 \ \mathrm{K}
where we assumed the waveguide at standard temperature and thus exploited (2.63) to express its noise figure.
b) For the required signal power level, from (2.85) we have
(P_{s,in})_{dBm} = -76 \ \mathrm{dBm} .
Solution to Problem 2.16
a) From (2.63) we know that a passive network has F = a. So, for the noise figure, from (2.72) we have
F = F_1 + \frac{F_2 - 1}{g_1} + \frac{F_3 - 1}{g_1 g_2} = F_1 F_2 F_3 \quad \Longrightarrow \quad (F)_{dB} = 26 \ \mathrm{dB} .
b) For the output SNR, from (2.85) where T_{eff,in} = T_S + T_0 (F - 1) = 1.16 \cdot 10^5 K, we have
(\Lambda_{out})_{dB} = 65 \ \mathrm{dB} .
c) For the input SNR, from (2.87) where integration is performed over the reference bandwidth only, we have
\Lambda_{in} = \frac{P_{s,in}}{k T_S B} \quad \Longrightarrow \quad (\Lambda_{in})_{dB} = 88 \ \mathrm{dB} .
Solution to Problem 2.17 There is no difference between the two scenarios, since in both cases we have a two-port passive network with overall gain g_1^N at temperature T_1, and thus with noise figure F = 1 + \frac{T_1}{T_0}\,(g_1^{-N} - 1) (see Example 2.2 A).
Solution to Problem 2.18 It is more convenient to use one amplifier since, from (2.72), the noise figure of the cascade is
F + \frac{F - 1}{g^{1/N}} + \frac{F - 1}{g^{2/N}} + \dots + \frac{F - 1}{g^{(N-1)/N}} > F .
Solution to Problem 2.19
a) From the definition of noise figure (2.60) and from (2.51) we have
F(f) = 1 + \frac{p_{w,in}^{(A)}(f)\, g}{\frac{1}{2}\, k T_0\, g} = 1 + K_1 e^{-f/f_0} , \qquad K_1 = \frac{2 K_0}{k T_0} = 5 \cdot 10^{10} .
b) By assuming the source at the standard temperature T_0, from (2.86) we have T_{eff,in} = F\, T_0. So, for the output SNR from (2.57) we have
P_{w,out} = \int \frac{1}{2}\, k T_0\, F(f)\, g(f)\, df = \int_B k T_0\, F(f)\, g\, df = k T_0\, g \left[(f_2 - f_1) - f_0 K_1 \left(e^{-f_2/f_0} - e^{-f_1/f_0}\right)\right]
if B = [f_1, f_2]. So, the output SNR is
\Lambda = \frac{P_{v,in}}{k T_0 \left[(f_2 - f_1) - f_0 K_1 \left(e^{-f_2/f_0} - e^{-f_1/f_0}\right)\right]} \quad \Longrightarrow \quad (\Lambda)_{dB} = 11.65 \ \mathrm{dB} .
c) In this case, for a sinusoidal signal we have M_{s_i} = \sigma_{s_i}^2 and by use of (2.24) we obtain an input power of
P_{s,in} = \frac{\sigma_{s_i}^2}{4 R_S} = 4 \cdot 10^{-3} \ \mathrm{W}
so that the input power is increased by 6 dB, and so is the output SNR, that is (\Lambda)_{dB} = 17.65 dB.
d) The frequency is in this case 4 f_0 = 2 MHz and it is outside the amplifier band. So, we obtain \Lambda = 0.
Solution to Problem 2.20
a) For the overall gain, from (2.69) we simply have
g(f) = g_1\, g_2(f) = \frac{g_1}{1 + (f/B)^2}
which cannot be simplified further at this stage. For the noise figure, from (2.72) we have
F = F_1 + \frac{F_2 - 1}{g_1} = F_1 + \frac{T_2}{T_0\, g_1} \quad \Longrightarrow \quad (F)_{dB} \simeq (F_1)_{dB} = 10 \ \mathrm{dB} .
b) For the power density of noise, from (2.57) we have
p_{w,out}(f) = \frac{1}{2}\, k \left[T_S + T_0 (F - 1)\right] g(f) .
c) For the output SNR, we cannot use the constant power gain results of (2.82) and (2.83). We thus have to proceed by evaluating integrals. For the useful signal power, by use of (2.22) we have
P_{s,out} = \int \frac{P_{s_i}(f)}{4 R_S}\, g(f)\, df = \frac{E_0}{4 R_S} \int_{-B}^{+B} \frac{g_1}{1 + (f/B)^2}\, df
while for the noise power we have
P_{w,out} = \int p_{w,out}(f)\, df = \frac{1}{2}\, k \left[T_S + T_0 (F - 1)\right] \int_{-B}^{+B} \frac{g_1}{1 + (f/B)^2}\, df .
Thus, with no need to perform the integration, we obtain
\Lambda = \frac{P_{s,out}}{P_{w,out}} = \frac{E_0}{2 R_S\, k \left[T_S + T_0 (F - 1)\right]} \quad \Longrightarrow \quad (\Lambda)_{dB} = 43.8 \ \mathrm{dB} .
Solution to Problem 2.21 From (2.111) and (2.115) we have
(\tilde a(f_1))_{dB/km} = 8.68\, K \sqrt{f_1} \quad \Longrightarrow \quad K = \frac{(\tilde a(f_1))_{dB/km}}{8.68 \sqrt{f_1}} = 6.1 \cdot 10^{-4} .
Instead, for the attenuation at frequency f_0, from (2.114) and (2.117) we have
(a(f_0))_{dB} = (\tilde a(f_1))_{dB/km}\, (d)_{km} \sqrt{f_0/f_1} = 0.24 \ \mathrm{dB} .
Solution to Problem 2.22
a) From Example 2.2 A, the noise figure of the line at temperature T is
F_\ell = 1 + \frac{T}{T_0}\,(a_{Ch} - 1) \quad \Longrightarrow \quad (F_\ell)_{dB} = 20.36 \ \mathrm{dB} .
So, regarding the noise figure we have the two cases
F = F_\ell + (F_A - 1)\, a_{Ch} \ \ (1) , \qquad F = F_A + (F_\ell - 1)/g_A \ \ (2) ,
giving (F)_{dB} = 27 dB in case (1) and 7.84 dB in case (2), while the noise temperature of the cascade is
T_{A,tot} = T_0 (F - 1) = 147529 \ \mathrm{K} \ \ (1) , \qquad 1475 \ \mathrm{K} \ \ (2) .
The global gain is g = g_A / a_{Ch} = 1.
b) Since T_S = T_0, from (2.56) the effective input temperature is T_{eff,in} = T_0 F. So, from the SNR expression (2.85), it is
(P_{s,in})_{dBm} = -67 \ \mathrm{dBm} \ \ (1) , \qquad -86 \ \mathrm{dBm} \ \ (2) .
Solution to Problem 2.23
a) From (2.114), the attenuation of the line is (a_{Ch})_{dB} = 2 \cdot 200 = 400 dB. So, the received power becomes
(P_{Rc})_{dBm} = (P_{Tx})_{dBm} - (a_{Ch})_{dB} = -390 \ \mathrm{dBm} .
b) In this case, the request on the received power is (in dB scale)
(P_{Rc})_{dBm} = (P_{Tx})_{dBm} - (a_{Ch})_{dB} + N (g_A)_{dB} \ge 10 \quad \Longrightarrow \quad N \ge \frac{(a_{Ch})_{dB}}{(g_A)_{dB}} = 20 .
c) Following the steps of Example 2.2 B, and using N = 20, the parameters of each repeater section are g_R = g_A (a_{Ch})^{-1/N} = 1 and
F_R = F_C + \frac{F_A - 1}{g_C} = F_A\, (a_{Ch})^{1/N} \quad \Longrightarrow \quad (F_R)_{dB} = 26 \ \mathrm{dB} .
So, the characteristic parameters of the cascade become g = g_R^N = 1 and
F = N (F_R - 1) + 1 \quad \Longrightarrow \quad (F)_{dB} = 39 \ \mathrm{dB} .
The SNR is thus, from (2.104) and (2.105),
(\Lambda)_{dB} = 10 + 114 - 39 + 20 = 105 \ \mathrm{dB} .
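Point c) is easy to verify numerically; a sketch (Python), where the 10 dBm received-power term and the 10 kHz bandwidth used in the final dB sum are assumptions inferred from the figures above:

import math

aCh_dB, N, FA_dB = 400.0, 20, 6.0       # line attenuation, repeater sections, amplifier noise figure
FR_dB = FA_dB + aCh_dB / N              # noise figure of one repeater section  -> 26 dB
FR    = 10**(FR_dB / 10)
F     = N * (FR - 1) + 1                # cascade of N identical unit-gain sections
F_dB  = 10 * math.log10(F)              # -> about 39 dB

Ps_dBm, B_MHz = 10.0, 0.01              # assumed: signal power 10 dBm, bandwidth 10 kHz
SNR_dB = Ps_dBm + 114 - F_dB - 10 * math.log10(B_MHz)   # dB bookkeeping as in (2.104)-(2.105)
print(FR_dB, F_dB, SNR_dB)              # ~26, ~39, ~105 dB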
Solution to Problem 2.24
a) The attenuation of the cable follows a \sqrt{f} law in the logarithmic scale. We can thus write (see (2.117) and (2.114))
(a_{Ch}(f_0))_{dB} = (\tilde a_{Ch}(f_1))_{dB/km}\, (d)_{km} \sqrt{f_0/f_1} = 24 \ \mathrm{dB} .
The required transmit power is thus
(P_{Tx})_{dBm} = (P_{Rc})_{dBm} + (a_{Ch}(f_0))_{dB} = -60 + 24 = -36 \ \mathrm{dBm} .
b) From (2.104) and (2.105), the SNR becomes
(\Lambda)_{dB} = -60 + 114 - 8 - 7 = 39 \ \mathrm{dB} .
c) From the above results, we have
(a_{Ch}(f'_0))_{dB} = (\tilde a_{Ch}(f_1))_{dB/km}\, (d)_{km} \sqrt{f'_0/f_1} = 30 \ \mathrm{dB}
which is a 6 dB increase in signal attenuation that, for maintaining the same SNR, needs a -6 dB change in noise figure, that is
(F'_{Rc})_{dB} = (F_{Rc})_{dB} - 6 = 2 \ \mathrm{dB} .
Solution to Problem 2.25
a) From (2.117) we have
(\tilde a_{Ch}(f))_{dB/m} = (\tilde a_{Ch}(f_0))_{dB/m} \sqrt{f/f_0} = 8.68\, \alpha(f) .
So, by observing (2.110) and (2.111), we obtain
|G_{Ch}(f)| = e^{-K d \sqrt{f}} , \qquad K = 3.64 \cdot 10^{-8} .
b) From the problem settings, we have
p_{Tx}(f) = \frac{P_{Tx}}{2(f_1 - f_0)} = 1.67 \cdot 10^{-11} \ \mathrm{W/Hz} , \qquad |f| \in [f_0, f_1]
so that the received power density is
p_{Rc}(f) = p_{Tx}(f)\, g_{Ch}(f) = \frac{P_{Tx}}{2(f_1 - f_0)}\, e^{-2 K d \sqrt{f}} , \qquad |f| \in [f_0, f_1]
and the received power is
P_{Rc} = \int p_{Rc}(f)\, df = \frac{2 P_{Tx}}{2(f_1 - f_0)} \int_{f_0}^{f_1} e^{-2 K d \sqrt{f}}\, df .
Now, we can solve the above integral by parts,
\int_a^b e^{-D \sqrt{x}}\, dx = 2 \int_{\sqrt{a}}^{\sqrt{b}} y\, e^{-D y}\, dy = -\frac{2}{D} \left(y + \frac{1}{D}\right) e^{-D y} \Big|_{\sqrt{a}}^{\sqrt{b}}
with a = f_0, b = f_1 and D = 2 K d. Finally,
P_{Rc} = 1.6 \cdot 10^{-6} \ \mathrm{W} \quad \Longrightarrow \quad (P_{Rc})_{dBm} = -27.6 \ \mathrm{dBm} .
c) For the SNR, since the only source of noise is thermal noise and the system is matched, we can use (2.104) and (2.105) to obtain
(\Lambda)_{dB} = -27.6 + 114 - 8 - 14.7 = 63.7 \ \mathrm{dB} .
Solution to Problem 2.26 The overall fiber gain is, from (2.124),
(g_{Ch})_{dB} = -(A_F)_{dB} - 40\, (\log_{10} e)\, (\pi f_0 \sigma_F)^2 = -91 \ \mathrm{dB}
and the reference SNR expression is, from (2.104) and (2.105),
(\Lambda)_{dB} = 1 - 91 + 114 - (F_{Rc})_{dB} + 15 = 39 - (F_{Rc})_{dB} .
So, it is required that
(F_{Rc})_{dB} \le 19 \ \mathrm{dB} .
Solution to Problem 2.27
(g_{Ch})_{dB} = -30 \ \mathrm{dB} , \qquad (P_{Tx})_{dBm} = -69 \ \mathrm{dBm} .
Solution to Problem 2.28
a) For the power density of noise, from (2.56) and (2.90) we have T_{eff,Rc} = T_S + T_0 (F_A - 1) = 1280 K and so
p_{w,Rc,out}(f) = p_{w,Rc}(f)\, g_{Rc}(f) = \frac{1}{2}\, k\, T_{eff,Rc}\, g_A
or, equivalently, (p_{w,Rc,out})_{dBm/Hz} = -150.5 dBm/Hz.
b) From (2.135) the attenuation introduced by the channel is
(a_{Ch})_{dB} = 32.4 - 10.5 + 54 - 10 - 7 = 58.9 \ \mathrm{dB} .
So, by using (2.104) we obtain
(P_{Tx})_{dBm} = (\Lambda)_{dB} + (a_{Ch})_{dB} - 114 + 10 \log_{10} \frac{T_{eff,Rc}}{T_0} + 10 \log_{10} (B)_{MHz} = 20 + 58.9 - 114 + 6.5 + 7 = -21.6 \ \mathrm{dBm} .
c) In this case, from the SNR expression (2.104) we can derive the desired attenuation, that is
(a_{Ch})_{dB} = (P_{Tx})_{dBm} - (\Lambda)_{dB} + 114 - 10 \log_{10} \frac{T_{eff,Rc}}{T_0} - 10 \log_{10} (B)_{MHz} = 10 - 20 + 114 - 6.5 - 7 = 90.5 \ \mathrm{dB} .
Then, from (2.135) we have
20 \log_{10} (d)_{km} = 90.5 - 32.4 - 54 + 10 + 7 = 21.1
that is d = 11.35 km.
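The link-budget manipulations in b) and c) are pure dB bookkeeping; a compact check (Python) with the numbers used above:

import math

Lambda_dB, aCh_dB = 20.0, 58.9     # required SNR and channel attenuation from point b)
Teff_T0_dB, B_dB  = 6.5, 7.0       # 10*log10(Teff,Rc/T0) and 10*log10(B in MHz)

# point b): required transmit power
PTx_dBm = Lambda_dB + aCh_dB - 114 + Teff_T0_dB + B_dB
print(PTx_dBm)                     # -> about -21.6 dBm

# point c): attenuation allowed by PTx = 10 dBm, then the distance from (2.135)
aCh_dB_c = 10.0 - Lambda_dB + 114 - Teff_T0_dB - B_dB
d_km = 10**((aCh_dB_c - 32.4 - 54 + 10 + 7) / 20)
print(aCh_dB_c, d_km)              # -> about 90.5 dB and 11.35 km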
Solution to Problem 2.29
a) From (2.135), the attenuation due to the radio link is
(a_{Ch})_{dB} = 32.4 + 40 + 60 - 3 - 5 = 124.4 \ \mathrm{dB} ,
and the effective noise temperature (at the receiver input) is
T_{eff,Rc} = T_S + T_0 (F_R - 1) = 1114 \ \mathrm{K} .
So, from the SNR expression (2.104) the required transmitted power becomes
(P_{Tx})_{dBm} = (\Lambda)_{dB} + (a_{Ch})_{dB} - 114 + 10 \log_{10} \frac{T_{eff,Rc}}{T_0} + 10 \log_{10} (B)_{MHz} = 30 + 124.4 - 114 + 5.8 - 24 = 22.2 \ \mathrm{dBm} .
b) If the length is increased to 150 km, it is increased by a factor 1.5, which means an increase by 3.5 dB in attenuation. So, to guarantee the same SNR it is required that the new effective noise temperature T'_{eff,Rc} satisfies
\frac{T'_{eff,Rc}}{T_{eff,Rc}} = 10^{-3.5/10} = (1.5)^{-2} .
We then have
F'_R = 1 + \frac{\frac{4}{9}\, T_{eff,Rc} - T_S}{T_0} \quad \Longrightarrow \quad (F'_R)_{dB} = 2.66 \ \mathrm{dB} .
Solution to Problem 2.30
a) For the output power we first need to evaluate the line attenuation by means of (2.114) and (2.117), for which we have
(a_C(f_0))_{dB} = (\tilde a_C(f_1))_{dB/km}\, (d_C)_{km} \sqrt{f_0/f_1} = 200 \ \mathrm{dB} .
Thus, we obtain
(P_{s,Rc,out})_{dBm} = (P_{Ant,Rc})_{dBm} + (g_{Ant,Rc})_{dB} + (g_C)_{dB} + (g_A)_{dB} = -162 \ \mathrm{dBm} ,
where (P_{Ant,Rc})_{dBm} + (g_{Ant,Rc})_{dB} = (P_{Rc})_{dBm} is the received power in dBm.
b) For the noise power, we first evaluate the effective input noise temperature, which is (see (2.71))
T_{eff,Rc} = T_S + T_0 (a_C - 1) + a_C T_0 (F_A - 1) \simeq T_0\, a_C F_A ,
using (2.63) for passive networks. So, for the output noise power from (2.107) we have
(P_{w,Rc,out})_{dBm} = -114 + (a_C F_A)_{dB} + 10 \log_{10} (B)_{MHz} + (g_C)_{dB} + (g_A)_{dB} = -114 + (200 + 5) + 7 - 200 + 20 = -82 \ \mathrm{dBm} .
c) In this case, we have (a_C)_{dB} = 100 dB for each line section and also (g_A)_{dB} = 10 dB and (F_A)_{dB} = 6 dB, so that the overall effective input noise temperature becomes (see also Example 2.2 C)
T_{eff,Rc} = \left[T_S + T_0 (a_C - 1) + a_C T_0 (F_A - 1)\right] \left[1 + a_C g_A^{-1}\right] \simeq T_0\, a_C^2 F_A g_A^{-1}
which assures that
(P_{w,Rc,out})_{dBm} = -114 + (a_C^2 F_A g_A^{-1})_{dB} + 10 \log_{10} (B)_{MHz} + (g_C^2)_{dB} + (g_A^2)_{dB} = -114 + (200 + 6 - 10) + 7 - 200 + 20 = -91 \ \mathrm{dBm} .
Solution to Problem 2.31
a) For the attenuation, from (2.134) with g_{Ant,Tx} = 1, we have
a_{Ch}(d) = \frac{1}{g_{Ant,Rc}} \left(\frac{4\pi d f_0}{c}\right)^2 \le a_{max}
that is
g_{Ant,Rc} \ge \frac{1}{a_{max}} \left(\frac{4\pi d_{max} f_0}{c}\right)^2 = 17.4
that is (g_{Ant,Rc})_{dB} = 12.4 dB.
b) The average attenuation is instead
\bar a_{Ch} = \frac{1}{\pi d_{max}^2} \int_0^{2\pi} \int_0^{d_{max}} a_{Ch}(r)\, r\, dr\, d\varphi = \frac{2}{d_{max}^2\, g_{Ant,Rc}} \left(\frac{4\pi f_0}{c}\right)^2 \int_0^{d_{max}} r^3\, dr
giving
\bar a_{Ch} = \frac{1}{2\, g_{Ant,Rc}} \left(\frac{4\pi d_{max} f_0}{c}\right)^2 = a_{Ch}(d_{max})/2
so that the required gain is halved, g_{Ant,Rc} \ge 8.7, i.e. (g_{Ant,Rc})_{dB} = 9.4 dB.
8.3 SOLUTIONS TO PROBLEMS OF Chapter 3
Solution to Problem 3.1
a) After the first mixer
A_1(f) = \frac{1}{2}\,[A(f - f_0) + A(f + f_0)] .
After the HPF
V(f) = \frac{1}{2}\,[A(f - f_0)\,1(f - f_0) + A(f + f_0)\,1(-f - f_0)] .
After the second mixer
V_1(f) = \frac{1}{2}\,[A(f + f_0)\,1(-f - f_0) + A(f - f_0)\,1(f - f_0)] * \frac{1}{2}\,[\delta(f - (f_0 + B)) + \delta(f + f_0 + B)]
that is
V_1(f) = \frac{1}{4}\,\big[A(f + f_0 - f_0 - B)\,1(-f + f_0 + B - f_0) + A(f + f_0 - f_0 + B)\,1(f + f_0 + B - f_0) + A(f - 2f_0 - B)\,1(f - 2f_0 - B) + A(f + 2f_0 + B)\,1(-f - 2f_0 - B)\big]
= \frac{1}{4}\,\big[A(f - B)\,1(-f + B) + A(f + B)\,1(f + B) + A(f - 2f_0 - B)\,1(f - 2f_0 - B) + A(f + 2f_0 + B)\,1(-f - 2f_0 - B)\big] .
The LPF removes the frequency components around 2f_0, giving
S(f) = \frac{1}{4}\,[A(f - B)\,1(-f + B) + A(f + B)\,1(f + B)]
as shown in the figure (sketch of S(f), with band edge at f = B).
b) Since the overall transformation is equivalent to exchanging the components of positive and negative frequency of the input Ftf, by applying twice the transformation, at the output of the composite
system we get a scaled version of the original signal.
Solution to Problem 3.2 The receiver is composed of a mixer, performing the down-frequency conversion, followed by an LPF. The LPF in this case must have a frequency response compensating the distortion introduced by the BPF, that is, it must be
H(f)\, H_{BP}^{(+)}(f + f_0) = K , \qquad |f| < B ,
with K a constant, according to the Heaviside conditions (1.75). For K = 1 it must be
H(f) = \frac{1}{1 + \frac{|f|}{B}}\, \mathrm{rect}\!\left(\frac{f}{2B}\right) .
Solution to Problem 3.3 The output of the channel is
so (t) = K1 a(t) cos(2πf0 t) + K2 a2 (t) cos2 (2πf0 t) + K3 a3 (t) cos3 (2πf0 t) .
If a(t) has bandwidth B, then a^2(t) and a^3(t) have bandwidth 2B and 3B, respectively. Moreover, \cos^n(2\pi f_0 t) can be expressed as a combination of cosine functions,
\cos^n(2\pi f_0 t) = \sum_{m=0}^{n} k_m \cos(2\pi m f_0 t) .
In order to get a(t) from so (t), firstly, we consider in so (t) the terms multiplied by cos(2πmf0 t). To
avoid frequency overlapping between the various terms, the condition f 0 > 6B is required. Moreover,
around the frequencies 0, f_0, 2f_0 we have terms due to a(t), a^2(t) and a^3(t). Only the components around 3f_0 are due solely to a^3(t). Hence, provided that f_0 > 6B, the signal a(t) can be recovered
from so (t) by a mixer with carrier frequency 3f0 followed by a lowpass filter and a device extracting
the cubic root.
Solution to Problem 3.4
a) At the output of the square-law device the signal is
u(t) = a^2(t) \cos^2(2\pi f_0 t) = \frac{a^2(t)}{2}\,[1 + \cos(4\pi f_0 t)] .
Its Ftf is given by
U(f) = \frac{1}{2}\,(A * A)(f) + \frac{1}{4}\,(A * A)(f - 2f_0) + \frac{1}{4}\,(A * A)(f + 2f_0) .
b) At the output of the narrowband filter only the components at \pm 2f_0 are observed. Moreover, by considering the baseband equivalent of the signal \frac{a^2(t)}{2} \cos(4\pi f_0 t), it is seen that, since the filter bandwidth is very small, its output yields the average of a^2(t), i.e. M_a, modulated by \frac{1}{2}\cos(4\pi f_0 t). So, we obtain
v(t) \simeq \frac{M_a}{2} \cos(4\pi f_0 t) .
Solution to Problem 3.5 We consider the output of the upper branch of the receiver depicted in
Fig. 3.10. From (3.38) we have
u1 (t)
= a1 (t) cos(2πf0 t + ϕ1 ) cos(2πf0 t + ϕ0 )
−a2 (t) cos(2πf0 t + ϕ1 ) sin(2πf0 t + ϕ0 )
a1 (t)
a1 (t)
.
=
cos(ϕ1 − ϕ0 ) +
cos(2π2f0 t + ϕ1 + ϕ0 )
2
2
a2 (t)
a2 (t)
−
sin(ϕ1 − ϕ0 ) −
sin(2π2f0 t + ϕ1 + ϕ0 )
2
2
The components around 2f0 are removed by the LPF, and
ao,1 =
a1 (t)
a2 (t)
cos(ϕ1 − ϕ0 ) −
sin(ϕ1 − ϕ0 ) ,
2
2
where the second term represents the interference, i.e. a signal component due to signal a 2 at the output
of demodulator 1.
Solution to Problem 3.6
a)
sTx (t) = a(t) cos(2πf0 t)
1
1
STx (f ) = A(f − f0 ) + A(f + f0 )
2
2
where
A(f ) = δ(f − 1500) + δ(f + 1500) +
Then
STx (f )
1
1
δ(f − 3000) + δ(f + 3000)
2
2
= 12 δ[f − (f0 − 1500)] + 21 δ[f − (f0 + 1500)]
+ 21 δ[f + (f0 − 1500)] + 21 δ[f + (f0 + 1500)]
+ 41 δ[f − (f0 − 3000)] + 41 δ[f − (f0 + 3000)]
+ 41 δ[f + (f0 − 3000)] + 41 δ[f + (f0 + 3000)]
b) The modulated signal sTx (t) can be written as
sTx (t) = cos[2π(f0 + 1500)t] + cos[2π(f0 − 1500)t]
1
1
+ cos[2π(f0 + 3000)t] + cos[2π(f0 − 3000)t]
2
2
Then we have the components
at f0 + 1500 Hz with power 1/2 V2
at f0 − 1500 Hz with power 1/2 V2
at f0 − 3000 Hz with power 1/8 V2
at f0 + 3000 Hz with power 1/8 V2
Solution to Problem 3.7 The Ftf of s_{Tx}(t) is given by
S_{Tx}(f) = \frac{1}{2}\, A(f - f_0) + \frac{1}{2}\, A(f + f_0) ,
where
A(f) = \mathrm{rect}(f) + \mathrm{triang}(f) .
Therefore the bandwidth of a(t) is B = 1 Hz and the bandwidth of s_{Tx} is B_s = 2B = 2 Hz. Note that the band of s_{Tx} is B_s = (f_0 - B, f_0 + B) = (99 Hz, 101 Hz).
Solution to Problem 3.8 As known, the output of a DSB coherent demodulator in the presence of a phase error is
a_o(t) = \frac{1}{2}\, \cos(\varphi_1)\, a(t) .
Then
M_o = \frac{1}{4}\, \cos^2(\varphi_1)\, M_a ,
and the ratio M_o/M_a is
\frac{M_o}{M_a} = \frac{1}{4}\, \cos^2(\varphi_1) .
Solution to Problem 3.9 As known, the output of a DSB coherent demodulator, in the presence of a phase error, is
a_o(t) = \cos(\varphi_1(t) - \varphi_0)\, a(t) ,
where the factor \frac{1}{2} in (3.14) is compensated by the amplitude of the carrier. Then, the error is the difference signal
e(t) = a(t) - a_o(t) = [1 - \cos(\varphi_1(t) - \varphi_0)]\, a(t) .
If \varphi_1(t) is constant, the power of the error is
M_e = M_a\, [1 - \cos(\varphi_1 - \varphi_0)]^2 .
If \varphi_1(t) = 2\pi f_1 t and assuming f_1 \ge B_a, it is
M_e = M_a \left(1 + \frac{1}{2}\right) = \frac{3}{2}\, M_a .
Solution to Problem 3.10
a) With f_0 = 1.25 kHz the modulated signal is shown in the figure (spectrum A(f), of bandwidth 1 kHz, and modulated spectrum S_{Tx}(f), centered at 1.25 kHz). The output of the demodulator is
a_o(t) = \frac{1}{2}\, a(t) .
b) With f_0 = 0.75 kHz the modulated signal S_{Tx}(f) and the signal after the receive mixer, U(f) = \frac{1}{2}\, S_{Tx}(f - f_0) + \frac{1}{2}\, S_{Tx}(f + f_0), are shown in the figure (S_{Tx}(f) around 0.75 kHz; U(f) together with the LPF response H(f), whose edge is at 1.5 kHz; the demodulated spectrum A_o(f), of bandwidth 1 kHz). In this case the output of the demodulator is not proportional to a(t).
c) The minimum carrier frequency is f_0 = 1 kHz.
Solution to Problem 3.11
a) From (2.48) and (2.40)
pRc (f ) = pTx (f ) |GCh (f )|2 = G02
10−2
f
rect
2B
2B
and from (2.43)
=
10−4
f
rect
2B
2B
[W/Hz]
PsRc (f ) = pRc (f ) R .
sRc (t) has a Gaussian distribution with zero mean and variance
σs2Rc = G02 10−2 R = 10−2 .
Hence
P [−0.5 < sRc (t) < 0.5] = 1 − 2Q
0.5
σsRc
= 1 − 2Q
0.5
0.1
= 1 − 2 Q(5)
b) From (3.55) with
Teff,Rc = TS + TA = T0 + (F − 1)T0 = FT0
and
PRc = G02 PTx =
MsRc
= 0.1 mW
R
⇒
(PRc )dBm = −10 dBm
it is
(Γ)dB = −10 + 114 − 6 = 98 dB
c) In this case the link attenuation is provided by (2.135)
(aCh )dB = 32.4 + 20 log10 50 + 20 log10 1000 − 12 − 15
= 32.4 + 34 + 60 − 27 = 99.4 dB
and from (3.55)
(Γ)dB = −89.4 + 114 − 6 = 18.6 dB.
where
(PRc )dBm = 10 − 99.4 = −89.4 dBm
is the received power.
Solution to Problem 3.12 Here
sTx (t) = a(t) cos(2πf0 t) + A cos(2πf0 t) .
a) The channel only introduces a delay, hence it is non distorting.
b)
r(t) = 10−2.5 sTx (t − tD ) + wRc
−2.5
= 10−2.5 a(t − tD ) cos(2πf0 (t − tD )) + 10
| {z A} cos(2πf0 (t − tD )) + wRc (t)
AR
c) From
√
A2
R
( 2)2
Mcarrier
A2R
=
= √2 N
MwNBF
N0 2∆
( 2)2 20 2∆
we obtain
A2R = 105 · 2 · 10−12 · 2 · 500 = 2 · 10−4
⇒
AR =
√
2 · 10−2 V .
d)
A = AT = AR 102.5 = 4.47 V .
e) From (3.126), (3.94) and (3.52)
Λo = Γη =
MsTx Ma /2
Ma
=
= 103 .
N0 BaCh MsTx
2N0 BaCh
Being
aCh =
it is
1
10−2.5
2
,
Ma = 103 · 2 · (102.5 )2 · 2 · 10−12 · 4 · 103 = 16 · 10−1 = 1.6 V2 ,
and
MsTx =
A2
Ma
+
= 10.79 V2 .
2
2
f) Since m = aAm < 1, a non-coherent receiver can be used, having the same performance of the
coherent receiver.
Solution to Problem 3.13
a) The value of GCh (f ) at the frequency f0 is
1
1
=
1 + j2πf0 TCh
1 + j2π10
hence
1
⇒ (|GCh (f0 )|)dB = −36 dB
1 + 400π 2
corresponding (see (2.48)) to an attenuation of (aCh )dB = 36 dB.
b) From (2.32) the PSD of the noise at the output (open circuit voltage) of the pre-amplifier is
|GCh (f0 )|2 =
Pwout (f ) = 2kTeff,Rc Rg
with
Teff,Rc = Ts + TA = 700 K
then
Pwout (f ) = 2kTeff,Rc Rg = 2 · 1.38 · 10−23 · 700 · 100 · 1000 = 1.93 · 10−15 V2 /Hz .
c)
Γ=
MsTx
N0 BaCh
with N0 /2 = Pwout (f )/g = 1.93 · 10−18 V2 /Hz hence
Γ=
Ma /2
4 · 10−6
=
N0 BaCh
2 · 1.93 · 10−18 · 4 · 103 · 103.6
⇒
(Γ)dB = 48 dB .
since Ma = A · 2B = 8 · 10−6 V2 .
d) From (3.66)
Λo = Γ cos2 (∆φ) = Γ cos2 (
6
π)
40
⇒
(Λo )dB = 50.14 dB .
Solution to Problem 3.14 Firstly, we note that the Hilbert transform of a(t) = A sinc(2Bt) is
a(h) (t) = A sinc(Bt) sin(πBt). Hence from (3.25) the SSB+ transmitted signal is
sTx (t) =
A sinc(Bt) sin(πBt)
A sinc(2Bt)
cos(2πf0 t) +
sin(2πf0 t) .
2
2
At the output of the demodulator, from (3.29) we have
ao (t)
=
=
A sinc(2Bt)
A sinc(Bt) sin(πBt)
π
π
cos
sin
−
4
4
4
4
A
√ [sinc(2Bt) − sinc(Bt) sin(πBt)] .
4 2
Solution to Problem 3.15
a) The signal can be interpreted as the output of an interpolate filter
bk -
a(t)
-
g
T
where g(t) = rect(t/T ) and bk = (−1)k . In the frequency domain at the output of an interpolate
filter, it is A(f ) = B(f ) G(f ), and
A(h) (f ) = B(f ) G (h) (f ) .
Hence the Hilbert transform of a(t) can be obtained by an interpolate filter with impulse response
g (h) . From the Ftf pair
F
sinc(η) sgn(η) −→ j
we have
g (h) (t) =
b) From (3.29)
ao (t) =
1
2τ − 1 log ,
π
2τ + 1
2t/T + 1 1
.
log π
2t/T − 1 +∞
+∞
sin(ϕ1 ) X
cos(ϕ1 ) X
(−1)k g(t − kT ) +
(−1)k g (h) (t − kT ) .
4
4
k=−∞
k=−∞
Solution to Problem 3.16 s(t) is a SSB− modulated signal with information signal a(t).
Solution to Problem 3.17
a) The modulated SSB+ signal is given by (3.23) with ϕ0 = 0,
sT x (t)
a(t)
a(h) (t)
A cos(2πf0 t) −
A sin(2πf0 t)
2
2
cos(2πfa t)
sin(2πfa t)
A
=
A cos(2πf0 t) −
A sin(2πf0 t) =
cos (2π(f0 + fa )t)
2
2
2
=
so that the output of the envelope detector is a constant
A
ao (t) =
2
q
2
a2 (t) + (a(h) (t)) =
Ap 2
A
.
cos (2πfa t) + sin2 (2πfa t) =
2
2
Hence ao (t) is affected by interference.
b) In general the modulated SSB+ signal (3.23) with ϕ0 = 0 can be expressed as
sTx (t) = <
h
a(t) + ja(h) (t)
and the output of the envelope detector is
A
2
q
A
2
2
a2 (t) + (a(h) (t)) .
ej2πf0 t
i
Solution to Problem 3.18 The signal after the mixer is given by
ac (t) = 4 cos(2π(f0 − f1 )t) + 4 cos(2π(f0 + f1 )t)
+ 2 sin(2π(f0 − f2 )t) + 2 sin(2π(f0 + f2 )t)
represented in the figure (spectrum A_c(f), with lines at f_0 \pm f_1 and f_0 \pm f_2, together with the BPF response H_{BP}(f), whose transition band spans from f_0 - \rho B to f_0 + \rho B).
When filtered by the BPF, the component at (f_0 + f_2) is suppressed, the component at (f_0 - f_2) remains unchanged, and the components at (f_0 - f_1) and (f_0 + f_1) are attenuated. In particular at (f_0 + f_1) the gain is 1/10, while at (f_0 - f_1) the gain is 9/10. Then the modulated VSB signal is
s_{Tx}(t) = \frac{18}{5} \cos(2\pi(f_0 - f_1)t) + \frac{2}{5} \cos(2\pi(f_0 + f_1)t) - 2 \sin(2\pi(f_0 - f_2)t) ,
as shown in the figure.
Solution to Problem 3.19
a) We can separately consider the frequency shift of f0 . From
(+)
HBP (f
1
+ f0 ) =
B
Z
f
rect
−∞
u
B
du − 1(f − B) ,
the inverse Fourier transform of HBP (f ) yields
hBP (t) =
1 sinc(Bt) − ej2πBt cos (2πf0 t) .
−j2πt
b) The receiver is composed by a mixer, followed by an ideal LPF with bandwidth B.
Solution to Problem 3.20 Note that the BPF hBP can be considered as half the sum of the BPF for
DSB and of the BPF for SSB+ ,
HBP (f ) =
1 SSB
1 DSB
HBP (f ) + HBP + (f ) .
2
2
Then the scheme giving the modulated signal is presented in the figure: a(t) drives two mixers, one multiplying by \cos(2\pi f_0 t + \varphi_0) with gain \frac{1}{2}, the other multiplying by \sin(2\pi f_0 t + \varphi_0) after a Hilbert filter h^{(h)} and with gain -\frac{1}{4}; the two branch outputs are summed to give s_{Tx}(t).
Solution to Problem 3.21
a) The idea is to divide the 60 signals in L subgroups, each composed of K signals. The signal
corresponding to a subgroup occupies a band (f_0, f_0 + KB). An example is presented in the figure for K = 3, f_0 = 10 kHz and B = 4 kHz, with the three signal spectra occupying the bands 10–13 kHz, 14–17 kHz and 18–21 kHz.
b) With LK = 60 the possible values of (L, K) are
{(1, 60), (2, 30), (3, 20), (4, 15), (5, 12), (6, 10)}
The minimum value of L + K is 16, obtained for L = 6 and K = 10 (or L = 10 and K = 6).
c) Assuming L = 6 and K = 10, 16 carrier frequencies are needed. K = 10 carriers frequencies
fk1 , . . . , fk10 are assigned to adjacent subchannels forming each subgroup. Then, for example
fk1 = 10 kHz
fk2 = 14 kHz
fk3 = 18 kHz
fk4 = 22 kHz
fk5 = 26 kHz
fk6 = 30 kHz
fk7 = 34 kHz
fk8 = 38 kHz
fk9 = 42 kHz
fk10 = 46 kHz .
L = 6 carrier frequencies fl1 , . . . , fl6 are used to multiplex the various subgroups. To avoid
overlapping, the minimum carrier separation is 40 kHz. Then a possible choice is
fl1 = 290 kHz
fl2 = 330 kHz
fl3 = 370 kHz
fl4 = 410 kHz
fl5 = 450 kHz
fl6 = 490 kHz .
Solution to Problem 3.22
a) To avoid interference at the receiver output, the products ci (t) cj (t) must have a null baseband
component for i 6= j. Since the baseband component of the product c i (t) cj (t) is given by
1
1
cos(αi − αj ) + cos(βi − βj ) , i, j = 1, . . . , 4
2
2
Then the solution is
α2 = ϑ + π2
α3 = ϑ + π
α4 = ϑ + π2
(α1 = β1 = 0) .
β2 = ϑ + π2
β3 = ϑ
β4 = ϑ + 23 π
for all values of ϑ.
b) If B is the bandwidth of the information signals, it must be
|fa − fb | ≥ 2B .
In fact, only in this case, at the LPF input, the components around |f a − fb | would not overlap
with the baseband components.
Solution to Problem 3.23 Since A2 /2 = 100, it is
√
A = 2 · 10 V .
The modulated signal can be written in the standard form with two terms,
sTx (t) = A cos(2π200t) + A a(t) cos(2π200t) ,
where
2B
cos(2π20t) .
A
a(t) =
From (3.94)
η=
B2
A2
2
we have
B=
r
+ B2
= 0.4 ,
40
= 8.16 V .
0.6
From (3.98) and knowing that kf2 = 1/2, we have
m2
0.4 =
,
2 + m2
m=
r
0.8
= 1.15 .
0.6
Solution to Problem 3.24 The modulated signal can be written as
sTx (t) = A cos(2π200t) + am cos(2π200t) cos(2π20t)
with A = 40 and am = 12. Hence, sTx (t) can be also written as
sTx (t) = [A + am cos(2π20t)] cos(2π200t) .
From (3.91), it is m = 0.3 and from (3.94), recalling that for a sinusoidal waveform it is k f2 = 1/2, we
have η = 4.3%.
Solution to Problem 3.25
a) From (1.32)
1
2
4
Ma =
Z
2
0
1
t2
dt = .
4
3
b) From (3.92), since am = aM = 1 the shaping factor is
kf2 =
1
Ma
= .
1
3
Then, from (3.98)
m2
,
3 + m2
whose maximum is achieved for the maximum value of the modulation index, i.e. m = 1 for the
conventional AM.
c) The value of η corresponding to the modulation index found in the previous point is
η=
η=
1
1
= = 25% .
3+1
4
Solution to Problem 3.26 From (3.98), being k_f^2 = 1/2, we have
\eta = \frac{m^2}{2 + m^2} .
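The efficiency formula (3.98), \eta = k_f^2 m^2 / (1 + k_f^2 m^2), recurs throughout the AM problems of this chapter; a one-line helper (Python), checked against the figures appearing in nearby solutions:

def am_efficiency(m, kf2):
    # modulation efficiency (3.98): useful power over total power
    return kf2 * m**2 / (1 + kf2 * m**2)

print(am_efficiency(1.0, 0.5))     # sinusoidal a(t), m = 1   -> 1/3
print(am_efficiency(0.6, 0.5))     # Problem 3.29: m = 0.6    -> about 0.15
print(am_efficiency(0.8, 9/50))    # Problem 3.27: m = 0.8    -> about 0.1033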
Solution to Problem 3.27
a) The spectrum S_{Tx}(f) consists of spectral lines at the carrier frequency f_0 and at f_0 \pm f_a, f_0 \pm 2f_a and f_0 \pm 5f_a (see figure).
b) Since a_M = a_m = 5 and
M_a = \frac{1}{2}\left(2^2 + 1^2 + 2^2\right) = \frac{9}{2}
we have
k_f^2 = \frac{M_a}{a_m^2} = \frac{9}{50} .
From (3.98)
\eta = \frac{(0.8)^2\, \frac{9}{50}}{1 + (0.8)^2\, \frac{9}{50}} = 10.33\% .
Solution to Problem 3.28 For a1 we have am = A1 and
2
Ma =
Tp
Z
Tp
2
A21
0
From (3.92)
kf2 =
Then from (3.98)
η=
For a2 we have am = A2 and
n
2
Ma =
Tp
Since kf2 = 1, it is
η=
For a3 we have am = A3 and
4
Ma =
Tp
Z
n
Tp
2
dt =
A21
.
3
1
.
3
A22 dt = A22 .
0
32.88% , m = 0.7
50%
,m=1.
A23
0
η=
Tp
2
!2
14% , m = 0.7
25% , m = 1 .
Z
Tp
4
Since kf2 = 31 , it is
t
n
t
Tp
4
!2
dt =
A23
.
3
14% , m = 0.7
25% , m = 1 .
Solution to Problem 3.29
a) The modulated signal can be expressed as
sTx (t) = (a(t) + A) cos (2πf0 t)
Moreover, from the figure it is 320 = aM + A and 80 = −am + A. If am = aM , we get
A=
320 + 80
= 200 V
2
Then
m=
am =
320 − 80
= 120 V
2
am
120
=
= 0.6
A
200
b)
a(t) = 120 cos(2πfa t)
where, from the figure, being Ta = 0.02 s, it is fa = 1/Ta = 50 Hz.
c) The modulated signal can be expressed as
sTx (t) = (a(t) + A) cos(2πf0 t)
where, from the figure, being T0 = 10 µs, it is f0 = 1/T0 = 100 kHz. The modulated signal can
be also written as
sTx (t) = A [1 + m cos(2πfa t)] cos(2πf0 t)
with A = 200 V, m = 0.6, fa = 50 Hz, f0 = 100 kHz.
d) The modulation efficiency is the ratio of the power of the useful term to the total power,
\eta = \frac{m^2 M_{\bar a}}{1 + m^2 M_{\bar a}} .
The normalized useful term is
ā(t) = cos(2πfa t) ,
with
Mā =
1
.
2
Then
0.62
m2
=
' 0.15 .
2
2+m
2 + 0.62
e) To get m = 0.9, from m = aAm , it is
η=
A=
120
= 133.3 V .
0.9
Solution to Problem 3.30
a) We have bc (t) = a(t) + cos(2πf0 t) and
d(t)
= k1 [a(t) + cos(2πf0 t)] + k2 a2 (t) + cos2 (2πf0 t) + 2a(t) cos(2πf0 t)
= k1 a(t) + k2 a2 (t) + k2 cos2 (2πf0 t) + k1 cos(2πf0 t) + 2k2 a(t) cos(2πf0 t)
b) The BPF extracts the components around f_0, so that its ideal frequency response must be a constant
within the band (f0 − B, f0 + B).
c) The modulated AM signal is given by
sTx (t) = [k1 + 2k2 a(t)] cos(2πf0 t) .
Then
m=
2k2 aM
.
k1
Solution to Problem 3.31
a)
sTx (t) = A [1 + mā(t)] cos(2πf0 t)
where \bar a(t) is normalized by
a_m = |\min_t a(t)| = 6 \ \mathrm{V} .
The normalized signal ā(t) can be expressed as the periodic repetition, with period T p = 0.3 ms,
ā(t) =
+∞ X
rect
n=−∞
t − Tp /2 − nTp
Tp /2
− rect
t − 3Tp /2 − nTp
Tp /2
.
b)
η=
kf2 m2
.
1 + kf2 m2
Since kf2 = 1, for m = 0.8 we have
η=
0.82
= 0.39
1 + 0.82
c) For m = 1, it is η = 0.5.
Solution to Problem 3.32
a) For the information signal a(t), it is
am = | min a(t)| = 1 ,
t
and
1
Ma =
Tp
From (3.92), it is
Z
Tp
0
|a1 (t)|2 dt = 4 .
kf2 = 4 ,
and from (3.98), for m = 1, it is
η=
4
4
= = 80% .
1+4
5
Note that this result is in contrast with the fact that the modulation efficiency in AM is usually
lower than 50%. In fact, here am 6= aM and kf > 1.
b) In this case am = 4, while
Ma =
1
Tp
Z
Tp
0
|a2 (t)|2 dt = 4 ,
and
kf2 =
Hence for m = 1
η=
1
4
1+
1
4
=
1
.
4
1
= 20% .
5
c) Since the symbols 0 and 1 can be assumed equally probable, the average efficiency is
η=
1
1
η0 + η1 = 50% ,
2
2
where η0 (η1 ) is the efficiency obtained when transmitting 0 (1), that is waveform a 2 (t) (a1 (t)).
Solution to Problem 3.33
a) The signal is symmetric with aM = am = 4 V. The modulation index is m =
b)
η=
Ma
2
A2
2
+
am
A
577
= 1.
.
Ma
2
The statistical power is given by
Ma =
=
Then η = 0.39.
1
Tp
2
Tp
Z
Z
Tp
a2 (t) dt =
0
Tp /2
3 + e−5t
0
2
Tp
2
Z
Tp /2
a2 (t) dt
0
dt = 10.29 V2 .
Solution to Problem 3.34 From (3.150) we have βP = 0.1 and from (3.151) Bs = 2 · 6 · 1.1 =
13.2 MHz. We consider as receive front-end an ideal BPF with bandwidth 13.2 MHz centered around
the carrier at f0 = 70 MHz.
a) In this case the single tone is within the BPF band, so that SIR= 40 dB.
b) In this case the single tone is outside the BPF band, so that it is suppressed by the filter and
SIR= ∞.
c) The noise is partially filtered by the BPF, so that only the power corresponding to the PSD between 70 and 76.6 MHz enters the receiver. Then
\mathrm{SIR} = \frac{10}{6.6}\cdot 10^4 \quad \text{and} \quad (\mathrm{SIR})_{dB} = 41.8 \ \mathrm{dB} .
d) Since the BPF remains unchanged, the results are the same.
Solution to Problem 3.35 The first condition is used to determine the constant K P giving the phase
deviation as a function of the information signal
KP =
1
= 2 rad/V .
0.5
Moreover, in both cases the information signal can be written as
a(t) = aM cos(2πfa t) .
a) The bandwidth of a(t) is B = fa = 500 Hz. The bandwidth of the modulated signal, according
to Carson’s rule is
Bs = 2B(1 + β) ,
where, for PM,
β = K P aM = 2 .
Hence
Bs = 2 · 500(1 + 2) = 3 kHz .
b) Here
B = 150 Hz
β = KP aM = 2 · 1.2 = 2.4 .
Hence
Bs = 2 · 150(1 + 2.4) = 1.02 kHz .
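Carson's rule, as applied in both cases, is a one-liner; a small sketch (Python) reproducing the two bandwidths:

def carson_bandwidth(B, beta):
    # Carson's rule (3.151): Bs = 2 B (1 + beta)
    return 2.0 * B * (1.0 + beta)

KP = 2.0                                   # rad/V, from the first condition above
print(carson_bandwidth(500.0, KP * 1.0))   # case a): aM = 1 V    -> 3000 Hz
print(carson_bandwidth(150.0, KP * 1.2))   # case b): aM = 1.2 V  -> 1020 Hz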
Solution to Problem 3.36
a) If sTx (t) is a PM signal, its phase deviation 100 sin 220t is proportional to the information signal,
a(t) =
1
100 sin(220t) .
KP
b) If sTx (t) is a FM signal, its frequency deviation is proportional to the information signal. Then
a(t) =
1
11 · 103
1 1
100 · 220 cos(220t) =
∆fs (t) =
cos(220t) .
KF
KF 2π
πKF
Solution to Problem 3.37
a) Writing sTx as
sTx (t) = 20 cos [2πf0 t + ϕ(t)]
6
we have f0 = 10 /2π = 159 kHz and from (1.138) and (1.144)
fs (t) = 159000 −
b) From (3.144) we have
a(t) =
c) From (3.137)
a(t) = −
5000
sin 500t .
2π
1
10 cos(500t)
KP
1 5000
sin(500t)
KF 2π
Solution to Problem 3.38
a) The instantaneous frequency is given by
f_s(t) = f_0 + \frac{1}{2\pi}\, \frac{d\Delta\varphi_s(t)}{dt} = f_0 + K_F\, a(t) = f_0 + \frac{1}{2\pi}\, 100\, a(t) ,
where K_F = \frac{100}{2\pi}. A representation of f_s(t) is given in the figure: f_s(t) varies between f_0 - \frac{500}{2\pi} and f_0 + \frac{500}{2\pi}.
b) The maximum frequency deviation is
\Delta F = K_F \max\{|a(t)|\} = \frac{100}{2\pi}\, 5 = \frac{250}{\pi} .
Solution to Problem 3.39
PM
βP = K P A a ,
which is independent of fa .
FM
KF A a
.
fa
In this case the modulation index depends on fa , which represents the bandwidth of the modulating
signal.
βF =
Solution to Problem 3.40
a)
fs,max = f0 + aM KF = 106 + 6 · 105 Hz = 1.6 MHz .
b) In PM we have
∆ϕs (t) = KP a(t) ,
so that the instantaneous frequency is
fs (t) = f0 +
Kp da(t)
,
2π dt
having maximum value
fs,max = 106 + 2 · 105 = 1.2 MHz ,
and minimum value
fs,min = 106 − 2 · 105 = 800 kHz .
c) Since aM = 1 V, the maximum instantaneous frequency is
fs,max = f0 + KF aM = 106 + 103 Hz = 1.001 MHz .
The bandwidth of a2 (t) is B = 104 Hz, so that the modulation index is
βF =
K F aM
103
= 4 = 0.1 .
B
10
Using Carson’s rule we have
Bs = 2B(1 + βF ) = 2 · 104 · 1.1 = 22 kHz .
Solution to Problem 3.41
a) The information signal can be written as
a(t) =
+∞
X
k=−∞
"
rect
t − kTp
Tp
2
!
− rect
t−
Tp
−
2
Tp
2
kTp
!#
Then, from (3.138)
fs (t) = f0 + KF
+∞
X
k=−∞
"
t − kTp
rect
Tp
2
!
t−
− rect
Tp
−
2
Tp
2
kTp
!#
b) The above expression allows to write the modulated signal sTx (t) as a repetition of period Tp of
the pulse
s(t) = A
(
t
rect
Tp
2
!
t−
cos(2π(f0 + KF )t) + rect
Tp
2
Tp
2
!
cos(2π(f0 − KF )t)
)
having Ftf (see Tabs. 1.2 and 1.3)
i
h
n
i
h
Tp
Tp
Tp −j2π(f −f0 +KF ) T2p
Tp
+
e
sinc (f − f0 − KF )
sinc (f − f0 + KF )
4
2
4
2
i
i
h
h
o
Tp
Tp
Tp
Tp −j2π(f +f0 −KF ) T2p
+
+
e
sinc (f + f0 + KF )
sinc (f + f0 − KF )
4
2
4
2
S(f ) = A
Because sTx is periodic of period Tp , it can be written as (see (1.13))
sTx =
+∞
X
`=−∞
S` ej2π`Fp t
where FP = 1/Tp and, from (1.14)
1
S
S` =
Tp
`
Tp
.
Then, denoting the carrier frequency as f0 = M/Tp and letting γ0 = KF Tp , we have
S` =
and
+∞
A X
sTx (t) =
4
`=−∞
+
`−M +γ0
A
A
` − M − γ0
` − M + γ0
−j2π
2
+
e
sinc
sinc
4
2
4
2
`+M −γ0
A
` + M + γ0
` + M − γ0
A
−j2π
2
+
sinc
+
sinc
e
4
2
4
2
` − M − γ0
sinc
2
+∞
A X
4
`=−∞
sinc
` − M + γ0
+ sinc
2
` + M + γ0
2
+ sinc
e
` + M − γ0
2
−j2π
e
`−M +γ0
2
−j2π
`+M −γ0
2
ej2π`Fp t
ej2π`Fp t
Introducing the change of index m = ` − M in the first sum and n = ` + M in the second, it is
sTx (t) =
+∞
A X
4
m=−∞
+∞
A X
+
4
sinc
n=−∞
m − γ0
2
+ sinc
n + γ0
sinc
2
m + γ0
2
n − γ0
+ sinc
2
e
−j2π
e
m+γ0
2
−j2π
n−γ0
2
ej2π(m+M )Fp t
ej2π(n−M )Fp t
Lastly, summing the terms corresponding to negative values of n to the terms corresponding to positive values of m, and recalling that sinc is an even function, we get
sTx (t) =
+∞ h
m − γ0
A X
sinc
cos(2π(m + M )Fp t)
2
2
m=−∞
+ sinc
m + γ0
2
i
(−1)m cos(2π(m + M )Fp t + πγ0 ) .
Solution to Problem 3.42
a) The modulated signal can be written as
sTx (t) = < A (1 + j2πKF sin(2πfa t)) ej2πf0 t ,
then from (1.97) and (1.136) the envelope is given by
p
A |1 + j2πKF sin(2πfa t)| = A 1 + c2 sin2 (2πfa t) ,
√
√
whose maximum value is A 1 + c2 , while the minimum value is A, to yield a ratio 1 + c2 .
b) The power of the narrowband FM signal is (see also Example 1.2 E)
MsTx =
c2
A2
1+
2
2
while the power of the un-modulated carrier is
1+
A2
.
2
,
Then the ratio is
c2
.
2
c) From the analytic signal
(a)
sTx (t) = A [1 + j2πKF sin(2πfa t)] ej2πf0 t ,
we derive
ϕsTx (t) = 2πf0 t + arctan [2πKF sin(2πfa t)]
and
fsTx (t) = f0 +
1
2πKF fa cos(2πfa t)
1 + (2πKF )2 sin2 (2πfa t)
' f0 + 2πKF fa cos(2πfa t)
under the hypothesis of narrowband.
Solution to Problem 3.43 The value of βF determines both a constraint on the bandwidth of the
modulated signal and on the required SNR. By Carson’s rule (3.151), it must be
Bs = 2B(1 + βF ) ≤ 120 kHz .
Therefore
βF ≤ 5 .
Supposing to work at the threshold level (see (3.195)),
Γth = 20(1 + βF ) .
the requirement on Λo imposes that
3 kf2 βF2 20(1 + βF ) ≥ 104 ,
so that
βF ≥ 6.6 .
Therefore, the system is limited by the bandwidth requirement and we choose
βF = 5 .
Since we selected to work with βF lower than the value in correspondence of Γth , it means the system
is working above threshold, and from the expression
Λo = 3 kf2 βF2 Γ = 104 ,
we have that
(Γ)dB = 24 dB .
Now, from (3.51), it turns out
(MsTx )dBm = (MsRc )dBm = 14.3 dBm ,
since the channel does not introduce attenuation. Without the constraint on the bandwidth, with β F =
6.6 it results (Γ)dB = (Γth )dB = 21.9 dB, and the minimum transmitted power would have been
(MsTx )dBm = 12.2 dBm ,
with a power saving of 2.1 dB, and a larger bandwidth Bs = 153 kHz.
Solution to Problem 3.44 Since the final modulation index that has to be achieved is β F = ∆F/B =
15/15 = 1, while the modulation index of the narrowband modulator is β i = ∆Fi /B = 150/15000 =
10−2 , βi it should be multiplied by a factor 100. On the other hand the carrier frequency has to be
increased from the initial value of 85 MHz to the final value of 1.7 GHz, that is it should be multiplied
by a factor 20.
As known, a “frequency multiplication” operation is the method to increase the modulation index,
increasing at the same time the carrier frequency, which is multiplied by the same factor.
Two choices are possible:
1. A multiplication of the instantaneous frequency by a factor 100, which would move the carrier
frequency up to 8.5 GHz, followed by a down-frequency conversion from 8.5 GHz to 1.7 GHz,
obtained by a mixer with a carrier frequency of 6.8 GHz. Note however that the design of the
BPF around 8.5 GHz is challenging, since it is a narrowband filter centered at high frequency.
2. A first instantaneous frequency multiplication by a factor 10, followed by a down-frequency
conversion from 850 MHz to 170 MHz, obtained by a mixer with a carrier frequency of 680 MHz,
followed by a further frequency multiplication by a factor 10.
Solution to Problem 3.45
a) According to Carson’s rule
Bs = 2B(1 + βF )
where B = f2 is the maximum frequency of a(t), while Bs is the available RF bandwidth. Hence
βF =
20 − 17.048
Bs − 2B
=
= 0.173 ,
2B
17.048
and the maximum frequency deviation is
∆F = βF B = 0.173 · 8.524 = 1.476 MHz .
b) The frequency deviation in FM is proportional to the information signal, according to
∆fs (t) = KF a(t) ,
where (see (3.150))
KF =
1.476 · 106
βF B
=
= 7.38 · 104 Hz/V .
aM
20
Therefore
σ∆f = KF
√
Ma = 0.339 MHz ,
where the statistical power of a(t) was derived from its PDF
Ma =
Z
+∞
−∞
Pa (f ) df = 2 · (8.524 − 0.564) · 106 · 1.35 · 10−6 = 21.1V2 .
Hence, the required ratio is
1.476
∆F
=
= 4.354 .
σ∆f
0.339
Solution to Problem 3.46 The output SNRs in FM and PM are, respectively, given by
Λ(FM)
= 3kf2 βF2 Γ ,
o
Λ(PM)
= kf2 βP2 Γ .
o
Then the ratio is
(FM)
Λo
(PM)
= 10 log10
3βF2
βP2
Λo
dB
The maximum phase deviation in PM is \beta_P a_M, where a_M is the maximum value of the information signal, which in this case is a_M = 9. In the FM case, the phase deviation is
∆ϕ(t) = 2πKF d(t) ,
with
d(t) =
Then
∆ϕ(t) =
Z
t
a(τ ) dτ .
−∞
h
i
KF
1
8 sin(2πfa t) + sin(10πfa t) ,
fa
5
KF
. Then, reminding that βF =
whose maximum value is 41
5 fa
get a maximum phase deviation
KF a M
B
and noting that B = 5fa , we
41 KF
41 βF B
41 βF 5
41
=
=
=
βF .
5 fa
5 aM f a
5 aM
9
Imposing the equality between the maximum phase deviations in PM and FM, we have
41
βF = 9βP
9
81
βP .
41
or
βF =
= 10 log10
Then the ratio between the SNRs is
10 log10
3βF2
βP2
= 10 log10
3βF2
βP2
3 · 812
412
= 10.7 dB .
Solution to Problem 3.47
a) We have Pb (f ) = Pa (f ) |rect(f T )|2 hence
Mb =
Z
1/(2T )
2T0 df = 2T0 /T = 1
−1/(2T )
and therefore kf2 = Mb /b2m = 1/2.
b) The bandwidth of the modulating signal is Bb = min(Ba , Bh ) = 1/(2T ) = 500 Hz and from
(3.151) the bandwidth of the modulated signal is Bs = 2Bb (1 + β) = 5.041 kHz.
c) Since the channel has a flat frequency response on the band of the modulated signal (3.193) holds
true and
Λo
Γ=
= 408.16 > Γth = 20(1 + β) = 100 ,
49/2
and
(Γ)dB = 26.09 dB .
Moreover, Teff,Rc = T0 +T0 (F−1) = T0 F if Ts = T0 . Then from (3.55) with (aCh )dB = 100 dB,
we obtain
(PTx )dBm = (Γ)dB + (aCh )dB − 114 + (F)dB + 10 log10 (Bb )MHz
= 26.09 + 100 − 114 + 7 − 33 = −13.91 dBm .
Solution to Problem 3.48 From (3.68), the output SNR is equal to the reference ratio \Gamma, so, from (3.52) it must be
\frac{M_{s_{Tx}}\, C^2}{N_0 B} = 10^{55/10}
and
M_{s_{Tx}} = \frac{10^{55/10} \cdot 2 \cdot 10^{-11} \cdot 8 \cdot 10^3}{10^{-4}} = 506 \ \mathrm{V^2} .
Solution to Problem 3.49 From (3.78) and (3.52)
MsTx = aCh N0 B Λo .
Then
(MsTx )dB = 50 + 50 + 10 log10 (2 · 10−12 · 104 ) = 23 dB [V2 ] .
Solution to Problem 3.50
a) From (3.55), with T_{eff,Rc} = T_S + T_0 (F - 1) = F T_0 (assume T_S = T_0), and B = 5 kHz, it follows that
(\Gamma)_{dB} = 13 - 60 + 114 - 3 - 10 \log_{10}(5 \cdot 10^{-3}) = 87 \ \mathrm{dB} .
Finally, from (3.115)
(Λo )dB = 87 + 10 log10 (0.8) = 86 dB .
b) For FM, we can have a maximum modulation index (according to the Carson’s rule (3.151))
Bs
− 1 = 10 − 1 = 9
2B
βF =
using the whole channel bandwidth. Then, from (3.193)
(Λo )dB = 10 log10 (3 · 0.5 · 81) + 87 = 20.84 + 87 = 107.8 dB .
Solution to Problem 3.51 We have
Ma =
Z
+∞
Pa (f ) df = K B
−∞
and from (3.77)
MsTx =
KB
.
4
From (3.78) and (3.52)
Λo =
KB
4
N0 BaCh
.
Solution to Problem 3.52 Since the receiver is a standard DSB-SC receiver, we have
Λo = Γ
where \Gamma is given by (3.52). In this case we can consider no channel attenuation, a_{Ch} = 1, while the transmitted signal (statistical) power is given by
M_{s_{Tx}} = \int_{-\infty}^{+\infty} P_{s_{Tx}}(f)\, df = 4\, \frac{BK}{2} = 2KB .
Then
\Lambda_o = \frac{2KB}{N_0 B} = \frac{2K}{N_0} .
Solution to Problem 3.53
a) From (3.52)
Γ=
MsTx
104
= −12
= 2 · 103
N0 BaCh
10
· 5 · 103 · 109
=⇒
(Γ)dB = 33 dB
Note that kf2 = Mā = 0.1. From (3.98)
η=
(0.8)2 · 0.1
= 6% .
1 + (0.8)2 · 0.1
From (3.126)
(Λo )dB = 33 + 10 log10 (0.06) = 20.79 dB .
b) From the available channel bandwidth Bs = 100 kHz and (3.151),
βF =
Bs
−1=9.
2B
Then, from (3.193)
(Λo )dB = 10 log10 (3 · 0.1 · 81) + 33 = 13.85 + 33 = 46.85 dB .
Solution to Problem 3.54
a) Since the information signal is sinusoidal, kf2 = 1/2. Moreover, the useful signal at the output of
the filter hRc coincides with the transmitted (modulated) signal, then
MsTx =
i
10−6 h
1
A2 1 + · 0.52 = 5.6 · 10−7 V2 .
1 + kf2 m2 =
2
2
2
The noise has statistical power
MwMi
Z
+∞
N0
=
|HRc (f )|2 df
2 −∞
Z 500 N0
N0
f 2
· 4 kHz + 4
=
df
2
2 0
500
N0 500
N0
· 4 kHz + 4
= 4.66 · 10−9 V2 .
=
2
2 3
b) From (3.98) we have
η=
1
2
1+
Moreover, from (3.52) with B = 1 kHz, it is
(Γ)dB = 10 log10
·
1
2
1
4
·
1
4
=
1
.
9
5.6 · 10−7
2 · 10−12 · 103
= 24.5 dB .
Then, from (3.115), we have
(Λo )dB = 10 log10
1
9
+ (Γ)dB = −9.54 + 24.5 = 15 dB .
Solution to Problem 3.55
a) For SSB+ we have
\Lambda_o = \Gamma
where from (3.52)
\Gamma = \frac{M_{s_{Tx}}}{N_0 B a_{Ch}} ,
so that
(M_{s_{Tx}})_{dB} = \Lambda_o + 10 \log(10^{-14}) + 10 \log(1.5 \cdot 10^6) + 90 = 30 - 140 + 61.76 + 90 \simeq 41.76 \ \mathrm{dBV^2} .
b) For DSB-TC we have
Λo = ηΓ
with
kf2 =
Ma
=
a2m
and
η=
Z
1
u2 pa (u)du = 2
−1
Z
1
0
1
1
u2 du = ,
2
3
m2 kf2
0.52 13
1
=
=
.
2
2
13
1 + m kf
1 + 0.52 31
Then
MsTx = ΓN0 BaCh =
Λo
N0 BaCh
η
and
(MsTx )dB
= (Λo )dB − 10 log(η) + 10 log(10−14 ) + 10 log(1.5 · 106 ) + 90
= 30 + 11.14 − 140 + 61.76 + 90 ' 52.9 dBV2
c) Same as SSB+ since, also for DSB-SC, we have
Λo = Γ .
Solution to Problem 3.56
a) From (3.200)
fLO = f0 + fIF = 96.9 + 10.7 = 107.6MHz .
b) RF filter: center frequency f0 = 96.9 MHz; bandwidth ' 4fIF = 42.8 MHz. IF filter: center
frequency 10.7 MHz; bandwidth 200 kHz.
c) Image at carrier frequency f0 + 2fIF = 96.9 + 21.4 = 118.3 MHz.
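The superheterodyne relations used here (local oscillator above the carrier) are easily tabulated; a minimal sketch (Python):

f0, fIF = 96.9, 10.7        # carrier and intermediate frequency [MHz]

fLO     = f0 + fIF          # local oscillator ("LO above the carrier")   -> 107.6 MHz
f_image = f0 + 2 * fIF      # image channel                               -> 118.3 MHz
rf_bw   = 4 * fIF           # indicative RF-filter bandwidth used above   -> 42.8 MHz
print(fLO, f_image, rf_bw)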
Solution to Problem 3.57
b) Filters:
RF filter:
IF filter 1:
IF filter 2:
Center frequency f0 = 146.7 MHz,
0
= 10.7 MHz,
Center frequency fIF
00
Center frequency fIF
= 455 kHz,
Bandwidth ' 4fIF = 42.8 MHz.
Bandwidth 200 kHz.
Bandwidth 200 kHz.
Local oscillators:
0
0
fLO
= f0 + fIF
= 146.7 + 10.7 = 157.4 MHz
00
fLO
0
00
= fIF
− fIF
= 10.7 MHz − 455 kHz = 10.245 MHz .
d) Image channel at carrier frequency
0
f0 + 2fIF
= 146.7 + 21.4 = 168.1 MHz .
The second conversion has no image channels at the input of IF filter 2. In fact, the IF filter 1 has
already selected the station band of interest.
Solution to Problem 3.58
a)
fLO = f0,video + fIF,video = 83.25 + 45.75 = 129 MHz
fLO = f0,audio + fIF,audio = 87.75 + 41.25 = 129 MHz .
b) The image is at the carrier frequency
fim = f0,video + 2fIF,video = 83.25 + 2 · 45.75 = 174.75 MHz ,
so that channel 7 represents the image for channel 6.
c) Requiring an IF audio carrier frequency lower than the video IF carrier frequency, the only choice is "LO above the carrier"; otherwise, with "LO below the carrier", the audio carrier frequency at IF would again be 4.5 MHz above the video carrier frequency.
8.4 SOLUTIONS TO PROBLEMS OF Chapter 4
Solution to Problem 4.1
a) We have
\langle x(t), y(t) \rangle = \int_0^{+\infty} e^{-t}\, e^{-4t}\, dt = \int_0^{+\infty} e^{-5t}\, dt = \frac{1}{5} .
b) We have
\langle x(t), y(t) \rangle = \int_0^{+\infty} e^{-(2+j3)t}\, e^{-(2-j3)t}\, dt = \int_0^{+\infty} e^{-4t}\, dt = \frac{1}{4} .
Solution to Problem 4.2 By following the Gram-Schmidt procedure, firstly we set φ 01 (t) = s1 (t) and
determine the energy of φ01 (t). We have
E1 =
Z
3
0
|s1 (t)|2 dt = 4A2 · 2 + A2 · 1 = 9 A2 ,
so that
s1 (t)
φ1 (t) =
=
3A
(2
3
1
3
0
,0≤t<2
,2≤t<3
, otherwise
Next, we need to calculate the inner product
c2,1
1
= hs2 (t), φ1 (t)i =
3A
Z
3
s2 (t) s1 (t) dt =
0
4A2 + 2A2 + A2
= 37 A ,
3A
from which we have
φ02 (t)
= s2 (t) − c2,1 φ1 (t) = s2 (t) −
with energy
E20
and so
7
9
=
Z
3
0
 4

 + 95 A
,0≤t<1
,1≤t<2
,2≤t<3
, otherwise
−9A
s1 (t) =
2

+9A
0
|φ02 (t)|2 dt = ( 94 A)2 + (− 59 A)2 + ( 92 A)2 =
 √4
+

 355
φ02 (t)
,0≤t<1
,1≤t<2
,2≤t<3
, otherwise.
− 3 √5
=
φ2 (t) = p

E20
 + 3√2 5
0
The signal vector representation is thus
h
s1 = 3A , 0
i
s2 =
h
5 2
A
9
√
7
A,
3
5
A
3
i
.
We incidentally note that a simpler basis can be identified by observing the signal representations.
An alternative orthonormal basis can be defined as
φ1 (t) =
( √2
,0≤t<1
,2≤t<3
, otherwise
5
√1
5
0
for which
s1 =
h√
5 A , 2A
φ2 (t) =
i
s2 =
h√
n
1
0
,1≤t<2
, otherwise
i
5 A, A .
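The Gram-Schmidt procedure used here (and in the following problems) can be checked numerically on sampled waveforms; a sketch (Python with NumPy) which, for A = 1, reproduces the vector representation found above:

import numpy as np

def gram_schmidt(signals, dt):
    """signals: list of sampled waveforms on a common time grid; dt: sample spacing.
    Returns the orthonormal basis and the coefficient (vector) representation."""
    basis = []
    for s in signals:
        r = s.astype(float).copy()
        for phi in basis:
            r -= np.sum(r * phi) * dt * phi      # remove projections on previous basis functions
        E = np.sum(r * r) * dt                   # residual energy
        if E > 1e-12:
            basis.append(r / np.sqrt(E))
    coeffs = [[np.sum(s * phi) * dt for phi in basis] for s in signals]
    return basis, coeffs

# the two waveforms of this problem with A = 1, sampled on [0, 3)
t  = np.arange(0, 3, 0.001)
s1 = np.where(t < 2, 2.0, 1.0)
s2 = np.where(t < 1, 2.0, 1.0)
basis, coeffs = gram_schmidt([s1, s2], 0.001)
print(np.round(coeffs, 3))   # -> [[3, 0], [7/3, sqrt(5)/3]]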
Solution to Problem 4.3 We follow the Gram-Schmidt procedure. For the first signal we have
E1 = 1
=⇒
φ1 (t) = s1 (t) = rect(t) .
For the second signal the inner product of interest is
c2,1 = hs2 (t), φ1 (t)i =
Z
|t| rect(t) dt =
1
4
,
and so
φ02 (t) = s2 (t) − c2,1 φ1 (t) = (|t| − 14 ) rect(t) .
By calculating the energy we have
E20 =
1
48
=⇒
φ2 (t) =
√
3 (4|t| − 1) rect(t) .
For the third signal of the basis the inner products of interest are
c3,1 = hs3 (t), φ1 (t)i = 0
c3,2 = hs3 (t), φ2 (t)i = 0
being inner products between an odd symmetric signal and an even symmetric signal. We thus have
√
1
=⇒ φ3 (t) = 2 3 t rect(t) .
E3 = 12
The corresponding signal vector representation is
h
i
s1 = 1 , 0 , 0 , s 2 =
h
1
4
1
√
,
4 3
i
h
, 0 , s3 = 0 , 0 ,
1
√
2 3
i
.
Solution to Problem 4.4 We follow the Gram-Schmidt procedure. For the first signal we have
E1 = 1
=⇒
φ1 (t) = s1 (t) = rect(t) .
For the second signal the inner product of interest is
c2,1 = hs2 (t), φ1 (t)i =
Z
sgn(t) rect(t) dt = 0 ,
and s2 is thus orthogonal to φ1 . We obtain
E2 = 1
=⇒
φ2 (t) = s2 (t) = sgn(t) rect(t) .
For the third signal of the basis, the inner products of interest are
c3,1 = hs3 (t), φ1 (t)i =
1
2
c3,2 = hs3 (t), φ2 (t)i = 0
,
and so we obtain
φ03 (t) = s3 (t) − c3,1 φ1 (t) − c3,2 φ2 (t)
= (1 − 2|t|) rect(t) −
1
2
rect(t) = ( 21 − 2|t|) rect(t) .
Finally, the third element of the basis becomes
E30 =
1
12
=⇒
φ3 (t) =
√
3 (1 − 4 |t|) rect(t) ,
and the signal vector representation is
h
i
h
i
s1 = 1 , 0 , 0 , s2 = 0 , 1 , 0 , s3 =
Note that for the energy of the waveforms we have
E1 = 1 , E 2 = 1 , E 3 =
1
3
h
.
1
2
, 0,
1
√
2 3
i
.
Solution to Problem 4.5 It is immediate to verify that the signal set
\varphi_1(t) = 1 \ \text{for} \ 0 \le t < 1 \ \text{(0 otherwise)} , \quad \varphi_2(t) = 1 \ \text{for} \ 1 \le t < 2 \ \text{(0 otherwise)} , \quad \varphi_3(t) = \frac{1}{\sqrt{2}} \ \text{for} \ 2 \le t < 4 \ \text{(0 otherwise)}
is orthonormal. The vector representation of signals then simply follows as
s_1 = \left[2, 2, 0\right] , \quad s_2 = \left[0, 1, \sqrt{2}\right] , \quad s_3 = \left[3, 3, 3\sqrt{2}\right] .
Solution to Problem 4.6 Since the signals are linearly independent, as we let the reader verify, we have I = M = 4. Thus, we can set a very simple basis
\varphi_i(t) = 1 \ \text{for} \ i-1 \le t < i \ \text{(0 otherwise)} , \qquad i = 1, 2, 3, 4
for which the signal vector representation is
s_1 = [2, -1, -1, 0] , \quad s_2 = [1, 1, 1, 1] , \quad s_3 = [1, -1, 1, 1] , \quad s_4 = [0, 0, 2, 2] .
Concerning the energies we have
E_1 = 6 , \quad E_2 = 4 , \quad E_3 = 4 , \quad E_4 = 8 ,
while the relative distances can be evaluated from (4.32), giving
d_{1,2} = \sqrt{6 + 4 - 2 \cdot 0} = \sqrt{10}
d_{1,3} = \sqrt{6 + 4 - 2 \cdot 2} = \sqrt{6}
d_{1,4} = \sqrt{6 + 8 - 2 \cdot (-2)} = \sqrt{18}
d_{2,3} = \sqrt{4 + 4 - 2 \cdot 2} = 2
d_{2,4} = \sqrt{4 + 8 - 2 \cdot 4} = 2
d_{3,4} = \sqrt{4 + 8 - 2 \cdot 4} = 2 .
Solution to Problem 4.7 The signals are all linear combinations of h(t), so that I = 1 and we can set
h(t)
φ1 (t) = √
,
Eh
Eh =
Z
T
0
|h(t)|2 dt .
The signal vector representation is thus
i
h √ i h √
sn = An Eh = A Eh (2n − M − 1)
with distances (from (4.32))
p
En + Em − 2hsn , sm i
p
√
= Eh A2n + A2m − 2 An Am
√
√
= Eh |An − Am | = 2A Eh |n − m|
dn,m =
so that the signals at largest distance are s1 and sM .
Solution to Problem 4.8 We can exploit the results of Example 4.1 E to write the signals as
sn (t) = An cos(ϕn ) cos(2πf0 t) − An sin(ϕn ) sin(2πf0 t) ,
0≤t<T .
By further considering that (4.21) is satisfied since f0 T 1, we can use the basis
φ1 (t) =
φ2 (t) =
r
r
2
cos(2πf0 t) ,
T
0≤t<T
2
sin(2πf0 t) ,
T
0≤t<T
and so the corresponding signal vector representation is
sn =
hr
T
An cos(ϕn ) , −
2
r
i
T
An sin(ϕn ) .
2
Concerning the distances, from (4.32) we have
dn,m =
giving
p
En + Em − 2hsn , sm i =
√
T ,
√
' 0.94 T ,
√
' 3.11 T .
r
T p 2
An + A2m − 2An Am cos(ϕn − ϕm )
2
√
T ,
√
' 2.76 T ,
d1,2 ' 3.27
d1,3 ' 3.86
d2,3
d2,4
d3,4
d1,4 ' 1.11
√
T ,
Solution to Problem 4.9 We first identify an orthonormal basis. Since s1 (t) and s2 (t) are orthogonal,
we immediately have
E1 = 1 =⇒ φ1 (t) = s1 (t)
E2 = 1
=⇒
φ2 (t) = s2 (t)
We now need to project the signal s(t) onto the basis. From (4.40) we find
c1 = hs(t), φ1 (t)i =
c2 = hs(t), φ2 (t)i =
Z
Z
1
2
t dt +
0
1
t dt =
0
Z
1
2
1
1
2
(−t) dt = − 14
and so, from (4.34) we obtain
ŝ(t) = c1 φ1 (t) + c2 φ2 (t) =
(1
, 0 ≤ t < 21
, 21 ≤ t < 1
, otherwise
4
3
4
0
while the error is
e(t) = s(t) − ŝ(t) =
(
t−
t−
0
, 0 ≤ t < 12
, 21 ≤ t < 1
, otherwise
1
4
3
4
Thus, from (4.43) we have
Es =
1
3
Eŝ =
5
16
1
48
Ee = Es − Eŝ =
=⇒
.
Finally, the ratio between the error energy and the signal energy is
Ee
1
=
= 0.625 = 6.25 % .
Es
16
Solution to Problem 4.10 We first determine the correct values Ai that guarantee the orthonormality
of the basis. For the energies of φi (t) we have
E1 = A21
=⇒
A1 = 1
for the first signal of the basis, while for i = 2, 3, 4, 5 we have
Ei =
Z
1
+2
A2i sin2 (2πfi t) dt =
−1
2
1
2
A2i
=⇒
Ai =
√
2.
Then, the projection coefficients (4.40) become
c1 = hs(t), φ1 (t)i =
Z
+1
2
−1
2
1
1
e−t dt = e 2 − e− 2 ,
and
ci = hs(t), φi (t)i =
√
2
Z
+1
2
e−t sin(2πfi t) dt =
√
1
−2
1
1
2 (−1)fi e 2 − e− 2
2πfi
1 + (2πfi )2
so it results
1
2
ŝ(t) = e − e
−1
2
"
1+2
5
X
(−1)
fi
i=2
#
2πfi
sin(2πfi t) rect(t) .
1 + (2πfi )2
With respects to the error energy, from the energy of the projected signal
Eŝ =
5
X
i=1
2
1
2
|ci | = e − e
1
−2
2
"
1+2
5
X
i=2
(2πfi )2
(1 + (2πfi )2 )2
#
and the original signal energy
E_s = \int_{-1/2}^{+1/2} e^{-2t}\, dt = \frac{1}{2}\left(e - e^{-1}\right)
it results
\frac{E_e}{E_s} = \frac{E_s - E_{\hat s}}{E_s} \simeq 0.50 = 50\% .
The plot of s(t) and \hat s(t) is shown below (both over the interval -1/2 \le t \le 1/2).
Solution to Problem 4.11 We first note that the signals si (t), i = 1, 2, 3, 4, are orthogonal, all with
energy Ei = 8, so that the signal basis is
si (t)
φi (t) = √ , i = 1, 2, 3, 4 .
8
Concerning the projection coefficients (4.40), since s(t) = sin(2πf 0 t), with f0 = 1/8, is a sinusoid
periodic of period Tp = 1/f0 = 8, we have
Z
and
Z
s(t)φ1 (t) =
8
Z
s(t)φ3 (t) =
2
s(t)φ2 (t) = √
8
0
So, the projected signal (4.34) is
Z
4
0
Z
s(t)φ4 (t) = 0
√
4 2
sin(πt/4) dt =
.
π
( 2
√
+π
4 2
ŝ(t) =
φ2 (t) = − π2
π
0
,0≤t<4
,4≤t<8
, otherwise
with energy Eŝ = 32/π 2 . For the error energy, from (4.43) we finally have
Ee = Es − Eŝ = 4 −
32
' 0.75 .
π2
Solution to Problem 4.12 Being A the minimum distance, the coordinates of points for constellation
a) are of the form (±A, ±A), (0, ±A) and (±A, 0). The resulting average energy is thus (see (4.69)
with pn = 81 )
4 (2A2 ) + 2 (A2 ) + 2 (A2 )
= 32 A2 = 1.5 A2 .
8
For constellation b), the coordinates are of the form (± 21 A, 0) for the points on the abscissa, while for
√
√
the other points we have (0, ± 23 A) and (±A, ± 23 A). So, the average energy is
Es =
Es =
2 ( 14 A2 ) + 2 ( 43 A2 ) + 4 (A2 + 43 A2 )
=
8
9
8
A2 = 1.125 A2 .
In conclusion, under a minimum distance constraint, constellation b) is more efficient in terms of
energy.
Solution to Problem 4.13 Being A the minimum distance between points, the coordinates of points
for constellation a) are of the form (± 21 A, ± 12 A), (± 32 A, ± 21 A), (± 12 A, ± 32 A), (± 32 A, ± 32 A). So,
from (4.69), with pn = 81 , we obtain
4 ( 14 A2 + 14 A2 ) + 4 ( 94 A2 + 14 A2 ) + 4 ( 41 A2 + 94 A2 ) + 4 ( 94 A2 + 94 A2 )
16
= 52 A2 = 2.5 A2 .
Es =
π
For constellation b), since the distance between two successive points is d = 2r sin( 16
), with r the
π
radius, we have r = A/[2 sin( 16 )]. Moreover, the average energy becomes
Es =
16 r 2
A2
=
' 6.57 A2 .
π
16
4 sin2 ( 16
)
So, under a minimum distance constraint, constellation a) is more efficient in terms of energy.
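The average-energy comparisons of Problems 4.12 and 4.13 amount to computing (4.69) over the point coordinates; a sketch (Python) for the two 16-point constellations of Problem 4.13, with unit minimum distance:

import math

# constellation a): 4x4 square grid with spacing 1 (minimum distance A = 1)
grid = [(x/2, y/2) for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)]
Es_a = sum(x*x + y*y for x, y in grid) / len(grid)

# constellation b): 16 points equally spaced on a circle, chord between neighbours = 1
r = 1.0 / (2 * math.sin(math.pi / 16))
Es_b = r * r

print(Es_a, Es_b)   # -> 2.5 and about 6.57, as found above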
Solution to Problem 4.14 The constellation diagrams are shown in the figure: (a) QPSK, (b) 8-PSK, (c) 16-QAM, (d) 32-QAM, (e) 64-QAM, (f) 8-QAM, (g) 8-QAM, (h) V.29, (i) 8-QAM.
Solution to Problem 4.15
a) Since s1 (t) and s2 (t) are orthogonal, while s3 (t) is linearly dependent on s2 (t), the dimension of
the signal space is I = 2. Moreover, we can set
s1 (t)
φ1 (t) = √ ,
T
s2 (t)
φ2 (t) = √
T
so that the waveform constellation is
h√
i
h √ i
s1 =
T,0 ,
s2 = 0 , T ,
h
√ i
s3 = 0 , − T .
b) The resulting optimum decision regions are the minimum distance regions illustrated in the picture below (the points s_1 = (\sqrt{T}, 0), s_2 = (0, \sqrt{T}) and s_3 = (0, -\sqrt{T}) in the (\varphi_1, \varphi_2) plane, with their regions R_1, R_2, R_3). From the figure note that s_1(t) is the waveform most sensitive to noise: by overlapping the three reference points s_1, s_2, s_3 with their relative decision regions, it is seen that the decision region R_1 is included in both R_2 and R_3. Hence, it is more likely to exit R_1 given s_1 than in the other two cases.
Solution to Problem 4.16
a) The minimum distance decision regions are simply
R1 = (−∞, −2V0 ) R2 = [−2V0 , 0) R3 = [0, 2V0 ) R4 = [2V0 , +∞) .
We then evaluate the probability of correct decision through (4.61) and (4.73). Because of (4.53),
we have
P [r ∈ R1 |a0 = 1] = P [s1 + w < −2V0 ] = P [w < V0 ]
= 1 − Q (V0 /σI ) = 1 − Q (2)
and
P [r ∈ R2 |a0 = 2] = P [−2V0 ≤ s2 + w < 0] = P [−V0 ≤ w < V0 ]
= 1 − 2 Q (V0 /σI ) = 1 − 2 Q (2)
while the remaining terms can be evaluated by symmetry to give
P [r ∈ R3 |a0 = 3] = 1 − 2 Q (2) ,
P [r ∈ R4 |a0 = 4] = 1 − Q (2) .
By combining the results through (4.61), we finally have
3
3
1
1
P [C] = 8 (1 − Q (2)) + 8 (1 − 2 Q (2)) + 8 (1 − 2 Q (2)) + 8 (1 − Q (2))
= 1 − 74 Q (2)
and Pe = 74 Q (2) ' 3.981 · 10−2 .
b) The optimum decision regions can be evaluated from (4.81). In the present context it is easier to
identify the boundaries between decision regions. Because of the system symmetry, we have
R1 = (−∞, −v) R2 = [−v, 0) R3 = [0, v) R4 = [v, +∞)
where v must be evaluated. Specifically, we have that v is the coordinate such that D(v; 3) =
D(v; 4), that is
3
8
1
p
2πσI2
exp
−
(v − V0 )2
2σI2
=
1
8
1
p
2πσI2
exp
−
(v − 3V0 )2
2σI2
which gives
v = 2+
1
8
ln 3
V0 ' 2.55 V0 .
For the probability of error, by the above procedure we obtain
P [r ∈ R1 |a0 = 1] = P [r ∈ R4 |a0 = 4] = P [w > 3V0 − v]
=1−Q 2−
1
4
ln 3
P [r ∈ R2 |a0 = 2] = P [r ∈ R3 |a0 = 3] = P [V0 − v ≤ w < V0 ]
= 1 − Q (2) − Q 2 +
1
4
ln 3
so that
Pe =
1
4
Q 2−
1
4
ln 3 +
3
4
Q (2) +
3
4
Q 2+
1
4
ln 3 ' 3.6218 · 10−2
which is slightly lower than Pe obtained with minimum distance regions.
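The error probabilities in this and the following problems reduce to evaluations of the Q function, Q(x) = ½ erfc(x/√2); a sketch (Python) reproducing the two figures of this problem (recall that here σ_I = V_0/2, as implied by the Q(2) terms above):

import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

# a) minimum-distance regions
Pe_md = 7.0/4.0 * Q(2.0)
# b) MAP regions: the Q arguments are shifted by (1/4) ln 3, as derived above
d = 0.25 * math.log(3)
Pe_map = 0.25*Q(2 - d) + 0.75*Q(2.0) + 0.75*Q(2 + d)
print(Pe_md, Pe_map)   # -> about 3.98e-2 and 3.62e-2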
Solution to Problem 4.17
a) The decision regions are
R1 = (−∞, −(1/3)V0) ,  R2 = [−(1/3)V0, (1/3)V0) ,  R3 = [(1/3)V0, +∞) .
We then evaluate the probability of correct decision by (4.61). From (4.53) and σI² = (2/9)V0², we
have
P[r ∈ R1 | a0 = 1] = P[s1 + w < −(1/3)V0] = P[w < (2/3)V0] = 1 − Q((2/3)V0/σI) = 1 − Q(√2)
and
P[r ∈ R2 | a0 = 2] = P[−(1/3)V0 < s2 + w < (1/3)V0] = P[−(1/3)V0 < w < (1/3)V0]
= 1 − 2 Q((1/3)V0/σI) = 1 − 2 Q(√(1/2))
and also P[r ∈ R3 | a0 = 3] = P[r ∈ R1 | a0 = 1] because of the system symmetry. By combining
the results through (4.61), we finally have
P[C] = 1 − (1/2) Q(√2) − Q(√(1/2)) ,
and
Pe = (1/2) Q(√2) + Q(√(1/2)) ≈ 0.279 .
b) The optimum decision regions can be evaluated from (4.81). In the present context it is easier to
identify the boundaries between decision regions. Because of the system symmetry, we have
R1 = (−∞, −v) ,  R2 = [−v, v) ,  R3 = [v, +∞)
where v must be evaluated. Specifically, we have that v is the coordinate such that D(v; 2) = D(v; 3),
that is
(1/2) (1/√(2πσI²)) exp(−v²/(2σI²)) = (1/4) (1/√(2πσI²)) exp(−(v − V0)²/(2σI²))
which gives
v = (1/2 + (2/9) ln 2) V0 ≈ 0.65 V0 .
For the probability of error, by the above procedure we obtain
P[r ∈ R1 | a0 = 1] = P[r ∈ R3 | a0 = 3] = P[w < V0 − v] = 1 − Q(3/(2√2) − (√2/3) ln 2)
P[r ∈ R2 | a0 = 2] = P[−v ≤ w < v] = 1 − 2 Q(3/(2√2) + (√2/3) ln 2)
so that
Pe = (1/2) Q(3/(2√2) − (√2/3) ln 2) + Q(3/(2√2) + (√2/3) ln 2) ≈ 0.198 .
Solution to Problem 4.18
a) We begin by evaluating the probability of correct decision through (4.61). From (4.53), we have
P[r ∈ R1 | a0 = 1] = P[s1 + w < (1/5)A] = P[w < (6/5)A] = 1 − Q((6/5)A/σI) = 1 − Q(6)
and
P[r ∈ R2 | a0 = 2] = P[s2 + w > (1/5)A] = P[w > −(4/5)A] = 1 − Q((4/5)A/σI) = 1 − Q(4)
so that
P[C] = 1 − (1/2) Q(6) − (1/2) Q(4) ,
and
Pe = (1/2) Q(6) + (1/2) Q(4) ≈ 1.58 · 10⁻⁵ .
b) The optimum decision regions can be evaluated from (4.81). In the present context it is easier to
identify the boundary v. Specifically, we have that v is the coordinate such that D(v; 1) = D(v; 2),
that is
p1 (1/√(2πσI²)) exp(−(v + A)²/(2σI²)) = (1 − p1) (1/√(2πσI²)) exp(−(v − A)²/(2σI²))
where we also exploited the fact that p2 = 1 − p1. By then substituting v = A/5, we obtain
p1 = 1/(1 + e⁻¹⁰) ≈ 1 ,   p2 = 1/(1 + e¹⁰) ≈ 4.54 · 10⁻⁵ .
Solution to Problem 4.19 We preliminarily note that the signals are linearly dependent, hence I = 1
and the basis function is given by (for its energy evaluation see Example 4.1 E)
φ1(t) = √(2/T) sin(2πf0 t) rect(t/T)
and so the waveform vector representation is
s1 = V0 √(T/2) ,   s2 = 0 ,   s3 = −V0 √(T/2) .
We also note that σI² = N0/2 can be written as σI² = (1/10) V0² T.
a) The minimum distance regions are given by
R1 = [v, +∞) ,   R2 = [−v, v) ,   R3 = (−∞, −v) ,
where v = (1/2) s1 = V0 √(T/8). For evaluating the probability of error, we begin by evaluating the
probability of correct decision through (4.61). From (4.53), we have
P[r ∈ R1 | a0 = 1] = P[s1 + w > v] = P[w > v − s1] = 1 − Q(√(5/4))
and
P[r ∈ R2 | a0 = 2] = P[−v ≤ s2 + w < v] = P[|w| < v] = 1 − 2 Q(√(5/4))
while P[r ∈ R3 | a0 = 3] = P[r ∈ R1 | a0 = 1] because of the system symmetry. In conclusion,
by combining the results through (4.61), we obtain
Pe = 1 − P[C] = (3/2) Q(√(5/4)) ≈ 0.198 .
b) We determine the optimum decision regions by the optimum threshold v. Specifically, we have
that v is the coordinate such that D(v; 1) = D(v; 2), that is
(1/4) (1/√(2πσI²)) exp(−(v − V0√(T/2))²/(2σI²)) = (1/2) (1/√(2πσI²)) exp(−v²/(2σI²))
giving
v = (1/2) V0 √(T/2) + (σI² ln 2)/(V0 √(T/2)) = (1/2 + (1/5) ln 2) V0 √(T/2) .
The probabilities needed in the evaluation of the error probability are
P[r ∈ R1 | a0 = 1] = P[r ∈ R3 | a0 = 3] = P[w > v − s1] = 1 − Q(√5/2 − (ln 2)/√5)
and
P[r ∈ R2 | a0 = 2] = P[|w| < v] = 1 − 2 Q(√5/2 + (ln 2)/√5)
giving
Pe = (1/2) Q(√5/2 − (ln 2)/√5) + Q(√5/2 + (ln 2)/√5) ≈ 0.181 .
Solution to Problem 4.20 Being the modulation antipodal, with average signaling energy
Es = A² ∫ from −T/4 to 3T/4 of e^(−2|t|/T) dt = (1/2) A² T [2 − e^(−1/2) − e^(−3/2)] ≈ 0.585 A² T ,
the waveform constellation is given by (4.129) and the optimum decision regions by (4.130). So the
threshold at 0 identifies optimum decision regions for which the corresponding error probability is, from
(4.131),
Pbit = Q( √( A² T [2 − e^(−1/2) − e^(−3/2)] / (A² T/4) ) ) = Q( √( 4 [2 − e^(−1/2) − e^(−3/2)] ) ) ≈ 0.0152 .
Solution to Problem 4.21 In this problem, the error is additive but not Gaussian. This assures that
(4.76) is valid, provided that we use the correct expression for the conditional probability p_r|a0(ρ|n).
Using (4.53), we have
D(ρ; n) = p_r|a0(ρ|n) pn = pw(ρ − sn) pn = (1/√(2σI²)) exp(−√2 |ρ − sn| / σI) pn .
Now, the functions D(ρ; n) are centered in sn = ±V0 and exponentially decaying. This assures that
the optimum decision regions are compact intervals of the form
R1 = [v, +∞) ,   R2 = (−∞, v)
with v the decision threshold. This can be evaluated by setting D(v; 1) = D(v; 2), to obtain
(2/5) (1/√(2σI²)) exp(−√2 |v − V0| / σI) = (3/5) (1/√(2σI²)) exp(−√2 |v + V0| / σI)
and, by considering that it is −V0 < v < V0, we obtain
v = (σI/(2√2)) ln(3/2) = (ln(3/2)/(16√2)) V0 ≈ 0.018 V0 .
For the probability of error, we begin by evaluating the probability of correct decision through (4.61).
From (4.53), we have
P[r ∈ R1 | a0 = 1] = P[s1 + w > v] = P[w > v − s1]
= ∫ from v−V0 to +∞ of pw(u) du = 1 − ∫ from −∞ to v−V0 of pw(u) du
= 1 − (1/2) exp(√2 (v − V0)/σI) = 1 − (1/2) exp((1/2) ln(3/2) − 8√2)
and
P[r ∈ R2 | a0 = 2] = P[s2 + w < v] = P[w < v + V0]
= ∫ from −∞ to v+V0 of pw(u) du = 1 − ∫ from v+V0 to +∞ of pw(u) du
= 1 − (1/2) exp(−√2 (v + V0)/σI) = 1 − (1/2) exp(−(1/2) ln(3/2) − 8√2)
so that the probability of error is
Pe = (1/5) exp((1/2) ln(3/2) − 8√2) + (3/10) exp(−(1/2) ln(3/2) − 8√2) ≈ 6 · 10⁻⁶ .
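The figures above can also be checked numerically. The sketch below (not part of the original solution) assumes V0 = 1 and σI = V0/8, the noise level implied by the exponent 8√2 above, and the priors 2/5 and 3/5 of the two waveforms.

from math import exp, log, sqrt

V0, sigma = 1.0, 1.0 / 8        # assumed: V0 = 1, sigma_I = V0/8
p1, p2 = 2/5, 3/5               # priors of s1 = +V0 and s2 = -V0

def tail(x, s):
    """P[w > x] for zero-mean Laplacian noise with standard deviation s."""
    b = sqrt(2) / s
    return 0.5 * exp(-b * x) if x >= 0 else 1.0 - 0.5 * exp(b * x)

# MAP threshold between the two exponentially decaying metrics
v = sigma / (2 * sqrt(2)) * log(p2 / p1)
print(v)                                    # ~0.018 V0

# P[error|s1] = P[w < v - V0],  P[error|s2] = P[w > v + V0]
Pe = p1 * (1 - tail(v - V0, sigma)) + p2 * tail(v + V0, sigma)
print(Pe)                                   # ~6e-6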
Solution to Problem 4.22 As for Problem 4.21, the noise here is Laplacian with PDF
pw(u) = (1/√(2σI²)) exp(−√2 |u|/σI)
where σI = V0/3. For obtaining the probability of error, we evaluate the probability of correct decision
through (4.61). From (4.53), we have
P[r ∈ R1 | a0 = 1] = P[s1 + w < 0] = P[w < −s1]
= ∫ from −∞ to V0 of pw(u) du = 1 − ∫ from V0 to +∞ of pw(u) du
= 1 − (1/2) exp(−√2 V0/σI) = 1 − (1/2) exp(−3√2)
and
P[r ∈ R2 | a0 = 2] = P[s2 + w > 0] = P[w > −s2] = ∫ from −V0 to +∞ of pw(u) du = P[r ∈ R1 | a0 = 1]
because of the system symmetry. In conclusion, we have
P[C] = (1/4) P[r ∈ R1 | a0 = 1] + (3/4) P[r ∈ R2 | a0 = 2] = 1 − (1/2) exp(−3√2)
and
Pe = (1/2) exp(−3√2) ≈ 7.185 · 10⁻³ .
Solution to Problem 4.23 This is the case where the noise is additive and Gaussian, but not independent
of the useful component. However, in this case it is quite straightforward to derive the expression for
D(ρ; n) in (4.76). We have
D(ρ; 1) = (1/2) (1/√(2π · 16σ²)) exp(−(ρ − V0)²/(2 · 16σ²))
and
D(ρ; 2) = (1/2) (1/(√(2π) σ)) exp(−ρ²/(2σ²)) .
With these definitions, we can now apply (4.77) to identify decision regions. We need to identify the
thresholds v where D(v; 1) and D(v; 2) coincide, that is D(v; 1) = D(v; 2). Here the solution to this
equation yields two values, v1 and v2, given by
v1,2 = [ −V0 ± √(16 V0² + 15 · 32 σ² ln 4) ] / 15 = { −0.3468 V0 , +0.2135 V0 } .
This assures that the decision regions are of the form
R1 = (−∞, v1) ∪ [v2, +∞) ,   R2 = [v1, v2) .
The error probability is finally obtained by use of (4.61). Since the error is Gaussian we can immediately
write
P[r ∈ R1 | a0 = 1] = 1 − Q((v1 − V0)/(4σ)) + Q((v2 − V0)/(4σ))
and
P[r ∈ R2 | a0 = 2] = Q(v1/σ) − Q(v2/σ)
so that the probability of error becomes
P[E] = 1/2 + (1/2) Q((v1 − V0)/(4σ)) − (1/2) Q((v2 − V0)/(4σ)) − (1/2) Q(v1/σ) + (1/2) Q(v2/σ) ≈ 2.6 · 10⁻⁵ .
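The thresholds and the final error probability can be reproduced with the short sketch below (not part of the original solution). It assumes V0 = 1 and σ = V0/20, the value consistent with the numerical thresholds quoted above.

from math import erfc, log, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

V0, sigma = 1.0, 0.05     # assumed: V0 = 1, sigma = V0/20

# thresholds where D(v;1) = D(v;2)
disc = sqrt(16 * V0**2 + 15 * 32 * sigma**2 * log(4))
v1, v2 = (-V0 - disc) / 15, (-V0 + disc) / 15
print(v1, v2)             # ~ -0.3468 V0, +0.2135 V0

Pe = (0.5 + 0.5 * Q((v1 - V0) / (4 * sigma)) - 0.5 * Q((v2 - V0) / (4 * sigma))
      - 0.5 * Q(v1 / sigma) + 0.5 * Q(v2 / sigma))
print(Pe)                 # ~2.6e-5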
Solution to Problem 4.24
a)
φ1(t) = hTx(t)/√(E_hTx) ,   E_hTx = T .
b)
Pe = (1/2) { P[w > 1.5 · 10⁻¹] + P[w < −1.5 · 10⁻¹] + P[w > 1 · 10⁻¹] }
   = (1/2) { 2 Q(3 · 10⁻¹/(2σI)) + Q(2 · 10⁻¹/(2σI)) } .
c) Since
⟨φ1(t), d(t)⟩ = ∫ from 0 to T of d(t) dt = 0 ,
no disturbance is present at the decision point and the error probability is not altered.
Solution to Problem 4.25
a) We have Es = E1 = E2 = A² Tp and
ρ = ⟨s1(t), s2(t)⟩ / Es = −3/4
then
φ1(t) = s1(t)/√E1 = { −1/√Tp , 0 < t < Tp/12 ;  +1/√Tp , Tp/12 < t < Tp ;  0 , otherwise }
and
φ'2(t) = s2(t) − ρ s1(t) = { −(3/4)A , 0 < t < Tp/12 ;  (7/4)A , Tp/12 < t < 2Tp/12 ;  −(1/4)A , 2Tp/12 < t < Tp ;  −A , Tp < t < 13Tp/12 ;  0 , otherwise }
with E_φ'2 = (7/16) A² Tp, which provides φ2(t) = φ'2(t)/√(E_φ'2) and so
s1 = √Es [1 , 0] ,   s2 = [ρ√Es , √(E_φ'2)] = √Es [−3/4 , √7/4] .
b) We select T = 2.16 µs. From the expression of error probability (4.119) we obtain
Pbit = Q( √( (Es/N0)(1 − ρ) ) ) = 10⁻⁴
hence
Es/N0 = [Q⁻¹(Pbit)]² / (1 − ρ) = 7.9
where Q⁻¹(10⁻⁴) = 3.7.
c) The performance deteriorates since we have an orthogonal modulation with ρ' = 0 > ρ = −3/4.
Solution to Problem 4.26
a) Being I = 1, the waveform basis is
φ1(t) = 1/√T ,   0 ≤ t < T
with the resulting waveform vector representation
s1 = 0 ,   s2 = A√T .
The optimum receiver, which in the present context is the minimum distance receiver, can be
implemented more efficiently by a ‘signal projection’ followed by a detector as in Fig. 4.11, where
the decision regions are
R1 = (−∞, (1/2)A√T) ,   R2 = [(1/2)A√T, +∞) .
b) Since E1 = 0 and E2 = A²T, we have Es = (1/2)E2 = (1/2)A²T. Moreover, the distance between
the two points is d = A√T = √(2Es). So, from (4.117) we obtain
Pbit = Pe = Q(√(d²/(2N0))) = Q(√(Es/N0)) .
Since for an antipodal signal we have (see (4.131))
Pbit = Q(√(2Es/N0))
we can conclude that the OOK signaling is less efficient in terms of the ratio Es/N0, by a factor
of 2, with respect to antipodal signaling employing the same average energy.
Solution to Problem 4.27
a) Being I = 1, the waveform basis is
φ1 (t) =
s1 (t)
√
2A T1
and the resulting waveform vector representation is
√
√
s2 = −2A T1 .
s1 = 2A T1 ,
The optimum receiver, which in the present context is the minimum distance receiver, can be
implemented more efficiently by a signal projection followed by a detector as in Fig. 4.11, where
the decision regions are
R1 = [0, +∞) ,
R2 = (−∞, 0) .
b) For the error probability we can exploit (4.131) to obtain
Pbit = Pe = Q
r
8A2 T1
N0
!
.
Solution to Problem 4.28 Since the transmitted waveforms are equally likely, p 1 = p2 = 12 , and the
noise is AWG, the optimum receiver is the minimum distance receiver. We can resort to the single filter
receiver structure of Fig. 4.28. Since
Es = E 1 = E 2 =
Z
T
A2
0
and the correlation coefficient is
RT
0
ρ=
s1 (t) s2 (t) dt
Es
2
t
T
1
6
1
3
=
dt =
1
3
A2 T
A2 T
=
A2 T
1
2
,
the distance (see (4.109)) is
d=
p
2E1 (1 − ρ) =
√
E1 = A
r
T
.
3
Hence the receive filter impulse response is
s∗ (T − t) − s∗2 (T − t)
s2 (t) − s1 (t)
φ1 (t) = 1
=
=
d
d
r
3
T
h
!
.
1−2
t
T
i
rect
t − 21 T
T
.
Moreover, from (4.119) we obtain
Pbit = Pe = Q
r
A2 T
6N0
Solution to Problem 4.29 The transmission system employs antipodal waveforms with equal probability p1 = p2 = 12 . So, the optimum receiver is that of Fig. 4.24 where Es = T and
ψ1 (t) = φ∗1 (T − t) =
−s1 (t)
√
.
T
The resulting error probability is then expressed by (4.131), giving
Pbit = Pe = Q
!
.
t − 12 T
T
r
2T
N0
Solution to Problem 4.30
a) Being the modulation antipodal, we have I = 1 with
1
φ1 (t) = √ rect
T
.
So, according to (4.53), at the decision point the expression of the received sample r is
√
r = ±A T + w ,
w ∼ N (0, N20 ) .
The receiver may be implemented as in Fig. 4.24 with t0 = T and ψ1 (t) = φ1 (t). So, the SNR
expression becomes
A2 T
SNR =
.
N0 /2
b) When the filter ψ̃1 (t) is used in place of ψ1 (t), then, the signal constellation at the decision point
is given by
s1 =
Z
+∞
−∞
ψ̃1 (T − u) s1 (u) du = A
q Z
T
e(u−T )/τ du = A
2
τ
√
0
2τ (1 − e−T /τ )
and s2 = −s1 , while the noise component w is still zero-mean Gaussian with variance
ψ̃1 (t) has unit energy. So, the SNR becomes
SNR0 =
N0
2
since
2A2 τ (1 − e−T /τ )2
N0 /2
and the SNR loss can be expressed as
2 (1 − e−T /τ )2
SNR0
=
,
SNR
T /τ
which is strictly < 1, and has a maximum value of ' 0.82 for T /τ ' 1.3.
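The stated maximum of the SNR loss can be located numerically with the small sweep below (not part of the original solution); it simply evaluates 2(1 − e^(−x))²/x over a grid of x = T/τ values.

from math import exp

# Sketch: locate the maximum of the SNR loss 2(1 - e^{-x})^2 / x, with x = T/tau
loss = lambda x: 2 * (1 - exp(-x))**2 / x
xs = [i / 1000 for i in range(1, 5001)]
best = max(xs, key=loss)
print(best, loss(best))   # ~1.26 and ~0.81 (the solution quotes T/tau ~ 1.3, loss ~ 0.82)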
Solution to Problem 4.31
a) The optimum receiver, which in the present context is the minimum distance receiver, can be
implemented by the single filter receiver implementation of Fig. 4.29 where
ξ1 (t) =
s∗1 (3
− t) −
s∗2 (3
and the energy is
 −t

0≤t<1
1≤t<2
2≤t<3
otherwise
2t − 3
− t) = s2 (t) − s1 (t) =
3 − t
0
Es = E 1 = E 2 = 2
Z
1
t2 dt =
2
3
0
so that the additive constant is 21 (E2 − E1 ) = 0. The corresponding error probability can be
derived from the correlation coefficient
ρ=
R2
1
s1 (t) s2 (t) dt
Es
=
R2
1
(2 − t) (t − 1) dt
Es
=
and application of (4.119) to give
Pbit = Pe = Q
r
1
2N0
= 2 · 10−3 .
1
6
2
3
=
1
4
b) We need to introduce two waveforms with correlation coefficient greater than
the relation s2 (t) = s1 (t − 1), the waveform
s1 (t) =
√1
3
rect
t−1
2
1
.
4
By preserving
guarantees ρ = 21 and Es = 23 .
c) We need to introduce two waveforms with correlation coefficient smaller than 41 . By preserving
the relation s2 (t) = s1 (t − 1), the waveform
s1 (t) =
guarantees ρ = 0 and Es = 23 .
q
2
3
rect(t − 12 )
Solution to Problem 4.32 In this case, the optimum receiver is no longer the minimum distance receiver.
The decision regions can be derived by application of (4.77), with D(ρ; n) as in (4.76) and where (4.70)
is still valid. This guarantees that, when s1 < s2, the decision regions are of the form
R1 = (−∞, v) ,   R2 = [v, +∞)
where v is such that D(v; 1) = D(v; 2). We have
p (1/√(2πσI²)) exp(−(1/2)(v − s1)²/σI²) = (1 − p) (1/√(2πσI²)) exp(−(1/2)(v − s2)²/σI²)
giving
v = (σI²/(s1 − s2)) ln((1 − p)/p) + (s1 + s2)/2 .
The error probability can then be derived through (4.61), where
P[r ∈ R1 | a0 = 1] = P[w < v − s1] = 1 − Q((v − s1)/σI)
and
P[r ∈ R2 | a0 = 2] = P[w > v − s2] = Q((v − s2)/σI)
giving
Pe = 1 − p [1 − Q((v − s1)/σI)] − (1 − p) Q((v − s2)/σI)
   = (1 − p) + p Q((v − s1)/σI) − (1 − p) Q((v − s2)/σI) .
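The threshold and error-probability expressions above translate directly into a small routine; the sketch below (not part of the original solution, and evaluated here on hypothetical values) implements them for a generic pair of levels s1 < s2 with priors p and 1 − p.

from math import erfc, log, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def map_threshold(s1, s2, sigma, p):
    """MAP threshold between two Gaussian hypotheses with priors p and 1 - p (s1 < s2)."""
    return sigma**2 / (s1 - s2) * log((1 - p) / p) + (s1 + s2) / 2

def Pe(s1, s2, sigma, p):
    v = map_threshold(s1, s2, sigma, p)
    return (1 - p) + p * Q((v - s1) / sigma) - (1 - p) * Q((v - s2) / sigma)

# hypothetical example values, not taken from the problem data
print(Pe(-1.0, 1.0, 0.3, 0.2))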
Solution to Problem 4.33 In this case, we can follow the procedure outlined in the single-filter receiver
implementation and choose φ1 (t) as in (4.136) and φ2 (t) as in (4.139), so that the resulting constellation
is that of (4.140). The corresponding functions D(ρ; n) are of the form
2
2
1
1 (ρ1 − sn,1 ) + (ρ2 − sn,2 )
exp
−
D(ρ; n) = pn
2
2πσI2
σI2
where the points have the second coordinate in common, that is s 1,2 = s2,2 . The boundary between
decision regions is now determined through the equivalence D(v; 1) = D(v; 2), giving
v1 = σI2 ln
1−p
p
1
s1,1 + s2,1
+
,
s1,1 − s2,1
2
which is a threshold along φ1 (t) only. So, the projection onto φ2 (t) can be dropped, as in the case of
the single filter receiver when p = 12 . The error probability can now be derived from the results of
Problem 4.32, by simply replacing s1 → s1,1 and s2 → s2,1 .
Solution to Problem 4.34 In the general case when p1 = p and p2 = 1 − p, the optimum receiving
scheme is not based upon the minimum distance criterion, and we must resort to the application of (4.77).
The general derivation is given in Problem
√ 4.32. In the present context where the signaling scheme is
antipodal, we can set φ1 (t) = −s1 (t)/ Es so that the resulting waveform vector representation is
√
√
s1 = − Es ,
s2 = Es
and, from the results of Problem 4.32, the decision regions are
R1 = (−∞, v)
R2 = [v, +∞)
with threshold
N0
v= √
ln
4 Es
and the resulting error probability is
Pe = (1 − p) + p Q
When p =
1
2
v+
p
√ !
Es
N0 /2
p
1−p
− (1 − p) Q
v−
p
√ !
Es
N0 /2
' 3 · 10−6 .
we can resort to the ordinary minimum distance receiver where v = 0 and
Pe = Q
r
2Es
N0
!
= 3.87 · 10−6
Solution to Problem 4.35 In the general case when p1 = p 6= p2 , we must resort to the results of
Problem √
4.32. In the present context where the signaling scheme is antipodal, we can set φ 1 (t) =
−s1 (t)/ Es so that the resulting constellation is
√
√
s1 = − Es ,
s2 = Es
and, from the results of Problem 4.32, the decision regions are R 1 = (−∞, v) and R2 = [v, +∞) with
threshold
N0
p
v= √
.
ln
1−p
4 Es
The resulting error probability is
v+
Pe = (1 − p) + p Q
p
where
Es = 2A2
Z
√ !
Es
N0 /2
T /2
0
By substitution we obtain Pe = 1.81 · 10−6 .
− (1 − p) Q
2
2t
T
v−
p
√ !
Es
N0 /2
dt = 13 A2 T .
Solution to Problem 4.36
a) By considering the results of Example 4.1 E, since (4.21) holds, a suitable basis is
φ1(t) = √(2/T) cos(2πf0 t) hTx(t) ,   φ2(t) = √(2/T) sin(2πf0 t) hTx(t) ,
so that the corresponding waveform vector representation is
s1 = √(T/2) (1, 0) ,   s2 = √(T/2) (0, 2) ,   s3 = √(T/2) (0, −1) ,
while the corresponding decision regions are illustrated in the figure below.
[Figure: the three points and their minimum distance decision regions in the (φ1, φ2) plane; the boundaries include the lines y = −x and y = x/2 + (3/4)√(T/2), which meet at the point √(T/2)(−1/2, 1/2).]
b) The upper bound for the error probability is given by (4.151) which, in the present context becomes
Pe ≤ 2 Q(√T/(2σI)) = 2 Q(√(T/(2N0))) = 2 Q(√5) ≈ 2.1 · 10⁻² .
Solution to Problem 4.37 For the waveform set {sn (t)}, being the waveforms linearly dependent we
have I = 1. The waveform vector representation is
√
√
√
√
s1 = −3A T , s2 = −A T , s3 = A T , s4 = 3A T
with an average constellation energy Es = 5A2 T . Thus, from (4.150), (4.151), and (4.158), the bounds
on the error probability become
Q (x) ≤ Pe ≤
3
2
Q (x) + Q (2x) +
1
2
Q (3x) ≤ 3 Q (x)
p
√
where x = 2A T /(2σI ) = 2Es /(5N0 ).
For the signal set {vn (t)}, being orthogonal, we can exploit the results of (4.168) to write
Q (y) ≤ Pe ≤ 3 Q (y) , y =
p
Es /N0 .
Since x < y and so Q (y) < Q (x), it is evident that the orthogonal signal set, {vn (t)}, is more energy
efficient.
Solution to Problem 4.38 By assuming that A is the maximum distance between constellation points,
from the solution of Problem 4.12, the two average constellation energies are
Es(a) =
A2 ,
3
2
Es(b) =
9
8
A2 .
With respect to the upper and lower bounds to the error probability, for the first constellation we have
the upper bounds (from (4.150) and (4.151))
√ 1
√ √ 3
2x + 2 Q (2x) + 2 Q
5x + 2 Q
8x ≤ 7 Q (x) ,
Pe ≤ 2 Q (x) + Q
where
A
=
x=
2σI
and the lower bound (from (4.158))
r
A2
=
2N0
r
(a)
1
3
Es
,
N0
Pe ≥ Q (x) .
For the second constellation, the upper bounds become
√ 3
Q (y) + 74 Q
3y + 2 Q (2y) +
Pe ≤ 13
4
where
A
=
y=
2σI
and the lower bound
r
A2
=
2N0
r
Pe ≥ Q (y) .
(a)
Now, from a lower bound perspective, and by setting Es
1
2
Q
√ 7y ≤ 7 Q (y)
(b)
4
9
Es
,
N0
(b)
= Es and thus x =
√
3
2
y < y, we have
Q (y) < Q (x)
and so the constellation b) is more robust to noise. This result is confirmed from the upper bound
perspective. In fact, for the looser upper bound we have 7 Q (y) < 7 Q (x), and similarly for the tighter
upper bound (as can be verified by plotting the curves).
Solution to Problem 4.39 By assuming that A is the minimum distance between constellation points,
from the solution of Problem 4.13, the two average energies are
Es(a) =
5
2
A2 ,
Es(b) =
1
A2 .
π
4 sin2 ( 16
)
With respect to the upper and lower bounds to the error probability, for the first constellation we have
the upper bounds (from (4.150) and (4.151))
√ √ √ 5
2x + 2 Q (2x) + 3 Q
5x + 4 Q
8x + Q (3x) +
Pe ≤ 3 Q (x) + 2 Q
√
√
√
1
3
10x + Q
13x + 4 Q
18x ≤ 15 Q (x)
+2Q
where
A
x=
=
2σI
and the lower bound (from (4.158))
r
r
A2
=
2N0
(a)
1
5
Es
,
N0
Pe ≥ Q (x) .
π
For the second constellation, being the radius r = A/[2 sin( 16
)], the upper bounds become
Pe ≤ 2
7
X
k=1
Q
π
sin( 16
k) y
π
sin( 16
)
where
A
y=
=
2σI
and the lower bound
r
+Q
A2
=
2N0
r
2y
π
sin( 16
)
≤ 15 Q (y)
(b)
π
2 sin2 ( 16
)
Pe ≥ Q (y) .
(a)
Now, from a lower bound perspective, and by setting Es
Es
,
N0
(b)
= Es and thus x = 1.6 y > y, we have
Q (x) < Q (y)
and so the constellation a) is more robust to noise. This result is confirmed from the upper bound
perspective. In fact, for the looser upper bound we have 15 Q (x) < 15 Q (y), and similarly for the
tighter upper bound (as can be verified by plotting the curves).
Solution to Problem 4.40
√
a) A basis for antipodal rectangular signals with amplitude A is given by φ 1 (t) = rect(t/T )/ T ,
so that the system constellation is
√
√
s1 = A T , s2 = −A T
and the system energy is Es = A2 T , where Rb = 1/T . So, from (4.131) we obtain
A=Q
−1
(Pbit )
r
Rb
N0
= 15 · 10−3 V
2
=⇒
(A)mV = 15 mV .
b) Conversely, if we are able to deliver only 0.8 A, from (4.131) we have
Pbit = Q
s
(0.8 A)2
Rb N20
!
' 7.2 · 10−5 .
Solution to Problem 4.41 At the receiver input, the received waveforms are of the form
s̃1 (t) = A C rect(t/T ) ,
s̃2 (t) = −s̃1 (t)
which is an antipodal√waveform set with dimension I = 1, energies E s = E1 = E2 = A2 C 2 T , basis
φ̃1 (t) = rect(t/T )/ T and constellation
√
√
s̃2 = −A C T .
s̃1 = A C T ,
So, from (4.131) and (4.65), the bit error probability is
s
Pbit = Q
A2 C 2 T
N0
2
!
.
Now, because we are dealing with a binary modulation, M = 2, from (4.176) and (4.177), we have
T = Tb = 1/Rb , which assures
A=
Q−1 (Pbit )
C
r
Rb
N0
= 1.5 · 103 V .
2
Solution to Problem 4.42 The binary PSK waveforms were introduced in Example 4.3 D, where for
f0 T 1 the two waveforms are antipodal with equal energy A2 T /2. Being the system binary we
further have T = Tb = 1/Rb . At the receiver, the waveforms are simply scaled by a factor C, that is
the received constellation is
s1 = A C
q
1
T
2
s2 = −A C
,
q
1
T
2
with received energy equal to Es = A2 C 2 T /2. So, from (4.131), the maximum power attenuation that
can be tolerated is aCh = 1/C 2 with
√
1 −1
Q (Pbit ) N0 Rb =
C=
A
(
6.7 · 10−3
, Rb = 10 kbit/s
21.24 · 10−3 , Rb = 100 kbit/s
67.18 · 10−3 , Rb = 1 Mbit/s
Solution to Problem 4.43
a) The system dimension is I = 1 with orthonormal basis
φ1 (t) =
hTx (t)
√
A T
so that the system implementation is the implementation type I receiver of Fig. 4.18 where t 0 = T
1
and ψ1 (t) = φ1 (t). Because of the attenuation C = 10
introduced by the channel, the constellation
at the decision point is
s1 = −
3 √
3 √
4 √
4 √
A T , s2 = − A T , s3 = + A T , s4 = + A T ,
10
10
10
10
and the corresponding decision regions are separated by the threshold values
v1 = −
7 √
7 √
A T , v2 = 0 , v 3 = + A T .
20
20
b) We have
Pe =
1
2
Q
√ √ 3A T
A T
+Q
.
10σI
20σI
c) Since hd(t), φ1 (t)i = 0, the disturbance does not go through the matched filter ψ 1 , and the error
probability remains unaltered.
Solution to Problem 4.44
a) For the decision regions we have
o
n o
n ρρ1 > 0, ρ2 > 0 , R2 = ρρ1 < 0, ρ2 > ρ1
n o
R3 = ρρ2 < 0, ρ1 > ρ2
R1 =
b) The upper bound on error probability can be derived from (4.150) to obtain
√
A
2A
2
4
+3Q
' 3.8 · 10−7 .
Pe = 3 Q
σI
σI
c) In this case we have a binary modulation for which (4.117) holds, yielding
√
2A
Pe = Q
' 1.28 · 10−12 .
σI
Solution to Problem 4.45
a) The channel induces a distortion on the signals; in fact, the received waveforms are of the form
s̃1 (t) =
1
2
s1 (t) +
1
4
s1 (t − 12 T0 ) ,
s̃2 (t) =
1
2
s1 (t − 12 T0 ) +
1
4
s1 (t − T0 )
with extensions [0, T0 ] and [ 12 T0 , 32 T0 ]. So, to avoid ISI, a smaller symbol period T that could be
chosen is T = 23 T0 = 3 µs, yielding a maximum bit rate of Rb = 1/T = 0.33 Mbit/s.
b) The optimum receiver should be built upon the received waveform set {s̃ 1 (t), s̃2 (t)}. Since these
5
waveforms have equal energy Es̃ = 32
A2 T0 = 1.25 · 10−6 V2 /Hz, and correlation coefficient
2
ρ̃ = 5 , from (4.119) the error probability is
Pbit = Q
s
Es̃
2
3
5
N0
2
!
=Q
s
A 2 T0
64 N0
3
2
!
' 4.57 · 10−10 .
The optimum receiver is shown in Fig. 4.28, where the receive filter is
ψ1 (t) =
s̃1 (t0 − t) − s̃2 (t0 − t)
,
d˜
with t0 = 23 T0 . By recalling (4.109), which in the present context becomes
d˜ =
we obtain
p
ψ1 (t) =
 √1
 − 3T0

c) In this case, the receive filter is
ψ1 (t) =
√
2Es̃ (1 − ρ̃) = 41 A 3T0 ,
+ √2
3T0
0
, 0 ≤ t < T0
, T0 ≤ t < 32 T0
, otherwise
s1 (t0 − t) − s2 (t0 − t)
,
d
d=
√
√
2Es = A T0
with decision regions R1 = [0, +∞) and R2 = (−∞, 0). Since the received waveforms are
{s̃n (t)}, the constellation at the decision point is
√
√
s̃2 = − 14 A T0
s̃1 = 18 A T0 ,
with corresponding incorrect transition probabilities (4.195)
p1|2
p2|1
s̃1
= P [w + s̃1 < 0] = P [w < −s̃1 ] = Q
σI
−s̃2
= P [w + s̃2 > 0] = P [w > −s̃2 ] = Q
σI
=Q
=Q
s
A 2 T0
64 N20
s
!
A 2 T0
16 N20
!
So, by considering equally likely waveforms, from (4.199) the bit error probability becomes
Pbit =
1
2
(p1|2 + p2|1 ) =
1
2
"
Q
s
A 2 T0
64 N20
!
+Q
s
A 2 T0
16 N20
!#
' 1 · 10−4 ,
which is considerably higher than in the optimum case.
Solution to Problem 4.46
√
a) The reference signal space has dimension I = 1, with basis φ1 (t) = hTx (t)/ T , and constellation
√
s1 = B , s2 = −B , s3 = 2B , s4 = −2B ,
where B = A T . By letting the waveforms have equal probability pn =
decision regions are the minimum distance regions
1
,
4
the optimum
R1 = [0, 32 B) , R2 = [− 32 B, 0) , R3 = [ 32 B, +∞) , R4 = [−∞, − 32 B) .
So, by setting C = B/σI = 4, from (4.195) for the incorrect transition probabilities we have
p2|1 = P − 32 B ≤ w + B < 0 = Q (C) − Q
p3|1 = P
3
2
B ≤w+B =Q
p4|1 = P w + B <
− 23 B
−5
p1|2 = p2|1 ' 3.16 · 10
1
C
2
=Q
5
C
2
5
C
2
−2
' 2.27 · 10
' 3.16 · 10−5
' 7.62 · 10−24
p3|2 = p4|1 ' 7.62 · 10−24
p4|2 = p3|1 ' 2.27 · 10−2
p1|3 = P 0 ≤ w + 2B < 23 B = Q
1
C
2
− Q (2C) ' 2.27 · 10−2
p4|3 = P w + 2B < − 23 B = Q
p1|4 = p2|3 ' 6.22 · 10−16
7
C
2
7
C '
2
−45
p2|3 = P − 32 B ≤ w + 2B < 0 = Q (2C) − Q
' 7.79 · 10
6.22 · 10−16
p2|4 = p1|3 ' 2.27 · 10−2
p3|4 = p4|3 ' 7.79 · 10−45
Note that the probability of error is determined by the higher transition error probabilities, that is
by: p3|1 , p4|2 , p1|3 , and p2|4 .
b) We apply (4.203) which in the present context becomes
Pbit '
1
8
p3|1 + p4|2 + p1|3 + p2|4
= 1.14 · 10−2 .
c) We apply (4.203) which in the present context becomes
Pbit '
1
8
2p3|1 + 2p4|2 + 2p1|3 + 2p2|4
= 2.27 · 10−2 .
This first approach is thus to be preferred because it associates binary words with a lower Hamming
distance to waveforms with higher incorrect transition probability.
Solution to Problem 4.47 Because of the equal probability of waveforms, the optimum receiver is
the minimum distance receiver. In the present context, decision regions are separated by the thresholds
1
A, 32 A, 25 A. For the incorrect transition probabilities (4.195) we have
2
p2|1
p1|2
p1|3
p1|4
= Q (x) − Q (3x) ,
= Q (x) ,
= Q (3x) ,
= Q (5x) ,
p3|1
p3|2
p2|3
p2|4
= Q (3x) − Q (5x) ,
= Q (x) − Q (3x) ,
= Q (x) − Q (3x) ,
= Q (3x) − Q (5x) ,
p4|1
p4|2
p4|3
p3|4
= Q (5x)
= Q (3x)
= Q (x)
= Q (x) − Q (3x)
where x = 12 A/σI . Since Es = 27 A2 and Es = C 2 EsTx , we can also write
x=
s
Es
=
14 N20
s
EsTx C 2
.
14 N20
By applying (4.203), the bit error probability becomes
Pbit = Q (x) −
1
4
Q (3x) +
1
4
Q (5x) ' Q (x) .
So, concerning the average transmit energy we have
EsTx '
2
14 N20 −1
Q (Pbit )
= 3.15 V2 /Hz .
2
C
Solution to Problem 4.48
a) In this situation, the most efficient ML receiver is the implementation type I receiver of Fig. 4.18,
illustrated in Fig. 4.45 in the specific QAM case; the decision regions for the minimum distance
criterion correspond to the four quadrants.
b) For the conditional correct decision probability we have
P [C|a0 = 1] = p1|1 = P [s1 + w ∈ R1 ]
= P [A + w1 > 0, B + w2 > 0] = Q (−A/σI ) Q (−B/σI )
B
A
1−Q
= 1−Q
σI
σI
while for the probability of correct decision we simply have P [C] = P [C|a0 = 1] because of the
system symmetry.
c) By assuming a Gray bit mapping, for example
00 → s1 , 01 → s2 , 11 → s3 , 10 → s4 ,
the bit error probability can be derived as in (4.219), giving
Pbit =
1 − P [C]
'
2
1
2
Q
A
σI
+
1
2
Q
B
σI
= 5 · 10−3 .
-
Solution to Problem 4.49
a) The optimum decision regions are reported in the picture below.
[Figure: the eight constellation points with the Gray bit mapping 100, 000, 101, 001, 011, 111, 010, 110 and the corresponding minimum distance decision regions.]
A lower bound to the error probability can be determined from (4.157), giving
Pe ≥ (4/8) Q(√2/√(2 · N0/2)) + (4/8) Q(2/√(2 · N0/2)) ≈ (1/2) Q(14) + (1/2) Q(20) .
b) The lowest conditional error probability is given by P[E|s5] since the decision region associated
with s1 is contained in the decision region of s5, and so the correct transition probability associated
with s5 is bigger than that of s1.
c) The signal that is more likely to be detected is either s2 or s4, which are the closest points to s1.
So, by the minimum distance criterion, a possible Gray bit mapping approach is given in the plot
above.
Solution to Problem 4.50
a) For determining the modulation cardinality M , we can use (4.176) and the settings on the maximum
bit rate to obtain
Rb
log2 M ≥
1/T
while for determining the SNR we can use the bit error probability approximation (4.219) where
the symbol error probability is given by (4.248). By solving with respect to E s /N0 we have
Es
M −1
=
N0
3
Q
In the specific case we obtain M = 4 and
b) In the first case we have
−1
Es
N0
Pbit log2 M
4(1 − √1M )
Es
N0
Es
= 1427 =⇒
N0
Es
N0
and in the second case
M = 256 ,
.
' 18, that is (Es /N0 )dB = 12.6 dB.
Es
= 80 =⇒
N0
M = 16 ,
!!2
= 19 dB
dB
= 31.5 dB .
dB
We can conclude that, when the bandwidth is limited, higher bit rates require higher SNRs, hence
higher energy, to achieve the same bit error probability.
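The relation used above between the target Pbit and the required Es/N0 for Gray-mapped M-QAM can be inverted numerically. The sketch below (not part of the original solution) implements it with a bisection inverse of the Q function; the example target of 10⁻⁵ is an assumption, chosen because it reproduces the value Es/N0 ≈ 18 (12.6 dB) quoted for M = 4.

from math import erfc, log2, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def Qinv(y, lo=0.0, hi=20.0):
    # simple bisection inverse of the Q function (y assumed in (0, 0.5))
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) > y else (lo, mid)
    return (lo + hi) / 2

def es_over_n0_qam(M, Pbit):
    """Es/N0 required by Gray-mapped M-QAM for a target Pbit, combining (4.219) and (4.248)."""
    arg = Pbit * log2(M) / (4 * (1 - 1 / sqrt(M)))
    return (M - 1) / 3 * Qinv(arg) ** 2

print(es_over_n0_qam(4, 1e-5))     # ~18, i.e. ~12.6 dB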
Solution to Problem 4.51
a) No, it is not possible because there are points with four constellation points at the minimum distance
A. Since the bits available for Gray bit mapping are 3 (log 2 8 = 3), one of the points at minimum
distance must have a bit representation with at least 2 bit difference from the bit representation of
the reference constellation point.
b) By exploiting (4.177) we have F = 20 MBaud.
c) From Tab. 4.1 we have (N = 8)
Pbit,QAM = (4/3) (1 − 1/(2√2)) Q(√((3/7) Es/N0)) ≈ 0.8619 Q(√(0.4286 Es/N0))
Pbit,PSK = (2/3) Q(√((1 − 1/√2) Es/N0)) ≈ 0.6667 Q(√(0.2929 Es/N0))
We obtain the following prospect

  Pbit      (Es/N0)QAM    (Es/N0)PSK
  10⁻¹         5.16           6.84
  10⁻²         8.50          11.98
  10⁻³        10.95          15.66
  10⁻⁴        12.97          18.68
  10⁻⁵        14.74          21.29
  10⁻⁶        16.32          23.63
  10⁻⁷        17.76          25.76
  10⁻⁸        19.10          27.74
  10⁻⁹        20.36          29.59

So, QAM is more efficient than PSK.
d) 8-QAM is more robust to phase errors, since a phase error of at least π/4 is required to exit a
decision region. In 8-PSK, instead, the maximum allowable phase error is π/8. On the other hand,
8-PSK is more robust to errors in the amplitude.
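A direct way to see the energy advantage of 8-QAM over 8-PSK is to evaluate the two approximate Pbit expressions of part c) at the same Es/N0. The sketch below (not part of the original solution) does exactly that over a small range of SNR values.

from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

# the two approximations quoted in part c)
pbit_qam = lambda snr: 0.8619 * Q(sqrt(0.4286 * snr))
pbit_psk = lambda snr: 0.6667 * Q(sqrt(0.2929 * snr))

for snr_db in range(6, 22, 3):
    snr = 10 ** (snr_db / 10)          # Es/N0 in linear units
    print(f"{snr_db:2d} dB   8-QAM: {pbit_qam(snr):.2e}   8-PSK: {pbit_psk(snr):.2e}")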
Solution to Problem 4.52
a) From (4.176), we have 1/T = Rb / log2 (4) = 45 MBaud.
b) From (4.176), we have 1/T = Rb / log2 (16) = 22.5 MBaud.
c) For determining the SNR we can use (4.269) and, by assuming Gray coding, (4.219). For 4-PSK
we have
"
2#
−1 1
[Q
P
log
M
]
bit
Es
2
2
' 22.57 ,
= 12
π
N0
sin M
while for 16-PSK
Es
= 278 .
N0
Solution to Problem 4.53
a) The channel introduces an attenuation (aCh )dB = 6 · 13 = 78 dB, and aCh = 6.3 · 107 . Therefore
the maximum value of the waveforms is
(SRc )max = √
2
= 2.52 · 10−4 V .
aCh
b) Let us indicate with φ1 (t0 − t) and φ2 (t0 − t) the receive filters. The signals of the waveform set
are orthogonal, with the same energy EsTx = 22 T = 4T , hence
s1 (t)
1
φ1 (t) = p
= √ rect (t/T )
T
EsTx
s2 (t)
1
φ2 (t) = p
= √ rect (t/T ) sgn(t)
T
EsTx
c) From (4.298) and (4.300)
Λo = Γ =
PRc
kT0 FBmin
where
PRc =
EsTx
MsRc
4
=
=
R
RT aCh
100 aCh
⇒
(PRc )dBm = −14 − 78 + 30 = −62 dBm
and Bmin = 1/(2T ) = 500 kHz, it is
(Λo )dB = 45 dB .
d) From (4.65) and (4.296)
σI2 =
EsRc
EsTx
N0
=
=
.
2
Γ
ΓaCh
Solution to Problem 4.54
a) Being PAM a baseband modulation, we have BCh = Bmin = 1/(2T ) and from (4.177) we obtain
Rb = 2BCh log2 M = 10 kbit/s .
b) For M = 4 we obtain Rb = 20 kbit/s.
c) Being QAM a passband modulation, we have BCh = Bmin = 1/T and
Rb = BCh log2 M = 20 kbit/s .
d) For non-coherent FSK BCh =
M
T
Rb =
and we have
BCh log2 M
= 2500 bit/s .
M
M
and Rb = 5 kbit/s.
e) For coherent FSK BCh = 2T
f) We have Rb = 1.875 kbit/s.
Solution to Problem 4.55
a) The channel attenuation can be measured by means of (2.135), giving
(aCh )dB = 32.4 + 100 + 60 − 20 − 40 = 132.4 dB ,
so that the received power is
(PRc )dBm = (PTx )dBm − (aCh )dB = −92.4 dBm .
b) From Tab. 4.1, for BPSK the reference SNR Γ as a function of Pbit gives
Γ=
1
2
h
Q−1 (Pbit )
i2
= 11.28
=⇒
(Γ)dB = 10.5 dB .
By using (4.295) we then determine Bmin = 1/T = Rb . We have
Rb = Bmin =
PRc
' 12.3 Mbit/s .
Γ kTeff,Rc
c) When QPSK is used (M = 4), from Tab. 4.1 the expression of Γ becomes
h
Γ = Q−1 (Pbit )
i2
= 22.56
=⇒
(Γ)dB = 13.53 dB ,
doubled with respect to the binary case. Since Rb = 2/T = 2Bmin , for the same received power
it results that the two systems have the same bit rate.
Solution to Problem 4.56
a) Since the maximum bandwidth is Bmin = 1200 Hz, from Tab. 4.1 and from (4.177) where M = 2,
we have Rb = 1/T = 2Bmin = 2400 bit/s.
b) From Tab. 4.1, the request of Pbit = 10−6 for a binary PAM system is equivalent to requiring a
reference SNR
h
Γ = Q−1 (Pbit )
i2
= 22.56
=⇒
(Γ)dB = 13.53 dB .
We now express the reference SNR Γ in terms of electrical parameters. We preliminarily note that
the attenuation introduced by each line section is (aC )dB = (ãC )dB/km (L)km = 50 dB which is
perfectly recovered by the amplifier with gain gA = 1/aC . So, from Example 2.2 C the overall
line gain is g = 1 and the effective noise temperature at the transmitter output becomes
Teff,Tx =
T
+ N T 0 FA aC +
transmitter
line
S
|{z}
|
{z
}
T0 (FRc − 1)
g
|
{z
receiver
}
where TS is the transmitter output noise temperature, and N is the number of line segments, that
is N = 20. By assuming TS = T0 we have
Teff,Tx = N T0 FA aC + T0 FRc ' N T0 FA aC
=⇒
Teff,Tx
' 107 .
T0
So, by (4.300) and considering that Teff,Rc aCh = Teff,Tx , we can write
(Γ)dB = (PTx )dBm + 114 − 70 + 29.2 = 13.53
that is (PTx )dBm = −59.67 dBm.
Solution to Problem 4.57
a) From (2.114), the power attenuation due to the line is (aCh )dB = 6 · 15 = 90 dB. Because the
system is narrowband, from (2.110) we have
GCh (f0 ) =
√
gCh = 3.1 · 10−5
→
VRc,max = V0 GCh (f0 ) = 6.3 · 10−5 V .
b) Since the waveforms have equal energy, the optimum receiver could be the single-filter receiver of
Fig. 4.29 where 21 (E2 − E1 ) = 0 and where
ξ1 (t) = s∗1 (T − t) − s∗2 (T − t) =
n
2VRc,max , 0 < t < 12 T
0
, otherwise
where we considered t0 = T . Incidentally, the value of 2VRc,max does not affect system performance.
c) Since this is a binary orthogonal form of baseband transmission, the bit error probability is given
1
by (4.135), and from (4.291) the minimum bandwidth is B min = 2T
= 500 kHz. So, from (4.295)
we have Γ = 2Es /N0 and
Pbit = Q
q
1
Γ
2
h
Γ = 2 Q−1 (Pbit )
=⇒
i2
= 45.12
that is (Γ)dB = 16.5 dB. In addition, by recalling (4.300), it is
(Γ)dB = (PTx )dBm − 90 + 114 − 13 + 3 = 16.5
→
(PTx )dBm = 2.5 dBm .
d) Since EsTx = V02 T , from (2.25) and (4.181) we have
PTx =
MsTx
EsTx /T
V2
=
= 0
R
R
R
=⇒
V0 =
√
PTx R ' 0.422 V .
8.5 SOLUTIONS TO PROBLEMS OF Chapter 5
Solution to Problem 5.1 If the probability of saturation satisfies the relation P[|a(nTs)| > vsat] ≪ 1,
then Mesat ≈ 0. Introducing in (5.29) the change of variable z = Qi − u, as vi = Qi + Δ/2 and
vi−1 = Qi − Δ/2, we have
Meq ≈ Megr = Σ from i=0 to L−1 of ∫ from −Δ/2 to Δ/2 of z² pa(Qi − z) dz .
If Δ is small enough, then
pa(Qi − z) ≈ pa(Qi)   for |z| ≤ Δ/2 ,
and assuming Σ from i=0 to L−1 of pa(Qi)Δ ≈ ∫ from −∞ to +∞ of pa(u) du = 1, we get
Megr = ( Σ from i=0 to L−1 of pa(Qi)Δ ) ∫ from −Δ/2 to Δ/2 of (z²/Δ) dz ≈ Δ²/12 .
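The Δ²/12 result can be verified with a short Monte Carlo experiment. The sketch below (not part of the original solution) uses an arbitrary 6-bit mid-rise quantizer and a Gaussian input whose standard deviation is well inside the quantizer range, so that saturation is negligible as assumed above.

import random

random.seed(0)
vsat, b = 1.0, 6
L = 2 ** b
delta = 2 * vsat / L

def quantize(u):
    # index of the interval containing u, clipped to the quantizer range
    i = min(max(int((u + vsat) / delta), 0), L - 1)
    return -vsat + (i + 0.5) * delta          # reconstruction level Q_i

samples = [random.gauss(0, vsat / 4) for _ in range(200_000)]   # sigma_a << vsat
mse = sum((u - quantize(u)) ** 2 for u in samples) / len(samples)
print(mse, delta ** 2 / 12)                   # the two values should be close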
Solution to Problem 5.2 The quantizer has a symmetric characteristic around the origin, hence the
signal a(t) must be shifted by its mean to obtain a new quantizer input b(t) also symmetric
in amplitude around 0:
b(t) = a(t) − 1 .
Moreover
pb(u) = (βa/2) e^(−βa|u|) ,   with   Mb = σa² = 2/βa² = 2 V² ,
i.e. βa = 1 V⁻¹.
From the expression of the saturation probability
Psat = e^(−βa vsat)
and imposing Psat = 10⁻³, we have
vsat ≈ 6.9/βa = 6.9 V ,
and
20 log10(σa/vsat) ≈ −13.77 .
Considering only the granular noise, from (5.32), we get b = 9.
Solution to Problem 5.3 The quantizer range is (−vsat , +vsat ) with vsat = 5 V. With this choice (see
(5.32))
(Λq )dB = 6.02b .
Then
b=7.
Note that b must be an integer. For this choice (Λq )dB = 42 dB.
Solution to Problem 5.4 Since the probability density function of the input signal is symmetrical from
(5.28) we have
Z ∞h
2
i2
− u2
1
∆
e 2σa du ,
−u √
Mesat = 2
vsat −
2
2πσa
vsat
where vsat = 3σa so that ∆ =
Z
+∞
v
6σa
2b
2
= 34 σa . From the general integral
− u2
1
v
[A − u]2 √
e 2σa du = A2 Q
σa
2πσa
the result follows with v = vsat and A = vsat −
2
∆
.
2
Solution to Problem 5.5 To get the saturation probability of 10⁻³ we must have
vsat/σa = Q⁻¹(5 · 10⁻⁴) ≈ 3.3 .
Then, from (5.32) we have
6.02 b = 60 − 4.77 + 20 log10(3.3) = 65.6
and
b = ⌈10.89⌉ = 11 .
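The two steps above (saturation constraint, then (5.32)) are easy to automate. The sketch below (not part of the original solution) assumes the target of 60 dB on (Λq)dB that appears in the equation above, and derives vsat/σa and b from it.

from math import erfc, sqrt, log10, ceil

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def Qinv(y, lo=0.0, hi=20.0):
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) > y else (lo, mid)
    return (lo + hi) / 2

Psat = 1e-3
kf = Qinv(Psat / 2)            # vsat / sigma_a, ~3.3
target_db = 60                 # assumed requirement on (Lambda_q)_dB
b = ceil((target_db - 4.77 + 20 * log10(kf)) / 6.02)
print(kf, b)                   # ~3.29 and 11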
Solution to Problem 5.6
a) Since the signal has amplitude between −20 and 20 V, the transformation is
b(t) =
2
2Aσa − v 2
vσa − v 2
v
+ √ e 2σa − √ e 2σa − σa2 Q
σa
2π
2π
a(t)
1
+
40
2
to get a signal b(t) matched to the quantizer range.
b) We can consider the quantizer range (−20, 20) V divided in L = 2 b = 16 intervals, with a quantization step size ∆ of 2.5 V. Since the value −1.58 V lies in the eighth interval, it is quantized with
Q7 , while 5.22 V is quantized with Q10 . The associated code word depends on the IBMAP. Two
binary representations are given below.
−1.58 V
5.22 V
1000
0111
0010
1010
Sign and amplitude
Progressive from the lowest level
√
c) Since σa /vsat = 1/ 2, from (5.32) it is (Λq )dB = 25.84 dB.
Solution to Problem 5.7 Since pa (u) = 1/4, |a| < 2, the statistical power of the input signal
is Ma = 4/3 V2 . From (5.28) the statistical power of the granular noise is Megr = ∆2 /12 V2 , with
∆ = 2vsat /L = 2/256 = 1/128 V. In this case the saturation noise can not be neglected and its
statistical power is
Z +∞ 2
∆
vsat −
Mesat = 2
− u pa (u) du
2
vsat
1
=2
4
=
Then
Λq =
Z
2
1
∆
1−
−u
2
∆
2
1
du =
2
Z
1+ ∆
2
1 3 1+ 2
1
∆
∆2
= +
u +
.
∆
6
6
4
8
2
Ma
=
Megr + Mesat
4/3
∆2
12
+
1
6
+
∆
4
+
u2 du
∆
2
∆2
8
= 7.87
and
(Λq )dB = 8.96 dB .
Note that in the absence of saturation, for the same number of levels, (Λ q )dB ' 48 dB.
Solution to Problem 5.8 To avoid saturation, we must have vsat = amax . The power of the signal is
given by
Z T
1
a2
amax t 2
Ma =
dt = max .
T 0
T
3
Then application of (5.32) gives
(Λq )dB = 6.02b + 4.77 + 20 log10
1
√
3
= 6.02b ,
as in the case of a uniform signal.
Solution to Problem 5.9 For a Laplacian signal with probability density function
pa(u) = (β/2) e^(−β|u|) ,
the saturation probability is
Psat = 2 P[a > vsat] = e^(−β vsat) .
Imposing Psat = 10⁻³, it is β vsat = 6.9. The statistical power of a(t) is Ma = 2/β². From (5.32) we
have
(Λq)dB = 6.02 b − 8.99
and b = 12.
Solution to Problem 5.10
a) The statistical power of the signal is
Ma =
Z
+∞
−∞
Pa (f ) df = A 20000 V2
and the electrical power is
Pa =
Ma
20000
=A
= 200 A = 1 W
R
100
from which we obtain
A = 5 · 10−3 V2 /Hz .
b) Fs = 40 kHz.
c) The standard deviation of the signal is
√
√
√
σa = Ma = A 20000 = 100 = 10 V .
The saturation probability is
Psat = 2Q
Hence we obtain
Q
and therefore
vsat
σa
vsat
σa
.
= 0.5 · 10−6
vsat
= 4.9
σa
vsat = 49 V .
d) The constraint on the signal-to-quantization noise ratio provides (see (5.32))
b≥
100 − 20 log10
6.02
1
4.9
− 4.77
=
109
= 18.11
6.02
and hence
b = 19 bit .
Solution to Problem 5.11
(44.1 · 10³) · 16 · (20 · 60) = 846.72 Mbit .
Solution to Problem 5.12 Since the probability density function is uniform with statistical power
Ma = 2 V2 , the signal amplitude is in the range (−am , am ) with
Ma =
so that vsat = am =
a2m
,
3
√
6 V to avoid saturation. Then, from (5.32), we have
b=
(Λq )dB
6.02
=9.
Hence from (5.6) the bit rate, corresponding to the minimum sampling frequency of 20 kHz, is
Rb = 20 · 9 = 180 kbit/s .
Solution to Problem 5.13
a) [Figure: the autocorrelation ra(τ) is plotted for τ between −1 and 4.]
b)
Pa(f) = (A²/2) [ 1/(1 + 4π²(f − f0)²) + 1/(1 + 4π²(f + f0)²) ]
[Figure: the PSD Pa(f) is plotted for f between −1 and 2 Hz.]
c) The statistical power is Ma = ra(0) = A²/2. Since the maximum amplitude is A, to avoid
saturation we choose vsat = A. From (5.32) we have
(Λq)dB = 6.02 b + 1.76
to yield b = 10 and L = 2^b = 1024.
d) The conventional bandwidth is obtained in correspondence of the frequency where the PSD is
24 dB below its maximum value, Pa(f0) ≈ A²/2. From the condition
1/(1 + 4π²(f − 1)²) = 10^(−24/10)
we derive B = 3.52 Hz. Then, from (5.6), Rb = 70.4 bit/s.
Solution to Problem 5.14
a) The statistical power of a(t) is
Ma = 2
Z
2
u2
0
1
u
1−
2
2
du =
2 2
V
3
Choosing the quantizer range (−vsat , vsat ) with vsat = 2 V to avoid saturation and approximating
the quantization noise as uniform, from (5.32) for b = 5 we get
(Λq )dB = 27.1 dB ,
with a reduction of about 3 dB with respect to a uniform input.
b) The minimum sampling frequency is Fs = 2B = 10 kHz. Since b = 5, we have
Rb = b Fs = 50 kbit/s .
c) The number of bits that can be employed to represent each sample is b = 8. Then, from (5.32) we
have
!
p
2/3
= 45.14 dB .
(Λq )dB = 6.02 · 8 + 4.77 + 20 log10
2
d) With a guard bandwidth of 1 kHz, the sampling frequency becomes F s = 12 kHz, so that the
maximum number of bits that can be employed by the quantizer are
b=
and, from (5.32),
j
80
12
k
=6
(Λq )dB = 6.02 · 6 + 4.77 + 20 log10
p
2/3
2
!
= 33.1 dB .
Solution to Problem 5.15 The statistical power of the signal a(t) is given by
Ma =
202
172
+
= 344.5 V2
2
2
which coincides with its variance σa2 , since a(t) has zero mean. To avoid saturation, the range of the
quantizer has to be chosen with vsat = 20 + 17 = 37 V. Then, use of (5.32) and (5.56), gives the values
(Λq )dB = 45 ,
(Λq )dB = 50 ,
uniform quantizer
standard µ−law quantizer
b=8
b = 10 .
The signal b(t) can be written as the repetition, with period T p , of the
Solution to Problem 5.16
waveform
rect
t
Tp /2
The Fourier series expansion of b(t) is
+∞
X
2 sinc
`=1
− rect
t − Tp /2
Tp /2
.
1
2` − 1
cos 2π(2` − 1)
2
Tp
.
After the filter we have only the components corresponding to ` = 1, 3, 5, whose power is
h
2 sinc2
1
2
+ sinc2
3
2
+ sinc2
i
5
2
= 0.933 .
To avoid saturation, the quantizer range is (−vsat , vsat ) with vsat = 1.
a) Use of (5.32) gives
(Λq )dB = 40.28 dB .
b) From (5.56), it is
(Λq )dB = 25.98 dB .
Solution to Problem 5.17 First we can evaluate the value of K, according to the normalization property
of the probability density function, whose integral must be equal to unit. It is
K=
1
.
1 − e−6
The signal a(t) has zero mean, ma = 0, and statistical power
Ma
=
=
Z
+∞
u2 pa (u) du = 2K
−∞
1
1 − e−6
h
i
1
25 −6
.
−
e
2
2
Z
0
3
u2 e−2u du = −K
3
e−2u 2
4u + 4u + 2 4
0
Since the amplitude of a(t) is limited to the interval (−3, 3), to avoid saturation we choose v sat = 3.
a) For the uniform quantizer with 3 bits, use of (5.32) gives the value
(Λq )dB = 6.73 dB .
b) For the non-uniform quantizer with 3 bits and standard µ-law companding, (5.56) gives
(Λq )dB = 7.76 dB .
Solution to Problem 5.18
a) In the general expression (5.32) with
σa2
1
= Ma =
2A
Z
+A
a2 da =
−A
A2
3
and vsat = A, we obtain kf2 = 1/3 and Λq = L2 = 4b , i.e. (Λq )dB = 6b dB.
b) By inverting (5.81), we obtain
L≥
r
√
ΛPCM (1 − 4 Pbit )
= 1666 = 40.8 ,
1 − 4 Pbit ΛPCM
hence b = 6, L = 64, ∆ = 2A/L = 0.156 V e Rb = 2B b = 48 kbit/s.
c) No, since the input signal is uniformly distributed.
Solution to Problem 5.19
[Figure: Pe,c versus Pbit for b = 8, 12, 16.]
The solid lines represent the exact expression (5.64), while the dash-dot lines represent the approximation
(5.65).
Solution to Problem 5.20 The number of bits is determined on the basis of the quantizer signal-to-quantization noise ratio Λq. For a signal with uniform amplitude, choosing vsat equal to the maximum
amplitude, for Λq ≥ ΛPCM and by (5.36) it is b = 8 and (Λq)dB = 6.02 · 8 = 48.2 dB. Then, from
(5.81), we have
Λq
−1
Λ
Pbit = PCM
= 4.08 · 10−6 ,
2b
4 (2 − 1)
which represents the maximum value for the bit error probability to get (Λ PCM )dB ≥ 45 dB.
Solution to Problem 5.21 From (5.81) it is
22b =
ΛPCM (1 − 4Pbit )
,
1 − 4ΛPCM Pbit
which, for Pbit = 10−5 , yields
22b =
104 (1 − 4 · 10−5 )
= 16660 ,
1 − 4 · 104 · 10−5
whose solution is b = 8. On the other hand, for Pbit = 10−3 we have 4 · ΛPCM Pbit = 4 · 104 · 10−3 =
40, which is greater than 1, so that, from the expression above, it is not possible to achieve the desired
value of ΛPCM with any number of bits.
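The same feasibility check can be coded directly from (5.81). The sketch below (not part of the original solution) solves for b and returns None when the required ΛPCM cannot be reached for the given Pbit, as in the second case above.

from math import log2, ceil

def bits_needed(lambda_pcm, pbit):
    """Solve 2^{2b} = Lambda_PCM (1 - 4 Pbit) / (1 - 4 Lambda_PCM Pbit) for the smallest integer b."""
    den = 1 - 4 * lambda_pcm * pbit
    if den <= 0:
        return None                    # target unreachable for any number of bits
    return ceil(0.5 * log2(lambda_pcm * (1 - 4 * pbit) / den))

print(bits_needed(1e4, 1e-5))   # 8
print(bits_needed(1e4, 1e-3))   # None, since 4 * Lambda_PCM * Pbit = 40 > 1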
Solution to Problem 5.22 The number of bits is derived from (5.32) and gives b = 8. Then, from
(5.65), Pe,c ' 8 · 10−6 .
Solution to Problem 5.23 From Tab. 4.1, the value of Γ gives Pbit = 1.96 · 10−7 . Then, use of (5.81)
gives b = 10.
Solution to Problem 5.24
a) We set vsat = 3σa to get a saturation probability around 10−3 . Then from (5.32) we must have
b = 9 to guarantee (Λq )dB > 45 dB.
b) Since the PSD of a(t) is
Pa (f ) = ATa triang (f Ta ) ,
the bandwidth of a(t) is B = 1/Ta . Hence the sampling frequency is Fs = 2/Ta = 10 MHz and
the bit rate
Rb = 9 · 107 = 90 Mbit/s .
c) Now the channel bandwidth is set equal to Bmin = 1/T , and from (4.177) it must be
log2 M ≥
Rb
=
Bmin
l
90
16
m
=6.
Then M = 26 = 64, and the modulation is 64-QAM.
d) The value of the reference SNR Γ is given by (4.300) where
Teff,Rc = Ts + (F − 1)T0 = 400 + 30.62 · 290 = 9280.6 K .
Then
(Γ)dB
T
= −60 + 114 − 10 log10 eff,Rc
− 10 log10 (Bmin )MHz
T0
= −60 + 114 − 15.05 − 12.04 = 27 dB .
Correspondingly, from Tab. 4.1
Pbit ' 3 · 10−7 .
Last, from (5.81) with (Λq )dB = 45 dB, it is
(ΛPCM )dB = 48 dB .
Solution to Problem 5.25 Use of (5.94) yields
Pbit,N = 4 · 10−5 .
Then, (5.81) gives
(ΛPCM )dB = 50 − 10 log10 1 + 4 · 4 · 10−5 218 − 1
= 33.67 dB .
Solution to Problem 5.26 From (5.81) with Λq = 22b and Pbit,N as given by (see Tab. 4.1 for the
expression of Pbit for M -QAM)
N analog repeaters :
N regenerative repeaters :
Pbit,N =
4(1 −
√1
M
Pbit,N =
4(1 −
√1
M
)
log2 M
)
log2 M
Q
r
NQ
3
Γ
M −1N
r
!
3
Γ
M −1
!
,
where M = 16, we get the values of ΛPCM presented in the table below.
  b    (ΛPCM)dB (analog rep.)   (ΛPCM)dB (regenerative rep.)
  3           18.0                       18.0
  4           23.0                       24.0
  5           29.6                       30.1
  6           34.5                       36.1
  7           37.8                       42.1
  8           39.2                       48.2
Then regenerative repeaters are more convenient for b ≥ 4.
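The comparison behind the table can be reproduced with the sketch below (not part of the original solution). The mapping from Pbit,N to ΛPCM follows (5.81); the reference SNR Γ and the number of repeaters N used in the example call are placeholders, since the problem's own link data are not restated here.

from math import erfc, sqrt, log10

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def lambda_pcm_db(b, pbit_n):
    """(5.81): Lambda_PCM in dB for b bits and end-to-end bit error probability pbit_n."""
    lam = 4 ** b / (1 + 4 * pbit_n * (4 ** b - 1))
    return 10 * log10(lam)

def pbit_16qam(gamma):
    # Tab. 4.1 approximation for Gray-mapped 16-QAM
    M, log2M = 16, 4
    return 4 * (1 - 1 / sqrt(M)) / log2M * Q(sqrt(3 * gamma / (M - 1)))

gamma, N = 300.0, 10        # hypothetical reference SNR and number of link sections
for b in range(3, 9):
    analog = lambda_pcm_db(b, pbit_16qam(gamma / N))   # analog repeaters: SNR divided by N
    regen = lambda_pcm_db(b, N * pbit_16qam(gamma))    # regenerative repeaters: bit errors add up
    print(b, round(analog, 1), round(regen, 1))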
Solution to Problem 5.27
a) The solution is similar to that of Problem 5.26, where now b = 8 and M = 16. By evaluating
ΛPCM for five values of N we get the table below.
  N    (ΛPCM)dB (analog rep.)   (ΛPCM)dB (regenerative rep.)
  1          45.70                      45.70
  2          26.26                      44.14
  3          18.31                      43.00
  4          14.19                      42.09
  5          11.65                      41.34
Then, regenerative repeaters are more convenient for N ≥ 2.
b)
  N    (ΛPCM)dB (analog rep.)   (ΛPCM)dB (regenerative rep.)
  1          48.16                      48.16
  2          48.16                      48.16
  3          48.16                      48.16
  4          47.78                      48.16
  5          45.12                      48.16
  6          39.81                      48.16
  7          34.82                      48.16
  8          30.81                      48.16
In this case, for up to N = 4 repeaters, using regenerative or analog repeaters is equivalent, since the
bit error probability is negligible. Regenerative repeaters are more convenient for N > 4.
Solution to Problem 5.28
a) For the Gaussian signal the saturation probability is
Psat = 2Q
and hence the saturation value is
vsat,G = σa Q−1
Psat
2
while the quantization step size is
=
vsat,G
σa
√ −1
1 Q (1.9 · 10−1 /2) ≈ 1.3
2vsat
.
2b
√
For the sinusoidal signal a(t) = A sin(2πf0 t) since the statistical power is one, we obtain A = 2,
and
√
vsat,S = A = 2 .
∆G =
Hence
∆G
vsat,G
1.3
=
= √ = 0.919
∆S
vsat,S
2
⇒
b) From (5.32)
(Λq,G )dB − (Λq,S )dB = −20 log
c) The difference of the number of bits is
bG − b S =
=
(Λq,G )dB − 4.77 + 20 log(vsat,G )
6.02
−
∆G
∆S
vsat,G
vsat,S
dB
= −0.73 dB .
= 0.73 dB
(Λq,S )dB − 4.77 + 20 log10 (vsat,S )
6.02
42 − 4.77 + 20 log10
42 − 4.77 + 20 log10 1.3
−
6.02
6.02
√
2
= d6.56e − d6.68e = 0
d) From
(Λq,G )dB = 42 dB
the number of bits is b = 7.
Since the bandwidth is B = 4 kHz, the rate is
Rb = b2B = 56 kbit/s .
Solution to Problem 5.29
a) {s1 (t), s2 (t)} is not an orthonormal basis, since the power of the signals is not unitary and they
are not even orthogonal.
b) The transmit signaling energy is
EsTx =
Z
=2
2
|s1 (t)| dt =
Z
T /2
Z
2
triang t − T /2 dt
T /2
T
0
8 T3
8 3 T /2
[t /3]0 =
= T /3 .
2
T
3 · T2 8
(2t/T )2 dt =
0
Now, the signal-to-noise ratio is given by (4.300), where from (4.181)
PTx =
EsTx
TR
⇒
(PTx )dBm = 15.23 dBm ,
(aCh )dB = (ãCh )dB/km · 24.89 = 124.45 dB, Teff,Rc /T0 = FA and Bmin = 1/(2T ). Hence
(Γ)dB = 17.25 dB .
c) For the solutions of points b) and c) we observe that the transmission is 2-PAM and we obtain the
number of repeaters/regenerators by inverting (5.93) and (5.95) for M = 2.
Since Q−1 (10−3 ) = 3 we obtain
Nrepeaters =
and
Nregenerators =
Solution to Problem 5.30
a) From (4.219), it is
Pbit =
and therefore from (5.81)
ΛPCM =
$
Γ
[Q−1 (Pbit )]2
Pbit
√ Q
Γ
%
=
=5
Pbit
Q (7.28)
= 1010 .
p
3
3
Q
Γ/5 = Q(4.6) = 1.59 · 10−6 ,
4
4
Λq
102.4
=
2
1 + 4Pbit (L − 1)
1 + 4 · 1.59 · 10−6 (28 − 1)
and
b) In this case
0
Pbit
(ΛPCM )dB ' (Λq )dB = 24 dB .
= N P bit, and we obtain
ΛPCM =
Λq
102.4
=
1 + 4N Pbit (L2 − 1)
1 + 400 · 1.59 · 10−6 (28 − 1)
and
(ΛPCM )dB = 23.3 dB .
c) From (4.291) Bmin = (log2 L/ log2 M ) B with log2 L = 4, hence
log2 M = (B/Bmin ) log2 L = (4/2) · 4 = 8 bit
or M = 28 = 256.
8.6 SOLUTIONS TO PROBLEMS OF Chapter 6
Solution to Problem 6.1
a) Assuming iid symbols, from (6.50) and (6.76) the PSD of the baseband PAM signal is given by
1
rcos(f T, ρ) ,
T
PsPAM (f ) =
where T = 1/9600 and ρ = 0.5. Then from (1.251) the PSD of the modulated DSB-SC signal is
1
1
Ps
(f − f0 ) + PsPAM (f + f0 ) .
4 PAM
4
PsTx (f ) =
b) The optimum receiver is composed by a mixer, a matched filter with square root raised cosine
characteristic, a sampler and a threshold detector with zero threshold.
Solution to Problem 6.2 We derive ma = 1 and σa2 = 3. Hence the Ftf of ra (nT ) yields (see (6.52))
"
Pa (f ) = T 3 + 2 cos(2πf T ) +
+∞
X
k=−∞
k
δ f−
T
#
.
The required PSD PsTx (f ) is given by (6.50) with
HTx (f ) = T rect(f T ) .
At the output of the channel we have
PsRc (f ) = PsTx (f ) |GCh (f )|2 ,
with
GCh (f ) = g0 3T rect(f 3T )e−j2πf
T
6
.
Solution to Problem 6.3 The transmit pulse in RZ is hTx (t) = rect (2t/T ), with T = log2 M/Tb =
1 µs. The baseband equivalent of the QPSK signal is
(bb)
sTx (t) =
+∞
X
k=−∞
ak hTx (t − kT ) ,
where the transmit pulse in RZ is hTx (t) = rect (2t/T ), with Ftf
HTx (f ) =
fT
T
sinc
2
2
Since the variance of the symbol sequence {ak } is σa2 = 2, from (6.57), it is
Ps(bb) = T 2 sinc2
Tx
fT
2
.
The channel has baseband equivalent impulse response (see Section 1.5.3)
response (see Tab. 1.3)
1 (bb)
1
1
G
(f ) =
.
2 Ch
2 1 + j2π Ff
Ch
Then from (1.258)
From (6.58) it is
=
h
1
Ps(bb) (f − f0 ) + Ps(bb) (−f − f0 )
4
Rc
Rc

2
(bb)
gCh (t), with frequency
1 (bb) 2
Ps(bb) = Ps(bb) GCh (f )
4
Rc
Tx
1
fT
T
2
sinc
=
2
8
2
f
1 + 2π FCh
PsRc (f ) =
1
2
(f −f0 )T
2
i
(f +f0 )T
2
2

sinc
T  sinc

2 +
2  .

32
f −f0
f +f0
1 + 2π FCh
1 + 2π FCh
The PSD of the transmitted signal PsTx(f) (dashed line), the squared amplitude of the channel frequency
response |GCh(f)|² (dotted line), and the PSD of the received signal PsRc(f) (solid line) are shown in the
figure.
[Figure: the three spectra plotted versus f, centered around ±f0.]
Solution to Problem 6.4
a) We set vsat = 3σa to get a saturation probability of 10−3 . Then from (5.32) we must have b = 9
to get (Λq )dB > 45 dB.
b) Since the PSD of ai (t) is
Pai (f ) = ATs triang (f Ts ) ,
its bandwidth is B = 1/Ts . Hence the sampling frequency is Fs = 2/Ts = 10 MHz and the bit
rate is
Rb = 9 · 107 = 90 Mbit/s .
c) Since the available channel bandwidth is BCh = 30 MHz and the bandwidth of the modulated
QAM signal is Bs = (1 + ρ)/T , with T = Tb log2 M , we must have a cardinality M such that
log2 M ≥
Rb (1 + ρ)
=
BCh
l
108
30
m
=4.
Then M = 24 = 16, and the modulation is 16-QAM. The actual symbol rate is 1/T = 90/4 =
22.5 MBaud and the bandwidth of the modulated QAM signal is B s = 27 MHz.
d) From (6.51), the PSD of the symbols is
Pa (f ) = T
M −1
.
3
From (6.57) and (6.58) we have
PsTx (f ) =
T M −1
P0 [rcos ((f − f0 )T, ρ) + rcos ((f + f0 )T, ρ)] ,
4
3
where T = 44.44 ns and ρ = 0.2. The value P0 is determined according to the desired statistical
power of the transmitted QAM signal, which is given by
MsTx = P0
M −1
.
6
Solution to Problem 6.5
a) No, the periodic repetition of H(f ) with period 1/T is not constant.
b) Yes, the pulse has nulls at ` T8 , ` 6= 0, therefore also at nT, n 6= 0.
c) No, the pulse has duration 3T and has two interferers.
d) Yes, the pulse has duration T /3.
e) No, the pulse has nulls only at 2`T, ` 6= 0.
Solution to Problem 6.6
a) Yes, the pulse is the minimum bandwidth Nyquist pulse, with timing phase t 0 = T /4.
b) [Figure: the symbol sequence ak (values ±1 at the instants kT) and the corresponding transmitted signal sTx(t), whose samples at the instants kT + T/4 reproduce the symbol values.]
Solution to Problem 6.7 With the condition B < 1/T , for each frequency, in (6.74) only two terms
overlap, namely for 0 < f < 1/T ,
1
H(f ) + H f −
T
Then
h
< H(f ) + H f −
1
T
i
= H0 [rect (f T ) + rect ((f − 1/T )T )] + O(f ) + O(−f )
= H0 ,
where O(f ) denotes the odd function around 1/(2T ). For the imaginary part, recalling the Hermitian
symmetry of the Fourier transform of a real signal, it is
h
= H(f ) + H f −
1
T
i
= E(f ) − E(−f ) = 0 ,
where E(f ) denotes the even function around 1/(2T ). Then, according to (6.74), ISI is absent.
Solution to Problem 6.8 The minimum value of ISI is 0 and is obtained in correspondence of a
sequence of all symbols “0”.
The maximum value of ISI corresponds to a sequence of all symbols “1” and results in
5 Σ from k=1 to +∞ of e^(−2k) = 5 e⁻²/(1 − e⁻²) ,
where h0 = 5.
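The closed-form value of the geometric series above can be confirmed with a one-line partial-sum check (not part of the original solution).

from math import exp

partial = 5 * sum(exp(-2 * k) for k in range(1, 50))   # truncated sum over the interferers
closed = 5 * exp(-2) / (1 - exp(-2))                   # closed-form value quoted above
print(partial, closed)                                 # both ~0.78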
Solution to Problem 6.9 Noting that
g(t) = 2πfc e^(−2πfc t) 1(t) ,
the pulse at the output of the filter is
qR(t) = { 1 − e^(−2πfc t) ,  0 < t < T ;   (1 − e^(−2πfc T)) e^(−2πfc (t−T)) ,  t > T } .
The eye diagrams in the three cases are represented in the figure below.
[Figure: the eye diagrams sR(t0) for cases a), b) and c), plotted over the interval (T, 2T).]
Solution to Problem 6.10
a) Since f2 > B, as far as its effect on the transmitted signal is concerned, the channel can be considered as having infinite
bandwidth. The inverse Ftf of GCh (f ) yields
gCh (t) = δ(t) +
gc
[δ(t − tg ) + δ(t + tg )] ,
2
from which
sRc (t) = (sTx ∗ gCh )(t) = sTx (t) +
gc
[sTx (t − tg ) + sTx (t + tg )] .
2
b) For a PAM signal with pulse hTx , the matched filter has impulse response
hRc (t) = hTx (t0 − t) ,
where t0 is the sampling phase. Here t0 = 0. The pulse associated to the PAM signal at the receiver
input is hTx (t)+ g2c [hTx (t−tg )+hTx (t+tg )]. Then at the output of the matched filter, the discrete
time overall impulse response is hi = h(iT ) = rhTx (iT ) + g2c [rhTx (iT + tg ) + rhTx (iT − tg ),
where
Z
hTx (τ )hTx (τ − u) dτ .
rhTx (u) =
Reminding that rhTx (0) = EhTx , the sample under detection is then
sR,k = ak {EhTx + gc rhTx (tg )}
o
n
X
gc
ak−i rhTx (iT ) + [rhTx (iT + tg ) + rhTx (iT − tg )] .
+
2
i6=0
c) For tg = T ,
(i)
hi
= rhTx (iT ) +
gc
[rhTx (it + T ) + rhTx (it − T )] .
2
Solution to Problem 6.11
ik = e1/2 − 1
h
5
3
7
e− 2 − e − 2 + e − 2 − . . .
3
= e1/2 − 1 e− 2
+∞
X
n=0
i
3
(−e)−n = e1/2 − 1 e− 2
1
.
1 + e−1
Solution to Problem 6.12 The mean value is mi = 0, since the alphabet of input symbols {ak } is
balanced.
The variance of ISI, equal to the statistical power, is given by
σi2 = Ma
+∞
X
n=1
n6=2
2
qR
(nT ) .
Since Ma = 1, then
σi2 = 1 − e−1
2
+ e2 − 1
+∞
X
n=3
e−2n = 1 − e−1
From (6.71) we have
2
+ e2 − 1
e−6
.
1 − e−2
2
Pi (f ) = Pa (f ) H(i) (f ) ,
where
Pa (f ) = T ,
and
H(i) (f ) = 1 − e−1 e−j2πf T + e2 − 1
= 1−e
−1
e
−j2πf T
2
+ e −1
+∞
X
e−2n e−j2πf nT
n=3
e−6−j6πf T
.
1 − e−2−j2πf T
Solution to Problem 6.13
a) The OOK modulation can be considered a PAM modulation with binary symbols ak ∈ {0, 1},
where in this case the transmission pulse is Gaussian, given by
hTx(t) = (Es/(√(2π) σ_hTx)) e^(−(1/2)(t/σ_hTx)²) .
Es represents the pulse optical energy. At the output of the channel we have
QCh(f) = HTx(f) GCh(f) .
From Tab. 1.3 we have
HTx(f) = Es e^(−2π² σ_hTx² f²) ,
while the fiber frequency response is (see (2.123))
GCh(f) = A_F⁻¹ e^(−2π² σ_F² f²)
where σ_F = d σ̃_F is the dispersion for a fiber length d. The receive filter has a Gaussian shape
with
HRc(f) = K e^(−2π² σ_hTx² f²) .
Then the pulse at the decision point is again Gaussian
qR(t) = (K Es/(A_F √(2π) σ_R)) e^(−(1/2)(t/σ_R)²) ,   with   σ_R² = σ_F² + 2 σ_hTx² .
b) Sampling with phase t0 = 0, we have
h_i^(i) = qR(iT) = (K Es/(A_F √(2π) σ_R)) e^(−(1/2)(iT/σ_R)²) ,   i ≠ 0 .
c) The first interferer, with respect to the desired term h0 = qR(0), has amplitude
h_1^(i)/h0 = e^(−(1/2)(T/σ_R)²) .
For T = (1/2.4) · 10⁻⁹ s and imposing h_1^(i)/h0 = 0.1, the condition
exp(−T²/(2σ_R²)) = 0.1 ,   with σ_R² = σ_F² + 2 σ_hTx² and σ_F = d σ̃_F ,
gives d = 141 km.
Solution to Problem 6.14 Assuming a ML receiver, the decision threshold is 0. The error probability
is then evaluated conditioning on the ISI.
Pe = P[ak ≠ âk | ik = −1/2] pi(−1/2) + P[ak ≠ âk | ik = 0] pi(0) + P[ak ≠ âk | ik = 1/2] pi(1/2) .
The conditional probabilities are
P[ak ≠ âk | ik = −1/2] = (1/2) Q(1/(2σI)) + (1/2) Q(3/(2σI))
P[ak ≠ âk | ik = 0] = Q(1/σI)
P[ak ≠ âk | ik = 1/2] = (1/2) Q(3/(2σI)) + (1/2) Q(1/(2σI)) .
Then
Pe = (1/2) Q(1/σI) + (1/4) Q(1/(2σI)) + (1/4) Q(3/(2σI)) .
This has to be compared with Pe = Q(1/σI) obtained in the absence of ISI.
Solution to Problem 6.15
a) At the output of the matched filter we have a triangular pulse with base 2T . Introducing a timing
phase error of 10%, we have a reduction of the sample qR (t0 ) to a value h0 = 0.9qR (t0 ) with a
consequent loss (in dB) in the value of Γ given by −20 log 10 (0.9) = 0.91 dB.
b) The error in the sampling phase introduces also an interferer with amplitude h 1 = qR (t0 + T ) =
qR (t0 )/10, so that
qR (t0 )
h0
= ak−1
.
ik = ak−1
10
9
Note that the symbol determining the ISI contribution is ak−1 if the timing phase offset is positive,
while it should be ak+1 for a negative offset. On the other hand, since the pulse is symmetric, the
amplitudes associated to the desired term and the interferer do not change. The received signal
can be written as
h0
rk = ak h0 + ak−1
+ wR,k ,
9
641
SOLUTIONS TO PROBLEMS OF CHAPTER 6
where wR,k is the contribution of the noise at the decision point, modeled by a Gaussian random
variable with zero mean and variance σI2 . Then the error probability becomes
Pe =
10h0
1
Q
2
9σI
which must be compared to Pe = Q
10h0
9σI
+
8h0
1
Q
2
9σI
,
in the case of no ISI.
Solution to Problem 6.16
a) Since the bandwidth is 1/(2T ), the energy of h(t) is the same of {h i } and
Eh = T
2
X
h2i ,
i=0
q
7 Eh
. The error probability is obtained conditioning on the values
from which it is h0 =
8 T
assumed by the two interferers and is given by
1
Pe = Q
2
r
7 Eh
8 T σI2
1
+ Q
4
" r
7
1
+
8
2
!r
Eh
T σI2
#
1
+ Q
4
" r
7
1
−
8
2
!r
b) From (6.93) it is
γ=
and
√
1
1
Pe = Q ( γ) + Q
2
4
In the absence of ISI we have
"
1+
7 Eh
,
8 T σI2
r !
2
7
#
√
1
γ + Q
4
√
Pe = Q ( γ) .
The error probabilities are plotted in the graph.
"
1−
r !
2
7
√
γ
#
.
Eh
T σI2
#
.
642
SOLUTIONS TO PROBLEMS
The penalty is basically due to the lastterm of the error probability. Hence, we have approximately
a penalty of −20 log10 1 −
probability Pe = 10
−6
.
p
2/7
= 6.64 dB, which results also from the graph at the error
Solution to Problem 6.17
a) At the output of the matched filter, for an ideal channel we have a triangular pulse with support
2T and maximum value T in t = 0. The effect of the channel is to give a pulse at the output of
the matched filter of the form
qR (t) =0.8 T triang
t − T /8
T
+ 0.1 T triang
+ 0.2 T triang
t − 17T /8
T
t − 9T /8
T
+ 0.1 T triang
t − 25T /8
T
,
which reaches its maximum value, equal to 0.8 T , in t = T /8. Sampling with the sampling phase
t0 = T /8 we have
hi = qR iT +
T
8
and the discrete time filter giving ISI is
(i)
hi
=
 0.8 T , i = 0


 0.2 T , i = 1
0.1 T , i = 2


 0.1 T , i = 3
0
, i < 0, i > 3 ,
= 0.2 T δi−1 + 0.1 T δi−2 + 0.1 T δi−3 .
b) Modelling ISI as a Gaussian random variable, its mean and variance, given by (6.109), are m i = 0
and
σi2 = 2T 2 (0.2)2 + (0.1)2 + (0.1)2 = 0.12 T 2 ,
643
SOLUTIONS TO PROBLEMS OF CHAPTER 6
assuming σa2 = 2. Hence the variance per dimension is σi2 /2 = 0.06 T 2 . Then, noting that
the noise is circulary symmetric complex Gaussian with variance per dimension given by (6.49),
σI2 = N20 T , from (6.94) with σI2 replaced by σI2 + 12 σi2 the error probability is

Pe = 2Q  q
0.8T
0.06T 2
+
N0
T
2

s
 = 2Q
(0.8)2
0
0.06 + N
2T
!
.
In the case of an ideal channel gCh (t) = δ(t), it is h0 = T and
Pe = 2 Q
From (6.60) Γid =
Tab. 4.1
N0
2T
r
1
N0
2T
.
(the subscript “id” is used to denote the ideal channel case) and from
Pe = 2 Q
√ Γid ,
The above error probability in the presence of ISI can be compared to the ideal also in terms of
Γid ,
r
0.64
Pe = 2Q
Γid
.
1 + 0.06Γid
Solution to Problem 6.18
a) At the output of the matched filter, for an ideal channel we have a triangular pulse with support
2T and maximum value T in t = 0. The effect of the channel is to give
t
qR (t) = 0.8 T triang
T
+ 0.3j T triang
t − T /2
T
+ 0.2 T triang
Sampling at t0 = 0 we have
hi = qR (iT ) = T
(
t−T
T
.
0.8 + j0.15 , i = 0
0.2 + j0.15 , i = 1
0
, i < 0 and i > 1
(i)
and the filter giving ISI is hi = (0.2 + j0.15) T δi−1 .
b) The received signal can be written as
rk = ak h0 + ak−1 h1 + wk
where the noise is circulary symmetric complex Gaussian with variance per dimension given by
(6.49), σI2 = N20 T .
The symbol values are modified according to the rule ak −→ h0 ak
(1 + j) −→ T (0.65 + j0.95)
(−1 + j) −→ T (−0.95 + j0.65)
(−1 − j) −→ T (−0.65 − j0.95)
(1 − j) −→ T (0.95 − j0.65) ,
644
SOLUTIONS TO PROBLEMS
while the possible values of ISI, according to the values of h1 ak−1 , are
(1 + j) −→ T (0.05 + j0.35)
(−1 + j) −→ T (−0.35 + j0.05)
(−1 − j) −→ T (−0.05 − j0.35)
(1 − j) −→ T (0.35 − j0.05) .
Assuming the decision rule does not account for the phase distortion introduced by the channel,
so that the thresholds are at zero in both dimensions. Then, we have
h h i
1
0.7T
T
1.3T
0.3T
1
Q
Q
+Q
+
+Q
4
σI
σI
4
σI
σI
h i
T
1
0.6T
0.9T
1
Q
+
+Q
+ Q
2
σI
4
σI
σI
1
0.3T
0.6T
0.7T
1
1
= Q
+ Q
+ Q
4
σI
2
σI
4
σI
1
1
1
0.9T
T
1.3T
+ Q
+ Q
+ Q
.
4
σI
2
σI
4
σI
Pe =
i
The error probability can be expressed in terms of γ defined in (6.93),
γ=
T 2 |0.8 + j0.15|2
.
σI2
Hence
q
q
1
γ
1
γ
Pe = Q 0.3
+ Q 0.6
+
4
0.6625
2
0.6625
q
q
1
1
γ
γ
+ Q 0.9
+ Q
+
4
0.6625
2
0.6625
which has to be compared to
q
1
γ
Q 0.7
4
0.6625
q
1
γ
Q 1.3
.
4
0.6625
√
Pe = 2 Q ( γ)
representing the case with ideal channel.
Solution to Problem 6.19 The symbol rate is 1/T = Rb / log2 M = 1 MBaud. The Fourier transform
of the transmit pulse is
HTx (f ) = T rect(f T ) .
Then at the output of the channel we have
QC (f ) = T e
whose energy is given by
|f |
−F
Ch
rect(f T ) ,
EqC = T 2 FCh 1 − e
− F2B
Ch
,
where B = 1/2T .
a) If the receive filter is matched to hTx , imposing unit energy, we have
√
HR (f ) = T rect(f T ) ,
SOLUTIONS TO PROBLEMS OF CHAPTER 6
645
and at the decision point
QR (f ) = T
√
Te
|f |
−F
rect(f T ) .
Ch
The noise variance at the decision point is then, from (6.49), σ I2 =
the symbol under detection is
h0 = qR (0) = 2T
√
T
Z
B
e
−F
Hence, reminding (6.104)
4T FCh 1 − e
The value associated to
√ − B
df = 2T FCh T 1 − e FCh .
f
Ch
0
h20
Γ
= 2
σI2
σa
N0
.
2
1−e
− FB
Ch
− F2B
Ch
From (6.70) we have, limited to one period 1/T ,
H(i) (f ) = F [h(t) − h0 δ(t)] =
√
Te
2
.
|f |
−F
Ch
− h0 .
From (6.109) we have
σi2 = σa2 T
h
Z
1
T
1
−T
(i) 2
H (f ) df
= σa2 T 2 FCh 1 − e
and finally,
1
Pe = 2 1 −
M
Q
r
− F2B
h20
σi2 + σI2
Ch
1−e
− F2B
Ch
− h20
=2 1−
with
σ 2 T 2 FCh
β= a 2
h0
σ2
− σa2 + a
Γ
i
1
M
,
1−e
Q
− F2B
HR (f ) = r
and
QR (f ) = r
|f |
−F
Ch
FCh 1 − e
Te
− F2B
Ch
rect(f T ) ,
2|f |
−F
Ch
FCh 1 − e
− F2B
Ch
rect(f T ) .
1
√
β
Ch
4T FCh 1 − e
b) If the filter is matched to qC , again imposing unit energy, we have
e
− FB
Ch
2
646
SOLUTIONS TO PROBLEMS
Since from (6.49), σI2 =
N0
,
2
it is
− 2B
2
4 1 − e FCh
h20
Γ
.
=
− 4B
σI2
σa2 1 − e− F2B
Ch
1 − e FCh
As derived in the previous point,
σi2 = σa2
1
− F2B
1
Pe = 2 1 −
M
T 2 FCh 1 − e
Ch
and again the error probability is
h
T FCh
2
Q
r
1−e
h20
2
σi + σI2
− F4B
Ch
− h20
i
,
.
c) Using a filter to remove the ISI, it is
|f |
HR (f ) = r
e FCh
2B
FCh e FCh − 1
which gives
QR (f ) = r
rect(f T ) ,
T
FCh e
In this case σi2 = 0, while
2B
FCh
−1
rect(f T ) .
Γ
1
h20
.
2B
= 2
σI2
σa (T F )2 1 − e− F2B
Ch
e FCh − 1
Ch
The error probability is then


v
u
1
1
 Γ 2B

Pe = 2 1 −
Q u
 .
t σ2
− F2B
M
a
FCh
Ch
1−e
e
−1
The error probabilities corresponding to cases a), b) and c) are plotted in the graph below and
compared to the ideal channel case with matched filter.
SOLUTIONS TO PROBLEMS OF CHAPTER 6
647
Note that the filter matched to the received pulse of curve b), which has been designed to be
optimum when there is no ISI, gives the best performance when the noise is predominant, that is
for low values of Γ. On the other hand, the presence of ISI gives an irreducible error probability
level for the curves a) and b).
Solution to Problem 6.20
0, 1, −1, 1, 0, 0, −1, 0, 0, 0, −1, 1, 0, 0, 0, 1, −1, 1
Solution to Problem 6.21
a) Since ma = 21 and σa2 = 14 , from (6.51) the PSD of symbols {ak } is
Pa (f ) =
+∞
1
T
T X
δ t−`
.
+
2
4
T
`=−∞
The modulation pulse in the NRZ format is
hTx (t) = rect
with
t − T /2
T
,
T
HTx (f ) = T sinc (f T ) e−j2πf 2 .
From (6.50) we get
PsTx (f ) =
T
1
sinc2 (f T ) + δ(f )
2
4
b) The symbol sequence {ck } = {2ak −1} has zero mean and variance σa2 = 1, but is still a sequence
of iid symbols, since the transformation does not introduce correlation. Hence the PSD of symbols
648
SOLUTIONS TO PROBLEMS
{ck } is
Pc (f ) = T .
Then from (6.50) we get
PsTx (f ) = T sinc2 (f T ) .
Solution to Problem 6.22 The sequence ck = ak − ak−1 can be interpreted as the output of a filter
with impulse response hn = δn − δn−1 and input {ak }. Then
Pc (f ) = Pa (f ) [2 − 2 cos(2πf T )] ,
with (see (6.51))
Pa (f ) =
+∞
1 X
T
1
+
.
δ t−`
2
4
T
`=−∞
Note that H(f ) nulls at the multiples of 1/T , hence
Pc (f ) = T [1 − cos(2πf T )] .
The modulation pulse in the RZ format is
hTx (t) = rect
From (6.50), with
HTx (f ) =
t − T /4
T /2
T
fT
sinc
2
2
we get
PsTx (f ) =
1
sinc2
4
fT
2
.
T
e−j2πf 4 ,
[1 − cos(2πf T )] .
Solution to Problem 6.23
a) From (6.50), noting that Pa (f ) = T , we have
PsTx,1 (f ) = T sinc2 (f T )
PsTx,2 (f ) = 4T sinc2 (f 2T ) .
b) Denoting with ck the coded symbol, it is obtained as output of a filter with impulse response
hi = δi + l3 δi−3 , i.e. ck = ak + l3 ak−3 . Since the Fourier transform of the pulse hTx,2 is
non-null at f = 1/(3T ), it is necessary to introduce a zero in the PSD of the symbols {c k }. From
(6.51)
Pc (f ) = Pa (f ) 1 + l23 + 2l3 cos(2πf 3T ) = T 1 + l23 + 2l3 cos(2πf 3T ) ,
which at f = 1/(3T ) assumes the value 1 + l3 = 0. Hence it must be l3 = −1.
SOLUTIONS TO PROBLEMS OF CHAPTER 6
649
Solution to Problem 6.24
a) The biphase-L transmission can be interpreted as antipodal PAM signalling, with a k = 2bk − 1,
therefore {ak } are iid symbols belonging to the alphabet {−1, 1}, with PSD
Pa (f ) = T
The pulse hTx has Fourier transform
HTx (f ) = sinc
Then, from (6.50) we have
fT
2
h
fT
2
PsTx (f ) = T sinc2
e−j2πf 4 − e−j2πf
3T
4
h
T
2
T
2 − 2 cos 2πf
b) The PSD of the coded symbol sequence {ck } is given by
i
i
Pc (f ) = Pa (f ) [2 + 2l1 cos (2πf T )] = T [2 + 2l1 cos(2πf T )] .
To have a zero in the PSD of sTx , since HTx (1/T ) 6= 0, we must have 2 + 2l1 cos(2π) = 0.
Hence, it must be l1 = −1.
Solution to Problem 6.25
a) The symbol rate is 1/T = Rb / log2 M = 1 MBaud. The Fourier transform of the modulation
pulse is
HTx (f ) = AT rect(f T ) .
Then at the output of the channel we have
QC (f ) = AT e
|f |
−F
Ch
rect(f T ) .
Its expression in the time domain is given by the inverse Fourier transform,
qC (t) = AT
Z
B
e
|f |
−F
Ch
ej2πf t df ,
−B
with B = 1/2T , which gives (denoting with TCh = 1/FCh )
qC (t) = AT
= 2AT
1 − e−j2πt−TCh
1 − ej2πt−TCh
+
TCh − j2πt
TCh + j2πt
TCh − e−TCh B [TCh cos(2πtB) + 2πt sin(2πtB)]
2
TCh
+ 4π 2 t2
b) Since the bandwidth of the pulse is B = 1/2T , the only Nyquist pulse that can be obtained is the
minimum-bandwidth pulse. Then, from (6.126) we have
|f |
GR (f ) = Ke FCh rect(f T ) .
650
SOLUTIONS TO PROBLEMS
The matched filter frequency response is proportional to Q∗C (f ), i.e.
GR (f ) = K AT e
|f |
−F
rect(f T ) .
Ch
Solution to Problem 6.26
a) The OOK modulation can be considered a PAM with binary symbols a k ∈ {0, 1}, where in this
case the transmit pulse is Gaussian, given by
1
−2
Es
hTx (t) = √
e
2πσhTx
t
σh
Tx
2
,
where Es represents the pulse optical energy. At the output of the channel we have
QC (f ) = HTx (f )GCh (f )
From Tab. 1.3 we have
HTx (f ) = Es e
−2π 2 σh2
Tx
f2
,
while the fiber frequency response is
−2π
GCh (f ) = A−1
F e
2
2 2
f
σF
where σF = dσ̃F = 0.2 ns is the dispersion. Then the pulse at the channel output is Gaussian
qC (t) =
p
2
= σh2 Tx + σF2 =
with σC
have a Gaussian shape
p
Es
−
√
e
AF 2πσC
t
2σC
2
,
T 2 /36 + 4 · 10−20 = 0.21 ns. In the frequency domain we still
QC (f ) =
2 2
Es −2π2 σC
f
e
AF
b) To have a Nyquist pulse at the decision point, for example a raised cosine shape, Q C (f ) =
rcos(f T, ρ), we get
rcos(f T, ρ)
HR (f ) = −2π2 σ2 f 2 .
C
e
Note that if σC /T is large, equalization becomes difficult since the gain of the equalizer at frequencies close to the value T1 (1 + ρ) becomes very large. The filter matched to qC is still Gaussian,
with variance equal to the variance of the pulse at the channel output
HR (f ) = K e−2π
2
2 2
σC
f
Solution to Problem 6.27 The transmit pulse is
hTx (t) = rect
2t
T
.
SOLUTIONS TO PROBLEMS OF CHAPTER 7
651
where T = 0.5 µs. The channel frequency response is (see Tab. 1.3)
GCh (f ) =
FCh
.
FCh + j2πf
The pulse at the channel output is then
QC (f ) =
T
fT
sinc
2
2
From (6.126) we have
HR (f ) = rcos(f T, 0.4)
Since the first zero of sinc
have
fT
2
FCh
FCh + j2πf
2(FCh + j2πf )
FCh T sinc f2T
is at f = 2/T , which is beyond the bandwidth of rcos(f T, 0.4), we
|HR (f )| = rcos(f T, 0.4)
2
p
2
FCh
+ 4π 2 f 2
FCh T sinc
2πf
.
FCh
The matched filter frequency response is proportional to Q∗C (f ), i.e.
arg HR (f ) = arctan
fT
2
fT
T FCh sinc 2
HR (f ) = K
,
2 FCh − j2πf
with
sinc f T 2
|HR (f )| = p
2
2 FCh
+ 4π 2 f 2
arg HR (f ) =
−2πf
FCh
arctan −2πf
FCh
− arctan
π−
, 2k T2 < f < (2k + 1) T2
, (2k + 1) T2 < f < (2k + 2) T2 .
8.7 SOLUTIONS TO PROBLEMS OF Chapter 7
Solution to Problem 7.1
a) For g(α) as in (7.2), properties P1–P4 can be immediately derived by the properties of the logarithm
function.
b) By differentiating P4 with respect to β we get
αġ(αβ) = ġ(β)
Then, with β = 1 we get
αġ(α) = ġ(1)
Now, let γ = ġ(1), it must be
ġ(α) =
γ
α
652
SOLUTIONS TO PROBLEMS
so g is of the type
g(α) = γ ln α + c
The constant c is determined by imposing P2 and we obtain c = 0. Thus we get
g(α) = γ ln α = log1/b α ,
where
1
= e1/γ .
b
Solution to Problem 7.2
a) The information of the value k ∈ Ax is obtained by (7.4) as
ix (k) = log1/2 (1 − p) + k log1/2 p
Then the information of x is obtained by replacing k with x in the above expression
ix (x) = log1/2 (1 − p) + x log1/2 p
By taking the expectation (7.5) we find the entropy
H(x) = log1/2 (1 − p) + E [x] log1/2 p = log1/2 (1 − p) +
p
log1/2 p
1−p
as we make use of the fact that the mean of a geometric rv is E [x] = p/(1 − p).
Since for a geometric rv the alphabet cardinality is infinite, so is its nominal information. Hence
its efficiency cannot be defined.
b) With the same procedure as above we get
ix (k) = k log1/2
p
1−p
+ n log1/2 (1 − p) + log1/2
+ n log1/2 (1 − p) + E log1/2
n
k
.
Then, by taking the expectation (7.5)
H(x) = E [x] log1/2
p
1−p
and considering that the mean is E [x] = np, while log1/2
H(x) ≤ np log1/2
p
1−p
n
k
n
x
≤ 0 for all k = 0, . . . , n, we obtain
+ n log1/2 (1 − p)
and hence the bound.
c) The bound in this case is
8(1/2 + 1/2) = 8 bit
which is higher that the nominal information bound log 2 (n + 1) ' 3.17 bit, and therefore useless.
By calculating the finite sum
E log1/2
n
x
we get H(x) = n − E log1/2
n
x
=
n
X
k=0
px (k) log1/2
n
k
' 5.456 bit
' 2.544 bit. The corresponding efficiency is ηx = 0.8026.
653
SOLUTIONS TO PROBLEMS OF CHAPTER 7
Solution to Problem 7.3 The alphabet A[x,y] is illustrated below
n
4
6
3
2
1
•
•
•
1
2
•
•
•
3
•
•
•
•
-
4 m
Observe that
ix,y (m, n) = log2 m + 2
whereas by the marginal rule (1.177)
px (m) =

25/48


,
,
,
,
13/48
py (n) =

 7/48
1/16
1
,
4
n=1
n=2
n=3
n=4
ix (m) = 2 ,
m = 1, 2, 3, 4

4 + log2 3 − 2 log2 5


4 + log2 3 − log2 13
iy (n) =

 4 + log2 3 − log2 7
4
,
,
,
,
,
n=1
n=2
n=3
n=4
By taking expectations of the information functions, we get the entropies
H(x, y) = E [log2 x] + 2 = 2 +
11 + log2 3
1
(0 + 1 + log2 3 + 2) =
' 3.15 bit
4
4
H(x) = 2 bit
and by (7.22)
3 + log2 3
' 1.15 bit
4
H(y|x) =
Analogously, since
H(y) = 4 +
25
7
13
15
log2 3 −
log2 5 −
log2 7 −
log2 13 ' 1.66 bit
16
24
48
48
we get H(x|y) ' 1.49 bit. The efficiency of the rve is found by (7.29) to be
η[x,y] =
(11 + log2 3)/4
11 + log2 3
=
' 0.79
2 log2 4
16
Solution to Problem 7.4 The transition probabilities for a QPSK system are, with q = Q
• for correct decisions p(i|i) = (1 − q)2
q
• for transitions between symbols whose decision regions are adjacent p(i|j) = q(1 − q)
• for transitions between symbols whose decision regions are opposite p(i|j) = q 2
Hence we get the PMD and entropy of â
pâ (i) =
4
X
j=1
pa (j)p(i|j) =
1
,
4
i = 1, 2, 3, 4
,
H(â) = 2
Es
N0
654
SOLUTIONS TO PROBLEMS
The conditional entropies are
H(â|a) =
X
pa (i) p(j|i) log2
i,j
1
p(j|i)
= 2 q log1/2 q + (1 − q) log1/2 (1 − q)
H(a|â) = H(a) + H(â|a) − H(â) = H(â|a)
and the mutual information
I(a, â) = 2 1 − q log1/2 q − (1 − q) log1/2 (1 − q) .
The results are plotted below
[bit]
26
H(â) = H(a)
H(a|â) = H(â|a)
I(a, â)
1
−20
−10
0
10
20 Es /N0 [dB]
For the given values of Es /N0 we get
Es /N0 [dB]
H(a|â) = H(â|a) [bit]
I(a, â) [bit]
0
10
20
1.26
1.84 · 10−2
1.17 · 10−21
0.74
1.98
2 − 1.17 · 10−21
Observe from their expressions that the entropies and the mutual information are twice those of a binary
symmetric channel with Pbit = q as given in Example 7.4 C. This is related to the fact that in a QPSK
system with
Gray coding, errors on the two binary symbols are indeed independent with
AWGN and
Pbit = Q
p
Es /N0 .
Solution to Problem 7.5
a) We have Ay = {−1, 0, 1} and since y0
enumeration
x−1 x0
1
0
1
1
0
0
0
1
Hence we get H(y0 ) = 2 ·
1
4
+
1
4
is a function of the iid symbols x0 , x−1 we can find by
y0
−1 py0 (−1) = 1/4
o
0
py0 (0) = 1/2
0
1
py0 (1) = 1/4
+1·
1
2
= 3/2
SOLUTIONS TO PROBLEMS OF CHAPTER 7
655
b) Analogously to a) we get
x−2
x−1
x0
x1
y−1
y0
y1
0
0
0
0
0
0
0
0
0
0
1
1
0
1
0
1
0
0
0
0
0
0
1
1
0
1
−1
0
0
0
0
0
1
1
1
1
0
0
1
1
0
1
0
1
1
1
1
1
−1
−1
0
0
0
1
−1
0
1
1
1
1
0
0
0
0
0
0
1
1
0
1
0
1
−1
−1
−1
−1
0
0
1
1
0
1
−1
0
1
1
1
1
1
1
1
1
0
0
1
1
0
1
0
0
0
0
0
0
−1
−1
0
0
0
1
−1
0
so there are 15 distinct triplets each of which has probability 1/16 and carries 4 bit information,
apart from the [000] triplet that has probability 2/16 and carries 3 bit information. Hence
H(y−1 , y0 , y1 ) = 14 · 4 ·
1
1
31
+3· =
16
8
8
c) With y = [y−N · · · yN ] we observe that there are 22N +2 − 2 sequences with probability 1/22N +2
and 2N + 2 bit information, and the all zero sequence with probability 1/2 2N +1 and 2N + 1 bit
information. Hence
H(y) = (22N +2 − 2)
= 2N + 2 −
1
1
(2N + 2) + 2N +1 (2N + 1)
22N +2
2
1
22N +1
and
Hs (y) = 1 +
1
1
− 2N +1
2N + 1
2
(2N + 1)
By taking the limit (7.34) we find the entropy per symbol and the efficiency of {y n } as
Hs (y) = 1 ,
ηy =
Solution to Problem 7.6
a) We have H(x, y|z) ≥ H(x|z) ≥ H(x|y, z)
1
' 0.64
log2 3
656
SOLUTIONS TO PROBLEMS
b) For any a ∈ Ax , since {x = a} implies {w = g(a)}, we have
pw (g(a)) ≥ px (a) and iw (g(a)) ≤ ix (a)
Hence we can write
H(w) = E [iw (w)] = E [iw (g(x))] ≤ E [ix (x)] = H(x)
c) Since py|x (b|a) = pu|x (a + b|a) and pxy (a, b) = pxu (a, a + b), we get
H(y|x) =
X
pxy (a, b) iy|x (b|a) =
X
pxu (a, a + b) iu|x (a + b|a)
a,b
a,b
=
X
pxu (a, c) iu|x (c|a) = H(u|x)
a,c
Solution to Problem 7.7 Let a be an ε-typical sequence for x with respect to relative frequency, then
by combining the first inequality of (7.41) with (7.43) we get
X N (a)
ix (a)
=
ix (a)
L
L
a∈Ax
>
X
a∈Ax
(1 − ε)px (a)ix (a) = (1 − ε)H(x)
so that a is seen to satisfy the first inequality of (7.44) with ε replaced by δ = εH(x). Analogously by
the second inequality of (7.41) we can see that a satisfies the second inequality of (7.44) with the same
value of δ.
Solution to Problem 7.8
a) From the distribution in the table, and with the code word lenghts L(µ s (A)) = L( ) = 2,
L(µs (B)) = L( ) = 4, and so on, we get the average length
Ly =
X
a
p(a)L(µs (a)) ' 3.34
Then, by calculating the entropy
H(y) = H(x) =
X
p(a) log2
a
1
= 4.1983
p(a)
we obtain the efficiency of the ternary code y
ηy =
H(y)
' 0.79
Ly log2 3
b) Let s = [s(0), s(T ), . . . s(`T − T )] be the binary word associated to the transmission of the letter
x excluding the 3T letter separator (for example if x = A, we have s = [10111], and if x is a space,
s = [0000]). Then, by the distribution above, we have that its average length is L s ' 5.95 and
the average duration Ts = T Ls ' 0.71 s. On the other hand when evaluating the efficiency of the
message s, we must take into account the letter separator, hence we define the word s 0 = [s, 000].
SOLUTIONS TO PROBLEMS OF CHAPTER 7
657
By determining the average length Ls0 = Ls + 3 ' 8.95 we get
ηs 0 =
and the information rate
R(s) =
H(x)
' 0.47
L s0
H(x)
= 3.91 bit/s
L s0 T
Solution to Problem 7.9
a) We complete the PMD by calculating px (3) thanks to the normalization condition (1.150) and
obtain px (3) = 5/12. In order to have a one-to-one fixed length coding map µ s it must be
5
log2 5 we get the efficiency of the
Ly = log2 M = 2. Since H(y) = H(x) = log2 3 + 76 − 12
code
7
5
1
−
log2 5 ' 0.89
ηy = log2 3 +
2
12
24
b) In a Shannon-Fano code, from (7.68) we would have codeword lengths
`0 = 2, `1 = 3, `2 = 4, `3 = 2
so that the average length is
Ly 0 =
3
1
1
7
·2+ ·3+
·4=
4
6
12
3
higher than the fixed length case, and its efficiency drops to
ηy0 ' 0.76
c) In a Huffman code, the codewords are
µopt
s (3) = [0] ,
µopt
s (0) = [10] ,
µopt
s (1) = [110] ,
µopt
s (2) = [111]
so that the average length is
Ly00 =
11
6
and the efficiency is
ηy00 ' 0.97
d) By the discussion on page 461 we know that we can obtain a code with unit efficiency if and only if
px (a) = 1/2`a with `a an integer for all a ∈ Ax . For a quaternary source this is true for example
if
px (0) = 1/2 , px (1) = 1/4 , px (2) = px (3) = 1/8
or any permutation of these values.
Solution to Problem 7.10
a) By the Huffman procedure we obtain
658
SOLUTIONS TO PROBLEMS
b
0
10
11
a
0
1
2
1/2
1/3
0
1/6
1
px (a)
1/2
1
0
and by calculating the average code length
1
1
1
+2
+
2
3
6
Ly = 1 ·
and the entropy
H(y) = H(x) =
=
3
2
log2 3
1
1
1
2
· 1 + log2 3 + (1 + log2 3) = +
2
3
6
3
2
we obtain the efficiency
log2 3
4
+
' 0.97
9
3
ηy =
b)
b
00
a
00
px (a) 9/36
010
011
101
110
1000
1001
1110
1111
01
10
11
02
20
12
21
22
6/36
0
6/36
1
4/36
3/36
3/36
0
2/36
1
2/36
0
1/36
1
12/36
1
0
5/36
0
21/36
3/36
1
6/36
1
0
9/36
0
1
15/36
1
0
The average length is in this case
Ly =
8
19
9
107
·4+
·3+
·2=
36
36
36
36
and since H(y) = H(xm ) = 2H(x), the effiency is
ηy =
2H(x)
= 0.981
107/36
and is increased with respect to coding one symbol at a time, as expected.
SOLUTIONS TO PROBLEMS OF CHAPTER 7
Solution to Problem 7.11
a) We get
Ax q =
n
pxq (−∆/2) = pxq (∆/2) =
Z
with PMD
pxq (−3∆/2) = pxq (3∆/2) =
pxq (−5∆/2) = pxq (5∆/2) =
±
∆
3
5
, ± ∆, ± ∆
2
2
2
∆
px (a) da =
0
Z
o
1 − e−1/2
' 0.197
2
2∆
px (a) da = e−1/2
Z∆∞
659
px (a) da =
2∆
1 − e−1/2
' 0.119
2
1
' 0.184
2e
The resulting entropy obtained from (7.6) is H(xq ) = 2.55 bit. Since the symbols in {x(nTs )}
are iid, so are those of {xq (nTs )} and Hs (xq ) = H(xq ). The information rate of xq is therefore
R(xq ) = 20.4 kbit/s.
b) By the Huffman coding procedure we get
µs (−∆/2) = [00] ,
µs (∆/2) = [01] ,
µs (3∆/2) = [101] ,
µs (−5∆/2) = [110] ,
µs (−3∆/2) = [100] ,
µs (5∆/2) = [111]
so that
Ly = 2.61 ,
ηy = 0.98
c) For L even and ∆ = V0 ln 2 the probability of the i-th quantization level is
pxq (Qi ) =
Z
|i|∆
(|i|−1)∆
px (a) da =
1 −|i|∆/V0 ∆/V0
1
e
e
− 1 = |i|+1
2
2
Since all the symbol probabilities are integer powers of 1/2, the Shannon-Fano procedure yields
a code with unit efficiency.
Solution to Problem 7.12
a) The coding map is a variable length encoder with
"
µs (n) = |0 .{z
..0
}1
n
#
The resulting code is a prefix code, since every codeword has a 1 in its last symbol, and all its
prefixes only have zeros.
b) The average codeword length is
Lb =
M −1
M −1
1 X
1 X
M +1
L(µs (n)) =
(n + 1) =
M
M
2
n=0
n=0
660
SOLUTIONS TO PROBLEMS
Hence the average entropy per symbol for {bq } is
Hs (b) =
H(b)
2 log2 M
=
= 2/3 bit
Lb
M +1
and the information rate is R(b) = Hs (b)/Tb ' 667 Mbit/s.
c) Since the average duration of a codeword is Ta = Lb Tb = 4.5 ns, and the energy transmitted with
each codeword is Eh = 0.6 · 10−15 V2 s, we obtain the average statistical power of the transmitted
signal as Ms = Eh /Ta = 1.33 · 10−7 V2 .
Solution to Problem 7.13
a) We can write C = {0, γ 1 , γ 2 , γ 3 } and see that the 0 word is in C. Moreover by taking sums of
all the remaining words we get
γ 1 + γ 2 = [111100] + [101011] = [010111] = γ 3
γ3 + γ1 = γ3 − γ1 = γ2 ,
γ3 + γ2 = γ3 − γ2 = γ1
Hence the code satisfies condition (7.120) of Definition 7.10 and is linear. The code has 2 k = 4
codewords, hence the information word length is k = 2.
b) A systematic generating matrix is of the type (7.123). By picking the third and fourth codewords
as columns we get


1 0
 0 1 


 1 0 
G=

 0 1 
 1 1 
1 1
and by Proposition 7.21 we can write a systematic parity check matrix as in (7.128)

1
 0
H=
1
1
0
1
1
1
1
0
0
0
0
1
0
0
0
0
1
0

0
0 
0 
1
c) All the nonzero codewords have Hamming weight kγ i kH = 4, which is therefore the minimum
Hamming weight and by Proposition 7.18 also the minimum Hamming distance of the code. We
can therefore detect up to 3 errors.
With a memoryless symmetric channel, ML decoding is equivalent to minimum distance decoding, and the code can only correct 1 error. By examining the distances from c̃ to the codewords
dH (c̃, 0) = dH (c̃, γ 2 ) = 4 ,
dH (c̃, γ 1 ) = dH (c̃, γ 3 ) = 2
we see that the most likely transmitted codewords are either γ 1 or γ 3 , equivalently. Assuming a
systematic encoding they yield the information words b̂ = [11] or b̂ = [01], respectively.
Solution to Problem 7.14
a) Yes, it is linear.
b) We have dmin = wmin = 2. Hence the code can detect 1 error.
SOLUTIONS TO PROBLEMS OF CHAPTER 7
661
c) From the systematic generating matrix

we can write by Proposition 7.21

1
 0
G=
1
0
H=
1
0
0
1 
1 
0
1
0
1
0
0
1
d) The entropy of the information words is
H(b) =
X
1
1
1
7
+ · 2 + 2 · · 3 = bit
2
4
8
4
pb (a) log1/2 pb (a) =
a∈{0,1}2
and it is also the entropy of the codewords since the coding map is one-to-one. The nominal
information of the codewords is n = 4 bit, hence the redundancy is given by 1 − H(c)/n =
9/16 = 0.5625.
Solution to Problem 7.15
a) The parity check matrix has 5 rows and 7 columns, so we have a (7, 2) linear code. As H is in
systematic form (7.128) we know that a possible generating matrix is given by (7.123)
 1
 0
 1

G=
 1
 1

0
1
1
1
1
0
1
1
0
The 2k − 1 = 3 nonzero codewords are
g 1 = [1011110] ,








g 2 = [0111101] ,
γ 3 = g 1 + g 2 = [1100011]
The minimum Hamming weight and hence the minimum Hamming distance of the code is d min =
kγ 3 kH = 4, and the code can only correct 1 error.
b) Since the channel is not symmetric we can not use the Hamming distance criterion and must
calculate the likelyhood function pc̃|c (ξ|γ) for all codewords γ. As the channel is memoryless
we can still write (7.105) and obtain
pc̃|c (ξ|0) = p(1|0)3 p(0|0)4 =
pc̃|c (ξ|g 1 ) = p(1|1)3 p(0|0)2 p(0|1)2 =
1 3
10
99 3
100
9 4
10
' 6.56 · 10−4
9 2
10
2
pc̃|c (ξ|g 2 ) = p(1|1)2 p(0|0)p(0|1)3 p(1|0) =
99
100
pc̃|c (ξ|γ 3 ) = p(1|1)p(0|0)p(0|1)3 p(1|0)2 =
99 9
100 10
9
10
2
1
100
1
100
3
1
100
3
' 6.56 · 10−5
1
10
1 2
10
' 8.82 · 10−8
' 8.91 · 10−9
662
SOLUTIONS TO PROBLEMS
The most likely codeword is therefore ĉ = 0, yielding the information word b̂ = [00]. Observe
that in this case minimum Hamming distance decoding would have given ĉ = g 1 .
Solution to Problem 7.16 The rule (similar to that for parity bit) is
cn+1 =
n
X
ci
i=1
a) Let a0 and b0 be two codewords in C 0 , obtained by expanding a and b ∈ C, respectively. Then
c0 = a0 + b0 is itself a codeword in C 0 since c = a + b ∈ C and c is expanded by letting
cn+1 =
n
X
ci =
n
X
(ai + bi ) =
ai +
n
X
bi = an+1 + bn+1
i=1
i=1
i=1
i=1
n
X
b) By expanding each column in the generating matrix G of Example 7.3 C we obtain

1
0
0
0
1
1
0
1




0
G =




0
1
0
0
1
0
1
1
0
0
1
0
0
1
1
1
0
0
0
1
1
1
1
0










which is in systematic form (7.123). Then, by Proposition 7.21 we can easily obtain a parity check
matrix as in (7.129)
H0 =
A0
I4

1
 1
=
0
1
1
0
1
1
0
1
1
1
1
1
1
0
1
0
0
0
0
1
0
0
0
0
1
0

0
0 
0 
1
c) Since the code is linear it is sufficient to find the minimum weight for C 0 . The initial Hamming
code C has wmin = 3, as shown in Tab. 7.2. As all the codewords in C with weight 3 are expanded
into codewords of weight 4, this is the minimum weight for C 0 .
Solution to Problem 7.17
a) The conditional error probabilities are
P [E|ak = 0] =
Z
+∞
v
P [E|ak = 1] =
pw (ρ − s0 ) dρ = e−2 ln 2 =
Z
1
4
v
−∞
pw (ρ − s1 ) dρ = 0
The system can have errors only when a 0 is transmitted. Hence the overall error probability is
given by
1
Pbit = pa (0) P [E|ak = 0] =
16
SOLUTIONS TO PROBLEMS OF CHAPTER 7
663
b) By considering the decision functions
D(ρ; 0) =
1 −2ρ
e
1(ρ) ,
2
D(ρ; 1) =
3 −2(ρ−1)
e
1(ρ − 1)
2
we see that D(ρ; 1) > D(ρ; 0) for ρ ≥ 1. Hence the optimum threshold is v opt = s1 = 1
c) From the conditional error probabilities found in a) we can write the joint PMD of a k , âk as
paâ (0, 0) =
3
,
16
paâ (0, 1) =
1
,
16
paâ (1, 1) =
3
4
Hence we get
H(ak , âk ) =
1
3
3
5
15
·4+
(4 − log2 3) + (2 − log2 3) = −
log2 3
16
16
4
2
16
H(ak ) =
1
3
3
2 + (2 − log2 3) = 2 − log2 3
4
4
4
3
13
(4 − log2 3) +
(4 − log2 13)
16
16
13
3
log2 3 −
log2 13
=4−
16
16
H(âk ) =
The mutual information is therefore
I(ak , âk ) =
7
13
−
log2 13 ' 0.49 bit
2
16
d) Since the transition probabilities are not symmetric we seek the source statistic that maximizes
I(ak , âk ). Let the PMD of ak be pa (0) = p and pa (1) = 1 − p. Then the entropies become
H(ak , âk ) =
3p
p
(2 − log2 p) +
(4 − log2 3 − log2 p) − (1 − p) log2 (1 − p)
4
4
H(ak ) = −p log2 p − (1 − p) log2 (1 − p)
3p
3
3
H(âk ) =
(4 − log2 3 − log2 p) − 1 − p log2 1 − p
4
4
4
and the mutual information is
3
1
3
3
I(ak , âk ) = − p − p log2 p − 1 − p log2 1 − p
2
4
4
4
Since Cs = maxp∈(0,1) I(ak , âk ), we seek the maximum of I(ak , âk ) by taking the derivative
with respect to p
h
d I(ak , âk )
1
3
3
3
3
= − − (log2 p + log2 e) − − log2 1 − p − log2 e
dp
2
4
4
4
4
and setting it to 0. We get the equation
3
log2
4
3
1
−
p
4
=
1
2
i
664
SOLUTIONS TO PROBLEMS
which is solved by
p=
3
4
1
= 0.428
+ 22/3
and yields the capacity
Cs = 0.558 bit/symbol
Solution to Problem 7.18
a) The transition
p probabilities for a QPSK system are given in the solution to Problem 7.4 in terms
of q = Q( Es /N0 ). In this case
1
Es = Eh /2 =
2
Hence q = Q(
p
Z
+∞
A df 2 T 2 rect T f =
−∞
1 2
A T = 5 · 10−10 V2 s
2
5/2) ' 0.11 and the transition probabilities are
p(i|i) = 0.79 ,
p(i|j) '
0.098
0.012
, i, j adjacent
, i, j opposite
b) By the symmetry of the transition probabilities it is justified to assume that the source achieving
the channel capacity is a memoryless symmetric source with pa (i) = 1/4, for all i. Hence we get,
again from the solution to Problem 7.4, that Cs = I(ak , âk ) ' 1 bit/symbol.
Solution to Problem 7.19
a) From (7.152) and under the constraint M1 + M2 = M, we can write the total capacity as
C = C1 + C2 = B1 log2 1 +
M1 g1
N1 B 1
+ B2 log2
1+
(M − M1 )g2
N2 B 2
By taking its derivative with repect to M1 we obtain
1
dC
=
dM1
ln 2
B2 g2
B1 g1
−
M1 g1 + N1 B1
(M − M1 )g2 + N2 B2
Then we seek the capacity by equating it to 0 and get
M1
N1
M − M1
N2
+
=
+
B1
g1
B2
g2
hence
M1 =
M/B2 + N2 /g2 − N1 /g1
1/B1 + 1/B2
M2 =
M/B1 − N2 /g2 + N1 /g1
1/B1 + 1/B2
and symmetrically
If both expressions for M1 and M2 lie in the interval (0, M) they represent the optimal power distribution. On the other hand if the above result gives M1 < 0 and M2 > M, then the optimal choice is
to transmit all power on subchannel 2, that is M1 = 0 and M2 = M, and viceversa.
SOLUTIONS TO PROBLEMS OF CHAPTER 7
b) In this case we have, by writing Mn = M −
C=
Pn−1
i=1
Mi
P
n−1
n−1
(M − i=1 Mi )gn
B X
Mi gi
+ ln 1 +
ln 1 +
ln 2
Ni B
Nn B
i=1
665
Then by taking its derivative with respect to Mj we get
∂C
B
=
∂Mj
ln 2
gj
gn
−
Pn−1
Nj B + M j g j
Nn B + (M − i=1 Mi )gj
for j = 1, . . . , n − 1. By equating it to 0 for each j and rewriting Mn into it we get
Mj +
Nn B
Nj B
= Mn +
,
gj
gn
j = 1, . . . , n − 1
Now let P0 = Mn /B + Nn /gn so that
Mj +
Nj B
= P0 B ,
gj
j = 1, . . . , n − 1
as in the statement. Moreover by summing over all j we get
M+B
n
X
Nj
j=1
= nP0 B
gj
from which the statement is proved.
The optimum power allocation in the general case (but you’re not allowed to use this result to solve
the problem!) corresponds to distributing the total power M among the subchannels according to the
“water pouring” technique, that is
Mi =
Bi (P0 − Ni /gi )
0
with
P0 =
M+
, Ni /gi < P0
, Ni /gi ≥ P0
P
Bi Ni /gi
Pi
i
Bi
where the sums are meant not over all values of i but only over those for which N i /gi < P0 . The result
is illustrated in the plot below
Nj
gj
P0
M1
Mj = 0
Mn
Mi
Nn
gn
N1
g1
Ni
gi
B1
Bi
Bj
Bn
666
SOLUTIONS TO PROBLEMS
Solution to Problem 7.20 For the proof of Proposition 7.28 we proceed analogously to that of Theorem
7.7. Consider the rvs ix,y (xn , yn ) which are iid with mean H(x, y). By the law of large numbers
L
1X
ix,y (xn , yn ) −→ H(x, y) ,
L→∞
L
in probability .
n=1
Since the pairs {(xn , yn )} are iid we can write
ix,y (x, y) =
L
X
ix,y (xn , yn )
n=1
and hence
ix,y (x, y)
L
We have
−→
L→∞
H(x, y) ,
ix,y (x, y)
lim P L→∞
L
in probability .
− H(x, y) < ε = 1
and therefore, for all δ > 0 we can find L0δ such that
ix,y (x, y)
1
− H(x, y) < ε > 1 − δ ,
P L
3
for all L > L0δ
Moreover, since by Theorem 7.7 we have
lim P [x ∈ Tx (ε, L)] = 1 ,
lim P [y ∈ Ty (ε, L)] = 1
L→∞
L→∞
hence there exist L00δ and L000
δ such that
P [x ∈ Tx (ε, L)] > 1 −
1
δ ,
3
for all L > L00δ
1
δ , for all L > L000
δ
3
0
00
000
Then, with Lδ = max {Lδ , Lδ , Lδ }, we have for all L > Lδ
P [y ∈ Ty (ε, L)] > 1 −
P [(x, y) ∈ Tx,y (ε, L)] =
ix,y (x, y)
<ε
−
H(x,
y)
L
ix,y (x, y)
− H(x, y) ≥ ε
= 1 − P {x 6∈ Tx (ε, L)} ∪ {y 6∈ Ty (ε, L)} ∪ L
ix,y (x, y)
≥ 1 − P [x 6∈ Tx (ε, L)] − P [y 6∈ Ty (ε, L)] − P − H(x, y) ≥ ε
L
= P {x ∈ Tx (ε, L)} ∩ {y ∈ Ty (ε, L)} ∩
1
1
1
δ− δ− δ
3
3
3
=1−δ
≥1−
thus proving Proposition 7.28.
(8.1)
SOLUTIONS TO PROBLEMS OF CHAPTER 7
667
Now we prove 7.29 i). We write
X
(a,b)∈Tx,y (ε,L)
px,y (a, b) ≤ 1
(8.2)
On the other hand for each pair of jointly ε-typical sequences (a, b) we have from (7.157) i x,y (a, b) <
L[H(x, y) + ε] and hence px,y (a, b) > 1/2L[H(x,y)+ε] , so
X
px,y (a, b) >
(a,b)∈Tx,y (ε,L)
X
(a,b)∈Tx,y (ε,L)
|Tx,y (ε, L)|
1
= L[H(x,y)+ε]
2L[H(x,y)+ε]
2
(8.3)
By combining the inequalities (8.2) and (8.3) we get
|Tx (ε, L)|
<1
2L[H(x)+ε]
and hence the statement of Proposition 7.29 i).
To prove 7.29 ii), we observe that by Theorem 7.28, for any δ > 0 there exists L δ such that for all
L > Lδ
(8.4)
P [(x, y) ∈ Tx,y (ε, L)] > 1 − δ .
Moreover, for all (a, b) ∈ Tx,y (ε, L), it is px,y (a, b) < 1/2L[H(x,y)−ε] , and we get
P [(x, y) ∈ Tx,y (ε, L)] =
X
(a,b)∈Tx,y (ε,L)
px,y (a, b) <
|Tx,y (ε, L)|
.
2L[H(x,y)−ε]
By combining the inequalities (8.4) and (8.5) we obtain the proof of Proposition 7.29 ii).
(8.5)
Download