LECTURE 23
Last time:
• Finite-state channels
• Lower and upper capacities
• Indecomposable channels
• Markov channels
Lecture outline
• Spreading over fading channels
• Channel model
• Upper bound to capacity
• Interpretation
Spreading over fading channels
The channel decorrelates over a coherence time T

The channel decorrelates over a coherence bandwidth W
Recall Markov channels: the difficulty arises when we do not know the channel

Gilbert-Elliott channel: hypothesis testing to determine the channel state

Consider several channels in parallel in frequency (recall that if the channels are known, we can water-fill)
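As a point of contrast, here is a minimal water-filling sketch for parallel Gaussian channels with known gains (the gains and power budget are illustrative, not from the lecture):

```python
import numpy as np

def water_fill(gains, power):
    """Water-filling power allocation over parallel Gaussian channels
    with known gains g_i: channel i carries 0.5*ln(1 + g_i * p_i) nats/use."""
    inv = 1.0 / np.asarray(gains, dtype=float)   # "floor" levels 1/g_i
    order = np.argsort(inv)                      # strongest channels first
    for k in range(len(inv), 0, -1):             # try k active channels
        active = order[:k]
        level = (power + inv[active].sum()) / k  # candidate water level
        if level > inv[active].max():            # all k powers positive
            p = np.zeros(len(inv))
            p[active] = level - inv[active]
            return p
    return np.zeros(len(inv))

gains = np.array([2.0, 1.0, 0.25])   # illustrative known channel gains
p = water_fill(gains, power=1.0)
print(p, 0.5 * np.log1p(gains * p).sum())
```

The point of what follows is that when the propagation coefficients are unknown, no such allocation is available, and spreading the energy over many bands becomes harmful.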
Channel model
Block fading in bandwidth and in time
Over each coherence bandwidth of size W, the channel experiences Rayleigh flat fading

All the channels over distinct coherence bandwidths are independent, yielding a block-fading model in frequency

We transmit over µ coherence bandwidths

The energy of the propagation coefficient \( F[i]_j \) over coherence bandwidth i at sampled time j is \( \sigma_F^2 \)

For input \( X[i]_j \) at sample time j (we sample at the Nyquist rate W), the corresponding output is \( Y[i]_j = F[i]_j X[i]_j + N[i]_j \), where the \( N[i]_j \) are samples of WGN bandlimited to a bandwidth of W, with energy normalized to 1
Channel model
The time variations are block-fading in nature

The propagation coefficient of the channel remains constant for TW symbols (the coherence interval T), then changes to a value independent of previous values

Thus, \( F[i]_{jTW+1}^{(j+1)TW} \) is a constant vector and the \( F[i]_{jTW+1}^{(j+1)TW} \) are mutually independent for j = 1, 2, . . .
Signal constraints:

• For the signals over each coherence bandwidth, the second moment is upper bounded: \( E[X^2] \le \frac{E}{\mu} \)

• The amplitude is upper bounded: \( |X| \le \frac{\gamma}{E}\sqrt{E[X^2]} \)
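A minimal simulation sketch of the block-fading model and the two constraints above, assuming a real-valued baseband representation and the on-off input derived later in the lecture (all numeric values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, T, W, k = 4, 2, 8, 3        # bands, coherence time, bandwidth, no. of blocks
sigma_F2, E, gamma = 1.0, 1.0, 2.0
n = T * W                        # samples per coherence interval (Nyquist rate W)

# On-off input: meets E[X^2] = E/mu and peak |X|^2 <= gamma^2/(mu*E)
peak = gamma / np.sqrt(mu * E)
p_on = E**2 / gamma**2
X = peak * (rng.random((mu, k * n)) < p_on)

# F[i]_j: constant over each coherence interval, independent across blocks
# and across the mu bands (real Gaussian stand-in for Rayleigh fading)
F = np.repeat(rng.normal(0.0, np.sqrt(sigma_F2), (mu, k)), n, axis=1)

N = rng.normal(0.0, 1.0, (mu, k * n))   # WGN samples, energy normalized to 1
Y = F * X + N                            # Y[i]_j = F[i]_j X[i]_j + N[i]_j

print(Y.shape, (X**2).mean(), E / mu)    # empirical vs. nominal second moment
```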
Upper bound to capacity
Capacity is \( C(W, E, \sigma_F^2, T, \mu, \gamma) \):

\[
C(W, E, \sigma_F^2, T, \mu, \gamma)
= \lim_{k \to \infty} \max_{p_X} \frac{1}{k} \sum_{i=1}^{\mu} \sum_{j=1}^{k}
\frac{1}{T}\, I\left( X[i]_{jTW+1}^{(j+1)TW} \,;\, Y[i]_{jTW+1}^{(j+1)TW} \right) \tag{1}
\]

where the fourth central moment of \( X[i]_j \) is upper bounded by \( \frac{\gamma^2}{\mu^2} \) and its average energy constraint is \( \frac{E}{\mu} \).
Since we have no sender channel side information and all the bandwidth slices are independent, we may use the fact that mutual information is concave in the input distribution to determine that selecting all the inputs to be IID maximizes the RHS of (1).
Upper bound to capacity
We first rewrite the mutual information term:

\[
\frac{1}{T}\, I\left( X[i]_{jTW+1}^{(j+1)TW} \,;\, Y[i]_{jTW+1}^{(j+1)TW} \right)
= \frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \right)
- \frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \,\middle|\, X[i]_{jTW+1}^{(j+1)TW} \right) \tag{2}
\]
We may upper bound the first term of (2):
\[
\frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \right)
\le \frac{1}{2T} \ln\left( (2\pi e)^{TW} \left| \Lambda_{Y[i]_{jTW+1}^{(j+1)TW}} \right| \right)
\]

(Gaussian distribution bound)

\[
\le \frac{1}{2T} \ln\left( (2\pi e)^{TW} \prod_{l=1}^{TW} \left( \sigma_F^2 \sigma_{X[l]}^2 + 1 \right) \right)
\]

(from Hadamard's inequality)

\[
= \frac{W}{2} \ln(2\pi e) + \frac{1}{2T} \sum_{l=1}^{TW} \ln\left( \sigma_F^2 \sigma_{X[l]}^2 + 1 \right)
\]

\[
\le \frac{W}{2} \ln(2\pi e) + \frac{W}{2} \ln\left( \sigma_F^2 \frac{E}{\mu} + 1 \right) \tag{3}
\]

from concavity of ln and our average energy constraint.
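A quick numeric check of the chain above: for any per-symbol energy split averaging to E/µ, the middle expression stays below the RHS of (3) (a sketch; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
T, W, mu = 2, 4, 8
TW = T * W
sigma_F2, E = 1.0, 2.0

s2 = rng.random(TW)               # per-symbol energies sigma^2_{X[l]}
s2 *= (E / mu) / s2.mean()        # rescale so the average is exactly E/mu

const = (W / 2) * np.log(2 * np.pi * np.e)
mid = const + np.log(sigma_F2 * s2 + 1).sum() / (2 * T)
rhs = const + (W / 2) * np.log(sigma_F2 * E / mu + 1)

assert mid <= rhs + 1e-12         # Jensen: ln is concave
print(mid, rhs)
```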
Upper bound to capacity
We now proceed to minimize the second term of (2)

Conditioned on \( X[i]_{jTW+1}^{(j+1)TW} \), \( Y[i]_{jTW+1}^{(j+1)TW} \) is Gaussian, since \( F[i]_{jTW+1}^{(j+1)TW} \) is Gaussian and \( N[i]_{jTW+1}^{(j+1)TW} \) is Gaussian and independent of F

\[
\frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \,\middle|\, X[i]_{jTW+1}^{(j+1)TW} \right)
= \frac{1}{2T}\, \mathbb{E}_X\left[ \ln\left( (2\pi e)^{TW} \left| \Lambda_{Y[i]_{jTW+1}^{(j+1)TW}} \right| \right) \right] \tag{4}
\]

Conditioned on \( X[i]_{jTW+1}^{(j+1)TW} = x = [x(1), \ldots, x(TW)] \), \( \Lambda_{Y[i]_{jTW+1}^{(j+1)TW}} \) has kth diagonal term \( \sigma_F^2\, x(k)^2 + 1 \) and off-diagonal (k, l) term equal to \( \sigma_F^2\, x(k)\, x(l) \)

The eigenvalues \( \lambda_l \) of \( \Lambda_Y \) are 1 for l = 1, . . . , TW − 1 and \( \|x\|^2 \sigma_F^2 + 1 \) for l = TW
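The conditional covariance is the rank-one update \( \Lambda = \sigma_F^2\, x x^\top + I \), so the eigenvalue claim is easy to verify numerically (sketch; x is an arbitrary test vector):

```python
import numpy as np

rng = np.random.default_rng(2)
TW, sigma_F2 = 8, 1.5
x = rng.normal(size=TW)

Lam = sigma_F2 * np.outer(x, x) + np.eye(TW)   # sigma_F^2 x x^T + I
eig = np.sort(np.linalg.eigvalsh(Lam))

assert np.allclose(eig[:-1], 1.0)              # TW - 1 eigenvalues equal to 1
assert np.isclose(eig[-1], sigma_F2 * (x @ x) + 1.0)
print(eig)
```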
Upper bound to capacity
Hence, we may rewrite (4) as

\[
\frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \,\middle|\, X[i]_{jTW+1}^{(j+1)TW} \right)
= \frac{1}{2T}\, \mathbb{E}_X\left[ \ln\left( \left\| X[i]_{jTW+1}^{(j+1)TW} \right\|^2 \sigma_F^2 + 1 \right) \right]
+ \frac{W}{2} \ln(2\pi e) \tag{5}
\]

We seek to minimize the RHS of (5) subject to the second moment constraint holding with equality and subject to the peak amplitude constraint

The distribution for X which minimizes the RHS of (5) subject to our constraints can be found using the concavity of the ln function

The distribution is such that the only values which |X| can take are 0 and \( \sqrt{\frac{\gamma^2}{\mu E}} \), with probabilities \( 1 - \frac{E^2}{\gamma^2} \) and \( \frac{E^2}{\gamma^2} \), respectively.
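A numeric illustration of why the on-off law is the minimizer: for a fixed mean and peak of \( S = \|X\|^2 \), concavity of ln puts the two-point law on {0, peak} below any other choice (sketch; the competing distribution is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_F2 = 1.0
S_max, m = 4.0, 1.0                 # peak and mean of S = ||X||^2 (illustrative)

# Two-point law on {0, S_max} with P(S = S_max) = m / S_max
two_point = (m / S_max) * np.log(sigma_F2 * S_max + 1)

# Any other law on [0, S_max] with mean m lies above it,
# e.g. S uniform on [0, 2m] (mean m, and 2m <= S_max keeps the peak)
S = rng.uniform(0.0, 2 * m, 200_000)
other = np.log(sigma_F2 * S + 1).mean()

assert two_point < other
print(two_point, other)
```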
Upper bound to capacity
Thus, we may lower bound (5) by

\[
\frac{1}{T}\, h\left( Y[i]_{jTW+1}^{(j+1)TW} \,\middle|\, X[i]_{jTW+1}^{(j+1)TW} \right)
\ge \frac{E^2}{2T\gamma^2} \ln\left( TW \frac{\gamma^2}{\mu E} \sigma_F^2 + 1 \right)
+ \frac{W}{2} \ln(2\pi e) \tag{6}
\]
Combining (6), (3) and (1) yields

\[
C(W, E, \sigma_F^2, T, \mu, \gamma)
\le \frac{\mu W}{2} \ln\left( \sigma_F^2 \frac{E}{\mu} + 1 \right)
- \frac{\mu E^2}{2T\gamma^2} \ln\left( TW \frac{\gamma^2}{\mu E} \sigma_F^2 + 1 \right) \tag{7}
\]
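Evaluating the RHS of (7) numerically shows the bound collapsing as the signal is spread over more bands (sketch; parameter values are illustrative):

```python
import numpy as np

W, E, sigma_F2, T, gamma = 1.0, 1.0, 1.0, 1.0, 2.0

for mu in (1, 10, 100, 1_000, 10_000):
    first = (mu * W / 2) * np.log(sigma_F2 * E / mu + 1)
    second = (mu * E**2 / (2 * T * gamma**2)) \
        * np.log(T * W * gamma**2 * sigma_F2 / (mu * E) + 1)
    print(mu, first - second)      # upper bound on C, tending to 0
```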
Upper bound to capacity
What is the limit for µ infinite?

\( \ln(1 + x) \approx x \) for small x

First term goes to \( \frac{W E \sigma_F^2}{2} \)

Second term also goes to \( \frac{W E \sigma_F^2}{2} \), so the upper bound (7) goes to 0

Graphical interpretation
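In place of the plot, the two limits can be spelled out with \( \ln(1+x) \approx x \):

\[
\frac{\mu W}{2} \ln\left( \sigma_F^2 \frac{E}{\mu} + 1 \right)
\to \frac{\mu W}{2} \cdot \sigma_F^2 \frac{E}{\mu} = \frac{W E \sigma_F^2}{2},
\qquad
\frac{\mu E^2}{2T\gamma^2} \ln\left( TW \frac{\gamma^2}{\mu E} \sigma_F^2 + 1 \right)
\to \frac{\mu E^2}{2T\gamma^2} \cdot TW \frac{\gamma^2}{\mu E} \sigma_F^2 = \frac{W E \sigma_F^2}{2},
\]

so the difference in (7) vanishes as \( \mu \to \infty \).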
Interpretation
Over any channel (slice of bandwidth of size W), we do not have enough energy to measure the channel satisfactorily

Necessary assumptions:

• the average energy per bandwidth slice scales down (as 1/µ) as we spread over the whole bandwidth

• the peak energy per bandwidth slice also scales down (as 1/µ) over the whole bandwidth
Interpretation
We may relax the peak amplitude assumption

Assume the second moment (variance) scales as \( \frac{1}{\mu} \) and the fourth moment (kurtosis) scales as \( \frac{1}{\mu^2} \)

The mutual information still goes to 0 as µ → ∞

We may also relax the assumption that the channel is block-fading in time and frequency, as long as we have decorrelation
MIT OpenCourseWare
http://ocw.mit.edu
6.441 Information Theory
Spring 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.