LECTURE 12
Last time:
• Strong coding theorem
• Revisiting channel and codes
• Bound on probability of error
• Error exponent
Lecture outline
• Error exponent behavior
• Expunging bad codewords
• Error exponent positivity
• Strong coding theorem
Last time
\[
E_{\text{codebooks}}[P_{e,m}] \le 2^{-N\left(E_0(\rho, P_X(x)) - \rho R\right)}
\]
for
\[
E_0(\rho, P_X(x)) = -\log \sum_{y} \left[ \sum_{x} P_X(x)\, P_{Y|X}(y|x)^{\frac{1}{1+\rho}} \right]^{1+\rho}
\]
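As a numerical companion (a sketch, not part of the original notes; the names E0, bsc, and px are mine, and log base 2 matches the \(2^{-N(\cdot)}\) bounds), this is one way to evaluate \(E_0\) for a discrete memoryless channel given as a transition matrix:

```python
import numpy as np

def E0(rho, p_x, p_ygx):
    """Gallager's E0: -log2 sum_y [ sum_x P_X(x) P_{Y|X}(y|x)^(1/(1+rho)) ]^(1+rho).

    p_x   : input distribution, shape (|X|,)
    p_ygx : channel transition matrix P_{Y|X}, shape (|X|, |Y|)
    """
    inner = p_x @ p_ygx ** (1.0 / (1.0 + rho))   # sum over x, one entry per y
    return -np.log2(np.sum(inner ** (1.0 + rho)))

# Example: BSC with crossover probability 0.1, uniform input
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
px = np.array([0.5, 0.5])
print(E0(0.5, px, bsc))   # positive, as the positivity argument below establishes
```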
We need to:
• Get rid of the expectation over codes by throwing out the worst half of the codes
• Show that the bound behaves well (the exponent is \(-N\alpha\) for some \(\alpha > 0\))
• Relate the bound to capacity
Error exponent
Define
\[
E_r(R) = \max_{0 \le \rho \le 1} \max_{P_X} \left( E_0(\rho, P_X(x)) - \rho R \right)
\]
then
\[
E_{\text{codebooks}}[P_{e,m}] \le 2^{-N E_r(R)}, \qquad E_{\text{codebooks,messages}}[P_e] \le 2^{-N E_r(R)}
\]
For a BSC: [figure: sketch of the error exponent \(E_r(R)\) versus the rate R]
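Continuing the sketch above (reusing E0, bsc, and px), one could trace this curve numerically; the grid search over \(\rho\) stands in for the analytic maximization, and for a BSC the uniform input should be the maximizing \(P_X\) by symmetry:

```python
def Er(R, p_x, p_ygx, grid=np.linspace(0.0, 1.0, 1001)):
    """Random-coding exponent Er(R) = max over 0 <= rho <= 1 of E0(rho) - rho * R."""
    return max(E0(rho, p_x, p_ygx) - rho * R for rho in grid)

# Positive below capacity (about 0.531 bits for crossover 0.1), zero at R = C
for R in [0.1, 0.3, 0.5, 0.531]:
    print(R, Er(R, px, bsc))
```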
Expunge codewords
Start from a code with 2M codewords, i.e. rate \(\log(2M)/N\). By Markov's inequality, at most half of the codewords can have \(P_{e,m}\) greater than twice the average, so expunging the worst half leaves M codewords at rate \(R = \log(M)/N\). The new \(P_{e,m}\) is bounded as follows:
\[
P_{e,m} \le 2 \cdot 2^{-N \left( \max_{0 \le \rho \le 1} \max_{P_X} \left( E_0(\rho, P_X(x)) - \rho \frac{\log(2M)}{N} \right) \right)}
\]
\[
= 2 \cdot 2^{-N \left( \max_{0 \le \rho \le 1} \max_{P_X} \left( E_0(\rho, P_X(x)) - \frac{\rho}{N} - \rho \frac{\log(M)}{N} \right) \right)}
\]
\[
\le 4 \cdot 2^{-N \left( \max_{0 \le \rho \le 1} \max_{P_X} \left( E_0(\rho, P_X(x)) - \rho \frac{\log(M)}{N} \right) \right)} = 4 \cdot 2^{-N E_r(R)}
\]
since \(2^{\rho} \le 2\) for \(\rho \le 1\).
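The Markov step behind the expurgation can be sanity-checked numerically (a sketch continuing the NumPy examples above, with synthetic per-codeword error probabilities, illustrative numbers only): keeping the best half of 2M values always leaves a maximum no larger than twice the overall average.

```python
rng = np.random.default_rng(0)
M = 64
pe = rng.uniform(size=2 * M) ** 4   # synthetic P_{e,m} for 2M codewords
avg = pe.mean()                     # what the random-coding bound controls

kept = np.sort(pe)[:M]              # expunge the worst half
assert kept.max() <= 2 * avg        # Markov: at most M values can exceed 2 * avg
print(avg, kept.max())
```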
Now we must consider positivity. Let \(P_X(x)\) be such that \(I(X;Y) > 0\); we'll show that \(E_r\) behaves as follows:
Error exponent positivity
We have that:
1. \(E_0(\rho, P_X(x)) > 0,\ \forall \rho > 0\)
2. \(I(X;Y) \ge \dfrac{\partial E_0(\rho, P_X(x))}{\partial \rho} > 0,\ \forall \rho > 0\)
3. \(\dfrac{\partial^2 E_0(\rho, P_X(x))}{\partial \rho^2} \le 0,\ \forall \rho > 0\)
We can check that
\[
I(X;Y) = \left. \frac{\partial E_0(\rho, P_X(x))}{\partial \rho} \right|_{\rho = 0}
\]
then showing 3 will establish the LHS of 2, and showing the RHS of 2 will establish 1 (since \(E_0(0, P_X(x)) = 0\)).
Let us show 3.
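Before the analytic proof, the three properties can be spot-checked with finite differences (a sketch reusing E0, px, and bsc from above; the step size h is arbitrary):

```python
h = 1e-4

def dE0(rho):    # central-difference first derivative of E0 in rho
    return (E0(rho + h, px, bsc) - E0(rho - h, px, bsc)) / (2 * h)

def d2E0(rho):   # central-difference second derivative of E0 in rho
    return (E0(rho + h, px, bsc) - 2 * E0(rho, px, bsc) + E0(rho - h, px, bsc)) / h**2

I_XY = dE0(0.0)  # slope at rho = 0: should equal I(X;Y) = 1 - H2(0.1) ~ 0.531 bits
for rho in [0.25, 0.5, 0.75, 1.0]:
    assert E0(rho, px, bsc) > 0          # property 1
    assert 0 < dE0(rho) <= I_XY + 1e-6   # property 2
    assert d2E0(rho) <= 1e-6             # property 3 (concavity)
```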
Error exponent positivity
To show concavity, we need to show that \(\forall \rho_1, \rho_2,\ \forall \theta \in [0,1]\),
\[
E_0(\rho_3, P_X(x)) \ge \theta E_0(\rho_1, P_X(x)) + (1-\theta) E_0(\rho_2, P_X(x))
\]
for \(\rho_3 = \theta \rho_1 + (1-\theta) \rho_2\).
We shall use the fact that
\[
1 = \frac{\theta(1+\rho_1)}{1+\rho_3} + \frac{(1-\theta)(1+\rho_2)}{1+\rho_3}
\]
and Hölder's inequality:
\[
\sum_j c_j d_j \le \left( \sum_j c_j^{1/x} \right)^{x} \left( \sum_j d_j^{1/(1-x)} \right)^{1-x}
\]
Let us pick
\[
c_j = P_X(j)^{\frac{\theta(1+\rho_1)}{1+\rho_3}}\, P_{Y|X}(k|j)^{\frac{\theta}{1+\rho_3}}, \qquad
d_j = P_X(j)^{\frac{(1-\theta)(1+\rho_2)}{1+\rho_3}}\, P_{Y|X}(k|j)^{\frac{1-\theta}{1+\rho_3}}, \qquad
x = \frac{\theta(1+\rho_1)}{1+\rho_3}
\]
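As a quick check of this choice (a sketch continuing the NumPy examples; the distributions and the fixed output k are arbitrary), one can verify numerically that \(c_j d_j\) collapses to \(P_X(j) P_{Y|X}(k|j)^{1/(1+\rho_3)}\) and that Hölder's inequality holds:

```python
theta, rho1, rho2 = 0.4, 0.2, 0.9
rho3 = theta * rho1 + (1 - theta) * rho2
x = theta * (1 + rho1) / (1 + rho3)    # the two Hoelder exponents sum to 1

PX = np.array([0.3, 0.7])
PYgX_k = np.array([0.8, 0.25])         # P_{Y|X}(k|j) for one fixed output k

c = PX ** (theta * (1 + rho1) / (1 + rho3)) * PYgX_k ** (theta / (1 + rho3))
d = PX ** ((1 - theta) * (1 + rho2) / (1 + rho3)) * PYgX_k ** ((1 - theta) / (1 + rho3))

lhs = (c * d).sum()                    # = sum_j PX(j) PYgX_k(j)^(1/(1+rho3))
rhs = (c ** (1 / x)).sum() ** x * (d ** (1 / (1 - x))).sum() ** (1 - x)
assert np.isclose(lhs, (PX * PYgX_k ** (1 / (1 + rho3))).sum())
assert lhs <= rhs + 1e-12              # Hoelder's inequality
```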
Error exponent positivity
Proof continued
\[
\sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_3}}
\le \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_1}} \right)^{\frac{\theta(1+\rho_1)}{1+\rho_3}}
\times \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_2}} \right)^{\frac{(1-\theta)(1+\rho_2)}{1+\rho_3}}
\]
\[
\Rightarrow\ \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_3}} \right)^{1+\rho_3}
\le \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_1}} \right)^{\theta(1+\rho_1)}
\times \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_2}} \right)^{(1-\theta)(1+\rho_2)}
\]
Error exponent positivity
Proof continued
\[
\Rightarrow\ \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_3}} \right)^{1+\rho_3}
\le \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_1}} \right)^{\theta(1+\rho_1)}
\left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_2}} \right)^{(1-\theta)(1+\rho_2)}
\]
Applying Hölder's inequality again, this time to the sum over k with exponents \(1/\theta\) and \(1/(1-\theta)\):
\[
\le \left[ \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_1}} \right)^{1+\rho_1} \right]^{\theta}
\times \left[ \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_2}} \right)^{1+\rho_2} \right]^{1-\theta}
\]
Error exponent positivity
Proof continued
Taking \(-\log\) reverses the inequality:
\[
-\log \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_3}} \right)^{1+\rho_3}
\ge -\theta \log \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_1}} \right)^{1+\rho_1}
- (1-\theta) \log \sum_{k \in \mathcal{Y}} \left( \sum_{j \in \mathcal{X}} P_X(j)\, P_{Y|X}(k|j)^{\frac{1}{1+\rho_2}} \right)^{1+\rho_2}
\]
\[
\Rightarrow\ E_0(\rho_3, P_X) \ge \theta E_0(\rho_1, P_X) + (1-\theta) E_0(\rho_2, P_X)
\]
so \(E_0\) is concave!
Error exponent positivity
Proof continued
Hence the extremum, if it exists, of \(E_0(\rho, P_X) - \rho R\) over \(\rho\) occurs at \(\frac{\partial E_0(\rho, P_X)}{\partial \rho} = R\), which implies that
\[
\left. \frac{\partial E_0(\rho, P_X)}{\partial \rho} \right|_{\rho=1} \le R \le \left. \frac{\partial E_0(\rho, P_X)}{\partial \rho} \right|_{\rho=0} = I(X;Y)
\]
is necessary for \(E_r(R, P_X) = \max_{0 \le \rho \le 1} [E_0(\rho, P_X) - \rho R]\) to have a maximum at an interior point.
We have now placed mutual information somewhere in the expression.
The critical rate \(R_{CR}\) is defined as \(\left. \frac{\partial E_0(\rho, P_X)}{\partial \rho} \right|_{\rho=1}\).
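For intuition, the endpoints of this interval can be computed for the BSC example (a sketch reusing the finite-difference helper dE0 from above; values depend on the crossover probability chosen earlier):

```python
R_cr = dE0(1.0)   # critical rate: slope of E0 at rho = 1
C = dE0(0.0)      # slope at rho = 0 equals I(X;Y), i.e. capacity for the optimal PX
print(R_cr, C)    # for rates in (R_cr, C) the maximizing rho is interior to [0, 1]
```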
Error exponent positivity
Proof continued
From \(\frac{\partial E_0(\rho, P_X)}{\partial \rho} = R\) we obtain
\[
\frac{\partial R}{\partial \rho} = \frac{\partial^2 E_0(\rho, P_X)}{\partial \rho^2}
\]
Hence \(\frac{\partial R}{\partial \rho} < 0\): R decreases monotonically from C to \(R_{CR}\) as \(\rho\) runs from 0 to 1.
We can write
\[
E_r(R, P_X) = E_0(\rho, P_X) - \rho \frac{\partial E_0(\rho, P_X)}{\partial \rho}
\]
for \(E_r(R, P_X) = E_0(\rho, P_X) - \rho R\) (\(\rho\) provides a parametric relation), then
\[
\frac{\partial E_r(R, P_X)}{\partial \rho}
= \frac{\partial E_0}{\partial \rho} - \frac{\partial E_0}{\partial \rho} - \rho \frac{\partial^2 E_0(\rho, P_X)}{\partial \rho^2}
= -\rho \frac{\partial^2 E_0(\rho, P_X)}{\partial \rho^2} > 0
\]
Error exponent positivity
Proof continued
Taking the ratio of the derivatives,
\[
\frac{\partial E_r(R, P_X)}{\partial R} = \frac{\partial E_r / \partial \rho}{\partial R / \partial \rho} = -\rho
\]
Since \(E_r(C, P_X) = E_0(0, P_X) = 0\) and the slope \(-\rho\) is negative for R < C, \(E_r(R, P_X)\) is positive for R < C.
Moreover
\[
\frac{\partial^2 E_r(R, P_X)}{\partial R^2} = -\left[ \frac{\partial^2 E_0(\rho, P_X)}{\partial \rho^2} \right]^{-1} > 0
\]
thus \(E_r(R, P_X)\) is convex and decreasing in R over \(R_{CR} < R < C\).
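Both facts can be cross-checked numerically over \(R_{CR} < R < C\) (a sketch reusing Er, R_cr, C, px, and bsc from the earlier sketches; the tolerance absorbs grid-search error):

```python
Rs = np.linspace(R_cr, C, 9)
Es = np.array([Er(R, px, bsc) for R in Rs])
assert np.all(np.diff(Es) < 0)           # Er(R, PX) decreasing in R
assert np.all(np.diff(Es, 2) >= -1e-9)   # convex: second differences >= 0
```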
Error exponent positivity
Proof continued
Taking \(E_r(R) = \max_{P_X} E_r(R, P_X)\): this is the maximum of functions that are convex and decreasing in R, and so it is also convex.
For the \(P_X\) that yields capacity, \(E_r(R, P_X)\) is positive for R < C.
So we have positivity of the error exponent for 0 < R < C, and capacity has been introduced. This completes the coding theorem.
Note: there are degenerate cases in which \(\frac{\partial E_r(R, P_X)}{\partial \rho}\) is constant and \(\frac{\partial^2 E_r(R, P_X)}{\partial \rho^2} = 0\). When would that happen?
MIT OpenCourseWare
http://ocw.mit.edu
6.441 Information Theory
Spring 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.