Chapter 5
LIST DECODABILITY OF RANDOM LINEAR CONCATENATED
CODES
5.1 Introduction
In Chapter 2, we saw that for any fixed alphabet of size q ≥ 2 there exist codes of rate R that can be list decoded up to an H_q^{-1}(1-R) - ε fraction of errors with lists of size O(1/ε). For linear codes one can show a similar result with lists of size q^{O(1/ε)}. These results are shown by choosing the code at random. However, as we saw in Chapter 4, the explicit constructions of codes over finite alphabets are nowhere close to achieving list-decoding capacity.
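The trade-off above is governed by the q-ary entropy function H_q and its inverse. As a quick numerical illustration (our own sketch, not part of the chapter's formal development), the capacity radius H_q^{-1}(1-R) can be computed by bisection, since H_q is increasing on [0, 1 - 1/q]:

```python
import math

def entropy_q(x, q):
    """q-ary entropy H_q(x) on [0, 1 - 1/q]."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def inv_entropy_q(y, q, tol=1e-12):
    """H_q^{-1}(y) on [0, 1 - 1/q] via bisection (H_q is increasing there)."""
    lo, hi = 0.0, 1.0 - 1.0 / q
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Capacity radius for a rate-1/2 binary code: H_2^{-1}(1 - 0.5)
rho = inv_entropy_q(0.5, 2)
print(round(rho, 4))  # about 0.11
```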
The linear codes in Chapter 4 are based on code concatenation. A natural question to
ask is whether linear codes based on code concatenation can get us to list-decoding capacity
for fixed alphabets.
In this chapter, we answer the question above in the affirmative. In particular, in Section 5.4 we show that if the outer code is a random linear code and the inner codes are also (independent) random linear codes, then the resulting concatenated codes can get to within ε of the list-decoding capacity with lists of constant size depending only on ε. In Section 5.5, we also show a similar result when the outer code is the folded Reed-Solomon code from Chapter 3. However, we can only show the latter result with polynomial-sized lists.
The way to interpret the results in this chapter is the following. We exhibit an ensemble
of random linear codes with more structure than general random (linear) codes that achieve
the list-decoding capacity. This structure gives rise to the hope of being able to list decode
such a random ensemble of codes up to the list-decoding capacity. Furthermore, for designing explicit codes that meet the list-decoding capacity, one can concentrate on concatenated
codes. Another corollary of our result is that we need fewer random bits to construct a code that achieves the list-decoding capacity. In particular, a general random linear code requires a number of random bits that grows quadratically with the block length. On the other hand, random concatenated codes with the outer code being a folded Reed-Solomon code require a number of random bits that grows quasi-linearly with the block length.
The results in this chapter (and their proofs) are inspired by the following results due to Blokh and Zyablov [19] and Thommesen [102]. Blokh and Zyablov show that random concatenated linear binary codes (where both the outer and inner codes are chosen uniformly at random) have with high probability the same minimum distance as general random linear codes. Thommesen shows a similar result when the outer code is the Reed-Solomon code. The rate versus distance tradeoff achieved by random linear codes satisfies the so-called Gilbert-Varshamov (GV) bound. However, as with list decodability of binary codes, explicit codes that achieve the GV bound are not known. Coming up with such explicit constructions is one of the biggest open questions in coding theory.
5.2 Preliminaries
We will consider outer codes that are defined over F_Q, where Q = q^k for some fixed q ≥ 2. The outer code will have rate and block length of R and N respectively. The outer code C_out will either be a random linear code over F_Q or the folded Reed-Solomon code from Chapter 3. In the case when C_out is random, we will pick C_out by selecting RN vectors uniformly at random from F_Q^N to form the rows of the generator matrix. For every position 1 ≤ i ≤ N, we will choose an inner code C_in^i to be a random linear code over F_q of block length n and rate r = k/n. In particular, we will work with the corresponding generator matrices G_i, where every G_i is a random k × n matrix over F_q. All the generator matrices G_i (as well as the generator matrix for C_out, when we choose a random C_out) are chosen independently. This fact will be used crucially in our proofs.
Given the outer code C_out and the inner codes C_in^i, the resulting concatenated code C = C_out ∘ (C_in^1, …, C_in^N) is constructed as follows.¹ For every codeword u = (u_1, …, u_N) ∈ C_out, the following codeword is in C:

    uG = (u_1 G_1, u_2 G_2, …, u_N G_N),

where the operations are over F_q.
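This encoding map can be sketched in a few lines (shown for q = 2, with outer symbols already represented as length-k bit vectors; the parameters and the sample outer codeword below are hypothetical toy values, not from an actual outer code):

```python
import random

def concat_encode(outer_codeword, gens, q=2):
    """Position i of the outer codeword (a length-k vector over F_q)
    is multiplied by the k x n generator matrix gens[i] over F_q."""
    out = []
    for u_i, G_i in zip(outer_codeword, gens):
        # u_i * G_i over F_q: sum of the rows of G_i scaled by the entries of u_i
        n = len(G_i[0])
        block = [0] * n
        for coeff, row in zip(u_i, G_i):
            for t in range(n):
                block[t] = (block[t] + coeff * row[t]) % q
        out.extend(block)
    return out

random.seed(1)
k, n, N = 3, 6, 4  # hypothetical small parameters
gens = [[[random.randrange(2) for _ in range(n)] for _ in range(k)]
        for _ in range(N)]                            # independent random G_i
outer = [[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 1]]  # one toy outer codeword
print(concat_encode(outer, gens))  # a vector in F_2^{nN}
```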
We will need the following notions of the weight of a vector. Given a vector v ∈ F_q^{nN}, its Hamming weight is denoted by wt(v). Given a vector y = (y_1, …, y_N) ∈ (F_q^n)^N and a subset S ⊆ [N], we will use wt_S(y) to denote the Hamming weight over F_q of the subvector (y_i)_{i ∈ S}. Note that wt(y) = wt_{[N]}(y).
We will need the following lemma due to Thommesen, which is stated in a slightly different form in [102]. For the sake of completeness we also present its proof.

Lemma 5.1 ([102]). Given a fixed outer code C_out of block length N and an ensemble of random inner linear codes of block length n given by generator matrices G_1, …, G_N, the following is true. Let y ∈ F_q^{nN}. For any codeword u ∈ C_out, any non-empty subset S ⊆ [N] such that u_i ≠ 0 for all i ∈ S, and any integer h ≤ |S|·n·(1 - 1/q):

    Pr[ wt_S(uG - y) ≤ h ] ≤ q^{-|S|n(1 - H_q(h/(|S|n)))},

where the probability is taken over the random choices of G_1, …, G_N.

¹Note that this is a slightly more general form of code concatenation than the one considered in Chapter 4. We did briefly consider the current generalization in Section 4.4.
Proof. Let |S| = s and w.l.o.g. assume that S = [s]. As the choices for G_1, …, G_N are made independently, it is enough to show that the claimed probability holds for the random choices of G_1, …, G_s. For any 1 ≤ i ≤ s and any y_i ∈ F_q^n, since u_i ≠ 0, we have Pr[u_i G_i = y_i] = q^{-n}. Further, these probabilities are independent for every i. Thus, for any y = (y_1, …, y_s) ∈ (F_q^n)^s, Pr[u_i G_i = y_i for every 1 ≤ i ≤ s] = q^{-ns}. This implies that:

    Pr[ wt_S(uG - y) ≤ h ] = q^{-ns} · Σ_{j=0}^{h} C(ns, j)·(q-1)^j.

The claimed result follows from Proposition 2.1.
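Since for a non-zero u_i the block u_i G_i is uniform over F_q^n, the bound of Lemma 5.1 can be sanity-checked by simulation. The sketch below (our own illustration, with arbitrary tiny parameters, q = 2 and y = 0) estimates the probability on the left and compares it to the bound:

```python
import math, random

def entropy2(x):
    if x in (0, 1):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

random.seed(7)
s, n, h = 4, 5, 6          # |S| = 4 nonzero positions; note h <= s*n*(1 - 1/2)
sn = s * n
trials = 20000
hits = 0
for _ in range(trials):
    # For each nonzero u_i, u_i G_i is uniform in F_2^n, independently across i,
    # so uG restricted to S is a uniform vector in F_2^{sn} (and y = 0 here)
    w = sum(random.randrange(2) for _ in range(sn))
    hits += (w <= h)
estimate = hits / trials
bound = 2 ** (-sn * (1 - entropy2(h / sn)))
print(estimate <= bound)  # True: the empirical estimate is below the bound
```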
Figure 5.1: Geometric interpretations of the functions α_q(·) and f_{x,q}(·).
For 0 ≤ x ≤ 1 define

    α_q(x) = 1 - H_q(1 - q^{x-1}).   (5.1)

We will need the following property of the function above.

Lemma 5.2. Let q ≥ 2 be an integer. For every 0 ≤ x ≤ 1,

    α_q(x) ≤ x.
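Lemma 5.2 is easy to verify numerically (the helper function is our own illustration):

```python
import math

def entropy_q(x, q):
    """q-ary entropy H_q(x)."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def alpha_q(x, q):
    """alpha_q(x) = 1 - H_q(1 - q^{x-1}), as in (5.1)."""
    return 1 - entropy_q(1 - q ** (x - 1), q)

# Lemma 5.2: alpha_q(x) <= x on [0, 1], checked on a grid for a few q
for q in (2, 3, 4):
    for i in range(11):
        x = i / 10
        assert alpha_q(x, q) <= x + 1e-12
print("alpha_q(x) <= x verified on a grid")
```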
Proof. The proof follows from the subsequent sequence of relations:

    x - α_q(x) = (x - 1) + H_q(1 - q^{x-1})
               = log_q(q^{x-1}) + (1 - q^{x-1}) log_q(q-1) - (1 - q^{x-1}) log_q(1 - q^{x-1}) - q^{x-1} log_q(q^{x-1})
               = (1 - q^{x-1}) log_q(q^{x-1}) + (1 - q^{x-1}) log_q(q-1) - (1 - q^{x-1}) log_q(1 - q^{x-1})
               = (1 - q^{x-1}) · log_q( (q-1)·q^{x-1} / (1 - q^{x-1}) )
               ≥ 0,

where the last inequality follows from the facts that q^{x-1} ≥ 1/q and 1 - q^{x-1} ≤ 1 - 1/q, which imply that (q-1)·q^{x-1}/(1 - q^{x-1}) ≥ 1.
We will consider the following function. For 0 < x ≤ 1 and 0 ≤ θ < 1, define

    f_{x,q}(θ) = (1 - θ)^{-1} · H_q^{-1}(1 - θx).

We will need the following property of this function.

Lemma 5.3 ([102]). Let q ≥ 2 be an integer. For any x > 0 and 0 ≤ y ≤ α_q(x)/x,

    min_{0 ≤ θ ≤ y} f_{x,q}(θ) = (1 - y)^{-1} · H_q^{-1}(1 - xy).
Proof. The proof follows from the subsequent geometric interpretations of α_q and f_{x,q}. See Figure 5.1 for a pictorial illustration of the arguments used in this proof (for q = 2). (Lemma 5.3 was proven in [102] for the q = 2 case; here we present the straightforward extension of the result to general q.)

First, we claim that for any 0 ≤ y ≤ 1, α_q satisfies the following property: the line segment between (α_q(y), H_q^{-1}(1 - α_q(y))) and (y, 0) is tangent to the curve H_q^{-1}(1 - z) at z = α_q(y). Thus, we need to show that

    (H_q^{-1})'(1 - α_q(y)) = H_q^{-1}(1 - α_q(y)) / (y - α_q(y)).   (5.2)

One can check that (H_q^{-1})'(1 - z) = 1 / log_q( (q-1)(1-p)/p ), where p = H_q^{-1}(1 - z). Now, by the definition of α_q we have H_q^{-1}(1 - α_q(y)) = 1 - q^{y-1}, and by the computation in the proof of Lemma 5.2,

    y - α_q(y) = (1 - q^{y-1}) · log_q( (q-1)·q^{y-1} / (1 - q^{y-1}) ).

Thus,

    H_q^{-1}(1 - α_q(y)) / (y - α_q(y)) = 1 / log_q( (q-1)·q^{y-1} / (1 - q^{y-1}) ) = (H_q^{-1})'(1 - α_q(y)),

which proves (5.2) (in the last step we have used the expression for (H_q^{-1})'(1 - z) with p = 1 - q^{y-1}).

We now claim that f_{x,q}(θ) is the intercept on the vertical axis of the line segment through (x, 0) and (θx, H_q^{-1}(1 - θx)). Indeed, the height increases by H_q^{-1}(1 - θx) along the segment from (x, 0) to (θx, H_q^{-1}(1 - θx)), that is, over a horizontal distance of (1 - θ)x; extending the segment to the vertical axis stretches the horizontal distance to x and hence multiplies this gain by 1/(1 - θ). The lemma now follows from the fact that H_q^{-1}(1 - z) is a decreasing (strictly) convex function of z: by the tangency property above (applied with y = x), the intercept is non-increasing as θ grows while θx ≤ α_q(x), and thus the minimum of f_{x,q}(θ) over 0 ≤ θ ≤ y occurs at θ = y provided xy ≤ α_q(x).
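Lemma 5.3 can likewise be checked numerically: for y below α_q(x)/x, the smallest value of f_{x,q} on a grid over [0, y] should occur at θ = y, and (1 - y)·f_{x,q}(y) should equal H_q^{-1}(1 - xy). The helpers below (bisection for H_q^{-1}) are our own sketch:

```python
import math

def entropy_q(x, q):
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def inv_entropy_q(y, q, tol=1e-12):
    lo, hi = 0.0, 1.0 - 1.0 / q
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return lo

def f(theta, x, q):
    """f_{x,q}(theta) = (1 - theta)^{-1} H_q^{-1}(1 - theta * x)."""
    return inv_entropy_q(1 - theta * x, q) / (1 - theta)

q, x = 2, 0.5
alpha = 1 - entropy_q(1 - q ** (x - 1), q)  # alpha_q(x), see (5.1)
y = 0.9 * alpha / x                         # a choice with y <= alpha_q(x)/x
grid = [y * i / 200 for i in range(201)]
vals = [f(t, x, q) for t in grid]
# f_{x,q} is non-increasing on [0, y], so the minimum sits at theta = y
assert min(vals) == vals[-1]
print(round(vals[-1], 4))
```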
5.3 Overview of the Proof Techniques
In this section, we will highlight the main ideas in our proofs. Our proofs are inspired by Thommesen's proof of the following result [102]: binary linear concatenated codes with an outer Reed-Solomon code and independently and randomly chosen inner codes meet the Gilbert-Varshamov bound³ with high probability. Given that our proofs build on the proof of Thommesen, we start out by reviewing the main ideas in his proof.
The outer code 1 b in [102] is a Reed-Solomon code of length ~ and rate 4 (over
‰È ) and the inner codes (over ¯ such that me' for some wŸ
G ) are generated by ~
½SZb!—¤¥¤¥—¤ \! , where G´ 6 . Note
randomly chosen
generator matrices
that since the final code will be linear, to show that with high probability the concatenated
åZ G}I±G£4} , it is enough to show that the probability
code will have distance close to ²
åZ `Gs>
of the Hamming weight of ɳ over ¯ being at most i²
I G£4P8I‡<½´~ (for some
Reed-Solomon codeword Éx …É3Z\!¥¤¥¤¥¤!NÉ ), is small. Let us now concentrate on a fixed
codeword É + 1 b
. Now note that if for some GxEùS
Í E ~ , ÉjÎ. , then for every
choice of Î , ÉúiΠΆ
. Thus, only the non-zero symbols of É contribute to ƒ …ɳ .
Further, for a non-zero ÉjÎ , ÉúiÎ Î takes all the values in ¯ with equal probability over
the random choices of Î . Also for two different non-zero positions ÍfZ‰Ï ͙ in É , the
random variables ÉjÎ
Î and ÉúÎ Ø .Î Ø are independent (as the choices for Î and .Î Ø are
œ
œ that ɳ takes each of the possible X ’ ˜ valuesœ in t¯ with
independent). This implies
the same probability. Thus, the total probability that ɳ has a Hamming weight of at most
’ ˜ å
’ ˜ EQ å ’ ˜ Z™å Ù ] ` . The rest of the argument follows by
is › 6
doing a careful union bound of this probability for all non zero codewords in 1b
(using
the known weight distribution of Reed-Solomon codes4 ).
Let us now try to extend the idea above to show a similar result for list decoding of a code similar to the one above (the inner codes are the same but we might change the outer code). We want to show that any Hamming ball of radius at most (H_q^{-1}(1 - rR) - ε)·nN has at most L codewords from the concatenated code C (assuming we want to show that L is the worst-case list size). To show this, let us look at a set of L + 1 codewords from C and try to prove that the probability that all of them lie within some ball of that radius is small.

Let u^1, …, u^{L+1} be the corresponding codewords in C_out. As a warm up, let us try and show this for a Hamming ball centered around 0. Thus, we need to show that all of the L + 1 codewords u^1 G, …, u^{L+1} G have Hamming weight at most (H_q^{-1}(1 - rR) - ε)·nN. Note that L = 0 reverts back to the setup of Thommesen, that is, any fixed codeword has weight at most this bound with small probability. However, we need all the codewords to have small weight. Extending Thommesen's proof would be straightforward if the random variables corresponding to each of the u^j G having small weight were independent. In particular, if we can show that for every position 1 ≤ i ≤ N, all the non-zero symbols in { u_i^1, u_i^2, …, u_i^{L+1} } are linearly independent⁵ over F_q, then the generalization of Thommesen's proof is immediate.

³A binary code of rate R satisfies the Gilbert-Varshamov bound if it has relative distance at least H^{-1}(1 - R).
⁴In fact, the argument works just as well for any code that has a weight distribution that is close to that of the Reed-Solomon code. In particular, it also works for folded Reed-Solomon codes; we alluded to this fact in Section 4.4.
Unfortunately, the notion of independence discussed above does not hold for every (L+1)-tuple of codewords from C_out. A fairly common trick to get independence when dealing with linear codes is to look at messages that are linearly independent. It turns out that if C_out is a random linear code over F_Q then we have a good approximation of the notion of independence above. Specifically, we show that with very high probability, for a linearly independent (over F_Q) set of messages⁶ m^1, …, m^{L+1}, the set of codewords u^1 = C_out(m^1), …, u^{L+1} = C_out(m^{L+1}) has the following approximate independence property. For most of the positions 1 ≤ i ≤ N, most of the non-zero symbols in { u_i^1, …, u_i^{L+1} } are linearly independent over F_q. It turns out that this approximate notion of independence is enough for Thommesen's proof to go through. Generalizing this argument to the case when the Hamming ball is centered around an arbitrary vector from F_q^{nN} is straightforward.
We remark that the notion above crucially uses the fact that the outer code is a random linear code. However, the argument is a bit more tricky when C_out is fixed to be (say) the Reed-Solomon code. Now even if the messages m^1, …, m^{L+1} are linearly independent, it is not clear that the corresponding codewords will satisfy the notion of independence in the paragraph above. Interestingly, we can show that this notion of independence is equivalent to showing good list recoverability properties for C_out. Reed-Solomon codes are however not known to have optimal list recoverability (which is what is required in our case). In fact, the results in Chapter 6 show that this is impossible for Reed-Solomon codes in general. However, as we saw in Chapter 3, folded Reed-Solomon codes do have optimal list recoverability and we exploit this fact in this chapter.
⁵Recall that F_{q^k} is isomorphic to F_q^k and hence, we can think of the symbols in F_{q^k} as vectors over F_q.
⁶Again, any set of L + 1 messages need not be linearly independent. However, it is easy to see that some subset of J ≥ ⌈log_Q(L + 1)⌉ of the messages are indeed linearly independent. Hence, we can continue the argument by replacing L + 1 with J.
5.4 List Decodability of Random Concatenated Codes
In this section, we will look at the list decodability of concatenated codes when both the
outer code and the inner codes are (independent) random linear codes.
The following is the main result of this section.
Theorem 5.1. Let q be a prime power and let 0 < r < 1 be an arbitrary rational. Let 0 < ε < α_q(r) be an arbitrary real, where α_q(r) is as defined in (5.1), and let 0 < R ≤ (α_q(r) - ε)/r be a rational. Then the following holds for large enough integers n, N such that there exist integers k and K that satisfy k = rn and K = RN. Let C_out be a random linear code over F_{q^k} that is generated by a random K × N matrix over F_{q^k}. Let C_in^1, …, C_in^N be random linear codes over F_q, where C_in^i is generated by a random k × n matrix G_i, and the random choices for C_out, G_1, …, G_N are all independent. Then the concatenated code C = C_out ∘ (C_in^1, …, C_in^N) is a

    ( H_q^{-1}(1 - Rr) - ε , (q^k)^{O(1/(ε²(1-R))) } )

-list decodable code with probability at least 1 - q^{-Ω(nN)} over the choices of C_out, G_1, …, G_N. Further, with high probability, C has rate rR.
In the rest of this section, we will prove the above theorem.

Define Q = q^k. Let L be the worst-case list size that we are shooting for (we will fix its value at the end). The first observation is that any (L+1)-tuple of messages (m^1, …, m^{L+1}) ∈ (F_Q^K)^{L+1} contains at least J = ⌈log_Q(L + 1)⌉ many messages that are linearly independent over F_Q. Thus, to prove the theorem it suffices to show that with high probability, no Hamming ball over F_q^{nN} of radius (H_q^{-1}(1 - rR) - ε)·nN contains a J-tuple of codewords C(m^1), …, C(m^J), where m^1, …, m^J are linearly independent over F_Q.
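The first observation holds because fewer than ⌈log_Q(L + 1)⌉ independent vectors would span a subspace too small to hold all L + 1 distinct messages. A small sketch over F_2 (so Q = 2; the rank routine and the toy messages are our own illustration) extracts such a subset greedily:

```python
import math
from itertools import product

def rank_f2(vectors):
    """Rank over F_2 of a list of equal-length 0/1 tuples (Gaussian elimination)."""
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    m = len(rows[0]) if rows else 0
    while rank < len(rows) and col < m:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def independent_subset(msgs):
    """Greedily keep messages that increase the F_2-rank."""
    chosen = []
    for v in msgs:
        if rank_f2(chosen + [v]) > len(chosen):
            chosen.append(v)
    return chosen

# 8 distinct toy messages (the even-weight vectors in F_2^4)
msgs = [v for v in product([0, 1], repeat=4) if sum(v) % 2 == 0]
sub = independent_subset(msgs)
# At least ceil(log2(8)) = 3 of them must be linearly independent
assert len(sub) >= math.ceil(math.log2(len(msgs)))
print(len(sub))  # → 3
```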
Define ρ = H_q^{-1}(1 - rR) - ε. For every J-tuple of linearly independent messages (m^1, …, m^J) ∈ (F_Q^K)^J and received word y ∈ F_q^{nN}, define an indicator random variable I(y, m^1, …, m^J) as follows: I(y, m^1, …, m^J) = 1 if and only if for every 1 ≤ j ≤ J, wt(C(m^j) - y) ≤ ρnN. That is, it captures the bad event that we want to avoid. Define

    X_C = Σ_{y ∈ F_q^{nN}} Σ_{(m^1,…,m^J) ∈ Ind(Q,K,J)} I(y, m^1, …, m^J),

where Ind(Q,K,J) denotes the collection of J-tuples of F_Q-linearly independent vectors from F_Q^K. We want to show that with high probability X_C = 0. By Markov's inequality, the theorem would follow if we can show that:

    E[X_C] = Σ_{y ∈ F_q^{nN}} Σ_{(m^1,…,m^J) ∈ Ind(Q,K,J)} E[I(y, m^1, …, m^J)]  is  q^{-Ω(nN)}.   (5.3)

Note that the number of distinct possibilities for (y, m^1, …, m^J) is upper bounded by q^{nN}·Q^{KJ} = q^{nN(1 + rRJ)}. Fix some arbitrary choice of (y, m^1, …, m^J). To prove (5.3), we will show that

    Pr[ I(y, m^1, …, m^J) = 1 ]  is  q^{-nN(1 + rRJ) - Ω(nN)}.   (5.4)
Before we proceed, we need some more notation. Given vectors u^1, …, u^J ∈ F_Q^N, we define Z(u^1, …, u^J) = (Z_1, …, Z_N) as follows. For every 1 ≤ i ≤ N, Z_i ⊆ [J] denotes the largest subset such that the elements (u_i^j)_{j ∈ Z_i} are linearly independent over F_q (in case of a tie, choose the lexically first such set), where u^j = (u_1^j, …, u_N^j). A subset of F_Q is linearly independent over F_q if its elements, when viewed as vectors from F_q^k (recall that F_{q^k} is isomorphic to F_q^k), are linearly independent over F_q. If j ∈ Z_i then we will call u_i^j a good symbol. Note that a good symbol is always non-zero. We will also define another partition of all the good symbols, T(u^1, …, u^J) = (T_1, …, T_J), by setting T_j = { i | j ∈ Z_i } for 1 ≤ j ≤ J.

Since m^1, …, m^J are linearly independent over F_Q, the corresponding codewords in C_out are distributed uniformly and independently in F_Q^N. In other words, for any fixed (v^1, …, v^J) ∈ (F_Q^N)^J,

    Pr[ ⋀_{j=1}^{J} C_out(m^j) = v^j ] = Q^{-NJ}.   (5.5)

Recall that we denote the (random) generator matrix for the inner code C_in^i by G_i for every 1 ≤ i ≤ N. Also note that every (u^1, …, u^J) ∈ (F_Q^N)^J has a unique Z(u^1, …, u^J); in other words, the 2^{JN} choices of Z partition the tuples in (F_Q^N)^J.

Let h = ρnN. Consider the following calculation (where the dependence of Z and T on u^1, …, u^J has been suppressed for clarity):
    Pr[ I(y, m^1, …, m^J) = 1 ]
      = Σ_{(u^1,…,u^J) ∈ (F_Q^N)^J} Pr[ ⋀_{j=1}^{J} wt(u^j G - y) ≤ h ] · Pr[ ⋀_{j=1}^{J} C_out(m^j) = u^j ]   (5.6)
      = Q^{-NJ} · Σ_{(u^1,…,u^J)} Pr[ ⋀_{j=1}^{J} wt(u^j G - y) ≤ h ]   (5.7)
      ≤ Q^{-NJ} · Σ_{(u^1,…,u^J)} Pr[ ⋀_{j=1}^{J} wt_{T_j}(u^j G - y) ≤ h ]   (5.8)
      = Q^{-NJ} · Σ_{(u^1,…,u^J)} ∏_{j=1}^{J} Pr[ wt_{T_j}(u^j G - y) ≤ h ]   (5.9)

In the above, (5.6) follows from the fact that the (random) choices for C_out and G_1, …, G_N are all independent. (5.7) follows from (5.5). (5.8) follows from the simple fact that for every y ∈ F_q^{nN} and T ⊆ [N], wt_T(y) ≤ wt(y). (5.9) follows from the subsequent argument. By the definition of conditional probability,

    Pr[ ⋀_{j=1}^{J} wt_{T_j}(u^j G - y) ≤ h ]
      = Pr[ wt_{T_J}(u^J G - y) ≤ h | ⋀_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ] · Pr[ ⋀_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ].

Now, as all symbols corresponding to T_J are good symbols, for every i ∈ T_J the value of u_i^J G_i is independent of the values of { u_i^1 G_i, …, u_i^{J-1} G_i }. Further, since G_1, …, G_N are chosen independently (at random), the event wt_{T_J}(u^J G - y) ≤ h is independent of the event ⋀_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h. Thus,

    Pr[ ⋀_{j=1}^{J} wt_{T_j}(u^j G - y) ≤ h ] = Pr[ wt_{T_J}(u^J G - y) ≤ h ] · Pr[ ⋀_{j=1}^{J-1} wt_{T_j}(u^j G - y) ≤ h ].

Inductively applying the argument above gives (5.9).
Further,

    Pr[ I(y, m^1, …, m^J) = 1 ]
      ≤ Q^{-NJ} · Σ_{Z'} Σ_{(u^1,…,u^J): Z(u^1,…,u^J) = Z'} ∏_{j=1}^{J} Pr[ wt_{T_j}(u^j G - y) ≤ h ]   (5.10)
      ≤ Q^{-NJ} · Σ_{Z'} |{ (u^1,…,u^J) : Z(u^1,…,u^J) = Z' }| · ∏_{j=1}^{J} max Pr[ wt_{T_j}(u^j G - y) ≤ h ]   (5.11)
      ≤ Q^{-NJ} · Σ_{Z'} ∏_{j=1}^{J} ( q^{(k+J)d_j} · max_{u^j: |T_j| = d_j} Pr[ wt_{T_j}(u^j G - y) ≤ h ] )   (5.12)
      ≤ Σ_{(d_1,…,d_J) ∈ {0,…,N}^J} ∏_{j=1}^{J} E_{d_j},   (5.13)

where the maxima are over tuples consistent with Z', d_j = |T_j|, and

    E_{d_j} = q^{N} · Q^{-N} · q^{(k+J)d_j} · max_{u^j: |T_j| = d_j} Pr[ wt_{T_j}(u^j G - y) ≤ h ].

In the above, (5.10), (5.11) and (5.13) follow from rearranging and grouping the summands (in (5.13) we use Σ_i |Z_i'| = Σ_j d_j and the fact that there are at most 2^{NJ} ≤ q^{NJ} distinct choices for Z'). (5.12) uses the following argument. Given a fixed Z' = (Z_1', …, Z_N'), the number of tuples (u^1, …, u^J) such that Z(u^1, …, u^J) = Z' is at most ∏_{i=1}^{N} q^{k|Z_i'|} · q^{|Z_i'|(J - |Z_i'|)} ≤ ∏_{i=1}^{N} q^{(k+J)|Z_i'|}, where q^{k|Z_i'|} is an upper bound on the number of choices of |Z_i'| linearly independent vectors from F_q^k, and q^{|Z_i'|(J - |Z_i'|)} follows from the fact that every bad symbol u_i^j with j ∉ Z_i' has to take a value that is an F_q-linear combination of the good symbols { u_i^{j'} }_{j' ∈ Z_i'}.
We now proceed to upper bound E_{d_j} by q^{-nN(rR + 1/J + δ₀)} (for a suitable constant δ₀ > 0) for every 1 ≤ j ≤ J. Note that this will imply the claimed bound (5.4), as there are at most (N+1)^J = q^{o(nN)} choices for the different values of the d_j's.

We first start with the case when d_j ≤ T₀, where

    T₀ = (1 - R - γ)N,

for some parameter 0 < γ < 1 - R to be defined later. In this case we use the trivial fact that Pr[ wt_{T_j}(u^j G - y) ≤ h ] ≤ 1, so that E_{d_j} ≤ q^{N - kN + (k+J)d_j}. Thus, we would be done if we can show that

    N + (k + J)d_j - kN ≤ -nN(rR + 1/J + δ₀).

As d_j ≤ (1 - R - γ)N and k = rn, the left hand side plus nN(rR + 1/J) is at most

    N·(1 + J(1 - R - γ)) + nN·(1/J - rγ),

which is at most -δ₀nN once we choose J ≥ 4/(rγ) and δ₀ = rγ/8, for n large enough.
We now turn our attention to the case when d_j ≥ T₀. The arguments are very similar to the ones employed by Thommesen in the proof of his main theorem in [102]. In this case, by Lemma 5.1 we have

    E_{d_j} ≤ q^{ N + (k+J)d_j - kN - n·d_j·(1 - H_q( h/(d_j·n) )) }.

The above implies that E_{d_j} is q^{-nN(rR + 1/J + δ₀)} provided we show that, for every T₀ ≤ T ≤ N (writing T for d_j),

    h/(Tn) ≤ H_q^{-1}( 1 - r(1 - (1 - R)N/T) - J/n - N/(Tn) - N/(TJ) - δ₀·N/T ),

where we have used Q^{-N} = q^{-kN} and k = rn.
Set δ = ε/2, so that h = ρnN = (H_q^{-1}(1 - rR) - 2δ)nN. Now, if k ≥ J², then J/n = rJ/k ≤ r/J; together with the choices J ≥ 4/(rγ) and δ₀ = rγ/8, the subtracted lower-order terms above add up to at most rγN/T (for n large enough). Using the fact that H_q^{-1} is increasing, it therefore suffices to have, for every T₀ ≤ T ≤ N,

    h/(Tn) ≤ H_q^{-1}( 1 - r(1 - (1 - R - γ)N/T) ) - 2N/(JT),

which is exactly the guarantee (5.14) of Lemma 5.4 (the choice of J will also satisfy the requirement J ≥ 2/(γ(1 - R)) of that lemma). By Lemma 5.4, as long as the conditions on γ there are satisfied, the above holds for our choice h/(nN) = H_q^{-1}(1 - rR) - 2δ, as required. We now verify that the conditions on γ in Lemma 5.4 are satisfied by our choice of γ = δ(1 - R)/(2c_q). First, this choice immediately gives γ ≤ δ/(2c_q). Finally, we show that γ ≤ (1 - R)/2. Indeed,

    γ = δ(1 - R)/(2c_q) ≤ ε(1 - R)/2 ≤ α_q(r)·(1 - R)/2 ≤ (1 - R)/2,

where the first inequality follows from the facts that c_q ≥ 1 and δ ≤ ε, the second inequality follows from the assumption on ε, and the third inequality follows from Lemma 5.2 (as α_q(r) ≤ r ≤ 1).

Note that with these choices J can be taken to be O(1/(ε²(1 - R))), which implies L ≤ Q^{O(1/(ε²(1-R)))} as claimed in the statement of the theorem.
We still need to argue that with high probability the rate of the code C = C_out ∘ (C_in^1, …, C_in^N) is rR. One way to argue this would be to show that with high probability all of the generator matrices have full rank. However, this is not the case: in fact, with some non-negligible probability at least one of them will not have full rank. However, we claim that with high probability C has distance > 0. Note that as C is a linear code, this implies that every pair of distinct messages m^1 ≠ m^2 ∈ F_Q^K is mapped to distinct codewords, which implies that C has Q^K codewords, as desired. We now briefly argue why C has distance > 0. The proof above in fact implies that with high probability C has distance about (H_q^{-1}(1 - rR) - ε)nN. It is easy to see that, to show that C has positive distance, it is enough to show that with high probability min_{m ≠ 0} wt(C(m)) > 0. Note that this is a special case of our proof, with J = 1 and y = 0, and hence, with probability at least 1 - q^{-Ω(nN)}, the code C has large distance. The proof is complete.
;
Remark 5.1. In a typical use of concatenated codes, the block lengths of the inner and
outer codes satisfy wóy…¨ª©£« ~ , in which case the concatenated code of Theorem 5.1 is
Z½å ˜ •eœ . However, the proof of Theorem 5.1 also
list decodable with lists of size ~ ‘ “• ’
, the proof of Theorem 5.1
works with smaller . In particular as long as is at least Œ
goes through. Thus, with in o˜ , one can get concatenated codes that are list decodable
Z½å ˜ • s up to the list-decoding capacity with lists of size ‘ “•YýN’
.
[
Ò
Ø ^
^
Ò
83
Lemma 5.4. Let q be a prime power and let n, N ≥ 1 be integers. Let 0 < r, R < 1 be rationals and let δ > 0 be a real such that R ≤ (α_q(r) - δ)/r and δ ≤ α_q(r), where α_q(r) is as defined in (5.1). Let γ > 0 be a real such that γ ≤ min( δ/(2c_q), (1 - R)/2 ), where c_q is the constant that depends only on q from Lemma 2.4. Then for all integers J ≥ 2/(γ(1 - R)) and h ≤ (H_q^{-1}(1 - rR) - 2δ)nN the following is satisfied. For every integer (1 - R - γ)N ≤ T ≤ N,

    h/(Tn) ≤ H_q^{-1}( 1 - r(1 - (1 - R - γ)N/T) ) - 2N/(JT).   (5.14)
Proof. Using the fact that H_q^{-1} is an increasing function,⁷ (5.14) is satisfied if the following is true (where T₀ = (1 - R - γ)N):

    h/(nN) ≤ min_{T₀ ≤ T ≤ N} (T/N)·H_q^{-1}( 1 - r(1 - T₀/T) ) - 2/J,

where we have used the fact that (T/N)·(2N/(JT)) = 2/J. Define a new variable θ = 1 - T₀/T. Note that as T₀ ≤ T ≤ N, we have 0 ≤ θ ≤ R + γ; also T/N = (1 - R - γ)(1 - θ)^{-1}. Thus, the above inequality would be satisfied if

    h/(nN) ≤ (1 - R - γ) · min_{0 ≤ θ ≤ R+γ} (1 - θ)^{-1} H_q^{-1}(1 - rθ) - 2/J = (1 - R - γ) · min_{0 ≤ θ ≤ R+γ} f_{r,q}(θ) - 2/J.

The assumptions γ ≤ δ/(2c_q) ≤ δ/2 (as δ ≤ 1 and c_q ≥ 1) and R ≤ (α_q(r) - δ)/r imply that R + γ ≤ α_q(r)/r. Thus, by using Lemma 5.3 (with x = r and y = R + γ) we get that

    (1 - R - γ) · min_{0 ≤ θ ≤ R+γ} f_{r,q}(θ) = H_q^{-1}(1 - rR - rγ).

By Lemma 2.4, the fact that γ ≤ δ/(2c_q) and the fact that H_q^{-1} is increasing, we have H_q^{-1}(1 - rR - rγ) ≥ H_q^{-1}(1 - rR) - δ. Moreover, since J ≥ 2/(γ(1 - R)), we have 2/J ≤ γ(1 - R) ≤ δ. This implies that (5.14) is satisfied if h/(nN) ≤ H_q^{-1}(1 - rR) - 2δ, as desired.

⁷We also use the fact that H_q^{-1}(·) is increasing.
5.5 Using Folded Reed-Solomon Code as Outer Code
In this section, we will prove a result similar to Theorem 5.1, with the outer code being
the folded Reed-Solomon code from Chapter 3. The proof will make crucial use of the list
recoverability of folded Reed-Solomon codes. Before we begin we will need the following
definition and results.
5.5.1 Preliminaries
We will need the following notion of independence.
Definition 5.1 (Independent tuples). Let C be a code of block length N and rate R defined over F_{q^k}. Let J ≥ 1 and 0 ≤ d_1, …, d_J ≤ N be integers, and let d = (d_1, …, d_J). An ordered tuple of codewords (c^1, …, c^J), c^j ∈ C, is said to be (d, F_q)-independent if the following holds: d_1 = wt(c^1), and for every 1 < j ≤ J, d_j is the number of positions i such that c_i^j is F_q-independent of the vectors { c_i^1, …, c_i^{j-1} }, where c^j = (c_1^j, …, c_N^j).

Note that for any tuple of codewords (c^1, …, c^J) there exists a unique d such that the tuple is (d, F_q)-independent.

The next result will be crucial in our proof.
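For small examples the profile d of Definition 5.1 can be computed directly. The sketch below (for q = 2, with symbols of F_{2^k} represented as length-k bit tuples; the toy "codewords" are our own and do not come from an actual code) counts, for each codeword, the positions where its symbol falls outside the F_2-span of the earlier symbols:

```python
def rank_f2(vectors):
    """Rank over F_2 of a list of equal-length 0/1 tuples (Gaussian elimination)."""
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    m = len(rows[0]) if rows else 0
    while rank < len(rows) and col < m:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def profile(tuple_of_codewords):
    """d_j = number of positions i where c^j_i is F_2-independent of
    {c^1_i, ..., c^{j-1}_i} (Definition 5.1)."""
    N = len(tuple_of_codewords[0])
    d = []
    for j, cw in enumerate(tuple_of_codewords):
        cnt = 0
        for i in range(N):
            prev = [c[i] for c in tuple_of_codewords[:j]]
            if rank_f2(prev + [cw[i]]) > rank_f2(prev):
                cnt += 1
        d.append(cnt)
    return d

# Toy example: N = 3 positions, k = 2 (symbols are bit pairs)
c1 = [(1, 0), (0, 0), (1, 1)]
c2 = [(0, 1), (1, 0), (1, 1)]
print(profile([c1, c2]))  # → [2, 2]
```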
Lemma 5.5. Let C be a folded Reed-Solomon code of block length N that is defined over F_Q with Q = q^k, as guaranteed by Theorem 3.6. For any L-tuple of codewords from C, where L ≥ J·(N/ε)^{O(ε^{-1}·J·log(1/R))} (and ε > 0 is the same as the one in Theorem 3.6), there exists a sub-tuple of J codewords such that the J-tuple is (d, F_q)-independent, where d = (d_1, …, d_J) with d_j ≥ (1 - R - ε)N for every 1 ≤ j ≤ J.
Proof. The proof is constructive. In particular, given an L-tuple of codewords, we will construct a sub-tuple with the required property. The correctness of the procedure will hinge on the list recoverability of the folded Reed-Solomon code as guaranteed by Theorem 3.6.

We will construct the final sub-tuple iteratively. In the first step, pick any non-zero codeword in the L-tuple and call it c^1. Note that as C has distance (1 - R)N (and 0 ∈ C), c^1 is non-zero in at least d_1 = (1 - R)N > (1 - R - ε)N many places. Note that c^1 is vacuously independent of the "previous" codewords in these positions. Now, say that the procedure has chosen s codewords c^1, …, c^s such that the tuple is (d', F_q)-independent for d' = (d_1, …, d_s), where for every 1 ≤ j ≤ s, d_j ≥ (1 - R - ε)N. For every 1 ≤ i ≤ N, define S_i to be the F_q-span of the vectors { c_i^1, …, c_i^s } in F_q^k. Note that |S_i| ≤ q^s. Call c = (c_1, …, c_N) ∈ C a bad codeword if there does not exist any d_{s+1} ≥ (1 - R - ε)N such that (c^1, …, c^s, c) is (d, F_q)-independent for d = (d_1, …, d_s, d_{s+1}). In other words, c is a bad codeword if and only if some T ⊆ [N] with |T| = (R + ε)N satisfies c_i ∈ S_i for every i ∈ T. Put differently, c satisfies the condition of being in the output list for list recovering C with input (S_1, …, S_N) and agreement fraction R + ε. Thus, by Theorem 3.6, the number of such bad codewords is at most U = (N/ε)^{O(ε^{-1}·J·log(1/R))}, where J is the number of steps for which this greedy procedure can be applied. Thus, as long as at each step there are strictly more than U codewords from the original L-tuple left, we can continue this greedy procedure. Note that we can continue this procedure J times, as long as J ≤ L/U. The proof is complete.
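The greedy procedure above can be sketched as follows (q = 2; the rank helper, the threshold, and the toy tuple of "codewords" are our own illustration and do not involve an actual folded Reed-Solomon code or its list-recovery guarantee):

```python
def rank_f2(vectors):
    """Rank over F_2 of a list of equal-length 0/1 tuples (Gaussian elimination)."""
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    m = len(rows[0]) if rows else 0
    while rank < len(rows) and col < m:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def greedy_subtuple(codewords, threshold):
    """Keep a codeword c if, in at least `threshold` positions, its symbol
    lies outside the F_2-span of the symbols of the codewords chosen so far
    (the 'not bad' condition of the proof)."""
    chosen = []
    for cw in codewords:
        fresh = 0
        for i in range(len(cw)):
            prev = [c[i] for c in chosen]
            if rank_f2(prev + [cw[i]]) > rank_f2(prev):
                fresh += 1
        if fresh >= threshold:
            chosen.append(cw)
    return chosen

tuple_of_cws = [
    [(0, 0), (0, 0), (0, 0)],   # the zero word is never picked
    [(1, 0), (1, 0), (0, 0)],
    [(1, 0), (1, 0), (0, 1)],   # too few fresh positions: a "bad" codeword
    [(0, 1), (0, 1), (1, 0)],
]
sub = greedy_subtuple(tuple_of_cws, threshold=2)
print(len(sub))  # → 2
```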
Finally, we will need a bound on the number of independent tuples for folded Reed-Solomon codes.

Lemma 5.6. Let C be a folded Reed-Solomon code of block length N and rate 0 < R < 1 that is defined over F_Q, where Q = q^k. Let J ≥ 1 and 0 ≤ d_1, …, d_J ≤ N be integers and define d = (d_1, …, d_J). Then the number of (d, F_q)-independent tuples in C is at most

    ∏_{j=1}^{J} ( 2^N · q^{(j-1)(N - d_j)} · Q^{max(d_j - (1-R)N, 0) + 1} ).
Proof. Given a tuple (c^1, …, c^s) ∈ C^s that is (d, F_q)-independent, for every 1 ≤ i ≤ s define T_i ⊆ [N] with |T_i| = d_i to be the set of positions j where c^i_j is linearly independent of the values {c^1_j, …, c^{i−1}_j}. We will estimate the number of (d, F_q)-independent tuples by first estimating a bound U_i on the number of choices for the i-th codeword in the tuple (given a fixed choice of the first i−1 codewords). To complete the proof, we will show that

    U_i ≤ 2^N · q^{(i−1)N} · Q^{d_i − (1−R)N + 1}.

A codeword c ∈ C can be the i-th codeword in the tuple in the following way. For every position in [N] \ T_i, c can take at most q^{i−1} ≤ Q values (as in these positions the value has to lie in the F_q-span of the values of the first i−1 codewords in that position). Since C is folded Reed-Solomon, once we fix the values at positions in [N] \ T_i, the codeword will be completely determined once any RN − (N − d_i) + 1 = d_i − (1−R)N + 1 positions in T_i are chosen (w.l.o.g. assume that they are the "first" so many positions). The number of choices for T_i is at most 2^N. Thus, we have

    U_i ≤ 2^N · q^{(i−1)(N−d_i)} · Q^{d_i − (1−R)N + 1} ≤ 2^N · q^{(i−1)N} · Q^{d_i − (1−R)N + 1},

and hence the number of (d, F_q)-independent tuples is at most

    ∏_{i=1}^{s} U_i ≤ q^{Ns(s+1)/2} · ∏_{i=1}^{s} Q^{d_i − (1−R)N + 1},

as desired.
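In the simplest case s = 1, an independence vector reduces to Hamming weight, and the determined-positions argument bounds the number of weight-d codewords by (number of possible supports) × Q^{d−(1−R)N+1}. The brute-force check below is my own toy illustration of that shape, using an unfolded Reed-Solomon code over GF(5) (so Q = q):

```python
# Illustrative sanity check (not from the thesis) of the s = 1 counting argument:
# for a rate-1/2 Reed-Solomon code over GF(5) with N = 4 evaluation points, count
# codewords of each Hamming weight d and compare with C(N, d) * Q^(d - (1-R)N + 1).

from itertools import product
from math import comb

q = 5                      # field size (= Q here, since there is no folding)
points = [1, 2, 3, 4]      # evaluation points
N, k = len(points), 2      # block length and dimension, rate R = k/N = 1/2

# All codewords (p(1), ..., p(4)) for p(x) = a + b*x over GF(5).
codewords = [tuple((a + b * x) % q for x in points)
             for a, b in product(range(q), repeat=2)]

# For s = 1, the independence vector of (c) is just d = wt(c).
counts = {d: 0 for d in range(N + 1)}
for c in codewords:
    counts[sum(1 for sym in c if sym != 0)] += 1

# Check the bound whenever the exponent d - (1-R)N + 1 is at least 1.
for d in range(N + 1):
    exponent = d - (N - k) + 1
    if exponent >= 1:
        assert counts[d] <= comb(N, d) * q**exponent, (d, counts[d])
```

Here the weight distribution comes out as 16 codewords of weight 3 and 8 of weight 4 (plus the zero codeword), well under the bound; the 2^N factor in the proof simply over-counts the C(N, d) possible supports.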
5.5.2 The Main Result
We will now prove the following result.
Theorem 5.2. Let q be a prime power and let 0 < r ≤ 1 be an arbitrary rational. Let 0 < ε < α_q(r) be an arbitrary real, where α_q(r) is as defined in (5.1), and let R = (α_q(r) − ε)/r be a rational. Then the following holds for large enough integers n and N such that there exist integers k and m that satisfy k = rn (and for which the codes below are well defined). Let C_out be a folded Reed-Solomon code over F_{q^k} of block length N and rate R. Let C_in^1, …, C_in^N be random linear codes over F_q, where C_in^j is generated by a random k×n matrix G_j over F_q and the random choices for G_1, …, G_N are all independent. Then the concatenated code C = C_out ∘ (C_in^1, …, C_in^N) is a

    ( H_q^{−1}(1 − rR) − ε , (N/ε)^{O((1−rR)·ε^{−2}·log(q/R))} )

-list decodable code with probability at least 1 − q^{−Ω(nN)} over the choices of G_1, …, G_N. Further, with high probability, C has rate rR.
In the rest of this section, we will prove the above theorem.
Define Q = q^k, and let wt(·) denote Hamming weight. Let L be the worst-case list size that we are shooting for (we will fix its value at the end). By Lemma 5.5, any (L+1)-tuple of C_out codewords (u^1, …, u^{L+1}) ∈ (C_out)^{L+1} contains at least s = (L+1)/(N/γ)^{O(γ^{−1}·s·log(q/R))} codewords that form a (d, F_q)-independent tuple, for some d = (d_1, …, d_s) with d_i ≥ (1−R−γ)N (we will specify γ, 0 < γ < 1−R, later). Thus, to prove the theorem it suffices to show that with high probability, no Hamming ball in F_q^{nN} of radius (H_q^{−1}(1−rR) − ε)nN contains an s-tuple of codewords (u^1_G, …, u^s_G), where (u^1, …, u^s) is an s-tuple of folded Reed-Solomon codewords that is (d, F_q)-independent (here u_G denotes the codeword of C obtained by encoding the outer codeword u with the inner codes). For the rest of the proof, we will call an s-tuple of C_out codewords (u^1, …, u^s) a good tuple if it is (d, F_q)-independent for some d = (d_1, …, d_s) with d_i ≥ (1−R−γ)N for every 1 ≤ i ≤ s.

Define ρ = H_q^{−1}(1−rR) − ε. For every good s-tuple of C_out codewords (u^1, …, u^s) and received word y ∈ F_q^{nN}, define an indicator variable I(y, u^1, …, u^s) as follows: I(y, u^1, …, u^s) = 1 if and only if for every 1 ≤ i ≤ s, wt(u^i_G − y) ≤ ρnN. That is, it captures the bad event that we want to avoid. Define

    X_C = Σ_{good (u^1, …, u^s)} Σ_{y ∈ F_q^{nN}} I(y, u^1, …, u^s).

We want to show that with high probability X_C = 0. By Markov's inequality, the theorem would follow if we can show that

    E[X_C] = Σ_{good (u^1, …, u^s)} Σ_{y ∈ F_q^{nN}} E[I(y, u^1, …, u^s)] ≤ q^{−Ω(nN)}.    (5.15)
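The first-moment strategy behind (5.15) can be illustrated at toy scale: if the expected number of bad events is tiny, Markov's inequality makes the probability that any occurs just as tiny. The following sketch is my own illustration with arbitrary small parameters (n = 12, k = 3, radius 1), not a computation from the thesis:

```python
# Illustration of the first-moment method: X counts codewords of a random binary
# linear code (over nonzero messages) that land in a small Hamming ball around 0.
# Then Pr[X >= 1] <= E[X], and E[X] is exactly (2^k - 1) * Vol(ball) / 2^n, since
# each fixed nonzero message maps to a uniformly random vector in F_2^n.

import random

random.seed(0)
n, k, radius = 12, 3, 1
ball_volume = 1 + n                          # vectors of weight <= 1 in F_2^n
expected = (2**k - 1) * ball_volume / 2**n   # exact E[X]

def sample_X():
    """Draw a random k x n generator matrix; count nonzero-message codewords of weight <= radius."""
    G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
    count = 0
    for msg in range(1, 2**k):
        cw = [0] * n
        for i in range(k):
            if (msg >> i) & 1:
                cw = [a ^ b for a, b in zip(cw, G[i])]
        if sum(cw) <= radius:
            count += 1
    return count

trials = 4000
xs = [sample_X() for _ in range(trials)]
emp_mean = sum(xs) / trials
emp_pr = sum(1 for x in xs if x >= 1) / trials

assert emp_pr <= emp_mean + 1e-9            # Markov: Pr[X >= 1] <= E[X]
assert abs(emp_mean - expected) < 0.02      # empirical mean matches the exact expectation
```

In the proof, the role of `expected` is played by the right-hand side of (5.15), and showing it is exponentially small is the entire remaining work.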
Before we proceed, we need a final bit of notation. For a good tuple (u^1, …, u^s) and every 1 ≤ i ≤ s, define T_i(u^1, …, u^s) ⊆ [N] to be the set of positions j such that u^i_j is F_q-independent of the set {u^1_j, …, u^{i−1}_j}. Note that since the tuple is good, |T_i(u^1, …, u^s)| ≥ (1−R−γ)N.

Consider the following sequence of inequalities (where below we have suppressed the dependence of T_i on (u^1, …, u^s) for clarity):
    E[X_C] = Σ_{y ∈ F_q^{nN}} Σ_{good (u^1, …, u^s)} Pr[ for every 1 ≤ i ≤ s, wt(u^i_G − y) ≤ ρnN ]    (5.16)

    ≤ Σ_{y ∈ F_q^{nN}} Σ_{good (u^1, …, u^s)} Pr[ for every 1 ≤ i ≤ s, wt_{T_i}(u^i_G − y) ≤ ρnN ]    (5.17)

    = Σ_{y ∈ F_q^{nN}} Σ_{good (u^1, …, u^s)} ∏_{i=1}^{s} Pr[ wt_{T_i}(u^i_G − y) ≤ ρnN ]    (5.18)

    ≤ Σ_{y ∈ F_q^{nN}} Σ_{good (u^1, …, u^s)} ∏_{i=1}^{s} q^{−n·d_i·(1 − H_q(ρN/d_i))}    (5.19)

    ≤ Σ_{y ∈ F_q^{nN}} Σ_{d_1, …, d_s ≥ (1−R−γ)N} |{good tuples with independence vector d}| · ∏_{i=1}^{s} q^{−n·d_i·(1 − H_q(ρN/d_i))}    (5.20)

    ≤ q^{nN} · q^{Ns(s+1)/2} · Σ_{d_1, …, d_s ≥ (1−R−γ)N} ∏_{i=1}^{s} q^{k(d_i − (1−R)N + 1)} · q^{−n·d_i·(1 − H_q(ρN/d_i))}    (5.21)

    ≤ q^{nN + Ns(s+1)/2} · Σ_{d_1, …, d_s ≥ (1−R−γ)N} ∏_{i=1}^{s} q^{k(d_i − (1−R−γ)N)} · q^{−n·d_i·(1 − H_q(ρN/d_i))}    (5.22)

    = q^{nN + Ns(s+1)/2} · Σ_{d_1, …, d_s ≥ (1−R−γ)N} ∏_{i=1}^{s} q^{−n·d_i·(1 − H_q(ρN/d_i) − r(1 − (1−R−γ)N/d_i))}.    (5.23)
In the above, (5.16) follows from the definition of the indicator variable. (5.17) follows from the simple fact that for every vector v of length nN and every T ⊆ [N], wt_T(v) ≤ wt(v). (5.18) follows by an argument similar to the one used to argue (5.9) from (5.8) in the proof of Theorem 5.1: we write out the probability as a product of conditional probabilities and then use the facts that the tuple (u^1, …, u^s) is good and that the choices for G_1, …, G_N are independent.^8 (5.19) follows from Lemma 5.1. (5.20) follows from rearranging the summand and using the fact that the tuple is good (and hence d_i ≥ (1−R−γ)N). (5.21) follows from the fact that there are q^{nN} choices^9 for y, together with Lemma 5.6. (5.22) follows from the facts that d_i − (1−R)N + 1 ≤ d_i − (1−R−γ)N (for N ≥ 1/γ) and that d_i ≥ (1−R−γ)N. (5.23) follows by rearranging the terms.

^8 In Theorem 5.1, the tuple of codewords was not ordered, while the tuples here are ordered. However, it is easy to check that the argument in Theorem 5.1 also works for ordered tuples, as long as the induction is applied in the right order.

^9 As the final code will be linear, it is sufficient to only look at received words that have Hamming weight at most ρnN. However, this gives a negligible improvement to the final result and hence we simply bound the number of choices for y by q^{nN}.
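The arithmetic behind the step used for (5.22) is elementary and worth spelling out:

```latex
% Why d_i - (1-R)N + 1 <= d_i - (1-R-\gamma)N once N >= 1/\gamma:
\[
  d_i - (1-R)N + 1 \;\le\; d_i - (1-R-\gamma)N
  \quad\Longleftrightarrow\quad
  1 \;\le\; \gamma N
  \quad\Longleftrightarrow\quad
  N \;\ge\; 1/\gamma ,
\]
% so each factor Q^{d_i-(1-R)N+1} from Lemma 5.6 may be replaced by the
% cleaner factor Q^{d_i-(1-R-\gamma)N}.
```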
Now, as long as n ≥ N(s+1), we have q^{Ns(s+1)/2} ≤ q^{ns/2}. (5.23) will then imply (5.15) (along with the fact that H_q^{−1} is increasing) if we can show that for every (1−R−γ)N ≤ d ≤ N,

    H_q^{−1}(1 − rR) − ε ≤ (d/N) · H_q^{−1}( 1 − r(1 − (1−R−γ)N/d) ) − δ,

for a suitable δ = Ω(ε). Thus, by Lemma 5.4 (and using the arguments used in the proof of Theorem 5.1 to show that the conditions of Lemma 5.4 are satisfied), we can select δ = Θ(ε) (and γ = Θ(ε(1−rR))), and pick

    ρ = H_q^{−1}(1 − rR) − ε,
as desired. This, along with Lemma 5.5, implies that we can set

    L = (N/ε)^{O((1−rR)·ε^{−2}·log(q/R))},

as required.
Using arguments similar to those in the proof of Theorem 5.1, one can show that the code C = C_out ∘ (C_in^1, …, C_in^N) with high probability has rate rR.
Remark 5.2. The idea of using list recoverability to argue independence can also be used to prove Theorem 5.1. That is, one first shows that with good probability, a random linear outer code has good list recoverability; the argument in this section can then be used to prove Theorem 5.1. However, this gives worse parameters than the proof presented in Section 5.4. In particular, by a straightforward application of the probabilistic method, one can show that a random linear code of rate R over F_Q is (R + γ, ℓ, Q^{O(ℓ/γ)})-list recoverable [49, Sec 9.3.2]. In the proof of Theorem 5.2, ℓ is roughly q^s, where s is roughly 1/ε. Thus, if we used the arguments in the proof of Theorem 5.2, we would be able to prove Theorem 5.1, but with lists of size roughly Q^{O(q^{1/ε}/ε)}, which is worse than the list size of q^{O((1−rR)/ε)} guaranteed by Theorem 5.1.

5.6 Bibliographic Notes and Open Questions
The material presented in this chapter appears in [61].
Theorem 5.1 in some sense generalizes the following result of Blokh and Zyablov [19].
Blokh and Zyablov show that a concatenated code in which both the outer and the inner codes are chosen to be random linear codes with high probability lies on the Gilbert-Varshamov bound; that is, its relative minimum distance is (at least) H_q^{−1}(1 − R) for rate R.

The arguments used in this chapter also generalize Thommesen's proof that concatenated codes obtained by setting Reed-Solomon codes as outer codes and choosing independent random inner codes lie on the Gilbert-Varshamov bound [102]. In particular, by taking y to be the all-zero vector and s = L = 1 in the proof of Theorem 5.2, one can recover Thommesen's proof. Note that when s = 1, a codeword c is ((d), F_q)-independent if wt(c) = d. Thus, Thommesen's proof only required knowledge of the weight distribution of the Reed-Solomon code. For our purposes, however, we need the stronger form of independence used in the proof of Theorem 5.2, for which we relied on the strong list-recoverability property of folded Reed-Solomon codes.
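For concreteness, the Gilbert-Varshamov relative distance H_q^{−1}(1 − R) mentioned above is easy to evaluate numerically. The sketch below is my own illustration for the binary case q = 2 at rate R = 1/2, where the bound comes out to roughly 0.11:

```python
# Numerical illustration: the Gilbert-Varshamov relative distance H_2^{-1}(1 - R),
# computed by bisection on the binary entropy function, which is increasing on [0, 1/2].

from math import log2

def h2(x):
    """Binary entropy function H_2(x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def h2_inv(y, tol=1e-12):
    """Inverse of H_2 on [0, 1/2] via bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

delta = h2_inv(1 - 0.5)   # GV relative distance at rate R = 1/2
print(round(delta, 4))    # approximately 0.11
```

The same bisection applied to H_q^{−1}(1 − rR) − ε gives the list-decoding radius in Theorem 5.2 for any concrete choice of q, r, R, and ε.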
Theorem 5.2 leads to the following intriguing possibility.
Open Question 5.1. Can one list decode the concatenated codes from Theorem 5.2 up to the fraction of errors for which Theorem 5.2 guarantees them to be list decodable (with high probability)?
Current list-decoding algorithms for concatenated codes work in two stages. In the first stage, the inner code(s) are list decoded, and in the second stage the outer code is list recovered (see, for example, Chapter 4). In particular, the fact that the first stage in these algorithms is oblivious to the outer code seems to be a bottleneck. Somehow "merging" the two stages might lead to a positive resolution of the question above.