Introduction to Reed-Solomon Coding (Part I)
L. J. Wang
Introduction

One of the most important families of error control codes is the Reed-Solomon (RS) code.

These codes were developed by Reed and Solomon in June 1960, in the paper
I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields,"
Journal of the Society for Industrial and Applied Mathematics.

Reed-Solomon (RS) codes have many applications, such as compact discs (CD, VCD, DVD), deep-space exploration, HDTV, computer memory, and spread-spectrum systems.

In the decades since their discovery, RS codes have become the most frequently used digital error control codes in the world.
Effect of Noise
[Figure 1. Effect of noise on a digital signal: the digital data, the transmitted signal, the noise, the signal with noise, the sampling times, the reconstructed data, and the bits in error.]
Digital data:        0 1 0 1 1 0 0 1 1 0 0 1 0 0 0
Reconstructed data:  0 1 0 1 1 0 0 0 1 0 0 1 0 1 0

Encoder (repetition code): 0 → 000, 1 → 111
  information bits: k = 1
  check bits: r = 2
  block length of code: n = 3
  codewords: 000, 111
  an (n, k) code with n = 3, k = 1, and r = n − k = 3 − 1 = 2
  code rate: p = k/n = 1/3

Encoder output:  000 111 000 111 111 000 000
Receiver input:  000 101 000 111 111 010 001
Decoder output:  000 111 000 111 111 000 000
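As an illustration of this repetition scheme, here is a minimal sketch in Python (the function names are my own, not from the slides): each bit is encoded as three copies, and each received triple is decoded by majority vote, which corrects any single bit error per block.

```python
def repetition_encode(bits):
    """Encode each information bit as a block of three identical bits (n=3, k=1)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def repetition_decode(received):
    """Decode by majority vote over each 3-bit block; corrects one error per block."""
    decoded = []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        decoded.append(1 if sum(block) >= 2 else 0)
    return decoded

# Example from the slide: transmitted blocks 000 111 000 111 111 000 000
data = [0, 1, 0, 1, 1, 0, 0]
sent = repetition_encode(data)
# Received word with single-bit errors in some blocks: 000 101 000 111 111 010 001
received = [0,0,0, 1,0,1, 0,0,0, 1,1,1, 1,1,1, 0,1,0, 0,0,1]
assert repetition_decode(received) == data  # all single errors are corrected
```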
A (7,4) Hamming Code
n = 7, k = 4, r = n − k = 7 − 4 = 3, p = 4/7.
Message blocks (I1 I2 I3 I4):  0101  1100  1001  0000

Encoder output:  0101 c1 c2 c3   1100 c1 c2 c3   1001 c1 c2 c3   ...
Received word:   0111 c1 c2 c3   1100 c1 c2 c3   1001 c1 c2 c3   ...   (one bit in error)
Decoder output:  0101 c1 c2 c3   1100 c1 c2 c3   1001 c1 c2 c3   ...
Let a1, a2, ..., ak be the k binary message digits.
Let c1, c2, ..., cr be the r parity check bits.
An n-bit codeword can then be written as

a1 a2 a3 ... ak c1 c2 c3 ... cr      (n bits)

The check bits are chosen to satisfy the r = n − k equations

0 = h11·a1 ⊕ h12·a2 ⊕ ... ⊕ h1k·ak ⊕ c1
0 = h21·a1 ⊕ h22·a2 ⊕ ... ⊕ h2k·ak ⊕ c2
        ...
0 = hr1·a1 ⊕ hr2·a2 ⊕ ... ⊕ hrk·ak ⊕ cr        (1)
Equation (1) can be written in matrix notation,

\begin{bmatrix}
h_{11} & h_{12} & \cdots & h_{1k} & 1 & 0 & \cdots & 0 \\
h_{21} & h_{22} & \cdots & h_{2k} & 0 & 1 & \cdots & 0 \\
\vdots &        &        &        &   &   & \ddots & \vdots \\
h_{r1} & h_{r2} & \cdots & h_{rk} & 0 & 0 & \cdots & 1
\end{bmatrix}_{r \times n}
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ c_1 \\ c_2 \\ \vdots \\ c_r \end{bmatrix}_{n \times 1}
=
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}_{r \times 1}

⇒  H·T = 0
Let E be an n × 1 error pattern with at least one error, that is

E = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
  = \begin{bmatrix} 0 \\ \vdots \\ e_j = 1 \\ \vdots \\ 0 \end{bmatrix}

Also let R be the received codeword, that is

R = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}
  = T + E
  = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ c_1 \\ \vdots \\ c_r \end{bmatrix}
  + \begin{bmatrix} 0 \\ \vdots \\ e_j = 1 \\ \vdots \\ 0 \end{bmatrix}
Thus

S = H·R = H·(T + E) = H·T + H·E = H·E

⇒  S = H·E

where S is an r × 1 syndrome pattern.

Problem: for a given S, find E.

\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_r \end{bmatrix}
= \begin{bmatrix} h_{11} \\ h_{21} \\ \vdots \\ h_{r1} \end{bmatrix} e_1
+ \begin{bmatrix} h_{12} \\ h_{22} \\ \vdots \\ h_{r2} \end{bmatrix} e_2
+ \cdots
+ \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} e_n        (2)
Assume e1 = 0, e2 = 1, e3 = 0, ..., en = 0. Then

\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_r \end{bmatrix}
= \begin{bmatrix} h_{12} \\ h_{22} \\ \vdots \\ h_{r2} \end{bmatrix}

The syndrome is equal to the second column of the parity check matrix H.
Thus, the second position of the received codeword is in error.
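The single-error decoding rule above, compute the syndrome and match it against the columns of H, can be sketched in Python as follows (again using the assumed (7,4) parity-check matrix from the earlier sketch, not one given in the slides):

```python
import numpy as np

# Assumed (7,4) Hamming parity-check matrix, H = [A | I3], as in the earlier sketch.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def correct_single_error(R):
    """Compute S = H·R (mod 2) and flip the bit whose H-column matches the syndrome."""
    R = np.array(R)
    S = H.dot(R) % 2
    if not S.any():
        return R                       # zero syndrome: assume no error
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], S): # syndrome equals column j => position j is in error
            R[j] ^= 1
            return R
    return R                           # more than one error: not correctable here

T = np.array([0, 1, 0, 1, 0, 1, 0])    # a codeword of the assumed code (H·T = 0)
R = T.copy(); R[1] ^= 1                # single error in the second position
assert np.array_equal(correct_single_error(R), T)
```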

An (n, k) Hamming code has n = r + k = 2^r − 1, where k is the number of message bits and r = n − k is the number of parity check bits.

The rate of the Hamming code is given by

R = k/n = (2^r − 1 − r) / (2^r − 1) = 1 − r / (2^r − 1)

The Hamming code is a single-error-correcting code.

In order to correct two or more errors, cyclic binary codes, BCH codes, and Reed-Solomon codes were developed to correct t errors, where t ≥ 1.
Single-error-correcting Binary BCH code
In GF(2^4), let p(x) = x^4 + x + 1 be a primitive irreducible polynomial over GF(2).
Then the elements of GF(2^4) are

  0                            = 0000
  α^0  = 1                     = 0001
  α^1  = α                     = 0010
  α^2                          = 0100
  α^3                          = 1000
  α^4  = α + 1                 = 0011
  α^5  = α^2 + α               = 0110
  α^6  = α^3 + α^2             = 1100
  α^7  = α^3 + α + 1           = 1011
  α^8  = α^2 + 1               = 0101
  α^9  = α^3 + α               = 1010
  α^10 = α^2 + α + 1           = 0111
  α^11 = α^3 + α^2 + α         = 1110
  α^12 = α^3 + α^2 + α + 1     = 1111
  α^13 = α^3 + α^2 + 1         = 1101
  α^14 = α^3 + 1               = 1001
  α^15 = α^0 = 1               = 0001
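The table above can be regenerated mechanically: repeatedly multiply by α and reduce with α^4 = α + 1 whenever the x^4 bit appears. A minimal Python sketch of this, with names of my own choosing:

```python
# Generate the nonzero elements of GF(2^4) as 4-bit integers, using p(x) = x^4 + x + 1.
# Multiplying by alpha is a left shift; if the x^4 bit (0b10000) appears,
# reduce it using x^4 = x + 1, i.e. XOR with 0b10011.
def gf16_powers():
    elems = []
    a = 0b0001            # alpha^0 = 1
    for _ in range(15):
        elems.append(a)
        a <<= 1           # multiply by alpha
        if a & 0b10000:
            a ^= 0b10011  # reduce modulo p(x) = x^4 + x + 1
    return elems

powers = gf16_powers()
print([format(p, "04b") for p in powers])
# ['0001', '0010', '0100', '1000', '0011', '0110', '1100', '1011',
#  '0101', '1010', '0111', '1110', '1111', '1101', '1001']
assert powers[12] == 0b1111 and powers[14] == 0b1001  # alpha^12, alpha^14 match the table
```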

The parity check matrix of a (n = 15, k = 11) BCH code for correcting one error is

H = ( α^14, α^13, α^12, ....., α^3, α^2, α^1, α^0 )

where each power α^i stands for its 4-bit column vector from the table above.

Encoder:
Let the codeword of this code be

C14 C13 C12 C11 C10 C9 C8 C7 C6 C5 C4 | C3 C2 C1 C0
        information bits              | parity check bits
Example: encode the information bits 11100101001.

1 1 1 0 0 1 0 1 0 0 1 c3 c2 c1 c0

Applying H·C^T = 0 and solving the four parity check equations for the check bits gives

c3 = 1,  c2 = 0,  c1 = 0,  c0 = 1.

The codeword is 111001010011001.

Decoder:
Let the received word be

R = C + E

where C is the codeword and E is the error pattern. Then

H·R^T = H·(C + E)^T = H·C^T + H·E^T = H·E^T
      = e14·α^14 ⊕ e13·α^13 ⊕ e12·α^12 ⊕ ..... ⊕ e1·α^1 ⊕ e0·α^0

where E = ( e14, e13, e12, ....., e1, e0 ).

Let R = C + E = (111001010011001) + (001000000000000)
              = (110001010011001)
H·R^T = α^14 ⊕ α^13 ⊕ α^9 ⊕ α^7 ⊕ α^4 ⊕ α^3 ⊕ α^0
      = 1001 ⊕ 1101 ⊕ 1010 ⊕ 1011 ⊕ 0011 ⊕ 1000 ⊕ 0001
      = 1111
      = α^12

The syndrome equals the column of H at position 12, so the error pattern is

E = ( 0   0   1   0   0  ......  0 )
     14  13  12  11  10  ...... location

Let the information polynomial be

I(x) = C14·x^14 + C13·x^13 + C12·x^12 + C11·x^11 + C10·x^10 + C9·x^9 + C8·x^8 + C7·x^7 + C6·x^6 + C5·x^5 + C4·x^4

The codeword is

C(x) = C14·x^14 + C13·x^13 + ...... + C5·x^5 + C4·x^4 + C3·x^3 + C2·x^2 + C1·x^1 + C0

where the terms of degree 14 down to 4 form the information polynomial I(x) and the terms of degree 3 down to 0 form the parity check polynomial R(x).

Note that

C(x) = Q(x)·g(x)

where g(x) is called a generator polynomial. C(x) is a codeword if and only if C(x) is a multiple of g(x).

For example, to encode a (15,11) BCH code, the generator polynomial is g(x) = x^4 + x + 1, where α is an element of order 15 in GF(2^4) and g(x) is the minimal polynomial of α.

To encode, one needs to find C3, C2, C1, C0, or

R(x) = C3·x^3 + C2·x^2 + C1·x + C0

such that

C(x) = I(x) + R(x)

satisfies

C(α) = I(α) + R(α) = 0.

To show this, divide I(x) by g(x), which gives

I(x) = Q(x)·g(x) + R(x)

Encoder:

C(x) = I(x) + R(x) = Q(x)·g(x)
C(α) = I(α) + R(α) = Q(α)·g(α) = 0

Since C(x) is a multiple of g(x), C(x) = I(x) + R(x) is a (15,11) BCH codeword.

Example:

I(x) = 1·x^14 + 1·x^13 + 1·x^12 + 0·x^11 + 0·x^10 + 1·x^9 + 0·x^8 + 1·x^7 + 0·x^6 + 0·x^5 + 1·x^4
g(x) = x^4 + x + 1
Q(x) = x^10 + x^9 + ......
R(x) = 1·x^3 + 0·x^2 + 0·x + 1

I(x) = Q(x)·g(x) + R(x)
C(x) = Q(x)·g(x) = I(x) + R(x)
     = x^14 + x^13 + x^12 + 0·x^11 + 0·x^10 + 1·x^9 + 0·x^8 + 1·x^7 + 0·x^6 + 0·x^5 + 1·x^4 + 1·x^3 + 0·x^2 + 0·x + 1
     = 111001010011001
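The remainder R(x) in this example can be checked with GF(2) polynomial arithmetic. Below is a minimal Python sketch (the helper names are my own) that represents polynomials as integers with bit i holding the coefficient of x^i, computes R(x) = I(x) mod g(x) by GF(2) long division, and forms the codeword C(x) = I(x) + R(x):

```python
def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are ints, bit i = coeff of x^i."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift   # subtract (= XOR) the shifted divisor
    return dividend

# I(x) = x^14 + x^13 + x^12 + x^9 + x^7 + x^4  (information bits 11100101001 in degrees 14..4)
I = 0b111001010010000
g = 0b10011                            # g(x) = x^4 + x + 1

R = gf2_mod(I, g)                      # parity check polynomial R(x)
C = I ^ R                              # codeword C(x) = I(x) + R(x)

print(format(R, "04b"))                # 1001  ->  R(x) = x^3 + 1
print(format(C, "015b"))               # 111001010011001, matching the slide
assert gf2_mod(C, g) == 0              # C(x) is a multiple of g(x)
```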

To decode, let the error polynomial be

E(x) = 0·x^14 + 0·x^13 + 1·x^12 + 0·x^11 + 0·x^10 + .... + 0

The received word polynomial is

R'(x) = C(x) + E(x) = x^14 + x^13 + x^9 + x^7 + x^4 + x^3 + 1

The syndrome is

S = R'(α) = C(α) + E(α) = Q(α)·g(α) + E(α) = E(α)
  = α^14 + α^13 + α^9 + α^7 + α^4 + α^3 + 1
  = α^12

α^12 marks the error location: position 12 of the received word is in error.
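The syndrome S = R'(α) can be evaluated with table arithmetic in GF(2^4). The sketch below (helper names are mine) builds powers of α with the same reduction rule as before, sums α^i for each nonzero coefficient of R'(x), and reads off the exponent of the result as the error location.

```python
# Evaluate S = R'(alpha) in GF(2^4) with p(x) = x^4 + x + 1 and read off the error location.
def alpha_power(i):
    """Return alpha^i as a 4-bit integer."""
    a = 0b0001
    for _ in range(i % 15):
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011               # reduce with x^4 = x + 1
    return a

# R'(x) = x^14 + x^13 + x^9 + x^7 + x^4 + x^3 + 1
received_exponents = [14, 13, 9, 7, 4, 3, 0]

S = 0
for i in received_exponents:
    S ^= alpha_power(i)                # addition in GF(2^4) is XOR

# Find which power of alpha the syndrome equals; that exponent is the error position.
location = next(i for i in range(15) if alpha_power(i) == S)
print(format(S, "04b"), location)      # 1111 12  ->  S = alpha^12, error at position 12
```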
Double-error-correcting Binary BCH code

Consider a (n = 15, k = 7) BCH code over GF(2^4), which can correct two errors.

Let C(x) = K(x)·g1(x)·g2(x)

where g1(x) is the minimal polynomial of α, so g1(α) = 0,
and g2(x) is the minimal polynomial of α^3, so g2(α^3) = 0.

The minimal polynomial of α is

g1(x) = (x + α)(x + α^2)(x + α^4)(x + α^8)
      = x^4 + x + 1

The minimal polynomial of α^3 is

g2(x) = (x + α^3)(x + (α^3)^2)(x + (α^3)^4)(x + (α^3)^8)
      = x^4 + x^3 + x^2 + x + 1

The generator polynomial of a (15,7) BCH code is

g(x) = g1(x)·g2(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1)
     = x^8 + x^7 + x^6 + x^4 + 1
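The product g1(x)·g2(x) can be verified with carry-less (GF(2)) polynomial multiplication; a short Python check, using the same integer bit representation as the earlier encoding sketch:

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials represented as integers (bit i = coeff of x^i)."""
    result = 0
    while b:
        if b & 1:
            result ^= a        # add (XOR) the shifted copy of a
        a <<= 1
        b >>= 1
    return result

g1 = 0b10011                   # g1(x) = x^4 + x + 1
g2 = 0b11111                   # g2(x) = x^4 + x^3 + x^2 + x + 1

g = gf2_mul(g1, g2)
print(format(g, "09b"))        # 111010001  ->  g(x) = x^8 + x^7 + x^6 + x^4 + 1
assert g == 0b111010001
```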
Reed-Solomon (RS) code

An RS code is a cyclic symbol error-correcting code.

An RS codeword will consist of I information or message
symbols, together with P parity or check symbols. The
word length is N=I+P.

The symbols in an RS codeword are usually not binary, i.e., each symbol is represented by more than one bit. In fact, a favorite choice is to use 8-bit symbols. This is related to the fact that most computers have word lengths of 8 bits or multiples of 8 bits.

In order to be able to correct t symbol errors, the minimum distance D of the code must satisfy D = 2t + 1.

If the minimum distance of an RS code is D, and the word length is N, then the number of message symbols I in a word is given by

I = N − (D − 1)
P = D − 1
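As a quick worked example of these relations (the specific choice of N = 255 with 8-bit symbols is an illustration, not a code given in the slides): for a word length of N = 255 symbols and a design goal of correcting t = 16 symbol errors, D = 2·16 + 1 = 33, P = 32 check symbols, and I = 223 message symbols.

```python
def rs_parameters(N, t):
    """Compute the minimum distance, parity symbols, and message symbols of an RS code."""
    D = 2 * t + 1          # minimum distance needed to correct t symbol errors
    P = D - 1              # number of parity (check) symbols
    I = N - (D - 1)        # number of information (message) symbols
    return D, P, I

print(rs_parameters(255, 16))   # (33, 32, 223): the common (255, 223) RS code
print(rs_parameters(15, 2))     # (5, 4, 11): a short RS code over GF(2^4)
```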