Coding Theory - Nachrichtentechnische Systeme, NTS

Solutions of the exercise problems for the lecture "Coding Theory"
Prof. Dr.-Ing. A. Czylwik
Yun Chen
Department Communication Systems
Room: BA 235, Phone: -1051, eMail: chen@nts.uni-due.de
Solution Problem 1
The discrete information source has a source alphabet X = {x1 , x2 , x3 }. To transmit the
symbols of the information source via a binary channel, the symbols are binary coded.
That means that a unique binary codeword is assigned to each symbol of the information
source.
The goal for source coding is to find a code that
• allows a unique decoding of the transmitted binary codewords
• minimizes the lengths of the binary codewords to reduce the redundancy of the
code
1.1 The information content I(xi) of a symbol xi with the probability of occurrence p(xi) is defined as

    I(xi) = ld(1/p(xi)) = −ld(p(xi))                                   (1)
          = −log2(p(xi)) = −log10(p(xi)) / log10(2)                    (2)

So, the lower the probability of occurrence of a symbol, the greater the information content, and vice versa.
The unit of the information content is bit/symbol.
Using the given probabilities of occurrence of the symbols x1, x2 and x3 yields:

    x1:  p(x1) = 0.2  =⇒  I(x1) = −log(0.2)/log(2) = 2.32 bit/symbol    (3)
    x2:  p(x2) = 0.1  =⇒  I(x2) = −log(0.1)/log(2) = 3.32 bit/symbol    (4)
    x3:  p(x3) = 0.7  =⇒  I(x3) = −log(0.7)/log(2) = 0.51 bit/symbol    (5)
1.2 The entropy of the information source is a measure for the average information
content of the source. The definition of the entropy of the source with N symbols
x1 , x2 , . . . , xN is:
    H(X) = <I(xi)> = Σ_{i=1}^{N} p(xi) · I(xi) = Σ_{i=1}^{N} p(xi) · ld(1/p(xi))      (6), (7)

The entropy becomes a maximum if the symbols are equiprobable.
Using the values for p(xi) and I(xi) yields:

    H(X) = 0.2 · 2.32 bit/symbol + 0.1 · 3.32 bit/symbol + 0.7 · 0.51 bit/symbol = 1.16 bit/symbol      (8)
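The information contents and the entropy above can be checked with a few lines of Python. This is a minimal sketch; only the probabilities from the problem statement are assumed.

```python
import math

p = {"x1": 0.2, "x2": 0.1, "x3": 0.7}

# information content I(xi) = -log2(p(xi)) in bit/symbol
info = {s: -math.log2(q) for s, q in p.items()}

# entropy H(X) = sum of p(xi) * I(xi)
H = sum(q * info[s] for s, q in p.items())

print(info)          # {'x1': 2.32..., 'x2': 3.32..., 'x3': 0.51...}
print(round(H, 2))   # 1.16 bit/symbol
```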
1.3 The redundancy of the source Rs is a measure for the difference between the number
of binary decisions H0 that are necessary to select a symbol of the source (without
taking the probability of the symbols into account) and its entropy H(X).
Rs = H0 − H(X)
For a source with N symbols, the number of binary decisions H0 is calculated to
H0 = ld(N )
This is the maximum value for the entropy of an information source with N symbols
that are equiprobable.
The given source with N = 3 symbols yields:
    H0 = ld(3) = 1.58 bit/symbol

So, the redundancy of the source is:

    Rs = 1.58 bit/symbol − 1.16 bit/symbol = 0.42 bit/symbol
1.4 Shannon and Huffman codes are codes that fulfill the Shannon coding theorem:
“For every given information source with the entropy H(X), it is possible to find a
binary prefix-code, with an average codeword length L, with:”
H(X) ≤ L ≤ H(X) + 1
(9)
Here, a prefix code is a code in which no codeword is the beginning of another codeword.
1.4.1 Carrying out a Shannon coding

A Shannon code fulfills inequality (9) by finding a code in which the length of every codeword L(xi) lies between its information content I(xi) and its information content plus one:

    I(xi) ≤ L(xi) ≤ I(xi) + 1      (10)

The Shannon code can be derived in a schematic way:
(a) Sort the symbols xi such that their probabilities of occurrence p(xi) decrease.
(b) Calculate the information content I(xi) of the symbols.
(c) The length of every codeword has to fulfill inequality (10). Thus the length of each codeword L(xi) equals the next integer above the information content I(xi), i.e. L(xi) = ⌈I(xi)⌉.
e.g.
I(xi ) = 1.3 =⇒ L(xi ) = 2
I(xi ) = 1.7 =⇒ L(xi ) = 2
I(xi ) = 2.1 =⇒ L(xi ) = 3
(d) Calculate the accumulated probability P (xi ) of every symbol. The accumulated probability is the sum of the probabilities of all previously
observed symbols in the sorted list.
(e) The code for the symbol xi is the binary value of the accumulated probability P (xi ), cut after L(xi ) digits.
Using this schematic way yields:

    xi  | p(xi) | P(xi)            | I(xi) | L(xi) | binary expansion of P(xi) (2^-1 ... 2^-6) | code
    x3  | 0.7   | 0                | 0.52  | 1     | 0 0 0 0 0 0                               | 0
    x1  | 0.2   | 0.7              | 2.32  | 3     | 1 0 1 1 0 0                               | 101
    x2  | 0.1   | 0.9 = 0.7 + 0.2  | 3.32  | 4     | 1 1 1 0 0 1                               | 1110

Example for the calculation of the binary value of 0.7 (symbol x1):

    0.7    − 2^-1 = 0.2    > 0   =⇒ 1
    0.2    − 2^-2          < 0   =⇒ 0
    0.2    − 2^-3 = 0.075  > 0   =⇒ 1
    0.075  − 2^-4 = 0.0125 > 0   =⇒ 1
    0.0125 − 2^-5          < 0   =⇒ 0
    0.0125 − 2^-6          < 0   =⇒ 0
    ...
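The construction can be automated. The following is a small sketch of the procedure described above (sort, L(xi) = ⌈I(xi)⌉, binary expansion of the accumulated probability); the function name is chosen freely for illustration.

```python
import math

def shannon_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> codeword string."""
    symbols = sorted(probs, key=probs.get, reverse=True)   # (a) decreasing probability
    code, P = {}, 0.0
    for s in symbols:
        L = math.ceil(-math.log2(probs[s]))                # (b), (c): L(xi) = ceil(I(xi))
        bits, frac = [], P                                 # (e) binary expansion of P(xi)
        for _ in range(L):
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        code[s] = "".join(bits)
        P += probs[s]                                      # (d) accumulated probability
    return code

print(shannon_code({"x1": 0.2, "x2": 0.1, "x3": 0.7}))
# {'x3': '0', 'x1': '101', 'x2': '1110'}
```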
The determined Shannon code for the given information source is:

    x1:  p(x1) = 0.2   −→  101     (11)
    x2:  p(x2) = 0.1   −→  1110    (12)
    x3:  p(x3) = 0.7   −→  0       (13)

The symbol with the maximum probability has the minimum codeword length and vice versa.
The Shannon code is not an optimal code, because not all possible end points of the codeword tree are used.
[Figure 1: Code tree of the determined Shannon code (n.u. = not used). Branch 0 → x3, branch 101 → x1, branch 1110 → x2; the remaining end points are not used.]
1.4.2 Carrying out a Huffman coding

The schematic procedure of a Huffman coding is shown in the following example:
STEP 1. The symbols are sorted by their probabilities, such that the probabilities decrease.

    xi     | x3  | x1  | x2
    p(xi)  | 0.7 | 0.2 | 0.1
    Code   |     |     |

STEP 2. A "1" is assigned to the symbol with the minimum probability and a "0" is assigned to the second smallest probability.

    xi     | x3  | x1  | x2
    p(xi)  | 0.7 | 0.2 | 0.1
    Code   |     | 0   | 1

STEP 3. The two symbols with the minimum probability are combined to a new pair of symbols. The probability of the new pair of symbols is the sum of the single probabilities.

    xi     | x3  | x1, x2
    p(xi)  | 0.7 | 0.3 = 0.1 + 0.2
    Code   |     | 0 / 1
Now, the sequence starts again with STEP 1 (sorting).

    xi     | x3  | x1, x2
    p(xi)  | 0.7 | 0.3
    Code   |     | 0 / 1

The assignment of "0" and "1" to combined symbols has to be made for every symbol contained in the combined symbols. The assignment is made from the left side.

    xi     | x3  | x1, x2
    p(xi)  | 0.7 | 0.3
    Code   | 0   | 10 / 11
STEP 4. Combining the symbols:

    xi     | x3, x1, x2
    p(xi)  | 1.0 = 0.7 + 0.3
    Code   | 0 / 10 / 11

The coding is finished when all symbols are combined.
The probability of all combined symbols has to be 1.
The determined Huffman code for the given information source is:

    x1:  p(x1) = 0.2   −→  10    (14)
    x2:  p(x2) = 0.1   −→  11    (15)
    x3:  p(x3) = 0.7   −→  0     (16)

The symbol with the maximum probability gets the minimum codeword length and vice versa.
The Huffman code is called an "optimal code", because all possible end points of the codeword tree are used.
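A compact Huffman implementation using Python's heapq reproduces this result. This is only a sketch of the procedure above (the smallest probability gets the bit "1", the second smallest the bit "0"); it is not the lecture's notation.

```python
import heapq

def huffman_code(probs):
    # heap entries: [probability, tie-breaker, {symbol: partial codeword}]
    heap = [[q, i, {s: ""}] for i, (s, q) in enumerate(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        q1, _, c1 = heapq.heappop(heap)   # minimum probability  -> bit "1"
        q0, _, c0 = heapq.heappop(heap)   # second smallest      -> bit "0"
        merged = {s: "1" + w for s, w in c1.items()}
        merged.update({s: "0" + w for s, w in c0.items()})
        heapq.heappush(heap, [q0 + q1, id(merged), merged])
    return heap[0][2]

print(huffman_code({"x1": 0.2, "x2": 0.1, "x3": 0.7}))
# {'x2': '11', 'x1': '10', 'x3': '0'}
```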
[Figure 2: Code tree of the determined Huffman code. Branch 0 → x3, branch 10 → x1, branch 11 → x2.]
1.5 The redundancy of a code RC is a measure for the difference between the average codeword length L of the code and the entropy of the information source H(X):

    RC = L − H(X)

The average codeword length L is defined as the statistical average of the codeword lengths L(xi) of the symbols xi:

    L = Σ_{i=1}^{N} p(xi) · L(xi)
• Shannon code
  Using the determined codewords (eq. (11) .. eq. (13)) yields:

    L_Shannon = 0.2 · 3 + 0.1 · 4 + 0.7 · 1 = 1.7 bit/symbol
    =⇒ RC,Shannon = 1.7 bit/symbol − 1.16 bit/symbol = 0.54 bit/symbol

• Huffman code
  Using the determined codewords (eq. (14) .. eq. (16)) yields:

    L_Huffman = 0.2 · 2 + 0.1 · 2 + 0.7 · 1 = 1.3 bit/symbol
    =⇒ RC,Huffman = 1.3 bit/symbol − 1.16 bit/symbol = 0.14 bit/symbol

So, the "optimal" Huffman code has a significantly smaller redundancy than the Shannon code.
Solution Problem 2
The information source that is used in exercise 1 created the symbols statistically independently. Thus, the probability of a sequence of two specific symbols xi , xj is:
p(xi , xj ) = p(xi ) · p(xj )
(17)
In this case, the probability of a symbol xi under the condition that the previous transmitted symbol xj is known is just the probability of the symbol xi
p(xi |xj ) = p(xi )
(18)
Now, the information source is more realistic: it creates the symbols statistically dependently. This means that the probability of the currently transmitted symbol xi changes if the sequence of previously transmitted symbols changes.
Generally, the probability of the current symbol xi depends on the knowledge of the previously transmitted sequence of symbols {xj, xk, xl, ...} and the probability of this sequence:

    p(xi, xj, xk, xl, ...) = p(xi | {xj, xk, xl, ...}) · p({xj, xk, xl, ...})      (19)

where the first factor is the transition probability.
Usually, and also in this case, the information source is a Markov source of 1st order. This means: only the knowledge of the last transmitted symbol is necessary to determine the transition probability. Thus, the probability of the current symbol xi depends only on the knowledge of the last transmitted symbol xj and the probability of this symbol:

    p(xi | {xj, xk, xl, ...}) = p(xi | xj)
The following equations are always valid:

    p(xi) = Σ_{j=1}^{N} p(xi, xj) = Σ_{j=1}^{N} p(xi | xj) · p(xj)      (20)

    Σ_{i=1}^{N} p(xi | xj) = 1                                          (21)
2.1 The Markov diagram is shown below. The given probabilities are marked bold (given: p(x1|x1) = 0, p(x1|x2) = 0.8, p(x2|x2) = 0, p(x2|x3) = 0).

[Markov diagram with states x1, x2, x3 and transition probabilities:
    p(x1|x1) = 0      p(x2|x1) = 0.5    p(x3|x1) = 0.5
    p(x1|x2) = 0.8    p(x2|x2) = 0      p(x3|x2) = 0.2
    p(x1|x3) = 0.171  p(x2|x3) = 0      p(x3|x3) = 0.829 ]

The other (not bold) transition probabilities are determined as follows:

    p(x1|x2) + p(x2|x2) + p(x3|x2) = 1
    =⇒ p(x3|x2) = 1 − p(x1|x2) = 0.2                                          (I)

    p(x2) = p(x2|x1)·p(x1) + p(x2|x2)·p(x2) + p(x2|x3)·p(x3) = p(x2|x1)·p(x1)
    =⇒ p(x2|x1) = p(x2)/p(x1) = 0.1/0.2 = 0.5                                 (II)

    =⇒ p(x3|x1) = 1 − p(x2|x1) − p(x1|x1) = 1 − p(x2|x1) = 0.5                (III)

    p(x1) = p(x1|x2)·p(x2) + p(x1|x3)·p(x3)                                   (IV)
    =⇒ p(x1|x3) = (p(x1) − p(x1|x2)·p(x2)) / p(x3) = 0.12/0.7 = 12/70 = 0.171 (V)

    =⇒ p(x3|x3) = 1 − p(x1|x3) − p(x2|x3) = 1 − p(x1|x3) = 58/70 = 0.829      (VI)
2.2 Two symbols X, Y are combined into a new symbol Z,
e.g. assume the output of the information source:

    ( x3, x1 | x3, x2 | x3, x2 | ... )
       z_n     z_{n−1}  z_{n−2}
    (current pair of symbols, last pair, pair before the last pair)

    H(X, Y) = Σ_{i=1}^{3} Σ_{j=1}^{3} p(xi, xj) · ld(1/p(xi, xj))

with p(xi, xj) = p(xi|xj) · p(xj):

    p(x1, x1) = 0     · 0.2 = 0
    p(x1, x2) = 0.8   · 0.1 = 0.08
    p(x1, x3) = 0.171 · 0.7 = 0.12
    p(x2, x1) = 0.5   · 0.2 = 0.1
    p(x2, x2) = 0     · 0.1 = 0
    p(x2, x3) = 0     · 0.7 = 0
    p(x3, x1) = 0.5   · 0.2 = 0.1
    p(x3, x2) = 0.2   · 0.1 = 0.02
    p(x3, x3) = 0.829 · 0.7 = 0.58

    =⇒ H(X, Y) = 1.892 < 2 · H(X) = 2.314

where 2 · H(X) corresponds to H(X, Y) if X and Y were statistically independent.
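The joint entropy H(X,Y) can be verified numerically. A minimal sketch; the symbol and transition probabilities are the ones derived in 2.1.

```python
import math

p  = {"x1": 0.2, "x2": 0.1, "x3": 0.7}                              # symbol probabilities
pt = {("x1", "x1"): 0.0, ("x1", "x2"): 0.8, ("x1", "x3"): 12 / 70,
      ("x2", "x1"): 0.5, ("x2", "x2"): 0.0, ("x2", "x3"): 0.0,
      ("x3", "x1"): 0.5, ("x3", "x2"): 0.2, ("x3", "x3"): 58 / 70}  # p(xi|xj)

H_XY = 0.0
for (xi, xj), cond in pt.items():
    pij = cond * p[xj]                 # p(xi, xj) = p(xi|xj) * p(xj)
    if pij > 0:
        H_XY += pij * math.log2(1 / pij)

H_X = sum(q * math.log2(1 / q) for q in p.values())
print(round(H_XY, 3), round(2 * H_X, 3))   # 1.892 < 2.314
```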
2.3 Coding of statistically dependent symbols
    =⇒ coding using the knowledge of the transition probabilities

Definition:
    xi ≙ current transmitted symbol of the sequence
    xj ≙ last transmitted symbol of the sequence
    xk ≙ last-but-one transmitted symbol of the sequence
    xl ≙ symbol that was transmitted before xk

    =⇒ e.g. assume a sequence of transmitted symbols ( xi, xj | xk, xl | ... )
                                                        z_n      z_{n−1}
Coding using:

    z_n     = {xi, xj} ≙ current transmitted pair of symbols
    z_{n−1} = {xk, xl} ≙ last transmitted pair of symbols

    p(z_n | z_{n−1}) = p({xi, xj} | {xk, xl})
                     = p({xi, xj} | xk)
                     = p(xi, xj, xk) / p(xk)
                     = p(xi | {xj, xk}) · p(xj, xk) / p(xk)
                     = p(xi | {xj, xk}) · p(xj | xk)
                     = p(xi | xj) · p(xj | xk)

The codeword for the pair of symbols z_n = {xi, xj} depends on the symbol xk that was transmitted before {xi, xj}.
    xk | xj xi | p({xi,xj}|xk)        | Code | L({xi,xj}|xk) · p
    x1 | x2 x1 | 0.5 · 0.8   = 0.4    | 00   | 0.8
    x1 | x2 x3 | 0.5 · 0.2   = 0.1    | 010  | 0.3
    x1 | x3 x1 | 0.5 · 0.171 = 0.086  | 011  | 0.258
    x1 | x3 x3 | 0.5 · 0.829 = 0.415  | 1    | 0.415     =⇒ Lx1 = 1.77 bit/pair of symbols

    x2 | x1 x2 | 0.8 · 0.5   = 0.4    | 1    | 0.4
    x2 | x1 x3 | 0.8 · 0.5   = 0.4    | 00   | 0.8
    x2 | x3 x1 | 0.2 · 0.171 = 0.034  | 011  | 0.102
    x2 | x3 x3 | 0.2 · 0.829 = 0.166  | 010  | 0.498     =⇒ Lx2 = 1.8 bit/pair of symbols

    x3 | x1 x2 | 0.171 · 0.5   = 0.086 | 101 | 0.258
    x3 | x1 x3 | 0.171 · 0.5   = 0.086 | 100 | 0.258
    x3 | x3 x1 | 0.829 · 0.171 = 0.142 | 11  | 0.284
    x3 | x3 x3 | 0.829 · 0.829 = 0.687 | 0   | 0.687     =⇒ Lx3 = 1.487 bit/pair of symbols
    L{xi, xj} = p(x1) · Lx1 + p(x2) · Lx2 + p(x3) · Lx3
              = 0.354 + 0.18 + 1.041
              = 1.575 bit/pair of symbols

    =⇒ L = L{xi, xj} / 2 = 0.788 bit/symbol
Solution Problem 3
3.1 Generally (Markov source of 1st order):
The probability for a symbol xi depends on the state of the source.
    Current state (point in time):  k
    Previous state (point in time): k − 1

    p_k(xi) = Σ_{j=1}^{N} p_k(xi | xj) · p_{k−1}(xj)

This equation can be written in matrix form by using:
    • the probability vector at the k-th state:     w_k
    • the probability vector at the (k−1)-th state: w_{k−1}
    • the transition matrix:                        P

    w_k     = (p_k(x1), p_k(x2), ..., p_k(xN))
    w_{k−1} = (p_{k−1}(x1), p_{k−1}(x2), ..., p_{k−1}(xN))
    P = [ p(x1|x1)  p(x2|x1)  ···  p(xN|x1) ]       [ p11  p12  ···  p1N ]
        [ p(x1|x2)  p(x2|x2)  ···  p(xN|x2) ]   =   [ p21  p22  ···  p2N ]
        [   ...       ...     ···    ...    ]       [ ...  ...  ···  ... ]
        [ p(x1|xN)  p(x2|xN)  ···  p(xN|xN) ]       [ pN1  pN2  ···  pNN ]

    =⇒ w_k = w_{k−1} · P

Here: stationary 1st-order Markov source in the steady state (the source was switched on a long time ago), so the probability for a symbol does not depend on the state k anymore:

    k −→ ∞   =⇒   p_k(xi) = p_{k−1}(xi) = p(xi)   ≙   w_k = w_{k−1} = w
The matrix equation can be rewritten by using the steady state probabilities wi = p(xi) as:

    w = w · P      =⇒   (w1, w2, w3, w4) = (w1, w2, w3, w4) · P

The transition matrix P can be obtained from the given Markov diagram (rows: previous symbol z_{n−1}, columns: current symbol z_n):

           x1    x2    x3    x4
    x1  | 0.75  0.25   0     0
    x2  |  0    0.75  0.25   0
    x3  |  0     0    0.5   0.5
    x4  | 0.5    0     0    0.5

    =⇒ P = [ 0.75 0.25 0    0
             0    0.75 0.25 0
             0    0    0.5  0.5
             0.5  0    0    0.5 ]

The steady state probabilities wi = p(xi) can be obtained by evaluating the matrix equation and using the normalization property of probabilities (the sum over all probabilities equals one):

    w1 = (3/4)·w1 + (1/2)·w4        =⇒  w1 = 2·w4
    w2 = (1/4)·w1 + (3/4)·w2        =⇒  w2 = w1 = 2·w4
    w3 = (1/4)·w2 + (1/2)·w3        =⇒  w3 = (1/2)·w2
    w4 = (1/2)·w3 + (1/2)·w4        =⇒  w3 = w4
    w1 + w2 + w3 + w4 = 1           =⇒  2·w4 + 2·w4 + w4 + w4 = 1

    =⇒ w4 = p(x4) = 1/6
       w3 = p(x3) = 1/6
       w2 = p(x2) = 1/3
       w1 = p(x1) = 1/3
3.2 The steady state entropy is the expected value over all conditional entropies:

    H∞(z) = E{H(z_n | z_{n−1})} = Σ_{i=1}^{N} wi · H(z_n | z_{n−1} = xi)

    H(z_n | z_{n−1} = xi) = Σ_{j=1}^{N} p(xj | xi) · ld(1/p(xj | xi)) = Σ_{j=1}^{N} pij · ld(1/pij)

    =⇒ H(z_n | z_{n−1} = x1) = 0.75 · ld(1/0.75) + 0.25 · ld(1/0.25) = 0.811
       H(z_n | z_{n−1} = x2) = 0.811
       H(z_n | z_{n−1} = x3) = 0.5 · ld(1/0.5) + 0.5 · ld(1/0.5) = 1
       H(z_n | z_{n−1} = x4) = 1

    =⇒ H∞(z) = (1/3) · 0.811 + (1/3) · 0.811 + (1/6) · 1 + (1/6) · 1 = 0.874 bit/symbol

3.3 Source without memory: the symbols are statistically independent.

    =⇒ H(X) = Σ_{i=1}^{N} p(xi) · ld(1/p(xi))
            = 0.528 + 0.528 + 0.431 + 0.431
            = 1.918 bit/symbol
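Both the steady-state probabilities from 3.1 and the entropy rate H∞(z) from 3.2 can be checked numerically. A minimal sketch in pure Python, using power iteration instead of solving the linear system:

```python
import math

P = [[0.75, 0.25, 0.0, 0.0],    # transition matrix, row = previous state
     [0.0, 0.75, 0.25, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.5, 0.0, 0.0, 0.5]]

w = [0.25] * 4                                # start with any distribution
for _ in range(1000):                         # iterate w <- w * P until convergence
    w = [sum(w[j] * P[j][i] for j in range(4)) for i in range(4)]
print([round(x, 4) for x in w])               # [0.3333, 0.3333, 0.1667, 0.1667]

# entropy rate H_inf = sum over i of w_i * H(z_n | z_{n-1} = x_i)
H_inf = sum(w[i] * sum(p * math.log2(1 / p) for p in P[i] if p > 0) for i in range(4))
print(round(H_inf, 3))                        # 0.874 bit/symbol
```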
Solution Problem 4
Lempel-Ziv algorithm:
Source encoding without knowing the statistical properties of the information source (type of source, probabilities of the symbols, transition probabilities, ...).
Input of the encoder: sequence of symbols (output of the information source).
Encoding is done by using repetitions of subsequences within the input sequence to encode the input sequence.
[Figure: sliding window consisting of a search buffer (positions 0–7) followed by a look-ahead buffer (positions 0–7); the input sequence is shifted through the sliding window.]
The algorithm searches for repetitions of subsequences of the search buffer within the look-ahead buffer.
Instead of the whole input sequence it transmits codewords c containing the starting position of the subsequence in the search buffer, the number of repeated symbols and the next symbol after the repetition sequence in the look-ahead buffer.
General definitions:
Size of sliding window is : n
Length of look ahead buffer : Ls
=⇒ Length of search buffer: n − Ls
base of the symbol alphabet N
e.g. digital alphabet: N = 2, octal alphabet: N = 8, hex alphabet: N = 16, ASCII:
N = 256
Length of the codeword (depends on the base of the symbol alphabet N ):
• (n − Ls ) different possible starting positions in the search buffer
=⇒ logN (n − Ls ) symbols are needed to tell the starting positions
• Possible number of repetitions: 0 . . . Ls − 1
(not 0 . . . Ls , because one symbol in the look ahead buffer is needed for the next
symbol)
=⇒ Ls different values for number of repetitions
=⇒ logN (Ls ) symbols are needed to tell the number of repetitions
• 1 symbol is needed to tell the next symbol
=⇒ length of the codewords: logN (n − Ls ) + logN (Ls ) + 1
    =⇒ c = ( starting position , number of repetitions , next symbol )
             logN(n − Ls) symbols   logN(Ls) symbols      1 symbol
4.1 The parameters for the given problem are:
N = 8
n = 16
Ls = 8
So, the codewords consist of 3 symbols:
• one symbol for the starting position: logN (n − Ls ) = log8 (8) = 1
• one symbol for the number of repetitions: logN (Ls ) = log8 (8) = 1
• one symbol for the next symbol
Encoding is done as follows:
• At the beginning of the encoding procedure, the search buffer is filled up with
’0’.
• The input sequence is shifted into the look ahead buffer and the algorithm
searches for subsequences within the search buffer that match to the beginning
sequence of the look ahead buffer. The longest repetition sequence is used. If
there are two or more repetition sequences with the same maximum length,
one is chosen arbitrarily. The repetition sequence starting in the search buffer
can overlap into the look ahead buffer.
• The codeword is determined and the input sequence is shifted by (number of
repetition symbols +1 ) into the sliding window.
• ...
    step | search buffer (pos. 0–7) | look-ahead buffer | codeword
    1    | 0 0 0 0 0 0 0 0          | 0 0 4 0 4 0 5 3   | c1 = {5, 2, 4}
    2    | 0 0 0 0 0 0 0 4          | 0 4 0 5 3 4 0 5   | c2 = {6, 3, 5}
    3    | 0 0 0 4 0 4 0 5          | 3 4 0 5 3 4 0 5   | c3 = {2, 0, 3}
    4    | 0 0 4 0 4 0 5 3          | 4 0 5 3 4 0 5 7   | c4 = {4, 7, 7}
    5    | 4 0 5 3 4 0 5 7          | 5 1               | c5 = {2, 1, 1}

(In each step the longest repetition sequence starting in the search buffer, possibly overlapping into the look-ahead buffer, is encoded together with the next symbol.)
So, the encoder uses 5 code words with 3 symbols each (altogether 15 symbols) to
encode an input sequence of 18 symbols. The performance of the encoder improves,
if there are many repetitions within the input sequence.
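The procedure above can be sketched in a few lines of Python. This is only an illustrative LZ77-style encoder (search buffer of n − Ls = 8 symbols, look-ahead buffer of Ls = 8 symbols, overlapping matches allowed); the variable names are chosen freely, and ties between equally long matches are broken arbitrarily, so the starting positions may differ from the table above while the decoded result stays the same.

```python
def lz77_encode(seq, search=8, lookahead=8):
    window = [0] * search                 # search buffer is initially filled with '0'
    codewords, i = [], 0
    while i < len(seq):
        la = seq[i:i + lookahead]
        best_pos, best_len = 0, 0
        for pos in range(search):
            length = 0
            # the match may overlap from the search buffer into the look-ahead buffer
            while (length < len(la) - 1 and
                   (window + la)[pos + length] == la[length]):
                length += 1
            if length > best_len:
                best_pos, best_len = pos, length
        next_symbol = la[best_len]
        codewords.append((best_pos, best_len, next_symbol))
        consumed = la[:best_len + 1]
        window = (window + consumed)[-search:]   # shift the sliding window
        i += best_len + 1
    return codewords

seq = [0, 0, 4, 0, 4, 0, 5, 3, 4, 0, 5, 3, 4, 0, 5, 7, 5, 1]
print(lz77_encode(seq))
# [(0, 2, 4), (6, 3, 5), (0, 0, 3), (4, 7, 7), (2, 1, 1)]
```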
4.2 Decoding is done as follows:
• At the beginning of the decoding procedure, the search buffer is filled up with
’0’ and the look ahead buffer is empty.
• The codeword to be decoded tells the starting position, the length of the
repetition sequence and the next symbol. Using this information, the repetition
sequence together with the next symbol is written into the look ahead buffer.
This is the decoded sequence.
• The symbol sequence is shifted into the search buffer such that the look ahead
buffer is empty.
• ...
Date: 12. Juli 2011
    codeword       | search buffer before decoding (pos. 0–7) | decoded symbols (written into the look-ahead buffer)
    c1 = {5, 2, 4} | 0 0 0 0 0 0 0 0                          | 0 0 4
    c2 = {6, 3, 5} | 0 0 0 0 0 0 0 4                          | 0 4 0 5
    c3 = {2, 0, 3} | 0 0 0 4 0 4 0 5                          | 3
    c4 = {4, 7, 7} | 0 0 4 0 4 0 5 3                          | 4 0 5 3 4 0 5 7
    c5 = {2, 1, 1} | 4 0 5 3 4 0 5 7                          | 5 1
The total decoded sequence is:

    s ≙ 0 0 4 0 4 0 5 3 4 0 5 3 4 0 5 7 5 1

This is the input sequence that was encoded.
Solution Problem 5
5.1 The probabilities for the input symbols are:

    p(x1) = p1
    p(x2) = 1 − p(x1) = 1 − p1

Using the transition probabilities one gets the probabilities for the output symbols:

    p(y1) = p(x1) · p(y1|x1) = p1 · (1 − perr)
    p(y2) = (1 − p1) · (1 − perr)
    p(y3) = p1 · perr + (1 − p1) · perr = perr

The transinformation is the output entropy minus the information added on the channel:

    T(X, Y) = H(Y) − H(Y|X)
Using the definition of the output entropy and the probabilities of the output symbols:

    H(Y) = Σ_{i=1}^{3} p(yi) · ld(1/p(yi))
         = p1·(1 − perr) · ld(1/(p1·(1 − perr)))
           + (1 − p1)·(1 − perr) · ld(1/((1 − p1)·(1 − perr)))
           + perr · ld(1/perr)
         = p1·(1 − perr) · [ld(1/p1) + ld(1/(1 − perr))]
           + (1 − p1)·(1 − perr) · [ld(1/(1 − p1)) + ld(1/(1 − perr))]
           + perr · ld(1/perr)

    H(Y) = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
           + perr · [ld(1/perr) − ld(1/(1 − p1)) − ld(1/(1 − perr))]
           + ld(1/(1 − p1)) + ld(1/(1 − perr))
Using the definition of the irrelevance H(Y|X), with p(yi, xj) = p(xj) · p(yi|xj):

    H(Y|X) = Σ_{i=1}^{3} Σ_{j=1}^{2} p(yi, xj) · ld(1/p(yi|xj))
           = p1·(1 − perr) · ld(1/(1 − perr))
             + p1·perr · ld(1/perr)
             + (1 − p1)·perr · ld(1/perr)
             + (1 − p1)·(1 − perr) · ld(1/(1 − perr))

    H(Y|X) = perr · [ld(1/perr) − ld(1/(1 − perr))] + ld(1/(1 − perr))

Using the output entropy and the irrelevance, the transinformation can be determined:

    T(X, Y) = H(Y) − H(Y|X)
            = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
              + perr · [ld(1/perr) − ld(1/(1 − p1)) − ld(1/(1 − perr))]
              + ld(1/(1 − p1)) + ld(1/(1 − perr))
              − perr · [ld(1/perr) − ld(1/(1 − perr))] − ld(1/(1 − perr))
            = (1 − perr) · p1 · [ld(1/p1) − ld(1/(1 − p1))]
              + perr · [−ld(1/(1 − p1))] + ld(1/(1 − p1))
            = (1 − perr) · [p1 · ld(1/p1) − p1 · ld(1/(1 − p1)) + ld(1/(1 − p1))]
            = (1 − perr) · [p1 · ld(1/p1) + (1 − p1) · ld(1/(1 − p1))]
            = −(1 − perr) · [p1 · ld(p1) + (1 − p1) · ld(1 − p1)]
5.2 The channel capacity is the maximum of the transinformation with respect to the probabilities of the input symbols:

    C = (1/∆T) · max_{p(xi)} T(X, Y)

Wanted: the maximum of a function  =⇒  set the first derivative to zero:

    ∂T(X, Y)/∂p1 = T′(X, Y) = 0

    T′(X, Y) = −(1 − perr) · [ld(p1) + p1 · (1/p1) · (1/ln 2) − ld(1 − p1) − (1 − p1) · (1/(1 − p1)) · (1/ln 2)]
             = −(1 − perr) · [ld(p1) + 1/ln 2 − ld(1 − p1) − 1/ln 2]
             = −(1 − perr) · [ld(p1) − ld(1 − p1)] = 0

    =⇒ ld(p1) = ld(1 − p1)   =⇒   p1 = 1 − p1   =⇒   p1 = 1/2

    T(X, Y)|_{p1 = 1/2} = (1 − perr)

    =⇒ C = (1 − perr) · (1/∆T)
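A quick numerical check of T(X,Y) = −(1 − perr)·[p1·ld(p1) + (1 − p1)·ld(1 − p1)] and of the maximizing input distribution; a small sketch with perr = 0.1 as in 5.3.

```python
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def transinformation(p1, perr):
    # T(X,Y) of the binary erasure channel, per the derivation above
    return (1 - perr) * binary_entropy(p1)

perr = 0.1
best = max((transinformation(p1 / 1000, perr), p1 / 1000) for p1 in range(1, 1000))
print(best)   # maximum (1 - perr) = 0.9 bit per channel use, reached at p1 = 0.5
```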
5.3 Shannon's theorem of channel capacity:
If the average information flow R (information bits of a source per second) is smaller than the channel capacity C, there exists a source and channel coding/decoding method such that the information of the source can be transmitted via the channel and the residual error probability after decoding can be made arbitrarily small.
Shannon proved that such a source and channel (de-)coding method exists, but he did not say anything about how to construct it.
The entropy H(X̂) of the source is the average information content per symbol. The source is the same as in the first problem and its entropy was determined to:

    H(X̂) = 1.16 bit/symbol
The source emits symbols with a symbol rate Rsymbol, [Rsymbol] = symbol/second. So, the information flow of the source is:

    R = H(X̂) · Rsymbol = 1.16 bit/symbol · Rsymbol

The channel capacity of the binary erasure channel is C = (1 − perr) · (1/∆T), where ∆T = 1/Rbinary,c is the binary symbol period after source and channel encoding (i.e. the binary symbol period of the transmission via the channel).
With perr = 0.1 and a binary symbol rate over the channel of Rbinary,c = 1000 symbols/second, the channel capacity is:

    C = (1 − perr) · Rbinary,c = 900 bit/second

So, the information flow of the source must be less than 900 bit/second:

    R = 1.16 bit/symbol · Rsymbol ≤ C = 900 bit/second
    −→ Rsymbol ≤ (900 bit/second) / (1.16 bit/symbol) = 775.862 symbols/second
Solution Problem 6
6.1 Each linear block code can be described by:

    c = u · G

    u : uncoded information word, k bits
    c : code word for the information word u, n bits
    G : generator matrix, k × n matrix (k rows, n columns)

Each information word corresponds to a unique code word and vice versa.
The number of rows of the generator matrix is the number of information bits k of the information words; the number of columns is the number of code word bits n of the code words.

    =⇒ k = 3
    =⇒ n = 7
    =⇒ N = 2^k = 2^3 = 8 code words
    =⇒ code rate RC = k/n = 3/7 = 0.43  ≙  ratio of the number of information bits to the number of code word bits
With the information word u = (u0 u1 u2), the matrix multiplication is as follows:

    c = u · G
      = (u0 u1 u2) · [ 1 1 0 1 0 0 1
                       1 0 1 0 0 1 1
                       1 1 1 0 1 0 0 ]
      = u0 · (1 1 0 1 0 0 1)    (1st row of G)
      + u1 · (1 0 1 0 0 1 1)    (2nd row of G)
      + u2 · (1 1 1 0 1 0 0)    (3rd row of G)

Summation and multiplication are done in the binary domain (0 + 0 = 0, 0 + 1 = 1, 1 + 1 = 0).
    ui            | ci = ui · G          | wH(ci)
    u0 = 0 0 0    | c0 = 0 0 0 0 0 0 0   | 0
    u1 = 0 0 1    | c1 = 1 1 1 0 1 0 0   | 4
    u2 = 0 1 0    | c2 = 1 0 1 0 0 1 1   | 4
    u3 = 0 1 1    | c3 = 0 1 0 0 1 1 1   | 4
    u4 = 1 0 0    | c4 = 1 1 0 1 0 0 1   | 4
    u5 = 1 0 1    | c5 = 0 0 1 1 1 0 1   | 4
    u6 = 1 1 0    | c6 = 0 1 1 1 0 1 0   | 4
    u7 = 1 1 1    | c7 = 1 0 0 1 1 1 0   | 4

The complete code is given by all linear combinations of the rows of G.
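The codeword table and the minimum distance can also be generated automatically. A minimal sketch over GF(2) (lists of 0/1, arithmetic modulo 2):

```python
from itertools import product

G = [[1, 1, 0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0, 1, 1],
     [1, 1, 1, 0, 1, 0, 0]]

def encode(u, G):
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

codewords = [encode(list(u), G) for u in product([0, 1], repeat=len(G))]
for u, c in zip(product([0, 1], repeat=len(G)), codewords):
    print(u, c, sum(c))           # information word, codeword, Hamming weight

d_min = min(sum(c) for c in codewords if any(c))
print("dmin =", d_min)            # 4
```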
6.2 The minimum distance dmin of the code is the minimum number of digits in which two code words differ. It is shown in the lecture that the minimum distance equals the minimum weight of the non-zero code words:

    dmin = min{ wH(ci) | ci ≠ 0 } = 4

    =⇒ number of errors in a code word that can be detected at the decoder side:
       te = dmin − 1 = 3

    number of errors that can be corrected at the decoder side:
       t = (dmin − 2)/2   if dmin is even
       t = (dmin − 1)/2   if dmin is odd

Here dmin is even:

    =⇒ t = (dmin − 2)/2 = 1
6.3 "Each linear block code can be converted into an equivalent systematic code."

    G −→ G′,   c −→ c′,   G′ is a k × n matrix

The generator matrix of the systematic code G′ has the following structure:

    G′ = ( Ik | P )      Ik : identity matrix (k × k),   P : parity bit matrix (k × (n − k))

    I3 = [ 1 0 0
           0 1 0
           0 0 1 ]

The rows of G′ are generated by combinations of the rows of G, such that the first part of G′ is the identity matrix Ik:

    (1st + 2nd + 3rd) row of G  −→  ( 1 0 0 | 1 1 1 0 )
    (2nd + 3rd) row of G        −→  ( 0 1 0 | 0 1 1 1 )
    (1st + 3rd) row of G        −→  ( 0 0 1 | 1 1 0 1 )      = G′

So, for the given code the parity bit matrix P is:

    P = [ 1 1 1 0
          0 1 1 1
          1 1 0 1 ]
The code words of the systematic code are obtained by the matrix equation c′ = u · G′ with

    G′ = [ 1 0 0 | 1 1 1 0
           0 1 0 | 0 1 1 1
           0 0 1 | 1 1 0 1 ]

    ua = ( 1 0 1 )   =⇒   c′a = ( 1 0 1 | 0 0 1 1 )
    ub = ( 0 1 1 )   =⇒   c′b = ( 0 1 1 | 1 0 1 0 )

(the last four bits of each code word are the parity check bits)
6.4 The parity check matrix H′ is used for error detection and error correction.
Property of every parity check matrix H:

    c · H^T = 0     if c is a valid code word
    x · H^T ≠ 0     if x is not a valid code word

Generation of H′:

    G′ = ( Ik | P )         generator matrix
    H′ = ( P^T | In−k )     parity check matrix

With the above determined parity bit matrix P:

    P = [ 1 1 1 0          P^T = [ 1 0 1
          0 1 1 1    −→            1 1 1
          1 1 0 1 ]                1 1 0
                                   0 1 1 ]

    =⇒ H′ = [ 1 0 1 | 1 0 0 0
              1 1 1 | 0 1 0 0
              1 1 0 | 0 0 1 0
              0 1 1 | 0 0 0 1 ]

H′ is the parity check matrix for the code c, for the code c′ and for all equivalent codes (codes with the same set of code words)!
6.5 Transmission model:

    u −→ [G′] −→ x −→ (channel, error vector e) −→ y −→ [H′]

    =⇒ y is the output of the channel:   y = x + e    (x: code word, e: error vector)

Syndrome vector:

    s = y · H′^T = (x + e) · H′^T = x · H′^T + e · H′^T = e · H′^T

since x · H′^T = 0 for an error-free code word x.
    =⇒ syndrome table for single errors  ≙  e contains only one "1"

    e.g.  e = ( 0 0 1 0 0 0 0 )    (error at bit no. 2)
    =⇒ s = e · H′^T = ( 1 1 0 1 )  ≙  third column of H′

    error at bit no. | syndrome s
    0                | 1 1 1 0
    1                | 0 1 1 1
    2                | 1 1 0 1
    3                | 1 0 0 0
    4                | 0 1 0 0
    5                | 0 0 1 0
    6                | 0 0 0 1
    no error         | 0 0 0 0
6.6 Received word y (perhaps with errors):

    step 1: calculate the syndrome  s = y · H′^T
    step 2: check s
            if s = 0  =⇒ accept the received word (perhaps more than te = 3 errors)
            if s ≠ 0  =⇒ search in the syndrome table
                a) s included in the table      =⇒ determine the error vector e
                b) s not included in the table  =⇒ more than t = 1 errors  =⇒ not correctable
    step 3: correction of the error
            ycorr = y + e
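These three steps translate directly into code. A small sketch that builds the single-error syndrome table from H′ (the systematic parity check matrix from 6.4) and corrects the received words of 6.6:

```python
H = [[1, 0, 1, 1, 0, 0, 0],     # H' (4 x 7): rows are the parity check equations
     [1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]

def syndrome(y):
    return tuple(sum(h[j] * y[j] for j in range(7)) % 2 for h in H)

# syndrome table for single errors: syndrome -> error position
table = {syndrome([1 if j == i else 0 for j in range(7)]): i for i in range(7)}

def decode(y):
    s = syndrome(y)
    if not any(s):
        return y                                   # accepted as error free
    if s not in table:
        return None                                # more than t = 1 errors, not correctable
    i = table[s]
    return [b ^ (1 if j == i else 0) for j, b in enumerate(y)]

for y in ([0, 0, 0, 1, 1, 0, 1], [1, 1, 1, 0, 0, 0, 0], [1, 0, 0, 0, 1, 0, 0]):
    print(decode(y))
# [0, 0, 1, 1, 1, 0, 1], [1, 1, 1, 0, 1, 0, 0], None
```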
With

    H′^T = [ 1 1 1 0
             0 1 1 1
             1 1 0 1
             1 0 0 0
             0 1 0 0
             0 0 1 0
             0 0 0 1 ]

the syndromes of the received words are:

    ya = ( 0 0 0 1 1 0 1 )   =⇒   sa = ya · H′^T = ( 1 1 0 1 )
    yb = ( 1 1 1 0 0 0 0 )   =⇒   sb = yb · H′^T = ( 0 1 0 0 )
    yc = ( 1 0 0 0 1 0 0 )   =⇒   sc = yc · H′^T = ( 1 0 1 0 )

The according error vectors are obtained from the syndrome table:

    sa = ( 1 1 0 1 )   =⇒   ea = ( 0 0 1 0 0 0 0 )
    sb = ( 0 1 0 0 )   =⇒   eb = ( 0 0 0 0 1 0 0 )
    sc = ( 1 0 1 0 )   =⇒   not included in the table

    =⇒ ya,corr = ya + ea = ( 0 0 1 1 1 0 1 )
       yb,corr = yb + eb = ( 1 1 1 0 1 0 0 )
       yc,corr  =⇒ not correctable
6.7

    s = y · H′^T
    =⇒ ( s0 s1 s2 s3 ) = ( y0 y1 y2 y3 y4 y5 y6 ) · H′^T

    =⇒ s0 = y0 + y2 + y3
       s1 = y0 + y1 + y2 + y4
       s2 = y0 + y1 + y5
       s3 = y1 + y2 + y6          (parity equations)
Solution Problem 7
Generally, for a binary (n, k) block code with the capability to correct t errors:

    2^k · Σ_{i=0}^{t} (n choose i) ≤ 2^n

Hamming codes are perfect codes, which satisfy the equality:

    2^k · Σ_{i=0}^{t} (n choose i) = 2^n

=⇒ All received words fall into decoding spheres with a code word in the center.

[Figure: decoding spheres of radius t = 2 around the code words, with distance dH = 5 between the code words.]

For each decoding sphere there is a valid code word in the center, and all other words in the sphere differ from it in at most t bits. Each received word that falls within a decoding sphere is decoded to the code word in its center.
=⇒ These invalid words can differ from the valid code word in the center in dH = 1 bit, dH = 2 bits, ..., dH = t bits.
Generally, if an invalid word differs from a code word in t bits, t "1"s have to be distributed over n bits.
    =⇒ (n choose t) possible invalid words that differ from the code word in t bits
    =⇒ number of vectors within one decoding sphere:

       1 + (n choose 1) + (n choose 2) + ... + (n choose t)
       (the code word itself, plus the words different in 1 bit, 2 bits, ..., t bits; note (n choose 0) = 1)

With 2^k code words:

       2^k · [ (n choose 0) + (n choose 1) + ... + (n choose t) ]
       = 2^k · Σ_{i=0}^{t} (n choose i)     (number of vectors within the decoding spheres)
       ≤ 2^n                                (total number of vectors)
7.1 Hamming code:

    2^k · Σ_{i=0}^{t} (n choose i) ≤ 2^n

given: k = 4, t = 1

    2^k · [ (n choose 0) + (n choose 1) ] = 2^n
    =⇒ 1 + n = 2^(n−k)

with n = m + k (m parity bits):

    =⇒ 1 + (m + k) = 2^m
    =⇒ 1 + k = 2^m − m
    =⇒ 2^m − m = 5

    m | 2^m − m
    1 | 1
    2 | 2
    3 | 5
    4 | 12

    =⇒ m = 3,   n = k + m = 4 + 3 = 7
7.2 Hamming codes can be written as systematic codes:

    G = ( Ik | P )   ⇐⇒   H = ( P^T | In−k )

    syndrome: s = e · H^T

    • n = 7 different syndromes are used for the 7 positions of single errors;
      the all-zero vector means "no error".
    • Single error at position i
      =⇒ the syndrome si is row no. i of H^T  ⇐⇒  si is column no. i of H
      =⇒ all n rows of H^T must be different and no row may be the all-zero vector
      =⇒ use all different non-zero vectors si for the columns of H

    =⇒ H = [ 1 1 1 0 | 1 0 0
             1 1 0 1 | 0 1 0
             1 0 1 1 | 0 0 1 ]

    G = ( Ik | P ), with P obtained from H = ( P^T | In−k ):

    =⇒ G = [ 1 0 0 0 | 1 1 1
             0 1 0 0 | 1 1 0
             0 0 1 0 | 1 0 1
             0 0 0 1 | 0 1 1 ]
7.3 Generally (without proof): all Hamming codes have dmin = 3.

    =⇒ number of errors that can be detected:   te = dmin − 1 = 2
    =⇒ number of errors that can be corrected:  t = (dmin − 1)/2 = 1    (dmin odd)
Solution Problem 8
Notation of the sum construction of a code:

Assume:
    ca = linear block code with ( n, ka, dmin,a )
    cb = linear block code with ( n, kb, dmin,b )

A new linear block code cnew with ( nnew = 2n, knew = ka + kb, dmin,new ) is created by cnew = ca & cb:

    cnew = ( ca | ca + cb )     ← all combinations of code words
         = ( ca,0, ca,1, ..., ca,n−1 | (ca,0 + cb,0), (ca,1 + cb,1), ..., (ca,n−1 + cb,n−1) )
         = ( cnew,0, cnew,1, ..., cnew,2n−1 )      (2n bits)

    Gnew = [ Ga  Ga
             0   Gb ]
Reed-Muller codes are linear block codes.

Notation: RM(r, m),  0 ≤ r ≤ m
    =⇒ n = 2^m,    k = Σ_{i=0}^{r} (m choose i),    dmin = 2^(m−r)

    RM(0, 0):  n = 1, k = 1, dmin = 1   =⇒   G00 = (1)
    RM(0, 1):  n = 2, k = 1, dmin = 2   =⇒   G01 = (1 1)

General:
    r = 0  =⇒  k = 1, n = 2^m, dmin = 2^m
           =⇒  repetition code   =⇒   G0m = ( 1 1 ... 1 )     (2^m ones)
    r = m  =⇒  dmin = 1, n = 2^m, k = 2^m (without proof)  =⇒  n = k
           =⇒  uncoded   =⇒   Gmm = Im    (the 2^m × 2^m identity matrix)
Recursive sum construction:

    RM(r + 1, m + 1) = RM(r + 1, m) & RM(r, m)
    =⇒ G_{r+1, m+1} = [ G_{r+1, m}   G_{r+1, m}
                        0            G_{r, m}   ]

Construction by submatrices:

    G0 = ( 1 1 ... 1 )                    (n = 2^m ones)
    G1 = m × 2^m matrix whose columns contain all possible words of length m
    Gl = each row is the element-wise product of l different rows of G1

For 8.4: G23 = [ G0 ; G1 ; G2 ] with

    G0 = ( 1 1 1 1 1 1 1 1 )

    G1 = [ 0 0 0 0 1 1 1 1      (row 1)
           0 0 1 1 0 0 1 1      (row 2)
           0 1 0 1 0 1 0 1 ]    (row 3)

    G2 = [ 0 0 0 0 0 0 1 1      (row 1 × row 2)
           0 0 0 0 0 1 0 1      (row 1 × row 3)
           0 0 0 1 0 0 0 1 ]    (row 2 × row 3)
8.1

    G13 = [ G12  G12
            0    G02 ]        with G02 = ( 1 1 1 1 )

    =⇒ G13 = [ 1 0 1 0 1 0 1 0
               0 1 0 1 0 1 0 1
               0 0 1 1 0 0 1 1
               0 0 0 0 1 1 1 1 ]
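The recursive sum construction G_{r+1,m+1} = [G_{r+1,m} G_{r+1,m}; 0 G_{r,m}] is easy to implement. A minimal sketch in pure Python (the function name is chosen freely):

```python
def rm_generator(r, m):
    """Generator matrix of RM(r, m) via the recursive sum construction."""
    if r == 0:                                    # repetition code
        return [[1] * (2 ** m)]
    if r == m:                                    # uncoded: identity matrix
        return [[1 if i == j else 0 for j in range(2 ** m)] for i in range(2 ** m)]
    A = rm_generator(r, m - 1)                    # G_{r, m-1}
    B = rm_generator(r - 1, m - 1)                # G_{r-1, m-1}
    top = [row + row for row in A]                # ( A | A )
    bottom = [[0] * len(row) + row for row in B]  # ( 0 | B )
    return top + bottom

for row in rm_generator(1, 3):                    # compare with G13 from 8.1
    print(row)
```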
8.2

    G23 = [ G22  G22
            0    G12 ]

    G22 = ?    r = 2, m = 2   =⇒   uncoded   =⇒   k = 2^2 = 4 = n

    =⇒ G22 = I4 = [ 1 0 0 0
                    0 1 0 0
                    0 0 1 0
                    0 0 0 1 ]

    =⇒ G23 = [ 1 0 0 0 1 0 0 0
               0 1 0 0 0 1 0 0
               0 0 1 0 0 0 1 0
               0 0 0 1 0 0 0 1
               0 0 0 0 1 0 1 0
               0 0 0 0 0 1 0 1
               0 0 0 0 0 0 1 1 ]       (k = 7 rows, n = 8 columns)
8.3 RM(2, 3)  =⇒  r = 2, m = 3

    n = 2^m = 8
    k = Σ_{i=0}^{r} (m choose i) = (3 choose 0) + (3 choose 1) + (3 choose 2)
      = 1 + 3!/(1!·2!) + 3!/(2!·1!) = 1 + 3 + 3 = 7
    dmin = 2^(3−2) = 2 = min{ wH(c) | c ∈ RM(2, 3), c ≠ 0 }
8.4 Alternative way to determine G23 by submatrices:

    Grm = [ G0 ; G1 ; ... ; Gr ]

    =⇒ G23 = [ 1 1 1 1 1 1 1 1      (a)
               0 0 0 0 1 1 1 1      (b)
               0 0 1 1 0 0 1 1      (c)
               0 1 0 1 0 1 0 1      (d)
               0 0 0 0 0 0 1 1      (e)
               0 0 0 0 0 1 0 1      (f)
               0 0 0 1 0 0 0 1 ]    (g)

    ≠ G23 from 8.2

The two generator matrices describe the same code; their rows are related by linear combinations:

    rows a ... g in terms of the rows 1 ... 7 of G23 from 8.2:
        a = 1+2+3+4,  b = 5+6,  c = 3+4,  d = 2+4,  e = 7,  f = 6,  g = 4
    rows 1 ... 7 of G23 from 8.2 in terms of the rows a ... g:
        1 = a+c+d+g,  2 = d+g,  3 = c+g,  4 = g,  5 = b+f,  6 = f,  7 = e

By simple matrix conversions it can be shown that this is equivalent to the result from 8.2.
Solution Problem 9
9.1 RM(1, 3): r = 1, m = 3

    n = 2^m = 8,    k = Σ_{i=0}^{r} (m choose i) = (3 choose 0) + (3 choose 1) = 4,    dmin = 2^(m−r) = 2^2 = 4

submatrix construction:

    G13 = [ G0 ]  =  [ 1 1 1 1 1 1 1 1
            G1 ]       0 0 0 0 1 1 1 1
                       0 0 1 1 0 0 1 1
                       0 1 0 1 0 1 0 1 ]

9.2

    u = ( 1 0 0 1 )
    x = u · G13 = ( 1 0 1 0 1 0 1 0 )
9.3 Majority vote decoding

    ( x0 x1 x2 x3 x4 x5 x6 x7 ) = ( u0 u1 u2 u3 ) · G13

    a)  x0 = u0
    b)  x1 = u0 + u3
    c)  x2 = u0 + u2
    d)  x3 = u0 + u2 + u3
    e)  x4 = u0 + u1
    f)  x5 = u0 + u1 + u3
    g)  x6 = u0 + u1 + u2
    h)  x7 = u0 + u1 + u2 + u3

Goal: determine the information word u from the received word x.
    {a) + b)}  =⇒  u3 = x0 + x1          {a) + c)}  =⇒  u2 = x0 + x2          {a) + e)}  =⇒  u1 = x0 + x4
    {c) + d)}  =⇒  u3 = x2 + x3          {b) + d)}  =⇒  u2 = x1 + x3          {b) + f)}  =⇒  u1 = x1 + x5
    {e) + f)}  =⇒  u3 = x4 + x5          {e) + g)}  =⇒  u2 = x4 + x6          {c) + g)}  =⇒  u1 = x2 + x6
    {g) + h)}  =⇒  u3 = x6 + x7          {f) + h)}  =⇒  u2 = x5 + x7          {d) + h)}  =⇒  u1 = x3 + x7

These are the equations on which to apply a majority vote. After determination of u1, u2, u3, determine u0 from

    v = x + ( 0 u1 u2 u3 ) · G13

    a)  =⇒  u0 = x0
    b)  =⇒  u0 = x1 + u3
    c)  =⇒  u0 = x2 + u2
    d)  =⇒  u0 = x3 + u2 + u3
    e)  =⇒  u0 = x4 + u1
    f)  =⇒  u0 = x5 + u1 + u3
    g)  =⇒  u0 = x6 + u1 + u2
    h)  =⇒  u0 = x7 + u1 + u2 + u3
9.4

    x = ( 1 0 1 0 1 0 1 0 )
    y = ( 1 0 1 1 1 0 1 0 )      (error at position 3)
    u3 = y0 + y1 = 1
    u3 = y2 + y3 = 0
    u3 = y4 + y5 = 1
    u3 = y6 + y7 = 1         majority vote: u3 = 1

    u2 = y0 + y2 = 0
    u2 = y1 + y3 = 1
    u2 = y4 + y6 = 0
    u2 = y5 + y7 = 0         majority vote: u2 = 0

    u1 = y0 + y4 = 0
    u1 = y1 + y5 = 0
    u1 = y2 + y6 = 0
    u1 = y3 + y7 = 1         majority vote: u1 = 0

Now determine u0:

    v = y + ( 0 u1 u2 u3 ) · G13
      = ( 1 0 1 1 1 0 1 0 ) + ( 0 0 0 1 ) · G13
      = ( 1 0 1 1 1 0 1 0 ) + ( 0 1 0 1 0 1 0 1 )
      = ( 1 1 1 0 1 1 1 1 )

    majority vote on the elements of v:   =⇒   u0 = 1

    =⇒ u = ( 1 0 0 1 )
9.5 dmin = 4

    td = dmin − 1 = 3 errors can be detected.
    tc = (dmin − 2)/2 = 1 error can be corrected.
Solution Problem 10
10.1 n = 7  =⇒  7 codeword bits

    Generally:  g(x) = g0 + g1·x + g2·x^2 + g3·x^3 + ...
    Here:       g(x) = 1 + x + x^3 = 1 + 1·x + 0·x^2 + 1·x^3

    =⇒ g0 = 1, g1 = 1, g2 = 0, g3 = 1
    7 codeword bits   =⇒   g4 = 0, g5 = 0, g6 = 0

The rows of the generator matrix contain the coefficients g0 ... g6, shifted by one position per row:

         g0 g1 g2 g3 g4 g5 g6
    G = [ 1  1  0  1  0  0  0
          0  1  1  0  1  0  0
          0  0  1  1  0  1  0
          0  0  0  1  1  0  1 ]
10.2

    ua = ( 0 1 1 0 )
    =⇒ ca = ua · G = ( 0 1 0 1 1 1 0 )

    ub = ( 1 0 1 0 )
    =⇒ cb = ub · G = ( 1 1 1 0 0 1 0 )
10.3 Conversion to the polynomial description:

    Vector ua = ( 0 1 1 0 ) = ( u0 u1 u2 u3 ),  length k = 4
    polynomial  =⇒  ua(x) = u0·x^0 + u1·x^1 + u2·x^2 + u3·x^3 = x + x^2      (degree of the polynomial: k − 1)

    Vector ub = ( 1 0 1 0 )
    polynomial  =⇒  ub(x) = 1 + x^2
Coding by multiplication of the information polynomial with the generator polynomial:

    c(x) = u(x) · g(x)

    ca(x) = ua(x) · g(x) = (x + x^2) · (1 + x + x^3)
          = x·(1 + x + x^3) + x^2·(1 + x + x^3)
          = (x + x^2 + x^4) + (x^2 + x^3 + x^5)          (modulo-2 summation, 1 + 1 = 0)
          = x + x^3 + x^4 + x^5
          = 0 + 1·x + 0·x^2 + 1·x^3 + 1·x^4 + 1·x^5 + 0·x^6      (degree of the polynomial: n − 1 = 6)
    ⇒ ca = ( 0 1 0 1 1 1 0 )

    cb(x) = ub(x) · g(x) = (1 + x^2) · (1 + x + x^3)
          = 1·(1 + x + x^3) + x^2·(1 + x + x^3)
          = (1 + x + x^3) + (x^2 + x^3 + x^5)
          = 1 + x + x^2 + x^5
          = 1 + 1·x + 1·x^2 + 0·x^3 + 0·x^4 + 1·x^5 + 0·x^6
    ⇒ cb = ( 1 1 1 0 0 1 0 )

Non-systematic coding:  c(x) = u(x) · g(x)
10.4 Systematic coding:

    1.) Multiply ua(x) by x^(n−k)    (n − k = degree of the generator polynomial)
    2.) Perform a polynomial division by g(x):
        ( ua(x) · x^(n−k) ) : g(x) = q(x) + r(x)/g(x)
    3.) Add the remainder r(x):
        codeword:  ca(x) = ua(x) · x^(n−k) + r(x)
        ⇒ g(x) is a divisor of c(x)
        ⇒ ca(x) ≙ ca = ( r | ua )

    ua = ( 0 1 1 0 ):
    1.) x^(n−k) = x^(7−4) = x^3
        ua(x) · x^3 = (x + x^2) · x^3 = x^4 + x^5
    2.) (x^5 + x^4) : (x^3 + x + 1) = x^2 + x + 1 + 1/(x^3 + x + 1)
        (x^5       + x^3 + x^2)                 ... x^2 · (x^3 + x + 1)
        -----------------------
              x^4 + x^3 + x^2
             (x^4       + x^2 + x)              ... x · (x^3 + x + 1)
        -----------------------
                    x^3       + x
                   (x^3       + x + 1)          ... 1 · (x^3 + x + 1)
        -----------------------
                                  1  = r(x)
    3.) ca,s(x) = ua(x)·x^3 + r(x) = 1 + x^4 + x^5
                = 1 + 0·x + 0·x^2 + 0·x^3 + 1·x^4 + 1·x^5 + 0·x^6
        ⇒ ca,s = ( 1 0 0 | 0 1 1 0 )      ( r(x) | ua )

    ub = ( 1 0 1 0 ):
    1.) ub(x) · x^(n−k) = (1 + x^2) · x^3 = x^3 + x^5
    2.) (x^5 + x^3) : (x^3 + x + 1) = x^2 + x^2/(x^3 + x + 1)
        (x^5 + x^3 + x^2)
        -----------------
                      x^2  = r(x)
    3.) cb,s(x) = ub(x)·x^3 + r(x) = x^2 + x^3 + x^5
                = 0 + 0·x + 1·x^2 + 1·x^3 + 0·x^4 + 1·x^5 + 0·x^6
        ⇒ cb,s = ( 0 0 1 | 1 0 1 0 )      ( r(x) | ub )
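The GF(2) polynomial division used for the systematic encoding can be done on coefficient lists. A minimal sketch (coefficients ordered from x^0 upwards, as in the text; function names chosen for illustration):

```python
def gf2_div_remainder(dividend, divisor):
    """Remainder of the GF(2) polynomial division, coefficients from x^0 upwards."""
    rem = dividend[:]
    for i in range(len(rem) - 1, len(divisor) - 2, -1):   # from the highest degree down
        if rem[i]:
            shift = i - (len(divisor) - 1)
            for j, g in enumerate(divisor):
                rem[shift + j] ^= g
    return rem[:len(divisor) - 1]

def systematic_encode(u, g, n):
    n_k = len(g) - 1                        # degree of g(x) = n - k
    shifted = [0] * n_k + u                 # u(x) * x^(n-k)
    r = gf2_div_remainder(shifted + [0] * (n - len(shifted)), g)
    return r + u                            # codeword = ( r | u )

g = [1, 1, 0, 1]                            # g(x) = 1 + x + x^3
print(systematic_encode([0, 1, 1, 0], g, 7))   # [1, 0, 0, 0, 1, 1, 0]
print(systematic_encode([1, 0, 1, 0], g, 7))   # [0, 0, 1, 1, 0, 1, 0]
```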
10.5 Cyclic code: the codeword length n is equal to the period r of the generator polynomial.
r is the smallest integer number that fulfills:

    x^r + 1 = 0 mod g(x)
    =⇒ (x^r + 1) : g(x) = h(x) without remainder
Determination of r by reversing the polynomial division (adding shifted copies of g(x) until only x^r + 1 remains):

    g(x) · 1    =              x^3       + x  + 1
    g(x) · x    =        x^4       + x^2 + x
    g(x) · x^2  =  x^5       + x^3 + x^2
    g(x) · x^4  =  x^7 + x^5 + x^4
    ---------------------------------------------
    sum         =  x^7                         + 1      ⇒ r = 7

Proof:  (x^7 + 1) : (x^3 + x + 1) = x^4 + x^2 + x + 1 = h(x)

    x^7 + 1
    x^7 + x^5 + x^4                      ... x^4 · g(x)
    ----------------------
          x^5 + x^4 + 1
          x^5 + x^3 + x^2                ... x^2 · g(x)
    ----------------------
                x^4 + x^3 + x^2 + 1
                x^4       + x^2 + x      ... x · g(x)
    ----------------------
                      x^3       + x + 1
                      x^3       + x + 1  ... 1 · g(x)
    ----------------------
                      0   (no remainder)

Easier: write down only the coefficients (coefficients of g(x): 1 0 1 1; target: 1 0 0 0 0 0 0 1 ≙ x^7 + 1):

              x^7 x^6 x^5 x^4 x^3 x^2 x^1 x^0
    g · x^4 :  1   0   1   1   0   0   0   0
    g · x^2 :  0   0   1   0   1   1   0   0
    g · x   :  0   0   0   1   0   1   1   0
    g · 1   :  0   0   0   0   1   0   1   1
    sum     :  1   0   0   0   0   0   0   1     ≙ x^7 + 1

Number of information bits k:

    degree of g(x) = n − k = 3
    =⇒ k = n − degree of g(x) = 7 − 3 = 4
Solution Problem 11
Galois field:
    Direct fields:    GF(p),    p = prime number
    Extended fields:  GF(p^m),  p = prime number, m = integer number, m > 1

here: direct field (p = 5, prime number)
    GF(p) = {0, 1, 2, ..., p − 1}     (valid only for direct fields)
    GF(5) = {0, 1, 2, 3, 4}

Properties of the elements of a Galois field (direct or extended):
    ai ⊕ ak = al ∈ GF        ⊕ ≙ modulo-p addition
    ai ⊗ ak = am ∈ GF        ⊗ ≙ modulo-p multiplication

Non-zero primitive elements of direct Galois fields:
Every non-zero element of the Galois field GF(p) can be written as ak = (z^x) mod p with 0 ≤ x < p − 1.
z is called a primitive element.

Property of inverse elements:
    • With respect to addition:
      a ⊕ (−a) = 0   =⇒   a + (−a) = n·p,       where (−a) is the inverse element, (−a) ∈ GF
    • With respect to multiplication:
      a ⊗ (a^−1) = 1   =⇒   a · (a^−1) = n·p + 1,   where (a^−1) is the inverse element, (a^−1) ∈ GF
11.1

      a   | 0 | 1 | 2 | 3 | 4
     −a   | 0 | 4 | 3 | 2 | 1
    a^−1  | — | 1 | 3 | 2 | 4      (the element 0 has no multiplicative inverse)

    a + (−a)   = 0 mod 5 = n · 5 mod 5,                      with (−a) ∈ GF(5)
    a · (a^−1) = 1 mod 5 = (i · 5 + 1) mod 5 = 1; 6; 11; 16; ... mod 5,   with (a^−1) ∈ GF(5)
11.2 For Reed-Solomon codes:

    n = p^m − 1 = 5^1 − 1 = 4
    t = 1 = (dmin − 1)/2    =⇒    dmin = 2t + 1 = 3

The Singleton bound dmin ≤ n − k + 1 is reached with equality for RS codes:

    =⇒ dmin = n − k + 1
    =⇒ k = n + 1 − dmin = 2

t (and thus dmin) can be chosen as long as k > 0, but a larger t means a smaller k, i.e. more redundancy.
11.3 Code word vector in the time domain:       a = (a0, a1, a2, a3)
     Code word vector in the frequency domain:  A = (A0, A1, A2, A3)

Matrix description:  A^T = M_DFT · a^T

    [ A0 ]        [ 1  1     1     1    ]   [ a0 ]
    [ A1 ]  = − · [ 1  z^−1  z^−2  z^−3 ] · [ a1 ]
    [ A2 ]        [ 1  z^−2  z^−4  z^−6 ]   [ a2 ]
    [ A3 ]        [ 1  z^−3  z^−6  z^−9 ]   [ a3 ]

with z ≙ primitive element  =⇒  z = 2   (2^4 = 16 = 1 mod 5, i.e. z^n = z^0 = 1 mod 5)

In a Galois field GF(p^m), z^i can be written as

    z^i = z^(i mod (p^m − 1))

To get rid of a negative exponent, one can add k·(p^m − 1) to the exponent without changing the result:

    z^i = z^[ (i + k·(p^m − 1)) mod (p^m − 1) ],     here p^m − 1 = 5^1 − 1 = 4
    M_DFT = − [ 1  1     1     1          = − [ 1  1    1    1          = − [ 1 1 1 1          [ 4 4 4 4
                1  z^−1  z^−2  z^−3            1  z^3  z^2  z^1              1 3 4 2      =      4 2 1 3
                1  z^−2  z^−4  z^−6            1  z^2  z^0  z^2              1 4 1 4             4 1 4 1
                1  z^−3  z^−6  z^−9 ]          1  z^1  z^2  z^3 ]            1 2 4 3 ]           4 3 1 2 ]

    (z = 2, modulo-5 calculation; the inverse elements are taken from 11.1, and −1 ≙ 4 in GF(5))
11.4 Inverse transform:  a^T = M_IDFT · A^T

    [ a0 ]   [ 1  1    1    1   ]   [ A0 ]
    [ a1 ] = [ 1  z^1  z^2  z^3 ] · [ A1 ]
    [ a2 ]   [ 1  z^2  z^4  z^6 ]   [ A2 ]
    [ a3 ]   [ 1  z^3  z^6  z^9 ]   [ A3 ]

With z^i = z^(i mod (p^m − 1)):

    M_IDFT = [ 1  1    1    1         [ 1 1 1 1
               1  z^1  z^2  z^3   =     1 2 4 3
               1  z^2  z^0  z^2         1 4 1 4
               1  z^3  z^2  z^1 ]       1 3 4 2 ]

Property of the matrices:  M_DFT · M_IDFT = M_IDFT · M_DFT = I
11.5 Coding:

Codeword in the frequency domain:

    A = ( A0 A1 | 0 0 ) = ( 2 3 0 0 )

=⇒ codeword in the time domain:

    a^T = M_IDFT · A^T
        = [ 1 1 1 1     [ 2
            1 2 4 3   ·   3
            1 4 1 4       0
            1 3 4 2 ]     0 ]
        = 2 · (1 1 1 1)^T + 3 · (1 2 4 3)^T + 0 · (1 4 1 4)^T + 0 · (1 3 4 2)^T
        = ( 5 8 14 11 )^T  mod 5  =  ( 0 3 4 1 )^T

    =⇒ transmitted codeword:  a = ( 0 3 4 1 )
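The transform matrices over GF(5) and the encoding step can be checked with integer arithmetic modulo 5. A small sketch (z = 2 as primitive element, as above):

```python
p, n, z = 5, 4, 2

# M_IDFT[i][j] = z^(i*j) mod p ;  M_DFT[i][j] = -z^(-i*j) mod p  (with -1 = p-1 = 4)
M_idft = [[pow(z, (i * j) % (p - 1), p) for j in range(n)] for i in range(n)]
M_dft = [[(p - 1) * pow(z, (-i * j) % (p - 1), p) % p for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) % p for i in range(n)]

A = [2, 3, 0, 0]                  # codeword in the frequency domain (info + parity zeros)
a = matvec(M_idft, A)             # codeword in the time domain
print(a)                          # [0, 3, 4, 1]
print(matvec(M_dft, a))           # back-transform as in 11.6: [2, 3, 0, 0]
```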
11.6 Check by transforming back into the frequency domain:

    A^T = M_DFT · a^T
        = [ 4 4 4 4     [ 0
            4 2 1 3   ·   3
            4 1 4 1       4
            4 3 1 2 ]     1 ]
        = 0 · (4 4 4 4)^T + 3 · (4 2 1 3)^T + 4 · (4 1 4 1)^T + 1 · (4 3 1 2)^T
        = ( 32 13 20 15 )^T  mod 5  =  ( 2 3 0 0 )^T
Problem 12

Considered is:  RS(4, 2) over GF(5)
General:        RS(n, k) over GF(p^m)
(modulo pm = 5 operation)
r is the received vector in the time domain
=⇒ r = ( 0 3 4 1 ) + ( 0 0 3 0 )
= ( 0 3 7 1 ) (without modulo 5 operation)
= (0321)
Transforming r in the frequency domain (DFT)
r b r R
RT = MDFT · rT


4 4 4 4
 4 2 1 3 

= 
 4 1 4 1 
4 3 1 2
{z
}
|
MDFT , determined
in problem 8
 
4
 1 

= 
 2 
3
=⇒ R = ( 4 1 ... 2 3 )

 
0
0+2+3+4
 3   0+1+2+3
 
·
 2 = 0+3+3+1
1
0+4+2+2
Without error:
R =
.
( A0 A1 .. 0 0 )
| {z }
| {z }
information
“parity frequencies”
word
length
length k
n−k
Here (with error):
.
R = ( 4 1 ..
2 3)
|{z}
S = ( S0 S1 ) = ( 2 3 )
| {z }
n−k=2
General: R = A + E
A=
ˆ Codeword in the frequency domain
A consists of 0 in the last (n − k) digits
=⇒ S consists of the last (n − k) digits of R
=
ˆ the last (n − k) digits of E
Date: 12. Juli 2011




    =⇒ S = 0  =⇒ error-free codeword
       S ≠ 0  =⇒ erroneous received word
12.2 Error position polynomial

time domain:  c(x) ○—● C(x)
    ci · ei = 0   =⇒   ci = 0 if an error occurs at position i (ei ≠ 0)

frequency domain:
    C(x) · E(x) = 0 mod (x^n − 1)
    ⇒ x = z^i (z is the primitive element) is a zero of C(x) if an error occurs at position i
    ⇒ C(x) = Π_{i, ei ≠ 0} ( x − z^i )

Determination of the coefficients of C(x) = C0 + ... + Ce·x^e   (degree e, e = number of errors)

    t = 1 error can be corrected; assume e ≤ t.

Matrix representation:

    [ Se       ···  S0       ]   [ C0 ]    [ 0 ]
    [ ...           ...      ] · [ ...]  = [ ...]        (*)
    [ S2t−1    ···  S2t−e−1  ]   [ Ce ]    [ 0 ]
    (2t − e rows, e + 1 columns)

The equation is fulfilled for the correct value of e, which is unknown.
=⇒ Start with the most probable case e = 1:

    index of the left element in the last row of the matrix:   2t − 1 = 1
    index of the right element in the last row of the matrix:  2t − e − 1 = 0

    =⇒ ( S1 S0 ) · ( C0 C1 )^T = 0
    =⇒ S1·C0 + S0·C1 = 0
    =⇒ 3·C0 + 2·C1 = 0        (under-determined: 1 equation, 2 variables)

    =⇒ Normalize C(x), e.g. C1 = 1 (C(x) can always be normalized, only its zeros are important):

    =⇒ 3·C0 + 2 = 0        | −2 ≙ +3 (inverse element)
    =⇒ 3·C0 = 3            | ·3^−1 ≙ ·2 (inverse element)
    =⇒ C0 = 6 mod 5 = 1
    =⇒ C(x) = 1 + 1·x

If the matrix equation (*) is not solvable for e = 1  =⇒  try e = 2, 3, 4, ..., t.
If the matrix equation (*) is not solvable at all  =⇒  no correction is possible.

Position of the errors:  C(x = z^i) = 0  ⇐⇒  error at position i    (i = 0 ... n − 1, z = 2 is the primitive element)

    =⇒ C(z^0 = 1) = 2 ≠ 0
       C(z^1 = 2) = 3 ≠ 0
       C(z^2 = 4) [= 5] = 0 (modulo 5)    =⇒ error at position i = 2
       C(z^3 = 3) = 4 ≠ 0    (must hold, since only a single error e = 1 was assumed)

The position is known, but the value of the error is still unknown.
12.3 Error vector in the frequency domain:

    r = a + e    ○—●    R = A + E

A is 0 in the last (n − k) digits
    =⇒ R and E are equal in the last (n − k) digits (these equal the syndrome S, see above)

    =⇒ E = ( E0 E1 | E2 E3 ) = ( E0 E1 | S0 S1 ) = ( E0 E1 | 2 3 )

Recursive determination of the remaining Ej (j = 0 ... k − 1):

    Ej = −(C0^−1) · Σ_{i=1}^{e} C_{i mod n} · E_{(j−i) mod n}

    =⇒ E0 = −(C0^−1) · C1 · E_{(0−1) mod 4}
          = −(1^−1) · C1 · E_{−1 mod 4}
          = −1 · C1 · E3               ( x mod y = (x + y) mod y )
          = 4 · 1 · 3   [= 12]
          = 2

       E1 = −(C0^−1) · C1 · E_{(1−1) mod 4}
          = 4 · 1 · E0 = 4 · 1 · 2
          = 3

    =⇒ E = ( 2 3 2 3 )
12.4 Corrected codeword (modulo-5 operation):

    Â = R "−" E = ( 4 1 2 3 ) − ( 2 3 2 3 )
                = ( 4 1 2 3 ) + ( 3 2 3 2 )
                = ( 7 3 5 5 )
                = ( 2 3 0 0 )
Problem 13

Denotation of convolutional codes:

Information blocks (length k):
    ur   = (ur,1, ur,2, ur,3, ..., ur,k)          : actual block
    ur−1 = (ur−1,1, ur−1,2, ur−1,3, ..., ur−1,k)  : previous block
    ur−2 = (ur−2,1, ur−2,2, ur−2,3, ..., ur−2,k)  : block before the previous block
    ...

Code block (length n):
    ar = (ar,1, ar,2, ar,3, ..., ar,n)

Memory length:
    m is the memory length ≙ number of information blocks of the past that are used to create ar.
13.1 Here:
    • Information blocks consist of only one bit:  ur = (ur,1), ur−1 = (ur−1,1)     =⇒ k = 1
    • Only one information block of the past (ur−1) is used to create ar            =⇒ m = 1
    • A code block consists of two bits:  ar = (ar,1, ar,2)                         =⇒ n = 2
    • Code rate:  RC = k/n = 1/2
13.2 Meaning of the states: content of the memory (here: ur−1,1).

Obtaining the state diagram from the state table:
The state table shows the output and the next state of the coder for every possible combination of input and memory content (state) of the coder.

    Input  | Actual state | Output                                   | Next state
    ur,1   | ur−1,1       | ar,1 = ur,1 ⊕ ur−1,1   ar,2 = ur−1,1     | ur−1,1 = ur,1
    0      | 0            | 0                      0                 | 0
    0      | 1            | 1                      1                 | 0
    1      | 0            | 1                      0                 | 1
    1      | 1            | 0                      1                 | 1

From this table the state diagram can be directly obtained.
[State diagram, labels: input (output), states ≙ memory content:
    state 0:  input 0 → output 00, stay in state 0;   input 1 → output 10, go to state 1
    state 1:  input 0 → output 11, go to state 0;     input 1 → output 01, stay in state 1 ]
For example: Consider the highlighted values: they are obtained from the 2nd row
of the state table
If the actual state of the encoder is ’1’ and the input is ’0’ then the output is ’11’
and the next state will be ’0’.
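The state table translates directly into a small encoder. A minimal sketch (the state is the memory content ur−1,1; the function name is chosen freely):

```python
def conv_encode(bits):
    state = 0                      # the starting state is always the zero state
    out = []
    for u in bits:
        a1 = u ^ state             # a_{r,1} = u_r XOR u_{r-1}
        a2 = state                 # a_{r,2} = u_{r-1}
        out.extend([a1, a2])
        state = u                  # next state = current input
    return out

print(conv_encode([1, 0, 1, 1, 0]))   # [1,0, 1,1, 1,0, 0,1, 1,1]  -> compare with 13.5
```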
13.3 The Trellis diagram describes also the states of the encoder, but with a temporal
approach.
• The starting state is always the zero state.
• The reaction of the encoder (output, next state) at the actual state to every
possible input is determined.
• Then the next state is considered and again the reaction of the encoder (output,
next state) at the actual state to every possible input is determined.
• And so on . . .
[Trellis diagram over the time steps 0 ... 5: two states (0 and 1), starting in the zero state. From state 0 the transitions are 0(00) back to state 0 and 1(10) to state 1; from state 1 the transitions are 0(11) to state 0 and 1(01) back to state 1.]
13.4 Fundamental path:
    • starting at the zero state and ending at the zero state, without being in the zero state in between
    • not the all-zero-state path
    • the path has to be chosen such that the number of '1's at the output is minimized

So, the fundamental path is the path 0 → 1 → 0 (input 1 with output 10, then input 0 with output 11), highlighted in the trellis diagram.

Every other path that starts at the zero state and ends at the zero state (except the all-zero-state path) produces a higher number of '1's at the output.
The free distance df is the number of '1's at the output along the fundamental path, so it is the distance to the all-zero sequence:

    =⇒ df = 3

The free distance is a measure for the capability to correct errors: the higher df, the higher the capability to correct errors.
13.5 Every segment is terminated with a single '0'. So, the sequence that has to be encoded is:

    u = 1 0 1 1 | 0      (the last '0' is the termination)

[Trellis diagram with the path for this input sequence highlighted: state sequence 0 → 1 → 0 → 1 → 1 → 0.]

    terminated uncoded sequence:  1    0    1    1    0
    encoded sequence:             10   11   10   01   11
13.6 For a non-terminated encoder:

    RC = k/n = 1/2 ≙ 50 %

Terminated after T = 4 bits with m = 1 additional '0':

    RC,T = (number of info bits)/(number of code bits) = (T · k)/((T + m) · n) = 4/10 ≙ 40 %
13.7 Decoding: Viterbi algorithm
Maximum-likelihood decision: which sequence is the most likely transmitted codeword if the sequence a is received?
    • Start at the zero state (left) of the trellis diagram.
    • For every possible transition to the next stage (every arrow in the trellis diagram), calculate the number of matches between the output bits and the corresponding received bits.
    • For every state on every stage only the maximum number of matches is kept for the further calculations. All other numbers are canceled (and thus also the paths that belong to these numbers).
    • Finally, the path with the highest number of matches is selected and retraced to the beginning of the trellis diagram.

Using this scheme yields:
[Trellis diagram with the accumulated match metrics for the received sequence a = 10 10 10 00 11. The surviving path with the highest number of matches (8 out of 10) corresponds to the state sequence 0 → 1 → 0 → 1 → 1 → 0.]
So, the received sequence is decoded:

    decoded information word:  1    0    1    1    0
    corresponding codeword:    10   11   10   01   11
    received word:             10   10   10   00   11

So, 2 bit errors are corrected, at the 4th digit and the 8th digit.
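The Viterbi decoding above can be reproduced with a compact sketch. It uses the number of matching bits as the metric (equivalently one could minimize the Hamming distance) and forces the terminated path to end in the zero state; all names are chosen for illustration.

```python
def viterbi_decode(received):
    # transitions: state -> input -> (output bits, next state), from the state table in 13.2
    trans = {0: {0: ((0, 0), 0), 1: ((1, 0), 1)},
             1: {0: ((1, 1), 0), 1: ((0, 1), 1)}}
    pairs = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    # metrics[state] = (number of matches so far, decoded input bits)
    metrics = {0: (0, []), 1: (None, None)}            # start in the zero state
    for r in pairs:
        new = {0: (None, None), 1: (None, None)}
        for s, (m, path) in metrics.items():
            if m is None:
                continue
            for u, (out, nxt) in trans[s].items():
                score = m + sum(1 for a, b in zip(out, r) if a == b)
                if new[nxt][0] is None or score > new[nxt][0]:
                    new[nxt] = (score, path + [u])      # keep only the best path per state
        metrics = new
    return metrics[0]                                   # terminated: end in the zero state

print(viterbi_decode([1, 0, 1, 0, 1, 0, 0, 0, 1, 1]))
# (8, [1, 0, 1, 1, 0])  -> 8 matching bits, decoded sequence 1 0 1 1 0
```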