TLT-5400/5406 DIGITAL TRANSMISSION, Exercise 10, Spring 2016
Problem 1.
For an (rk,k) repetition code, the code word c corresponding to a source
word b is in general given by

c = [ b b ... b ]   (b repeated r times).
So clearly, the generator matrix G for this kind of a code appears as

G = [ Ik Ik ... Ik ]   (Ik repeated r times),

where Ik is an identity matrix of dimensions k×k (then obviously c = bG is
of the desired form, right?).
In general, for a generator matrix G = [Ik | P] of a binary code, the parity
check matrix H appears as H = [PT | In–k]. So for an (rk,k) repetition code,
the matrix P is given by

P = [ Ik Ik ... Ik ]   (Ik repeated r–1 times),
and the parity check matrix H is of the form ( n–k = (r–1)k )
© Mikko Valkama / TUT
H = [ [Ik ; Ik ; ... ; Ik] | In–k ],

where the left-hand block stacks Ik vertically r–1 times, so H has
(r–1)k rows.
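These block structures are easy to check numerically. Below is a small sketch (Python with numpy, added here for illustration and not part of the original exercise) that builds G and H for a (6,2) repetition code and verifies that a code word has an all-zero syndrome:

```python
import numpy as np

def repetition_G_H(k, r):
    """Generator and parity check matrices of a binary (rk, k)
    repetition code: G = [Ik | P], H = [P^T | I_{n-k}],
    with P = [Ik Ik ... Ik] (r-1 blocks)."""
    Ik = np.eye(k, dtype=int)
    G = np.hstack([Ik] * r)
    P = np.hstack([Ik] * (r - 1))
    H = np.hstack([P.T, np.eye((r - 1) * k, dtype=int)])
    return G, H

G, H = repetition_G_H(2, 3)
b = np.array([1, 0])
c = b @ G % 2            # encode: c = bG (mod 2) -> 1 0 1 0 1 0
syndrome = c @ H.T % 2   # all-zero syndrome: c is a valid code word
```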
Next we find the minimum distance and the corresponding error correction
and detection capabilities.
For an (rk,k) repetition code, the code word is c = [ b b ... b ]
(b repeated r times). By definition, the
source word b can have all the possible source bit combinations. Then, the
minimum distance between any two code words c1 = [ b1 b1 ... b1 ] and
c2 = [ b2 b2 ... b2 ] corresponds to the situation where the source words b1 and
b2 differ by only one source bit. In this kind of situation, the distance
between c1 and c2 is clearly r.
=> the minimum distance for an (rk,k) repetition code is dmin = r.
Based on this,
• r – 1 errors can be detected, or
• ⌊(r – 1)/2⌋ errors can be corrected (⌊·⌋ denotes the floor function)
in hard decoding.
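To make the dmin = r claim concrete, here is a small sketch in plain Python (an illustration added here, not part of the original exercise) that enumerates all code words of a (6,2) repetition code and computes the minimum pairwise Hamming distance:

```python
from itertools import product

def repetition_encode(b, r):
    """Encode a source word b (a tuple of bits) with an (rk, k)
    repetition code: the code word is b repeated r times,
    i.e. c = bG with G = [Ik Ik ... Ik]."""
    return b * r

def hamming(c1, c2):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(c1, c2))

k, r = 2, 3  # a (6,2) repetition code
codewords = [repetition_encode(b, r) for b in product((0, 1), repeat=k)]
d_min = min(hamming(c1, c2)
            for i, c1 in enumerate(codewords)
            for c2 in codewords[i + 1:])
# d_min comes out as r = 3: 2 errors detectable, 1 correctable
```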
Problem 2.
For purposes of illustration, we consider a (6,2) repetition code. The source
word b = b1b2 is represented by a source word polynomial b(z) = b1 + b2z.
The corresponding code word is c = b1b2b1b2b1b2 and the polynomial c(z)
appears as c(z) = b1 + b2z + b1z2 + b2z3 + b1z4 + b2z5. In general, the
generator polynomial g(z) relates the code word and source word
polynomials as c(z) = b(z)g(z). In our case, this polynomial g(z) is then
clearly
g(z) = 1 + z2 + z4.
Proof (simple):
b(z)g(z) = (b1 + b2z)(1 + z2 + z4) = b1 + b2z + b1z2 + b2z3 + b1z4 + b2z5 = c(z)
For further illustration, the possible source and code words for a binary
(6,2) repetition code are presented in the following table.

Source word b    Code word c
00               00|00|00
01               01|01|01
10               10|10|10
11               11|11|11
The corresponding polynomial representation is given as

Source word polynomial b(z)    Code word polynomial c(z)
0                              0
z                              z + z3 + z5
1                              1 + z2 + z4
1 + z                          1 + z + z2 + z3 + z4 + z5
You can easily verify that the code word polynomials can be obtained
from the source word polynomials as c(z) = b(z)g(z) where g(z) is the
given polynomial. For example,
(1 + z)(1 + z2 + z4) = 1 + z + z2 + z3 + z4 + z5.
The results are easily generalized. For an (rk,k) repetition code, the
generator polynomial is in general given by
g(z) = 1 + zk + z2k + ... + z(r–1)k.
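The relation c(z) = b(z)g(z) can be checked mechanically with polynomial multiplication over GF(2). The sketch below (plain Python, added for illustration) represents a polynomial as a coefficient list, with the list index giving the power of z:

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists
    (index = power of z); coefficient addition is mod 2, i.e. XOR."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 0, 1, 0, 1]     # g(z) = 1 + z2 + z4 for the (6,2) code
b = [1, 1]              # b(z) = 1 + z, i.e. source word 11
c = gf2_poly_mul(b, g)  # -> [1, 1, 1, 1, 1, 1], i.e. c(z) = 1 + z + ... + z5
```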
Problem 3.
A linear binary cyclic (6,k) code which contains the given code word
c1 = 110110
will also contain the code words (cyclic shifts of c1)
c2 = 011011 and
c3 = 101101.
This is, indeed, because the code is defined to be cyclic. Also, because the
code is linear, all linear combinations of the code words must belong
to the code. So
c1 + c2 = 101101 = c3   “OK”
c1 + c3 = 011011 = c2   “OK”
c2 + c3 = 110110 = c1   “OK”
c1 + c2 + c3 = 000000   “OK”
Then, the set of four words {000000, 101101, 011011, 110110} constitutes a
linear, cyclic (6,2) code.
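The two defining properties can also be verified mechanically. A short sketch (plain Python, added for illustration) checks that the four-word set is closed under both cyclic shifts and bitwise modulo-2 addition:

```python
def cyclic_shifts(word):
    """All cyclic shifts of a bit tuple."""
    return {word[i:] + word[:i] for i in range(len(word))}

code = {(0, 0, 0, 0, 0, 0), (1, 0, 1, 1, 0, 1),
        (0, 1, 1, 0, 1, 1), (1, 1, 0, 1, 1, 0)}

# Cyclic: every shift of a code word is again a code word.
cyclic_ok = all(s in code for w in code for s in cyclic_shifts(w))
# Linear: the modulo-2 sum of any two code words is again a code word.
linear_ok = all(tuple(a ^ b for a, b in zip(w1, w2)) in code
                for w1 in code for w2 in code)
```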
Problem 4.
The coder is reproduced below.

[Figure: rate-1/2 convolutional coder with input Bk, a shift register of two
delay elements (D), and outputs Ck(1) and Ck(2)]
The operation of the coder can be represented by a finite state machine.
The number of state values depends on the memory of the coder and on the
size of the symbol alphabet. Here we have a binary coder, and the length
of the memory is two. So the possible values of the state {Bk-1, Bk-2} are
{0,0}, {0,1}, {1,0} and {1,1}. The state transition diagram is presented
below; the arcs are labeled with input-output data as (Bk, Ck(1)Ck(2)).
[State transition diagram: states 0,0; 0,1; 1,0; 1,1, with the eight arcs
labeled (0,00), (1,11), (0,11), (1,00), (0,01), (1,10), (0,10), (1,01)]
The same information can be represented by a one-stage trellis diagram.
This is given below for our coder. The arcs are again labeled with
input-output data as (Bk, Ck(1)Ck(2)).
[One-stage trellis diagram: states 0,0; 0,1; 1,0; 1,1 on both sides,
connected by the same eight labeled arcs as in the state transition diagram]
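The state machine can also be written out in code. Note that the exact output taps are not legible in the reproduced figure; the taps Ck(1) = Bk + Bk–2 and Ck(2) = Bk + Bk–1 + Bk–2 (modulo 2) used below are an assumption that is consistent with the transition labels above:

```python
def coder_step(state, bit):
    """One transition of the coder state machine. state = (B_{k-1}, B_{k-2});
    the taps are an assumption: Ck(1) = Bk ^ B_{k-2},
    Ck(2) = Bk ^ B_{k-1} ^ B_{k-2}."""
    b1, b2 = state
    output = (bit ^ b2, bit ^ b1 ^ b2)
    return (bit, b1), output

# Enumerate the eight arcs of the state transition diagram:
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for bit in (0, 1):
        nxt, (c1, c2) = coder_step(state, bit)
        print(f"{state} --({bit},{c1}{c2})--> {nxt}")
```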
a) What follows is the hard-decoder trellis-diagram for the given observed sequence 10 01 11 01 00 01 with
transition weights (Hamming distances between the observation and the corresponding error-free outputs
of the coder) labeled.
Then, the decoding corresponds to finding the shortest (lowest cumulative weight) path through the trellis.
Notice the similarity between this decoding operation and the sequence detection operation discussed in
the earlier exercises. Actually, they both utilize the maximum-likelihood sequence detection principle of
finding the sequence that is closest to the observed sequence. In decoding, the measure of closeness is
either the Euclidean distance (soft decoding) or, as in this problem, the Hamming distance (hard
decoding). Notice also the similarity of convolutional coders and ISI channels: they both introduce some
redundancy into their output sequence (this is really true also for ISI channels).
The shortest path can again be obtained iteratively by the Viterbi algorithm. Also, the conventions of
all-zero initial and final states hold. The surviving paths at each instant are presented in the following pages.
The complete trellis with transition weights:

[Six-stage trellis for the observed sequence 10 01 11 01 00 01; each branch
is weighted by the Hamming distance between the observed pair and the
corresponding error-free coder output]
Survivors at each instant:

[Survivor-path trellis diagrams after each received pair, showing the
cumulative path metric of each surviving path stage by stage]
Final surviving path for this example:

[Trellis showing the single surviving path 0,0 → 1,0 → 0,1 → 0,0 → 0,0 →
0,0 → 0,0, with cumulative weight 3]
This path corresponds to the input sequence (source sequence) {B0, B1, B2, B3, B4, B5} = {1, 0, 0, 0, 0, 0}.
Due to the assumption of an all-zero final state, B4 and B5 are actually constrained to be zero for this coder.
Thus, the actual useful decoded data is 1 0 0 0.
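The decoding can be reproduced with a compact hard-decision Viterbi sketch (plain Python, added here as a cross-check; the coder taps Ck(1) = Bk + Bk–2 and Ck(2) = Bk + Bk–1 + Bk–2, modulo 2, are an assumption read off the trellis labels):

```python
def coder_step(state, bit):
    """state = (B_{k-1}, B_{k-2}); assumed taps Ck(1) = Bk ^ B_{k-2},
    Ck(2) = Bk ^ B_{k-1} ^ B_{k-2}."""
    b1, b2 = state
    return (bit ^ b2, bit ^ b1 ^ b2), (bit, b1)

def viterbi_hard(observed):
    """Hard-decision Viterbi over the 4-state trellis with all-zero
    initial and final states; branch weight = Hamming distance."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}
    for obs in observed:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                out, nxt = coder_step(s, bit)
                d = (out[0] != obs[0]) + (out[1] != obs[1])
                if metric[s] + d < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + d
                    new_path[nxt] = path[s] + [bit]
        metric, path = new_metric, new_path
    return path[(0, 0)], metric[(0, 0)]  # force the all-zero final state

obs = [(1, 0), (0, 1), (1, 1), (0, 1), (0, 0), (0, 1)]  # 10 01 11 01 00 01
bits, weight = viterbi_hard(obs)
# bits == [1, 0, 0, 0, 0, 0] with cumulative weight 3, as in the trellis
```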
b) The same kind of calculations is needed for the second observed sequence 11 10 01 01 10 11. You can carry
out the Viterbi algorithm yourself (try!). Here, we only present the complete trellis and the final
surviving path.
The complete trellis:

[Six-stage trellis for the observed sequence 11 10 01 01 10 11, with branch
Hamming weights labeled]
Final surviving path:

[Trellis showing the surviving path 0,0 → 1,0 → 1,1 → 1,1 → 1,1 → 0,1 → 0,0]
This path corresponds to the input sequence (source sequence) {B0, B1, B2, B3, B4, B5} = {1, 1, 1, 1, 0, 0}.
Since B4 and B5 were constrained to be zero, the actual useful decoded data is now 1 1 1 1.
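As a quick consistency check (again assuming the taps Ck(1) = Bk + Bk–2 and Ck(2) = Bk + Bk–1 + Bk–2, modulo 2), encoding the decoded sequence 1 1 1 1 0 0 reproduces the second observed sequence exactly, i.e. the final path weight is zero:

```python
def encode(bits):
    """Run the coder from the all-zero state with the assumed taps
    Ck(1) = Bk ^ B_{k-2}, Ck(2) = Bk ^ B_{k-1} ^ B_{k-2}."""
    b1 = b2 = 0
    out = []
    for b in bits:
        out.append((b ^ b2, b ^ b1 ^ b2))
        b1, b2 = b, b1
    return out

coded = encode([1, 1, 1, 1, 0, 0])
# coded matches the observation 11 10 01 01 10 11 pair by pair
```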