# Linear Block Codes

CHAPTER 3 - LINEAR BLOCK CODES
In this chapter we introduce basic concepts of block codes. For ease of code synthesis and
implementation, we restrict our attention to a subclass of the class of all block codes, the linear block
codes. Because information in most current digital data communication and storage systems is coded
in the binary digits 0 and 1, we discuss only the linear block codes with symbols from the binary field
GF(2). It is straightforward to generalize the theory developed for the binary codes to codes with
symbols from a non-binary field.
First, linear block codes are defined and described in terms of generator and parity-check matrices.
The parity-check equations for a systematic code are derived.
Encoding of linear block codes is discussed. In Section 3.2 the concept of syndrome is introduced. The
use of syndrome for error detection and correction is discussed. In Sections 3.3 and 3.4 we define the
minimum distance of a block code and show that the random-error-detecting and random-error-correcting capabilities of a code are determined by its minimum distance. Probabilities of a decoding error are discussed. In Section 3.5 the standard array and its application to the decoding of linear
block codes are presented. A general decoder based on the syndrome decoding scheme is given.
References [1] through [6] contain excellent treatments of linear block codes.
3.1 Introduction to Linear Block Codes
We assume that the output of an information source is a sequence of the binary digits 0 and 1. In block
coding, this binary information sequence is segmented into message blocks of fixed length; each
message block, denoted by u, consists of k information digits. There are a total of 2^k distinct messages. The encoder, according to certain rules, transforms each input message u into a binary n-tuple v with n > k. This binary n-tuple v is referred to as the code-word (or code vector) of the message u. Therefore, corresponding to the 2^k possible messages, there are 2^k code-words. This set of 2^k code-words is called a block code. For a block code to be useful, the 2^k code-words must be distinct. Therefore, there should be a one-to-one correspondence between a message u and its code-word v.
For a block code with 2^k code-words and length n, unless it has a certain special structure, the encoding apparatus will be prohibitively complex for large k and n, since it has to store the 2^k code-words of
length n in a dictionary. Therefore, we restrict our attention to block codes that can be mechanized in a
practical manner.
A desirable structure for a block code to possess is linearity, which greatly reduces the encoding
complexity, as we shall see.
DEFINITION 3.1 A block code of length n and 2^k code-words is called a linear (n, k) code if and only if its 2^k code-words form a k-dimensional subspace of the vector space of all the n-tuples over the field GF(2).
In fact, a binary block code is linear if and only if the modulo-2 sum of two code-words is also a code-word. The block code given in Table 3.1 is a (7, 4) linear code. One can easily check that the sum of any two code-words in this code is also a code-word.
Because an (n, k) linear code C is a k-dimensional subspace of the vector space V_n of all the binary n-tuples, it is possible to find k linearly independent code-words, g0, g1, …, g_{k-1}, in C such that every code-word v in C is a linear combination of these k code-words; that is,

v = u0·g0 + u1·g1 + … + u_{k-1}·g_{k-1},   (3.1)

where u_i = 0 or 1 for 0 ≤ i < k. We arrange these k linearly independent code-words as the rows of a k × n matrix G as follows:

G = [ g0 ]
    [ g1 ]
    [ …  ]
    [ g_{k-1} ],   (3.2)

where g_i = (g_{i0}, g_{i1}, …, g_{i,n-1}) for 0 ≤ i < k. If u = (u0, u1, …, u_{k-1}) is the message to be encoded, the corresponding code-word can be given as follows:

v = u·G = u0·g0 + u1·g1 + … + u_{k-1}·g_{k-1}.   (3.3)
106728959.doc
Friday, February 12, 2016
1
Clearly, the rows of G generate (or span) the (n, k) linear code C. For this reason, the matrix G is called a generator matrix for C. Note that any k linearly independent code-words of an (n, k) linear code can be used to form a generator matrix for the code. It follows from (3.3) that an (n, k) linear code is completely specified by the k rows of a generator matrix G. Therefore, the encoder has only to store the k rows of G and to form a linear combination of these k rows based on the input message u = (u0, u1, …, u_{k-1}).
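Encoding by (3.3) can be sketched in a few lines of Python. The generator matrix G used here is an assumed instance for the (7, 4) code of Table 3.1 (the matrix itself is not reproduced in this text), so treat it as illustrative:

```python
def encode(u, G):
    """Encode message u as v = u0*g0 + u1*g1 + ... + u_{k-1}*g_{k-1} over GF(2), as in (3.3)."""
    v = [0] * len(G[0])
    for ui, g in zip(u, G):
        if ui:  # add row g_i into v, component-wise modulo 2
            v = [vj ^ gj for vj, gj in zip(v, g)]
    return v

# Assumed systematic generator for the (7, 4) example code.
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

print(encode([1, 1, 0, 1], G))  # -> [0, 0, 0, 1, 1, 0, 1]
```

Note that the encoder needs to store only the k rows of G rather than a dictionary of all 2^k code-words.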
EXAMPLE 3.1 The (7, 4) linear code given in Table 3.1 has the following matrix as a generator matrix:
TABLE 3.1: Linear block code with k = 4 and n = 7.
Messages Code-words
If u = (1 1 0 1) is the message to be encoded, its corresponding code-word, according to (3.3), will be
A desirable property for a linear block code to possess is the systematic structure of the code-words, as
shown in Figure 3.1, in which a code-word is divided into two parts, the message part and the
redundant checking part. The message part consists of k unaltered information (or message) digits, and
the redundant checking part consists of n - k parity-check digits, which are linear sums of the
information digits. A linear block code with this structure is referred to as a linear systematic block
code. The (7, 4) code given in Table 3.1 is a linear systematic block code: the rightmost four digits of
each code-word are identical to the corresponding information digits.
FIGURE 3.1: Systematic format of a code-word.
A linear systematic (n, k) code is completely specified by a k × n matrix G of the following form:

G = [P  I_k],   (3.4)

where P is a k × (n - k) matrix with entries p_ij = 0 or 1, and I_k denotes the k × k identity matrix. Let u = (u0, u1, …, u_{k-1}) be the message to be encoded. The corresponding code-word is

v = (v0, v1, …, v_{n-1}) = u·G.   (3.5)

It follows from (3.4) and (3.5) that the components of v are

v_{n-k+i} = u_i for 0 ≤ i < k,   (3.6a)

and

v_j = u0·p_{0j} + u1·p_{1j} + … + u_{k-1}·p_{k-1,j}   (3.6b)

for 0 ≤ j < n - k. Equation (3.6a) shows that the rightmost k digits of a code-word v are identical to the information digits u0, u1, …, u_{k-1} to be encoded, and (3.6b) shows that the leftmost n - k redundant digits are linear sums of the information digits. The n - k equations given by (3.6b) are called parity-check equations of the code.
EXAMPLE 3.2 The matrix G given in Example 3.1 is in systematic form. Let u = (u0, u1, u2, u3) be the message to be encoded, and let v = (v0, v1, v2, v3, v4, v5, v6) be the corresponding code-word. Then, v = u·G.
By matrix multiplication, we obtain the following digits of the code-word v:
The code-word corresponding to the message (1 0 1 1) is (1 0 0 1 0 1 1).
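In code, the systematic encoder is just the parity-check equations followed by a copy of the message. The three equations below are assumed (the worked algebra is not reproduced in this text), but they are consistent with the code-word (1 0 0 1 0 1 1) for message (1 0 1 1) stated above:

```python
def encode_systematic(u):
    """Systematic (7, 4) encoding: parity digits v0..v2 from the (assumed)
    parity-check equations (3.6b), message digits v3..v6 copied unchanged (3.6a)."""
    u0, u1, u2, u3 = u
    v0 = u0 ^ u2 ^ u3  # ^ is addition modulo 2
    v1 = u0 ^ u1 ^ u2
    v2 = u1 ^ u2 ^ u3
    return [v0, v1, v2, u0, u1, u2, u3]

print(encode_systematic([1, 0, 1, 1]))  # -> [1, 0, 0, 1, 0, 1, 1]
```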
There is another useful matrix associated with every linear block code. As stated in Chapter 2, for any k × n matrix G with k linearly independent rows, there exists an (n - k) × n matrix H with n - k linearly independent rows such that any vector in the row space of G is orthogonal to the rows of H, and any vector that is orthogonal to the rows of H is in the row space of G. Hence, we can describe the (n, k) linear code C generated by G in an alternative way as follows: An n-tuple v is a code-word in the code C generated by G if and only if v·H^T = 0.
The code is said to be the null space of H. This matrix H is called a parity-check matrix of the code. The 2^{n-k} linear combinations of the rows of matrix H form an (n, n - k) linear code C_d. This code is the null space of the (n, k) linear code C generated by matrix G (i.e., for any v ∈ C and any w ∈ C_d, v·w = 0). C_d is called the dual code of C. Therefore, a parity-check matrix for a linear code C is a
generator matrix for its dual code C_d. If the generator matrix of an (n, k) linear code is in the systematic form of (3.4), the parity-check matrix may take the following form:

H = [I_{n-k}  P^T],   (3.7)

where P^T is the transpose of the matrix P. Let h_j be the jth row of H. We can readily check that the inner product of the ith row of G given by (3.4) and the jth row of H given by (3.7) is

g_i·h_j = p_ij + p_ij = 0

for 0 ≤ i < k and 0 ≤ j < n - k. This implies that G·H^T = 0. Also, the n - k rows of H are linearly independent. Therefore, the H matrix of (3.7) is a parity-check matrix of the (n, k) linear code generated by the matrix G of (3.4).
The parity-check equations given by (3.6b) also can be obtained from the parity-check matrix H of (3.7). Let u = (u0, u1, …, u_{k-1}) be the message to be encoded. In systematic form the corresponding code-word will be

v = (v0, v1, …, v_{n-k-1}, u0, u1, …, u_{k-1}).

Using the fact that v·H^T = 0, we obtain

v_j + u0·p_{0j} + u1·p_{1j} + … + u_{k-1}·p_{k-1,j} = 0   (3.8)

for 0 ≤ j < n - k. Rearranging the equations of (3.8), we obtain the same parity-check equations of (3.6b). Therefore, an (n, k) linear code is completely specified by its parity-check matrix.
EXAMPLE 3.3 Consider the generator matrix of the (7, 4) linear code given in Example 3.1. The corresponding parity-check matrix is
At this point we summarize the foregoing results: For any (n, k) linear block code C there exists a k × n matrix G whose row space gives C. Furthermore, there exists an (n - k) × n matrix H such that an n-tuple v is a code-word in C if and only if v·H^T = 0. If G is of the form given by (3.4), then H may take the form given by (3.7), and vice versa.
Based on the equations of (3.6a) and (3.6b), the encoding circuit for an (n, k) linear systematic code
can easily be implemented. The encoding circuit is shown in Figure 3.2, where a box denotes a shift-register stage (e.g., a flip-flop), ⊕ denotes a modulo-2 adder, and a dashed line denotes a connection if p_ij = 1 and no connection if p_ij = 0. The encoding operation is very simple. The message u = (u0, u1, …, u_{k-1}) to be encoded is shifted into the message register and simultaneously into the channel.
As soon as the entire message has entered the message register, the n - k parity-check digits are formed at the outputs of the n - k modulo-2 adders. These parity-check digits are then serialized and shifted
into the channel. We see that the complexity of the encoding circuit is linearly proportional to the
block length of the code. The encoding circuit for the (7, 4) code given in Table 3.1 is shown in Figure
3.3, where the connection is based on the parity-check equations given in Example 3.2.
FIGURE 3.2: Encoding circuit for a linear systematic (n, k) code.
FIGURE 3.3: Encoding circuit for the (7, 4) systematic code given in Table 3.1.
3.2 Syndrome and Error Detection
Consider an (n, k) linear code with generator matrix G and parity-check matrix H. Let v = (v0, v1, …, v_{n-1}) be a code-word that was transmitted over a noisy channel. Let r = (r0, r1, …, r_{n-1}) be the received vector at the output of the channel. Because of the channel noise, r may be different from v. The vector sum

e = r + v = (e0, e1, …, e_{n-1})   (3.9)

is an n-tuple, where e_i = 1 for r_i ≠ v_i, and e_i = 0 for r_i = v_i. This n-tuple is called the error vector (or error pattern), which simply displays the positions where the received vector r differs from the transmitted code-word v. The 1's in e are the transmission errors caused by the channel noise. It follows from (3.9) that the received vector r is the vector sum of the transmitted code-word and the error vector; that is,

r = v + e.
Of course, the receiver does not know either v or e. On receiving r, the decoder must first determine whether r contains transmission errors. If the presence of errors is detected, the decoder will either take actions to locate the errors and correct them (FEC) or request a retransmission of v (ARQ).
When r is received, the decoder computes the following (n - k)-tuple:

s = r·H^T = (s0, s1, …, s_{n-k-1}),   (3.10)

which is called the syndrome of r. Then, s = 0 if and only if r is a code-word, and s ≠ 0 if and only if r is not a code-word. Therefore, when s ≠ 0, we know that r is not a code-word and the presence of errors has been detected. When s = 0, r is a code-word, and the receiver accepts it as the transmitted code-word. It is possible that the errors in certain error vectors are not detectable (i.e., r contains errors but s = r·H^T = 0).
This happens when the error pattern e is identical to a nonzero code-word. In this event, r is the sum of two code-words, which is a code-word, and consequently r·H^T = 0. Error patterns of this kind are called undetectable error patterns. Because there are 2^k - 1 nonzero code-words, there are 2^k - 1 undetectable error patterns.
When an undetectable error pattern occurs, the decoder makes a decoding error. In a later section of
this chapter we derive the probability of an undetected error for a BSC and show that this error
probability can be made very small.
Based on (3.7) and (3.10), the syndrome digits are as follows:

s_j = r_j + r_{n-k}·p_{0j} + r_{n-k+1}·p_{1j} + … + r_{n-1}·p_{k-1,j},  0 ≤ j < n - k.   (3.11)

If we examine the preceding equations carefully, we find that the syndrome s is simply the vector sum of the received parity digits (r0, r1, …, r_{n-k-1}) and the parity-check digits recomputed from the received information digits (r_{n-k}, r_{n-k+1}, …, r_{n-1}). Therefore, the syndrome can be formed by a circuit similar to the encoding circuit. A general syndrome circuit is shown in Figure 3.4.
EXAMPLE 3.4 Consider the (7, 4) linear code whose parity-check matrix is given in Example 3.3. Let r = (r0, r1, r2, r3, r4, r5, r6) be the received vector. Then, the syndrome is given by s = r·H^T.
The syndrome digits are
The syndrome circuit for this code is shown in Figure 3.5. The syndrome s computed from the received vector r depends only on the error pattern e and not on the transmitted code-word v. Because r is the vector sum of v and e, it follows from (3.10) that

s = r·H^T = (v + e)·H^T = v·H^T + e·H^T;
FIGURE 3.4: Syndrome circuit for a linear systematic (n, k) code.
FIGURE 3.5: Syndrome circuit for the (7, 4) code given in Table 3.1.
however, v·H^T = 0. Consequently, we obtain the following relation between the syndrome and the error pattern:

s = e·H^T.   (3.12)
If the parity-check matrix is expressed in the systematic form as given by (3.7), multiplying out e·H^T yields the following linear relationship between the syndrome digits and the error digits:

s_j = e_j + e_{n-k}·p_{0j} + e_{n-k+1}·p_{1j} + … + e_{n-1}·p_{k-1,j},  0 ≤ j < n - k.   (3.13)
The syndrome digits are simply linear combinations of the error digits. Clearly, they provide
information about the error digits and therefore can be used for error correction.
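Syndrome computation (3.10) is a matrix-vector product over GF(2). The parity-check matrix H below is an assumed H = [I_3 P^T] for the (7, 4) example; it reproduces the behavior described above, giving a zero syndrome exactly for code-words:

```python
def syndrome(r, H):
    """s = r * H^T over GF(2): component j is the inner product of r with row j of H, mod 2."""
    return [sum(rb & hb for rb, hb in zip(r, row)) % 2 for row in H]

# Assumed parity-check matrix H = [I_3  P^T] for the (7, 4) example code.
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

print(syndrome([1, 0, 0, 1, 0, 1, 1], H))  # code-word       -> [0, 0, 0]
print(syndrome([1, 0, 0, 1, 0, 0, 1], H))  # one bit flipped -> [1, 1, 1]
```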
At this point, one might think that any error correction scheme would be a method of solving the n - k linear equations of (3.13) for the error digits. Once the error pattern e was found, the vector r + e would be taken as the actual transmitted code-word. Unfortunately, determining the true error vector e is not a simple matter. This is because the n - k linear equations of (3.13) do not have a unique solution but have 2^k solutions (this will be proved in Theorem 3.6).
In other words, there are 2^k error patterns that result in the same syndrome, and the true error pattern e is just one of them. Therefore, the decoder has to determine the true error vector from a set of 2^k candidates. To minimize the probability of a decoding error, the most probable error pattern that
satisfies the equations of (3.13) is chosen as the true error vector. If the channel is a BSC, the most
probable error pattern is the one that has the smallest number of nonzero digits.
The notion of using syndrome for error correction may be clarified by an example.
EXAMPLE 3.5 Again, we consider the (7, 4) code whose parity-check matrix is given in Example 3.3. Let v = (1 0 0 1 0 1 1) be the transmitted code-word and r = (1 0 0 1 0 0 1) be the received vector. On receiving r, the receiver computes the syndrome:

s = r·H^T = (1 1 1).

Next, the receiver attempts to determine the true error vector e = (e0, e1, e2, e3, e4, e5, e6), which yields the given syndrome. It follows from (3.12) or (3.13) that the error digits are related to the syndrome digits by the following linear equations:
There are 2^4 = 16 error patterns that satisfy the preceding equations, namely,
The error vector e = (0 0 0 0 0 1 0) has the smallest number of nonzero components. If the channel is a BSC, e = (0 0 0 0 0 1 0) is the most probable error vector that satisfies the preceding equations. Taking e = (0 0 0 0 0 1 0) as the true error vector, the receiver decodes the received vector r = (1 0 0 1 0 0 1) into the following code-word:

v* = r + e = (1 0 0 1 0 1 1).

We see that v* is the actual transmitted code-word. Hence, the receiver has performed a correct
decoding. Later we show that the (7, 4) linear code considered in this example is capable of correcting
any single error over a span of seven digits; that is, if a code-word is transmitted and if only one digit
is changed by the channel noise, the receiver will be able to determine the true error vector and to
perform a correct decoding.
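The single-error correction just described can be mechanized as a small syndrome lookup table: precompute the syndrome of every weight-1 error pattern, then add the matching pattern to r. A sketch, using the same assumed parity-check matrix as in the earlier sketches:

```python
def syndrome(r, H):
    """s = r * H^T over GF(2), returned as a tuple so it can be a dictionary key."""
    return tuple(sum(rb & hb for rb, hb in zip(r, row)) % 2 for row in H)

H = [[1, 0, 0, 1, 0, 1, 1],   # assumed parity-check matrix of the (7, 4) code
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

n = 7
# Map the syndrome of each weight-1 error pattern to that pattern.
table = {}
for pos in range(n):
    e = [0] * n
    e[pos] = 1
    table[syndrome(e, H)] = e

def decode(r):
    """Correct a single error, or pass r through when the syndrome is zero."""
    s = syndrome(r, H)
    if s == (0, 0, 0):
        return list(r)
    e = table[s]                      # most probable (weight-1) error pattern
    return [rb ^ eb for rb, eb in zip(r, e)]

print(decode([1, 0, 0, 1, 0, 0, 1]))  # -> [1, 0, 0, 1, 0, 1, 1], as in Example 3.5
```

The table has one entry per position because all seven columns of H are nonzero and distinct, which is exactly the single-error-correcting property discussed in the next sections.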
More discussion on error correction based on syndrome is given in Section 3.5. Various methods of
determining the true error pattern from the n - k linear equations of (3.13) are presented in later chapters.
3.3 The Minimum Distance of a Block Code
In this section we introduce an important parameter of a block code called the minimum distance. This
parameter determines the random-error-detecting and random-error-correcting capabilities of a code.
Let v = (v0, v1, …, v_{n-1}) be a binary n-tuple. The Hamming weight (or simply weight) of v, denoted
by w(v), is defined as the number of nonzero components of v. For example, the Hamming weight of v
= (1001011) is 4. Let v and w be two n-tuples. The Hamming distance (or simply distance) between v
and w, denoted d(v, w), is defined as the number of places where they differ. For example, the
Hamming distance between v = (1001011) and w = (0100011) is 3; they differ in the zeroth, first, and
third places. The Hamming distance is a metric function that satisfies the triangle inequality. Let v, w, and x be three n-tuples. Then,

d(v, w) + d(w, x) ≥ d(v, x).   (3.14)
(The proof of this inequality is left as a problem.) It follows from the definition of Hamming distance
and the definition of modulo-2 addition that the Hamming distance between two n-tuples v and w is
equal to the Hamming weight of the sum of v and w; that is,

d(v, w) = w(v + w).   (3.15)
For example, the Hamming distance between v = (1001011) and w = (1110010) is 4, and the weight of
v w = (0111001) is also 4.
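Relation (3.15) is easy to confirm computationally; the vectors below are the two from the example above:

```python
v = [1, 0, 0, 1, 0, 1, 1]
w = [1, 1, 1, 0, 0, 1, 0]

distance = sum(a != b for a, b in zip(v, w))      # Hamming distance d(v, w)
weight_of_sum = sum(a ^ b for a, b in zip(v, w))  # Hamming weight w(v + w)

print(distance, weight_of_sum)  # -> 4 4
```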
Given a block code C, one can compute the Hamming distance between any two distinct code-words. The minimum distance of C, denoted by dmin, is defined as

dmin = min{d(v, w) : v, w ∈ C, v ≠ w}.   (3.16)

If C is a linear block code, the sum of two code-words is also a code-word. It follows from (3.15) that the Hamming distance between two code-words in C is equal to the Hamming weight of a third code-word in C. Then, it follows from (3.16) that

dmin = min{w(v + w) : v, w ∈ C, v ≠ w} = min{w(x) : x ∈ C, x ≠ 0} = wmin.   (3.17)

The parameter wmin = min{w(x) : x ∈ C, x ≠ 0} is called the minimum weight of the linear code C. Summarizing the preceding result, we have the following theorem.
THEOREM 3.1 The minimum distance of a linear block code is equal to the minimum weight of its
nonzero code-words and vice versa.
Therefore, for a linear block code, determining the minimum distance of the code is equivalent to
determining its minimum weight. The (7, 4) code given in Table 3.1 has minimum weight 3; thus, its
minimum distance is 3. Next, we prove a number of theorems that relate the weight structure of a
linear block code to its parity-check matrix.
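By Theorem 3.1, the minimum distance of a linear code can be found by enumerating all 2^k code-words and taking the smallest nonzero weight. A brute-force check for the (7, 4) code, using the assumed generator from the earlier sketches:

```python
from itertools import product

G = [(1, 1, 0, 1, 0, 0, 0),   # assumed generator of the (7, 4) code
     (0, 1, 1, 0, 1, 0, 0),
     (1, 1, 1, 0, 0, 1, 0),
     (1, 0, 1, 0, 0, 0, 1)]

codewords = set()
for u in product([0, 1], repeat=4):
    v = [0] * 7
    for ui, g in zip(u, G):
        if ui:
            v = [a ^ b for a, b in zip(v, g)]
    codewords.add(tuple(v))

d_min = min(sum(v) for v in codewords if any(v))  # minimum nonzero weight = minimum distance
print(len(codewords), d_min)  # -> 16 3
```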
THEOREM 3.2 Let C be an (n, k) linear code with parity-check matrix H. For each code-word of Hamming weight l, there exist l columns of H such that the vector sum of these l columns is equal to the zero vector. Conversely, if there exist l columns of H whose vector sum is the zero vector, there exists a code-word of Hamming weight l in C.

Proof. We express the parity-check matrix in the following form:

H = [h0, h1, …, h_{n-1}],

where h_i represents the ith column of H. Let v = (v0, v1, …, v_{n-1}) be a code-word of weight l. Then, v has l nonzero components. Let v_{i1}, v_{i2}, …, v_{il} be the l nonzero components of v, where 0 ≤ i1 < i2 < … < il ≤ n - 1. Then, v_{i1} = v_{i2} = … = v_{il} = 1. Because v is a code-word, we must have

0 = v·H^T = v_{i1}·h_{i1} + v_{i2}·h_{i2} + … + v_{il}·h_{il} = h_{i1} + h_{i2} + … + h_{il}.

This proves the first part of the theorem.

Now, suppose that h_{i1}, h_{i2}, …, h_{il} are l columns of H such that

h_{i1} + h_{i2} + … + h_{il} = 0.   (3.18)

We form a binary n-tuple x = (x0, x1, …, x_{n-1}) whose nonzero components are x_{i1}, x_{i2}, …, x_{il}. The Hamming weight of x is l. Consider the product

x·H^T = x_{i1}·h_{i1} + x_{i2}·h_{i2} + … + x_{il}·h_{il} = h_{i1} + h_{i2} + … + h_{il}.

It follows from (3.18) that x·H^T = 0. Thus, x is a code-word of weight l in C. This proves the second part of the theorem.
The following two corollaries follow from Theorem 3.2.
COROLLARY 3.2.1 Let C be a linear block code with parity-check matrix H. If no d - 1 or fewer columns of H add to 0, the code has minimum weight at least d.

COROLLARY 3.2.2 Let C be a linear code with parity-check matrix H. The minimum weight (or the minimum distance) of C is equal to the smallest number of columns of H that sum to 0.
Consider the (7, 4) linear code given in Table 3.1. The parity-check matrix of this code is
We see that all columns of H are nonzero and that no two of them are alike. Therefore, no two or fewer
columns sum to 0. Hence, the minimum weight of this code is at least 3; however, the zeroth, second,
and sixth columns sum to 0. Thus, the minimum weight of the code is 3. From Table 3.1 we see that
the minimum weight of the code is indeed 3. It follows from Theorem 3.1 that the minimum distance is
3.
Corollaries 3.2.1 and 3.2.2 are generally used to determine the minimum distance or to establish a
lower bound on the minimum distance of a linear block code.
3.4 Error-Detecting and Error-Correcting Capabilities of a Block Code
When a code-word v is transmitted over a noisy channel, an error pattern of l errors will result in a received vector r that differs from the transmitted code-word v in l places [i.e., d(v, r) = l]. If the minimum distance of a block code C is dmin, any two distinct code-words of C differ in at least dmin places. For this code C, no error pattern of dmin - 1 or fewer errors can change one code-word into another. Therefore, any error pattern of dmin - 1 or fewer errors will result in a received vector r that is not a code-word in C. When the receiver detects that the received vector is not a code-word of C, we say that errors are detected.
Hence, a block code with minimum distance dmin is capable of detecting all the error patterns of dmin - 1 or fewer errors. However, it cannot detect all the error patterns of dmin errors, because there exists at least one pair of code-words that differ in dmin places, and there is an error pattern of dmin errors that
will carry one into the other. The same argument applies to error patterns of more than dmin errors.
For this reason we say that the random-error-detecting capability of a block code with minimum
distance dmin is dmin – 1.
Even though a block code with minimum distance dmin guarantees detection of all the error patterns of dmin - 1 or fewer errors, it is also capable of detecting a large fraction of error patterns with dmin or more errors. In fact, an (n, k) linear code is capable of detecting 2^n - 2^k error patterns of length n. This can be shown as follows. Among the 2^n - 1 possible nonzero error patterns, there are 2^k - 1 error patterns that are identical to the 2^k - 1 nonzero code-words. If any of these 2^k - 1 error patterns occurs, it alters the transmitted code-word v into another code-word w.
Thus, w is received, and its syndrome is zero. In this case, the decoder accepts w as the transmitted code-word and thus performs an incorrect decoding. Therefore, there are 2^k - 1 undetectable error patterns. If an error pattern is not identical to a nonzero code-word, the received vector r will not be a code-word and the syndrome will not be zero. In this case, an error will be detected. There are exactly 2^n - 2^k error patterns that are not identical to the code-words of an (n, k) linear code. These 2^n - 2^k error patterns are detectable error patterns. For large n, 2^k - 1 is, in general, much smaller than 2^n. Therefore, only a small fraction of error patterns pass through the decoder without being detected.
Let C be an (n, k) linear code. Let A_i be the number of code-words of weight i in C. The numbers A0, A1, …, A_n are called the weight distribution of C. If C is used only for error detection on a BSC, the probability that the decoder will fail to detect the presence of errors can be computed from the weight distribution of C. Let Pu(E) denote the probability of an undetected error. Because an undetected error occurs only when the error pattern is identical to a nonzero code-word of C,

Pu(E) = Σ_{i=1}^{n} A_i p^i (1 - p)^{n-i},   (3.19)

where p is the transition probability of the BSC. If the minimum distance of C is dmin, then A_1 to A_{dmin-1} are zero.
Consider the (7, 4) code given in Table 3.1. The weight distribution of this code is A0 = 1, A1 = A2 = 0, A3 = 7, A4 = 7, A5 = A6 = 0, and A7 = 1. The probability of an undetected error is

Pu(E) = 7p^3(1 - p)^4 + 7p^4(1 - p)^3 + p^7.

If p = 10^-2, this probability is approximately 7 × 10^-6. In other words, if 1 million code-words are transmitted over a BSC with p = 10^-2, on average seven erroneous code-words pass through the decoder without being detected.
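Equation (3.19) and the figure quoted above can be checked directly from the weight distribution:

```python
def undetected_error_prob(A, n, p):
    """P_u(E) = sum over i of A_i * p^i * (1-p)^(n-i), eq. (3.19).
    A maps weight i to the number A_i of code-words of that weight (nonzero weights only)."""
    return sum(Ai * p**i * (1 - p)**(n - i) for i, Ai in A.items())

A = {3: 7, 4: 7, 7: 1}  # weight distribution of the (7, 4) code
pu = undetected_error_prob(A, 7, 1e-2)
print(pu)  # approximately 6.8e-6, i.e. about 7 undetected errors per million code-words
```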
If a block code C with minimum distance dmin is used for random-error correction, one would like to
know how many errors the code is able to correct. The minimum distance dmin is either odd or even.
Let t be a positive integer such that

2t + 1 ≤ dmin ≤ 2t + 2.   (3.20)
Next, we show that the code C is capable of correcting all the error patterns of t or fewer errors. Let v and r be the transmitted code-word and the received vector, respectively. Let w be any other code-word in C. The Hamming distances among v, r, and w satisfy the triangle inequality:

d(v, r) + d(w, r) ≥ d(v, w).   (3.21)

Suppose that an error pattern of t' errors occurs during the transmission of v. Then, the received vector r differs from v in t' places, and therefore d(v, r) = t'. Because v and w are code-words in C, we have

d(v, w) ≥ dmin ≥ 2t + 1.   (3.22)

Combining (3.21) and (3.22) and using the fact that d(v, r) = t', we obtain the following inequality:

d(w, r) ≥ 2t + 1 - t'.

If t' ≤ t, then

d(w, r) > t ≥ t' = d(v, r).

The preceding inequality says that if an error pattern of t or fewer errors occurs, the received vector r is closer (in Hamming distance) to the transmitted code-word v than to any other code-word w in C. For a BSC, this means that the conditional probability P(r|v) is greater than the conditional probability P(r|w) for w ≠ v. Based on the maximum likelihood decoding scheme, r is decoded into v, which is the actual transmitted code-word. The result is a correct decoding, and thus errors are corrected.
In contrast, the code is not capable of correcting all the error patterns of l errors with l > t, for there is at least one case in which an error pattern of l errors results in a received vector that is closer to an incorrect code-word than to the transmitted code-word. To show this, let v and w be two code-words in C such that

d(v, w) = dmin.
Let e1 and e2 be two error patterns that satisfy the following conditions:

i. e1 + e2 = v + w;
ii. e1 and e2 do not have nonzero components in common places.

Obviously, we have

w(e1) + w(e2) = w(v + w) = d(v, w) = dmin.   (3.23)

Now, suppose that v is transmitted and is corrupted by the error pattern e1. Then, the received vector is

r = v + e1.

The Hamming distance between v and r is

d(v, r) = w(v + r) = w(e1).   (3.24)

The Hamming distance between w and r is

d(w, r) = w(w + r) = w(w + v + e1) = w(e2).   (3.25)

Now, suppose that the error pattern e1 contains more than t errors (i.e., w(e1) > t). Because 2t + 1 ≤ dmin ≤ 2t + 2, it follows from (3.23) that

w(e2) = dmin - w(e1) ≤ (2t + 2) - (t + 1) = t + 1.

Combining (3.24) and (3.25) and using the fact that w(e1) ≥ t + 1 and w(e2) ≤ t + 1, we obtain the following inequality:

d(v, r) ≥ d(w, r).

This inequality says that there exists an error pattern of l (l > t) errors that results in a received vector that is closer to an incorrect code-word than to the transmitted code-word. Based on the maximum likelihood decoding scheme, an incorrect decoding would be performed.
In summary, a block code with minimum distance dmin guarantees correction of all the error patterns of t = ⌊(dmin - 1)/2⌋ or fewer errors, where ⌊(dmin - 1)/2⌋ denotes the largest integer no greater than (dmin - 1)/2. The parameter t = ⌊(dmin - 1)/2⌋ is called the random-error-correcting capability of the code. The code is referred to as a t-error-correcting code. The (7, 4) code given in Table 3.1 has minimum distance 3 and thus t = 1. The code is capable of correcting any error pattern of a single error over a block of seven digits.
A block code with random-error-correcting capability t is usually capable of correcting many error patterns of t + 1 or more errors. A t-error-correcting (n, k) linear code is capable of correcting a total of 2^{n-k} error patterns, including those with t or fewer errors (this will be seen in the next section). If a t-error-correcting block code is used strictly for error correction on a BSC with transition probability p, the probability that the decoder commits an erroneous decoding is upper bounded by

P(E) ≤ Σ_{i=t+1}^{n} C(n, i) p^i (1 - p)^{n-i},   (3.26)

where C(n, i) denotes the binomial coefficient.
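The bound (3.26) is a binomial tail probability; for a single-error-correcting code of length 7 (t = 1) on a BSC with p = 10^-2 it evaluates to roughly 2 × 10^-3:

```python
from math import comb

def decoding_error_bound(n, t, p):
    """Upper bound sum_{i=t+1}^{n} C(n, i) p^i (1-p)^(n-i) on the probability
    of a decoding error for a t-error-correcting code on a BSC."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

print(decoding_error_bound(7, 1, 1e-2))  # about 2.0e-3
```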
In practice, a code is often used for correcting λ or fewer errors and simultaneously detecting l (l > λ) or fewer errors. That is, when λ or fewer errors occur, the code is capable of correcting them; when more than λ but fewer than l + 1 errors occur, the code is capable of detecting their presence without making a decoding error. For this purpose, the minimum distance dmin of the code is at least λ + l + 1 (left as a problem). Thus, a block code with dmin = 10 is capable of correcting three or fewer errors and simultaneously detecting six or fewer errors.
So far, we have considered only the case in which the receiver makes a hard-decision for each received
symbol; however, a receiver may be designed to declare a symbol erased when it is received
ambiguously (or unreliably). In this case, the received sequence consists of zeros, ones, or erasures. A
code can be used to correct combinations of errors and erasures. A code with minimum distance dmin is capable of correcting any pattern of v errors and e erasures provided the following condition

dmin ≥ 2v + e + 1

is satisfied. To see this, delete from all the code-words the e components where the receiver has declared erasures. This deletion results in a shortened code of length n - e. The minimum distance of this shortened code is at least dmin - e ≥ 2v + 1.
Hence, v errors can be corrected in the unerased positions. As a result, the shortened code-word with e components erased can be recovered. Finally, because dmin ≥ e + 1, there is one and only one code-word in the original code that agrees with the unerased components. Consequently, the entire code-word can be recovered. This correction of combinations of errors and erasures is often used in practice.
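The condition dmin ≥ 2v + e + 1 gives a quick feasibility check for combined error-and-erasure correction; a one-line sketch:

```python
def correctable(d_min, errors, erasures):
    """True when a code of minimum distance d_min can correct this pattern
    of random errors plus erasures: d_min >= 2*errors + erasures + 1."""
    return d_min >= 2 * errors + erasures + 1

# With d_min = 10: three errors plus up to three erasures, or nine erasures alone.
print(correctable(10, 3, 3), correctable(10, 0, 9), correctable(10, 5, 0))  # -> True True False
```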
From the preceding discussion we see that the random-error-detecting and random-error-correcting capabilities of a block code are determined by the code's minimum distance. Clearly, for a given n and
k, one would like to construct a block code with as large a minimum distance as possible, in addition to
the implementation considerations.
Often an (n, k) linear block code with minimum distance dmin is denoted by (n, k, dmin). The code given by Table 3.1 is a (7, 4, 3) code.
3.5 Standard Array and Syndrome Decoding
In this section we present a scheme for decoding linear block codes. Let C be an (n, k) linear code. Let v1, v2, …, v_{2^k} be the code-words of C. No matter which code-word is transmitted over a noisy channel, the received vector r may be any of the 2^n n-tuples over GF(2). Any decoding scheme used at the receiver is a rule to partition the 2^n possible received vectors into 2^k disjoint subsets D1, D2, …, D_{2^k} such that the code-word v_i is contained in the subset D_i for 1 ≤ i ≤ 2^k. Thus, each subset D_i is in one-to-one correspondence with a code-word v_i. If the received vector r is found in the subset D_i, r is decoded into v_i. Decoding is correct if and only if the received vector r is in the subset D_i that corresponds to the code-word transmitted.
A method to partition the 2^n possible received vectors into 2^k disjoint subsets such that each subset contains one and only one code-word is described here. The partition is based on the linear structure of the code. First, we place the 2^k code-words of C in a row with the all-zero code-word v1 = (0, 0, …, 0) as the first (leftmost) element. From the remaining 2^n - 2^k n-tuples, we choose an n-tuple e2 and place it under the zero vector v1. Now, we form a second row by adding e2 to each code-word v_i in the first row and placing the sum e2 + v_i under v_i.
Having completed the second row, we choose an unused n-tuple e3 from the remaining n-tuples and place it under v1. Then, we form a third row by adding e3 to each code-word v_i in the first row and placing e3 + v_i under v_i. We continue this process until we have used all the n-tuples. Then, we have an array of 2^{n-k} rows and 2^k columns, as shown in Figure 3.6.
FIGURE 3.6: Standard array for an (n, k) linear code.
This array is called a standard array of the given linear code C.
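The row-by-row construction just described can be sketched in code. This is a minimal illustration, not the book's own program: the (6, 3) generator matrix `G` is an assumed example (the matrix of Example 3.6 is not reproduced in this excerpt), and `add` is a helper for component-wise modulo-2 addition.

```python
from itertools import product

# Assumed (6, 3) binary generator matrix for illustration only.
G = [(0, 1, 1, 1, 0, 0),
     (1, 0, 1, 0, 1, 0),
     (1, 1, 0, 0, 0, 1)]
n, k = 6, 3

def add(a, b):
    # Component-wise modulo-2 addition of two n-tuples.
    return tuple((x + y) % 2 for x, y in zip(a, b))

# The 2^k code-words: all linear combinations of the rows of G.
codewords = []
for msg in product((0, 1), repeat=k):
    v = (0,) * n
    for bit, row in zip(msg, G):
        if bit:
            v = add(v, row)
    codewords.append(v)

# Build the standard array: repeatedly pick a least-weight unused n-tuple
# as the next coset leader and add it to every code-word in the first row.
remaining = set(product((0, 1), repeat=n)) - set(codewords)
rows = [list(codewords)]
while remaining:
    leader = min(remaining, key=sum)          # least-weight unused n-tuple
    row = [add(leader, v) for v in codewords]
    remaining -= set(row)
    rows.append(row)

assert len(rows) == 2 ** (n - k)              # 2^(n-k) cosets (Theorem 3.3)
assert all(len(set(r)) == 2 ** k for r in rows)
```

Choosing each leader as a minimum-weight unused n-tuple anticipates the point made later in this section that such a standard array yields minimum distance decoding.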
It follows from the construction rule of a standard array that the sum of any two vectors in the same
row is a code-word in C. Next, we prove some important properties of a standard array.
THEOREM 3.3 No two n-tuples in the same row of a standard array are identical. Every n-tuple
appears in one and only one row.
Proof. The first part of the theorem follows from the fact that all the code-words of C are distinct. Suppose that two n-tuples in the lth row are identical, say el + vi = el + vj with i ≠ j. This means that vi = vj, which is impossible. Therefore, no two n-tuples in the same row are identical.
It follows from the construction rule of the standard array that every n-tuple appears at least once. Now, suppose that an n-tuple appears in both the lth row and the mth row with l < m. Then this n-tuple must be equal to el + vi for some i and equal to em + vj for some j. As a result, el + vi = em + vj. From this equality we obtain em = el + vi + vj. Because vi and vj are code-words in C, vi + vj is also a code-word in C, say vs. Then, em = el + vs. This implies that the n-tuple em is in the lth row of the array, which contradicts the construction rule of the array that em, the first element of the mth row, should be unused in any previous row. Therefore, no n-tuple can appear in more than one row of the array. This concludes the proof of the second part of the theorem.
From Theorem 3.3 we see that there are 2^n/2^k = 2^(n-k) disjoint rows in the standard array, and that each row consists of 2^k distinct elements. The 2^(n-k) rows are called the cosets of the code C, and the first n-tuple el of each coset is called a coset leader (or coset representative). The coset concept for a subgroup was presented in Section 2.1. Any element in a coset can be used as its coset leader. This does not change the elements of the coset; it simply permutes them.
EXAMPLE 3.6
Consider the (6, 3) linear code generated by the following matrix:
The standard array of this code is shown in Figure 3.7.
A standard array of an (n, k) linear code C consists of 2^k disjoint columns. Each column consists of 2^(n-k) n-tuples, with the topmost one a code-word in C. Let Dj denote the jth column of the standard array. Then,
Dj = {vj, e2 + vj, e3 + vj, …, e_(2^(n-k)) + vj},     (3.27)
where vj is a code-word of C, and e2, e3, …, e_(2^(n-k)) are the coset leaders.
FIGURE 3.7: Standard array for a (6, 3) code.
The 2^k disjoint columns D1, D2, …, D2^k can be used for decoding the code C as described earlier in this section. Suppose that the code-word vj is transmitted over a noisy channel. From (3.27) we see that the received vector r is in Dj if the error pattern caused by the channel is a coset leader. In this event, the received vector r will be decoded correctly into the transmitted code-word vj; however, if the error pattern caused by the channel is not a coset leader, an erroneous decoding will result. This can be seen as follows. The error pattern x caused by the channel must be in some coset and under some nonzero code-word, say in the lth coset and under the code-word vi ≠ 0. Then, x = el + vi, and the received vector is
r = vj + x = el + (vi + vj) = el + vs,
where vs = vi + vj is a code-word in C different from vj.
The received vector r is thus in Ds and is decoded into vs, which is not the transmitted code-word. This
results in an erroneous decoding. Therefore, the decoding is correct if and only if the error pattern
caused by the channel is a coset leader. For this reason, the 2^(n-k) coset leaders (including the zero
vector 0) are called the correctable error patterns. Summarizing the preceding results, we have the
following theorem:
THEOREM 3.4 Every (n, k) linear block code is capable of correcting 2^(n-k) error patterns.
To minimize the probability of a decoding error, the error patterns that are most likely to occur for a
given channel should be chosen as the coset leaders. For a BSC, an error pattern of smaller weight is
more probable than an error pattern of larger weight. Therefore, when a standard array is formed, each
coset leader should be chosen to be a vector of least weight from the remaining available vectors. If
coset leaders are chosen in this manner, each coset leader has minimum weight in its coset. As a result,
the decoding based on the standard array is the minimum distance decoding (i.e., the maximum
likelihood decoding). To see this, let r be the received vector. Suppose that r is found in the ith column Di and the lth coset of the standard array. Then, r is decoded into the code-word vi. Because r = el + vi, the distance between r and vi is
d(r, vi) = w(r + vi) = w(el).     (3.28)
Now, consider the distance between r and any other code-word, say vj:
d(r, vj) = w(r + vj) = w(el + vi + vj).     (3.29)
Because vi and vj are two different code-words, their vector sum, vi + vj, is a nonzero code-word, say vs. Thus,
d(r, vj) = w(el + vs).
Because el and el + vs are in the same coset and since w(el) ≤ w(el + vs), it follows from (3.28) and (3.29) that
d(r, vi) ≤ d(r, vj).
This says that the received vector is decoded into a closest code-word. Hence, if each coset leader is
chosen to have minimum weight in its coset, the decoding based on the standard array is the minimum
distance decoding, or MLD.
Let αi denote the number of coset leaders of weight i. The numbers α0, α1, …, αn are called the weight distribution of the coset leaders. Knowing these numbers, we can compute the probability of a decoding error. Because a decoding error occurs if and only if the error pattern is not a coset leader, the error probability for a BSC with transition probability p is
P(E) = 1 - Σ_(i=0)^n αi p^i (1 - p)^(n-i).     (3.30)
EXAMPLE 3.7
Consider the (6, 3) code given in Example 3.6. The standard array for this code is shown in Figure 3.7. The weight distribution of the coset leaders is α0 = 1, α1 = 6, α2 = 1, and α3 = α4 = α5 = α6 = 0. Thus,
P(E) = 1 - (1 - p)^6 - 6p(1 - p)^5 - p^2(1 - p)^4.
For p = 10^-2, we have P(E) ≈ 1.37 × 10^-3.
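As a numeric check of Example 3.7, the short sketch below evaluates the coset-leader error-probability formula with the weight distribution α = (1, 6, 1, 0, 0, 0, 0) stated in the text.

```python
# Numeric check of Example 3.7:
# P(E) = 1 - sum_i alpha_i * p^i * (1 - p)^(n - i) for the (6, 3) code.
n = 6
alpha = [1, 6, 1, 0, 0, 0, 0]   # alpha_0 .. alpha_6 from the text
p = 1e-2

P_E = 1 - sum(a * p**i * (1 - p)**(n - i) for i, a in enumerate(alpha))
assert abs(P_E - 1.37e-3) < 5e-5   # matches P(E) ≈ 1.37 x 10^-3
```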
An (n, k) linear code is capable of detecting 2^n - 2^k error patterns; however, it is capable of correcting only 2^(n-k) error patterns. For large n, 2^(n-k) is a small fraction of 2^n - 2^k. Therefore, the probability of a decoding error is much higher than the probability of an undetected error.
THEOREM 3.5 For an (n, k) linear code C with minimum distance dmin, all the n-tuples of weight t = ⌊(dmin - 1)/2⌋ or less can be used as coset leaders of a standard array of C. If all the n-tuples of weight t or less are used as coset leaders, there is at least one n-tuple of weight t + 1 that cannot be used as a coset leader.
Proof. Because the minimum distance of C is dmin, the minimum weight of C is also dmin. Let x and y be two n-tuples of weight t or less. Clearly, the weight of x + y is
w(x + y) ≤ w(x) + w(y) ≤ 2t < dmin.
Suppose that x and y are in the same coset; then, x + y must be a nonzero code-word in C. This is impossible, because the weight of x + y is less than the minimum weight of C. Therefore, no two n-tuples of weight t or less can be in the same coset of C, and all the n-tuples of weight t or less can be used as coset leaders.
Let v be a minimum weight code-word of C [i.e., w(v) = dmin]. Let x and y be two n-tuples that satisfy the following two conditions:
i. x + y = v;
ii. x and y do not have nonzero components in common places.
It follows from the definition that x and y must be in the same coset and
w(x) + w(y) = w(v) = dmin.
Suppose we choose y such that w(y) = t + 1. Because 2t + 1 ≤ dmin ≤ 2t + 2, we have w(x) = t or t + 1. If x is used as a coset leader, then y cannot be a coset leader.
Theorem 3.5 reconfirms the fact that an (n, k) linear code with minimum distance dmin is capable of correcting all the error patterns of t = ⌊(dmin - 1)/2⌋ or fewer errors, but it is not capable of correcting all the error patterns of weight t + 1.
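The guaranteed error-correcting capability above is a one-line computation; the sketch below is a small hedged illustration of it (the function name is ours, not the book's).

```python
# The guaranteed random-error-correcting capability of a code with
# minimum distance d_min is t = floor((d_min - 1) / 2).
def error_correcting_capability(d_min):
    return (d_min - 1) // 2

assert error_correcting_capability(3) == 1   # e.g. the (7, 4, 3) code
assert error_correcting_capability(7) == 3
```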
A standard array has an important property that can be used to simplify the decoding process. Let H be the parity-check matrix of the given (n, k) linear code C.
THEOREM 3.6 All the 2k n-tuples of a coset have the same syndrome. The syndromes for different
cosets are different.
Proof. Consider the coset whose coset leader is el. A vector in this coset is the sum of el and some code-word vi in C. The syndrome of this vector is
(el + vi) · H^T = el · H^T + vi · H^T = el · H^T
(since vi · H^T = 0). The preceding equality says that the syndrome of any vector in a coset is equal to the syndrome of the coset leader. Therefore, all the vectors of a coset have the same syndrome.
Let ej and el be the coset leaders of the jth and lth cosets, respectively, where j < l. Suppose that the syndromes of these two cosets are equal. Then,
ej · H^T = el · H^T, and hence (ej + el) · H^T = 0.
This implies that ej + el is a code-word in C, say vi. Thus, ej + el = vi, and el = ej + vi. This implies that el is in the jth coset, which contradicts the construction rule of a standard array that a coset leader should be previously unused. Therefore, no two cosets have the same syndrome.
We recall that the syndrome of an n-tuple is an (n - k)-tuple, and there are 2^(n-k) distinct (n - k)-tuples. It follows from Theorem 3.6 that there is a one-to-one correspondence between a coset and an (n - k)-tuple syndrome; or, there is a one-to-one correspondence between a coset leader (a correctable error pattern) and a syndrome. Using this one-to-one correspondence, we can form a decoding table, which is much simpler to use than a standard array. The table consists of the 2^(n-k) coset leaders (the correctable error patterns) and their corresponding syndromes. This table is either stored or wired in the receiver. The decoding of a received vector r consists of three steps:
1. Compute the syndrome of r, s = r · H^T.
2. Locate the coset leader el whose syndrome is equal to r · H^T. Then, el is assumed to be the error
pattern caused by the channel.
3. Decode the received vector r into the code-word v* = r + el.
The described decoding scheme is called the syndrome decoding or table-lookup decoding. In principle,
table-lookup decoding can be applied to any (n, k) linear code. It results in minimum decoding delay
and minimum error probability; however, for large n - k, the implementation of this decoding scheme
becomes impractical, and either a large storage or a complicated logic circuitry is needed. Several
practical decoding schemes that are variations of table-lookup decoding are discussed in subsequent
chapters. Each of these decoding schemes requires additional properties of a code other than the linear
structure.
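The three decoding steps can be sketched as follows. This is a hedged illustration, not the book's decoder: `H` is an assumed systematic parity-check matrix for a single-error-correcting (7, 4) Hamming code, not necessarily the one in Table 3.1, and the decoding table holds only the all-zero pattern and the seven single-error coset leaders.

```python
# Assumed (7, 4) parity-check matrix: all columns nonzero and distinct,
# so every single error has a unique syndrome.
H = [(1, 0, 0, 1, 0, 1, 1),
     (0, 1, 0, 1, 1, 1, 0),
     (0, 0, 1, 0, 1, 1, 1)]
n = 7

def syndrome(r):
    # Step 1: s = r * H^T over GF(2).
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# Offline: decoding table mapping each syndrome to its coset leader
# (here, the zero pattern and the seven weight-1 error patterns).
table = {syndrome((0,) * n): (0,) * n}
for i in range(n):
    e = tuple(int(j == i) for j in range(n))
    table[syndrome(e)] = e

def decode(r):
    e = table[syndrome(r)]                               # step 2: look up leader
    return tuple((ri + ei) % 2 for ri, ei in zip(r, e))  # step 3: v* = r + e

# Any single error on the all-zero code-word is corrected:
r = (0, 0, 1, 0, 0, 0, 0)
assert decode(r) == (0,) * n
```

Because the table is indexed by the (n - k)-bit syndrome rather than the full received vector, its size is 2^(n-k) entries instead of 2^n, which is exactly the saving over a full standard array.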
EXAMPLE 3.8
Consider the (7, 4) linear code given in Table 3.1. The parity-check matrix, as given in Example 3.3, is
The code has 2^3 = 8 cosets, and therefore there are eight correctable error patterns (including the all-zero vector). Because the minimum distance of the code is 3, it is capable of correcting all the error patterns of weight 1 or 0. Hence, all the 7-tuples of weight 1 or 0 can be used as coset leaders. There are 1 + 7 = 8 such vectors. We see that for the (7, 4) linear code considered in this example, the number of
correctable error patterns guaranteed by the minimum distance is equal to the total number of
correctable error patterns. The correctable error patterns and their corresponding syndromes are given
in Table 3.2.
Suppose that the code-word v = (1001011) is transmitted, and r = (1001111) is received. For decoding r, we compute the syndrome of r:
s = r · H^T = (0 1 1).
TABLE 3.2: Decoding table for the (7, 4) linear code given in Table 3.1.
From Table 3.2 we find that (0 1 1) is the syndrome of the coset leader e = (0000100). Thus, (0000100) is assumed to be the error pattern caused by the channel, and r is decoded into
v* = r + e = (1001111) + (0000100) = (1001011),
which is the code-word transmitted. The decoding is correct, since the error pattern caused by the channel is a coset leader.
Now, suppose that v = (0000000) is transmitted, and r = (1000100) is received. We see that two errors have occurred during the transmission of v. The error pattern is not correctable and will cause a decoding error. From the decoding table we find that the coset leader e = (0000010) corresponds to the syndrome s = (111). As a result, r is decoded into the code-word
v* = r + e = (1000100) + (0000010) = (1000110).
Because v* is not the code-word transmitted, a decoding error is committed.
Using Table 3.2, we see that the code is capable of correcting any single error over a block of seven
digits. When two or more errors occur, a decoding error will be committed.
The table-lookup decoding of an (n, k) linear code may be implemented as follows. The decoding table is regarded as the truth table of n switching functions:
ej = fj(s0, s1, …, s_(n-k-1)) for 0 ≤ j < n,
where s0, s1, …, s_(n-k-1) are the syndrome digits, which are regarded as switching variables, and e0, e1, …, e_(n-1) are the estimated error digits. When these n switching functions are derived and simplified, a combinational logic circuit with the n - k syndrome digits as inputs and the estimated error digits as
outputs can be realized. The implementation of the syndrome circuit was discussed in Section 3.2. The
general decoder for an (n, k) linear code based on the table-lookup scheme is shown in Figure 3.8.
FIGURE 3.8: General decoder for a linear block code.
The cost of this decoder depends primarily on the complexity of the combinational logic circuit.
EXAMPLE 3.9
Again, we consider the (7, 4) code given in Table 3.1. The syndrome circuit for this code is shown in
Figure 3.5. The decoding table is given by Table 3.2. From this table we form the truth table (Table
3.3).
TABLE 3.3: Truth table for the error digits of the correctable error patterns of the (7, 4) linear code
given in Table 3.1.
The switching expressions for the seven error digits are
where ∧ denotes the logic-AND operation and s′ denotes the logic-COMPLEMENT of s. These seven
switching expressions can be realized by seven 3-input AND gates. The complete circuit of the
decoder is shown in Figure 3.9.
FIGURE 3.9: Decoding circuit for the (7, 4) code given in Table 3.1.
3.6 Probability of an Undetected Error for Linear Codes over a BSC
If an (n, k) linear code is used only for error detection over a BSC, the probability of an undetected error Pu(E) can be computed from (3.19) if the weight distribution of the code is known. There exists an interesting relationship between the weight distribution of a linear code and the weight distribution of its dual code. This relationship often makes the computation of Pu(E) much easier. Let {A0, A1, …, An} be the weight distribution of an (n, k) linear code C, and let {B0, B1, …, Bn} be the weight distribution of its dual code Cd. Now, we represent these two weight distributions in polynomial form as follows:
A(z) = A0 + A1 z + ⋯ + An z^n,  B(z) = B0 + B1 z + ⋯ + Bn z^n.     (3.31)
Then, A(z) and B(z) are related by the following identity:
A(z) = 2^-(n-k) (1 + z)^n B((1 - z)/(1 + z)).     (3.32)
This identity is known as the MacWilliams identity [13]. The polynomials A(z) and B(z) are called the weight enumerators for the (n, k) linear code C and its dual Cd. From the MacWilliams identity, we
see that if the weight distribution of the dual of a linear code is known, the weight distribution of the
code itself can be determined. As a result, this gives us more flexibility in computing the weight
distribution of a linear code.
Using the MacWilliams identity, we can compute the probability of an undetected error for an (n, k) linear code from the weight distribution of its dual. First, we put the expression of (3.19) into the following form:
Pu(E) = (1 - p)^n Σ_(i=1)^n Ai [p/(1 - p)]^i.     (3.33)
Substituting z = p/(1 - p) in A(z) of (3.31) and using the fact that A0 = 1, we obtain the following identity:
A(p/(1 - p)) = 1 + Σ_(i=1)^n Ai [p/(1 - p)]^i.     (3.34)
Combining (3.33) and (3.34), we have the following expression for the probability of an undetected error:
Pu(E) = (1 - p)^n [A(p/(1 - p)) - 1].     (3.35)
From (3.35) and the MacWilliams identity of (3.32), we finally obtain the following expression for Pu(E):
Pu(E) = 2^-(n-k) B(1 - 2p) - (1 - p)^n,     (3.36)
where
B(1 - 2p) = Σ_(i=0)^n Bi (1 - 2p)^i.
Hence, there are two ways of computing the probability of an undetected error for a linear code; often, one is easier than the other. If n - k is smaller than k, it is much easier to compute Pu(E) from (3.36); otherwise, it is easier to use (3.35).
EXAMPLE 3.10
Consider the (7, 4) linear code given in Table 3.1. The dual of this code is generated by its parity-check matrix H (see Example 3.3). Taking the linear combinations of the rows of H, we obtain the following eight vectors in the dual code:
Thus, the weight enumerator for the dual code is B(z) = 1 + 7z^4. Using (3.36), we obtain the probability of an undetected error for the (7, 4) linear code given in Table 3.1:
Pu(E) = 2^-3 [1 + 7(1 - 2p)^4] - (1 - p)^7.
This probability was also computed in Section 3.4 using the weight distribution of the code itself.
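As a numeric companion to Example 3.10, the sketch below checks the MacWilliams identity at one point and evaluates (3.36) at p = 10^-2. The dual enumerator B(z) = 1 + 7z^4 is stated in the text; the code's own enumerator A(z) = 1 + 7z^3 + 7z^4 + z^7 is an assumption here, since the full weight distribution is not reproduced in this excerpt.

```python
# Hedged numeric check of Example 3.10.
n, k, p = 7, 4, 1e-2

A = lambda z: 1 + 7 * z**3 + 7 * z**4 + z**7   # assumed code enumerator
B = lambda z: 1 + 7 * z**4                     # dual enumerator from the text

# MacWilliams identity (3.32): A(z) = 2^-(n-k) (1 + z)^n B((1 - z)/(1 + z)).
z = 0.3
assert abs(A(z) - 2 ** -(n - k) * (1 + z) ** n * B((1 - z) / (1 + z))) < 1e-9

# Probability of an undetected error from (3.36), roughly 7 x 10^-6 here.
P_u = 2 ** -(n - k) * B(1 - 2 * p) - (1 - p) ** n
assert abs(P_u - 6.8e-6) < 5e-7
```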
Theoretically, we can compute the weight distribution of an (n, k) linear code by examining its 2^k code-words or by examining the 2^(n-k) code-words of its dual and then applying the MacWilliams identity; however, for large n, k, and n - k, the computation becomes practically impossible. Except for some short linear codes and a few small classes of linear codes, the weight distributions for many known linear codes are still unknown. Consequently, it is very difficult, if not impossible, to compute their probability of an undetected error.
Although it is difficult to compute the probability of an undetected error for a specific (n, k) linear code for large n and k, it is quite easy to derive an upper bound on the average probability of an undetected error for the ensemble of all (n, k) linear systematic codes. As we showed earlier, an (n, k) linear systematic code is completely specified by a matrix G of the form given by (3.4). The submatrix P consists of k(n - k) entries. Because each entry pij can be either 0 or 1, there are 2^k(n-k) distinct matrices G of the form given by (3.4). Let F denote the ensemble of codes generated by these 2^k(n-k) matrices. Suppose that we choose a code randomly from F and use it for error detection. Let Cj be the chosen code. Then, the probability that Cj will be chosen is
P(Cj) = 2^-k(n-k).     (3.37)
Let Aj,i denote the number of code-words in Cj with weight i. It follows from (3.19) that the probability of an undetected error for Cj is given by
Pu(E|Cj) = Σ_(i=1)^n Aj,i p^i (1 - p)^(n-i).     (3.38)
The average probability of an undetected error for a linear code in F is defined as
Pu(E) = Σ_j P(Cj) Pu(E|Cj),     (3.39)
where the sum is over the 2^k(n-k) codes in F. Substituting (3.37) and (3.38) into (3.39), we obtain
Pu(E) = 2^-k(n-k) Σ_j Σ_(i=1)^n Aj,i p^i (1 - p)^(n-i).     (3.40)
A nonzero n-tuple is contained in either exactly 2^((k-1)(n-k)) codes in F or in none of the codes (left as a problem). Because there are (n choose i) n-tuples of weight i, we have
Σ_j Aj,i ≤ (n choose i) 2^((k-1)(n-k)).     (3.41)
Substituting (3.41) into (3.40), we obtain the following upper bound on the average probability of an undetected error for an (n, k) linear systematic code:
Pu(E) ≤ 2^-(n-k) [1 - (1 - p)^n].     (3.42)
Because 1 - (1 - p)^n < 1, it is clear that Pu(E) < 2^-(n-k).
The preceding result says that there exist (n, k) linear codes with the probability of an undetected error, Pu(E), upper bounded by 2^-(n-k). In other words, there exist (n, k) linear codes with Pu(E) decreasing exponentially with the number of parity-check digits, n - k. Even for moderate n - k, these codes have a very small probability of an undetected error. For example, let n - k = 30. There exist (n, k) linear codes for which Pu(E) is upper bounded by 2^-30 ≈ 10^-9. Many classes of linear codes have been constructed over the past five decades; however, only a few small classes of linear codes have been proved to have a Pu(E) that satisfies the upper bound 2^-(n-k). It is still not known whether the other known linear codes satisfy this upper bound.
Good codes for error detection and their applications in error control will be presented in later chapters.
An excellent treatment of error-detecting codes can be found in [12].
3.7 Single-Parity-Check Codes, Repetition Codes, and Self-Dual Codes
A single-parity-check (SPC) code is a linear block code with a single parity-check digit. Let u = (u0, u1, …, u_(k-1)) be the message to be encoded. The single parity-check digit is given by
p = u0 + u1 + ⋯ + u_(k-1),     (3.43)
which is simply the modulo-2 sum of all the message digits. Adding this parity-check digit to each k-digit message results in a (k + 1, k) linear block code. Each code-word is of the form
v = (p, u0, u1, …, u_(k-1)).
From (3.43), we readily see that p = 1 if the weight of message u is odd, and p = 0 if the weight of message u is even. Therefore, all the code-words of an SPC code have even weights, and the
minimum weight (or minimum distance) of the code is 2. The generator of the code in systematic form
is given by
From (3.44) we find that the parity-check matrix of the code is
Because all the code-words have even weights, an SPC code is also called an even-parity-check code.
SPC codes are often used for simple error detection. Any error pattern with an odd number of errors
will change a code-word into a received vector of odd weight that is not a code-word. Hence, the
syndrome of the received vector is not equal to zero. Consequently, all the error patterns of odd weight
are detectable.
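The encoding rule and the odd-weight detection property can be sketched in a few lines. The function names `spc_encode` and `spc_check` are ours, for illustration only.

```python
# Sketch of a (k+1, k) single-parity-check code: the parity digit is the
# modulo-2 sum of the message digits, so every code-word has even weight
# and every odd-weight error pattern is detected.
def spc_encode(u):
    p = sum(u) % 2
    return [p] + list(u)

def spc_check(r):
    # Zero syndrome <=> even overall weight <=> r is a code-word.
    return sum(r) % 2 == 0

v = spc_encode([1, 0, 1, 1])        # odd-weight message, so p = 1
assert sum(v) % 2 == 0 and spc_check(v)

r = v[:]; r[2] ^= 1                 # one error: odd weight, detected
assert not spc_check(r)

r[4] ^= 1                           # a second error restores even weight,
assert spc_check(r)                 # so a double error goes undetected
```

The last assertion illustrates the limit of the code: only error patterns of odd weight are detectable, exactly as stated above.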
A repetition code of length n is an (n, 1) linear block code that consists of only two code-words, the all-zero code-word (0 0 … 0) and the all-one code-word (1 1 … 1). This code is obtained by simply repeating a single message bit n times. The generator matrix of the code is
G = [1 1 ⋯ 1].
From (3.44) through (3.46), we readily see that the (n, 1) repetition code and the (n, n - 1) SPC code are dual codes to each other.
SPC and repetition codes are often used as component codes for constructing long, powerful codes as
will be seen in later chapters.
A linear block code C that is equal to its dual code Cd is called a self-dual code. For a self-dual code, the code length n must be even, and the dimension k of the code must be equal to n/2. Therefore, its rate R is equal to 1/2. Let G be a generator matrix of a self-dual code C. Then, G is also a generator matrix of its dual code Cd and hence is a parity-check matrix of C. Consequently,
G · G^T = 0.     (3.47)
Suppose G is in systematic form, G = [P I_(n/2)]. From (3.47), we can easily see that
P · P^T = I_(n/2).     (3.48)
Conversely, if a rate-1/2 (n, n/2) linear block code C satisfies the condition of (3.47) [or (3.48)], then it is a self-dual code (the proof is left as a problem).
EXAMPLE 3.11
Consider the (8, 4) linear block code generated by the matrix
The code has rate R = 1/2. It is easy to check that G · G^T = 0. Therefore, it is a self-dual code.
There are many good self-dual codes, but the best-known self-dual code is the (24, 12) Golay code, which will be discussed in Chapter 4.
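The self-duality test (3.47) is easy to run mechanically. The (8, 4) generator below is an assumed example (a systematic generator of the extended Hamming code, which is known to be self-dual); it is not necessarily the matrix of Example 3.11, which is not reproduced in this excerpt.

```python
# Check the self-duality condition (3.47), G * G^T = 0 over GF(2),
# on an assumed (8, 4) generator matrix.
G = [(1, 0, 0, 0, 0, 1, 1, 1),
     (0, 1, 0, 0, 1, 0, 1, 1),
     (0, 0, 1, 0, 1, 1, 0, 1),
     (0, 0, 0, 1, 1, 1, 1, 0)]

# G * G^T modulo 2: entry (i, j) is the inner product of rows i and j.
GGt = [[sum(a * b for a, b in zip(ri, rj)) % 2 for rj in G] for ri in G]
assert all(x == 0 for row in GGt for x in row)   # G is self-orthogonal
# With k = n/2 = 4, self-orthogonality makes the code self-dual.
```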
```