EECS 270 Fall 2014, Lecture 20
State machine encoding and a start on error correction
Today we’ll touch on two topics:
• a look at optimizing the combinational logic needed for a state machine by selecting encodings rather than just randomly assigning arbitrary state bits to a state
• the basics of error correction
For the first, we’re going to consider this in the context of the other state machine optimizations we’ve looked at (state reduction and using Mealy machines). For the second, we will just get started today, but error detection, and even correction, is an important application of digital logic that shows up in a number of surprising places.
State Machine Optimizations: Encoding (6.3, pages 354-358)
When we’ve been encoding the states of an FSM, we have been using pretty much any “minimal” encoding. That is, to encode N states we’ve used ⌈lg(N)⌉ bits and arbitrarily selected which bits to assign to each state. Why would we care to do anything differently? Well, one can imagine that wisely selecting the state assignment might minimize the next-state logic or the output logic…
If we have those N states, there are N! ways of assigning codes to states (more if N is not a power of two). One would hope that some of those assignments might simplify the logic. Our book gives an example (figure 6.40) where one assignment reduces the next-state logic by about a factor of 2 (15 gate inputs to 8 gate inputs). We aren’t going to spend any serious time considering this: there is no nice algorithm for finding the optimal assignment, and doing this would be more of a programming problem than anything (probably a good short 281 assignment, really). But we are going to look at using non-minimal encodings to achieve fairly simple logic and, perhaps as importantly, a very fast technique for creating circuits that implement a state machine. In particular, we’ll look at one-hot encodings (pages 355-358 of our text).
One-hot state encoding
Let’s look at an example from the exam.
Design a state machine which implements the following state transition diagram. Assign state bits S[1:0] as 01 for state X, 00 for state Y, and 11 for state Z. You are to assume that you will never reach the state S[1:0]=10, so you don’t care what happens in that case. You must show your work to get any credit! You only need to compute the next-state and output logic; you don’t need to draw the gates or flip-flops! Place your answers where shown; all answers must be in minimal sum-of-products form.
[State transition diagram: three states X, Y, and Z; input B; output W. W is “1” in states X and Y (and is “0” in Z). Edges are labeled B and !B.]
Three K-maps follow: one each for NS1, NS0, and W. Rows are indexed by B and columns by S1S0 in Gray-code order (00, 01, 11, 10), so the minterms fall at 0, 2, 6, 4 across the B=0 row and at 1, 3, 7, 5 across the B=1 row. The S1S0=10 column is a don’t care throughout. The NS1 map is filled in; NS0 and W are left to be filled in:

NS1:
B \ S1S0    00   01   11   10
   0         0    0    0    d
   1         1    1    0    d

NS0:
B \ S1S0    00   01   11   10
   0        __   __   __   __
   1        __   __   __   __

W:
B \ S1S0    00   01   11   10
   0        __   __   __   __
   1        __   __   __   __
NS1=____________
NS0= ____________
W= !S1
What if we instead used X=001, Y=010, and Z=100?
NS1= _____________
NS0= _____________
W= ______________
Not a lot simpler, but a lot easier. And in general things are also going to get simpler (see the examples
in the text where the results are quite pronounced).
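To see why one-hot is “easier”: each next-state bit can be read straight off the diagram by ORing together one AND term per incoming edge, with no K-maps needed. Here is a minimal Python sketch of that pattern, using a hypothetical set of transitions (the exam machine’s actual transitions are in the figure above):

    def next_state_one_hot(X, Y, Z, B):
        """One-hot next-state logic for a hypothetical machine with
        transitions: X -B-> Z, Y -B-> Z, Z -B-> Y, and any state -!B-> X."""
        NX = (X and not B) or (Y and not B) or (Z and not B)  # edges into X
        NY = Z and B                                          # edge into Y
        NZ = (X and B) or (Y and B)                           # edges into Z
        return NX, NY, NZ

Each AND term is just (source state bit) AND (edge condition), which is why the logic falls out of the diagram so quickly.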
Quick discussion
We are trading off additional bits of state for simpler next state (and a bit simpler output logic usually).
Is this worth it? After all, flip-flops, which store our state, are much more complex than a simple gate.
In fact they are, in general, made up of more than a dozen gates. Our generic answer is “sometimes”.
But, as we will later see, there are times when gates are more expensive than a bit of state. One
important case is in an FPGA like you have in lab. There we use memory (state) to create gates rather
than the other way around (we’ll cover that in the next couple of weeks). So the trade-off is pretty easy.
And in fact our FPGA tools use a variation of one-hot encoding (all-but-one hot encoding).
Random example on your own:
Basic Error Detection
Digital logic has applications in any number of places. One of those places is in communication
technology. Most communication devices (your cell phones for example) tend to have communication
errors. In the case of a cell phone, your voice is converted from an analog signal to a digital one. A
primary reason for doing this is so that small errors that occur (due to interference) don’t actually
change what’s being received. This is true for two reasons:
• A digital “bit” that gets a bit of noise in it (say it is 4.8V rather than 5V) is easy to correct back to the correct binary value.
• Even if the noise is so large that a “1” becomes a “0” or a “0” becomes a “1”, we can add some extra bits to the message and either detect or correct the error.
Terms (a bit formally)
Let x and y be bit strings of the same length. Define h(x,y) to be the number of bits of x that need to be flipped in order to turn x into y. For example, if x=1010 and y=1100 then h(x,y)=2. Notice that h(x,y)=h(y,x). The function h() is the Hamming distance between two bit strings.
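This definition translates directly into Python; a minimal sketch (the function name is ours):

    def h(x, y):
        """Hamming distance between two equal-length bit strings."""
        assert len(x) == len(y)
        return sum(a != b for a, b in zip(x, y))

    print(h("1010", "1100"))  # prints 2, matching the example above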
Now consider the function H(X) where X is a set of bit strings {x1, x2, … xn}, each of the same length. Let H(X) be the minimum Hamming distance between any two elements of the set X. (That is, for any a and b in the range 1 to n where a ≠ b, h(xa,xb) ≥ H(X).) For example, if X={1111, 1000, 0000}, H(X)=1 because 1000 and 0000 have a Hamming distance of 1, and no two elements of that set have a Hamming distance of 0.
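Continuing the sketch, H(X) is just the minimum of h over all pairs of elements (itertools.combinations generates the pairs):

    from itertools import combinations

    def H(X):
        """Minimum Hamming distance over all pairs in the set X."""
        return min(h(a, b) for a, b in combinations(X, 2))

    print(H(["1111", "1000", "0000"]))  # prints 1, matching the example above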
In the context of error correction and detection, the notion of the Hamming distance of a set is useful
because it tells us something about the redundancy of the information in the set. That is, it tells us how
many bits could flip before we might confuse two bit strings in that set. For example, say we are
communicating one of Y possible messages from one location to another. If the set of Y possible
messages used has a Hamming distance of 2, then a single bit-flip during transmission cannot confuse
any two messages. That is, we can detect a single bit-flip. Of course two bit-flips during transmission
could still cause one message to look like the other.
Example problems:
1. Consider the set X={10000, 00110, 11111}. What is H(X)?
Ans., h(10000,00110)=3, h(10000,11111)=4, h(00110,11111)=3. As the smallest of these is 3,
H(X)=3.
2. Consider the set X={1001, 1111, 1010}. Say we were communicating between two locations,
using only encodings available in X. What is the minimum number of bit flips during
transmission that could cause a given message in that set to appear to be a different
message in that set?
Ans., H(X)=2. So two bit-flips are needed to cause one legal message to be mistaken for another.
Parity
Given a set X of all possible bit-strings of a certain length, that set has a Hamming distance of 1 as any
given bit-flip will cause one member of that set to appear to be a different member of that set. So how
do we generate a set that has Z elements and some desired Hamming distance? In general, this is a hard
problem if we want the length of the bit strings used to be minimal. However, for a Hamming distance
of 2, this is fairly simple. If we have a set of Z bit strings with a Hamming distance of 1, we can simply
add a single bit to the end of each bit-string. Add that last bit so that the number of 1’s in the bit-string
is even. Call this set of extended bit-strings Y. Now H(Y)=2. How do we know that? Well, a single bit-flip
will either convert a 0 to a 1 or a 1 to a 0. In either case the number of 1’s in the bit-string will no longer
be even. As we know that every legal encoding has an even number of 1’s, the bit-string with a single bit
flip cannot be an element in the set Y. Thus, 1 bit flip cannot make any element of Y look like any other
element of Y.
This last bit we are adding is called a parity bit and the exact scheme we used is called even parity
because we made the number of 1s in the string even. Odd parity also works, but you can’t mix and
match them. Within a given set you have to consistently use either even or odd parity. Those of you
who have played with modems may have noticed that you have to set either even or odd parity. That is because modems use parity to protect against errors that occur over the phone line.
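A quick Python sketch of the even-parity scheme (function names are ours):

    def add_even_parity(bits):
        """Append one bit so the total number of 1's is even."""
        return bits + ("1" if bits.count("1") % 2 else "0")

    def check_even_parity(bits):
        """True if a received string still has an even number of 1's."""
        return bits.count("1") % 2 == 0

    print(add_even_parity("11"), add_even_parity("1110"), add_even_parity("10101"))
    # prints: 110 11101 101011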
Example problems:
3. Add an even parity bit to the following bit-strings 11, 1110, 10101.
Ans., 110, 11101, 101011. Notice that in each case the number of 1’s is even.
4. Using even parity, come up with a set of four encodings that has a Hamming distance of 2.
Ans., {000, 011, 101, 110}.
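We can sanity-check that answer with the sketches above (this assumes the h, H, and add_even_parity functions defined earlier):

    codes = [add_even_parity(f"{i:02b}") for i in range(4)]
    print(codes)     # ['000', '011', '101', '110']
    print(H(codes))  # 2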
Error correction
Now the above scheme gave us a method of detecting a one-bit flip. This is useful because we can, at
the least, not use bad data, and we could perhaps even ask for the data to be resent. But what if we
want to fix the data without needing to ask for a resend? It turns out we can do that if the set of
messages we are using has a Hamming distance of 3 or more.
Why is that?
Let’s walk through a specific scheme with 4 data bits and 3 parity bits.
A, B, C, and D are data bits, while X, Y, and Z are parity bits.
Let P(x1..xn) be the function that generates even-ones parity over the inputs x1 to xn.
• Let X=P(A,B,C)
• Let Y=P(A,B,D)
• Let Z=P(A,C,D)
From here, we’re going to assume only one bit will go bad in a given message. What happens if more than one bit gets flipped? Our message will be received incorrectly. But let’s ignore that case for now.
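A minimal Python sketch of that encoder, treating each bit as a 0/1 integer (P over three bits is just their XOR):

    def encode(A, B, C, D):
        """Append parity bits per the definitions above: 4 data bits in,
        7-bit message [A, B, C, D, X, Y, Z] out."""
        X = A ^ B ^ C  # P(A,B,C)
        Y = A ^ B ^ D  # P(A,B,D)
        Z = A ^ C ^ D  # P(A,C,D)
        return [A, B, C, D, X, Y, Z]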
Which parity bits will be incorrect if the following bit were flipped in transmission?
A: ________
B: ________
C: ________
D: ________
X: ________
Y: ________
Z: ________
How does that help us?
Other thoughts
1. How can we use that set to detect 2-bit errors?
2. How do we build the encoder (takes in 4 data bits, generates a 7-bit message)?
3. How do we build the decoder (takes in a 7-bit message, generates corrected 4-bit data)?
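For question 3, here is one possible decoder as a hedged Python sketch: recompute each parity from the received data bits, compare against the received parity bits, and use the pattern of mismatches (the table you filled in above) to decide which bit, if any, to flip. The syndrome-to-bit mapping below is our own working of that table, and it assumes at most one bit was flipped:

    def decode(m):
        """m = [A, B, C, D, X, Y, Z] as received; returns corrected [A, B, C, D]."""
        A, B, C, D, X, Y, Z = m
        # Which parity checks fail?
        x = X != (A ^ B ^ C)
        y = Y != (A ^ B ^ D)
        z = Z != (A ^ C ^ D)
        # Each single-bit error produces a unique pattern of failed checks.
        if   (x, y, z) == (True, True, True):  A ^= 1  # A appears in all three checks
        elif (x, y, z) == (True, True, False): B ^= 1
        elif (x, y, z) == (True, False, True): C ^= 1
        elif (x, y, z) == (False, True, True): D ^= 1
        # Exactly one failed check means a parity bit itself flipped; data is fine.
        return [A, B, C, D]

    msg = encode(1, 0, 1, 1)
    msg[2] ^= 1          # flip data bit C in transit
    print(decode(msg))   # prints [1, 0, 1, 1]: the error was corrected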