
VLSI Training Handout-Digital 2

Introduction to VLSI and Design using Verilog
Hand-out
Module: Digital Design –Part II
Digital Design Part II
2|Page
Signed and Unsigned Numbers
Quantity vs Numbers
An important distinction must be made between "quantities" and "numbers". A quantity is
simply some amount of "stuff"; five apples, three pounds, and one automobile are all
quantities of different things. A quantity can be represented by any number of different
representations. For example, tick-marks on a piece of paper, beads on a string, or stones in a
pocket can all represent some quantity of something. One of the most familiar representations
is the base-10 (or "decimal") number system, which consists of 10 digits, from 0 to 9. When
more than 9 objects need to be counted, we make a new column with a 1 in it (which
represents a group of 10), and we continue counting from there.
Computers, however, cannot count in decimal. Computer hardware uses a system where
values are represented internally as a series of voltage differences. For example, in most
computers, a +5V charge is represented as a "1" digit, and a 0V value is represented as a "0"
digit. There are no other digits possible! Thus, computers must use a numbering system that has
only two digits: the "binary", or "base-2", number system.
Binary Numbers
Understanding the binary number system is difficult for many students at first. It may help to
start with a decimal number, since that is more familiar. It is possible to write a number like
1234 in "expanded notation," so that the value of each place is shown:

1234 = (1 × 10^3) + (2 × 10^2) + (3 × 10^1) + (4 × 10^0)

Notice that each digit is multiplied by successive powers of 10, since this is a decimal, or base-10,
system. The "ones" digit ("4" in the example) is multiplied by 10^0, or "1". Each digit to the
left of the "ones" digit is multiplied by the next higher power of 10, and that is added to the
preceding value.
Now, do the same with a binary number; but since this is a "base 2" number, replace powers
of 10 with powers of 2:

1011₂ = (1 × 2^3) + (0 × 2^2) + (1 × 2^1) + (1 × 2^0) = 8 + 0 + 2 + 1 = 11₁₀

The subscripts indicate the base. Note that in the above equations:
1011₂ = 11₁₀
Binary numbers are the same as their equivalent decimal numbers; they are just a different way
to represent a given quantity. To be very simplistic, it does not really matter if you have 1011₂
or 11₁₀ apples; you can still make a pie.
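The expanded-notation idea above can be sketched in a few lines of Python (the helper name `to_decimal` is just illustrative):

```python
# Evaluate a digit string in any base, mirroring expanded notation:
# 1234 = 1*10^3 + 2*10^2 + 3*10^1 + 4*10^0.
def to_decimal(digits: str, base: int) -> int:
    value = 0
    for d in digits:
        value = value * base + int(d)  # shift one place left, add the next digit
    return value

print(to_decimal("1234", 10))  # 1234
print(to_decimal("1011", 2))   # 11 -- the same quantity, written in base 2
```

Python's built-in `int("1011", 2)` performs the same conversion.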
Bits
The term "bits" is short for the phrase Binary Digits. Each bit is a single binary value: 1 or 0.
Computers generally represent a 1 as a positive voltage (5 volts or 3.3 volts are common
values), and a 0 as 0 volts.
Most Significant Bit and Least Significant Bit
In the decimal number 48723, the "4" digit represents the largest power of 10 (or 10^4), and the
"3" digit represents the smallest power of 10 (10^0). Therefore, in this number, 4 is the most
significant digit and 3 is the least significant digit. Consider a situation where a caterer needs
to prepare 156 meals for a wedding. If the caterer makes an error in the least significant digit
and accidentally makes 157 meals, it is not a big problem. However, if the caterer makes a
mistake on the most significant digit, 1, and prepares 256 meals, that will be a big problem!
Now, consider a binary number: 101011. The Most Significant Bit (MSB) is the left-most bit,
because it represents the greatest power of 2 (2^5). The Least Significant Bit (LSB) is the
right-most bit and represents the least power of 2 (2^0).
Notice that MSB and LSB are not the same as the notion of "significant figures" that is used in
other sciences. The decimal number 123000 has only 3 significant figures, but the most
significant digit is 1 (the left-most digit), and the least significant digit is 0 (the right-most
digit).
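A small Python sketch makes the MSB/LSB definitions concrete (`msb` and `lsb` are illustrative helper names):

```python
# For an n-bit value, the MSB carries weight 2^(n-1) and the LSB weight 2^0.
def msb(value: int, width: int) -> int:
    return (value >> (width - 1)) & 1  # shift the top bit down, mask it off

def lsb(value: int) -> int:
    return value & 1                   # the bottom bit is just value mod 2

# 101011: MSB = 1 (weight 2^5), LSB = 1 (weight 2^0)
print(msb(0b101011, 6), lsb(0b101011))  # 1 1
```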
Standard Sizes
Nibble
A Nibble is 4 bits long. Nibbles can hold values from 0 to 15 (in decimal).
Byte
A Byte is 8 bits long. Bytes can hold values from 0 to 255 (in decimal).
Word
A Word is 16 bits, or 2 bytes, long. Words can hold values from 0 to 65535 (in
decimal). There is occasionally some confusion between this definition and that of a
"machine word". See Machine Word below.
Double-word
A Double-word is 2 words, or 4 bytes, long. These are also known simply as
"DWords". DWords are also 32 bits long. 32-bit computers, therefore, manipulate data
that is the size of DWords.
Quad-word
A Quad-word is 2 DWords, 4 words, or 8 bytes long. They are known
simply as "QWords". QWords are 64 bits long, and are therefore the default data size in
64-bit computers.
Machine Word
A machine word is the length of the standard data size of a given machine. For
instance, a 32-bit computer has a 32-bit machine word. Likewise, 64-bit computers
have a 64-bit machine word. Occasionally the term "machine word" is shortened to
simply "word", leaving some ambiguity as to whether we are talking about a regular
"word" or a machine word.
Summary
Name          Length/bits
Bit           1
Nibble        4
Byte          8
Word          16
Double-word   32
Quad-word     64
Machine Word  depends on the machine
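The unsigned range of each size follows directly from its bit count: an n-bit field holds values from 0 to 2^n - 1. A quick Python check:

```python
# Maximum unsigned value per standard size: 2**bits - 1.
sizes = {"Nibble": 4, "Byte": 8, "Word": 16, "Double-word": 32, "Quad-word": 64}
for name, bits in sizes.items():
    print(f"{name}: 0 .. {2**bits - 1}")
# Nibble: 0 .. 15, Byte: 0 .. 255, Word: 0 .. 65535, ...
```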
Negative Numbers
It would seem logical that to create a negative number in binary, the reader would only need
to prefix the number with a "–" sign. For instance, the binary number 1101 can become
negative simply by writing it as "–1101". This seems all fine and dandy until you realize that
computers and digital circuits do not understand the minus sign. Digital circuits only have bits, and
so bits must be used to distinguish between positive and negative numbers. With this in
mind, there are a variety of schemes that are used to make binary numbers negative or
positive: Sign and Magnitude, One's Complement, and Two's Complement.
Sign and Magnitude
Under a Sign and Magnitude scheme, the MSB of a given binary number is used as a "flag" to
determine if the number is positive or negative. If the MSB = 0, the number is positive, and
if the MSB = 1, the number is negative. This scheme seems awfully simple, except for one
simple fact: arithmetic of numbers under this scheme is very hard. Let's say we have 2
nibbles: 1001 and 0111. Under sign and magnitude, we can translate them to read -001 and
+111. In decimal, then, these are the numbers –1 and +7.
When we add them together, the sum of –1 + 7 = 6 should be the value that we get.
However:

  001
+ 111
-----
  000

And that isn't right. What we need is a decision-making construct to determine if the MSB
is set or not: if it is set, we subtract, and if it is not set, we add. This is a big pain, and
therefore sign and magnitude is not used.
Another big flaw is that this scheme gives two representations of zero, +0 and -0, even
though they are the same quantity.
One's Complement
Let's now examine a scheme where we define a negative number as being the logical inverse of
a positive number. We will use the "!" operator to express a logical inversion on multiple
bits. For instance, !001100 = 110011. 110011 is binary for 51, and 001100 is binary for 12, but
in this scheme we are saying that 110011 = –001100, or 110011 (binary) = –12 (decimal). Let's
perform the addition again:

  001100   (12)
+ 110011   (-12)
--------
  111111

We can see that if we invert 000000₂ we get the value 111111₂, and therefore 111111₂ is
negative zero! What exactly is negative zero? It turns out that in this scheme, positive zero
and negative zero represent the same quantity.
However, one's complement notation suffers because it has two representations for zero: all 0
bits, or all 1 bits. As well as being clumsy, this also causes problems when we want to
check quickly whether a number is zero. This is an extremely common operation, and we
want it to be easy, so we create a new representation: two's complement.
Two's Complement
Two's complement is a number representation that is very similar to one's complement. We
find the negative of a number X using the following formula:
-X = !X + 1
Let's do an example. If we have the binary number 11001 (which is 25 in decimal), and we
want to find the representation for -25 in two's complement, we follow two steps:
1. Invert the bits:
11001 → 00110
2. Add 1:
00110 + 1 = 00111
Therefore –11001 = 00111. Let's do a little addition:

  11001
+ 00111
-------
  00000

Now, there is a carry from adding the two MSBs together, but this is digital logic, so we
discard the carry. It is important to remember that digital circuits have capacity for a certain
number of bits, and any extra bits are discarded.
Most modern computers use two's complement.
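The rule -X = !X + 1 is easy to sketch in Python; the mask models the fixed bit capacity of a circuit, so carries out of the top bit are discarded automatically:

```python
# Two's complement of x in a fixed width: invert, add 1, drop overflow bits.
def twos_complement(x: int, width: int) -> int:
    mask = (1 << width) - 1
    return (~x + 1) & mask

neg25 = twos_complement(0b11001, 5)  # the -25 example from the text
print(format(neg25, "05b"))          # 00111
print((0b11001 + neg25) & 0b11111)   # 0 -- adding 25 and -25 wraps to zero
```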
Below is a table showing the representation held by these systems for all four-bit
combinations:

Bits   Sign-Magnitude  One's Complement  Two's Complement
0000        +0               +0                 0
0001        +1               +1                +1
0010        +2               +2                +2
0011        +3               +3                +3
0100        +4               +4                +4
0101        +5               +5                +5
0110        +6               +6                +6
0111        +7               +7                +7
1000        -0               -7                -8
1001        -1               -6                -7
1010        -2               -5                -6
1011        -3               -4                -5
1100        -4               -3                -4
1101        -5               -2                -3
1110        -6               -1                -2
1111        -7               -0                -1
Signed vs Unsigned
One important fact to remember is that computers are dumb. A computer doesn't know
whether a given set of bits represents a signed number or an unsigned number (or, for
that matter, any number of other data objects). It is therefore important for the programmer
(or the programmer's trusty compiler) to keep track of this for us.
Consider the bit pattern 100110:

• Unsigned: 38 (decimal)
• Sign and Magnitude: -6
• One's Complement: -25
• Two's Complement: -26
See how the representation we use changes the value of the number! It is important to
understand that bits are bits, and the computer doesn't know what the bits represent. It is up
to the circuit designer and the programmer to keep track of what the numbers mean.
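A sketch of how the same bits decode under each scheme (the `decode` helper is illustrative, not a standard library function):

```python
# Interpret a bit string under the four schemes discussed above.
def decode(bits: str, scheme: str) -> int:
    n, width = int(bits, 2), len(bits)
    if scheme == "unsigned":
        return n
    if scheme == "sign-magnitude":
        magnitude = n & ((1 << (width - 1)) - 1)   # drop the sign bit
        return -magnitude if bits[0] == "1" else magnitude
    if scheme == "ones-complement":
        return n if bits[0] == "0" else -((~n) & ((1 << width) - 1))
    if scheme == "twos-complement":
        return n - (1 << width) if bits[0] == "1" else n
    raise ValueError(scheme)

for s in ("unsigned", "sign-magnitude", "ones-complement", "twos-complement"):
    print(s, decode("100110", s))   # 38, -6, -25, -26
```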
Coding Techniques
When numbers, letters, or words are represented by a special group of symbols, we say that
they are being encoded, and the group of symbols is called a code. Probably one of the most
familiar codes is the Morse code, where a series of dots and dashes represents letters of the
alphabet.
We have seen that any decimal number can be represented by an equivalent binary number.
The group of 0s and 1s in the binary number can be thought of as a code representing the
decimal number. When a decimal number is represented by its equivalent binary number, we
call it straight binary coding.
Digital systems all use some form of binary numbers for their internal operation, but the
external world is decimal in nature. This means that conversions between the decimal and
binary systems are performed often. We have seen that the conversions between decimal
and binary can become long and complicated for large numbers. For this reason, a means of
encoding decimal numbers that combines some features of both the decimal and the binary
systems is used in certain situations.
We will see the most popular coding techniques, their advantages and disadvantages, and
why they are popular.
BCD
If each digit of a decimal number is represented by its binary equivalent, the result is a code
called binary-coded decimal (hereafter abbreviated BCD). Since a decimal digit can be as large
as 9, four bits are required to code each digit (the binary code for 9 is 1001).
To illustrate the BCD code, take a decimal number such as 874. Each digit is changed to its
binary equivalent as follows:

8 → 1000, 7 → 0111, 4 → 0100, so 874 (decimal) = 1000 0111 0100 (BCD)
Once again, each decimal digit is changed to its straight binary equivalent. Note that four bits
are always used for each digit. The BCD code, then, represents each digit of the decimal
number by a four-bit binary number. Clearly, only the four-bit binary numbers from 0000
through 1001 are used. The BCD code does not use the numbers 1010, 1011, 1100, 1101, 1110,
and 1111. In other words, only 10 of the 16 possible four-bit binary code groups are used. If any
of the "forbidden" four-bit numbers ever occurs in a machine using the BCD code, it is usually
an indication that an error has occurred.
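A per-digit BCD encoder/decoder is a short sketch in Python; the decoder rejects the six forbidden groups:

```python
# Encode each decimal digit in four bits; reject forbidden groups on decode.
def to_bcd(number: int) -> str:
    return " ".join(format(int(d), "04b") for d in str(number))

def from_bcd(code: str) -> int:
    digits = []
    for group in code.split():
        value = int(group, 2)
        if value > 9:                      # 1010..1111 never occur in BCD
            raise ValueError("forbidden BCD group: " + group)
        digits.append(str(value))
    return int("".join(digits))

print(to_bcd(874))                  # 1000 0111 0100
print(from_bcd("1000 0111 0100"))   # 874
```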
Comparison of BCD and Binary
It is important to realize that BCD is not another number system like binary, octal, decimal,
and hexadecimal. It is, in fact, the decimal system with each digit encoded in its binary
equivalent. It is also important to understand that a BCD number is not the same as a straight
binary number. A straight binary code takes the complete decimal number and represents it in
binary; the BCD code converts each decimal digit to binary individually.
To illustrate, take the number 137 and compare its straight binary and BCD codes:

137 (decimal) = 10001001 (binary) = 0001 0011 0111 (BCD)

The BCD code requires 12 bits while the straight binary code requires only eight bits to
represent 137. BCD requires more bits than straight binary to represent decimal numbers of
more than one digit. This is because BCD does not use all possible four-bit groups, as pointed
out earlier, and is therefore somewhat inefficient.
The main advantage of the BCD code is the relative ease of converting to and from decimal.
Only the four-bit code groups for the decimal digits 0 through 9 need to be remembered.
This ease of conversion is especially important from a hardware standpoint because in a
digital system it is the logic circuits that perform the conversions to and from decimal.
The table below gives the representation of the decimal numbers 1 through 15 in the binary,
octal, and hex number systems, and in BCD code. Examine it carefully and make sure you
understand how it was obtained. Especially note how the BCD representation always uses four
bits for each decimal digit.

Decimal  Binary  Octal  Hex  BCD
1        0001    1      1    0001
2        0010    2      2    0010
3        0011    3      3    0011
4        0100    4      4    0100
5        0101    5      5    0101
6        0110    6      6    0110
7        0111    7      7    0111
8        1000    10     8    1000
9        1001    11     9    1001
10       1010    12     A    0001 0000
11       1011    13     B    0001 0001
12       1100    14     C    0001 0010
13       1101    15     D    0001 0011
14       1110    16     E    0001 0100
15       1111    17     F    0001 0101
Advantages
• Many non-integral values, such as decimal 0.2, have an infinite place-value representation in
binary (.001100110011...) but have a finite place-value representation in binary-coded decimal
(0.0010). Consequently, a system based on binary-coded decimal representations of decimal
fractions avoids errors representing and calculating such values.
• Scaling by a factor of 10 (or a power of 10) is simple; this is useful when a decimal scaling
factor is needed to represent a non-integer quantity (e.g., in financial calculations).
• Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal do not
require rounding.
• Alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift.
• Conversion to a character form or for display (e.g., to a text-based format such as XML, or to
drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in
linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans
digits, and for large numbers no linear-time conversion algorithm is known (see Binary numeral
system).
Disadvantages
• Some operations are more complex to implement. Adders require extra logic to cause them to
wrap and generate a carry early. 15–20% more circuitry is needed for BCD addition compared to
pure binary. Multiplication requires algorithms that are somewhat more complex than
shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, is
required per digit or group of digits).
• Standard BCD requires four bits per digit, roughly 20% more space than a binary encoding.
When packed so that three digits are encoded in ten bits, the storage overhead is reduced to
about 0.34%, at the expense of an encoding that is unaligned with the 8-bit byte boundaries
common on existing hardware, resulting in slower implementations on these systems.
• Practical existing implementations of BCD are typically slower than operations on binary
representations, especially on embedded systems, due to limited processor support for native
BCD operations.
Gray code
The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system
where two successive values differ in only one bit.
The reflected binary code was originally designed to prevent spurious output from
electromechanical switches. Today, Gray codes are widely used to facilitate error correction in
digital communications such as digital terrestrial television and some cable TV systems.
Reflected binary codes were applied to mathematical puzzles before they became known to
engineers. The French engineer Émile Baudot used Gray codes in telegraphy in 1878. He received
the French Legion of Honor medal for his work. The Gray code is sometimes attributed,
incorrectly, to Elisha Gray (in Principles of Pulse Code Modulation, K. W. Cattermole, for
example).
Frank Gray, who became famous for inventing the signaling method that came to be used for
compatible color television, invented a method to convert analog signals to reflected binary
code groups using vacuum tube-based apparatus. The method and apparatus were patented in
1953, and the name of Gray stuck to the codes. The "PCM tube" apparatus that Gray patented
was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall,
who credited Gray for the idea of the reflected binary code.
Many devices indicate position by closing and opening switches. If that device uses natural
binary codes, these two positions would be right next to each other:
...
011
100
...
The problem with natural binary codes is that, with real (mechanical) switches, it is very
unlikely that the switches will change states exactly in synchrony. In the transition between the
two states shown above, all three switches change state. In the brief period while all are
changing, the switches will read some spurious position. Even without key bounce, the transition
might look like 011 → 001 → 101 → 100. When the switches appear to be in position 001, the
observer cannot tell if that is the "real" position 001, or a transitional state between two other
positions. If the output feeds into a sequential system (possibly via combinational logic), then the
sequential system may store a false value.
The reflected binary code solves this problem by changing only one switch at a time, so there
is never any ambiguity of position:
Dec  Gray  Binary
0    000   000
1    001   001
2    011   010
3    010   011
4    110   100
5    111   101
6    101   110
7    100   111
Notice that state 7 can roll over to state 0 with only one switch change. This is called the
"cyclic" property of a Gray code. A good way to remember Gray coding is to be aware
that the least significant bit follows a repetitive pattern of 2 (11, 00, 11, and so on), and the
second digit follows a pattern of fours.
More formally, a Gray code is a code assigning to each of a contiguous set of integers, or to
each member of a circular list, a word of symbols such that each two adjacent code words
differ by one symbol. These codes are also known as single-distance codes, reflecting the
Hamming distance of 1 between adjacent codes. There can be more than one Gray code for a
given word length, but the term was first applied to a particular binary code for the
non-negative integers, the binary-reflected Gray code, or BRGC, the three-bit version of which
is shown above.
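The binary-reflected Gray code has a particularly compact formula: g = n XOR (n >> 1). A short Python check of the table and the cyclic single-distance property:

```python
# Convert binary to binary-reflected Gray code and verify the
# single-distance (cyclic) property for the 3-bit code.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

codes = [to_gray(n) for n in range(8)]
print([format(g, "03b") for g in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']

# Each adjacent pair, including the wrap from 7 back to 0, differs in one bit.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1
```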
PARITY METHOD FOR ERROR DETECTION
The movement of binary data and codes from one location to another is the most frequent
operation performed in digital systems. Here are just a few examples:
• The transmission of digitized voice over a microwave link
• The storage of data in, and retrieval of data from, external memory devices such as
magnetic tape and disk
• The transmission of digital data from a computer to a remote computer over telephone
lines (i.e., using a modem). This is one of the major ways of sending and receiving
information on the Internet.
Whenever information is transmitted from one device (the transmitter) to another device
(the receiver), there is a possibility that errors can occur such that the receiver does not
receive the identical information that was sent by the transmitter.
The major cause of any transmission errors is electrical noise, which consists of spurious
fluctuations in voltage or current that are present in all electronic systems to varying degrees.
The transmitter sends a relatively noise-free serial digital signal over a signal line to a receiver.
However, by the time the signal reaches the receiver, it contains a certain degree of noise
superimposed on the original signal. Occasionally, the noise is large enough in amplitude
that it will alter the logic level of the signal, as it does at point x. When this occurs, the
receiver may incorrectly interpret that bit as a logic 1, which is not what the transmitter has
sent.
Most modern digital equipment is designed to be relatively error-free, and the probability of
errors such as shown in Figure 2-2 is very low. However, we must realize that digital systems
often transmit thousands, even millions, of bits per second, so that even a very low rate of
occurrence of errors can produce an occasional error that might prove to be bothersome, if
not disastrous. For this reason, many digital systems employ some method for detection (and
sometimes correction) of errors. One of the simplest and most widely used schemes for error
detection is the parity method.
Parity Bit
A parity bit is an extra bit that is attached to a code group that is being transferred from
one location to another. The parity bit is made either 0 or 1, depending on the number of 1s
that are contained in the code group. Two different methods are used.
In the even-parity method, the value of the parity bit is chosen so that the total number of 1s
in the code group (including the parity bit) is an even number. For example, suppose that the
group is 1000011. This is the ASCII character "C". The code group has three 1s. Therefore,
we will add a parity bit of 1 to make the total number of 1s an even number. The new code
group, including the parity bit, thus becomes 11000011.*
* The parity bit can be placed at either end of the code group but is usually placed to the left
of the MSB.
If the code group contains an even number of 1s to begin with, the parity bit is given a value
of 0. For example, if the code group were 1000001 (the ASCII code for "A"), the assigned
parity bit would be 0, so that the new code, including the parity bit, would be 01000001.
The odd-parity method is used in exactly the same way except that the parity bit is chosen so
the total number of 1s (including the parity bit) is an odd number. For example, for the code
group 1000001, the assigned parity bit would be a 1. For the code group 1000011, the parity
bit would be a 0.
Regardless of whether even parity or odd parity is used, the parity bit becomes an actual part
of the code word. For example, adding a parity bit to the seven-bit ASCII code produces an
eight-bit code. Thus, the parity bit is treated just like any other bit in the code. The parity bit
is used to detect any single-bit errors that occur during the transmission of a code from
one location to another. For example, suppose that the character "A" is being transmitted and
odd parity is being used. The transmitted code would be 11000001.
When the receiver circuit receives this code, it will check that the code contains an odd
number of 1s (including the parity bit). If so, the receiver will assume that the code has been
correctly received. Now, suppose that because of some noise or malfunction the receiver
actually receives a code in which one of the bits has been altered. The receiver will find that
this code has an even number of 1s. This tells the receiver that there must be an error in the
code, since presumably the transmitter and receiver have agreed to use odd parity. There is
no way, however, that the receiver can tell which bit is in error, since it does not know what
the code is supposed to be.
It should be apparent that this parity method would not work if two bits were in error,
because two errors would not change the "oddness" or "evenness" of the number of 1s in the
code. In practice, the parity method is used only in situations where the probability of a single
error is very low and the probability of double errors is essentially zero.
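A sketch of even-parity generation and checking in Python, including why double-bit errors slip through:

```python
# Even parity: the parity bit makes the total count of 1s even.
def add_even_parity(bits: str) -> str:
    parity = str(bits.count("1") % 2)  # 1 if the group has an odd number of 1s
    return parity + bits               # parity bit placed to the left of the MSB

def parity_ok(word: str) -> bool:
    return word.count("1") % 2 == 0

word = add_even_parity("1000011")  # ASCII "C" has three 1s -> parity bit is 1
print(word)                         # 11000011
print(parity_ok(word))              # True: received intact
print(parity_ok("11000010"))        # False: one bit flipped, error detected
print(parity_ok("11000000"))        # True: two bits flipped, error missed!
```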
When the parity method is being used, the transmitter and the receiver must agree,
in advance, on whether odd or even parity is being used. There is no advantage of one
over the other, although even parity seems to be used more often. The transmitter must attach
an appropriate parity bit to each unit of information that it transmits. For example, if the
transmitter is sending ASCII-coded data, it will attach a parity bit to each seven-bit ASCII code
group. When the receiver examines the data that it has received from the transmitter, it checks
each code group to see that the total number of 1s (including the parity bit) is consistent
with the agreed-upon type of parity. This is often called checking the parity of the data. In
the event that it detects an error, the receiver may send a message back to the transmitter
asking it to retransmit the last set of data. The exact procedure that is followed when an error
is detected will depend on the particular system. We will see the parity generation and
checking circuits in the next sections.
Boolean Algebra
George Boole, in 1854, developed a system of mathematical logic, which we now call Boolean
algebra. Based on Boole's idea, Claude Shannon, in 1938, showed that circuits built with
binary switches can easily be described using Boolean algebra.
As pointed out in previous sections, digital (logic) circuits operate in the binary mode where
each input and output voltage is either a 0 or a 1; the 0 and 1 designations represent
predefined voltage ranges. This characteristic of logic circuits allows us to use Boolean algebra
as a tool for the analysis and design of digital systems.
Boolean algebra is a relatively simple mathematical tool that allows us to describe the
relationship between a logic circuit's output(s) and its inputs as an algebraic equation (a
Boolean expression). In this chapter we will study the most basic logic circuits, logic gates,
which are the fundamental building blocks from which all other logic circuits and digital systems
are constructed. A variable x is called a Boolean variable if x takes on values only in {0, 1}.
Usage
Boolean algebra is also a valuable tool for coming up with a logic circuit that will produce a
desired input/output relationship. Since Boolean algebra expresses a circuit's operation in the
form of an algebraic equation, it is ideal for inputting the operation of a logic circuit into a
computer that is running software that needs to know what the circuit looks like. The software
may be a circuit simplification routine that takes the input Boolean algebra equation,
simplifies it, and comes up with a simplified version of the original logic circuit. Another
application would be the software that is used to generate the fuse maps needed to program
a PLD (programmable logic device). The operator punches in the Boolean equations for the
desired circuit operation, and the software converts them to a fuse map.
Clearly, Boolean algebra is an invaluable tool in describing, analyzing, designing, and
implementing digital circuits. The student who expects to work in the digital field must work
hard at understanding and becoming comfortable with Boolean algebra (believe us, it's much,
much easier than conventional algebra). Do all of the examples, exercises, and problems, even
the ones your instructor doesn't assign. When those run out, make up your own. The time you
spend will be well worth it as you see your skills improve and your confidence grow.
Boolean variables are often used to represent the voltage level present on a wire or at the
input/output terminals of a circuit. For example, in a certain digital system the Boolean value
of 0 might be assigned to any voltage in the range from 0 to 0.8 V while the Boolean value of
1 might be assigned to any voltage in the range 2 to 5 V.*
* Voltages between 0.8 and 2 V are undefined (neither 0 nor 1) and
under normal circumstances should not occur.
Thus, Boolean 0 and 1 do not represent actual numbers but instead represent the state of a
voltage variable, or what is called its logic level. A voltage in a digital circuit is said to be at the
logic 0 level or the logic 1 level, depending on its actual numerical value. In digital logic, several
other terms are used synonymously with 0 and 1. Some of the more common ones are shown
in the table below. We will use the 0/1 and LOW/HIGH designations most of the time.

Logic 0        Logic 1
False          True
Off            On
Low            High
No             Yes
Open switch    Closed switch
Basic Logic Operators and Logic Expressions
Two binary switches can be connected together either in series or in parallel as shown in Figure
below.
If two switches are connected in series as in (a), then both switches have to be on in order for
the output F to be a 1. In other words, F = 1 if x = 1 AND y = 1. If either x or y is off, or both
are off, then F = 0.
Translating this into a logic expression, we get F = x AND y. Hence, two switches connected in
series give rise to the logical AND operator. In a Boolean function (which we will explain in
more detail later) the AND operator is either denoted with a dot (•) or no symbol at all. Thus we
can rewrite the above expression as F = x • y, or simply F = xy.
If we connect two switches in parallel as in (b), then only one switch needs to be on in order
for the output F to be a 1. In other words, F = 1 if either x = 1, or y = 1, or both x and y are
1. This means that F = 0 only if both x and y are 0. Translating this into a logic expression,
we get F = x OR y, and this gives rise to the logical OR operator.
In a Boolean function, the OR operator is denoted with a plus symbol (+). Thus we can
rewrite the above expression as F = x + y. In addition to the AND and OR operators, there is
another basic logic operator: the NOT operator, also known as the INVERTER. Whereas the
AND and OR operators have multiple inputs, the NOT operator has only one input and one
output. The NOT operator simply inverts its input, so a 0 input will produce a 1 output, and a
1 becomes a 0. In a Boolean function, the NOT operator is either denoted with an apostrophe
symbol ( ' ) or a bar on top, as in F = x' or F = x̄.
When several operators are used in the same expression, the precedence given to the
operators is, from highest to lowest: NOT, AND, and OR. The order of evaluation can be
changed by using parentheses. For example, the expression
F = xy + z'
means (x and y) or (not z), and the expression F = x(y + z)' means x and (not (y or z)).
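These precedence rules can be checked with Python booleans, where `not` plays NOT, `and` plays AND, and `or` plays OR (`F1` and `F2` are illustrative names):

```python
def F1(x, y, z):   # F = xy + z'   means (x AND y) OR (NOT z)
    return (x and y) or (not z)

def F2(x, y, z):   # F = x(y + z)' means x AND (NOT (y OR z))
    return x and not (y or z)

print(F1(True, False, False))  # True: the z' term dominates
print(F2(True, False, False))  # True: x = 1 and (y OR z) = 0
print(F2(True, True, False))   # False: (y OR z) = 1, so its inverse is 0
```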
Truth Tables
The operation of the AND, OR, and NOT logic operators can be formally described by using a
truth table. A truth table is a two-dimensional array where there is one column for each input
and one column for each output (a circuit may have more than one output). Since we are
dealing with binary values, each input can be either a 0 or a 1. We simply enumerate all
possible combinations of 0’s and 1’s for all the inputs.
Usually, we want to write these input values in the normal binary counting order. With two
inputs, there are 2^2 combinations giving us the four rows in the table. The values in the
output column are determined from applying the corresponding input values to the functional
operator. For the AND truth table, F = 1 only when x and y are both 1, otherwise, F = 0.
For the OR truth table, F = 1 when either x or y or both is a 1, otherwise F = 0. For the NOT
truth table, the output F is just the inverted value of the input x.
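Enumerating all input combinations in binary counting order, as described above, is a short exercise with `itertools.product`:

```python
# Build the AND, OR, and NOT truth tables by enumerating inputs in order.
from itertools import product

print("x y | AND OR")
for x, y in product((0, 1), repeat=2):
    print(x, y, "|", x & y, " ", x | y)

print("x | NOT")
for x in (0, 1):
    print(x, "|", 1 - x)
```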
Using a truth table is one method to formally describe the operation of a circuit or function.
The truth table for any given logic expression (no matter how complex it is) can always be
derived. Examples on the use of truth tables to describe digital circuits are given in the
following sections. Another method to formally describe the operation of a circuit is by using
Boolean expressions or Boolean functions.
Duality Principle
Specifically, we define the dual of a logic expression as one that is obtained by changing all +
operators with • operators, and vice versa, and by changing all 0’s with 1’s, and vice versa. For
example, the dual of the logic expression
(x•y'•z) + (x•y•z' ) + (y•z)
is
(x+y'+z) • (x+y+z') • (y+z)
The duality principle states that if a Boolean expression is true, then its dual is also true. Be
careful: it does not say that a Boolean expression is equivalent to its dual. For example, x • 0 = 0 is
true; thus, by the duality principle, its dual, x + 1 = 1, is also true. However, x • 0 = 0 is not
equal to x + 1 = 1, since 0 is definitely not equal to 1.
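A tiny Python check (illustration only) makes the distinction concrete: both the expression and its dual hold for every x, yet their two sides are different constants:

```python
# Tiny check (illustration only): x • 0 = 0 holds for every x, and so does
# its dual x + 1 = 1, yet the two expressions are obviously not equal.
lhs_holds   = all((x & 0) == 0 for x in (0, 1))  # x • 0 = 0 is true
dual_holds  = all((x | 1) == 1 for x in (0, 1))  # its dual x + 1 = 1 is true
sides_equal = (0 == 1)                           # but 0 is not equal to 1
```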
We will see in later sections that the duality principle is used extensively in digital logic design.
Whereas an expression might be complex to implement, its dual might be simpler. In this case,
implementing its dual and converting it back to the original expression will result in a smaller
circuit.
Out of all these theorems, the three most effective are:
 Dual
 De Morgan (16 and 17)
 Direct Hit (very effective) (15a and 15b)
The above three are the most effective, and with them we can often avoid the K-map that is
otherwise used for simplifying Boolean expressions. Since a K-map is practical only for functions
of up to 4 variables, we can easily simplify larger equations using the above three. Of course,
there are cases where a truth table is already given and must be simplified, or where don't-cares
are present: in those cases the K-map still holds good.
In practice, the above 3 formulas efficiently simplify almost 90% of Boolean expressions
when compared to the K-map.
Active High and Active Low logics
When an input or output line on a logic circuit symbol has no bubble on it, that line is said to
be active-HIGH. When an input or output line does have a bubble on it, that line is said to be
active-LOW. The presence or absence of a bubble, then, determines the active-HIGH/active-LOW status of a circuit's inputs and output, and is used to interpret the circuit operation.
To illustrate, below figure shows the standard symbol for a NAND gate. The standard symbol
has a bubble on its output and no bubbles on its inputs. Thus, it has an active-LOW output and
active-HIGH inputs. The logic operation represented by this symbol can therefore be
interpreted as follows:
The output goes LOW only when all of the inputs are HIGH.
Note that this says that the output will go to its active state only when all of the inputs are in
their active states. The word "all" is used because of the AND symbol.
Interpretation of the two NAND gate symbols
The alternate symbol for a NAND gate shown in Figure (b) has an active-HIGH output and
active-LOW inputs, and so its operation can be stated as follows:
The output goes HIGH when any input is LOW.
This says that the output will be in its active state whenever any of the inputs is in its active
state. The word "any" is used because of the OR symbol.
With a little thought, it can be seen that the two interpretations for the NAND symbols in
Figure 3-34 are different ways of saying the same thing.
Summary
At this point you are probably wondering why there is a need to have two different symbols
and interpretations for each logic gate. We hope the reasons will become clear after reading
the next sections.
For now, let us summarize the important points concerning the logic-gate representations.
1. To obtain the alternate symbol for a logic gate, take the standard symbol and change its
operation symbol (OR to AND, or AND to OR), and change the bubbles on both inputs and
output (i.e., delete bubbles that are present, and add bubbles where there are none).
2. To interpret the logic-gate operation, first note which logic state, 0 or 1, is the active state
for the inputs and which is the active state for the output. Then realize that the output's active
state is produced by having all of the inputs in their active state (if an AND symbol is used) or
by having any of the inputs in its active state (if an OR symbol is used).
Boolean Function and the Inverse
As we have seen, any digital circuit can be described by a logical expression, also known as a
Boolean function. Any Boolean function can be formed from binary variables and the
Boolean operators •, +, and ' (for AND, OR, and NOT, respectively). For example, the
following Boolean function uses the three variables (or literals) x, y, and z:
F = xy'z + xyz' + yz
It has three AND terms (also referred to as product terms), and these AND terms are OR'ed
(summed) together. The first two AND terms contain all three variables each, while the last AND term contains
only two variables. By definition, an AND (or product) term is either a single variable, or two
or more variables ANDed together. Quite often, we refer to functions that are in this format as
a sum-of-products or or-of-ands.
The value of a function evaluates to either a 0 or a 1 depending on the given set of values for
the variables. For example, the function above evaluates to a 1 when any one of the three
AND terms evaluate to a 1, since 1 OR x is 1.
The first AND term, xy'z, equals a 1 if x = 1, y = 0, and z = 1, because if we substitute these
values for x, y, and z into the first AND term xy'z, we get a 1. Similarly, the second AND term,
xyz', equals a 1 if x = 1, y = 1, and z = 0. The last AND term, yz, has only two variables.
What this means is that the value of this term is not dependent on the missing variable x.
In other words, x can be either a 0 or a 1, but as long as y = 1 and z = 1, this term will equal
a 1. Thus, we can summarize by saying that F evaluates to a 1 if x = 1, y = 0, z = 1, or x = 1,
y = 1, z = 0, or x = 0, y = 1, z = 1, or x = 1, y = 1, z = 1; otherwise, F
evaluates to a 0. It is often more convenient to summarize the above verbal description of a
function with a truth table as shown in Figure under the column labeled F. Notice that the four
rows in the table where F = 1 match the four cases in the description above.
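This row-by-row evaluation can be reproduced mechanically; a Python sketch (illustration only, representing bits as 0/1 integers and NOT y as 1 - y):

```python
# Illustrative sketch: evaluate F = xy'z + xyz' + yz for all eight input
# combinations and collect the rows where F = 1.
from itertools import product

def F(x, y, z):
    return (x & (1 - y) & z) | (x & y & (1 - z)) | (y & z)

ones = [(x, y, z) for x, y, z in product([0, 1], repeat=3) if F(x, y, z) == 1]
# ones lists xyz = 011, 101, 110, 111 -- the four cases described above.
```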
The inverse of a function, denoted by F', can be easily obtained from the truth table for F by
simply changing all the 0’s to 1’s and 1’s to 0’s as shown in the truth table in Figure under the
column labeled F'. Therefore, we can write the Boolean function for F' in the sum-of-products
format, where the AND terms are obtained from those rows where F' = 1. Thus, we get F' =
x'y'z' + x'y'z + x'yz' + xy'z'
To deduce F' algebraically from F requires the use of DeMorgan’s Theorem (Theorem 16)
twice. For example, using the same function
F = xy'z + xyz' + yz
we obtain F' as follows
F' = (xy'z + xyz' + yz)'
= (xy'z)' • (xyz')' • (yz)'
= (x'+y+z' ) • (x'+y'+z) • (y'+z' )
There are three things to notice about this equation for F'. First, F' is obtained from the dual of
F (as defined in the sections above) by complementing every literal.
Second, instead of being in a sum-of-products format, it is in a product-of-sums (and-of-ors)
format where three OR terms (also referred to as sum terms) are ANDed together. Third, from
the same original function F, we obtained two different equations for F'. From the truth table,
we obtained
F' = x'y'z' + x'y'z + x'yz' + xy'z'
and from applying DeMorgan’s theorem to F, we obtained
F' = (x'+y+z' ) • (x'+y'+z) • (y'+z' )
Hence, we must conclude that these two expressions, where one is in the sum-of-products
format, and the other is in the product-of-sums format, are equivalent. In general, all functions
can be expressed in either the sum-of-products or the product-of-sums format.
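The equivalence of the two expressions for F' can be checked exhaustively; the following Python sketch (an illustration, with n as a hypothetical NOT helper) evaluates both over all eight input combinations:

```python
# Illustrative sketch with a hypothetical NOT helper n(); both derived
# expressions for F' are evaluated over all eight input combinations.
from itertools import product

def n(v):
    return 1 - v

def Fp_sop(x, y, z):
    # F' = x'y'z' + x'y'z + x'yz' + xy'z'  (from the truth table)
    return (n(x) & n(y) & n(z)) | (n(x) & n(y) & z) | \
           (n(x) & y & n(z)) | (x & n(y) & n(z))

def Fp_pos(x, y, z):
    # F' = (x'+y+z')(x'+y'+z)(y'+z')  (from DeMorgan's theorem)
    return (n(x) | y | n(z)) & (n(x) | n(y) | z) & (n(y) | n(z))

agree = all(Fp_sop(*v) == Fp_pos(*v) for v in product([0, 1], repeat=3))
```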
Thus, we should also be able to express the same function F = xy'z + xyz' + yz in the product-of-sums format.
We can derive it using one of two methods. For method one, we can start with F' and apply
DeMorgan’s Theorem to it just like how we obtained F' from F.
F = F' '
= (x'y'z' + x'y'z + x'yz' + xy'z' )'
= (x'y'z' )' • (x'y'z)' • (x'yz' )' • (xy'z' )'
= (x+y+z) • (x+y+z' ) • (x+y'+z) • (x'+y+z)
Minterms and Maxterms
As you recall, a product term is a term with either a single variable, or two or more variables
ANDed together, and a sum term is a term with either a single variable, or two or more
variables ORed together. To differentiate between a term that contains any number of
variables with a term that contains all the variables used in the function, we use the words
minterm and maxterm.
Minterms
A minterm is a product term that contains all the variables used in a function. For a function
with n variables, the notation mi, where 0 ≤ i < 2^n, is used to denote the minterm whose
index i is the binary value of the n variables, such that a variable is complemented if the value
assigned to it is a 0, and uncomplemented if it is a 1.
For example, for a function with three variables x, y, and z, the notation m3 is used to
represent the term in which the values for the variables xyz are 011 (for the subscript 3). Since
we complement a variable whose value is a 0 and leave it uncomplemented if it is a 1,
m3 is the minterm x'yz. Figure (a) shows the eight minterms and their notations for
n = 3 using the three variables x, y, and z.
When specifying a function, we usually start with product terms that contain all the variables
used in the function. In other words, we want the sum of minterms, and more specifically the
sum of the one-minterms, that is, the minterms for which the function is a 1 (as opposed to the
zero-minterms, that is, the minterms for which the function is a 0). We use the notation 1-minterm to denote one-minterm, and 0-minterm to denote zero-minterm.
The function from the previous section
F = xy'z + xyz' + yz
= x'yz + xy'z + xyz' + xyz
and repeated in the following truth table has the 1-minterms m3, m5, m6, and m7.
Thus, a shorthand notation for the function is
F(x, y, z) = m3 + m5 + m6 + m7
By just using the minterm notations, we do not know how many variables are in the original
function. Consequently, we need to explicitly specify the variables used by the function as in
F(x, y, z). We can further simplify the notation by using the standard algebraic symbol Σ for
summation. Therefore, we have F(x, y, z) = Σ(3, 5, 6, 7).
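The Σ shorthand can be turned directly into an executable definition; a Python sketch (illustration only, reading xyz as a 3-bit binary number, matching the subscript convention above):

```python
# Illustrative sketch: build F(x, y, z) = Σ(3, 5, 6, 7) straight from the
# minterm indices.
from itertools import product

ONE_MINTERMS = {3, 5, 6, 7}

def F(x, y, z):
    index = (x << 2) | (y << 1) | z   # xyz read as a 3-bit binary number
    return 1 if index in ONE_MINTERMS else 0

one_rows = [v for v in product([0, 1], repeat=3) if F(*v) == 1]
```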
These are just different ways of expressing the same function. Since a function is obtained from
the sum of the 1-minterms, the inverse of the function, therefore, must be the sum of the
0-minterms. This can be easily obtained by replacing the set of indices with those that were
excluded from the original set.
Maxterms
Analogous to a minterm, a maxterm is a sum term that contains all the variables used in the
function. For a function with n variables, the notation Mi, where 0 ≤ i < 2^n, is used to denote
the maxterm whose index i is the binary value of the n variables, such that a variable is
complemented if the value assigned to it is a 1, and uncomplemented if it is a 0.
For example, for a function with three variables x, y, and z, the notation M3 is used to
represent the term in which the values for the variables xyz are 011. For maxterms, we want to
complement the variable whose value is a 1, and uncomplement it if it is a 0. Hence M3 is for
the maxterm x + y' + z'. Figure 2.9 (b) shows the eight maxterms and their notations for n = 3
using the three variables x, y, and z.
We have seen that a function can also be specified as a product of sums, or more specifically, a
product of 0-maxterms, that is, the maxterms for which the function is a 0. Just like the
minterms, we use the notation 1-maxterm to denote one-maxterm, and 0-maxterm to denote
zero-maxterm.
Thus, the function
F(x, y, z) = xy'z + xyz' + yz = (x + y + z) • (x + y + z') • (x + y' + z) • (x' + y + z)
which is shown in the following table
can be specified as the product of the 0-maxterms M0, M1, M2, and M4. The shorthand
notation for the function is F(x, y, z) = M0 • M1 • M2 • M4
Again, by using the standard algebraic symbol Π for product, the notation is further simplified
to F(x, y, z) = Π (0, 1, 2, 4)
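Similarly, the Π shorthand can be checked against the sum-of-products form of F; a Python sketch (illustration only):

```python
# Illustrative sketch: F(x, y, z) = Π(0, 1, 2, 4) built from the 0-maxterm
# indices, checked against the sum-of-products form F = xy'z + xyz' + yz.
from itertools import product

ZERO_MAXTERMS = {0, 1, 2, 4}

def F_pos(x, y, z):
    index = (x << 2) | (y << 1) | z   # xyz read as a 3-bit binary number
    return 0 if index in ZERO_MAXTERMS else 1

def F_sop(x, y, z):
    return (x & (1 - y) & z) | (x & y & (1 - z)) | (y & z)

same = all(F_pos(*v) == F_sop(*v) for v in product([0, 1], repeat=3))
```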
The following summarizes these relationships for the function F = xy'z + xyz' + yz and its
inverse. Comparing these equations with those in Figure 2.8, we see that they are identical.
Canonical, Standard, and non-Standard Forms
Any Boolean function that is expressed as a sum of minterms, or as a product of maxterms is
said to be in its canonical form. For example, the following two expressions are in their
canonical forms
F = x' y z + x y' z + x y z' + x y z
F' = (x+y'+z' ) • (x'+y+z' ) • (x'+y'+z) • (x'+y'+z' )
As noted from the previous section, to convert a Boolean function from one canonical form to
its other equivalent canonical form, simply interchange the symbols Σ with Π, and list the
index numbers that were excluded from the original form. For example, the following two
expressions are equivalent
F1(x, y, z) = Σ(3, 5, 6, 7)
F2(x, y, z) = Π(0, 1, 2, 4)
To convert a Boolean function from one canonical form to its inverse, simply
interchange the symbols Σ with Π, and list the same index numbers from the original form. For
example, the following two expressions are inverses of each other
F1(x, y, z) = Σ(3, 5, 6, 7)
F2(x, y, z) = Π(3, 5, 6, 7)
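Both index rules can be verified by brute-force evaluation; in this Python sketch (illustration only), sigma and pi are hypothetical helper constructors, not notation from the handout:

```python
# Illustrative sketch with hypothetical helpers sigma() and pi(): Π over the
# EXCLUDED indices yields the same function, while Π over the SAME indices
# yields the inverse.
from itertools import product

def sigma(indices):
    # sum of the listed 1-minterms
    return lambda x, y, z: 1 if ((x << 2) | (y << 1) | z) in indices else 0

def pi(indices):
    # product of the listed 0-maxterms
    return lambda x, y, z: 0 if ((x << 2) | (y << 1) | z) in indices else 1

F1 = sigma({3, 5, 6, 7})   # F  = Σ(3, 5, 6, 7)
F2 = pi({0, 1, 2, 4})      # F  = Π(0, 1, 2, 4): excluded indices
F3 = pi({3, 5, 6, 7})      # F' = Π(3, 5, 6, 7): same indices

rows = list(product([0, 1], repeat=3))
equivalent = all(F1(*v) == F2(*v) for v in rows)
inverse    = all(F1(*v) == 1 - F3(*v) for v in rows)
```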
A Boolean function is said to be in a standard form if a sum-of-products expression or a
product-of-sums expression has at least one term that is not a minterm or a maxterm,
respectively. In other words, at least one term in the expression is missing at least one variable.
For example, the following expression is in a standard form because the last term is missing the
variable x.
F = xy'z + xyz' + yz
Sometimes, common variables in a standard form expression can be factored out. The resulting
expression is no longer in a sum-of-products or product-of-sums format. These expressions are
in a non-standard form.
For example, starting with the previous expression, if we factor out the common variable x
from the first two terms, we get the following expression, which is in a non-standard form.
F = x(y'z + yz') + yz
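Factoring changes the form but not the function, which a Python sketch (illustration only, with n as a hypothetical NOT helper) can confirm over all eight input rows:

```python
# Illustrative sketch: the factored (non-standard) form computes the same
# function as the original standard-form expression on every input row.
from itertools import product

def n(v):
    return 1 - v

def F_standard(x, y, z):
    # F = xy'z + xyz' + yz
    return (x & n(y) & z) | (x & y & n(z)) | (y & z)

def F_factored(x, y, z):
    # F = x(y'z + yz') + yz
    return (x & ((n(y) & z) | (y & n(z)))) | (y & z)

same = all(F_standard(*v) == F_factored(*v)
           for v in product([0, 1], repeat=3))
```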